We identify and validate, from a large corpus, constraints that conjunctions impose on the positive or negative semantic orientation of the conjoined adjectives. A log-linear regression model uses these constraints to predict whether conjoined adjectives have the same or different orientation, achieving 82% accuracy on this task when each conjunction is considered independently. Combining the constraints across many adjectives, a clustering algorithm separates the adjectives into groups of different orientations, and finally, adjectives are labeled positive or negative. ...
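The combination step can be pictured as constraint propagation over a graph of conjoined adjectives: "and"-style conjunctions link words of the same orientation, "but"-style conjunctions link words of opposite orientation. Below is a minimal sketch in Python, not the paper's actual clustering algorithm; the relation labels, the seed adjective, and its label are illustrative assumptions:

```python
from collections import deque

def cluster_orientations(constraints, seed, seed_label="positive"):
    """2-color adjectives using same/different-orientation links.

    constraints: list of (adj1, adj2, relation) tuples, where relation is
    'same' (e.g. from an 'and' conjunction) or 'different' (e.g. from 'but').
    seed: an adjective whose orientation is assumed known.
    """
    graph = {}
    for a, b, rel in constraints:
        graph.setdefault(a, []).append((b, rel))
        graph.setdefault(b, []).append((a, rel))
    flip = {"positive": "negative", "negative": "positive"}
    labels = {seed: seed_label}
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        for nbr, rel in graph.get(node, []):
            want = labels[node] if rel == "same" else flip[labels[node]]
            if nbr not in labels:       # propagate label to unvisited neighbor
                labels[nbr] = want
                queue.append(nbr)
    return labels
```

Given "good and helpful", "good but bad", and "bad and awful", seeding "good" as positive labels "helpful" positive and "bad"/"awful" negative.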
Predictions of the theory, such as that increases in women's labour productivity reduce the household demand for children, are borne out in cross-country evidence (Schultz, 1997). Nevertheless, the study of isolated households is not a propitious setting in which to explore the possibilities of collective failure among households. For example, there have been few attempts to estimate reproductive externalities.
We propose a set of open-source software modules to perform structured Perceptron training, prediction and evaluation within the Hadoop framework. Apache Hadoop is a freely available environment for running distributed applications on a computer cluster. The software is designed within the Map-Reduce paradigm. Thanks to distributed computing, the proposed software substantially reduces execution times while handling huge data-sets. The distributed Perceptron training algorithm preserves the convergence properties of its serial counterpart, and thus guarantees the same accuracy as the serial Perceptron. ...
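A common way to distribute perceptron training under Map-Reduce is iterative parameter mixing: each mapper runs a perceptron epoch over its data shard, and the reducer averages the shard weights. The sketch below illustrates that scheme only; it is not the modules' actual API, and the dict-based feature vectors are an assumption:

```python
def perceptron_epoch(weights, shard):
    """Map step: one perceptron epoch over a data shard.

    shard: list of (feature_dict, label) pairs with labels in {-1, +1}.
    """
    w = dict(weights)
    for feats, y in shard:
        score = sum(w.get(f, 0.0) * v for f, v in feats.items())
        if y * score <= 0:  # mistake: add y * x to the weights
            for f, v in feats.items():
                w[f] = w.get(f, 0.0) + y * v
    return w

def mix(weight_list):
    """Reduce step: average the weight vectors returned by the shards."""
    avg = {}
    for w in weight_list:
        for f, v in w.items():
            avg[f] = avg.get(f, 0.0) + v / len(weight_list)
    return avg
```

In a full run, the mixed weights are broadcast back to the mappers for the next epoch, which is what preserves the serial algorithm's convergence guarantees.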
Efficient decoding for syntactic parsing has become a necessary research area as statistical grammars grow in accuracy and size and as more NLP applications leverage syntactic analyses. We review prior methods for pruning and then present a new framework that unifies their strengths into a single approach. Using a log-linear model, we learn the optimal beam-search pruning parameters for each CYK chart cell, effectively predicting the most promising areas of the model space to explore.
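The idea can be illustrated in two small steps: a log-linear (logistic) model maps cell features to a beam width, and the cell is then pruned to that many entries. The feature names, weights, and maximum beam below are hypothetical, not values from the paper:

```python
import math

def predict_beam(features, weights, max_beam=15):
    # Logistic score over chart-cell features, scaled to a beam width >= 1.
    score = sum(weights.get(f, 0.0) * v for f, v in features.items())
    p = 1.0 / (1.0 + math.exp(-score))
    return max(1, round(p * max_beam))

def prune_cell(cell_items, beam_width):
    # Keep only the beam_width highest-scoring constituents in a chart cell.
    # cell_items: dict mapping nonterminal label -> log-probability.
    ranked = sorted(cell_items.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:beam_width])
```

Cells predicted to be unpromising get a narrow beam and are pruned aggressively; promising cells keep more candidates.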
We present a pointwise approach to Japanese morphological analysis (MA) that ignores structure information during learning and tagging. Despite the lack of structure, it is able to outperform the current state-of-the-art structured approach for Japanese MA, and achieves accuracy similar to that of structured predictors using the same feature set. We also find that the method is both robust to out-of-domain data and easily adapted through a combination of partial annotation and active learning. ...
Correct stress placement is important in text-to-speech systems, in terms of both the overall accuracy and the naturalness of pronunciation. In this paper, we formulate stress assignment as a sequence prediction problem. We represent words as sequences of substrings, and use the substrings as features in a Support Vector Machine (SVM) ranker, which is trained to rank possible stress patterns. The ranking approach facilitates inclusion of arbitrary features over both the input sequence and output stress pattern. ...
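Ranking candidate stress patterns then amounts to scoring each candidate's features and taking the argmax. The toy sketch below uses a plain linear scorer in place of the trained SVM ranker; the syllable split, the "1"/"0" stress encoding, and the weights are all illustrative assumptions:

```python
def rank_patterns(syllables, candidates, weights):
    """Return the highest-scoring stress pattern ('1' = stressed syllable)."""
    def feats(pattern):
        # Features over the whole output pattern and over (substring, stress) pairs.
        f = {"pattern=" + pattern: 1.0}
        for syl, s in zip(syllables, pattern):
            key = "syl=%s|%s" % (syl, s)
            f[key] = f.get(key, 0.0) + 1.0
        return f

    def score(pattern):
        return sum(weights.get(k, 0.0) * v for k, v in feats(pattern).items())

    return max(candidates, key=score)
```

Because features may inspect both the input substrings and the full output pattern, the ranker can capture dependencies a per-syllable classifier cannot.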
In this paper, we study different centrality measures for predicting the noun phrases that appear in the abstracts of scientific articles. Our experimental results show that centrality measures improve the accuracy of the prediction in terms of both precision and recall. We also find that the method of constructing the noun phrase network significantly influences the accuracy when the centrality heuristics are used on their own, but its effect is negligible when they are combined with other text features in decision trees. ...
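As one concrete instance of such a measure, normalized degree centrality over a noun-phrase co-occurrence network can be computed directly from the network's edges. The sketch below is illustrative; the paper's actual network construction and choice of centrality measures may differ:

```python
def degree_centrality(edges):
    """Normalized degree centrality: neighbors / (n - 1) possible neighbors.

    edges: list of (noun_phrase_a, noun_phrase_b) co-occurrence pairs.
    """
    deg, nodes = {}, set()
    for a, b in edges:
        nodes |= {a, b}
        deg[a] = deg.get(a, 0) + 1
        deg[b] = deg.get(b, 0) + 1
    n = len(nodes)
    return {v: deg.get(v, 0) / (n - 1) for v in nodes}
```

A noun phrase that co-occurs with many others receives a high score, which is then used as a feature for predicting its appearance in the abstract.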
One of the major problems when translating from Japanese into a European language such as German or English is to determine definiteness of noun phrases in order to choose the correct determiner in the target language. Even though in Japanese, noun phrase reference is said to depend in large parts on the discourse context, we show that in many cases there also exist linguistic markers for definiteness.
Sentence fluency is an important component of overall text readability but few studies in natural language processing have sought to understand the factors that define it. We report the results of an initial study into the predictive power of surface syntactic statistics for the task; we use fluency assessments done for the purpose of evaluating machine translation. We find that these features are weakly but significantly correlated with fluency. Machine and human translations can be distinguished with accuracy over 80%.
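The reported correlations are presumably standard Pearson coefficients between each surface statistic and the fluency score; for reference, a minimal self-contained implementation:

```python
def pearson(xs, ys):
    # Pearson correlation coefficient between two equal-length samples.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5
```

A value near zero indicates a weak relationship; significance then depends on the sample size.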
It is not always clear how the differences in intrinsic evaluation metrics for a parser or classifier will affect the performance of the system that uses it. We investigate the relationship between the intrinsic evaluation scores of an interpretation component in a tutorial dialogue system and the learning outcomes in an experiment with human users. Following the PARADISE methodology, we use multiple linear regression to build predictive models of learning gain, an important objective outcome metric in tutorial dialogue.
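The PARADISE-style model is an ordinary least-squares regression of learning gain on the intrinsic scores. A self-contained sketch via the normal equations follows; the predictors one would feed in (e.g. interpretation accuracy per dialogue) are hypothetical here:

```python
def ols(X, y):
    """Multiple linear regression via normal equations (pure-Python sketch).

    X: list of predictor rows; y: list of outcomes.
    Returns [intercept, slope_1, ..., slope_k].
    """
    rows = [[1.0] + list(r) for r in X]   # prepend intercept column
    k = len(rows[0])
    # Build the normal equations (A^T A) beta = A^T y.
    ata = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting.
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, k):
            m = ata[r][col] / ata[col][col]
            for c in range(col, k):
                ata[r][c] -= m * ata[col][c]
            aty[r] -= m * aty[col]
    # Back-substitution.
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (aty[r] - sum(ata[r][c] * beta[c]
                                for c in range(r + 1, k))) / ata[r][r]
    return beta
```

The fitted slopes indicate how strongly each intrinsic score predicts learning gain.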
The purpose of this paper is to evaluate whether financial asset prices and, in particular, sectoral stock prices can help to predict real economic growth. The study is applied to euro area financial market prices and real economic growth over the sample 1973 to 2006. The evaluation of the predictive power between the financial assets is based on the relative improvements in the Mean Square Forecast Errors (MSFE) compared to the MSFE of a simple optimal autoregressive (AR) model, in an out-of-sample forecasting exercise.
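The comparison criterion can be written down directly: the relative improvement is one minus the ratio of the candidate model's MSFE to the AR benchmark's MSFE, so positive values mean the financial-asset model forecasts better. A minimal sketch:

```python
def msfe(forecasts, actuals):
    # Mean square forecast error over the out-of-sample period.
    return sum((f - a) ** 2 for f, a in zip(forecasts, actuals)) / len(actuals)

def relative_improvement(model_fc, ar_fc, actuals):
    # > 0 when the candidate model beats the AR benchmark; < 0 when it loses.
    return 1.0 - msfe(model_fc, actuals) / msfe(ar_fc, actuals)
```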
Parallel data in the domain of interest is the key resource when training a statistical machine translation (SMT) system for a specific purpose. Since ad-hoc manual translation can represent a significant investment in time and money, a prior assessment of the amount of training data required to achieve a satisfactory accuracy level can be very useful. In this work, we show how to predict what the learning curve would look like if we were to manually translate increasing amounts of data.
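A simple stand-in for such a prediction is to fit a parametric curve to scores measured at a few training-set sizes and extrapolate. The logarithmic form below is an illustrative assumption, not the model from the paper:

```python
import math

def fit_log_curve(sizes, scores):
    # Least-squares fit of score = a + b * log(size).
    xs = [math.log(n) for n in sizes]
    mx, my = sum(xs) / len(xs), sum(scores) / len(scores)
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, scores))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict_score(a, b, size):
    # Extrapolate the fitted curve to a larger training-set size.
    return a + b * math.log(size)
```

Evaluating the fitted curve at a candidate corpus size gives the projected accuracy before any further translation is commissioned.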
Modeling of individual users is a promising way of improving the performance of spoken dialogue systems deployed for the general public and utilized repeatedly. We define "implicitly-supervised" ASR accuracy per user on the basis of responses following the system's explicit confirmations. We combine the estimated ASR accuracy with the user's barge-in rate, which represents how well the user is accustomed to using the system, to predict interpretation errors in barge-in utterances. Experimental results showed that the estimated ASR accuracy improved prediction performance.
This paper describes a parser which generates parse trees with empty elements in which traces and fillers are co-indexed. The parser is an unlexicalized PCFG parser which is guaranteed to return the most probable parse. The grammar is extracted from a version of the PENN treebank which was automatically annotated with features in the style of Klein and Manning (2003). The annotation includes GPSG-style slash features which link traces and fillers, and other features which improve the general parsing accuracy. ...
This paper presents a novel application of Alternating Structure Optimization (ASO) to the task of Semantic Role Labeling (SRL) of noun predicates in NomBank. ASO is a recently proposed linear multi-task learning algorithm, which extracts the common structures of multiple tasks to improve accuracy, via the use of auxiliary problems. In this paper, we explore a number of different auxiliary problems, and we are able to significantly improve the accuracy of the NomBank SRL task using this approach.
We use machine learners trained on a combination of acoustic confidence and pragmatic plausibility features computed from dialogue context to predict the accuracy of incoming n-best recognition hypotheses to a spoken dialogue system. Our best results show a 25% weighted f-score improvement over a baseline system that implements a "grammar-switching" approach to context-sensitive speech recognition.
Probability estimates can be used for efficiency by providing a best-first search heuristic to order the parsing agenda. This paper proposes an agenda-based probabilistic chart parsing algorithm which is both robust and efficient. The algorithm, Picky, is considered robust because it will potentially generate all constituents produced by a pure bottom-up parser and rank these constituents by likelihood. The efficiency of the algorithm is achieved through a technique called probabilistic prediction, which helps the algorithm avoid worst-case behavior. ...
We present a corpus-based study of methods that have been proposed in the linguistics literature for selecting the semantically unmarked term out of a pair of antonymous adjectives. Solutions to this problem are applicable to the more general task of selecting the positive term from the pair. Using automatically collected data, the accuracy and applicability of each method is quantified, and a statistical analysis of the significance of the results is performed.
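Text frequency is one of the criteria the linguistics literature proposes: the unmarked (and typically positive) member of an antonym pair tends to be the more frequent one. A one-line sketch with made-up corpus counts:

```python
def unmarked_by_frequency(pair, counts):
    # Frequency criterion: pick the more frequent member as the unmarked term.
    a, b = pair
    return a if counts.get(a, 0) >= counts.get(b, 0) else b
```

For example, "long" outnumbers "short" in most corpora, matching its status as the unmarked member of the pair ("How long is it?", not "How short is it?").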
What can be determined from such a heterogeneous aggregation of studies, concerning a wide array of predictands and involving such a variety of judges, mechanical combination methods, and data? Quite a lot, as it turns out. To summarize these data quantitatively for the present purpose (see Grove et al., 2000, for details omitted here), we took the median difference between all possible pairs of clinical versus mechanical predictions for a given study as the representative outcome of that study.
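That summary statistic is easy to state precisely: for one study, difference every (clinical, mechanical) pair of reported accuracies and take the median. A sketch with made-up numbers:

```python
def median_pairwise_difference(clinical, mechanical):
    # Median over all pairwise (mechanical - clinical) outcome differences,
    # used as the representative outcome of a single study.
    diffs = sorted(m - c for c in clinical for m in mechanical)
    n = len(diffs)
    mid = n // 2
    return diffs[mid] if n % 2 else (diffs[mid - 1] + diffs[mid]) / 2
```

A positive median indicates the mechanical predictions outperformed the clinical ones in that study.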
To achieve success as a telemarketer, you'll need to invest the necessary time and effort to learn and master a variety of different skills. Only then will you be able to truly tap your potential for generating the sales results you and your employer want. This book divides up the skills you need, explains them in detail, and then offers strategies for developing and ...