A model of lexical

Showing 1-20 of 242 results for "A model of lexical"
  • This paper introduces new methods based on exponential families for modeling the correlations between words in text and speech. While previous work assumed the effects of word co-occurrence statistics to be constant over a window of several hundred words, we show that their influence is nonstationary on a much smaller time scale.

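The nonstationary effect described above can be illustrated with a toy trigger model, in which the probability boost a trigger word lends to a related word decays exponentially with token distance rather than staying constant over a long window. This is a minimal sketch of the idea only; the paper's actual exponential-family model is considerably richer, and the `half_life` and `base_boost` values here are invented for illustration.

```python
import math

def trigger_boost(distance, base_boost=2.0, half_life=50):
    """Multiplicative boost a trigger word contributes to a related
    target word, decaying exponentially with token distance (a toy
    stand-in for a nonstationary co-occurrence effect; the parameter
    values are hypothetical)."""
    decay = math.exp(-math.log(2) * distance / half_life)
    return 1.0 + (base_boost - 1.0) * decay

# The boost is strongest right after the trigger and fades toward 1.0
# (no effect), instead of being constant over a several-hundred-word
# window.
```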

  • During early language acquisition, infants must learn both a lexicon and a model of phonetics that explains how lexical items can vary in pronunciation—for instance “the” might be realized as [Di] or [D@]. Previous models of acquisition have generally tackled these problems in isolation, yet behavioral evidence suggests infants acquire lexical and phonetic knowledge simultaneously.


The paper describes GEMS, a system for Generating and Expressing the Meaning of Sentences, focussing on the generation task, i.e. how GEMS extracts a set of propositional units from a knowledge store that can be expressed with a well-formed sentence in a target language. GEMS is lexically distributed. After a central processor has selected the first unit(s) from the knowledge store and activated the corresponding lexical entry, the further construction of the sentence's meaning is entrusted to the entries in the vocabulary.


Informal and formal (“T/V”) address in dialogue is not overtly distinguished in modern English, e.g. by pronoun choice, as it is in many other languages such as French (“tu”/“vous”). Our study investigates the status of the T/V distinction in English literary texts. Our main findings are: (a) human raters can label monolingual English utterances as T or V fairly well, given sufficient context; (b) a bilingual corpus can be exploited to induce a supervised classifier for T/V without human annotation.

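The annotation-free supervision idea in the abstract above can be sketched as label projection: an English utterance inherits a T or V label from the pronoun used in its aligned French translation. This is a deliberately naive sketch (whitespace tokenisation, a tiny pronoun list), not the paper's actual induction procedure.

```python
def induce_tv_label(aligned_french):
    """Project a T/V label onto an English utterance from its aligned
    French translation's pronoun choice. Toy tokenisation and a toy
    pronoun inventory, for illustration only."""
    tokens = aligned_french.lower().split()
    if any(p in tokens for p in ("tu", "te", "toi")):
        return "T"
    if "vous" in tokens:
        return "V"
    return None  # no address pronoun found; utterance stays unlabeled
```

Labels induced this way could then serve as training data for a monolingual English classifier, which is the paper's point: the bilingual corpus replaces human annotation.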

Default inheritance is a useful tool for encoding linguistic generalisations that have exceptions. In this paper we show how the use of an order-independent typed default unification operation can provide a non-redundant, highly structured, and concise representation for specifying a network of lexical types that encodes linguistic information about verbal subcategorisation.

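The core intuition of default inheritance can be sketched with plain dictionaries: a subtype starts from its supertype's default feature values and its own (non-default) values win. This is only a minimal sketch of the idea; the paper's typed, order-independent default unification operation is far more constrained, and the feature values below are hypothetical.

```python
def inherit(defaults, overrides):
    """Default inheritance for flat feature structures: copy the
    supertype's defaults, then let the subtype's own values override
    them. A toy sketch, not the paper's typed default unification."""
    result = dict(defaults)
    result.update(overrides)
    return result

# A tiny network of verbal subcategorisation types (hypothetical):
verb = {"cat": "verb", "subcat": ["subj"]}
trans_verb = inherit(verb, {"subcat": ["subj", "obj"]})
```

The exception mechanism falls out for free: `trans_verb` keeps the inherited `cat` but overrides the default `subcat`, so shared information is stated once at the supertype.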

This paper describes a computational model of concept acquisition for natural language. We develop a theory of lexical semantics, the Extended Aspect Calculus, which together with a “markedness theory” for thematic relations, constrains what a possible word meaning can be. This is based on the supposition that predicates from the perceptual domain are the primitives for more abstract relations. We then describe an implementation of this model, TULLY, which mirrors the stages of lexical acquisition for children.


  • A major focus of current work in distributional models of semantics is to construct phrase representations compositionally from word representations. However, the syntactic contexts which are modelled are usually severely limited, a fact which is reflected in the lexical-level WSD-like evaluation methods used.


  • A tool is described which helps in the creation, extension and updating of lexical knowledge bases (LKBs). Two levels of representation are distinguished: a static storage level and a dynamic knowledge level. The latter is an object-oriented environment containing linguistic and lexicographic knowledge. At the knowledge level, constructors and filters can be defined. Constructors are objects which extend the LKB both horizontally (new information) and vertically (new entries) using the linguistic knowledge.


We think the parts are of interest in their own right. The paper consists of three sections: (I) We give a detailed description of the PROLOG implementation of the parser, which is based on the theory of lexical functional grammar (LFG). The parser covers the fragment described in [1,94], i.e., it is able to analyse constructions involving functional control and long distance dependencies.


  • This paper presents a Bayesian decision framework that performs automatic story segmentation based on statistical modeling of one or more lexical chain features. Automatic story segmentation aims to locate the instances in time where a story ends and another begins. A lexical chain is formed by linking coherent lexical items chronologically. A story boundary is often associated with a significant number of lexical chains ending before it, starting after it, as well as a low count of chains continuing through it.

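The boundary cue described above (many chains ending before a point, many starting after it, few continuing through it) can be sketched as a simple count-based score over chain spans. This is a toy scoring rule in the spirit of the feature, not the paper's Bayesian decision framework, and the chain spans below are invented.

```python
def boundary_score(chains, t):
    """Score position t as a story boundary from lexical chains, each
    given as a (start, end) span: chains ending before t or starting
    after it count for a boundary, chains continuing through it count
    against one. A toy rule, not the paper's statistical model."""
    ending = sum(1 for s, e in chains if e < t)
    starting = sum(1 for s, e in chains if s > t)
    continuing = sum(1 for s, e in chains if s <= t <= e)
    return ending + starting - continuing

# Hypothetical chains: two early ones, two late ones.
chains = [(0, 4), (1, 5), (7, 9), (8, 12)]
score_at_gap = boundary_score(chains, 6)  # 2 ending + 2 starting - 0
score_inside = boundary_score(chains, 3)  # 0 ending + 2 starting - 2
```

The gap between the two groups of chains scores highest, matching the intuition that a story boundary is where chains break off.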

These words are commonly termed "lexical ambiguities", although it is probably more accurate to speak of them as potentially ambiguous. Determining how the contextually appropriate reading of a word is identified presents an important and unavoidable problem for persons developing theories of natural language processing. A large body of psycholinguistic research on ambiguity resolution has failed to yield a consistent set of findings or a general, non-controversial theory.


Lexicon definition is one of the main bottlenecks in the development of new applications in the field of Information Extraction from text. Generic resources (e.g., lexical databases) are promising for reducing the cost of specific lexica definition, but they introduce lexical ambiguity. This paper proposes a methodology for building application-specific lexica by using WordNet. Lexical ambiguity is kept under control by marking synsets in WordNet with field labels taken from the Dewey Decimal Classification.

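The field-label idea above amounts to filtering a sense inventory by domain. The sketch below uses a hypothetical miniature inventory in place of WordNet and invented Dewey-style labels; it only illustrates how domain labels prune ambiguity for an application lexicon.

```python
# Hypothetical miniature sense inventory: each sense of "bank" carries
# a Dewey-style field label, standing in for the marked-up WordNet.
SENSES = {
    "bank": [
        {"gloss": "financial institution", "field": "330 Economics"},
        {"gloss": "sloping land by a river", "field": "550 Earth sciences"},
    ],
}

def application_lexicon(words, field_prefix):
    """Keep only senses whose field label matches the application
    domain, cutting down lexical ambiguity (a sketch of the paper's
    idea, not its actual WordNet/DDC resources)."""
    return {w: [s for s in SENSES.get(w, [])
                if s["field"].startswith(field_prefix)]
            for w in words}

# A finance application keeps only the economics sense of "bank":
finance_lex = application_lexicon(["bank"], "330")
```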

We present and experimentally evaluate a new model of pronunciation by analogy: the paradigmatic cascades model. Given a pronunciation lexicon, this algorithm first extracts the most productive paradigmatic mappings in the graphemic domain and pairs them statistically with their correlate(s) in the phonemic domain. These mappings are used to search and retrieve in the lexical database the most promising analogs of unseen words. Finally, we apply the correlated series of mappings in the phonemic domain to the analog's pronunciation to obtain the desired pronunciation.

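The analogy step can be sketched with one paired mapping: if an unseen word differs from a lexicon entry by a known graphemic alternation, apply the correlated phonemic alternation to that entry's pronunciation. The lexicon, the single onset mapping, and the ARPAbet-like phone strings below are all invented for illustration; the paper extracts such mappings statistically.

```python
# Toy pronunciation lexicon (hypothetical entries and phone strings):
LEXICON = {"light": "l ay t", "fight": "f ay t", "fate": "f ey t"}

def pronounce_by_analogy(word):
    """If the unseen word differs from a lexicon entry by a known
    graphemic alternation, apply the correlated phonemic alternation
    to that entry's pronunciation (a minimal sketch of the
    paradigmatic cascades idea)."""
    if word in LEXICON:
        return LEXICON[word]
    # Paired mappings: (grapheme_from, grapheme_to, phone_from, phone_to)
    g_maps = [("l", "f", "l", "f"), ("f", "l", "f", "l")]
    for g_from, g_to, p_from, p_to in g_maps:
        if word.startswith(g_to):
            analog = g_from + word[len(g_to):]
            if analog in LEXICON:
                pron = LEXICON[analog]
                return p_to + pron[len(p_from):]
    return None  # no analog found under the known mappings

# "late" is unseen; its analog "fate" is in the lexicon, so the
# f -> l grapheme swap is echoed by an f -> l phone swap.
```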

This paper describes a novel approach to generate potential foreign-accented phonetic transcriptions using phonological rewrite rules. For each pair of a native language (L1) and a target language (L2), a set of postlexical rules is designed to transform canonical phonetic dictionaries of L2 into adapted dictionaries for native L1 speakers. Some general considerations on the design of such a rule-based system are presented.

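A single context-free substitution pass already conveys the flavour of such postlexical rules: canonical L2 phones are rewritten into the nearest phones of the L1 inventory. The rule set below is a hypothetical example (a speaker whose native language lacks the English dental fricatives realising them as stops), not one of the paper's rule sets, and real postlexical rules are typically context-sensitive.

```python
def accent_adapt(pron, rules):
    """Apply postlexical rewrite rules (here, plain phone
    substitutions) to a canonical L2 pronunciation to approximate an
    L1-accented one. The rules are illustrative, not the paper's."""
    phones = pron.split()
    return " ".join(rules.get(p, p) for p in phones)

# Hypothetical L1 adaptation: dental fricatives become stops.
RULES = {"th": "t", "dh": "d"}
```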

  • In this paper a bidirectional parser for Lexicalized Tree Adjoining Grammars will be presented. The algorithm takes advantage of a peculiar characteristic of Lexicalized TAGs, i.e. that each elementary tree is associated with a lexical item, called its anchor. The algorithm employs a mixed strategy: it works bottom-up from the lexical anchors and then expands (partial) analyses making top-down predictions.


As a benchmark for empirical methods in natural language processing, PP attachment disambiguation has often been reduced to a binary decision problem (between verb and noun attachment) in a particular syntactic configuration. A parser, however, must solve the more general task of deciding between more than two alternatives in many different contexts. We combine the attachment predictions made by a simple model of lexical attraction with a full-fledged parser of German to determine the actual benefit of the subtask to parsing.

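The binary subtask mentioned above reduces to comparing how strongly the preposition attracts each candidate head. The sketch below uses raw co-occurrence counts and a hypothetical count table; the paper's lexical-attraction model and its integration with a German parser are of course more involved.

```python
def attach(verb, noun, prep, counts):
    """Choose verb vs. noun attachment for a PP by comparing simple
    lexical-attraction counts of (head, preposition) pairs. A toy
    decision rule, not the paper's model."""
    v_score = counts.get((verb, prep), 0)
    n_score = counts.get((noun, prep), 0)
    return "verb" if v_score >= n_score else "noun"

# Hypothetical counts, as if harvested from a corpus:
COUNTS = {("eat", "with"): 9, ("pizza", "with"): 3,
          ("see", "with"): 2, ("man", "with"): 8}
```

So "eat pizza with a fork" attaches to the verb, while "see the man with a telescope" attaches (here) to the noun, purely on the strength of the counts.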

We describe a statistical approach for modeling agreements and disagreements in conversational interaction. Our approach first identifies adjacency pairs using maximum entropy ranking based on a set of lexical, durational, and structural features that look both forward and backward in the discourse. We then classify utterances as agreement or disagreement using these adjacency pairs and features that represent various pragmatic influences of previous agreement or disagreement on the current utterance.


  • We present a new approach to stochastic modeling of constraintbased grammars that is based on loglinear models and uses EM for estimation from unannotated data. The techniques are applied to an LFG grammar for German. Evaluation on an exact match task yields 86% precision for an ambiguity rate of 5.4, and 90% precision on a subcat frame match for an ambiguity rate of 25. Experimental comparison to training from a parsebank shows a 10% gain from EM training.


A number of grammatical formalisms were introduced to define the syntax of natural languages. Among them are parallel multiple context-free grammars (pmcfg's) and lexical-functional grammars (lfg's). Pmcfg's and their subclass called multiple context-free grammars (mcfg's) are natural extensions of cfg's, and pmcfg's are known to be recognizable in polynomial time. Some subclasses of lfg's have been proposed, but they were shown to generate an NP-complete language. Finite state translation systems (fts') were introduced as a computational model of transformational grammars.


  • We investigate the lexical and syntactic flexibility of a class of idiomatic expressions. We develop measures that draw on such linguistic properties, and demonstrate that these statistical, corpus-based measures can be successfully used for distinguishing idiomatic combinations from non-idiomatic ones. We also propose a means for automatically determining which syntactic forms a particular idiom can appear in, and hence should be included in its lexical representation.

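One natural corpus-based flexibility measure of the kind the abstract describes is the entropy of an expression's distribution over observed syntactic forms: idiomatic combinations tend to cling to one fixed form (low entropy), literal ones spread across passives, plurals, and so on (higher entropy). The form labels and counts below are invented, and this is only one plausible measure, not necessarily the paper's.

```python
import math

def form_entropy(form_counts):
    """Syntactic flexibility of an expression as the entropy (in bits)
    of its distribution over observed syntactic forms. A sketch of the
    kind of statistical measure the paper develops."""
    total = sum(form_counts.values())
    probs = [c / total for c in form_counts.values() if c]
    return -sum(p * math.log2(p) for p in probs)

# Hypothetical counts: an idiom mostly in one canonical form,
# a literal combination spread over several forms.
idiom = {"V det N": 48, "N be V-ed": 1, "V det N-pl": 1}
literal = {"V det N": 20, "N be V-ed": 15, "V det N-pl": 15}
```

Thresholding such a score (or combining several) is one way to separate idiomatic from non-idiomatic combinations.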

