The lexical expressions

Showing 1-20 of 41 results for "The lexical expressions"
  • Comparative expressions (CEs) such as "bigger than" and "more oranges than" are highly ambiguous, and their meaning is context dependent. Thus, they pose problems for the semantic interpretation algorithms typically used in natural language database interfaces. We focus on the comparison attribute ambiguities that occur with CEs. To resolve these ambiguities our natural language interface interacts with the user, finding out which of the possible interpretations was intended.
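
The interactive disambiguation strategy described above can be sketched as follows; the candidate comparison attributes and the `resolve_comparative`/`ask` names are invented for illustration, not taken from the paper.

```python
# Hypothetical sketch: when a comparative expression has several candidate
# comparison attributes, list them and ask the user which reading was intended.
# Attribute inventories here are invented toy data.

CANDIDATE_ATTRIBUTES = {
    "bigger": ["area", "population", "budget"],
}

def resolve_comparative(word, ask):
    """Return the intended comparison attribute, querying `ask` if ambiguous."""
    candidates = CANDIDATE_ATTRIBUTES.get(word, [])
    if len(candidates) <= 1:
        return candidates[0] if candidates else None
    return ask(f"'{word}' can compare {', '.join(candidates)}; which one?",
               candidates)

# Simulated user who always picks the first option:
print(resolve_comparative("bigger", lambda prompt, opts: opts[0]))  # → area
```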

  • This article outlines a quantitative method for segmenting texts into thematically coherent units. The method relies on a network of lexical collocations to compute the thematic coherence of the different parts of a text from the lexical cohesiveness of their words. We also present the results of an experiment on locating boundaries in a series of concatenated texts. Several quantitative methods exist for thematically segmenting texts; most of them are based on the following assumption: the thematic coherence of a text segment finds expression at the lexical level.
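
A minimal sketch of lexical-cohesion-based segmentation in the spirit of this approach; a plain word-overlap score stands in for the paper's collocation network, which this sketch does not model.

```python
def cohesion(left, right):
    """Word-overlap cohesion between two bags of words (Jaccard, 0..1)."""
    a, b = set(left), set(right)
    return len(a & b) / max(1, len(a | b))

def segment(sentences, window=2, threshold=0.1):
    """Return indices of sentences before which a thematic boundary falls."""
    boundaries = []
    for i in range(window, len(sentences) - window + 1):
        left = [w for s in sentences[i - window:i] for w in s]
        right = [w for s in sentences[i:i + window] for w in s]
        if cohesion(left, right) < threshold:
            boundaries.append(i)
    return boundaries

docs = [
    "the cat sat on the mat".split(),
    "a cat chased the mouse".split(),
    "stock prices fell sharply".split(),
    "the market closed lower today".split(),
]
print(segment(docs, threshold=0.15))  # → [2]: boundary before the stock-market sentences
```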

  • Default inheritance is a useful tool for encoding linguistic generalisations that have exceptions. In this paper we show how the use of an order-independent typed default unification operation can provide a non-redundant, highly structured and concise representation for specifying a network of lexical types that encodes linguistic information about verbal subcategorisation.
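
Plain feature inheritance with overriding defaults can illustrate the idea, though it is far simpler than typed default unification; the tiny type hierarchy below is invented for illustration.

```python
# Each lexical type inherits its parent's features; more specific types
# override inherited default values. Toy hierarchy, not the paper's network.

LEXICAL_TYPES = {
    "verb":       {"parent": None,   "features": {"cat": "V", "aux": False}},
    "trans-verb": {"parent": "verb", "features": {"subcat": ["NP"]}},
    "aux-verb":   {"parent": "verb", "features": {"aux": True}},  # overrides default
}

def resolve(type_name, hierarchy=LEXICAL_TYPES):
    """Collect features from the root down; later (more specific) values win."""
    chain = []
    t = type_name
    while t is not None:
        chain.append(t)
        t = hierarchy[t]["parent"]
    features = {}
    for t in reversed(chain):          # root first, leaf last
        features.update(hierarchy[t]["features"])
    return features

print(resolve("aux-verb"))  # the aux=False default from "verb" is overridden
```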

  • The SRI Core Language Engine (CLE) is a general-purpose natural language front end for interactive systems. It translates English expressions into representations of their literal meanings. This paper presents the lexical acquisition component of the CLE, which allows the creation of lexicon entries by users with knowledge of the application domain but not of linguistics or of the detailed workings of the system. It is argued that the need to cater for a wide range of types of back end leads naturally to an approach based on eliciting grammaticality judgments from the user.

  • Sentiment classification refers to the task of automatically identifying whether a given piece of text expresses positive or negative opinion towards a subject at hand. The proliferation of user-generated web content such as blogs, discussion forums and online review sites has made it possible to perform large-scale mining of public opinion. Sentiment modeling is thus becoming a critical component of market intelligence and social media technologies that aim to tap into the collective wisdom of crowds.
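
As a minimal illustration of the polarity decision at the heart of sentiment classification, the sketch below uses a small hand-made lexicon; systems in this line of work learn such weights from labelled corpora.

```python
# Invented toy lexicon; real sentiment models are trained, not hand-listed.
POSITIVE = {"good", "great", "excellent", "love", "wonderful"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "awful"}

def classify(text):
    """Return 'positive' or 'negative' by counting polarity-bearing words."""
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score >= 0 else "negative"

print(classify("great phone, excellent battery"))     # → positive
print(classify("terrible service and poor quality"))  # → negative
```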

  • Many applications of natural language processing technologies involve analyzing texts that concern the psychological states and processes of people, including their beliefs, goals, predictions, explanations, and plans. In this paper, we describe our efforts to create a robust, large-scale lexical-semantic resource for the recognition and classification of expressions of commonsense psychology in English text.

  • Much effort has been put into computational lexicons over the years, and most systems give much room to (lexical) semantic data. However, in these systems, the effort devoted to the study and representation of lexical items so as to express the underlying continuum in 1) language vagueness and polysemy, and 2) language gaps and mismatches has remained embryonic.

  • In this paper, we address the issue of syntagmatic expressions from a computational lexical semantic perspective. From a representational viewpoint, we argue for a hybrid approach combining linguistic and conceptual paradigms, in order to account for the continuum we find in natural languages from free combining words to frozen expressions. In particular, we focus on the place of lexical and semantic restricted co-occurrences.

  • This paper presents a tool for extracting multi-word expressions from corpora in Modern Greek, which is used together with a parallel concordancer to augment the lexicon of a rule-based machine translation system. The tool is part of a larger extraction system that relies, in turn, on a multilingual parser developed over the past decade in our laboratory. The paper reviews the various NLP modules and resources which enable the retrieval of Greek multi-word expressions and their translations: the Greek parser, its lexical database, and the extraction and concordancing system.
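
A common baseline for multi-word expression extraction is to rank bigrams by pointwise mutual information; the tool above is parser-based, so the following is only an illustrative sketch with an invented toy corpus.

```python
import math
from collections import Counter

def mwe_candidates(tokens, min_count=2):
    """Rank bigrams by PMI; frequent, strongly associated pairs surface first."""
    unigrams = Counter(tokens)
    bigrams = Counter(zip(tokens, tokens[1:]))
    n = len(tokens)
    scored = []
    for (w1, w2), c in bigrams.items():
        if c < min_count:
            continue
        pmi = math.log2((c / n) / ((unigrams[w1] / n) * (unigrams[w2] / n)))
        scored.append(((w1, w2), pmi))
    return sorted(scored, key=lambda x: -x[1])

corpus = ("machine translation system uses machine translation "
          "rules the system uses rules system").split()
print(mwe_candidates(corpus)[0][0])  # → ('machine', 'translation')
```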

  • Most algorithms dedicated to the generation of referential descriptions suffer from a fundamental problem: they make too strong assumptions about adjacent processing components, resulting in limited coordination with their perceptual and linguistic data, that is, the provider of object descriptors and the lexical expression by which the chosen descriptors are ultimately realized.

  • “Lightweight” semantic annotation of text calls for a simple representation, ideally without requiring a semantic lexicon to achieve good coverage in the language and domain. In this paper, we repurpose WordNet’s supersense tags for annotation, developing specific guidelines for nominal expressions and applying them to Arabic Wikipedia articles in four topical domains.

  • We present a data-driven approach to learn user-adaptive referring expression generation (REG) policies for spoken dialogue systems. Referring expressions can be difficult to understand in technical domains where users may not know the technical ‘jargon’ names of the domain entities. In such cases, dialogue systems must be able to model the user’s (lexical) domain knowledge and use appropriate referring expressions.

  • This paper describes an ongoing project concerned with an ontological lexical resource based on the abundant conceptual information grounded in Chinese characters. The ultimate goal of this project is to construct a cognitively sound and computationally effective character-grounded machine-understandable resource. Philosophically, the Chinese ideogram has its own ontological status, but its applicability to NLP tasks has not been expressed explicitly in terms of a language resource.

  • We present a novel approach to the word sense disambiguation problem which makes use of corpus-based evidence combined with background knowledge. Employing an inductive logic programming algorithm, the approach generates expressive disambiguation rules which exploit several knowledge sources and can also model relations between them. The approach is evaluated on two tasks: identification of the correct translation for a set of highly ambiguous verbs in English-Portuguese translation, and disambiguation of verbs from the Senseval-3 lexical sample task.
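
The application of learned disambiguation rules can be sketched as a first-match search over context conditions; the rules below are invented toy examples, not the induced ILP rules of the paper.

```python
# Each rule: (target word, cue word required in context, chosen sense).
RULES = [
    ("bank", "river", "bank/shore"),
    ("bank", "money", "bank/institution"),
]

def disambiguate(word, context, rules=RULES, default=None):
    """Return the sense of the first rule whose context condition is satisfied."""
    ctx = set(context)
    for target, cue, sense in rules:
        if target == word and cue in ctx:
            return sense
    return default

print(disambiguate("bank", ["she", "deposited", "money"]))  # → bank/institution
```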

  • We present the design and evaluation of a translator's amanuensis that uses comparable corpora to propose and rank non-literal solutions to the translation of expressions from the general lexicon. Using distributional similarity and bilingual dictionaries, the method outperforms established techniques for extracting translation equivalents from parallel corpora. The interface to the system is available at:
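
The distributional-similarity step commonly reduces to cosine similarity over context-count vectors; the toy vectors below are invented, whereas the paper builds such vectors from comparable corpora.

```python
import math

def cosine(u, v):
    """Cosine similarity of two sparse count vectors represented as dicts."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = lambda x: math.sqrt(sum(c * c for c in x.values()))
    denom = norm(u) * norm(v)
    return dot / denom if denom else 0.0

contexts = {
    "buy":      {"money": 4, "shop": 3, "price": 2},
    "purchase": {"money": 3, "shop": 2, "price": 3},
    "swim":     {"water": 5, "pool": 3},
}
print(round(cosine(contexts["buy"], contexts["purchase"]), 3))  # high: shared contexts
print(cosine(contexts["buy"], contexts["swim"]))                # → 0.0
```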

  • Context-free grammar (CFG) has been a well-accepted framework for computational linguistics for a long time. While it has drawbacks, including the inability to express some linguistic constructions, it has the virtue of being computationally efficient: O(n³) time in the worst case. Recently there has been growing interest in the so-called 'mildly' context-sensitive formalisms (Vijay-Shanker, 1987; Weir, 1988; Joshi, Vijay-Shanker, and Weir, 1991; Vijay-Shanker and Weir, 1993a) that generate only a small superset of context-free languages.
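
The O(n³) worst-case bound comes from chart parsing; a textbook CKY recognizer for a grammar in Chomsky normal form makes the three nested loops over spans explicit. The toy grammar below is invented for illustration.

```python
def cky_recognize(words, lexicon, binary_rules, start="S"):
    """Chart-based CFG recognition in O(n^3) time (grammar held constant)."""
    n = len(words)
    # chart[i][j] = set of nonterminals spanning words[i:j]
    chart = [[set() for _ in range(n + 1)] for _ in range(n + 1)]
    for i, w in enumerate(words):
        chart[i][i + 1] = {A for A, ws in lexicon.items() if w in ws}
    for span in range(2, n + 1):            # O(n) span lengths
        for i in range(n - span + 1):       # O(n) start positions
            j = i + span
            for k in range(i + 1, j):       # O(n) split points
                for A, (B, C) in binary_rules.items():
                    if B in chart[i][k] and C in chart[k][j]:
                        chart[i][j].add(A)
    return start in chart[0][n]

lexicon = {"NP": {"she", "fish"}, "V": {"eats"}}
rules = {"S": ("NP", "VP"), "VP": ("V", "NP")}
print(cky_recognize("she eats fish".split(), lexicon, rules))  # → True
```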

  • This study employs a knowledge-intensive corpus analysis to identify the elements of the communicative context which can be used to determine the appropriate lexical and grammatical form of instructional texts. IMAGENE, an instructional text generation system based on this analysis, is presented, particularly with reference to its expression of precondition relations. Technical writers routinely employ a range of forms of expression for preconditions in instructional text.

  • Referring expressions and other object descriptions should be maximal under the Local Brevity, No Unnecessary Components, and Lexical Preference preference rules; otherwise, they may lead hearers to infer unwanted conversational implicatures. These preference rules can be incorporated into a polynomial-time generation algorithm, while some alternative formalizations of conversational implicature make the generation task NP-hard. Incorrect conversational implicatures may also arise from inappropriate attributive (informational) descriptions.
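
The "No Unnecessary Components" idea can be illustrated with the classic incremental attribute-selection scheme: an attribute is kept only if it rules out at least one remaining distractor. The preference order and objects below are invented, and this is a generic sketch rather than the paper's exact algorithm.

```python
PREFERENCE_ORDER = ["type", "colour", "size"]

def describe(target, distractors, preference=PREFERENCE_ORDER):
    """Choose attribute-value pairs that distinguish target from distractors."""
    description = {}
    remaining = list(distractors)
    for attr in preference:
        value = target[attr]
        ruled_out = [d for d in remaining if d.get(attr) != value]
        if ruled_out:                       # the attribute does useful work
            description[attr] = value
            remaining = [d for d in remaining if d.get(attr) == value]
        if not remaining:
            break
    return description

target = {"type": "dog", "colour": "black", "size": "small"}
others = [{"type": "cat", "colour": "black", "size": "small"},
          {"type": "dog", "colour": "white", "size": "large"}]
print(describe(target, others))  # → {'type': 'dog', 'colour': 'black'}; size is unnecessary
```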

  • In this paper we present a proposal to extend WordNet-like lexical databases by adding phrasets, i.e. sets of free combinations of words which are recurrently used to express a concept (let's call them recurrent free phrases). Phrasets are a useful source of information for different NLP tasks, and particularly in a multilingual environment to manage lexical gaps. Two experiments are presented to check the possibility of acquiring recurrent free phrases from dictionaries and corpora.

  • This paper shows how higher levels of generalization can be introduced into unification grammars by exploiting methods for typing grammatical objects. We discuss the strategy of using global declarations to limit possible linguistic structures, and sketch a few unusual aspects of our typechecking algorithm. We also describe the sort system we use in our semantic representation language and illustrate the expressive power gained by being able to state global constraints over these sorts.


