
Lexical acquisition

Showing 1–20 of 24 results for Lexical acquisition
  • The ebook "Foundations of Statistical Natural Language Processing" covers lexical acquisition alongside an introduction, mathematical foundations, linguistic essentials, corpus-based work, collocations, statistical inference (n-gram models over sparse data), word sense disambiguation, and other topics.

  • Vocabulary acquisition is central to language learning and of great importance to English language learners. Many learners have difficulty selecting the right words to express their ideas because their lexical knowledge is insufficient. This work analyzes the significance of teaching sense relations in vocabulary instruction.

  • The lexicons for Knowledge-Based Machine Translation systems require knowledge-intensive morphological, syntactic, and semantic information. This information is often used in different ways and is usually formatted for a specific NLP system, which tends to make both the acquisition and maintenance of lexical databases cumbersome, inefficient, and error-prone. To solve these problems, we have developed a program called COOL, which automates the acquisition and maintenance processes and allows us to standardize and centralize the databases.

  • The SRI Core Language Engine (CLE) is a general-purpose natural language front end for interactive systems. It translates English expressions into representations of their literal meanings. This paper presents the lexical acquisition component of the CLE, which allows the creation of lexicon entries by users with knowledge of the application domain but not of linguistics or of the detailed workings of the system. It is argued that the need to cater for a wide range of types of back end leads naturally to an approach based on eliciting grammaticality judgments from the user.

  • We apply machine learning techniques to classify automatically a set of verbs into lexical semantic classes, based on distributional approximations of diatheses, extracted from a very large annotated corpus. Distributions of four grammatical features are sufficient to reduce error rate by 50% over chance. We conclude that corpus data is a usable repository of verb class information, and that corpus-driven extraction of grammatical features is a promising methodology for automatic lexical acquisition.
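The abstract names neither the feature set nor the classifier; as a hedged illustration of the general idea, the sketch below represents each verb by the relative frequencies of four grammatical features and assigns it to the class with the nearest centroid. The class names, feature names, and numbers are invented for illustration, not taken from the paper.

```python
# Toy nearest-centroid classification of verbs from distributional
# grammatical features (all names and values are illustrative).
from math import dist

# Feature vectors: (transitive, passive, animate_subject, causative)
TRAINING = {
    "unergative":   [(0.2, 0.1, 0.9, 0.1), (0.3, 0.1, 0.8, 0.2)],
    "unaccusative": [(0.4, 0.2, 0.2, 0.7), (0.5, 0.3, 0.3, 0.6)],
    "object_drop":  [(0.8, 0.4, 0.7, 0.1), (0.9, 0.5, 0.6, 0.2)],
}

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(len(vectors[0])))

CENTROIDS = {cls: centroid(vs) for cls, vs in TRAINING.items()}

def classify(features):
    """Assign a verb to the class with the nearest feature centroid."""
    return min(CENTROIDS, key=lambda cls: dist(features, CENTROIDS[cls]))

print(classify((0.25, 0.1, 0.85, 0.15)))  # -> unergative
```

In the paper the features come from corpus counts rather than hand-set numbers, and the reported classifier is more sophisticated than a centroid rule; the sketch only shows how feature distributions can separate classes.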

  • This paper presents an exploratory data analysis in lexical acquisition for adjective classes using clustering techniques. From a theoretical point of view, this approach provides large-scale empirical evidence for a sound classification. From a computational point of view, it helps develop a reliable automatic subclassification method. Results show that the features used in theoretical work can be successfully modelled in terms of shallow cues.

  • This paper presents a formalization of automatic grammar acquisition that is based on lexicalized grammar formalisms (e.g. LTAG and HPSG). We state the conditions for the consistent acquisition of a unique lexicalized grammar from an annotated corpus. The idea in this study is to automatically obtain the lexical entries from an annotated corpus, which greatly reduces the cost of building the grammar.

  • We describe how unknown lexical entries are processed in a unification-based framework with large-coverage grammars, and how lexical entries are extracted from their usage. To keep time and space usage during parsing within bounds, information from external sources such as part-of-speech (PoS) taggers and morphological analysers is taken into account when constructing information for unknown words.

  • This paper presents a computational model of verb acquisition which uses what we will call the principle of structured overcommitment to eliminate the need for negative evidence. The learner escapes the need to be told that certain possibilities cannot occur (i.e., are "ungrammatical") by one simple expedient: it assumes that all properties it has observed are either obligatory or forbidden until it sees otherwise, at which point it decides that what it thought was obligatory or forbidden is merely optional.
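The overcommitment rule described in the abstract can be sketched directly: every observed property starts out obligatory, a never-observed property is implicitly forbidden, and the first counterexample demotes either status to optional. The property names and the class interface below are invented for illustration.

```python
# Hedged sketch of "structured overcommitment": no negative evidence
# is ever required; observations alone drive all status changes.
OBLIGATORY, OPTIONAL, FORBIDDEN = "obligatory", "optional", "forbidden"

class VerbLexicon:
    def __init__(self):
        self.status = {}  # property -> OBLIGATORY or OPTIONAL

    def observe(self, properties):
        """Update the lexicon from one observed usage of the verb."""
        properties = set(properties)
        # A property seen for the first time is assumed obligatory.
        for p in properties - self.status.keys():
            self.status[p] = OBLIGATORY
        # An obligatory property absent from this usage becomes optional.
        for p, s in self.status.items():
            if s == OBLIGATORY and p not in properties:
                self.status[p] = OPTIONAL

    def query(self, prop):
        """Never-observed properties are treated as forbidden."""
        return self.status.get(prop, FORBIDDEN)

lex = VerbLexicon()
lex.observe({"direct_object", "animate_subject"})
lex.observe({"direct_object"})              # animate_subject demoted
print(lex.query("direct_object"))           # -> obligatory
print(lex.query("animate_subject"))         # -> optional
print(lex.query("sentential_complement"))   # -> forbidden
```

Note the monotone direction of change: statuses only ever move toward optional, so the learner never needs to retract a permission it has granted.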

  • This paper describes a computational model of concept acquisition for natural language. We develop a theory of lexical semantics, the Extended Aspect Calculus, which together with a "markedness theory" for thematic relations constrains what a possible word meaning can be. This is based on the supposition that predicates from the perceptual domain are the primitives for more abstract relations. We then describe an implementation of this model, TULLY, which mirrors the stages of lexical acquisition in children.

  • INKA is a natural language interface to facilitate knowledge acquisition during expert system development for electronic instrument troubleshooting. The expert system design methodology develops a domain definition, called GLIB, in the form of a semantic grammar. This grammar format enables GLIB to be used with the INGLISH interface, which constrains users to create statements within a subset of English. Incremental parsing in INGLISH allows immediate remedial information to be generated if a user deviates from the sublanguage.

  • This paper deals with the discovery, representation, and use of lexical rules (LRs) during large-scale semi-automatic computational lexicon acquisition. The analysis is based on a set of LRs implemented and tested on the basis of Spanish and English business- and finance-related corpora. We show that, though the use of LRs is justified, they do not come cost-free. Semi-automatic output checking is required, even with blocking and preemption procedures built in.

  • Most natural language processing tasks require lexical semantic information. Automated acquisition of this information would thus increase the robustness and portability of NLP systems. This paper describes an acquisition method which makes use of fixed correspondences between derivational affixes and lexical semantic information. One advantage of this method, and of other methods that rely only on surface characteristics of language, is that the necessary input is currently available.
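The "fixed correspondences between derivational affixes and lexical semantic information" can be pictured as a simple suffix table consulted at the surface level. The affixes and the semantic features below are invented examples, not the paper's inventory.

```python
# Illustrative affix-to-semantics lookup; longest matching suffix wins.
AFFIX_SEMANTICS = {
    "-er":   {"pos": "noun", "semantics": "agent of the base verb"},
    "-able": {"pos": "adj",  "semantics": "capable of undergoing the base verb"},
    "-ize":  {"pos": "verb", "semantics": "causative / change of state"},
    "-ness": {"pos": "noun", "semantics": "state named by the base adjective"},
}

def analyze(word):
    """Return (affix, info) for the longest matching suffix, or None."""
    for affix in sorted(AFFIX_SEMANTICS, key=len, reverse=True):
        if word.endswith(affix[1:]):  # strip the leading hyphen
            return affix, AFFIX_SEMANTICS[affix]
    return None

print(analyze("washable")[0])  # -> -able
print(analyze("baker")[0])     # -> -er
```

Because such a table needs only the surface form of the word, the method runs on input that is already available, which is exactly the advantage the abstract highlights.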

  • Automatic acquisition of translation rules from parallel sentence-aligned text takes a variety of forms. Some machine translation (MT) systems treat aligned sentences as unstructured word sequences. Other systems, including our own ((Grishman, 1994) and (Meyers et al., 1996)), syntactically analyze (parse) sentences before acquiring transfer rules (cf. (Kaji et al., 1992), (Matsumoto et al., 1993), and (Kitamura and Matsumoto, 1995)). This has the advantage of acquiring structural as well as lexical correspondences.

  • We introduce an approach to the automatic acquisition of new concepts from natural language texts which is tightly integrated with the underlying text understanding process. The learning model is centered around the 'quality' of different forms of linguistic and conceptual evidence which underlies the incremental generation and refinement of alternative concept hypotheses, each one capturing a different conceptual reading for an unknown lexical item.

  • This paper presents our work on the accumulation of lexical sets, which includes acquisition of dictionary resources and production of new lexical sets from them. The acquisition method, using a context-free syntax-directed translator and text modification techniques, proves easy to use, flexible, and efficient. Categories of production are analyzed, and basic operations are proposed which make up a formalism for specifying and performing production.

  • We describe the ongoing construction of a large, semantically annotated corpus resource as a reliable basis for the large-scale acquisition of word-semantic information, e.g. the construction of domain-independent lexica. The backbone of the annotation are semantic roles in the frame semantics paradigm. We report experiences and evaluate the annotated data from the first project stage. On this basis, we discuss the problems of vagueness and ambiguity in semantic annotation.

  • Supervised learning methods for WSD yield better performance than unsupervised methods, yet the availability of clean training data for the former is still a severe challenge. In this paper, we present an unsupervised bootstrapping approach for WSD which exploits huge amounts of automatically generated noisy data for training within a supervised learning framework. The method is evaluated using the 29 nouns in the English Lexical Sample task of SENSEVAL-2.
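The bootstrapping idea of growing a classifier from automatically labelled noisy data can be shown with a simplified self-training loop. The seed collocates, contexts, and the overlap-count "classifier" below are invented; the actual system trains a proper supervised learner on its generated data.

```python
# Self-training sketch for WSD: label raw contexts with seed words,
# then grow each sense's vocabulary from the newly labelled contexts.
from collections import Counter

SEEDS = {"financial": {"money"}, "river": {"water"}}

RAW_CONTEXTS = [
    "deposited money at the bank branch",
    "the bank raised interest rates on money",
    "fishing from the grassy bank near the water",
    "water flooded the bank after the storm",
]

def bootstrap(contexts, seeds, rounds=2):
    """Grow sense-specific vocabularies by labelling raw contexts."""
    vocab = {sense: set(words) for sense, words in seeds.items()}
    for _ in range(rounds):
        counts = {sense: Counter() for sense in vocab}
        for ctx in contexts:
            tokens = set(ctx.split())
            best = max(vocab, key=lambda s: len(tokens & vocab[s]))
            if tokens & vocab[best]:          # only keep confident labels
                counts[best].update(tokens)
        for sense, c in counts.items():       # expand with frequent words
            vocab[sense] |= {w for w, n in c.most_common(5)}
    return vocab

def classify(context, vocab):
    tokens = set(context.split())
    return max(vocab, key=lambda s: len(tokens & vocab[s]))

vocab = bootstrap(RAW_CONTEXTS, SEEDS)
print(classify("loan money from the bank", vocab))  # -> financial
```

The noise tolerated here (function words like "the" entering every sense's vocabulary) is the same kind the abstract refers to: the labels are imperfect, but in aggregate they still separate the senses.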

  • The EM clustering algorithm (Hofmann and Puzicha, 1998) used here is an unsupervised machine learning algorithm that has been applied in many NLP tasks, such as inducing a semantically labeled lexicon and determining lexical choice in machine translation (Rooth et al., 1998), automatic acquisition of verb semantic classes (Schulte im Walde, 2000) and automatic semantic labeling (Gildea and Jurafsky, 2002).
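The E-step / M-step alternation at the heart of EM clustering can be shown on toy one-dimensional data with a two-component, equal-weight Gaussian mixture. This is a generic illustration only; the cited work clusters verb-argument pairs over latent semantic classes, not scalar points.

```python
# Minimal EM sketch: soft assignment (E-step) then weighted
# re-estimation of the cluster means (M-step), repeated to convergence.
import math

def em(data, mu, sigma=1.0, iterations=20):
    """Fit a two-component equal-weight Gaussian mixture by EM."""
    mu = list(mu)
    for _ in range(iterations):
        # E-step: posterior responsibility of cluster 0 for each point.
        resp = []
        for x in data:
            p = [math.exp(-((x - m) ** 2) / (2 * sigma ** 2)) for m in mu]
            resp.append(p[0] / (p[0] + p[1]))
        # M-step: re-estimate each mean from its responsibility-weighted points.
        w0 = sum(resp)
        w1 = sum(1 - r for r in resp)
        mu[0] = sum(r * x for r, x in zip(resp, data)) / w0
        mu[1] = sum((1 - r) * x for r, x in zip(resp, data)) / w1
    return mu

data = [1.0, 1.2, 0.8, 5.0, 5.3, 4.9]
print(em(data, mu=[0.0, 6.0]))  # means converge to roughly 1.0 and 5.07
```

In the NLP applications listed in the abstract, the latent variable is a semantic class and the "points" are co-occurrence events, but the same two-step update structure applies.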

  • This paper addresses two remaining challenges in Chinese word segmentation. The challenge in HLT is to find a robust segmentation method that requires no prior lexical knowledge and no extensive training to adapt to new types of data. The challenge in modelling human cognition and acquisition is to segment words efficiently without using knowledge of wordhood. We propose a radical method of word segmentation to meet both challenges.

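The abstract does not spell out the proposed method, but the family of lexicon-free segmenters it belongs to can be sketched with a cohesion statistic: place a word boundary wherever two adjacent characters rarely co-occur in the raw corpus. The threshold rule, corpus, and character inventory below are invented simplifications.

```python
# Knowledge-free segmentation sketch: split where bigram cohesion is weak.
from collections import Counter

def train_bigrams(corpus):
    """Count adjacent-character pairs over raw, unsegmented strings."""
    counts = Counter()
    for line in corpus:
        for a, b in zip(line, line[1:]):
            counts[a + b] += 1
    return counts

def segment(text, bigrams, threshold=3):
    words, word = [], text[0]
    for a, b in zip(text, text[1:]):
        if bigrams[a + b] >= threshold:
            word += b               # strong cohesion: stay in the word
        else:
            words.append(word)      # weak cohesion: start a new word
            word = b
    words.append(word)
    return words

# Toy "corpus" of unsegmented strings with recurring units AB and CD.
corpus = ["ABCD", "ABAB", "CDAB", "ABCD", "CDCD"]
bigrams = train_bigrams(corpus)
print(segment("ABCD", bigrams))  # -> ['AB', 'CD']
```

No lexicon and no annotated training data are used: everything the segmenter knows comes from co-occurrence counts over the raw text, which is the constraint both challenges in the abstract impose.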