Showing 1-20 of 96 results for "Simple sentence"
  • One problem for the generation of natural language text is determining when to use a sequence of simple sentences and when a single complex one is more appropriate. In this paper, we show how focus of attention is one factor that influences this decision and describe its implementation in a system that generates explanations for a student advisor expert system.


  • A simple sentence has one clause, beginning with a noun group called the subject. The subject is the person or thing that the sentence is about. This is followed by a verb group, which tells you what the subject is doing, or describes the subject's situation.


  • Reference presentation 'week 5 - the simple sentence (p1) - edited', on English grammar (foreign languages), intended to support effective study, research, and work.


  • We describe a computational system which parses discourses consisting of sequences of simple sentences. These contain a range of temporal constructions, including time adverbials, progressive aspect and various aspectual classes. In particular, the grammar generates the required readings, according to the theoretical analysis of (Glasbey, forthcoming), for sentence-final 'then'.


  • Unit 1: Clause and sentence structure. Main points: Simple sentences have one clause. Clauses usually consist of a noun group as the subject, and a verb group. Clauses can also have another noun group as the object or complement. Clauses can have an adverbial, also called an adjunct. Changing the order of the words in a clause can change its meaning. Compound sentences consist of two or more main clauses. Complex sentences always include a subordinate clause, as well as one or more main clauses....

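The unit's three sentence types can be illustrated with a rough heuristic sketch (my own illustration, not part of the unit; the conjunction lists are deliberately tiny, and the rule ignores conjunctions that join words rather than clauses, so it would mislabel e.g. "bread and butter"):

```python
# Crude heuristic: label a sentence simple, compound, or complex by
# scanning for clause-linking conjunctions. Illustration only.
COORDINATORS = {"and", "but", "or", "so", "yet"}
SUBORDINATORS = {"because", "although", "when", "while", "if", "since"}

def classify_sentence(sentence: str) -> str:
    words = [w.strip(".,!?").lower() for w in sentence.split()]
    if any(w in SUBORDINATORS for w in words):
        return "complex"    # contains a subordinate-clause marker
    if any(w in COORDINATORS for w in words):
        return "compound"   # main clauses joined by a coordinator
    return "simple"         # one clause: subject + verb group

print(classify_sentence("The dog barked."))                     # simple
print(classify_sentence("We stayed home because it rained."))   # complex
```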

  • Exercises A. Complete the sentences. Example: I didn't watch TV last night. 1. On Saturday I (play) computer games with my cousins. 2. My mum (not cook) dinner last night. 3. I (walk) to school because there weren't any buses. 4. They (not dance) at the party. 5. My brother (travel) to Ireland last summer.

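The regular past-tense forms this kind of exercise drills can be sketched as spelling rules (an illustrative sketch; the irregular-verb table here is a placeholder with only a few entries, and British doubled-consonant spellings like "travelled" are not handled):

```python
# Minimal sketch of regular past-simple formation; irregular verbs
# need a lookup table (only a few placeholder entries included).
IRREGULAR = {"go": "went", "see": "saw", "eat": "ate"}

def past_simple(verb: str) -> str:
    if verb in IRREGULAR:
        return IRREGULAR[verb]
    if verb.endswith("e"):
        return verb + "d"             # dance -> danced
    if verb.endswith("y") and verb[-2] not in "aeiou":
        return verb[:-1] + "ied"      # study -> studied
    return verb + "ed"                # play -> played
```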

  • We consider a new subproblem of unsupervised parsing from raw text, unsupervised partial parsing—the unsupervised version of text chunking. We show that addressing this task directly, using probabilistic finite-state methods, produces better results than relying on the local predictions of a current best unsupervised parser, Seginer’s (2007) CCL. These finite-state models are combined in a cascade to produce more general (full-sentence) constituent structures; doing so outperforms CCL by a wide margin in unlabeled PARSEVAL scores for English, German and Chinese. ...

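The paper's probabilistic finite-state chunkers are beyond a snippet, but the kind of structure a first cascade stage produces can be sketched with a deterministic noun-chunk recognizer (my own toy, not the paper's model):

```python
def np_chunk(tagged):
    """Greedy finite-state noun-chunk recognizer over (word, tag) pairs:
    optional determiner, any number of adjectives, one or more nouns."""
    chunks, i = [], 0
    while i < len(tagged):
        j = i
        if j < len(tagged) and tagged[j][1] == "DT":
            j += 1
        while j < len(tagged) and tagged[j][1] == "JJ":
            j += 1
        k = j
        while k < len(tagged) and tagged[k][1] == "NN":
            k += 1
        if k > j:                 # at least one noun: accept the chunk
            chunks.append(" ".join(w for w, _ in tagged[i:k]))
            i = k
        else:                     # no noun here: move on one token
            i += 1
    # a real cascade would feed these chunks into higher-level stages
    return chunks
```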

  • In this paper we examine the task of sentence simplification which aims to reduce the reading complexity of a sentence by incorporating more accessible vocabulary and sentence structure. We introduce a new data set that pairs English Wikipedia with Simple English Wikipedia and is orders of magnitude larger than any previously examined for sentence simplification.


  • It has previously been assumed in the psycholinguistic literature that finite-state models of language are crucially limited in their explanatory power by the locality of the probability distribution and the narrow scope of information used by the model. We show that a simple computational model (a bigram part-of-speech tagger based on the design used by Corley and Crocker (2000)) makes correct predictions on processing difficulty observed in a wide range of empirical sentence processing data. ...

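A bigram tagger of the kind discussed can be sketched as a tiny Viterbi search (the transition and emission probabilities below are invented for illustration, not drawn from Corley and Crocker's model):

```python
# Toy bigram (HMM) part-of-speech tagger; probabilities are invented.
trans = {("<s>", "DT"): 0.8, ("<s>", "NN"): 0.2,
         ("DT", "NN"): 0.9, ("DT", "JJ"): 0.1,
         ("JJ", "NN"): 1.0, ("NN", "VB"): 0.7, ("NN", "NN"): 0.3}
emit = {("DT", "the"): 1.0, ("NN", "dog"): 0.5, ("NN", "barks"): 0.1,
        ("VB", "barks"): 0.9, ("JJ", "big"): 1.0}
TAGS = ["DT", "NN", "VB", "JJ"]

def viterbi(words):
    # best[tag] = (probability, best tag sequence ending in tag)
    best = {t: (trans.get(("<s>", t), 0) * emit.get((t, words[0]), 0), [t])
            for t in TAGS}
    for w in words[1:]:
        new = {}
        for t in TAGS:
            p, seq = max(((best[pt][0] * trans.get((pt, t), 0)
                           * emit.get((t, w), 0), best[pt][1] + [t])
                          for pt in TAGS), key=lambda x: x[0])
            new[t] = (p, seq)
        best = new
    return max(best.values(), key=lambda x: x[0])[1]
```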

  • For each of the sentences in the text, they provided a ranking of how important that sentence is with respect to the content of the text, on an integer scale from 1 (not important) to 7 (very important). The approaches we evaluated are a simple paragraph-based approach that serves as a baseline, two word-based algorithms, and two coherence-based approaches.

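A paragraph-based baseline of the sort evaluated can be sketched as position-based scoring (the exact mapping to the 1-7 scale below is my own guess, chosen only to illustrate the idea that paragraph-opening sentences rank highest):

```python
def paragraph_baseline(paragraphs):
    """paragraphs: list of paragraphs, each a list of sentences.
    Returns (sentence, score) pairs: openers high, later positions decay."""
    scores = []
    for para in paragraphs:
        for pos, sent in enumerate(para):
            scores.append((sent, max(7 - 2 * pos, 1)))  # clamp to 1..7
    return scores
```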

  • Researchers in both machine translation (e.g., Brown et al., 1990) and bilingual lexicography (e.g., Klavans and Tzoukermann, 1990) have recently become interested in studying parallel texts, texts such as the Canadian Hansards (parliamentary proceedings) which are available in multiple languages (French and English). This paper describes a method for aligning sentences in these parallel texts, based on a simple statistical model of character lengths. The method was developed and tested on a small trilingual sample of Swiss economic reports.

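The dynamic-programming core of length-based alignment can be sketched as below (a simplification: only 1-1 matches and deletions, with a flat skip penalty assumed here, where Gale and Church use a probabilistic length-difference cost and also allow 2-1 merges):

```python
def align(src, tgt):
    """Align two sentence lists by character length; returns (i, j) pairs."""
    n, m = len(src), len(tgt)
    INF = float("inf")
    SKIP = 50  # assumed flat penalty (in characters) for an unmatched sentence
    cost = [[INF] * (m + 1) for _ in range(n + 1)]
    back = [[None] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(n + 1):
        for j in range(m + 1):
            if i and j:  # 1-1 match: cost is the character-length difference
                c = cost[i - 1][j - 1] + abs(len(src[i - 1]) - len(tgt[j - 1]))
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i - 1, j - 1)
            if i:        # source sentence left unmatched
                c = cost[i - 1][j] + SKIP
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i - 1, j)
            if j:        # target sentence left unmatched
                c = cost[i][j - 1] + SKIP
                if c < cost[i][j]:
                    cost[i][j], back[i][j] = c, (i, j - 1)
    pairs, ij = [], (n, m)
    while ij != (0, 0):  # trace the cheapest path back to the origin
        pi, pj = back[ij[0]][ij[1]]
        if (pi, pj) == (ij[0] - 1, ij[1] - 1):
            pairs.append((ij[0] - 1, ij[1] - 1))
        ij = (pi, pj)
    return pairs[::-1]
```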

  • The limited capacity of working memory is intrinsic to human sentence processing, and therefore must be addressed by any theory of human sentence processing. This paper gives a theory of garden-path effects and processing overload that is based on simple assumptions about human short-term memory capacity. This hypothesis is easily compatible with the above view of processing-load calculation: given a choice between two different representations for the same input string, simply choose the representation that is associated with the lower processing load. ...


  • In this paper we describe a method for simplifying sentences using Phrase Based Machine Translation, augmented with a re-ranking heuristic based on dissimilarity, and trained on a monolingual parallel corpus. We compare our system to a word-substitution baseline and two state-of-the-art systems, all trained and tested on paired sentences from the English part of Wikipedia and Simple Wikipedia.

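The dissimilarity re-ranking step can be sketched independently of the translation model (the candidate scores and the lam weight below are assumptions; the paper's actual scoring differs):

```python
def rerank(source, nbest, lam=0.5):
    """Pick the candidate maximising model score + lam * lexical
    dissimilarity from the source, so outputs that merely copy the
    input are penalised."""
    src = set(source.lower().split())
    def dissimilarity(cand):
        cw = set(cand.lower().split())
        return 1.0 - len(src & cw) / len(src | cw)  # 1 - Jaccard overlap
    # nbest: list of (candidate sentence, model score) pairs
    return max(nbest, key=lambda c: c[1] + lam * dissimilarity(c[0]))
```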

  • The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or large-scale resources. The proposed approach yields results comparable, and in some cases superior, to the state of the art.

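The flavour of compression as constrained optimisation can be sketched by brute force (a real system solves this with integer programming rather than enumeration; the single "keep a verb" constraint and the importance weights in the test are invented for illustration):

```python
from itertools import combinations

def compress(words, importance, is_verb, max_len):
    """Exhaustively pick at most max_len words (in order) maximising
    total importance, subject to the constraint that a verb survives."""
    best, best_val = None, -1.0
    for k in range(1, max_len + 1):
        for keep in combinations(range(len(words)), k):
            cand = [words[i] for i in keep]
            if not any(is_verb(w) for w in cand):
                continue  # violates the linguistic constraint
            val = sum(importance(w) for w in cand)
            if val > best_val:
                best, best_val = cand, val
    return best
```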

  • This set of candidate surface strings, represented as a word lattice, is then rescored by a word-bigram language model, to produce the best-ranked output sentence. FERGUS (Bangalore and Rambow, 2000), on the other hand, employs a model of syntactic structure during sentence realization. In simple terms, it adds a tree-based stochastic model to the approach taken by the Nitrogen system. This tree-based model chooses a best-ranked XTAG representation for a given dependency structure.

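Rescoring a lattice with a word-bigram model can be sketched as a per-slot dynamic program (the lattice here is a simple "sausage" of alternatives per position, and the bigram probabilities are invented for illustration):

```python
import math

bigram = {("<s>", "there"): 0.4, ("<s>", "the"): 0.6,
          ("there", "is"): 0.5, ("the", "is"): 0.01,
          ("is", "hope"): 0.3}

def best_path(lattice, floor=1e-6):
    """lattice: list of slots, each a list of alternative words.
    Returns the word sequence with the highest bigram log probability."""
    best = {"<s>": (0.0, [])}  # word -> (log prob, best path ending there)
    for slot in lattice:
        new = {}
        for w in slot:
            p, path = max(
                ((lp + math.log(bigram.get((pw, w), floor)), pth + [w])
                 for pw, (lp, pth) in best.items()),
                key=lambda x: x[0])
            new[w] = (p, path)
        best = new
    return max(best.values(), key=lambda x: x[0])[1]
```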

  • In this paper, we describe a fast algorithm for aligning sentences with their translations in a bilingual corpus. Existing efficient algorithms ignore word identities and only consider sentence length (Brown et al., 1991b; Gale and Church, 1991). Our algorithm constructs a simple statistical word-to-word translation model on the fly during alignment. We find the alignment that maximizes the probability of generating the corpus with this translation model.

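The idea of a word-to-word model built from the corpus itself can be sketched with co-occurrence counts (a crude stand-in for the paper's method, which trains the model on the fly during alignment rather than from a fixed provisional 1-1 alignment as here):

```python
from collections import Counter

def cooccurrence_model(src_sents, tgt_sents):
    """Assume sentences align 1-1, collect co-occurrence counts, and
    return a crude t(target_word | source_word) table."""
    counts, src_tot = Counter(), Counter()
    for s, t in zip(src_sents, tgt_sents):
        tws = t.split()
        for sw in s.split():
            src_tot[sw] += len(tws)
            for tw in tws:
                counts[(sw, tw)] += 1
    return {pair: c / src_tot[pair[0]] for pair, c in counts.items()}

def pair_score(model, s, t):
    """Average best translation probability of each source word in t:
    higher scores suggest s and t are mutual translations."""
    sws, tws = s.split(), t.split()
    return sum(max((model.get((sw, tw), 0.0) for tw in tws), default=0.0)
               for sw in sws) / len(sws)
```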

  • This paper presents a computational model of verb acquisition which uses what we will call the principle of structured overcommitment to eliminate the need for negative evidence. The learner escapes from the need to be told that certain possibilities cannot occur (i.e., are "ungrammatical") by one simple expedient: It assumes that all properties it has observed are either obligatory or forbidden until it sees otherwise, at which point it decides that what it thought was either obligatory or forbidden is merely optional.

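The overcommitment bookkeeping itself is simple enough to sketch directly (the property names in the test are invented examples, not taken from the paper):

```python
class OvercommitLearner:
    """Each property is assumed obligatory (first seen present) or
    forbidden (first seen absent) until a later observation contradicts
    the guess, at which point it is demoted to optional -- so no
    negative evidence is ever required."""
    def __init__(self):
        self.status = {}  # property -> "obligatory" | "forbidden" | "optional"

    def observe(self, present, absent):
        for p in present:
            if p not in self.status:
                self.status[p] = "obligatory"
            elif self.status[p] == "forbidden":
                self.status[p] = "optional"
        for p in absent:
            if p not in self.status:
                self.status[p] = "forbidden"
            elif self.status[p] == "obligatory":
                self.status[p] = "optional"
```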

  • This paper describes some operational aspects of a language comprehension model which unifies the linguistic theory and the semantic theory in respect to operations. The computational model, called Augmented Dependency Grammar (ADG), formulates not only the linguistic dependency structure of sentences but also the semantic dependency structure using the extended deep case grammar and field-oriented fact-knowledge based inferences. Fact knowledge base and ADG model clarify the qualitative difference between what we call semantics and logical meaning. ...


  • In a few minutes you'll be able to start practicing and testing your TOEIC vocabulary skills in both an effective and pleasant way. You will experience a new and exciting method of increasing your TOEIC word power. Learning TOEIC vocabulary words fast is simple, but it certainly isn't easy. There is one single word that describes the secret to improving your TOEIC vocabulary: HABITS. Yes, you've heard that right. If you want to learn all the TOEIC words, you have to establish "TOEIC vocabulary habits". In a few moments you will get access to a tool that helps you develop the habits you...


  • In Part I you will find a clear and concise summary of English grammar: its forms, principles, and basic terminology. The material is presented in non-technical language and in easy, natural steps, beginning with the structure of the simple sentence, and continuing through the various parts of speech and other common sentence elements to the more difficult constructions. All terms and forms are amply illustrated with models and practice exercises. The section ends with "A Dictionary of Grammatical Terms," in Chapter 20, which will be useful for ready reference.

