The knowledge representation is an important factor in natural language generation since it limits the semantic capabilities of the generation system. This paper identifies several information types in a knowledge representation that can be used to generate meaningful responses to questions about database structure. Creating such a knowledge representation, however, is a long and tedious process. A system is presented which uses the contents of the database to form part of this knowledge representation automatically. ...
A collection of medical research reports published in international medical journals, providing background in the medical field; topic: Agent-based dynamic knowledge representation of Pseudomonas aeruginosa virulence activation in the stressed gut: Towards characterizing host-pathogen interactions in gut-derived sepsis.
A collection of medical research reports published in international medical journals, providing background in the medical field; topic: "Introduction of an agent-based multi-scale modular architecture for dynamic knowledge representation of acute inflammation".
Hence, it is quite possible that some of the comments may turn out to be inappropriate, or else they have already been taken care of in the full texts. In a couple of cases, I had the benefit of reading some earlier, longer related reports, which were very helpful. All the papers (except Sangster's) deal with either knowledge representation, particular types of knowledge to be represented, or how certain types of knowledge are to be used.
Although ultimately intended functions include text generation (e.g., 4), present efforts focus on text analysis: developing the capability to take in essentially unconstrained business text and to output grammar and style critiques on a sentence-by-sentence basis. Briefly, we use a large on-line dictionary and a bottom-up parser in connection with an Augmented Phrase Structure Grammar (5) to obtain an approximately correct structural description of the surface text (e.g., we posit no transformations or recovery of deleted material to infer underlying "deep" structures). ...
Knowledge-Based Report Generation is a technique for automatically generating natural language reports from computer databases. It is so named because it applies knowledge-based expert systems software to the problem of text generation. The first application of the technique, a system for generating natural language stock reports from a daily stock quotes database, is partially implemented.
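As a rough illustration of the kind of database-to-text mapping such a system performs, the following sketch renders one-sentence reports from daily stock quotes. The field names and phrasing rules are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of report generation from stock quote data.
# The phrasing templates below are illustrative assumptions.

def generate_stock_report(symbol, prev_close, close):
    """Render a one-sentence natural language report for one stock."""
    change = close - prev_close
    pct = 100.0 * change / prev_close
    if change > 0:
        verb = "rose"
    elif change < 0:
        verb = "fell"
    else:
        return f"{symbol} closed unchanged at {close:.2f}."
    return (f"{symbol} {verb} {abs(change):.2f} points "
            f"({abs(pct):.1f}%) to close at {close:.2f}.")

print(generate_stock_report("XYZ", 40.00, 41.00))
# -> XYZ rose 1.00 points (2.5%) to close at 41.00.
```

A knowledge-based system would, of course, replace these fixed templates with expert-system rules over the quote history; the sketch only shows the data-to-sentence step.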
In order to represent speech acts, in a multi-agent context, we choose a knowledge representation based on the modal logic of knowledge KT4 which is defined by Sato. Such a formalism allows us to reason about knowledge and represent knowledge about knowledge, the notions of truth value and of definite reference.
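For reference, the KT4 system (equivalent to S4) is characterized by the following axiom schemata and inference rule, written here in epistemic notation with K_a φ for "agent a knows φ":

```latex
\begin{align*}
\textbf{K:} &\quad K_a(\varphi \rightarrow \psi) \rightarrow (K_a\varphi \rightarrow K_a\psi) \\
\textbf{T:} &\quad K_a\varphi \rightarrow \varphi
  && \text{(knowledge is veridical)} \\
\textbf{4:} &\quad K_a\varphi \rightarrow K_a K_a\varphi
  && \text{(positive introspection: knowledge about knowledge)} \\
\textbf{Nec:} &\quad \text{from } \vdash \varphi \text{ infer } \vdash K_a\varphi
\end{align*}
```

Axiom 4 is what licenses the "knowledge about knowledge" reasoning the abstract mentions.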
What is knowledge? How can knowledge be explicitly represented? Many scientists from different fields of study have tried to answer those questions throughout history, though they have seldom agreed on the answers. Many representations have been proposed by researchers working in a variety of fields, such as computer science, mathematics, cognitive computing, cognitive science, psychology, linguistics, and philosophy of mind. Some of those representations are computationally tractable, some are not; this book is concerned only with the first kind....
We describe novel aspects of a new natural language generator called Nitrogen. This generator has a highly flexible input representation that allows a spectrum of input from syntactic to semantic depth, and shifts the burden of many linguistic decisions to the statistical post-processor. The generation algorithm is compositional, making it efficient, yet it also handles non-compositional aspects of language. Nitrogen's design makes it robust and scalable, operating with lexicons and knowledge bases of one hundred thousand entities. ...
We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi-compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) it achieves promising results on a word-sense similarity task; to our knowledge, this is the first time an unsupervised method has been applied to this task. ...
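The distinction between first- and second-order context vectors can be sketched as follows. This is a pure-Python illustration; the paper's syntactic enrichment and exact weighting scheme are not reproduced, and the tiny corpus is an assumption for demonstration only.

```python
# First-order vector: direct co-occurrence counts.
# Second-order vector: sum of the first-order vectors of the context words.
from collections import Counter
import math

corpus = [
    "the bank approved the loan".split(),
    "the river bank was muddy".split(),
]

def first_order(word, sents, window=2):
    """Co-occurrence counts of `word` within a +/-window context."""
    vec = Counter()
    for sent in sents:
        for i, w in enumerate(sent):
            if w == word:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        vec[sent[j]] += 1
    return vec

def second_order(word, sents, window=2):
    """Sum of the first-order vectors of `word`'s context words."""
    vec = Counter()
    for ctx_word, count in first_order(word, sents, window).items():
        for w, c in first_order(ctx_word, sents, window).items():
            vec[w] += count * c
    return vec

def cosine(u, v):
    dot = sum(u[k] * v[k] for k in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def contextualized(word, sents, alpha=0.5):
    """One simple way to combine both orders into one representation."""
    fo, so = first_order(word, sents), second_order(word, sents)
    return Counter({k: alpha * fo[k] + (1 - alpha) * so[k]
                    for k in set(fo) | set(so)})
```

Second-order vectors smooth over sparse direct co-occurrence, which is what makes a quasi-compositional combination of the two useful for contextualization.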
Knowledge of the anaphoricity of a noun phrase might be profitably exploited by a coreference system to bypass the resolution of non-anaphoric noun phrases. Perhaps surprisingly, however, recent attempts to incorporate automatically acquired anaphoricity information into coreference systems have led to degradation in resolution performance. This paper examines several key issues in computing and using anaphoricity information to improve learning-based coreference systems. In particular, we present a new corpus-based approach to anaphoricity determination.
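To make the task concrete, here is a toy rule-based anaphoricity check of the sort a corpus-based model would learn to improve on. The heuristics and token format are illustrative assumptions, not the paper's model.

```python
# Toy anaphoricity heuristics: flag NPs a coreference system could
# skip resolving. A learned, corpus-based model replaces these rules.
PRONOUNS = {"he", "she", "it", "they", "him", "her", "them"}

def likely_anaphoric(np_tokens, seen_head_nouns):
    """Heuristic anaphoricity check for a lowercased noun phrase."""
    head = np_tokens[-1]
    if len(np_tokens) == 1 and head in PRONOUNS:
        return True       # pronouns are almost always anaphoric
    if np_tokens[0] in {"a", "an"}:
        return False      # indefinites typically introduce new entities
    if np_tokens[0] == "the" and head in seen_head_nouns:
        return True       # definite NP with a prior mention of its head
    return False

print(likely_anaphoric(["a", "car"], set()))      # False
print(likely_anaphoric(["the", "car"], {"car"}))  # True
print(likely_anaphoric(["it"], set()))            # True
```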
This paper presents a system which automatically generates shallow semantic frame structures for conversational speech in unrestricted domains. We argue that such shallow semantic representations can indeed be generated with a minimum amount of linguistic knowledge engineering and without having to explicitly construct a semantic knowledge base.
A two-tier model for the description of morphological, syntactic and semantic variations of multi-word terms is presented. It is applied to term normalization of French and English corpora in the medical and agricultural domains. Five different sources of morphological and semantic knowledge are exploited (MULTEXT, CELEX, AGROVOC, WordNet 1.6, and the Microsoft Word 97 thesaurus).
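A minimal sketch of what normalizing multi-word term variants involves: morphological and syntactic variants are mapped to a canonical term by comparing bags of content-word lemmas. The tiny lemma table below stands in for resources like CELEX or MULTEXT and is purely illustrative.

```python
# Illustrative multi-word term normalization via lemma-bag matching.
# LEMMAS is a stand-in for a real morphological resource.
LEMMAS = {"cells": "cell", "measurements": "measurement",
          "pressures": "pressure"}
STOPWORDS = {"of", "the", "a", "in"}

def lemma_bag(term):
    """Bag of content-word lemmas for a term, ignoring function words."""
    words = term.lower().split()
    return frozenset(LEMMAS.get(w, w) for w in words if w not in STOPWORDS)

def normalize(variant, canonical_terms):
    """Return the canonical term whose lemma bag matches, if any."""
    bag = lemma_bag(variant)
    for term in canonical_terms:
        if lemma_bag(term) == bag:
            return term
    return None

terms = ["blood cell", "blood pressure measurement"]
print(normalize("cells of the blood", terms))            # -> blood cell
print(normalize("measurement of blood pressure", terms)) # -> blood pressure measurement
```

Real systems add semantic variation (synonyms from AGROVOC or WordNet) on top of this morphological and syntactic layer.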
In this paper we present a parser which allows us to make explicit the interconnections between syntax and semantics, to analyze sentences in a quasi-deterministic fashion and, in many cases, to identify the roles of the various constituents even if the sentence is ill-formed.
A tool is described which helps in the creation, extension and updating of lexical knowledge bases (LKBs). Two levels of representation are distinguished: a static storage level and a dynamic knowledge level. The latter is an object-oriented environment containing linguistic and lexicographic knowledge. At the knowledge level, constructors and filters can be defined. Constructors are objects which extend the LKB both horizontally (new information) and vertically (new entries) using the linguistic knowledge.
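The horizontal/vertical distinction for constructors can be sketched in an object-oriented style. The class and method names below are illustrative assumptions about such a knowledge level, not the tool's actual API.

```python
# Sketch of an LKB with constructor objects that extend it
# horizontally (new information on entries) and vertically (new entries).

class LexicalKB:
    def __init__(self):
        self.entries = {}   # headword -> feature dict

    def add_entry(self, headword, **features):
        self.entries[headword] = dict(features)

class HorizontalConstructor:
    """Adds a new feature to every entry matching a condition."""
    def __init__(self, condition, feature, derive):
        self.condition, self.feature, self.derive = condition, feature, derive

    def apply(self, kb):
        for head, feats in kb.entries.items():
            if self.condition(feats):
                feats[self.feature] = self.derive(head, feats)

class VerticalConstructor:
    """Creates new entries derived from existing ones."""
    def __init__(self, condition, derive):
        self.condition, self.derive = condition, derive

    def apply(self, kb):
        for head, feats in list(kb.entries.items()):
            if self.condition(feats):
                new_head, new_feats = self.derive(head, feats)
                kb.entries.setdefault(new_head, new_feats)

kb = LexicalKB()
kb.add_entry("walk", pos="verb")
# Horizontal: add a regular past-tense form to verbs (new information).
HorizontalConstructor(lambda f: f.get("pos") == "verb",
                      "past", lambda h, f: h + "ed").apply(kb)
# Vertical: derive an -er agent noun as a new entry.
VerticalConstructor(lambda f: f.get("pos") == "verb",
                    lambda h, f: (h + "er",
                                  {"pos": "noun", "base": h})).apply(kb)
print(kb.entries)
```

Filters, by contrast, would restrict which entries a constructor may touch; they are omitted here for brevity.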
In the field of knowledge based systems for natural language processing, one of the most challenging aims is to use parts of an existing knowledge base for different domains and/or different tasks. We support the point that this problem can only be solved by using adequate metainformation about the content and structuring principles of the representational systems concerned. One of the prerequisites in this respect is the transparency of modelling decisions.
We describe a novel approach to unsupervised learning of the events that make up a script, along with constraints on their temporal ordering. We collect naturallanguage descriptions of script-speciﬁc event sequences from volunteers over the Internet. Then we compute a graph representation of the script’s temporal structure using a multiple sequence alignment algorithm. The evaluation of our system shows that we outperform two informed baselines.
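The graph-induction step can be illustrated with a simpler stand-in for multiple sequence alignment: counting pairwise orderings across the collected sequences and keeping dominant precedences as edges. The example script and threshold are assumptions for illustration.

```python
# Inducing a temporal-precedence graph from event sequences.
# (Pairwise ordering counts stand in for the paper's MSA algorithm.)
from collections import Counter
from itertools import combinations

sequences = [
    ["enter", "order", "eat", "pay", "leave"],
    ["enter", "order", "pay", "eat", "leave"],
    ["enter", "eat", "pay", "leave"],
]

def precedence_graph(seqs, min_ratio=0.8):
    """Edge a->b if a precedes b in at least min_ratio of co-occurrences."""
    before = Counter()
    for seq in seqs:
        for a, b in combinations(seq, 2):   # a occurs before b in seq
            before[(a, b)] += 1
    edges = set()
    for (a, b), n in before.items():
        total = n + before[(b, a)]
        if n / total >= min_ratio:
            edges.add((a, b))
    return edges

graph = precedence_graph(sequences)
print(("enter", "leave") in graph)   # True: enter always precedes leave
print(("eat", "pay") in graph)       # False: eat/pay order varies
```

The resulting edge set encodes exactly the kind of ordering constraints the abstract describes: obligatory precedences survive, while freely ordered event pairs drop out.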
The three-tiered discourse representation defined in (Luperfoy, 1991) is applied to multimodal human-computer interface (HCI) dialogues. In the applied system the three tiers are (1) a linguistic analysis ... The third tier is the knowledge base (KB) that describes the belief system of one agent in the dialogue, namely, the backend system being interfaced to. Figure 1 diagrams a partitioning of the information available to a dialogue processing agent.