This book explains, collects and reports on the latest research results that aim at narrowing the so-called multimedia "Semantic Gap": the large disparity between descriptions of multimedia content that can be computed automatically, and the richness and subjectivity of semantics in user queries and human interpretations of audiovisual media. Addressing the grand challenge posed by the "Semantic Gap" requires a multi-disciplinary approach (computer science, computer vision and signal processing, cognitive science, web science, etc.) ...
Monolingual translation probabilities have recently been introduced in retrieval models to solve the lexical gap problem. They can be obtained by training statistical translation models on parallel monolingual corpora, such as question-answer pairs, where answers act as the "source" language and questions as the "target" language. In this paper, we propose to use as a parallel training dataset the definitions and glosses provided for the same term by different lexical semantic resources.
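Such monolingual translation probabilities are commonly estimated with IBM Model 1 via expectation-maximization. A minimal sketch on hypothetical answer-to-question pairs (the corpus and token lists below are toy data, not from the paper):

```python
from collections import defaultdict

def train_ibm1(pairs, iterations=10):
    """EM training of IBM Model 1 probabilities t(q|a) on
    (answer_tokens, question_tokens) pairs: answers are the
    "source" language, questions the "target" language."""
    # Uniform initialisation over the question-side vocabulary.
    q_vocab = {q for _, qs in pairs for q in qs}
    t = defaultdict(lambda: 1.0 / len(q_vocab))
    for _ in range(iterations):
        count = defaultdict(float)   # expected co-occurrence counts
        total = defaultdict(float)   # normaliser per source word
        for ans, ques in pairs:
            for q in ques:
                norm = sum(t[(q, a)] for a in ans)
                for a in ans:
                    c = t[(q, a)] / norm
                    count[(q, a)] += c
                    total[a] += c
        for (q, a), c in count.items():
            t[(q, a)] = c / total[a]
    return t

# Toy "parallel" corpus: answers as source, questions as target.
pairs = [
    (["restart", "the", "router"], ["how", "fix", "internet"]),
    (["restart", "the", "phone"], ["how", "fix", "phone"]),
]
t = train_ibm1(pairs)
```

After training, each source word's translation distribution sums to one over the question words it co-occurs with, and words that consistently co-occur (e.g. "router" with "internet" above) accumulate higher probability.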
Minimal Recursion Semantics (MRS) is the standard formalism used in large-scale HPSG grammars to model underspecified semantics. We present the first provably efficient algorithm to enumerate the readings of MRS structures, by translating them into normal dominance constraints.
Much effort has been put into computational lexicons over the years, and most systems give much room to (lexical) semantic data. However, in these systems, the effort devoted to studying and representing lexical items so as to express the underlying continuum in 1) language vagueness and polysemy, and 2) language gaps and mismatches, has remained embryonic.
This paper presents an algorithm to integrate different lexical resources, through which we hope to overcome the individual inadequacy of the resources, and thus obtain enriched lexical semantic information for applications such as word sense disambiguation. We used WordNet as a mediator between a conventional dictionary and a thesaurus. Preliminary results support our hypothesised structural relationship, which enables the integration of the resources. These results also suggest that we can combine the resources to achieve an overall balanced degree of sense discrimination. ...
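One common way to link a dictionary sense to a WordNet synset is Lesk-style gloss overlap. A minimal sketch with hypothetical entries (the glosses and synset ids below are illustrative, not taken from the paper's data):

```python
STOPWORDS = frozenset({"a", "an", "the", "of", "to", "in", "for", "and", "that"})

def overlap_score(gloss_a, gloss_b):
    """Bag-of-words overlap between two glosses (a simple Lesk-style measure)."""
    wa = set(gloss_a.lower().split()) - STOPWORDS
    wb = set(gloss_b.lower().split()) - STOPWORDS
    return len(wa & wb)

def link_sense(dict_gloss, synsets):
    """Pick the WordNet-style synset whose gloss best overlaps the dictionary gloss."""
    return max(synsets, key=lambda s: overlap_score(dict_gloss, s["gloss"]))

# Hypothetical entries for "bank":
synsets = [
    {"id": "bank.n.01", "gloss": "a financial institution that accepts deposits"},
    {"id": "bank.n.02", "gloss": "sloping land beside a body of water"},
]
best = link_sense("an institution for receiving and lending money and deposits", synsets)
```

Here `best["id"]` is `"bank.n.01"`, since the financial gloss shares the content words "institution" and "deposits" with the dictionary gloss.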
If a question-answering (QA) system tries to transform an English question directly into the simplest possible formulation of the corresponding data base query, discrepancies between the English lexicon and the structure of the data base cannot be handled well. To be able to deal with such discrepancies in a systematic way, the PHLIQA1 system distinguishes different levels of semantic representation; it contains modules which translate from one level to another, as well as a module which simplifies expressions within one level.
We define a back-and-forth translation between Hole Semantics and dominance constraints, two formalisms used in underspecified semantics. There are fundamental differences between the two, but we show that they disappear on practically useful descriptions. Our encoding bridges a gap between two underspecification formalisms, and speeds up the processing of Hole Semantics.
In this paper, we pursue a multimodular, statistical approach to WH dependencies, using a feedforward network as our modeling tool. The empirical basis of this model and the availability of performance measures for our system address deficiencies in earlier computational work on WH gaps, which require richer sources of semantic and lexical information in order to run. The statistical nature of our models allows them to be simply combined with other modules of grammar, such as a syntactic parser. ...
What should you do when the computer does not recognize a Bluetooth device?
Bluetooth connectivity now appears on most mobile and fixed devices. However, users also run into plenty of trouble with Bluetooth devices and ports, especially the phenomenon where the system fails to recognize such a device.
Many NLP tasks need accurate knowledge for semantic inference. To this end, mostly WordNet is utilized. Yet WordNet is limited, especially for inference between predicates. To help fill this gap, we present an algorithm that generates inference rules between predicates from FrameNet. Our experiment shows that the novel resource is effective and complements WordNet in terms of rule coverage.
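The core idea can be sketched as follows: predicates that evoke the same frame license rules between one another, and frame inheritance licenses directional rules from a child frame's predicates to a parent's. A toy sketch (the frames, predicates, and licensing policy below are simplified assumptions, not the paper's actual algorithm):

```python
def rules_from_frames(frames, inherits):
    """Generate predicate inference rules (premise, conclusion) from frames.

    - Predicates evoking the same frame license rules in both directions.
    - Frame inheritance (child, parent) licenses directional rules from
      the child's predicates to the parent's predicates.
    """
    rules = set()
    for preds in frames.values():
        for p in preds:
            for q in preds:
                if p != q:
                    rules.add((p, q))
    for child, parent in inherits:
        for p in frames[child]:
            for q in frames[parent]:
                rules.add((p, q))
    return rules

# Hypothetical frame data in the spirit of FrameNet:
frames = {
    "Commerce_buy": ["buy", "purchase"],
    "Getting": ["get", "acquire"],
}
inherits = [("Commerce_buy", "Getting")]
rules = rules_from_frames(frames, inherits)
```

Under these toy assumptions, "buy" entails "purchase" (same frame) and "buy" entails "get" (inheritance), but not the reverse: getting something does not imply buying it.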
carriers of semantic information - were substituted by a mere "-", whereas other formatives were left in their original shape and place. These transformed texts were presented to subjects who were asked to fill in the gaps in such a way that the texts thus obtained were both syntactically correct and reasonably coherent. The result of the experiment was rather astonishing.
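The gap-substitution step can be sketched as a toy function; the explicit content-word list here is a hypothetical stand-in for whatever selection criteria the original experiment used:

```python
def make_cloze(text, content_words):
    """Replace content-word tokens by '-' while leaving other formatives
    in their original shape and place."""
    tokens = text.split()
    return " ".join(
        "-" if t.lower().strip(".,") in content_words else t
        for t in tokens
    )

gapped = make_cloze(
    "The experiment produced astonishing results .",
    {"experiment", "results", "astonishing"},
)
# -> "The - produced - - ."
```

Subjects would then be asked to restore the dashed positions so that the text is both syntactically correct and reasonably coherent.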