§General introduction:
§Basic structure of the system:
§Coherent optical modulation formats:
§The coherent receiver:
§Bit error rate in the receiver
§Factors affecting receiver sensitivity
§Advantages of coherent systems
In 1991, optical coherence tomography (OCT) was first introduced to image the transparent tissue of the eye at a level of resolution significantly greater than that of conventional ultrasound techniques. OCT uses infrared light to produce images on a micrometer scale. The intensity of the reflected light is displayed as a false-color or grey-scale image. OCT imaging is analogous to ultrasound B-mode imaging, except that it performs imaging by measuring the intensity of reflected or backscattered light rather than acoustic...
We present a novel model to represent and assess the discourse coherence of text. Our model assumes that coherent text implicitly favors certain types of discourse relation transitions. We implement this model and apply it towards the text ordering ranking task, which aims to discern an original text from a permuted ordering of its sentences.
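The text ordering ranking task described above can be made concrete with a small sketch. The coherence scorer below is only an illustrative stand-in (adjacent-sentence word overlap), not the discourse relation transition model of the abstract; the evaluation loop, however, mirrors the standard protocol of ranking an original text against permuted orderings of its sentences.

```python
import random

def overlap_coherence(sentences):
    """Toy local-coherence score: mean word overlap between adjacent
    sentences. Illustrative stand-in for a trained coherence model."""
    words = lambda s: set(s.lower().split())
    if len(sentences) < 2:
        return 0.0
    return sum(len(words(a) & words(b))
               for a, b in zip(sentences, sentences[1:])) / (len(sentences) - 1)

def discrimination_accuracy(text, n_permutations=50, seed=0):
    """Fraction of random sentence permutations that the scorer ranks
    strictly below the original ordering: the usual evaluation for the
    text ordering ranking task."""
    rng = random.Random(seed)
    original = overlap_coherence(text)
    wins = 0
    for _ in range(n_permutations):
        perm = text[:]
        rng.shuffle(perm)
        wins += overlap_coherence(perm) < original
    return wins / n_permutations
```

A model is judged by how often it prefers the original ordering over a shuffled one; a trained transition model would replace `overlap_coherence` while the evaluation loop stays the same.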
This paper presents a new model of anaphoric processing that utilizes the establishment of coherence relations between clauses in a discourse. We survey data that comprises a currently stalemated argument over whether VP-ellipsis is an inherently syntactic or inherently semantic phenomenon, and show that the data can be handled within a uniform discourse processing architecture. This architecture, which revises the dichotomy between ellipsis vs. ...
It is claimed that a variety of facts concerning ellipsis, event reference, and interclausal coherence can be explained by two features of the linguistic form in question: (1) whether the form leaves behind an empty constituent in the syntax, and (2) whether the form is anaphoric in the semantics. It is proposed that these features interact with one of two types of discourse inference, namely Common Topic inference and Coherent Situation inference.
We extend the original entity-based coherence model (Barzilay and Lapata, 2008) by learning from more fine-grained coherence preferences in training data. We associate multiple ranks with the set of permutations originating from the same source document, as opposed to the original pairwise rankings. We also study the effect of the permutations used in training, and the effect of the coreference component used in entity extraction.
Optical coherence tomography (OCT) is an interferometric technique based on optical coherence gating. In OCT, imaging contrast originates from the sample's inhomogeneous scattering properties, which depend linearly on the sample's refractive indices. OCT offers an axial resolution of 2–15 μm and a penetration depth of around 2 mm. Since its invention in the late 1980s and early 1990s, OCT has experienced explosive growth in both technology and application.
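The 2–15 μm axial resolution quoted above follows from the coherence length of the light source. For a Gaussian source spectrum the standard relation is Δz = (2 ln 2 / π) · λ₀² / Δλ; the sketch below computes it, with an 840 nm / 50 nm source chosen purely as an illustrative example (those parameters are not from the text).

```python
import math

def oct_axial_resolution(center_wavelength, bandwidth):
    """Coherence-length-limited axial resolution of an OCT system with a
    Gaussian source spectrum: dz = (2 ln 2 / pi) * lambda0**2 / dlambda.
    Arguments and result are in metres."""
    return (2 * math.log(2) / math.pi) * center_wavelength ** 2 / bandwidth

# Illustrative source: 840 nm centre wavelength, 50 nm bandwidth
dz = oct_axial_resolution(840e-9, 50e-9)
print(f"axial resolution: {dz * 1e6:.1f} um")  # ~6.2 um, within the 2-15 um range
```

Note that axial resolution improves with broader bandwidth, which is why OCT development has pushed toward broadband sources.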
An experiment in the computer generation of coherent discourse was successfully conducted to test a hypothesis about the transitive nature of syntactic dependency relations among elements of the English language.
One goal of natural language generation is to produce coherent text that presents information in a logical order. In this paper, we show that topological ﬁelds, which model high-level clausal structure, are an important component of local coherence in German. First, we show in a sentence ordering experiment that topological ﬁeld information improves the entity grid model of Barzilay and Lapata (2008) more than grammatical role and simple clausal order information do, particularly when manual annotations of this information are not available. ...
Extractive methods for multi-document summarization are mainly governed by information overlap, coherence, and content constraints. We present an unsupervised probabilistic approach to model the hidden abstract concepts across documents as well as the correlation between these concepts, to generate topically coherent and non-redundant summaries. Based on human evaluations our models generate summaries with higher linguistic quality in terms of coherence, readability, and redundancy compared to benchmark systems. ...
In summarization, sentence ordering is conducted to enhance summary readability by accommodating text coherence. We propose a grouping-based ordering framework that integrates local and global coherence concerns. Summary sentences are grouped before ordering is applied on two levels: group-level and sentence-level. Different algorithms for grouping and ordering are discussed. The preliminary results on single-document news datasets demonstrate the advantage of our method over a widely accepted method. ...
One of the challenges in the automatic generation of referring expressions is to identify a set of domain entities coherently, that is, from the same conceptual perspective. We describe and evaluate an algorithm that generates a conceptually coherent description of a target set. The design of the algorithm is motivated by the results of psycholinguistic experiments.
This paper considers the problem of automatic assessment of local coherence. We present a novel entity-based representation of discourse which is inspired by Centering Theory and can be computed automatically from raw text. We view coherence assessment as a ranking learning problem and show that the proposed discourse representation supports the effective learning of a ranking function. Our experiments demonstrate that the induced model achieves significantly higher accuracy than a state-of-the-art coherence model. ...
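The entity-based representation can be sketched as a simplified entity grid. This is an assumption-laden toy: the full model tracks entities via coreference resolution and records grammatical roles (subject/object/other), whereas the sketch below uses plain word matching and only presence ('X') versus absence ('-'). The transition distribution it produces is the kind of feature vector fed to the ranking learner.

```python
from collections import Counter

def entity_grid(sentences, entities):
    """Simplified entity grid: one row per sentence, one column per
    entity; 'X' if the entity word appears in the sentence, '-' otherwise.
    (The full model uses coreference and grammatical roles; plain word
    matching here is a deliberate simplification.)"""
    return [["X" if e in s.lower().split() else "-" for e in entities]
            for s in sentences]

def transition_profile(grid):
    """Distribution over length-2 column transitions ('XX', 'X-', '-X',
    '--'), read down each entity's column of the grid."""
    counts = Counter(a + b
                     for col in zip(*grid)
                     for a, b in zip(col, col[1:]))
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}
```

Coherent texts tend to show dense 'XX' transitions (entities carried across adjacent sentences), while permuted texts scatter mentions, which is what makes the profile discriminative for ranking.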
Our experience with a critiquing system shows that when the system detects problems with the user's performance, multiple critiques are often produced. Analysis of a corpus of actual critiques revealed that even though each individual critique is concise and coherent, the set of critiques as a whole may exhibit several problems that detract from conciseness and coherence, and consequently assimilation. Thus a text planner was needed that could integrate the text plans for individual communicative goals to produce an overall text plan representing a concise, coherent message. ...
This paper describes a method for recognizing coherence relations between clauses which are linked by te in Japanese, a translational equivalent of English and. We consider the coherence relations to be categories, each of which has a prototype structure, and we also model the relationships among them. By utilizing this organization of the relations, we can infer an appropriate relation from the semantic structures of the clauses between which that relation holds. We carried out an experiment and obtained a correct recognition ratio of 82% for the 280 sentences. ...
This paper explores the possibilities and limits of a discourse grammar applied to spontaneous speech. Most discourse grammars (e.g. SDRT, Asher, 1993; RST, Mann & Thompson, 1988) tend to be descriptive theories of written discourse which presuppose a coherent structure. This structure is the outcome of a goal directed planning process on the part of the producer. In order to obtain a better understanding of the planning process we analyse spoken discourse elicited in an experimental setting. ...
Current models of story comprehension have three major deficiencies: (1) lack of experimental support for the inference processes they involve (e.g. reliance on prediction); (2) indifference to 'kinds' of coherence (e.g. local and global); and (3) inability to find interpretations at variable depths. I propose that comprehension is driven by the need to find a representation that reaches a 'coherence threshold'. Variable inference processes are a reflection of different thresholds, and the skepticism of an individual inference process determines how thresholds are reached. ...
Research on coreference resolution and summarization has modeled the way entities are realized as concrete phrases in discourse. In particular there exist models of the noun phrase syntax used for discourse-new versus discourse-old referents, and models describing the likely distance between a pronoun and its antecedent. However, models of discourse coherence, as applied to information ordering tasks, have ignored these kinds of information.
We describe a generic framework for integrating various stochastic models of discourse coherence in a manner that takes advantage of their individual strengths. An integral part of this framework is a set of algorithms for searching and training these stochastic coherence models. We evaluate the performance of our models and algorithms and show empirically that utility-trained log-linear coherence models outperform each of the individual coherence models considered.
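The log-linear combination at the heart of that framework can be sketched in a few lines. The weights below are fixed placeholders purely for illustration; in the paper they would be utility-trained (tuned to maximize ranking performance), and the component probabilities would come from the individual coherence models.

```python
import math

def log_linear_coherence(component_probs, weights):
    """Log-linear combination of component coherence models:
    score(x) = sum_i w_i * log p_i(x). Higher means more coherent.
    Weights would be utility-trained; here they are placeholders."""
    return sum(w * math.log(p) for w, p in zip(weights, component_probs))

# Two hypothetical component models scoring the same candidate ordering
score = log_linear_coherence([0.5, 0.25], [1.0, 2.0])
```

Because the combination is a weighted sum in log space, a component with a larger weight dominates the ranking, which is what training the weights for utility exploits.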
We use a reliably annotated corpus to compare metrics of coherence based on Centering Theory with respect to their potential usefulness for text structuring in natural language generation. Previous corpus-based evaluations of the coherence of text according to Centering did not compare the coherence of the chosen text structure with that of the possible alternatives.