This chapter presents the following content: Mathematical induction, strong induction, well-ordering, recursive definitions, structural induction, recursive algorithms, program correctness (not yet included in overheads).
This paper demonstrates that generating arguments in natural language requires planning at an abstract level, and that the appropriate abstraction cannot be captured by approaches based solely upon coherence relations. An abstraction based planning system is presented which employs operators motivated by empirical study and rhetorical maxims. These operators include a subset of traditional deductive rules of inference, argumentation theoretic rules of refutation, and inductive reasoning patterns. ...
Corpus-based grammar induction generally relies on hand-parsed training data to learn the structure of the language. Unfortunately, the cost of building large annotated corpora is prohibitively expensive. This work aims to improve the induction strategy when there are few labels in the training data. We show that the most informative linguistic constituents are the higher nodes in the parse trees, typically denoting complex noun phrases and sentential clauses. They account for only 20% of all constituents. ...
The influence of the fatty acid composition of chylomicron remnant-like particles (CRLPs) on their uptake and induction of lipid accumulation in macrophages was studied. CRLPs containing triacylglycerol enriched in saturated, monounsaturated, n−6 or n−3 polyunsaturated fatty acids derived from palm, olive, corn or fish oil, respectively, and macrophages derived from the human monocyte cell line THP-1 were used.
We present an approach to multilingual grammar induction that exploits a phylogeny-structured model of parameter drift. Our method does not require any translated texts or token-level alignments. Instead, the phylogenetic prior couples languages at the parameter level. Joint induction in the multilingual model substantially outperforms independent learning, with larger gains both from more articulated phylogenies and from increasing numbers of languages.
We consider a new subproblem of unsupervised parsing from raw text, unsupervised partial parsing—the unsupervised version of text chunking. We show that addressing this task directly, using probabilistic finite-state methods, produces better results than relying on the local predictions of a current best unsupervised parser, Seginer's (2007) CCL. These finite-state models are combined in a cascade to produce more general (full-sentence) constituent structures; doing so outperforms CCL by a wide margin in unlabeled PARSEVAL scores for English, German and Chinese. ...
This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on an ontology metric, a score indicating semantic distance, and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering by incorporating heterogeneous features. The flexible design of the framework allows further study of which features are best for the task under various conditions. ...
In this paper we describe a new technique for parsing free text: a transformational grammar is automatically learned that is capable of accurately parsing text into binary-branching syntactic trees with unlabelled nonterminals. The algorithm begins in a very naive state of knowledge about phrase structure. By repeatedly comparing the results of bracketing in the current state to the proper bracketing provided in the training corpus, the system learns a set of simple structural transformations that can be applied to reduce error.
The recently published novel integrin αIIbβ3 ectodomain crystallographic structure and NMR structures of its transmembrane/cytoplasmic segments were employed to refine previously developed molecular models. Alternative complete αIIbβ3 models were built and evaluated; their shape was compared with EM maps, and their computed hydrodynamic/conformational properties were compared with the available experimental data.
The HIV-1 encoded virus protein U (VpU) is required for efficient viral release from human host cells and for induction of CD4 degradation in the endoplasmic reticulum. The cytoplasmic domain of the membrane protein VpU (VpUcyt) is essential for the latter activity. The structure and dynamics of VpUcyt were characterized in the presence of membrane-simulating dodecylphosphatidylcholine (DPC) micelles by high-resolution liquid-state NMR spectroscopy.
We first show how a structural locality bias can improve the accuracy of state-of-the-art dependency grammar induction models trained by EM from unannotated examples (Klein and Manning, 2004). Next, by annealing the free parameter that controls this bias, we achieve further improvements. We then describe an alternative kind of structural bias, toward "broken" hypotheses consisting of partial structures over segmented sentences, and show a similar pattern of improvement.
We propose a novel algorithm for inducing semantic taxonomies. Previous algorithms for taxonomy induction have typically focused on independent classifiers for discovering new single relationships based on hand-constructed or automatically discovered textual patterns. By contrast, our algorithm flexibly incorporates evidence from multiple classifiers over heterogeneous relationships to optimize the entire structure of the taxonomy, using knowledge of a word's coordinate terms to help in determining its hypernyms, and vice versa. ...
Previous studies in data-driven dependency parsing have shown that tree transformations can improve parsing accuracy for specific parsers and data sets. We investigate to what extent this can be generalized across languages/treebanks and parsers, focusing on pseudo-projective parsing, as a way of capturing non-projective dependencies, and transformations used to facilitate parsing of coordinate structures and verb groups.
In this paper we present a methodology for extracting subcategorisation frames based on an automatic LFG f-structure annotation algorithm for the Penn-II Treebank. We extract abstract syntactic function-based subcategorisation frames (LFG semantic forms), traditional CFG category-based subcategorisation frames, as well as mixed function/category-based frames, with or without preposition information for obliques and particle information for particle verbs.
We present a generative model for the unsupervised learning of dependency structures. We also describe the multiplicative combination of this dependency model with a model of linear constituency. The product model outperforms both components on their respective evaluation metrics, giving the best published figures for unsupervised dependency parsing and unsupervised constituency parsing. We also demonstrate that the combined model works and is robust cross-linguistically, being able to exploit either attachment or distributional regularities that are salient in the data. ...
This paper presents a framework for unsupervised natural language morphology induction wherein candidate suffixes are grouped into candidate inflection classes, which are then arranged in a lattice structure. With similar candidate inflection classes placed near one another in the lattice, I propose this structure is an ideal search space in which to isolate the true inflection classes of a language. This paper discusses and motivates possible search strategies over the inflection class lattice structure. ...
In this paper we present a novel, customizable IE paradigm that takes advantage of predicate-argument structures. We also introduce a new way of automatically identifying predicate argument structures, which is central to our IE paradigm. It is based on: (1) an extended set of features; and (2) inductive decision tree learning. The experimental results prove our claim that accurate predicate-argument structures enable high quality IE results.
This topic gives an overview of the mathematical technique of proof by induction. We will describe the inductive principle, work through ten examples, examine four examples where the technique is incorrectly applied, and cover well-ordering of the natural numbers, strong induction, and geometric problems.
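As a standard illustration of the inductive principle (a textbook example, not necessarily one of the ten covered here), the closed form for the sum of the first n natural numbers can be proved as follows:

```latex
\textbf{Claim.} For all $n \ge 1$, \(\sum_{k=1}^{n} k = \frac{n(n+1)}{2}\).

\textbf{Base case.} For $n = 1$: $\sum_{k=1}^{1} k = 1 = \frac{1 \cdot 2}{2}$.

\textbf{Inductive step.} Assume the claim holds for some $n \ge 1$. Then
\[
\sum_{k=1}^{n+1} k = \frac{n(n+1)}{2} + (n+1) = \frac{(n+1)(n+2)}{2},
\]
which is exactly the claim for $n+1$. By the principle of mathematical induction,
the formula holds for all $n \ge 1$. \qed
```

Note the two obligations the technique always imposes: the base case anchors the argument, and the inductive step carries it from $n$ to $n+1$; the incorrectly applied examples typically fail on one of these two.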
This Instructor’s Manual is intended to accompany the fourth edition of Electric Machinery Fundamentals. To
make this manual easier to use, it has been made self-contained. Both the original problem statement and the
problem solution are given for each problem in the book. This structure should make it easier to copy pages from
the manual for posting after problems have been assigned.