Bowen’s formula relates the Hausdorff dimension of a conformal repeller to the zero of a ‘pressure’ function. We present an elementary, self-contained proof that Bowen’s formula holds for C¹ conformal repellers. We consider time-dependent conformal repellers obtained as invariant subsets for sequences of conformally expanding maps within a suitable class.
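For orientation, the formula in question can be stated compactly. The following is the standard formulation, with notation (J for the repeller, f for the expanding map, P for topological pressure) assumed here rather than taken from the abstract:

```latex
% Bowen's formula (standard formulation; notation assumed, not quoted from the abstract):
% J is the repeller of a C^1 conformal expanding map f, and P denotes topological pressure.
\[
  \dim_H(J) \;=\; s^{*}, \qquad \text{where } s^{*} \text{ is the unique root of }\;
  P\bigl(-s \log \lvert f' \rvert\bigr) \;=\; 0 .
\]
```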
ECONOMIC DEVELOPMENT IN GREENLAND: A TIME SERIES ANALYSIS OF DEPENDENCY, GROWTH, AND INSTABILITY

Column A presents the coefficient from a regression of school average scores on an index of student quality, pooling all metropolitan schools in the NELS sample and including a fixed effect for each MSA. As in the SAT data, peer effects and effectiveness sorting are together substantial, inflating the school-level background index coefficient by 90 percent relative to the coefficient of a within-school regression of individual scores on own characteristics. ...
We live in an oil-dependent world, arriving at this level of dependency in a very short space of time by treating petroleum as if it were in infinite supply. Most of us avoid thinking about what happens when oil runs out (or becomes prohibitively expensive), but The Transition Handbook shows how the inevitable and profound ...
We develop a general dynamic programming technique for the tabulation of transition-based dependency parsers, and apply it to obtain novel, polynomial-time algorithms for parsing with the arc-standard and arc-eager models. We also show how to reverse our technique to obtain new transition-based dependency parsers from existing tabular methods. Additionally, we provide a detailed discussion of the conditions under which the feature models commonly used in transition-based parsing can be integrated into our algorithms. ...
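To make the transition systems concrete, here is a minimal, illustrative sketch of the arc-standard system the abstract mentions; the names and representation are assumptions, not the paper's notation:

```python
# Minimal sketch of the arc-standard transition system for dependency parsing.
# Names (Config, shift, left_arc, right_arc) are illustrative, not from the paper.

from collections import namedtuple

# stack and buffer hold token indices; arcs is a set of (head, dependent) pairs
Config = namedtuple("Config", ["stack", "buffer", "arcs"])

def shift(c):
    """Move the first buffer token onto the stack."""
    return Config(c.stack + [c.buffer[0]], c.buffer[1:], c.arcs)

def left_arc(c):
    """Attach the second-topmost stack token to the topmost one."""
    s1, s0 = c.stack[-2], c.stack[-1]
    return Config(c.stack[:-2] + [s0], c.buffer, c.arcs | {(s0, s1)})

def right_arc(c):
    """Attach the topmost stack token to the one below it."""
    s1, s0 = c.stack[-2], c.stack[-1]
    return Config(c.stack[:-2] + [s1], c.buffer, c.arcs | {(s1, s0)})

# Example: parsing a 3-token sentence with token 0 as an artificial root.
c = Config([0], [1, 2], set())
c = shift(c)       # stack [0, 1]
c = shift(c)       # stack [0, 1, 2]
c = right_arc(c)   # adds arc 1 -> 2
c = right_arc(c)   # adds arc 0 -> 1
print(c.arcs)      # {(1, 2), (0, 1)}
```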
This paper presents a detailed study of the integration of knowledge from both dependency parses and hierarchical word ontologies into a maximum-entropy-based tagging model that simultaneously labels words with both syntax and semantics. Our findings show that information from both these sources can lead to strong improvements in overall system accuracy: dependency knowledge improved performance over all classes of word, and knowledge of the position of a word in an ontological hierarchy increased accuracy for words not seen in the training data. ...
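As an illustration of how these two knowledge sources might enter a feature-based tagger, here is a hypothetical feature-extraction sketch; the function, its inputs, and the toy ontology are all illustrative assumptions, not the paper's components:

```python
# Hypothetical sketch: combining dependency-parse context with ontological
# ancestors as features for a maximum-entropy tagger. All names and the toy
# inputs are illustrative.

def token_features(i, words, heads, hypernyms):
    """Features for token i. heads[i] is the index of i's dependency head
    (-1 for the root); hypernyms maps a word to its ancestors in an ontology."""
    feats = {f"word={words[i]}": 1.0}
    if heads[i] >= 0:
        feats[f"head_word={words[heads[i]]}"] = 1.0   # dependency knowledge
    for h in hypernyms.get(words[i], []):             # ontology knowledge: helps
        feats[f"hypernym={h}"] = 1.0                  # unseen words that share
    return feats                                      # known ancestors

words = ["the", "spaniel", "barked"]
heads = [1, 2, -1]
hypernyms = {"spaniel": ["dog", "animal"]}
print(token_features(1, words, heads, hypernyms))
```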
We consider the problem of learning context-dependent mappings from sentences to logical form. The training examples are sequences of sentences annotated with lambda-calculus meaning representations. We develop an algorithm that maintains explicit, lambda-calculus representations of salient discourse entities and uses a context-dependent analysis pipeline to recover logical forms. The method uses a hidden-variable variant of the perceptron algorithm to learn a linear model used to select the best analysis.
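To make the learning step concrete, here is a minimal sketch of a hidden-variable perceptron update of the general kind described; the search procedures and feature map are placeholders, not the paper's actual components:

```python
# Sketch of a hidden-variable perceptron update. argmax_all and argmax_correct
# stand in for unspecified search procedures: the best analysis overall, and
# the best analysis whose logical form matches the annotation. Feature vectors
# are plain dicts mapping feature names to values.

def hidden_variable_perceptron_step(w, x, gold_lf, argmax_all, argmax_correct, phi):
    """One update: if the model's best analysis yields the wrong logical form,
    move the weights toward a correct analysis and away from the prediction."""
    y_hat = argmax_all(w, x)                     # best analysis under current w
    if y_hat.logical_form != gold_lf:
        y_star = argmax_correct(w, x, gold_lf)   # best analysis matching gold
        for f, v in phi(x, y_star).items():      # promote correct features
            w[f] = w.get(f, 0.0) + v
        for f, v in phi(x, y_hat).items():       # demote predicted features
            w[f] = w.get(f, 0.0) - v
    return w
```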
Most previous studies have pursued morphological disambiguation and dependency parsing independently. Morphological taggers operate on n-grams and do not take syntactic relations into account; parsers use the “pipeline” approach, assuming that morphological information has been obtained separately. However, in morphologically rich languages there is often considerable interaction between morphology and syntax, such that neither can be disambiguated without the other.
In this paper, we present a novel approach that incorporates web-derived selectional preferences to improve statistical dependency parsing. Conventional selectional preference learning methods have usually focused on word-to-class relations, e.g., a verb selects as its subject a given nominal class.
We explore the contribution of morphological features, both lexical and inflectional, to dependency parsing of Arabic, a morphologically rich language. Using controlled experiments, we find that definiteness, person, number, gender, and the undiacritized lemma are most helpful for parsing automatically tagged input.
We propose a generative model based on Temporal Restricted Boltzmann Machines for transition-based dependency parsing. The parse tree is built incrementally using a shift-reduce parse, and an RBM is used to model each decision step. The RBM at the current time step induces latent features with the help of temporal connections to the relevant previous steps, which provide context information. Our parser achieves labeled and unlabeled attachment scores of 88.72% and 91.65% respectively, which compare well with similar previous models and the state of the art. ...
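For background, the standard quantity for scoring a visible configuration with an RBM is its free energy. The sketch below shows a plain RBM, not the paper's temporal variant, whose history connections would add step-dependent biases:

```python
# Background sketch (not the paper's model): the free energy of a plain RBM,
# the usual quantity for scoring a visible configuration v.

import numpy as np

def rbm_free_energy(v, W, b_vis, b_hid):
    """F(v) = -b_vis.v - sum_j log(1 + exp(b_hid_j + W_j . v));
    lower free energy means higher probability under the RBM."""
    hidden_in = b_hid + W @ v
    return -b_vis @ v - np.sum(np.logaddexp(0.0, hidden_in))

rng = np.random.default_rng(0)
v = rng.integers(0, 2, size=6).astype(float)   # toy visible vector
W = rng.normal(scale=0.1, size=(4, 6))         # 4 hidden units, 6 visible units
print(rbm_free_energy(v, W, np.zeros(6), np.zeros(4)))
```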
We consider a very simple, yet effective, approach to cross-language adaptation of dependency parsers. We first remove lexical items from the treebanks and map part-of-speech tags into a common tagset. We then train a language model on tag sequences in otherwise unlabeled target data and rank the labeled source data by the per-word perplexity of its tag sequences, from least similar to most similar to the target. We then train our target-language parser on the most similar data points in the source labeled data. ...
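A minimal sketch of the ranking step follows, assuming a bigram tag language model with add-one smoothing; both choices are assumptions, since the abstract does not specify the model:

```python
# Minimal sketch of the selection step: train a smoothed bigram language model
# on target-side tag sequences, then rank labeled source sentences by per-word
# perplexity (lower = more target-like). Add-one smoothing is an assumption.

import math
from collections import Counter

def train_bigram_lm(tag_seqs):
    uni, bi = Counter(), Counter()
    for seq in tag_seqs:
        seq = ["<s>"] + seq
        uni.update(seq)
        bi.update(zip(seq, seq[1:]))
    return uni, bi

def perplexity(seq, uni, bi, vocab_size):
    seq = ["<s>"] + seq
    logp = 0.0
    for prev, cur in zip(seq, seq[1:]):
        p = (bi[(prev, cur)] + 1) / (uni[prev] + vocab_size)  # add-one smoothing
        logp += math.log(p)
    return math.exp(-logp / (len(seq) - 1))

target_tags = [["DET", "NOUN", "VERB"], ["PRON", "VERB", "NOUN"]]
uni, bi = train_bigram_lm(target_tags)
V = len(uni)
source = [["DET", "NOUN", "VERB"], ["ADJ", "ADJ", "ADJ"]]
ranked = sorted(source, key=lambda s: perplexity(s, uni, bi, V))
print(ranked[0])  # the most target-like source sentence comes first
```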
This paper presents the introduction of WordNet semantic classes into a dependency parser, obtaining improvements on the full Penn Treebank for the first time. We tried different combinations of some basic semantic classes and word sense disambiguation algorithms. Our experiments show that selecting an adequate combination of semantic features on development data is key to success.
In this paper, we propose a novel method for semi-supervised learning of nonprojective log-linear dependency parsers using directly expressed linguistic prior knowledge (e.g. a noun’s parent is often a verb). Model parameters are estimated using a generalized expectation (GE) objective function that penalizes the mismatch between model predictions and linguistic expectation constraints.
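Schematically, a generalized-expectation objective of this kind can be written as follows; the notation is assumed, not quoted from the paper:

```latex
% Schematic form of a generalized-expectation objective (notation assumed):
% \ell is the likelihood term, f_k are constraint features with target
% expectations \tilde{f}_k, and \Delta penalizes the mismatch between the
% model's expectation and the target (e.g., squared error or KL divergence).
\[
  \mathcal{O}(\theta) \;=\; \ell(\theta)
  \;-\; \lambda \sum_{k} \Delta\!\Bigl( \mathbb{E}_{p_\theta}\bigl[f_k\bigr],\; \tilde{f}_k \Bigr)
\]
```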
We would like to draw attention to Hidden Markov Tree Models (HMTMs), which are, to our knowledge, still unexploited in the field of Computational Linguistics, despite the great success of Hidden Markov (Chain) Models. In dependency trees, the independence assumptions made by HMTMs correspond to the intuition of linguistic dependency. We therefore suggest using HMTMs and a tree-modified Viterbi algorithm for tasks that can be interpreted as labeling the nodes of dependency trees.
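A minimal sketch of Viterbi decoding adapted to trees, assuming parent-to-child transitions and toy probability tables; all names and inputs are illustrative:

```python
# Sketch of Viterbi on a tree-structured HMM (a "tree-modified Viterbi"):
# hidden states live on tree nodes, transitions run parent -> child, and we
# maximize bottom-up, then decode top-down. All inputs here are toy values.

def tree_viterbi(children, root, states, emit, trans, prior):
    """Return the best state assignment. emit[node][s], trans[s_par][s_child],
    and prior[s] are probabilities; children maps node -> list of child nodes."""
    best, back = {}, {}
    def up(node):
        for c in children.get(node, []):
            up(c)
        for s in states:
            score = emit[node][s]
            for c in children.get(node, []):
                # best child state given this node's state s
                bs = max(states, key=lambda t: trans[s][t] * best[(c, t)])
                score *= trans[s][bs] * best[(c, bs)]
                back[(c, s)] = bs
            best[(node, s)] = score
    up(root)
    # decode top-down from the best root state
    labels = {root: max(states, key=lambda s: prior[s] * best[(root, s)])}
    stack = [root]
    while stack:
        n = stack.pop()
        for c in children.get(n, []):
            labels[c] = back[(c, labels[n])]
            stack.append(c)
    return labels

children = {0: [1, 2]}
states = ["A", "B"]
emit = {0: {"A": 0.6, "B": 0.4}, 1: {"A": 0.9, "B": 0.1}, 2: {"A": 0.2, "B": 0.8}}
trans = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
print(tree_viterbi(children, 0, states, emit, trans, {"A": 0.5, "B": 0.5}))
```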
The paper investigates the problem of providing a formal device for the dependency approach to syntax and of linking it with a parsing model. After reviewing the basic tenets of the paradigm and the few existing mathematical results, we describe a dependency formalism that is able to deal with long-distance dependencies. Finally, we present an Earley-style parser for the formalism and discuss the (polynomial) complexity results.
Rambow, Weir and Vijay-Shanker (Rambow et al., 1995) point out the differences between TAG derivation structures and semantic or predicate-argument dependencies, and Joshi and Vijay-Shanker (Joshi and Vijay-Shanker, 1999) describe a monotonic compositional semantics based on attachment order that represents the desired dependencies of a derivation without underspecifying predicate-argument relationships at any stage.
There is increasing recognition of the fact that the entire range of dependencies that transformational grammars in their various incarnations have tried to account for can be satisfactorily captured by classes of rules that are non-transformational and at the same time highly constrained in terms of the classes of grammars and languages that they define.
Hungarian is a stereotypically morphologically rich, non-configurational language. Here, we report results on dependency parsing of Hungarian using an 80K, multi-domain, fully manually annotated corpus, the Szeged Dependency Treebank. We show that the results achieved by state-of-the-art data-driven parsers on Hungarian and English (which lies at the other end of the configurational/non-configurational spectrum) are quite similar to each other in terms of attachment scores.
In this paper we study spectral learning methods for non-deterministic split head-automata grammars, a powerful hidden-state formalism for dependency parsing. We present a learning algorithm that, like other spectral methods, is efficient and not susceptible to local minima. We show how this algorithm can be formulated as a technique for inducing hidden structure from distributions computed by forward-backward recursions.
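As background, the step common to many spectral methods is a truncated SVD of a matrix of observable statistics; the generic sketch below is not the paper's exact algorithm, only an illustration of that shared core:

```python
# Generic sketch of the core step shared by many spectral methods (not the
# paper's exact algorithm): estimate a Hankel-style matrix of prefix/suffix
# co-occurrence statistics, then take a truncated SVD to recover a
# low-dimensional hidden-state space without any local optimization.

import numpy as np

def spectral_subspace(H, n_states):
    """Truncated SVD of a Hankel-style statistics matrix H. The top
    left-singular vectors span the recovered hidden-state space."""
    U, S, Vt = np.linalg.svd(H, full_matrices=False)
    return U[:, :n_states], S[:n_states], Vt[:n_states, :]

# Toy statistics matrix with rank 2, standing in for 2 hidden states.
rng = np.random.default_rng(0)
H = rng.random((8, 2)) @ rng.random((2, 8))
U, S, Vt = spectral_subspace(H, 2)
print(S)  # the two retained (dominant) singular values
```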