Social scientists teach that politicians favor groups that are organized over those that are not. Representation through Taxation challenges this conventional wisdom. Emphasizing that there are limits to what organized interests can credibly promise in return for favorable treatment, Gehlbach shows that politicians may instead give preference to groups – organized or not – that by their nature happen to take actions that are politically valuable.
This book is a study of Dutch mosque designs, objects of heated public debate. Until now, studies of diaspora mosque designs have largely consisted of normative architectural critiques that reject the ubiquitous 'domes and minarets' as hampering further Islamic-architectural evolution. The Architectural Representation of Islam: Muslim-Commissioned Mosque Design in The Netherlands represents a clear break with this architecture-critical narrative, and meticulously analyzes twelve design processes for Dutch mosques.
This paper proposes a novel approach for effectively utilizing unsupervised data in addition to supervised data for supervised learning. We use unsupervised data to generate informative ‘condensed feature representations’ from the original feature set used in supervised NLP systems. The main contribution of our method is that it can offer dense and low-dimensional feature spaces for NLP tasks while maintaining the state-of-the-art performance provided by the recently developed high-performance semi-supervised learning technique. ...
This note gives a new proof of the theorem, due to Ingleton and Piff, that the duals of transversal matroids are precisely the strict gammoids. Section 1 defines the relevant objects. Section 2 presents explicit representations of the families of transversal matroids and strict gammoids. Section 3 uses these representations to prove the duality of these two families.
Mapping documents into an interlingual representation can help bridge the language barrier of a cross-lingual corpus. Previous approaches use aligned documents as training data to learn an interlingual representation, making them sensitive to the domain of the training data. In this paper, we learn an interlingual representation in an unsupervised manner using only a bilingual dictionary.
Supervised sequence-labeling systems in natural language processing often suffer from data sparsity because they use word types as features in their prediction tasks. Consequently, they have difficulty estimating parameters for types which appear in the test set, but seldom (or never) appear in the training set. We demonstrate that distributional representations of word types, trained on unannotated text, can be used to improve performance on rare words. We incorporate aspects of these representations into the feature space of our sequence-labeling systems. ...
In this paper we present an ambiguity preserving translation approach which transfers ambiguous LFG f-structure representations. It is based on packed f-structure representations which are the result of potentially ambiguous utterances. If the ambiguities between source and target language can be preserved, no unpacking during transfer is necessary and the generator may produce utterances which maximally cover the underlying ambiguities.
This paper explores the relationships between a computational theory of temporal representation (as developed by James Allen) and a formal linguistic theory of tense (as developed by Norbert Hornstein) and aspect.
In this paper we deal with Named Entity Recognition (NER) on transcriptions of French broadcast data. Two aspects make the task more difficult with respect to previous NER tasks: i) the named entities annotated in this work have a tree structure, thus the task cannot be tackled as a sequence labelling task; ii) the data used are more noisy than data used for previous NER tasks. We approach the task in two steps, involving Conditional Random Fields and Probabilistic Context-Free Grammars, integrated in a single parsing algorithm.
We present explicit formulas for representations of the real diamond Lie algebra obtained from the normal polarization on K-orbits. From these we list the irreducible unitary representations of the real diamond Lie group, which coincide with the representations obtained via Fedosov deformation quantization. Using the star-product, the computations here are simpler.
If we take an existing supervised NLP system, a simple and general way to improve accuracy is to use unsupervised word representations as extra word features. We evaluate Brown clusters, Collobert and Weston (2008) embeddings, and HLBL (Mnih & Hinton, 2009) embeddings of words on both NER and chunking. We use near state-of-the-art supervised baselines, and find that each of the three word representations improves the accuracy of these baselines.
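The recipe described above — plugging unsupervised word representations into an existing supervised feature set — can be sketched as follows. This is a toy illustration, not the paper's system: the cluster assignments and feature names are invented, and in practice the Brown-cluster bit-strings would be induced from large unannotated corpora.

```python
# Toy sketch: augmenting a token's baseline feature dict with
# unsupervised word-representation features (here, Brown-cluster
# bit-string prefixes). All cluster assignments below are invented.

BROWN_CLUSTERS = {  # hypothetical cluster bit-strings
    "london": "0110",
    "paris":  "0111",
    "said":   "1010",
}

def token_features(word, clusters=BROWN_CLUSTERS, prefix_lengths=(2, 4)):
    """Baseline word-type features plus cluster-prefix features."""
    feats = {
        "word=" + word.lower(): 1.0,     # sparse word-type feature
        "is_title": float(word.istitle()),
    }
    bits = clusters.get(word.lower())
    if bits is not None:
        # Prefixes of the bit-string give features at several granularities,
        # so a rare word shares features with frequent cluster-mates.
        for k in prefix_lengths:
            feats[f"cluster[:{k}]={bits[:k]}"] = 1.0
    return feats

print(token_features("London"))
```

The key design point is that the representation features are simply added alongside the existing sparse features, so any feature-based baseline (CRF, structured perceptron, logistic regression) can use them unchanged.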
We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi-compositional fashion. It employs a systematic combination of first- and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a word-sense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task. ...
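The distinction between first- and second-order context vectors can be illustrated with a toy sketch. This is not the syntactically enriched model above — the corpus is invented and syntax is ignored — but it shows the basic construction: a first-order vector counts a word's co-occurring context words, while a second-order vector aggregates the first-order vectors of the words in a given context, one simple way to contextualize a representation.

```python
# Toy sketch (invented data): first-order vs. second-order context vectors.
from collections import Counter

SENTENCES = [
    ["the", "bank", "lends", "money"],
    ["the", "bank", "of", "the", "river"],
    ["money", "lends", "power"],
]

def first_order(word):
    """Co-occurrence counts of `word` within each sentence."""
    ctx = Counter()
    for sent in SENTENCES:
        if word in sent:
            ctx.update(w for w in sent if w != word)
    return ctx

def second_order(context_words):
    """Sum (a centroid up to scaling) of the contexts' first-order vectors."""
    total = Counter()
    for w in context_words:
        total.update(first_order(w))
    return total

print(second_order(["lends", "money"]))
```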
The main focus of this work is to investigate robust ways for generating summaries from summary representations without resorting to simple sentence extraction, aiming at more human-like summaries. This is motivated by empirical evidence from TAC 2009 data showing that human summaries contain on average more and shorter sentences than the system summaries. We report encouraging preliminary results comparable to those attained by participating systems at TAC 2009.
We present a probabilistic model extension to the Tesnière Dependency Structure (TDS) framework formulated in (Sangati and Mazza, 2009). This representation incorporates aspects from both constituency and dependency theory. In addition, it makes use of junction structures to handle coordination constructions. We test our model on parsing the English Penn WSJ treebank using a re-ranking framework.
Negation is present in all human languages and it is used to reverse the polarity of part of statements that are otherwise affirmative by default. A negated statement often carries positive implicit meaning, but to pinpoint the positive part from the negative part is rather difficult. This paper aims at thoroughly representing the semantics of negation by revealing implicit positive meaning. The proposed representation relies on focus of negation detection. For this, new annotation over PropBank and a learning algorithm are proposed. ...
This paper introduces a machine learning method based on Bayesian networks which is applied to the mapping between deep semantic representations and lexical semantic resources. A probabilistic model comprising Minimal Recursion Semantics (MRS) structures and lexicalist oriented semantic features is acquired. Lexical semantic roles enriching the MRS structures are inferred, which are useful to improve the accuracy of deep semantic parsing.
Underspecification-based algorithms for processing partially disambiguated discourse structure must cope with extremely high numbers of readings. Based on previous work on dominance graphs and weighted tree grammars, we provide the first possibility for computing an underspecified discourse description and a best discourse representation efficiently enough to process even the longest discourses in the RST Discourse Treebank.
The interpretation of temporal expressions in text is an important constituent task for many practical natural language processing tasks, including question-answering, information extraction and text summarisation. Although temporal expressions have long been studied in the research literature, it is only more recently, with the impetus provided by exercises like the ACE Program, that attention has been directed to broad-coverage, implemented systems. In this paper, we describe our approach to intermediate semantic representations in the interpretation of temporal expressions. ...
In this paper we explore the utility of the Navigation Map (NM), a graphical representation of the discourse structure. We run a user study to investigate if users perceive the NM as helpful in a tutoring spoken dialogue system. From the users’ perspective, our results show that the NM presence allows them to better identify and follow the tutoring plan and to better integrate the instruction. It was also easier for users to concentrate and to learn from the system if the NM was present. Our preliminary analysis on objective metrics further strengthens these findings. ...