The first high-level programming languages were designed during the 1950s. Ever since then, programming languages have been a fascinating and productive area of study. Programmers endlessly debate the relative merits of their favorite programming languages, sometimes with almost religious zeal. On a more academic level, computer scientists search for ways to design programming languages that combine expressive power with simplicity and efficiency.
This book is about data analysis and the programming language called R. R is rapidly becoming the de facto standard among professionals, and is used in every conceivable discipline, from science and medicine to business and engineering.
R is more than just a computer program; it is a statistical programming environment and language. R
is free and open source and is therefore available to everyone with a computer. It is very powerful and
flexible, but it is also unlike most of the computer programs you are likely used to.
As in many other disciplines, research methodology in language program evaluation is classified into different paradigms by different scholars. No matter what classification each researcher follows, research in language program evaluation can be conducted according to two general approaches: positivistic/quantitative and naturalistic/qualitative.
In the following we give an account of a computer program for the translation of natural languages. The program has the following features: (1) it is adaptable to the translation of any two natural languages, not just to some particular pair; (2) it is a self-modifying program—that is, given the information that it has produced an incorrect translation, together with the translation which it should have produced according to the linguistic judgment of an operator, it will modify itself so as to eliminate the cause of the incorrect translation....
This paper presents an unsupervised topic identification method integrating linguistic and visual information based on Hidden Markov Models (HMMs). We employ HMMs for topic identification, wherein a state corresponds to a topic and various features including linguistic, visual and audio information are observed. Our experiments on two kinds of cooking TV programs show the effectiveness of our proposed method.
This paper discusses the task of formulating a model of linguistic performance and proposes an approach toward this goal that is oriented toward an embodiment of the model as a digital-computer program. The methodology of current linguistic theory is criticized for several of its features that render it inapplicable to a realistic model of performance, and remedies for these deficiencies are proposed.
The paper describes a novel computational tool for multiple concept learning. Unlike previous approaches, whose major goal is prediction on unseen instances rather than the legibility of the output, our MPD (Maximally Parsimonious Discrimination) program emphasizes the conciseness and intelligibility of the resultant class descriptions, using three intuitive simplicity criteria to this end. We illustrate MPD with applications in componential analysis (in lexicology and phonology), language typology, and speech pathology. ...
In what sense is a grammar the union of its rules? This paper adapts the notion of composition, well developed in the context of programming languages, to the domain of linguistic formalisms. We study alternative definitions for the semantics of such formalisms, suggesting a denotational semantics that we show to be compositional and fully-abstract. This facilitates a clear, mathematically sound way for defining grammar modularity.
This paper examines efficient predictive broad-coverage parsing without dynamic programming. In contrast to bottom-up methods, depth-first top-down parsing produces partial parses that are fully connected trees spanning the entire left context, from which any kind of non-local dependency or partial semantic interpretation can in principle be read. We contrast two predictive parsing approaches, top-down and left-corner parsing, and find both to be viable.
[Mechanical Translation, vol.3, no.3, December 1956; pp. 81-88]
M. A. K. Halliday, Cambridge Language Research Unit, Cambridge, England
The grammar and lexis of a language exhibit a high degree of internal determination, affecting all utterances whether or not these are translated from another language.
A notational system for use in writing translation routines and related programs is described. The system is specially designed to be convenient for the linguist so that he can do his own programming. Programs in this notation can be converted into computer programs automatically by the computer.
The ability to compress sentences while preserving their grammaticality and most of their meaning has recently received much attention. Our work views sentence compression as an optimisation problem. We develop an integer programming formulation and infer globally optimal compressions in the face of linguistically motivated constraints. We show that such a formulation allows for relatively simple and knowledge-lean compression models that do not require parallel corpora or large-scale resources. The proposed approach yields results comparable to, and in some cases superior to, the state of the art.
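The general shape of such a formulation can be sketched as a 0-1 optimisation: one binary decision per word, an objective to maximise, and hard linguistic constraints. The toy scores and constraints below are invented for illustration and solved by brute force; the paper's actual objective, constraint set, and ILP solver differ.

```python
from itertools import product

def compress(words, score, constraints, min_len=2):
    """Exhaustively solve a tiny 0-1 compression problem:
    pick x[i] in {0, 1} per word, maximising the total score
    subject to hard constraints (predicates over the 0/1 vector)."""
    best, best_val = None, float("-inf")
    for x in product([0, 1], repeat=len(words)):
        if sum(x) < min_len:
            continue
        if not all(c(x) for c in constraints):
            continue
        val = sum(score[i] * x[i] for i in range(len(words)))
        if val > best_val:
            best, best_val = x, val
    return [w for w, keep in zip(words, best) if keep]

words = ["the", "very", "old", "dog", "slept", "soundly"]
score = [0.1, -0.5, 0.2, 1.0, 1.0, -0.2]   # hypothetical relevance scores
constraints = [
    lambda x: x[4] == 1,                   # keep the main verb
    lambda x: x[2] <= x[3],                # keep "old" only if "dog" is kept
    lambda x: x[0] <= x[3],                # keep "the" only if "dog" is kept
]
print(compress(words, score, constraints))
# → ['the', 'old', 'dog', 'slept']
```

A real formulation would hand the same variables, objective, and linear constraints to an ILP solver rather than enumerating all 2^n subsets; the enumeration here just makes the optimisation explicit.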
We present a new approach to HPSG processing: compiling HPSG grammars expressed as type constraints into definite clause programs. This provides a clear and computationally useful correspondence between linguistic theories and their implementation. The compiler performs offline constraint inheritance and code optimization. As a result, we are able to process efficiently with HPSG grammars without having to hand-translate them into definite clause or phrase structure based systems.
A considerable body of accumulated knowledge about the design of languages for communicating information to computers has been derived from the subfields of programming language design and semantics. It has been the goal of the PATR group at SRI to utilize a relevant portion of this knowledge in implementing tools to facilitate communication of linguistic information to computers. The PATR-II formalism is our current computer language for encoding linguistic information.
KIPS is an automatic programming system which generates standardized business application programs through interactive natural language dialogue. KIPS models the program under discussion and the content of the user's statements as organizations of dynamic objects in the object-oriented programming sense. This paper describes the statement-model and the program-model, their use in understanding Japanese program specifications, and how they are shaped by the linguistic singularities of Japanese input sentences. ...
Our work on the processing of a special kind of linguistic information, namely temporal information, has led us to advocate the use of ... can lead to some pointless discussions between different approaches which, at an abstract level, can be shown to be equivalent; secondly, any extension or modification of the implemented model requires a different program instead of a mere adjustment at...
In this paper, we present a logic-based computational model for movement theory in Government and Binding Theory. For that purpose, we have designed a language called DISLOG. DISLOG stands for programming in logic with discontinuities and makes it possible to express, in a simple, concise and declarative way, relations or constraints between non-contiguous elements in a structure. DISLOG is also well adapted to modeling other types of linguistic phenomena, such as Quantifier Raising, involving long-distance relations or constraints. ...
Chapter 2 - Syntax. We shall see that most of the syntactic structure of modern programming languages is defined using a linguistic formalism called the context-free grammar. Other elements of syntax are outside the realm of context-free grammars and are defined by other means. A careful treatment of programming language syntax appears in Chapter 2. This chapter provides knowledge of grammars for syntax: Backus-Naur Form, derivations, and parse trees.
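As an example of the kind of grammar such a chapter works with, here is a standard arithmetic-expression fragment in BNF, together with a tiny recursive-descent recognizer (one function per nonterminal). The grammar is a textbook-style illustration chosen here, not one taken from the chapter itself.

```python
# BNF for a tiny expression language:
#   <expr>   ::= <term> { "+" <term> }
#   <term>   ::= <factor> { "*" <factor> }
#   <factor> ::= "n" | "(" <expr> ")"
# Each nonterminal becomes one function (recursive descent).

def parses(tokens):
    """Return True iff tokens is derivable from <expr>."""
    pos = 0

    def expr():
        nonlocal pos
        if not term():
            return False
        while pos < len(tokens) and tokens[pos] == "+":
            pos += 1
            if not term():
                return False
        return True

    def term():
        nonlocal pos
        if not factor():
            return False
        while pos < len(tokens) and tokens[pos] == "*":
            pos += 1
            if not factor():
                return False
        return True

    def factor():
        nonlocal pos
        if pos < len(tokens) and tokens[pos] == "n":
            pos += 1
            return True
        if pos < len(tokens) and tokens[pos] == "(":
            pos += 1
            if expr() and pos < len(tokens) and tokens[pos] == ")":
                pos += 1
                return True
        return False

    return expr() and pos == len(tokens)

print(parses(["n", "+", "n", "*", "n"]))   # True
print(parses(["(", "n", "+", "n"]))        # False: unbalanced parenthesis
```

Layering <term> under <expr> is what encodes the usual precedence of "*" over "+" directly in the grammar, so the parse tree (not extra rules) records which operator binds tighter.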
Every college student in China seems to be studying English. I see them listening to radio programs on their dormitory beds, studying the dictionary in the back of the classroom, and completing grammar exercises in the cafeteria. But still, these same students come to me and ask the same question: “Teacher . . . my spoken English is very poor. How to improve my spoken English?”
How to Make People Like You in 90 Seconds or Less is the work of a master of Neuro-Linguistic Programming whose career is teaching corporations and groups the secrets of successful face-to-face communication. Aimed at establishing rapport, the stage between meeting and communicating, How to Make People Like You focuses on the concept of synchrony. It shows how to synchronize attitude, synchronize body language, and synchronize voice tone so that you instantly and imperceptibly become someone the other person likes.