It is a programming paradigm proposed in functional programming languages such as Lisp and ML. One of the most prominent features of functional programming languages is the higher-order function. A higher-order function is a function that accepts another function as its parameter.
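The idea can be sketched in Python (an illustrative example; the function names are hypothetical, not from the original text):

```python
def apply_twice(f, x):
    """A higher-order function: it accepts another function f as a parameter."""
    return f(f(x))

def increment(n):
    return n + 1

# apply_twice receives increment as an argument and calls it twice.
print(apply_twice(increment, 3))  # -> 5
```

Here `apply_twice` never needs to know what `f` does; any single-argument function can be passed in, which is the defining property of a higher-order function.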
We show that for almost every frequency α ∈ R\Q, for every C^ω potential v : R/Z → R, and for almost every energy E the corresponding quasiperiodic Schrödinger cocycle is either reducible or nonuniformly hyperbolic. This result gives very good control on the absolutely continuous part of the spectrum of the corresponding quasiperiodic Schrödinger operator, and allows us to complete the proof of the Aubry-André conjecture on the measure of the spectrum of the Almost Mathieu Operator. ...
This document is one of several white papers that summarize readily available information on control techniques and measures to mitigate greenhouse gas (GHG) emissions from specific industrial sectors.
Compression of executable code in embedded microprocessor systems, used in the past mainly to reduce the memory footprint of embedded software, is gaining interest for the potential reduction in memory bus traffic and power consumption. We propose three new schemes for code compression, based on the concepts of static (using the static representation of the executable) and dynamic (using program execution traces) entropy and compare them with a state-of-the-art compression scheme, IBM’s CodePack.
Work can be a stressful place, wherever you earn your living, whether in an office, a factory, or a school. Some stress is good. It motivates us and makes us stronger. Too much stress is bad. It makes us irrational and it can, quite literally, kill us. Fortunately, there are specific things you can do that will help you reduce your stress at work and better cope with it.
CCGs are directly compatible with binary-branching bottom-up parsing algorithms, in particular CKY and shift-reduce algorithms. While the chart-based approach has been the dominant approach for CCG, the shift-reduce method has been little explored. In this paper, we develop a shift-reduce CCG parser using a discriminative model and beam search, and compare its strengths and weaknesses with the chart-based C&C parser. We study different errors made by the two parsers, and show that the shift-reduce parser gives competitive accuracies compared to C&C. ...
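The shift-reduce scheme for binary-branching grammars can be sketched as follows (a minimal greedy illustration only; a real CCG parser scores shift/reduce actions with a discriminative model and keeps a beam of candidate states, and the `can_reduce` predicate here is a stand-in for CCG's combinatory rules):

```python
def shift_reduce_parse(words, can_reduce):
    """Greedy shift-reduce parsing: build a binary-branching tree
    by either shifting the next word onto the stack or reducing
    the top two stack items into one node."""
    stack = []
    buffer = list(words)
    while buffer or len(stack) > 1:
        if len(stack) >= 2 and can_reduce(stack[-2], stack[-1]):
            right = stack.pop()
            left = stack.pop()
            stack.append((left, right))   # reduce: combine top two items
        elif buffer:
            stack.append(buffer.pop(0))   # shift: move next word onto stack
        else:
            break  # no action applicable; parse cannot be completed
    return stack

# With an always-true predicate, reduction is purely left-branching:
print(shift_reduce_parse(["a", "b", "c"], lambda l, r: True))
# -> [(('a', 'b'), 'c')]
```

The key property exploited by such parsers is that every decision is local to the stack top and the next input word, which is what makes deterministic or beam-search variants fast.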
Letter-to-phoneme (L2P) conversion is the process of producing a correct phoneme sequence for a word, given its letters. It is often desirable to reduce the quantity of training data — and hence human annotation — that is needed to train an L2P classifier for a new language. In this paper, we confront the challenge of building an accurate L2P classifier with a minimal amount of training data by combining several diverse techniques: context ordering, letter clustering, active learning, and phonetic L2P alignment.
Iterative bootstrapping algorithms are typically compared using a single set of handpicked seeds. However, we demonstrate that performance varies greatly depending on these seeds, and favourable seeds for one algorithm can perform very poorly with others, making comparisons unreliable. We exploit this wide variation with bagging, sampling from automatically extracted seeds to reduce semantic drift. However, semantic drift still occurs in later iterations.
This paper presents a new bottom-up chart parsing algorithm for Prolog along with a compilation procedure that reduces the amount of copying at run-time to a constant number (2) per edge. It has applications to unification-based grammars with very large partially ordered categories, in which copying is expensive, and can facilitate the use of more sophisticated indexing strategies for retrieving such categories that may otherwise be overwhelmed by the cost of such copying.
We introduce an algorithm for designing a predictive left-to-right shift-reduce non-deterministic push-down machine corresponding to an arbitrary unrestricted context-free grammar, and an algorithm for efficiently driving this machine in pseudo-parallel. The performance of the resulting parser is formally proven to be superior to that of Earley's parser (1970). The technique employed consists in constructing before run-time a parsing table that encodes a nondeterministic machine in which the predictive behavior has been compiled out. ...
Guangdong, a province of over 93 million residents, is located on the southern coast of China, bordering Hong Kong, China. As China's powerhouse for economic growth and a pioneer of reform and opening up, Guangdong has maintained an annual average GDP growth rate of 13.7 percent over the past three decades. Its historical achievements notwithstanding, Guangdong has witnessed increased inequality and regional disparity.
This paper presents an effective approach to discarding most entries of the rule table for statistical machine translation. The rule table is filtered by monolingual key phrases, which are extracted from source text using a technique based on term extraction. Experiments show that 78% of the rule table can be discarded without worsening translation performance. In most cases, our approach results in measurable improvements in BLEU score. ... a source phrase is either a flat phrase consisting of words, or a hierarchical phrase consisting of both words and variables. ...
Statistical language models should improve as the size of the n-grams increases from 3 to 5 or higher. However, the number of parameters and calculations, and the storage requirement increase very rapidly if we attempt to store all possible combinations of n-grams. To avoid these problems, the reduced n-grams’ approach previously developed by O’Boyle (1993) can be applied. A reduced n-gram language model can store an entire corpus’s phrase-history length within feasible storage limits.
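The storage blow-up motivating the reduced n-gram approach can be made concrete with a small sketch (the vocabulary size `V` and the toy corpus are illustrative assumptions, not figures from the paper):

```python
from collections import Counter

def ngram_counts(tokens, n):
    """Count the distinct n-grams actually observed in a token sequence."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

corpus = "the cat sat on the mat and the cat ran".split()
print(len(ngram_counts(corpus, 3)))  # distinct trigrams observed -> 8

# The number of *possible* n-grams grows exponentially in n,
# which is why naively storing all combinations is infeasible:
V = 20_000  # assumed vocabulary size
for n in (3, 4, 5):
    print(n, V ** n)  # 8e12 at n=3, already 3.2e21 at n=5
```

Observed n-grams are vastly fewer than the V^n possibilities, and a reduced n-gram model goes further by storing only non-redundant phrase histories.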
Recently proposed deterministic classifier-based parsers (Nivre and Scholz, 2004; Sagae and Lavie, 2005; Yamada and Matsumoto, 2003) offer attractive alternatives to generative statistical parsers. Deterministic parsers are fast, efficient, and simple to implement, but generally less accurate than optimal (or nearly optimal) statistical parsers. We present a statistical shift-reduce parser that bridges the gap between deterministic and probabilistic parsers.
Native speakers of English show definite and consistent preferences for certain readings of syntactically ambiguous sentences. A user of a natural-language-processing system would naturally expect it to reflect the same preferences. Thus, such systems must model in some way the linguistic performance as well as the linguistic competence of the native speaker. We have developed a parsing algorithm--a variant of the LALR(1) shift-reduce algorithm.
Many parsing techniques, including parameter estimation, assume the use of a packed parse forest for efficient and accurate parsing. However, they have several inherent problems deriving from the restriction of locality in the packed parse forest. Deterministic parsing is one solution that can achieve simple and fast parsing without the mechanisms of the packed parse forest by accurately choosing search paths. We propose (i) deterministic shift-reduce parsing for unification-based grammars, and (ii) best-first shift-reduce parsing with beam thresholding for unification-based grammars.
The need for reducing the project duration occurs for many reasons, such as imposed duration dates, time-to-market considerations, incentive contracts, key resource needs, high overhead costs, or simply unforeseen delays. This chapter presents a logical, formal process for assessing the implications of situations that involve shortening the project duration.