This book deals with the acceleration of EDA algorithms using hardware platforms such as FPGAs and GPUs. Widely applied CAD algorithms are evaluated and compared for potential acceleration on FPGAs and GPUs. Coverage includes discussion of the conditions under which it is preferable to use one platform over the other: when an EDA problem has a high degree of data parallelism, the GPU is typically the preferred platform, whereas when the problem is dominated by control flow, an FPGA may be preferred.
JR is a language for concurrent programming. It is an imperative language that provides explicit mechanisms for concurrency, communication, and synchronization. JR is an extension of the Java programming language with additional concurrency mechanisms based on those in the SR (Synchronizing Resources) programming language. It is suitable for writing programs for both shared- and distributed-memory applications and machines; it is, of course, also suitable for writing sequential programs.
We estimate the parameters of a phrase-based statistical machine translation system from monolingual corpora instead of a bilingual parallel corpus. We extend existing research on bilingual lexicon induction to estimate both lexical and phrasal translation probabilities for MT-scale phrase tables. We propose a novel algorithm to estimate reordering probabilities from monolingual data. We report translation results for an end-to-end translation system using these monolingual features alone.
We propose a language-independent method for the automatic extraction of transliteration pairs from parallel corpora. In contrast to previous work, our method uses no form of supervision, and does not require linguistically informed preprocessing. We conduct experiments on data sets from the NEWS 2010 shared task on transliteration mining and achieve an F-measure of up to 92%, outperforming most of the semi-supervised systems that were submitted.
This paper extends previous work on extracting parallel sentence pairs from comparable data (Munteanu and Marcu, 2005). For a given source sentence S, a maximum entropy (ME) classifier is applied to a large set of candidate target translations. A beam-search algorithm is used to abandon target sentences as non-parallel early on during classification if they fall outside the beam. This way, our novel algorithm avoids any document-level prefiltering step.
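The early-abandonment idea can be illustrated with a minimal sketch. The incremental scoring here is a stand-in: the function name `beam_filter`, the per-step feature weights, and the beam width are all hypothetical, not the paper's actual ME features or thresholds.

```python
def beam_filter(candidates, features_of, beam_width):
    """Score candidates incrementally and abandon any whose partial
    score falls outside the beam before all features are evaluated."""
    scores = {c: 0.0 for c in candidates}           # partial score per survivor
    n_steps = max(len(features_of(c)) for c in candidates)
    for step in range(n_steps):
        for c in scores:
            feats = features_of(c)
            if step < len(feats):
                scores[c] += feats[step]            # add next feature's weight
        best = max(scores.values())
        # abandon candidates more than beam_width below the current best
        scores = {c: s for c, s in scores.items() if s >= best - beam_width}
    return sorted(scores, key=scores.get, reverse=True)

# Toy run: a weak candidate is dropped after one step, a middling one after two,
# so only the strong candidate is ever fully scored.
weights = {"good": [1.0, 1.0, 1.0], "ok": [0.9, 0.5, 0.4], "bad": [0.1, 0.1, 0.1]}
survivors = beam_filter(["good", "ok", "bad"], lambda c: weights[c], 0.5)
```

The point of the sketch is the same as in the paper: pruning happens during classification, so no separate document-level prefilter is needed.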
This paper describes the design of the Gamma database machine and the techniques employed in its implementation. Gamma is a relational database machine currently operating on an Intel iPSC/2 hypercube with 32 processors and 32 disk drives. Gamma employs three key technical ideas which enable the architecture to be scaled to 100s of processors. First, all relations are horizontally partitioned across multiple disk drives enabling relations to be scanned in parallel.
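Horizontal partitioning with a parallel scan can be sketched as follows. This is a toy illustration under assumed names (`hash_partition`, `parallel_scan`, thread-based workers standing in for Gamma's processor/disk pairs), not Gamma's actual implementation.

```python
from concurrent.futures import ThreadPoolExecutor

def hash_partition(rows, key, n_disks):
    """Horizontally partition a relation: each row goes to one of
    n_disks partitions, chosen by hashing its partitioning key."""
    partitions = [[] for _ in range(n_disks)]
    for row in rows:
        partitions[hash(row[key]) % n_disks].append(row)
    return partitions

def parallel_scan(partitions, predicate):
    """Scan every partition concurrently and merge the matching rows."""
    with ThreadPoolExecutor(max_workers=len(partitions)) as pool:
        per_partition = pool.map(
            lambda part: [r for r in part if predicate(r)], partitions)
    return [row for matches in per_partition for row in matches]

# Example: split a 10-row relation across 4 "disks", then select in parallel.
rows = [{"id": i} for i in range(10)]
parts = hash_partition(rows, "id", 4)
hits = parallel_scan(parts, lambda r: r["id"] >= 6)
```

Because every partition is scanned independently, scan throughput grows with the number of disks, which is exactly the scaling property the abstract claims.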
The main goal of Grid programming is the study of programming models, tools, and methods that support the effective development of portable and high-performance algorithms and applications on Grid environments. Grid programming will require capabilities and properties beyond those of simple sequential programming or even parallel and distributed programming.
We have implemented various inner loop algorithms, and have evaluated their performance on a Pentium 4 machine. The Pentium 4 has a SIMD instruction set that supports SIMD operations using up to 128-bit registers. Other architectures have similar SIMD instruction sets that support processing of several data elements in parallel, and the techniques presented here are applicable to those architectures.
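The lane-parallel idea behind those 128-bit registers (four 32-bit values processed per instruction) can be emulated in scalar code. This is only a sketch of the execution model; the function `simd_add` and the lane width are illustrative assumptions, not the paper's inner loops.

```python
def simd_add(a, b, lanes=4):
    """Scalar emulation of a SIMD add: process `lanes` elements per step,
    the way a 128-bit register holds four 32-bit values at once."""
    out = []
    for i in range(0, len(a), lanes):
        # one "vector instruction": element-wise add over a whole lane group
        out.extend(x + y for x, y in zip(a[i:i + lanes], b[i:i + lanes]))
    return out
```

On real hardware each lane group is a single instruction, which is where the speedup over one-element-at-a-time inner loops comes from.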
The Greenplum Database parallel query optimizer (Figure 6) is responsible for converting SQL or MapReduce into a physical execution plan. It does this by using a cost-based optimization algorithm to evaluate a vast number of potential plans and select the one that it believes will lead to the most efficient query execution. Unlike a traditional query optimizer, Greenplum’s optimizer takes a global view of execution across the cluster, and factors in the cost of moving data between nodes in any candidate plan.
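Charging candidate plans for inter-node data movement can be sketched with a toy cost model. Everything here, the operator names, the per-row constants, and the functions `plan_cost` and `choose_plan`, is a hypothetical illustration, not Greenplum's actual cost model.

```python
def plan_cost(plan, rows, scan_cost_per_row=0.001, network_cost_per_row=0.01):
    """Toy cost model: a base scan cost plus a network charge for every
    step in the plan that redistributes rows between nodes."""
    cost = rows * scan_cost_per_row
    for op in plan:
        if op == "redistribute":
            cost += rows * network_cost_per_row   # moving data is expensive
    return cost

def choose_plan(plans, rows):
    """Pick the cheapest candidate plan under the toy model."""
    return min(plans, key=lambda p: plan_cost(p, rows))

# A collocated join avoids the redistribution charge, so it wins.
moved = ["scan", "redistribute", "join"]
local = ["scan", "join"]
best = choose_plan([moved, local], rows=1000)
```

The design point the sketch captures is that a per-node-optimal plan can lose once cluster-wide data movement is priced in, which is why a global view matters.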
We describe our experiments with training algorithms for tree-to-tree synchronous tree-substitution grammar (STSG) for monolingual translation tasks such as sentence compression and paraphrasing. These translation tasks are characterized by the ability to commit to parallel parse trees and by the availability of word alignments, yet by the unavailability of large-scale data, calling for a Bayesian tree-to-tree formalism.
We present a FrameNet-based semantic role labeling system for Swedish text. As training data for the system, we used an annotated corpus that we produced by transferring FrameNet annotation from the English side to the Swedish side in a parallel corpus. In addition, we describe two frame element bracketing algorithms that are suitable when no robust constituent parsers are available. We evaluated the system on a part of the FrameNet example corpus that we translated manually, and obtained an accuracy score of 0.
In a language generation system, a content planner embodies one or more “plans” that are usually hand-crafted, sometimes through manual analysis of target text. In this paper, we present a system that we developed to automatically learn elements of a plan and the ordering constraints among them. As training data, we use semantically annotated transcripts of domain experts performing the task our system is designed to mimic.