We analyze estimation methods for Data-Oriented Parsing, as well as the theoretical criteria used to evaluate them. We show that all current estimation methods are inconsistent in the “weight-distribution test”, and argue that these results force us to rethink both the methods proposed and the criteria used.
This paper presents a comparative study of five parameter estimation algorithms on four NLP tasks. Three of the five algorithms are well-known in the computational linguistics community: Maximum Entropy (ME) estimation with L2 regularization, the Averaged Perceptron (AP), and Boosting. We also investigate ME estimation with L1 regularization using a novel optimization algorithm, and BLasso, which is a version of Boosting with Lasso (L1) regularization. We first investigate all of our estimators on two re-ranking tasks: a parse selection task and a language model (LM) adaptation task. ...
After studying this chapter, you will be able to: understand the strategic role of cost estimation; understand the six steps of cost estimation; apply each of the cost estimation methods (the high-low method and regression analysis); and explain the implementation issues of the cost estimation methods.
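The high-low method mentioned above splits a mixed cost into fixed and variable components using only the highest- and lowest-activity observations. A minimal sketch, with hypothetical cost data:

```python
# High-low method: estimate fixed and variable cost components from the
# highest- and lowest-activity observations. All data below are hypothetical.

def high_low(observations):
    """observations: list of (activity_units, total_cost) pairs."""
    high = max(observations, key=lambda p: p[0])
    low = min(observations, key=lambda p: p[0])
    # Variable cost per unit = change in cost / change in activity
    variable_per_unit = (high[1] - low[1]) / (high[0] - low[0])
    # Fixed cost = total cost minus the variable portion (at either point)
    fixed_cost = high[1] - variable_per_unit * high[0]
    return fixed_cost, variable_per_unit

# Hypothetical monthly (machine-hours, maintenance cost) observations
data = [(100, 3000), (140, 3600), (180, 4200), (120, 3350)]
fixed, var = high_low(data)
print(fixed, var)  # 1500.0 fixed, 15.0 per machine-hour
```

Because it uses only two data points, the high-low method is sensitive to outliers; regression analysis, which fits all observations, is generally more reliable.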
Part 2 of the book "Process Planning and Cost Estimation" covers: standard data, materials available to develop an estimate, methods of estimates, estimating procedure, constituents of a job estimate, allowances in estimation, ...and other contents.
In the preceding chapters, we introduced several different estimation principles and algorithms for independent component analysis (ICA). In this chapter, we provide an overview of these methods. First, we show that all these estimation principles are intimately connected, and the main choices are between cumulant-based vs. negentropy/likelihood-based estimation methods, and between one-unit vs. multi-unit methods. In other words, one must choose the nonlinearity and the decorrelation ...
A collection of international scientific research reports in chemistry, for readers interested in chemistry, on the topic: Research Article "An Entropy-Based Propagation Speed Estimation Method for Near-Field Subsurface Radar Imaging".
We present a statistical model of Japanese unknown words consisting of a set of length and spelling models classified by the character types that constitute a word. The point is quite simple: different character sets should be treated differently, and the changes between character types are very important, because Japanese script has both ideograms like Chinese (kanji) and phonograms like English (katakana). Both word segmentation accuracy and part-of-speech tagging accuracy are improved by the proposed model. ...
A collection of international scientific research reports in chemistry, for readers interested in chemistry, on the topic: Research Article "A Low-Complexity LMMSE Channel Estimation Method for OFDM-Based Cooperative Diversity Systems with Multiple Amplify-and-Forward Relays".
A collection of international scientific research reports in chemistry, for readers interested in chemistry, on the topic: "An Improved Array Steering Vector Estimation Method and Its Application in Speech Enhancement".
We compare four similarity-based estimation methods against back-off and maximum-likelihood estimation methods on a pseudo-word sense disambiguation task in which we controlled for both unigram and bigram frequency. The similarity-based methods perform up to 40% better on this particular task. We also conclude that events that occur only once in the training set have a major impact on similarity-based estimates.
Conditional random fields (Lafferty et al., 2001) are quite effective at sequence labeling tasks like shallow parsing (Sha and Pereira, 2003) and named-entity extraction (McCallum and Li, 2003). CRFs are log-linear, allowing the incorporation of arbitrary features into the model. To train on unlabeled data, we require unsupervised estimation methods for log-linear models; few exist. We describe a novel approach, contrastive estimation. We show that the new technique can be intuitively understood as exploiting implicit negative evidence and is computationally efficient. ...
Research objectives: Estimating the size of populations at high risk of HIV (people who inject drugs, female sex workers) by applying different methods in Can Tho in 2012-2013; assessing the reliability and feasibility of a number of methods for estimating the size of populations at high risk of HIV.
The main contents of chapter 5 consist of the following: Factors influencing the quality of estimates; estimating guidelines for times, costs, and resources; top-down versus bottom-up estimating; methods for estimating project times and costs; level of detail; types of costs; refining estimates; creating a database for estimating.
While the stochastic volatility (SV) generalization has been shown to improve the explanatory power over the Black-Scholes model, empirical implications of SV models on option pricing have not yet been adequately tested. The purpose of this paper is to first estimate a multivariate SV model using the efficient method of moments (EMM) technique from observations of underlying state variables, and then investigate the respective effects of stochastic interest rates, systematic volatility, and idiosyncratic volatility on option prices. ...
Regression models form the core of the discipline of econometrics. Although econometricians routinely estimate a wide variety of statistical models, using many different types of data, the vast majority of these are either regression models or close relatives of them. In this chapter, we introduce the concept of a regression model, discuss several varieties of them, and introduce the estimation method that is most commonly used with regression models, namely, least squares.
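For the simple (single-regressor) case, least squares has a closed form: the slope is the sample covariance of x and y divided by the sample variance of x. A minimal sketch with illustrative variable names and made-up data:

```python
# Ordinary least squares for the simple regression y_i = b0 + b1*x_i + u_i.
# Closed-form solution; data below are illustrative.

def ols_simple(x, y):
    n = len(x)
    x_bar = sum(x) / n
    y_bar = sum(y) / n
    # Slope: sample covariance of (x, y) over sample variance of x
    b1 = (sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y))
          / sum((xi - x_bar) ** 2 for xi in x))
    # Intercept: the fitted line passes through the sample means
    b0 = y_bar - b1 * x_bar
    return b0, b1

x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 5.0, 7.0, 9.0]   # exactly y = 1 + 2x
b0, b1 = ols_simple(x, y)
print(b0, b1)  # 1.0 2.0
```

With more than one regressor the same idea generalizes to the matrix normal equations, which is how econometrics software computes the estimates.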
Chapter 10 The Method of Maximum Likelihood
The method of moments is not the only fundamental principle of estimation, even though the estimation methods for regression models discussed up to this point (ordinary, nonlinear, and generalized least squares, instrumental variables ...
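The idea behind maximum likelihood is to choose the parameter value that makes the observed sample most probable. A toy illustration (not taken from the chapter): for Bernoulli data, the MLE of the success probability is the sample mean, and a coarse grid search over the log-likelihood recovers the same answer.

```python
# Toy maximum-likelihood example: for i.i.d. Bernoulli observations, the MLE
# of the success probability p is the sample mean. A grid search over the
# log-likelihood confirms the closed-form result. Data are made up.
import math

def bernoulli_loglik(p, data):
    # Sum of log P(x_i | p) over the sample
    return sum(math.log(p) if x == 1 else math.log(1 - p) for x in data)

data = [1, 0, 1, 1, 0, 1, 1, 0]            # 5 successes in 8 trials
grid = [i / 1000 for i in range(1, 1000)]  # candidate values of p in (0, 1)
p_hat = max(grid, key=lambda p: bernoulli_loglik(p, data))

closed_form = sum(data) / len(data)        # sample mean = 0.625
print(p_hat, closed_form)  # 0.625 0.625
```

In realistic models the likelihood rarely has a closed-form maximizer, so it is maximized numerically, but the principle is the same as in this sketch.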
Texture, nutritive values and volatile compounds of Lentinula edodes, Pleurotus ostreatus and Pleurotus sajor-caju mushrooms were determined. The volatile compounds were identified by gas chromatography–mass spectrometry (GC–MS) with library-catalogue comparison. Neither regular increases nor decreases were observed in the texture, moisture, ash and protein values of L. edodes.
The subspecialty of population pharmacokinetics was introduced into clinical pharmacology/pharmacy in the late 1970s as a method for analyzing observational data collected during patient drug therapy in order to estimate patient-based pharmacokinetic parameters. It later became the basis for dosage individualization and rational pharmacotherapy. The population pharmacokinetics method (i.e., the population approach) was later extended to the characterization of the relationship between pharmacokinetics and pharmacodynamics, and into the discipline of ...