
The probability models

Viewing 1-20 of 447 results for "The probability models"
  • Lecture "Advanced Econometrics (Part II) - Chapter 3: Discrete choice analysis - Binary outcome models" presents the following content: discrete choice models, basic types of discrete values, the probability models, estimation and inference in binary choice models, and binary choice models for panel data.

    pdf18p nghe123 06-05-2016 37 4   Download
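As a quick illustration of the binary-outcome probability models the lecture covers, here is a minimal logit sketch; the coefficient values are made up for the example and are not from the lecture.

```python
import math

def logit_prob(x, beta):
    """P(y = 1 | x) under a logit binary-choice model: 1 / (1 + exp(-x'beta))."""
    z = sum(xi * bi for xi, bi in zip(x, beta))
    return 1.0 / (1.0 + math.exp(-z))

# Illustrative only: intercept -1.0 and slope 0.8 for a single regressor.
p = logit_prob([1.0, 2.5], [-1.0, 0.8])   # about 0.731
```

The same structure gives a probit model if the logistic link is replaced by the standard normal CDF.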

  • Word alignment plays a crucial role in statistical machine translation. Word-aligned corpora have been found to be an excellent source of translation-related knowledge. We present a statistical model for computing the probability of an alignment given a sentence pair. This model allows easy integration of context-specific features. Our experiments show that this model can be an effective tool for improving an existing word alignment.

    pdf8p bunbo_1 17-04-2013 28 1   Download

  • Chapter 4: Bayes Classifier presents the naïve Bayes probabilistic model, constructing a classifier from the probability model, an application of the naïve Bayes classifier, and Bayesian networks.

    ppt27p cocacola_10 08-12-2015 35 1   Download
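To make the "constructing a classifier from the probability model" step concrete, here is a small sketch of a multinomial naïve Bayes classifier with add-alpha smoothing; the function names and toy data are invented for illustration.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """docs: list of (words, label) pairs. Returns class priors (as counts)
    and per-class word counts."""
    priors = Counter(label for _, label in docs)
    counts = defaultdict(Counter)
    for words, label in docs:
        counts[label].update(words)
    return priors, counts

def classify_nb(words, priors, counts, vocab_size, alpha=1.0):
    """argmax_c P(c) * prod_w P(w | c), computed in log space with
    add-alpha smoothing on the word probabilities."""
    total_docs = sum(priors.values())
    best_label, best_lp = None, float("-inf")
    for label in priors:
        lp = math.log(priors[label] / total_docs)
        denom = sum(counts[label].values()) + alpha * vocab_size
        for w in words:
            lp += math.log((counts[label][w] + alpha) / denom)
        if lp > best_lp:
            best_label, best_lp = label, lp
    return best_label
```

Working in log space avoids the numerical underflow that multiplying many small probabilities would cause.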

  • In this paper, we consider sequential point estimation of the probability of zero in the Poisson distribution. Second-order approximations to the expected sample size and the risk of the sequential procedure are derived as the cost per observation tends to zero. Finally, a simulation study is given.

    pdf13p tuongvidanh 06-01-2019 12 0   Download
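For a Poisson variable, P(X = 0) = exp(-lambda), so a natural plug-in estimator replaces lambda with the sample mean. The sketch below shows only this fixed-sample plug-in step, not the paper's sequential stopping rule.

```python
import math

def estimate_p_zero(sample):
    """Plug-in estimate of P(X = 0) = exp(-lambda) for Poisson data:
    substitute the sample mean for lambda."""
    lam_hat = sum(sample) / len(sample)
    return math.exp(-lam_hat)
```

A sequential procedure would keep drawing observations until a data-dependent stopping criterion balances sampling cost against estimation risk.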

  • This paper proposes a novel method for learning probability models of subcategorization preference of verbs. We consider the issues of case dependencies and noun class generalization in a uniform way by employing the maximum entropy modeling method. We also propose a new model selection algorithm which starts from the most general model and gradually examines more specific models.

    pdf7p bunrieu_1 18-04-2013 41 5   Download

  • This paper presents an algorithm for learning the probabilities of optional phonological rules from corpora. The algorithm is based on using a speech recognition system to discover the surface pronunciations of words in speech corpora; using an automatic system obviates expensive phonetic labeling by hand. We describe the details of our algorithm and show the probabilities the system has learned for ten common phonological rules which model reductions and coarticulation effects.

    pdf8p bunmoc_1 20-04-2013 43 4   Download
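The counting step at the heart of such an approach can be sketched as follows; the g-dropping example and both predicates are invented for illustration, and the real system derives pronunciations with a speech recognizer rather than from a hand-made list.

```python
def rule_probability(pronunciations, could_apply, applies):
    """Estimate the probability of an optional rule as
    (# contexts where it applied) / (# contexts where it could apply)."""
    opportunities = [p for p in pronunciations if could_apply(p)]
    applications = [p for p in opportunities if applies(p)]
    return len(applications) / len(opportunities) if opportunities else 0.0

# Toy example: "g-dropping" in words ending in -ing.
observed = ["goin", "runnin", "going", "cat"]
could = lambda p: p.endswith("ing") or p.endswith("in")
did = lambda p: p.endswith("in") and not p.endswith("ing")
prob = rule_probability(observed, could, did)   # applied in 2 of 3 eligible contexts
```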

  • (BQ) Part 1 of the book "Essential Statistics - Exploring the World through Data" has contents: introduction to data; picturing variation with graphs; numerical summaries of center and variation; regression analysis - exploring associations between variables; modeling variation with probability; modeling random events - the normal and binomial models.

    pdf324p bautroibinhyen27 11-05-2017 30 4   Download

  • Lots of Chinese characters are very productive in that they can form many structured words either as prefixes or as suffixes. Previous research in Chinese word segmentation mainly focused on identifying only the word boundaries without considering the rich internal structures of many words. In this paper we argue that this is unsatisfying in many ways, both practically and theoretically. Instead, we propose that word structures should be recovered in morphological analysis.

    pdf10p hongdo_1 12-04-2013 37 3   Download

  • The language model (LM) is a critical component in most statistical machine translation (SMT) systems, serving to establish a probability distribution over the hypothesis space. Most SMT systems use a static LM, independent of the source language input. While previous work has shown that adapting LMs based on the input improves SMT performance, none of the techniques has thus far been shown to be feasible for on-line systems.

    pdf5p hongdo_1 12-04-2013 33 3   Download

  • We investigate a number of simple methods for improving the word-alignment accuracy of IBM Model 1. We demonstrate reduction in alignment error rate of approximately 30% resulting from (1) giving extra weight to the probability of alignment to the null word, (2) smoothing probability estimates for rare words, and (3) using a simple heuristic estimation method to initialize, or replace, EM training of model parameters.

    pdf8p bunbo_1 17-04-2013 25 3   Download
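For readers unfamiliar with the baseline, a compact EM sketch of IBM Model 1 (including the null word whose weight the paper adjusts) looks roughly like this; it is a plain baseline illustration and contains none of the paper's three improvements.

```python
from collections import defaultdict

def model1_em(bitext, iterations=10):
    """Minimal IBM Model 1 EM. bitext: list of (src_words, tgt_words) pairs.
    A null token is prepended to each source sentence. Returns the lexical
    translation table t[(tgt_word, src_word)] = P(tgt_word | src_word)."""
    src_vocab = {w for s, _ in bitext for w in s} | {"<NULL>"}
    tgt_vocab = {w for _, t in bitext for w in t}
    # Uniform initialization of the translation table.
    t = {(f, e): 1.0 / len(tgt_vocab) for e in src_vocab for f in tgt_vocab}
    for _ in range(iterations):
        count = defaultdict(float)
        total = defaultdict(float)
        for src, tgt in bitext:                   # E-step
            src = ["<NULL>"] + src
            for f in tgt:
                z = sum(t[(f, e)] for e in src)   # normalize over alignments
                for e in src:
                    c = t[(f, e)] / z             # expected alignment count
                    count[(f, e)] += c
                    total[e] += c
        # M-step: re-estimate translation probabilities from expected counts.
        t = {(f, e): count[(f, e)] / total[e] for (f, e) in count}
    return t
```

On a toy parallel corpus, EM concentrates probability on consistently co-occurring pairs, which is exactly the behavior the paper's smoothing and initialization heuristics aim to improve for rare words.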

  • We tackle the previously unaddressed problem of unsupervised determination of the optimal morphological segmentation for statistical machine translation (SMT) and propose a segmentation metric that takes into account both sides of the SMT training corpus. We formulate the objective function as the posterior probability of the training corpus according to a generative segmentation-translation model. We describe how the IBM Model-1 translation likelihood can be computed incrementally between adjacent segmentation states for efficient computation. ...

    pdf6p hongdo_1 12-04-2013 35 2   Download

  • We propose a language model based on a precise, linguistically motivated grammar (a hand-crafted Head-driven Phrase Structure Grammar) and a statistical model estimating the probability of a parse tree. The language model is applied by means of an N-best rescoring step, which allows us to directly measure the performance gains relative to the baseline system without rescoring. To demonstrate that our approach is feasible and beneficial for non-trivial broad-domain speech recognition tasks, we applied it to a simplified German broadcast-news transcription task.

    pdf8p hongphan_1 15-04-2013 28 2   Download

  • It has previously been assumed in the psycholinguistic literature that finite-state models of language are crucially limited in their explanatory power by the locality of the probability distribution and the narrow scope of information used by the model. We show that a simple computational model (a bigram part-of-speech tagger based on the design used by Corley and Crocker (2000)) makes correct predictions on processing difficulty observed in a wide range of empirical sentence processing data. ...

    pdf8p hongvang_1 16-04-2013 34 2   Download
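The probability such a bigram tagger assigns to a tag sequence is simply a product of transition and emission terms; a minimal sketch with made-up toy probabilities:

```python
def tag_sequence_prob(tags, words, trans, emit):
    """Joint probability of a tag/word sequence under a bigram (first-order
    HMM) tagger: prod_i P(tag_i | tag_{i-1}) * P(word_i | tag_i)."""
    p = 1.0
    prev = "<s>"   # sentence-start state
    for tag, word in zip(tags, words):
        p *= trans[(prev, tag)] * emit[(tag, word)]
        prev = tag
    return p

# Hand-set toy probabilities, illustrative only.
trans = {("<s>", "D"): 0.9, ("D", "N"): 0.8}
emit = {("D", "the"): 0.6, ("N", "dog"): 0.5}
p = tag_sequence_prob(["D", "N"], ["the", "dog"], trans, emit)
```

The point of the cited study is that even a probability model this local can predict human processing difficulty surprisingly well.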

  • We propose a distribution-based pruning of n-gram backoff language models. Instead of the conventional approach of pruning n-grams that are infrequent in training data, we prune n-grams that are likely to be infrequent in a new document. Our method is based on the n-gram distribution, i.e. the probability that an n-gram occurs in a new document. Experimental results show that our method performed 7-9% (word perplexity reduction) better than conventional cutoff methods.

    pdf7p bunrieu_1 18-04-2013 30 2   Download
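The pruning criterion can be sketched abstractly as follows, using a plain document-frequency ratio as a stand-in for the paper's fitted distribution-based estimate of how likely an n-gram is to occur in a new document:

```python
def prune_by_doc_probability(ngram_doc_freq, num_docs, threshold):
    """Keep only n-grams whose estimated probability of occurring in a new
    document meets the threshold. The document-frequency ratio used here is
    a simplified stand-in for a fitted n-gram distribution model."""
    return {g for g, df in ngram_doc_freq.items() if df / num_docs >= threshold}
```

Contrast this with a conventional count cutoff, which would keep any n-gram frequent in training data even if that frequency comes from a single atypical document.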

  • This paper compares two different ways of estimating statistical language models. Many statistical NLP tagging and parsing models are estimated by maximizing the (joint) likelihood of the fully-observed training data. However, since these applications only require the conditional probability distributions, these distributions can in principle be learnt by maximizing the conditional likelihood of the training data.

    pdf8p bunrieu_1 18-04-2013 28 2   Download

  • Language modeling associates a sequence of words with an a priori probability, and is a key part of many natural language applications such as speech recognition and statistical machine translation. In this paper, we present a language model based on a kind of simple dependency grammar. The grammar consists of head-dependent relations between words and can be learned automatically from a raw corpus using the reestimation algorithm, which is also introduced in this paper. Our experiments show that the proposed model performs better than n-gram models at 11% to 11.

    pdf5p bunrieu_1 18-04-2013 31 2   Download

  • Language models for speech recognition typically use a probability model of the form Pr(a_n | a_1, a_2, ..., a_{n-1}). Stochastic grammars, on the other hand, are typically used to assign structure to utterances. A language model of the above form is constructed from such grammars by computing the prefix probability Σ_{w ∈ Σ*} Pr(a_1...a_n w), where w represents all possible terminations of the prefix a_1...a_n. The main result in this paper is an algorithm to compute such prefix probabilities given a stochastic Tree Adjoining Grammar (TAG). The algorithm achieves the required computation in O(n^6) time. ...

    pdf7p bunrieu_1 18-04-2013 32 2   Download

  • Distributional similarity is a useful notion in estimating the probabilities of rare joint events. It has been employed both to cluster events according to their distributions, and to directly compute averages of estimates for distributional neighbors of a target event. Here, we examine the tradeoffs between model size and prediction accuracy for cluster-based and nearest neighbors distributional models of unseen events.

    pdf8p bunrieu_1 18-04-2013 32 2   Download

  • In this project, traffic is simulated according to the cellular automaton of the Nagel-Schreckenberg model (1992) with different boundary conditions. The sudden occurrence of traffic jams is successfully realised, and boundary-induced phases and phase transitions are observed in the Asymmetric Simple Exclusion Process. The extension to the Velocity Dependent Randomization model leads to metastable high-flow states and hysteresis of the flow. The impact of speed limits on the probability of the formation of traffic jams is investigated.

    pdf10p nguyenhaisu 07-08-2015 28 2   Download
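One update step of the Nagel-Schreckenberg automaton, here with periodic boundary conditions and illustrative parameter values, can be sketched as:

```python
import random

def nasch_step(positions, velocities, road_length, v_max=5, p_slow=0.3, rng=random):
    """One Nagel-Schreckenberg update on a ring road: accelerate, brake to
    the gap ahead, randomly slow down, then move. Cars are given in order
    of increasing position."""
    n = len(positions)
    new_v = []
    for i in range(n):
        gap = (positions[(i + 1) % n] - positions[i] - 1) % road_length
        v = min(velocities[i] + 1, v_max)       # 1. acceleration
        v = min(v, gap)                         # 2. braking
        if v > 0 and rng.random() < p_slow:     # 3. random slowdown
            v -= 1
        new_v.append(v)
    new_pos = [(positions[i] + new_v[i]) % road_length for i in range(n)]
    return new_pos, new_v
```

The random-slowdown probability p_slow is what makes jams appear spontaneously; setting it to zero yields deterministic free flow on a sparse road.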

  • In this paper, we encode topic dependencies in hierarchical multi-label Text Categorization (TC) by means of rerankers. We represent reranking hypotheses with several innovative kernels considering both the structure of the hierarchy and the probability of nodes. Additionally, to better investigate the role of category relationships, we consider two interesting cases: (i) traditional schemes in which node-fathers include all the documents of their child-categories; and (ii) more general schemes, in which children can include documents not belonging to their fathers. ...

    pdf9p nghetay_1 07-04-2013 31 1   Download
