
The Maximum Entropy Model
-
The thesis focuses on studying common supervised machine-learning models applied to the problem of classifying user opinions in text data collected from social-media channels. See the full document for details.
27 pages · tamynhan1 · 13-06-2020 · 43 views · 3 downloads
-
In the thesis, the author also selects the Maximum Entropy classifier for implementation and experimentation, and applies it in a system that automatically analyses online social-media data to support management and decision-making in education and training at Đại học Quốc gia Hà Nội (Vietnam National University, Hanoi).
8 pages · tamynhan1 · 13-06-2020 · 20 views · 1 download
-
The thesis is organised into three chapters. Chapter 1 presents the opinion classification problem and its tasks. Chapter 2 presents the maximum entropy model and algorithm for opinion classification. Chapter 3 presents the thesis's experimental evaluation results for the opinion classification problem.
32 pages · chieuwindows23 · 01-06-2013 · 149 views · 32 downloads
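As an illustration of the classifier these theses build on, here is a minimal sketch of a Maximum Entropy (multinomial logistic regression) opinion classifier. The toy texts, labels, and the scikit-learn pipeline are assumptions for illustration, not the thesis implementation.

```python
# Minimal sketch: a Maximum Entropy opinion classifier over bag-of-words
# features. scikit-learn's LogisticRegression fits the same conditional
# log-linear (MaxEnt) model; the data below is a toy placeholder.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great service, very satisfied",
         "terrible, a waste of time",
         "it was okay, nothing special"]
labels = ["positive", "negative", "neutral"]

maxent = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),       # unigram and bigram features
    LogisticRegression(max_iter=1000, C=1.0),  # L2-regularised MaxEnt
)
maxent.fit(texts, labels)
print(maxent.predict(["good product, would buy again"]))
```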
-
This paper describes algorithms which rerank the top N hypotheses from a maximum-entropy tagger, the application being the recovery of named-entity boundaries in a corpus of web data. The first approach uses a boosting algorithm for ranking problems. The second approach uses the voted perceptron algorithm. Both algorithms give comparable, significant improvements over the maximum-entropy baseline. The voted perceptron algorithm can be considerably more efficient to train, at some cost in computation on test examples.
8 pages · bunmoc_1 · 20-04-2013 · 26 views · 1 download
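A minimal sketch of the voted-perceptron reranking idea from the abstract above. It assumes each N-best hypothesis has already been mapped to a feature vector, that the oracle (best) hypothesis sits at index 0 of each list, and it approximates explicit voting by averaging weight snapshots; it is not the paper's implementation.

```python
# Sketch of a voted-perceptron reranker for N-best lists (averaged-weight
# approximation of voting). Feature extraction is assumed to have happened
# upstream; row 0 of each N-best list is taken to be the oracle hypothesis.
import numpy as np

def train_reranker(nbest_lists, dim, epochs=5):
    """nbest_lists: iterable of (N, dim) arrays; row 0 is the oracle."""
    w = np.zeros(dim)
    snapshots = []
    for _ in range(epochs):
        for feats in nbest_lists:
            pred = int(np.argmax(feats @ w))
            if pred != 0:                      # mistake: promote oracle, demote prediction
                w = w + feats[0] - feats[pred]
            snapshots.append(w.copy())
    return np.mean(snapshots, axis=0)          # averaged weights stand in for voting

def rerank(feats, w):
    return int(np.argmax(feats @ w))           # index of the highest-scoring hypothesis
```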
-
We present a framework for statistical machine translation of natural languages based on direct maximum entropy models, which contains the widely used source-channel approach as a special case. All knowledge sources are treated as feature functions, which depend on the source language sentence, the target language sentence and possible hidden variables. This approach allows a baseline machine translation system to be extended easily by adding new feature functions. We show that a baseline statistical machine translation system is significantly improved using this approach. ...
8 pages · bunmoc_1 · 20-04-2013 · 28 views · 2 downloads
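The direct maximum-entropy approach summarised above scores a candidate translation e of a source sentence f by a weighted sum of feature functions h_m(f, e) and decodes by taking the argmax, so the normalisation term never needs to be computed. A toy sketch with made-up stand-in feature functions (not the paper's translation models):

```python
# Sketch of the log-linear decision rule: score(f, e) = sum_m lambda_m * h_m(f, e);
# decode by argmax over the candidate translations e.
def loglinear_score(src, tgt, feature_fns, weights):
    return sum(w * h(src, tgt) for h, w in zip(feature_fns, weights))

def decode(src, candidates, feature_fns, weights):
    return max(candidates, key=lambda tgt: loglinear_score(src, tgt, feature_fns, weights))

# Stand-in feature functions: a length penalty and a crude lexical-overlap score.
length_penalty = lambda f, e: -abs(len(e.split()) - len(f.split()))
overlap        = lambda f, e: len(set(f.split()) & set(e.split()))

best = decode("das ist gut", ["that is good", "this well"],
              [length_penalty, overlap], [0.3, 1.0])
print(best)   # -> "that is good"
```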
-
Numerous abbreviations are used routinely throughout such texts, and identifying their meaning is critical to understanding the document. The problem is that abbreviations are highly ambiguous with respect to their meaning. For example, according to UMLS (2001), RA may stand for “rheumatoid arthritis”, “renal artery”, “right atrium”, “right atrial”, “refractory anemia”, “radioactive”, “right arm”, “rheumatic arthritis”, etc. Liu et al. (2001) show that 33% of abbreviations listed in UMLS are ambiguous. ...
8 pages · bunmoc_1 · 20-04-2013 · 37 views · 1 download
-
We describe a speedup for training conditional maximum entropy models. The algorithm is a simple variation on Generalized Iterative Scaling, but converges roughly an order of magnitude faster, depending on the number of constraints, and the way speed is measured. Rather than attempting to train all model parameters simultaneously, the algorithm trains them sequentially. The algorithm is easy to implement, typically uses only slightly more memory, and will lead to improvements for most maximum entropy problems. ...
8 pages · bunmoc_1 · 20-04-2013 · 34 views · 2 downloads
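A rough sketch of the sequential idea described above: instead of updating every parameter in one Generalized Iterative Scaling pass, parameters are updated one feature at a time with the others held fixed. Binary features and the classic GIS-style update are assumed, and the caching that makes the published algorithm fast is omitted, so this illustrates the scheme rather than reproducing the paper's algorithm.

```python
# Sequential-update sketch for a conditional MaxEnt model with binary features.
# emp[c, j]   : empirical count of feature j co-occurring with class c
# model_exp   : model's expected count of feature j per class under current weights
import numpy as np

def sequential_scaling(X, y, n_classes, iters=20):
    """X: (n, d) binary feature matrix; y: (n,) integer class labels."""
    n, d = X.shape
    w = np.zeros((n_classes, d))
    C = X.sum(axis=1).max()                           # GIS feature-sum bound
    emp = np.stack([X[y == c].sum(axis=0) for c in range(n_classes)])
    for _ in range(iters):
        for j in range(d):                            # update one feature at a time
            scores = X @ w.T
            p = np.exp(scores - scores.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)         # p[i, c] = P(c | x_i)
            model_exp = p.T @ X[:, j]
            w[:, j] += np.log((emp[:, j] + 1e-9) / (model_exp + 1e-9)) / C
    return w
```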
-
We propose a statistical dialogue analysis model to determine discourse structures as well as speech acts using maximum entropy model. The model can automatically acquire probabilistic discourse knowledge from a discourse tagged corpus to resolve ambiguities. We propose the idea of tagging discourse segment boundaries to represent the structural information of discourse. Using this representation we can effectively combine speech act analysis and discourse structure analysis in one framework.
8 pages · bunrieu_1 · 18-04-2013 · 60 views · 2 downloads
-
This paper proposes a novel method for learning probability models of subcategorization preference of verbs. We consider the issues of case dependencies and noun class generalization in a uniform way by employing the maximum entropy modeling method. We also propose a new model selection algorithm which starts from the most general model and gradually examines more specific models.
7 pages · bunrieu_1 · 18-04-2013 · 52 views · 5 downloads
-
This paper proposes a method for learning translation rules from parallel corpora. The method applies the maximum entropy principle to a probabilistic model of translation rules. First, we define feature functions which express statistical properties of this model. Next, in order to optimize the model, the system iterates the following steps: (1) it selects the feature function which maximizes the log-likelihood, and (2) adds this function to the model incrementally.
5 pages · bunrieu_1 · 18-04-2013 · 30 views · 2 downloads
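The selection loop described above can be sketched as a greedy search: at each step every remaining candidate feature is tried, and the one yielding the largest gain in training log-likelihood is added to the model. `fit_model` and `loglik` are assumed caller-supplied helpers wrapping some MaxEnt trainer; they are not a real API.

```python
# Greedy incremental feature selection by log-likelihood gain (sketch).
def greedy_feature_selection(candidates, data, fit_model, loglik, max_features=20):
    selected, best_ll = [], float("-inf")
    candidates = list(candidates)
    while candidates and len(selected) < max_features:
        # (1) find the candidate whose addition maximises training log-likelihood
        gains = {f: loglik(fit_model(selected + [f], data), data) for f in candidates}
        f_best = max(gains, key=gains.get)
        if gains[f_best] <= best_ll:          # no further improvement: stop
            break
        # (2) add it to the model incrementally
        best_ll = gains[f_best]
        selected.append(f_best)
        candidates.remove(f_best)
    return selected
```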
-
The maximum entropy framework has proved expressive and powerful for statistical language modelling, but it suffers from the computational expense of model building. The iterative scaling algorithm used for parameter estimation is computationally expensive, while the feature selection process may require estimating parameters for many candidate features many times.
7 pages · bunrieu_1 · 18-04-2013 · 22 views · 2 downloads
-
In this paper we examine how the differences in modelling between different data driven systems performing the same NLP task can be exploited to yield a higher accuracy than the best individual system. We do this by means of an experiment involving the task of morpho-syntactic wordclass tagging. Four well-known tagger generators (Hidden Markov Model, Memory-Based, Transformation Rules and Maximum Entropy) are trained on the same corpus data. After comparison, their outputs are combined using several voting strategies and second stage classifiers. ...
7 pages · bunrieu_1 · 18-04-2013 · 43 views · 5 downloads
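A minimal sketch of the simplest combination strategy mentioned above: (weighted) voting over the taggers' per-token outputs. The hard-coded tag sequences are placeholders, and the second-stage classifiers the paper also evaluates are not shown.

```python
# Combine the outputs of several taggers for the same sentence by weighted voting.
from collections import Counter

def combine_by_voting(tag_sequences, weights=None):
    """tag_sequences: one tag list per tagger, all of the same length."""
    weights = weights or [1.0] * len(tag_sequences)
    combined = []
    for token_tags in zip(*tag_sequences):        # tags proposed for one token
        votes = Counter()
        for tag, w in zip(token_tags, weights):
            votes[tag] += w
        combined.append(votes.most_common(1)[0][0])
    return combined

# Example: three hypothetical taggers labelling the same three tokens.
print(combine_by_voting([["DT", "NN", "VBZ"],
                         ["DT", "NN", "NNS"],
                         ["DT", "JJ", "VBZ"]]))   # -> ['DT', 'NN', 'VBZ']
```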
-
The major obstacle in morphological (sometimes called morpho-syntactic, or extended POS) tagging of highly inflective languages, such as Czech or Russian, is - given the resources possibly available - the tagset size. Typically, it is in the order of thousands. Our method uses an exponential probabilistic model based on automatically selected features. The parameters of the model are computed using simple estimates (which makes training much faster than when one uses Maximum Entropy) to directly minimize the error rate on training data.
8 pages · bunrieu_1 · 18-04-2013 · 39 views · 2 downloads
-
Typically, the lexicon models used in statistical machine translation systems do not include any kind of linguistic or contextual information, which often leads to problems in performing a correct word sense disambiguation. One way to deal with this problem within the statistical framework is to use maximum entropy methods. In this paper, we present how to use this type of information within a statistical machine translation system. We show that it is possible to significantly decrease training and test corpus perplexity of the translation models. ...
8 pages · bunrieu_1 · 18-04-2013 · 27 views · 1 download
-
In this paper, we propose adding long-term grammatical information to a Whole Sentence Maximum Entropy language model (WSME) in order to improve the performance of the model. The grammatical information was added to the WSME model as features obtained from a stochastic context-free grammar. Finally, experiments using a part of the Penn Treebank corpus were carried out and significant improvements were achieved.
8 pages · bunrieu_1 · 18-04-2013 · 38 views · 1 download
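For reference, the whole-sentence maximum entropy form these models assume, written here from the standard WSME formulation rather than copied from the paper: a baseline language model p_0 is reweighted by exponential feature terms, and the SCFG-derived grammatical features enter simply as additional f_i(s).

```latex
P(s) = \frac{1}{Z}\, p_0(s) \exp\Big(\sum_i \lambda_i f_i(s)\Big),
\qquad
Z = \sum_{s'} p_0(s') \exp\Big(\sum_i \lambda_i f_i(s')\Big)
```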
-
Extracting semantic relationships between entities is challenging because of a paucity of annotated data and the errors induced by entity detection modules. We employ Maximum Entropy models to combine diverse lexical, syntactic and semantic features derived from the text. Our system obtained competitive results in the Automatic Content Extraction (ACE) evaluation. Here we present our general approach and describe our ACE results.
4 pages · bunbo_1 · 17-04-2013 · 44 views · 1 download
-
We describe a statistical approach for modeling agreements and disagreements in conversational interaction. Our approach first identifies adjacency pairs using maximum entropy ranking based on a set of lexical, durational, and structural features that look both forward and backward in the discourse. We then classify utterances as agreement or disagreement using these adjacency pairs and features that represent various pragmatic influences of previous agreement or disagreement on the current utterance. ...
8 pages · bunbo_1 · 17-04-2013 · 41 views · 1 download
-
We introduce a new method for disambiguating word senses that exploits a nonlinear Kernel Principal Component Analysis (KPCA) technique to achieve accuracy superior to the best published individual models. We present empirical results demonstrating significantly better accuracy compared to the state-of-the-art achieved by either naïve Bayes or maximum entropy models, on Senseval-2 data. We also contrast against another type of kernel method, the support vector machine (SVM) model, and show that our KPCA-based model outperforms the SVM-based model. ...
8 pages · bunbo_1 · 17-04-2013 · 45 views · 1 download
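An illustrative sketch (not the paper's system) of the KPCA idea: map bag-of-words context features through a nonlinear kernel PCA and classify senses in the reduced space. The toy contexts, senses, and the scikit-learn pipeline are assumptions.

```python
# Word sense disambiguation sketch: nonlinear KPCA features + a simple classifier.
from sklearn.decomposition import KernelPCA
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer

contexts = ["interest rate on the loan", "lost interest in the project",
            "bank raised the interest rate", "no interest in sports"]
senses = ["finance", "attention", "finance", "attention"]

wsd = make_pipeline(
    CountVectorizer(),
    FunctionTransformer(lambda X: X.toarray(), accept_sparse=True),  # densify for KPCA
    KernelPCA(n_components=2, kernel="rbf"),                         # nonlinear projection
    LogisticRegression(max_iter=1000),
)
wsd.fit(contexts, senses)
print(wsd.predict(["the interest rate went up"]))
```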
-
This paper proposes a new approach for coreference resolution which uses the Bell tree to represent the search space and casts the coreference resolution problem as finding the best path from the root of the Bell tree to the leaf nodes. A Maximum Entropy model is used to rank these paths. The coreference performance on the 2002 and 2003 Automatic Content Extraction (ACE) data will be reported. We also train a coreference system using the MUC6 data and competitive results are obtained.
8 pages · bunbo_1 · 17-04-2013 · 41 views · 2 downloads
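A toy sketch of the Bell-tree search described above: mentions are processed left to right, each either linking to an existing partial entity or starting a new entity, and a small beam keeps the best-scoring partial partitions. `link_score` is a crude string-match stand-in for the Maximum Entropy ranking model, and the new-entity score is a fixed placeholder.

```python
# Beam search over the Bell tree of mention partitions (illustrative only).
import heapq

def link_score(entity, mention):
    # stand-in for the MaxEnt model: favour exact string match within the entity
    return 0.9 if mention in entity else 0.1

NEW_ENTITY_SCORE = 0.5   # fixed placeholder score for starting a new entity

def resolve(mentions, beam=5):
    paths = [(0.0, [])]                                  # (negated score, partition)
    for m in mentions:
        expanded = []
        for cost, partition in paths:
            for i, entity in enumerate(partition):       # option 1: link m to entity i
                new = [list(e) for e in partition]
                new[i].append(m)
                expanded.append((cost - link_score(entity, m), new))
            expanded.append((cost - NEW_ENTITY_SCORE,    # option 2: start a new entity
                             [list(e) for e in partition] + [[m]]))
        paths = heapq.nsmallest(beam, expanded, key=lambda x: x[0])
    return min(paths, key=lambda x: x[0])[1]

print(resolve(["Clinton", "she", "Clinton", "the president"]))
```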
-
Sentence boundary detection in speech is important for enriching speech recognition output, making it easier for humans to read and downstream modules to process. In previous work, we have developed hidden Markov model (HMM) and maximum entropy (Maxent) classifiers that integrate textual and prosodic knowledge sources for detecting sentence boundaries.
8 pages · bunbo_1 · 17-04-2013 · 38 views · 2 downloads
