The kernel version has the following format:
linux-major.minor.patchlevel
major: the main version of the kernel
minor: marks significant changes within a major version
even minor numbers (2, 4, 6, 8): the version has been tested and released for general use; odd minor numbers denote development versions
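The versioning convention above can be sketched in a few lines. This is a minimal illustration (the helper name is made up, not part of any kernel tooling): it splits a "major.minor.patchlevel" string and classifies the minor number as stable (even) or development (odd), following the 2.x-era scheme.

```python
# Toy illustration of the 2.x kernel version scheme (hypothetical helper):
# split "major.minor.patchlevel" and classify the minor number.
def parse_kernel_version(version):
    major, minor, patchlevel = (int(part) for part in version.split("."))
    series = "stable" if minor % 2 == 0 else "development"
    return {"major": major, "minor": minor,
            "patchlevel": patchlevel, "series": series}

print(parse_kernel_version("2.4.21"))   # stable series
print(parse_kernel_version("2.5.73"))   # development series
```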
Understanding the Linux Kernel helps readers understand how Linux performs best and how
it meets the challenge of different environments. The authors introduce each topic by
explaining its importance, and show how kernel operations relate to the utilities that are
familiar to Unix programmers and users.
Linux Kernel Development details the design and implementation of the Linux kernel. The content is presented in a manner beneficial to those writing and developing kernel code, as well as to programmers seeking a better understanding of the operating system so they can become more efficient and productive in their coding.
Building Embedded Linux Systems
Linux Device Drivers
Linux in a Nutshell
Linux Pocket Guide
Running Linux
Understanding Linux Network Internals
Understanding the Linux Kernel
Writing a Kernel in C. So far, your only experience in operating-system writing might have been writing a boot loader in assembly. If you wrote it from scratch, it might have taken you several weeks (at least), and you might be wishing there were an easier way. Well, there is, particularly if you are already familiar with the C programming language. Even if you're not familiar with C (and you already know some other high-level language), it's well worth learning, because it's trivial to start coding your kernel in C. It's a matter of getting a few details correct and …
We present a fast, space-efficient and non-heuristic method for calculating the decision function of polynomial kernel classifiers for NLP applications. We apply the method to the MaltParser system, resulting in a Java parser that parses over 50 sentences per second on modest hardware without loss of accuracy (a 30× speedup over existing methods). The method's implementation is available as the open-source splitSVM Java library.
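For context, this is the standard polynomial-kernel SVM decision function that methods like splitSVM accelerate. The sketch below is a naive illustration only, not splitSVM itself; the support vectors, alphas, labels, and bias are made-up numbers.

```python
# Naive polynomial-kernel SVM decision function for illustration
# (not the splitSVM method; all model parameters here are invented).
def poly_kernel(x, z, degree=2, c=1.0):
    return (sum(xi * zi for xi, zi in zip(x, z)) + c) ** degree

def decision_function(x, support_vectors, alphas, labels, bias, degree=2):
    # f(x) = sum_i alpha_i * y_i * K(x, x_i) + b
    return sum(a * y * poly_kernel(x, sv, degree)
               for a, y, sv in zip(alphas, labels, support_vectors)) + bias

svs = [[1.0, 0.0], [0.0, 1.0]]
score = decision_function([1.0, 1.0], svs, alphas=[0.5, 0.5],
                          labels=[+1, -1], bias=0.0)
print(score)  # 0.0 by symmetry: both kernel values are (1 + 1)^2 = 4
```

The cost of this naive form is one kernel evaluation per support vector per classification, which is exactly the bottleneck the abstract's method addresses.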
We study the impact of syntactic and shallow semantic information in automatic classification of questions and answers and answer re-ranking. We define (a) new tree structures based on shallow semantics encoded in Predicate Argument Structures (PASs) and (b) new kernel functions to exploit the representational power of such structures with Support Vector Machines. Our experiments suggest that syntactic information helps tasks such as question/answer classification and that shallow semantics makes a remarkable contribution when a reliable set of PASs can be extracted, e.g. from answers.
In recent years tree kernels have been proposed for the automatic learning of natural language applications. Unfortunately, they show (a) an inherent super-linear complexity and (b) a lower accuracy than traditional attribute/value methods. In this paper, we show that tree kernels are very helpful in the processing of natural language as (a) we provide a simple algorithm to compute tree kernels in linear average running time and (b) our study on the classification properties of diverse tree kernels shows that kernel combinations always improve the traditional methods. ...
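To make the object of study concrete, here is a naive version of the classic subset-tree kernel in the style of Collins and Duffy, which counts the tree fragments two parse trees share. It is quadratic in the number of nodes; the linear-average-time algorithm the abstract describes is not reproduced here. Trees are nested tuples (label, child1, child2, ...), with leaves as bare strings.

```python
# Naive subset-tree kernel sketch (Collins & Duffy style), O(|T1|*|T2|).
# The fast average-case algorithm from the paper is NOT shown here.
def nodes(tree):
    if isinstance(tree, str):
        return []
    return [tree] + [n for child in tree[1:] for n in nodes(child)]

def production(tree):
    # The production at a node: its label plus its children's labels.
    return (tree[0],) + tuple(c if isinstance(c, str) else c[0] for c in tree[1:])

def common(t1, t2, lam=1.0):
    # C(n1, n2): weighted count of common fragments rooted at n1 and n2.
    if production(t1) != production(t2):
        return 0.0
    if all(isinstance(c, str) for c in t1[1:]):   # preterminal node
        return lam
    result = lam
    for c1, c2 in zip(t1[1:], t2[1:]):
        result *= 1.0 + common(c1, c2, lam)
    return result

def tree_kernel(t1, t2, lam=1.0):
    return sum(common(n1, n2, lam) for n1 in nodes(t1) for n2 in nodes(t2))

t1 = ("S", ("NP", ("D", "the"), ("N", "dog")), ("VP", ("V", "barks")))
t2 = ("S", ("NP", ("D", "the"), ("N", "cat")), ("VP", ("V", "barks")))
print(tree_kernel(t1, t2))  # counts the fragments the two parses share
```

The decay factor lam downweights larger fragments; with lam = 1 the kernel is a plain fragment count.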
Coreferencing entities across documents in a large corpus enables advanced document understanding tasks such as question answering. This paper presents a novel cross-document coreference approach that leverages the profiles of entities which are constructed by using information extraction tools and reconciled by using a within-document coreference module. We propose to match the profiles by using a learned ensemble distance function comprised of a suite of similarity specialists.
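The "ensemble of similarity specialists" idea can be sketched minimally: each specialist scores one aspect of two entity profiles, and a set of weights combines the scores. The specialists, profile fields, and weights below are invented for illustration; the paper's specialists and learned weights are not reproduced.

```python
# Minimal sketch of an ensemble distance over entity profiles.
# Specialists, fields, and weights are made-up examples, not the paper's.
def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def name_specialist(p1, p2):
    return jaccard(p1["name"].lower().split(), p2["name"].lower().split())

def context_specialist(p1, p2):
    return jaccard(p1["context"], p2["context"])

SPECIALISTS = [(0.6, name_specialist), (0.4, context_specialist)]

def ensemble_similarity(p1, p2):
    # Weighted vote over the specialists' scores.
    return sum(w * spec(p1, p2) for w, spec in SPECIALISTS)

p1 = {"name": "John Smith", "context": {"senator", "ohio"}}
p2 = {"name": "John A. Smith", "context": {"senator", "budget"}}
print(ensemble_similarity(p1, p2))
```

In the paper the combination weights are learned, whereas here they are hard-coded purely to show the shape of the computation.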
In this paper we propose a domain-independent text segmentation method, which consists of three components. Latent Dirichlet allocation (LDA) is employed to compute the semantic distribution of words, semantic similarity is measured by the Fisher kernel, and finally the globally best segmentation is found by dynamic programming. Experiments on Chinese data sets show the technique is effective. By introducing latent semantic information, our algorithm is robust to irregular-sized segments.
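Of the three components, the dynamic-programming step is the easiest to sketch. The version below searches over all segmentations for the one maximizing a sum of per-segment scores; the scorer here is a toy topic-purity heuristic, standing in for the paper's LDA/Fisher-kernel similarity, which is not reproduced.

```python
# Dynamic-programming search for the best segmentation. score(i, j)
# rates sentences i..j-1 as one segment; the toy scorer below replaces
# the paper's LDA/Fisher-kernel similarity.
def best_segmentation(n, score):
    # best[j] = (score of best segmentation of sentences 0..j-1, last cut)
    best = [(0.0, -1)] * (n + 1)
    for j in range(1, n + 1):
        best[j] = max((best[i][0] + score(i, j), i) for i in range(j))
    # Recover segment boundaries by walking back through the cuts.
    cuts, j = [], n
    while j > 0:
        i = best[j][1]
        cuts.append((i, j))
        j = i
    return best[n][0], cuts[::-1]

# Toy cohesion: each sentence has a topic id; a segment scores its
# majority-topic mass, minus a small penalty against over-segmentation.
topics = [0, 0, 0, 1, 1]
def score(i, j):
    seg = topics[i:j]
    purity = max(seg.count(t) for t in set(seg)) / len(seg)
    return purity * len(seg) - 0.1

total, segments = best_segmentation(len(topics), score)
print(segments)  # recovers the topic boundary: [(0, 3), (3, 5)]
```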
Syntactic knowledge is important for pronoun resolution. Traditionally, the syntactic information for pronoun resolution is represented in terms of features that have to be selected and defined heuristically. In this paper, we propose a kernel-based method that can automatically mine the syntactic information from the parse trees for pronoun resolution. Specifically, we utilize the parse trees directly as a structured feature and apply kernel functions to this feature, as well as other normal features, to learn the resolution classifier.
Kernel methods such as support vector machines (SVMs) have attracted a great deal of popularity in the machine learning and natural language processing (NLP) communities. ... count instead of explicitly combining features. By setting the polynomial kernel degree (i.e., d), different numbers of feature conjunctions can be implicitly computed. In this way, a polynomial kernel SVM is often better than a linear kernel, which does not use feature conjunctions. However, the training and testing time costs for polynomial kernel SVMs ...
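The claim that a polynomial kernel implicitly computes feature conjunctions can be checked numerically: for degree d = 2, (x · z)^2 equals a dot product in the space of all pairwise products x_i * x_j. The sketch below verifies this equivalence on made-up vectors.

```python
from itertools import combinations_with_replacement
from math import sqrt, isclose

# Degree-2 polynomial kernel (x . z)^2 equals a dot product in the space
# of all pairwise feature conjunctions x_i * x_j -- checked numerically.
def poly2(x, z):
    return sum(xi * zi for xi, zi in zip(x, z)) ** 2

def conjunctions(x):
    # Explicit degree-2 feature map: x_i * x_j for i <= j, with sqrt(2)
    # weight on the mixed terms so the dot products match exactly.
    feats = []
    for i, j in combinations_with_replacement(range(len(x)), 2):
        w = 1.0 if i == j else sqrt(2.0)
        feats.append(w * x[i] * x[j])
    return feats

x, z = [1.0, 2.0, 3.0], [0.5, -1.0, 2.0]
implicit = poly2(x, z)
explicit = sum(a * b for a, b in zip(conjunctions(x), conjunctions(z)))
print(implicit, explicit)  # equal up to floating point
assert isclose(implicit, explicit)
```

Note the asymmetry the abstract alludes to: the kernel evaluates one dot product over n features, while the explicit map builds O(n^2) conjunction features, and the gap widens with d.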
Previous research applying kernel methods to natural language parsing have focussed on proposing kernels over parse trees, which are hand-crafted based on domain knowledge and computational considerations. In this paper we propose a method for deﬁning kernels in terms of a probabilistic model of parsing. This model is then trained, so that the parameters of the probabilistic model reﬂect the generalizations in the training data. The method we propose then uses these trained parameters to deﬁne a kernel for reranking parse trees. ...
Starting from the assumption that machine translation (MT) should be based on theoretically sound grounds, we argue that, given the state of the art, the only viable solution for the designer of software tools for MT is to provide the linguists building the MT system with a generator of highly specialized, problem-oriented systems. We propose that such theory-sensitive systems be generated automatically by supplying a set of definitions to a kernel software, of which we give an informal description in the paper. We give...
Automatic detection of general relations between short texts is a complex task that cannot be carried out only relying on language models and bag-of-words. Therefore, learning methods to exploit syntax and semantics are required. In this paper, we present a new kernel for the representation of shallow semantic information along with a comprehensive study on kernel methods for the exploitation of syntactic/semantic structures for short text pair categorization.
Better representations of plot structure could greatly improve computational methods for summarizing and generating stories. Current representations lack abstraction, focusing too closely on events. We present a kernel for comparing novelistic plots at a higher level, in terms of the cast of characters they depict and the social relationships between them. Our kernel compares the characters of different novels to one another by measuring their frequency of occurrence over time and the descriptive and emotional language associated with them.
The following article summarizes the steps required to update the kernel from source.
1. Getting the kernel:
The kernel source can be downloaded from http://www.kernel.org . The current stable release is 2.4.21 and the current development release is 2.5.73. If you do not want to test the kernel's new features, you should use 2.4.21 for everyday work.
Learning for sentence re-writing is a fundamental task in natural language processing and information retrieval. In this paper, we propose a new class of kernel functions, referred to as string re-writing kernels, to address the problem. A string re-writing kernel measures the similarity between two pairs of strings, each pair representing the re-writing of a string. It can capture the lexical and structural similarity between two pairs of sentences without the need to construct syntactic trees.
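To make the "similarity between two pairs of strings" notion concrete, here is a toy stand-in (emphatically not the paper's kernel): each re-writing pair is represented by its bag of word-level edits, and two pairs are scored by the overlap of their edit bags, so pairs that perform the same re-write score highly.

```python
# Toy stand-in for a string re-writing kernel (NOT the paper's method):
# represent each re-writing pair by its word-level edits, then score two
# pairs by the Jaccard overlap of their edit sets.
def edits(source, target):
    s, t = source.split(), target.split()
    removed = frozenset("-" + w for w in s if w not in t)
    added = frozenset("+" + w for w in t if w not in s)
    return removed | added

def rewriting_similarity(pair1, pair2):
    e1, e2 = edits(*pair1), edits(*pair2)
    if not e1 | e2:
        return 1.0
    return len(e1 & e2) / len(e1 | e2)

p1 = ("the film was great", "the movie was great")
p2 = ("that film is long", "that movie is long")
print(rewriting_similarity(p1, p2))  # 1.0: both rewrite "film" -> "movie"
```

Unlike this bag-of-edits toy, the kernel in the abstract also captures structural similarity between the pairs.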