Shiitake mushroom exhibits several therapeutic actions, such as antioxidant and antimicrobial properties, attributable to the diversity of its components. In the present work, extracts from shiitake mushroom were obtained using different extraction techniques: high-pressure operations and low-pressure methods. The high-pressure technique was applied to obtain shiitake extracts using pure CO2 and CO2 with a co-solvent at pressures up to 30 MPa.
The applicability of many current information extraction techniques is severely limited by the need for supervised training data. We demonstrate that for certain field-structured extraction tasks, such as classified advertisements and bibliographic citations, small amounts of prior knowledge can be used to learn effective models in a primarily unsupervised fashion. Although hidden Markov models (HMMs) provide a suitable generative model for field-structured text, general unsupervised HMM learning fails to learn useful structure in either of our domains.
database maintained by the National Library of Medicine (NLM), which incorporates around 40,000 Health Sciences papers each month. Researchers depend on these electronic resources to keep abreast of their rapidly changing field. In order to maintain and update vital indexing references such as the Unified Medical Language System (UMLS) resources and the MeSH and SPECIALIST vocabularies, the NLM staff needs to review 400,000 highly technical papers each year.
Gas chromatography has been, and still is, one of the key analytical techniques in much of the advanced research carried out around the globe. The technique has contributed tremendously and was once the principal method for the analysis of specific compounds such as volatiles, certain pesticides, pharmaceuticals, and petroleum products. Advances in the technique have resulted in several tandem instruments that combine it with other techniques to enhance the results obtained by gas chromatography.
This paper explores techniques to take advantage of the fundamental difference in structure between hidden Markov models (HMMs) and hierarchical hidden Markov models (HHMMs). The HHMM structure allows repeated parts of the model to be merged together. A merged model exploits the recurring patterns within the hierarchy, and the clusters that exist in some sequences of observations, to increase extraction accuracy.
Natural gas has traditionally been used as a feedstock for the chemical industry, and as a fuel for process and space heating. Recent advances in exploration, drilling techniques and hydraulic fracturing have made it possible for natural gas to become available in abundance (as of 2012). As natural gas displaces traditional petroleum use in various sectors, a certain amount of disruption is likely.
Automatic key phrase extraction is fundamental to the success of many recent digital library applications and semantic information retrieval techniques, and it is a difficult yet essential problem in Vietnamese natural language processing (NLP). In this work, we propose a novel method for key phrase extraction from Vietnamese text that exploits the Vietnamese Wikipedia as an ontology and specific characteristics of the Vietnamese language in the key phrase selection stage.
Project Number: VIE36 IF03
Project Title: Moulting and growth of mud crab (Scylla paramamosain) larvae treated with extracts of neem tree (Azadirachta indica) and cell salts
Vietnamese Institution: Research Institute for Aquaculture No. 2
Australian Institution: Charles Darwin University
Commencement Date: May 2006
Completion Date: December 2006
Objectives: To evaluate the effects of neem extracts and cell salts as an antibiotic replacement on mud crab (S. paramamosain) larvae in terms of moulting, growth, and survival rate from Zoea 1 to Megalopa.
A revised and updated edition of this successful text offers new material on plastic restorations.
With increasing emphasis on restoration, as opposed to extraction, and growing public awareness of the importance of primary teeth, the first edition was one of the first books to illustrate in detail the various scientifically proven clinical techniques.
The authors are at the Leeds Dental Institute.
Sociolinguists have long argued that social context influences language use in all manner of ways, resulting in lects. This paper explores a text classification problem we will call lect modeling, an example of what has been termed computational sociolinguistics. In particular, we use machine learning techniques to identify social power relationships between members of a social network, based purely on the content of their interpersonal communication.
In my thesis, I propose to build a system that would enable extraction of social interactions from texts. To date I have defined a comprehensive set of social events and built a preliminary system that extracts social events from news articles. I plan to improve the performance of my current system by incorporating semantic information. Using domain adaptation techniques, I propose to apply my system to a wide range of genres.
Automatic opinion recognition involves a number of related tasks, such as identifying the boundaries of opinion expression, determining their polarity, and determining their intensity. Although much progress has been made in this area, existing research typically treats each of the above tasks in isolation. In this paper, we apply a hierarchical parameter sharing technique using Conditional Random Fields for fine-grained opinion analysis, jointly detecting the boundaries of opinion expressions as well as determining two of their key attributes: polarity and intensity. ...
As an alternative to requiring substantial supervised relation training data, many have explored bootstrapping relation extraction from a few seed examples. Most techniques assume that the examples are based on easily spotted anchors, e.g., names or dates. Sentences in a corpus which contain the anchors are then used to induce alternative ways of expressing the relation. We explore whether coreference can improve the learning process.
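The bootstrapping loop described here can be sketched in a few lines: seed pairs locate sentences containing both anchors, the text between the anchors becomes a pattern, and the patterns then extract new pairs. The corpus, seeds, and relation below are invented for illustration; a real system would add noise filtering and iterate.

```python
import re

# Toy corpus and seed pairs for a "capital-of" relation (invented data).
corpus = [
    "Paris is the capital of France.",
    "Tokyo is the capital of Japan.",
    "Berlin is the capital of Germany.",
    "Madrid lies on the river Manzanares.",
]
seeds = {("Paris", "France"), ("Tokyo", "Japan")}

def induce_patterns(sentences, pairs):
    """Collect the text spans linking the two anchors of each seed pair."""
    patterns = set()
    for s in sentences:
        for x, y in pairs:
            if x in s and y in s:
                between = s.split(x, 1)[1].split(y, 1)[0]
                patterns.add(between)
    return patterns

def apply_patterns(sentences, patterns):
    """Match induced patterns against the corpus to harvest new pairs."""
    found = set()
    for s in sentences:
        for p in patterns:
            m = re.match(r"(\w+)" + re.escape(p) + r"(\w+)", s)
            if m:
                found.add((m.group(1), m.group(2)))
    return found

patterns = induce_patterns(corpus, seeds)
print(apply_patterns(corpus, patterns))   # recovers (Berlin, Germany) as well
```

This sketch relies on exact string anchors; the abstract's question is whether coreference links (e.g. pronouns standing in for the anchors) can widen the set of sentences available to the induction step.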
We present Wikulu, a system focusing on supporting wiki users in their everyday tasks by means of an intelligent interface. Wikulu is implemented as an extensible architecture which transparently integrates natural language processing (NLP) techniques with wikis. It is designed to be deployed with any wiki platform, and the current prototype integrates a wide range of NLP algorithms such as keyphrase extraction, link discovery, text segmentation, summarization, and text similarity.
We apply pattern-based methods for collecting hypernym relations from the web. We compare our approach with hypernym extraction from morphological clues and from large text corpora. We show that the abundance of available data on the web enables obtaining good results with relatively unsophisticated techniques.
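A minimal sketch of the pattern-based approach, using a single "X such as Y" lexical pattern in the spirit of Hearst patterns; the sentence is an invented example, not data from the paper, and a web-scale system would use many such patterns.

```python
import re

# "X such as A, B and C" suggests hypernym(X, A), hypernym(X, B), hypernym(X, C).
PATTERN = re.compile(r"(\w+) such as ((?:\w+(?:, )?)+(?: and \w+)?)")

def extract_hypernyms(text):
    """Return (hypernym, hyponym) pairs matched by the 'such as' pattern."""
    pairs = []
    for m in PATTERN.finditer(text):
        hypernym = m.group(1)
        hyponyms = re.split(r", | and ", m.group(2))   # split the enumeration
        pairs += [(hypernym, h) for h in hyponyms if h]
    return pairs

text = "They keep animals such as cats, dogs and rabbits at the shelter."
print(extract_hypernyms(text))
```

Each individual match is noisy; the abstract's point is that aggregating matches over abundant web text makes even such unsophisticated patterns effective.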
Two trends are evident in the recent evolution of the field of information extraction: a preference for simple, often corpus-driven techniques over linguistically sophisticated ones; and a broadening of the central problem definition to include many non-traditional text domains. This development calls for information extraction systems which are as retargetable and general as possible. Here, we describe SRV, a learning architecture for information extraction which is designed for maximum generality and flexibility. ...
We describe a novel method that extracts paraphrases from a bitext, for both the source and target languages. In order to reduce the search space, we decompose the phrase table into sub-phrase-tables and construct separate clusters for source and target phrases. We convert the clusters into graphs, add smoothing/syntactic-information-carrier vertices, and compute the similarity between phrases with a random-walk-based measure, the commute time.
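The commute time mentioned here has a closed form via the graph Laplacian: for a graph with Laplacian L and pseudoinverse L+, the expected round-trip time of a random walk between nodes i and j is vol(G) * (L+_ii + L+_jj - 2 L+_ij), where vol(G) is the sum of node degrees. The adjacency matrix below is a toy graph assumed for illustration, not the paper's phrase graph.

```python
import numpy as np

# Toy 4-node graph: nodes 0 and 1 form a triangle with node 2,
# while node 3 hangs off node 2 as a pendant vertex.
A = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

def commute_time(A, i, j):
    """Commute time between nodes i and j via the Laplacian pseudoinverse."""
    degrees = A.sum(axis=1)
    L = np.diag(degrees) - A                 # graph Laplacian
    L_pinv = np.linalg.pinv(L)               # Moore-Penrose pseudoinverse
    vol = degrees.sum()                      # vol(G) = 2 * number of edges
    return vol * (L_pinv[i, i] + L_pinv[j, j] - 2 * L_pinv[i, j])

# Tightly connected nodes (0, 1) have a smaller commute time than a node
# pair separated by the pendant vertex (0, 3), so small commute time acts
# as a similarity signal between phrases placed in such a graph.
print(commute_time(A, 0, 1), commute_time(A, 0, 3))
```

Commute time equals vol(G) times the effective resistance between the nodes, which is why multiple short paths between two phrases pull their commute time down.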
Ethanolic and aqueous (hot and cold) extracts of the fruit pulp, stem bark and leaves of Tamarindus indica were evaluated for antibacterial activity, in vitro, against 13 Gram-negative and 5 Gram-positive bacterial strains, using agar well diffusion and macro-broth dilution techniques simultaneously.
Online forum discussions often contain vast numbers of questions that are the focus of discussion. Extracting contexts and answers together with the questions yields not only a coherent forum summary but also a valuable QA knowledge base. In this paper, we propose a general framework based on Conditional Random Fields (CRFs) to detect the contexts and answers of questions in forum threads. We extend the basic framework with Skip-chain CRFs and 2D CRFs to better accommodate the features of forums and improve performance.
We describe an application that generates web pages for research institutions by summarising terms extracted from individual researchers’ publication titles. Our online demo covers all researchers and research groups in the Computer Laboratory, University of Cambridge. We also present a novel visualisation interface for browsing collaborations.