Small amounts of drugs and poisons incorporated into human bodies are hidden among large amounts of biological components, such as proteins, lipids, nucleic acids, and membranes. It is not easy to detect a single target compound in such complicated matrices. Before instrumental analysis, an extraction procedure is therefore usually essential and very important.
The advancement of technologies over the past half century has been marvelous; new analytical instruments have been continually invented and improved. About 30 years ago, thin-layer chromatography (TLC) was the most widely used method for the detection and identification of drugs and poisons. Around that time, the use of GC/MS began in the field of medicine. An ideal procedure for the analysis of drugs and poisons was therefore considered to be screening by TLC, followed by final identification and quantitation by GC/MS.
Blood and urine are the common specimens for drug analysis in both antemortem and postmortem cases. Usually, urine is first used for drug screening by immunoassays; the detected drug is then chromatographically quantitated in blood. The data obtained are carefully assessed against values reported in the literature, together with clinical and postmortem findings; the judgement of poisoning and its degree is made comprehensively.
After 1965, when the paraquat herbicide went on sale, its poisoning cases increased year by year. In 1986, however, mixed products of paraquat plus diquat with lower toxicity appeared; immediately after that year, the number of cases of poisoning by paraquat (plus diquat) dropped suddenly and has decreased gradually ever since, but paraquat (plus diquat) poisoning still accounts for about 40% of the total number of pesticide poisonings.
Over the past 20 years, technological advances in molecular biology have proven invaluable to the understanding of the pathogenesis of human cancer. The application of molecular technology to the study of cancer has not only led to advances in tumor diagnosis, but has also provided markers for the assessment of prognosis and disease progression. The aim of Molecular Analysis of Cancer is to provide a comprehensive collection of the most up-to-date techniques for the detection of molecular changes in human cancer.
Software error detection is one of the most challenging problems in software engineering. Now, you can learn how to make the most of software testing by selecting test cases to maximize the probability of revealing latent errors. Software Error Detection through Testing and Analysis begins with a thorough discussion of test-case selection and a review of the concepts, notations, and principles used in the book.
Petroleum fuels are a group of hydrocarbons refined and modified from crude oil, and include more than 100 kinds of aliphatic and aromatic hydrocarbons. They are roughly classified into petroleum gas, gasoline, kerosene, light oil, heavy oil, and others. This chapter deals with the analysis of the C3–C16 hydrocarbons included in automobile gasoline, purified kerosene (No. 1 kerosene), automobile light oil for diesel engines, and liquefied petroleum (LP) gas. Petroleum oils, such as gasoline and kerosene, are frequently detected in specimens from fire cases.
Breath sounds have long been important indicators of respiratory health and disease. Acoustical monitoring of respiratory sounds has been used by researchers for various diagnostic purposes. A few decades ago, physicians relied on their hearing to detect any symptomatic signs in the respiratory sounds of their patients. In recent years, however, with the aid of computer technology and digital signal processing techniques, breath sound analysis has drawn much attention because of its diagnostic capabilities.
A powerful and widely used method for analyzing the performance behavior of parallel programs is event tracing. When an application is traced, performance-relevant events, such as entering functions or sending messages, are recorded at runtime and analyzed post mortem to identify and potentially remove performance problems. While event tracing enables the detection of performance problems at a high level of detail, growing trace-file size often constrains its scalability on large-scale systems and complicates the management, analysis, and visualization of trace data.
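The runtime-recording / post-mortem-analysis split described above can be sketched in a few lines. This is an illustrative toy, not a real tracing tool; the `traced` decorator and the trace tuple layout are invented for the sketch.

```python
import time

# Minimal sketch of event tracing: performance-relevant events (function
# enter/exit) are appended to a trace at runtime and inspected afterwards.
trace = []

def record(kind, name):
    trace.append((time.perf_counter(), kind, name))

def traced(fn):
    """Decorator that records enter/exit events around a function call."""
    def wrapper(*args, **kwargs):
        record("enter", fn.__name__)
        try:
            return fn(*args, **kwargs)
        finally:
            record("exit", fn.__name__)
    return wrapper

@traced
def work():
    sum(range(1000))

work()

# Post-mortem analysis: time spent inside each traced function.
enter_times = {name: t for t, kind, name in trace if kind == "enter"}
for t, kind, name in trace:
    if kind == "exit":
        print(f"{name}: {t - enter_times[name]:.6f} s")
```

Real tracing systems face exactly the scalability issue the abstract names: the `trace` list grows with every event, which is why trace-file size becomes the bottleneck at scale.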
A new automated procedure for the preconcentration and analysis of anionic surfactants in water using solid-phase extraction (SPE) in a flow-injection (FI) system with UV-vis detection is reported in this paper. The method involves extraction of the blue-coloured ion-pair complex of anionic surfactants and methylene blue onto the solid phase, its elution from the SPE column with an organic solvent (chloroform stabilised with methanol), and subsequent measurement in a UV-vis flow cell at 650 nm. The system was thoroughly optimised before being applied to the analysis of real samples.
The lecture slides for Chapter 12: The External Environment: Opportunities, Threats, Industry Competition, and Competitor Analysis cover main activities such as: identifying early signals of environmental changes and trends; detecting meaning through ongoing observations of environmental changes and trends; developing projections of anticipated outcomes based on monitored changes and trends; and more. We hope this document is a useful source of information for your study and research.
We evaluate the effect of adding parse features to a leading model of preposition usage. Results show a significant improvement in the preposition selection task on native-speaker text and a modest increment in precision and recall in an ESL error detection task. Analysis of the parser output indicates that it is robust enough in the face of noisy non-native writing to extract useful information.
Community-based knowledge forums, such as Wikipedia, are susceptible to vandalism, i.e., ill-intentioned contributions that are detrimental to the quality of collective intelligence. Most previous work to date relies on shallow lexico-syntactic patterns and metadata to automatically detect vandalism in Wikipedia. In this paper, we explore more linguistically motivated approaches to vandalism detection.
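The shallow lexico-syntactic baseline that this abstract contrasts itself with can be illustrated with a toy sketch. Everything here (the patterns, the `looks_like_vandalism` function, the anonymity rule) is invented for illustration and is not the paper's system.

```python
import re

# Toy illustration of a shallow pattern/metadata vandalism detector:
# flag an edit as suspicious if it inserts shouted or stretched text
# and was made anonymously.
PATTERNS = [
    re.compile(r"\b[A-Z]{5,}\b"),   # a long all-caps token
    re.compile(r"(.)\1{4,}"),       # a character repeated 5+ times
]

def looks_like_vandalism(inserted_text, is_anonymous):
    """Return True if any shallow pattern fires on an anonymous edit."""
    hits = sum(bool(p.search(inserted_text)) for p in PATTERNS)
    return hits > 0 and is_anonymous

print(looks_like_vandalism("THIS PAGE IS LAMEEEEE", True))
print(looks_like_vandalism("Fixed a typo in the intro.", False))
```

The weakness of such surface patterns on well-disguised vandalism is precisely what motivates the more linguistically informed features the abstract proposes.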
Information extraction systems incorporate multiple stages of linguistic analysis. Although errors are typically compounded from stage to stage, it is possible to reduce the errors in one stage by harnessing the results of the other stages. We demonstrate this by using the results of coreference analysis and relation extraction to reduce the errors produced by a Chinese name tagger. We use an N-best approach to generate multiple hypotheses and have them re-ranked by subsequent stages of processing.
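The N-best re-ranking idea above can be sketched compactly. This is an illustrative toy, not the paper's Chinese name tagger: the hypothesis format, scores, and the `coref_bonus` stand-in for downstream evidence are all invented.

```python
# Sketch of N-best re-ranking: a first-stage tagger emits scored
# hypotheses, and a later processing stage adds evidence before the
# best hypothesis is chosen.

def rerank(nbest, downstream_score):
    """Pick the hypothesis maximizing tagger score plus downstream evidence."""
    return max(nbest, key=lambda h: h["score"] + downstream_score(h))

# Toy N-best list from a hypothetical name tagger.
nbest = [
    {"tags": ["PER", "O", "ORG"], "score": 2.1},
    {"tags": ["PER", "O", "PER"], "score": 1.9},
]

# Stand-in for coreference evidence: reward hypotheses whose first and
# last mentions receive the same entity type (consistent coreference).
def coref_bonus(hyp):
    return 0.5 if hyp["tags"][0] == hyp["tags"][-1] else 0.0

print(rerank(nbest, coref_bonus)["tags"])
```

Here the downstream stage overturns the tagger's top choice: the second hypothesis wins once the coreference-consistency bonus is added, mirroring how later stages can correct errors from an earlier one.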
We have analyzed 607 sentences of spontaneous human–computer speech data containing repairs, drawn from a total corpus of 10,718 sentences. We present here criteria and techniques for automatically detecting the presence of a repair, locating it, and making the appropriate correction. The criteria involve integration of knowledge from several sources: pattern matching, syntactic and semantic analysis, and acoustics. INTRODUCTION Spontaneous spoken language often includes speech that is not intended by the speaker to be part of the content of the utterance. ...
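One of the simplest pattern-matching cues for speech repairs is an exact repeated word sequence, as in "show me the the flights". The sketch below is only an illustration of that single cue, not the paper's multi-source system; the function name and window size are invented.

```python
# Minimal pattern-matching repair correction: delete the first copy of
# an immediately repeated word sequence (the reparandum), keeping the
# second copy (the repair).

def correct_repeats(tokens, max_len=3):
    """Remove immediately repeated word sequences up to max_len words."""
    out = list(tokens)
    i = 0
    while i < len(out):
        for n in range(max_len, 0, -1):
            if out[i:i + n] and out[i:i + n] == out[i + n:i + 2 * n]:
                del out[i:i + n]   # drop the first copy, keep the repair
                break
        else:
            i += 1
    return out

print(correct_repeats("show me the the flights".split()))
```

Exact repetition is only one of the cues the abstract lists; real repairs (e.g. "show me the fares, uh, the flights") additionally need the syntactic, semantic, and acoustic knowledge sources mentioned above.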
Interpreting fully natural speech is an important goal for spoken language understanding systems. However, while corpus studies have shown that about 10% of spontaneous utterances contain self-corrections, or REPAIRS, little is known about the extent to which cues in the speech signal may facilitate repair processing. We identify several cues based on acoustic and prosodic analysis of repairs in a corpus of spontaneous speech, and propose methods for exploiting these cues to detect and correct repairs.
Populations in Southeast Asia and South China have high frequencies of α-thalassemia caused by α-globin gene mutations and/or deletions. This study was designed to find an efficient and simple diagnostic test for the mutations and deletions. A duplex polymerase chain reaction (PCR)/denaturing high-pressure liquid chromatography (DHPLC) assay was used to detect the mutations and deletions. A blinded study of 110 samples, which included 92 α-thalassemia samples with various genotypes and 18 normal DNA samples, was carried out with these methods.
We report work in progress on adding affect detection to an existing program for virtual dramatic improvisation, monitored by a human director. To partially automate the director's functions, we have partially implemented the detection of emotions, etc., in users' text input by means of pattern matching, robust parsing, and some semantic analysis. The work also involves basic research into how affect is conveyed by metaphor.
Coordination disambiguation remains a difficult sub-problem in parsing despite the frequency and importance of coordination structures. We propose a method for disambiguating coordination structures. In this method, dual decomposition is used as a framework to take advantage of both HPSG parsing and coordinate structure analysis with alignment-based local features. We evaluate the performance of the proposed method on the Genia corpus and the Wall Street Journal portion of the Penn Treebank.
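The dual-decomposition framework mentioned above can be illustrated with a toy example. The scorers below are arbitrary numbers standing in for the HPSG parser and the coordinate structure analyzer, and the decaying step size is one common choice; none of this reproduces the paper's actual models.

```python
# Toy dual decomposition: two subproblems are decoded independently over
# the same binary variables, and Lagrange multipliers u are adjusted by
# subgradient steps until the two solutions agree.

def decode(scores, u, sign):
    """Independently label each variable 1 iff its adjusted score is positive."""
    return [1 if s + sign * ui > 0 else 0 for s, ui in zip(scores, u)]

def dual_decompose(scores_a, scores_b, steps=50, eta=0.5):
    u = [0.0] * len(scores_a)
    ya = [0] * len(scores_a)
    for t in range(steps):
        ya = decode(scores_a, u, +1)   # e.g. parser-side subproblem
        yb = decode(scores_b, u, -1)   # e.g. coordination-side subproblem
        if ya == yb:
            return ya                  # agreement certifies a joint optimum
        rate = eta / (t + 1)           # decaying subgradient step size
        u = [ui - rate * (a - b) for ui, a, b in zip(u, ya, yb)]
    return ya                          # fall back to one side's answer

print(dual_decompose([1.0, -0.5, 0.2], [0.8, 0.3, -0.4]))
```

The appeal of the framework, as the abstract exploits it, is that each subproblem keeps its own efficient decoder while the multipliers push the two models toward a consistent analysis.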
In pro-drop languages, the detection of explicit subjects, zero subjects, and non-referential impersonal constructions is crucial for anaphora and coreference resolution. While the identification of explicit and zero subjects has attracted the attention of researchers in the past, the automatic identification of impersonal constructions in Spanish has not been addressed yet, and this work is the first such study.