Neural embedding models
-
This paper introduces a novel framework, July, which serves the dual purpose of detecting vulnerable commits and localizing the root causes of the vulnerabilities. The fundamental idea behind July is that whether a commit is vulnerable is determined by the meaning embedded in its changed code. For just-in-time vulnerability detection (JIT-VD), July represents each commit as a Code Transformation Graph and employs a Graph Neural Network model to capture its meaning and distinguish vulnerable from non-vulnerable commits.
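The abstract does not specify July's GNN architecture, but the core idea of a graph neural network layer can be sketched as one round of message passing with mean aggregation (a generic illustration only, not July's actual model):

```python
def mean_aggregate(feats, edges):
    # feats: {node: [float, ...]} feature vectors; edges: undirected (u, v) pairs.
    # Each node's new vector is the mean of its own and its neighbors' vectors.
    neigh = {n: [] for n in feats}
    for u, v in edges:
        neigh[u].append(v)
        neigh[v].append(u)
    out = {}
    for n, f in feats.items():
        msgs = [feats[m] for m in neigh[n]] + [f]
        out[n] = [sum(col) / len(msgs) for col in zip(*msgs)]
    return out
```

Stacking several such layers lets each node's vector reflect an increasingly large neighborhood of the graph, which is what allows a graph-level classifier to judge the whole commit.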
-
This article describes an ensemble system for automatically extracting adverse drug events and drug-related entities from clinical narratives, developed for the 2018 n2c2 Shared Task Track 2. Materials and Methods: We designed a neural model to tackle both nested entities (entities embedded in other entities) and polysemous entities (entities annotated with multiple semantic types), based on MIMIC-III discharge summaries.
-
Part 1 of the document "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition" provides knowledge about: regular expressions, text normalization, and edit distance; N-gram language models; naive Bayes and sentiment classification; logistic regression; vector semantics and embeddings; neural networks and neural language models;...
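One of the topics listed, minimum edit distance, can be illustrated with the standard dynamic-programming formulation (unit costs assumed; this is a generic sketch, not code from the book):

```python
def edit_distance(s, t):
    # Levenshtein distance with unit insert/delete/substitute costs.
    m, n = len(s), len(t)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i          # delete all of s[:i]
    for j in range(n + 1):
        d[0][j] = j          # insert all of t[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if s[i - 1] == t[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + sub)  # match / substitution
    return d[m][n]
```

For example, "kitten" and "sitting" are three edits apart (two substitutions and one insertion).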
-
Graph neural networks (GNNs) have achieved superior performance and gained significant interest in various domains. However, most of the existing GNNs are considered for homogeneous graphs, whereas real-world systems are usually modeled as heterogeneous graphs or heterogeneous information networks (HINs).
-
The pretrained multilingual BERT model is used to generate embedding vectors from the input text. These vectors are combined with TF-IDF values to produce the input of the text summarization system. Redundant sentences in the output summary are eliminated by the Maximal Marginal Relevance method. Our system is evaluated on both English and Vietnamese using the CNN and Baomoi datasets, respectively. Experimental results show that our system achieves better results than existing works on the same datasets.
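The Maximal Marginal Relevance step mentioned above greedily picks sentences that are relevant to the query yet dissimilar to sentences already selected. A minimal sketch with an illustrative similarity function (the paper's actual scoring is not reproduced here):

```python
def mmr_select(sentences, query, sim, lam=0.7, k=3):
    # Greedy MMR: score = lam * relevance - (1 - lam) * max redundancy,
    # where sim(a, b) returns a similarity in [0, 1].
    selected, candidates = [], list(sentences)
    while candidates and len(selected) < k:
        def score(s):
            relevance = sim(s, query)
            redundancy = max((sim(s, p) for p in selected), default=0.0)
            return lam * relevance - (1 - lam) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected
```

With lam close to 1 selection favors pure relevance; lowering it increasingly penalizes overlap with sentences already in the summary, which is what prunes the redundant sentences.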
-
In this paper, we investigate the use of a deep learning method, the Deep Belief Network (DBN), combined with chaos theory to forecast chaotic time series. First, the chaotic time series is analyzed by calculating the largest Lyapunov exponent, reconstructing the time series via phase-space reconstruction, and determining the best embedding dimension and delay time. Then, when the forecasting model is constructed, the deep belief network is used for feature learning and the neural network for prediction.
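The phase-space reconstruction step can be illustrated by time-delay embedding: each point of the reconstructed trajectory stacks `dim` samples spaced `tau` steps apart (a minimal sketch; the function and parameter names are ours):

```python
def delay_embed(x, dim, tau):
    # Time-delay embedding: row i is [x[i], x[i+tau], ..., x[i+(dim-1)*tau]].
    # dim is the embedding dimension, tau the delay time, both chosen by the
    # analysis described in the abstract.
    n = len(x) - (dim - 1) * tau
    return [[x[i + j * tau] for j in range(dim)] for i in range(n)]
```

The resulting rows are the input vectors a forecasting model such as a DBN would be trained on.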
-
Biomedical named entity recognition (BNER) is a crucial initial step of information extraction in the biomedical domain. The task is typically modeled as a sequence labeling problem. Various machine learning algorithms, such as Conditional Random Fields (CRFs), have been successfully applied to this task. However, these state-of-the-art BNER systems depend largely on hand-crafted features.
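The hand-crafted features such systems rely on typically cover surface form, capitalization, and neighboring words. An illustrative per-token feature extractor (feature names are hypothetical, not from any specific BNER system):

```python
def token_features(tokens, i):
    # Typical hand-crafted features for the token at position i,
    # as would be fed to a CRF sequence labeler.
    w = tokens[i]
    return {
        "word.lower": w.lower(),
        "word.isupper": w.isupper(),     # e.g. gene symbols like "IL2"
        "word.istitle": w.istitle(),
        "word.isdigit": w.isdigit(),
        "suffix3": w[-3:],               # morphological cue, e.g. "-ase"
        "prev.lower": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next.lower": tokens[i + 1].lower() if i < len(tokens) - 1 else "<EOS>",
    }
```

Engineering and maintaining feature templates like this for each new corpus is exactly the cost that motivates the neural, representation-learning approaches surveyed elsewhere in this list.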
-
Biomedical literature is expanding rapidly, and tools that help locate information of interest are needed. To this end, a multitude of different approaches for classifying sentences in biomedical publications according to their coarse semantic and rhetoric categories (e.g., Background, Methods, Results, Conclusions) have been devised, with recent state-of-the-art results reported for a complex deep learning model.
-
Neural network based embedding models are receiving significant attention in the field of natural language processing due to their capability to effectively capture semantic information representing words, sentences or even larger text elements in low-dimensional vector space.
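Once words or sentences are mapped to low-dimensional vectors, semantic closeness is conventionally measured by cosine similarity; a minimal sketch:

```python
import math

def cosine(u, v):
    # Cosine similarity: 1.0 for parallel vectors, 0.0 for orthogonal ones.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm
```

Because it depends only on direction, not magnitude, embeddings of semantically related items score high even when their vector lengths differ.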
-
Unsupervised word representations are very useful in NLP tasks both as inputs to learning algorithms and as extra word features in NLP systems. However, most of these models are built with only local context and one representation per word. This is problematic because words are often polysemous and global context can also provide useful information for learning word meanings.
-
A collection of chemistry research reports published in an international biology journal, on the topic: Implantation of neural stem cells embedded in hyaluronic acid and collagen composite conduit promotes regeneration in a rabbit facial nerve injury model.