Results 41 to 50 of about 96,400
Word Embeddings for Entity-annotated Texts
Learned vector representations of words are useful tools for many information retrieval and natural language processing tasks due to their ability to capture lexical semantics.
A Das +15 more
core +1 more source
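The claim that embeddings capture lexical semantics is usually operationalized as cosine similarity between vectors. A minimal sketch with toy vectors (the numbers below are invented for illustration; real embeddings typically have 100-300 dimensions):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar directions, near 0 for unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional vectors standing in for trained embeddings.
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.0]),
    "queen": np.array([0.8, 0.9, 0.2, 0.1]),
    "apple": np.array([0.1, 0.0, 0.9, 0.8]),
}

print(cosine(vectors["king"], vectors["queen"]))  # high: semantically related
print(cosine(vectors["king"], vectors["apple"]))  # low: unrelated
```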
Distilling knowledge from a well-trained, cumbersome network into a small one has recently emerged as a research topic, since lightweight, high-performance neural networks are in particular demand in resource-restricted systems. This paper addresses the problem of distilling word embeddings for NLP tasks.
Mou, Lili +5 more
openaire +2 more sources
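The paper's approach trains a small encoding network over the teacher's embeddings; as a deliberately simplified linear sketch of the same idea (truncated SVD as the compression step is my substitution, not the authors' method):

```python
import numpy as np

rng = np.random.default_rng(0)
teacher = rng.normal(size=(10_000, 300))  # stand-in for a pretrained 300-d embedding matrix

# Linear compression: keep the top-k directions of variance as the "student" table.
k = 50
U, S, Vt = np.linalg.svd(teacher, full_matrices=False)
student = U[:, :k] * S[:k]                # (10000, 50) distilled embeddings

# Reconstruction error indicates how much information the small table retains.
approx = student @ Vt[:k, :]
print(np.linalg.norm(teacher - approx) / np.linalg.norm(teacher))
```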
To resolve lexical mismatches between user queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network.
Youngjin Jang, Harksoo Kim
doaj +1 more source
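As a minimal sketch of the encoder side of such a model (a GRU encoder feeding a softmax classifier over FAQ categories; the paper's full encoder-decoder architecture is not reproduced here, and all dimensions are assumptions):

```python
import torch
import torch.nn as nn

class FAQClassifier(nn.Module):
    """Encode a query with a GRU and classify it into one of N FAQ categories."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_classes=50):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):                  # token_ids: (batch, seq_len)
        embedded = self.embed(token_ids)           # (batch, seq_len, embed_dim)
        _, hidden = self.encoder(embedded)         # hidden: (1, batch, hidden_dim)
        return self.classifier(hidden.squeeze(0))  # (batch, num_classes) logits

model = FAQClassifier(vocab_size=20_000)
logits = model(torch.randint(0, 20_000, (4, 12)))  # 4 queries, 12 tokens each
print(logits.shape)  # torch.Size([4, 50])
```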
SensEmbed: Learning sense embeddings for word and relational similarity [PDF]
Word embeddings have recently gained considerable popularity for modeling words in different Natural Language Processing (NLP) tasks including semantic similarity measurement.
IACOBACCI, IGNACIO JAVIER +2 more
core +2 more sources
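Sense embeddings assign one vector per word sense rather than per word; a common way to score word similarity with them is to take the maximum cosine over all sense pairs. A toy sketch (invented vectors, not SensEmbed's BabelNet-derived ones):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Each word maps to a list of sense vectors.
senses = {
    "bank": [np.array([0.9, 0.1, 0.0]),   # financial-institution sense
             np.array([0.0, 0.2, 0.9])],  # river-bank sense
    "money": [np.array([0.8, 0.3, 0.1])],
}

def max_sense_similarity(w1, w2):
    """Similarity between the closest pair of senses of the two words."""
    return max(cosine(u, v) for u in senses[w1] for v in senses[w2])

print(max_sense_similarity("bank", "money"))  # driven by the financial sense
```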
Cultural Cartography with Word Embeddings [PDF]
Using the frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized and continuous “meaning-space” where words are assigned a location based on relations of similarity to other words ...
Stoltz, Dustin, Taylor, Marshall
openaire +4 more sources
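One common way this literature makes such a meaning-space interpretable is to build a semantic dimension from antonym anchor pairs and project words onto it (an illustrative sketch with toy vectors; Stoltz and Taylor's own toolkit centers on related but distinct measures):

```python
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy embedding vectors; in practice these come from a trained model.
vecs = {
    "rich":     np.array([0.9, 0.2, 0.1]),
    "poor":     np.array([0.1, 0.8, 0.2]),
    "yacht":    np.array([0.8, 0.1, 0.3]),
    "eviction": np.array([0.2, 0.9, 0.1]),
}

# A cultural "affluence" dimension: the direction from poor to rich.
affluence = vecs["rich"] - vecs["poor"]

for word in ("yacht", "eviction"):
    # Positive scores lean toward "rich", negative toward "poor".
    print(word, round(cosine(vecs[word], affluence), 3))
```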
Semantic features are important for machine learning-based drug name recognition (DNR) systems. In most DNR systems, these features are derived from drug dictionaries manually constructed by experts.
Shengyu Liu +3 more
doaj +1 more source
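A dictionary-free alternative used in this line of work (an assumption here, not a claim about Liu et al.'s exact pipeline) is to cluster word embeddings and use the cluster IDs as discrete semantic features for the sequence tagger:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
words = ["aspirin", "ibuprofen", "warfarin", "table", "chair"]
embeddings = rng.normal(size=(len(words), 50))  # stand-in for trained embeddings

# Cluster the embedding space; cluster IDs act like automatically induced
# dictionary classes ("drug-like", "furniture-like", ...).
cluster_ids = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(embeddings)

semantic_feature = dict(zip(words, cluster_ids))
print(semantic_feature)  # maps each word to a cluster ID feature
```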
Better Word Embeddings by Disentangling Contextual n-Gram Information
Pre-trained word vectors are ubiquitous in Natural Language Processing applications. In this paper, we show that training word embeddings jointly with bigram and even trigram embeddings results in improved unigram embeddings.
Gupta, Prakhar +2 more
core +1 more source
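The paper's method trains unigram vectors jointly with n-gram vectors and then discards the n-grams. A loose approximation with off-the-shelf tools (gensim's Phrases merges frequent bigrams into single tokens before Word2Vec training, a different mechanism that nonetheless illustrates giving n-grams their own vectors):

```python
from gensim.models import Word2Vec
from gensim.models.phrases import Phrases

sentences = [
    ["new", "york", "is", "a", "city"],
    ["new", "york", "has", "many", "parks"],
    ["the", "city", "has", "parks"],
]

# Detect frequent bigrams ("new york" -> "new_york"); thresholds are set low
# only because this toy corpus is tiny.
bigrams = Phrases(sentences, min_count=1, threshold=0.1)
merged = [bigrams[s] for s in sentences]

model = Word2Vec(merged, vector_size=50, min_count=1, epochs=50, seed=0)
print("new_york" in model.wv)  # True: the bigram received its own vector
```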
Theoretical foundations and limits of word embeddings: What types of meaning can they capture? [PDF]
Arseniev-Koehler A.
europepmc +2 more sources
Morphological Priors for Probabilistic Neural Word Embeddings
Word embeddings allow natural language processing systems to share statistical information across related words. These embeddings are typically based on distributional statistics, making it difficult for them to generalize to rare or unseen words.
Bhatia, Parminder +2 more
core +1 more source
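Bhatia et al. place a prior over embeddings built from morphemes; a related (but distinct) subword approach that is easy to demonstrate is FastText, which composes a vector for an unseen word from its character n-grams:

```python
from gensim.models import FastText

sentences = [
    ["the", "runner", "was", "running", "quickly"],
    ["she", "runs", "every", "morning"],
]

# Character n-grams (min_n..max_n) let the model build vectors for words it
# never saw in training, by summing their subword vectors.
model = FastText(sentences, vector_size=50, min_n=3, max_n=5,
                 min_count=1, epochs=50, seed=0)

print(model.wv["runnings"][:5])            # vector for an unseen inflection
print(model.wv.similarity("running", "runs"))
```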
Understanding and Creating Word Embeddings
Word embeddings let you analyze how different terms are used in a corpus of texts by capturing information about their contexts. Through a primarily theoretical lens, this lesson will teach you how to prepare a corpus and train a word ...
Avery Blankenship +2 more
doaj +1 more source
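A minimal end-to-end version of the pipeline such a lesson walks through (lowercase, tokenize, train; gensim's Word2Vec is assumed as the training library, and the corpus here is a toy stand-in):

```python
import re
from gensim.models import Word2Vec

raw_texts = [
    "The ship sailed from the harbor at dawn.",
    "Sailors loaded cargo onto the ship in the harbor.",
    "At dawn the cargo ship left port.",
]

# Corpus preparation: lowercase and split on non-letter characters.
corpus = [re.findall(r"[a-z]+", text.lower()) for text in raw_texts]

# Train a small model; real corpora need far more text for stable vectors.
model = Word2Vec(corpus, vector_size=50, window=3, min_count=1,
                 epochs=100, seed=0)

print(model.wv.most_similar("ship", topn=3))
```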

