Results 51 to 60 of about 2,031,469
Background: In the past few years, neural word embeddings have been widely used in text mining. However, their vector representations mostly act as a black box in the downstream applications that use them, limiting their interpretability.
Zhiwei Chen et al.
doaj +1 more source
Acoustic Word Embeddings for End-to-End Speech Synthesis
The most recent end-to-end speech synthesis systems use phonemes as acoustic input tokens and ignore the information about which word the phonemes come from.
Feiyu Shen, Chenpeng Du, Kai Yu
doaj +1 more source
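One plausible way to restore the word identity that the snippet above says current systems discard is to broadcast a word-level embedding over that word's phonemes and concatenate it with the phoneme embeddings before the acoustic model. The sketch below is illustrative only, with random stand-in embeddings, and is not the paper's architecture:

```python
# Sketch: attach word-level information to phoneme tokens by repeating
# each word's embedding across its phonemes and concatenating.
import numpy as np

phone_dim, word_dim = 4, 3
rng = np.random.default_rng(0)

# "hello world" -> phoneme sequences, with word boundaries kept.
words = [("hello", ["HH", "AH", "L", "OW"]), ("world", ["W", "ER", "L", "D"])]
phone_emb = {p: rng.normal(size=phone_dim) for _, ps in words for p in ps}
word_emb = {w: rng.normal(size=word_dim) for w, _ in words}

# One fused vector per phoneme: [phoneme embedding | its word's embedding].
inputs = np.stack([np.concatenate([phone_emb[p], word_emb[w]])
                   for w, ps in words for p in ps])
print(inputs.shape)  # (8, 7)
```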
This work explores the use of word embeddings as features for Spanish verb sense disambiguation (VSD). This type of learning technique is called disjoint semi-supervised learning: an unsupervised algorithm (i.e. ...
Cristian Cardellino et al.
doaj +1 more source
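A minimal sketch of the disjoint semi-supervised setup the snippet describes, assuming gensim and scikit-learn: an unsupervised embedding model is trained separately on unlabeled text, and its vectors then feed a supervised sense classifier. The toy corpus and sense labels below are placeholders, not the paper's data:

```python
# Disjoint semi-supervised learning sketch: an unsupervised model (Word2Vec)
# is trained on unlabeled text, then its vectors feed a supervised classifier.
import numpy as np
from gensim.models import Word2Vec
from sklearn.linear_model import LogisticRegression

# Unlabeled corpus for the unsupervised stage (placeholder sentences).
unlabeled = [["el", "banco", "presta", "dinero"],
             ["ella", "salta", "el", "banco", "del", "parque"]] * 50
w2v = Word2Vec(unlabeled, vector_size=50, min_count=1, epochs=20, seed=0)

def featurize(context):
    """Average the embeddings of the words around the target verb."""
    vecs = [w2v.wv[w] for w in context if w in w2v.wv]
    return np.mean(vecs, axis=0) if vecs else np.zeros(w2v.vector_size)

# Tiny labeled set for the supervised stage (hypothetical sense labels).
X = np.stack([featurize(["presta", "dinero"]), featurize(["salta", "parque"])])
y = ["sense_finance", "sense_motion"]
clf = LogisticRegression().fit(X, y)
print(clf.predict([featurize(["presta", "dinero"])]))
```

The two stages never share parameters, which is what makes the setup "disjoint": the supervised learner only ever sees the frozen embedding features.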
Benchmark for Evaluation of Danish Clinical Word Embeddings
In natural language processing, benchmarks are used to track progress and identify useful models. Currently, no benchmark for Danish clinical word embeddings exists.
Martin Sundahl Laursen et al.
doaj +1 more source
Towards Spatial Word Embeddings
Leveraging the textual and spatial data carried by spatio-textual objects (e.g., tweets) has become increasingly important in real-world applications, driven by their growing availability over the last decades (e.g., through smartphones).
Paul Mousset et al.
openaire +5 more sources
Enhancing Accuracy of Semantic Relatedness Measurement by Word Single-Meaning Embeddings
We propose a lightweight algorithm of learning word single-meaning embeddings (WSME), by exploring WordNet synsets and Doc2vec document embeddings, to enhance the accuracy of semantic relatedness measurement.
Xiaotao Li, Shujuan You, Wai Chen
doaj +1 more source
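A rough sketch of the WSME idea as described in the snippet: build one pseudo-document per WordNet synset from its gloss and lemmas, embed it with Doc2Vec, and score semantic relatedness by vector similarity. The corpus construction and hyperparameters here are assumptions, not the authors' exact algorithm:

```python
# Sketch: one embedding per WordNet synset (a single word *meaning*), via
# Doc2Vec over synset glosses; relatedness = similarity of sense vectors.
# Requires: nltk.download("wordnet")
from nltk.corpus import wordnet as wn
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from gensim.utils import simple_preprocess

# One pseudo-document per synset: its gloss plus its lemma names.
docs = [TaggedDocument(simple_preprocess(s.definition()) +
                       [l.name() for l in s.lemmas()], [s.name()])
        for s in wn.all_synsets()]
model = Doc2Vec(docs, vector_size=100, min_count=2, epochs=10)

# Relatedness between two senses of "bank" (hypothetical usage).
print(model.dv.similarity("bank.n.01", "bank.n.02"))
```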
AutoExtend: Combining Word Embeddings with Semantic Resources
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.
Sascha Rothe, Hinrich Schütze
doaj +1 more source
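The core constraint in AutoExtend is that a word's embedding is (approximately) the sum of the embeddings of its senses. A toy least-squares sketch of that single idea, not the authors' full system, assuming a known word-to-synset membership matrix:

```python
# AutoExtend-style sketch: solve for synset vectors S such that M @ S ≈ W,
# where W holds known word embeddings and M[i, j] = 1 if word i belongs
# to synset j (each word is modeled as the sum of its synset vectors).
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 5))            # 4 word vectors of dimension 5 (toy data)
M = np.array([[1, 0, 0],               # word 0 has a single sense
              [1, 1, 0],               # word 1 is ambiguous: synsets 0 and 1
              [0, 1, 1],
              [0, 0, 1]], dtype=float)

# Least-squares solution for the 3 unknown synset vectors.
S, *_ = np.linalg.lstsq(M, W, rcond=None)
print(S.shape)                          # (3, 5): one vector per synset
print(np.allclose(M @ S, W, atol=1.0))  # reconstruction is only approximate
```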
Morphological Skip-Gram: Replacing FastText characters n-gram with morphological knowledge
Natural language processing systems have attracted much interest from industry. This field comprises applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others.
Thiago Dias Bispo et al.
doaj +1 more source
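A minimal sketch of the representational change the title describes: in fastText a word vector is the sum of its character n-gram vectors, whereas here it is the sum of its morpheme vectors. The segmentations below are hard-coded for illustration; a real system would use a morphological analyzer:

```python
# Sketch: word vector as the sum of morpheme vectors (the Morphological
# Skip-Gram idea) instead of fastText's character n-gram vectors.
import numpy as np

dim = 8
morphemes = ["un", "break", "able", "ing"]
emb = {m: np.random.default_rng(i).normal(size=dim)
       for i, m in enumerate(morphemes)}

# Hard-coded segmentations, for illustration only.
segmentation = {"unbreakable": ["un", "break", "able"],
                "breaking": ["break", "ing"]}

def word_vector(word):
    """Compose a word embedding from its morpheme embeddings."""
    return np.sum([emb[m] for m in segmentation[word]], axis=0)

print(word_vector("unbreakable").shape)  # (8,)
```

Swapping character n-grams for morphemes keeps the compositional form of fastText while replacing its arbitrary subword inventory with linguistically meaningful units.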
Deconstructing word embedding algorithms
Word embeddings are reliable feature representations of words used to obtain high quality results for various NLP applications. Uncontextualized word embeddings are used in many NLP tasks today, especially in resource-limited settings where high memory capacity and GPUs are not available.
Jackie Chi Kit Cheung et al.
openaire +3 more sources
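In the spirit of the deconstruction above, a well-known result in this literature (Levy and Goldberg, 2014) is that skip-gram with negative sampling implicitly factorizes a shifted PMI matrix. The count-based sketch below builds embeddings explicitly via PPMI and truncated SVD; the toy corpus and rank are placeholders:

```python
# Sketch: count-based word embeddings via PPMI + truncated SVD, the kind
# of factorization that skip-gram-style objectives implicitly approximate.
import numpy as np

corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts with a context window of 2.
C = np.zeros((len(vocab), len(vocab)))
for i, w in enumerate(corpus):
    for j in range(max(0, i - 2), min(len(corpus), i + 3)):
        if j != i:
            C[idx[w], idx[corpus[j]]] += 1

# Positive pointwise mutual information.
total = C.sum()
pw = C.sum(axis=1, keepdims=True) / total
pc = C.sum(axis=0, keepdims=True) / total
with np.errstate(divide="ignore"):
    pmi = np.log((C / total) / (pw * pc))
ppmi = np.maximum(pmi, 0)

# A rank-k factorization of the PPMI matrix gives the embeddings.
U, s, _ = np.linalg.svd(ppmi)
k = 4
embeddings = U[:, :k] * np.sqrt(s[:k])
print(embeddings.shape)  # (vocab_size, k)
```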
A Gloss Composition and Context Clustering Based Distributed Word Sense Representation Model
In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word ...
Tao Chen et al.
doaj +1 more source
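A compact sketch of the context-clustering baseline the snippet contrasts with: average the embeddings in a window around each occurrence of a target word, cluster the resulting occurrence vectors, and treat the cluster centroids as sense vectors. The window size and number of clusters k are exactly the parameters the snippet says require careful tuning; all data below is a random stand-in:

```python
# Sketch: induce sense vectors for a target word by clustering the averaged
# context embeddings of its occurrences (k-means centroids become the
# distributed sense representations).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
dim, n_occurrences, k = 16, 40, 2

# Stand-in for per-occurrence context vectors (in practice, the mean of the
# embeddings of words in a window around the target; here, random data).
context_vectors = np.concatenate([
    rng.normal(loc=-1.0, size=(n_occurrences // 2, dim)),
    rng.normal(loc=+1.0, size=(n_occurrences // 2, dim)),
])

km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(context_vectors)
sense_vectors = km.cluster_centers_   # one vector per induced sense
print(sense_vectors.shape)            # (2, 16)
```

Infrequent words yield few occurrence vectors, which is why such clustering-based models degrade on rare words, the weakness the gloss-composition approach above aims to address.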