Results 21 to 30 of about 89,062
Contextual Word Embedding [PDF]
Effective clustering of short documents, such as tweets, is difficult because of the lack of sufficient semantic context. Word embedding is a technique that is effective in addressing this lack of semantic context. However, the process of word vector embedding, in turn, relies on the availability of sufficient contexts to learn the word associations ...
Debasis Ganguly, Kripabandhu Ghosh
openaire +1 more source
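The snippet above describes using word embeddings to compensate for the thin semantic context of short documents such as tweets. A common baseline (not necessarily the authors' exact method) is to mean-pool pre-trained word vectors into a document vector and compare documents by cosine similarity; the sketch below uses tiny hand-made vectors as hypothetical stand-ins for real word2vec/GloVe embeddings:

```python
import numpy as np

# Toy "pre-trained" embeddings (hypothetical 4-d vectors; a real system
# would load word2vec or GloVe vectors trained on a large corpus).
emb = {
    "storm":   np.array([0.9, 0.1, 0.0, 0.2]),
    "weather": np.array([0.8, 0.2, 0.1, 0.1]),
    "rain":    np.array([0.85, 0.15, 0.05, 0.1]),
    "goal":    np.array([0.1, 0.9, 0.8, 0.0]),
    "match":   np.array([0.2, 0.8, 0.9, 0.1]),
}

def embed_short_text(tokens, emb):
    """Mean-pool word vectors to get a document vector for a short text."""
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return np.zeros_like(next(iter(emb.values())))
    return np.mean(vecs, axis=0)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

t1 = embed_short_text(["storm", "rain"], emb)
t2 = embed_short_text(["weather"], emb)
t3 = embed_short_text(["goal", "match"], emb)

# The two weather tweets end up closer to each other than to the sports tweet,
# even though they share no surface words.
assert cosine(t1, t2) > cosine(t1, t3)
```

This is the effect the snippet alludes to: embeddings supply semantic context that bag-of-words overlap cannot, which is what makes clustering short texts feasible.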
This work explores the use of word embeddings as features for Spanish verb sense disambiguation (VSD). This type of learning technique is named disjoint semi-supervised learning: an unsupervised algorithm (i.e. ...
Cristian Cardellino +1 more
doaj +1 more source
Evaluating the Underlying Gender Bias in Contextualized Word Embeddings [PDF]
Gender bias significantly affects natural language processing applications. Word embeddings have been shown both to retain and to amplify the gender biases present in current data sources.
Basta, Christine +2 more
core +2 more sources
Attention Word Embedding [PDF]
Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models. The popular continuous bag-of-words (CBOW) model of word2vec learns a vector embedding by masking a given word in a sentence and then using the other words as a context to predict it.
Sonkar, Shashank +2 more
openaire +2 more sources
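The CBOW procedure described in the snippet above (mask the centre word, average its neighbours' vectors, predict the masked word) can be sketched as a minimal NumPy training loop. This is a toy illustration of the objective, not word2vec's optimized implementation (which uses negative sampling or hierarchical softmax rather than a full softmax); the corpus and hyperparameters are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny toy corpus; real CBOW (word2vec) trains on billions of tokens.
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2

W_in = rng.normal(scale=0.1, size=(V, D))   # context (input) embeddings
W_out = rng.normal(scale=0.1, size=(D, V))  # output projection

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr, losses = 0.1, []
for epoch in range(200):
    total = 0.0
    for pos, word in enumerate(corpus):
        # Mask the centre word; its neighbours within the window are context.
        ctx = [idx[corpus[j]]
               for j in range(max(0, pos - window),
                              min(len(corpus), pos + window + 1))
               if j != pos]
        h = W_in[ctx].mean(axis=0)       # average the context vectors
        p = softmax(h @ W_out)           # predict the masked word
        target = idx[word]
        total += -np.log(p[target] + 1e-12)
        # Cross-entropy gradient and plain SGD update.
        dscores = p.copy()
        dscores[target] -= 1.0
        grad_h = W_out @ dscores
        W_out -= lr * np.outer(h, dscores)
        W_in[ctx] -= lr * grad_h / len(ctx)
    losses.append(total / len(corpus))
```

The equal-weight averaging of context vectors in `h` is exactly the design choice the "Attention Word Embedding" paper revisits: it proposes weighting context words by learned attention instead of treating them uniformly.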
Enhancing Accuracy of Semantic Relatedness Measurement by Word Single-Meaning Embeddings
We propose a lightweight algorithm of learning word single-meaning embeddings (WSME), by exploring WordNet synsets and Doc2vec document embeddings, to enhance the accuracy of semantic relatedness measurement.
Xiaotao Li, Shujuan You, Wai Chen
doaj +1 more source
AutoExtend: Combining Word Embeddings with Semantic Resources
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.
Sascha Rothe, Hinrich Schütze
doaj +1 more source
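AutoExtend, per the snippet above, learns embeddings for non-word objects such as synsets in the same space as word embeddings. The actual system learns word, lexeme, and synset vectors jointly under sum constraints; the degenerate equal-weight case of that constraint is simply averaging the member words' vectors, sketched below with hypothetical vectors and an illustrative WordNet-style synset label:

```python
import numpy as np

# Hypothetical pre-trained word vectors (stand-ins for word2vec output).
word_vec = {
    "car":        np.array([0.9, 0.1, 0.0]),
    "automobile": np.array([0.85, 0.15, 0.05]),
    "bank":       np.array([0.5, 0.5, 0.5]),
}

# WordNet-style synset grouping (the label is illustrative).
synsets = {"car.n.01": ["car", "automobile"]}

def synset_embedding(members, word_vec):
    """Simplified AutoExtend-style synset vector: the mean of its members.

    AutoExtend itself learns word/lexeme/synset embeddings jointly so that
    a word decomposes into its lexemes and a synset sums its lexemes;
    uniform averaging is the equal-weight special case of that idea.
    """
    return np.mean([word_vec[w] for w in members], axis=0)

s = synset_embedding(synsets["car.n.01"], word_vec)
```

The payoff is that synsets and entities become points in the same vector space as words, so similarity queries can mix both object types.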
Benchmark for Evaluation of Danish Clinical Word Embeddings
In natural language processing, benchmarks are used to track progress and identify useful models. Currently, no benchmark for Danish clinical word embeddings exists.
Martin Sundahl Laursen +4 more
doaj +1 more source
Morphological Skip-Gram: Replacing FastText characters n-gram with morphological knowledge
Natural language processing systems have attracted much interest from industry. The field comprises applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others.
Thiago Dias Bispo +3 more
doaj +1 more source
Socialized Word Embeddings [PDF]
Word embeddings have attracted a lot of attention. On social media, each user's language use can be significantly affected by the user's friends. In this paper, we propose a socialized word embedding algorithm which considers both a user's personal characteristics of language use and the user's social relationships on social media.
Ziqian Zeng +3 more
openaire +1 more source
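One common way to realize the idea in the snippet above (not necessarily the paper's exact formulation) is to represent a word, for a given user, as a shared global word vector plus a small user-specific offset, with an extra regularizer pulling friends' offsets together. A minimal sketch with hypothetical users and values:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 4

# Global word embeddings shared by all users (hypothetical values).
global_word = {"sick": rng.normal(size=D)}

# Per-user offset vectors capturing personal language use.
user_vec = {
    "alice": rng.normal(scale=0.1, size=D),
    "bob":   rng.normal(scale=0.1, size=D),
}

def socialized_embedding(word, user):
    """Global word vector plus a user-specific offset, so the same word
    can carry a different shade of meaning per user (e.g. "sick" as
    ill vs. excellent)."""
    return global_word[word] + user_vec[user]

a = socialized_embedding("sick", "alice")
b = socialized_embedding("sick", "bob")

# A social-regularization step would additionally pull friends' offsets
# together, e.g. user_vec[u] += lam * (user_vec[v] - user_vec[u]).
```

The offsets keep per-user storage small (one vector per user rather than a full vocabulary), while the social term encodes the claim that friends use language similarly.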
A Collection of Swedish Diachronic Word Embedding Models Trained on Historical Newspaper Data
This paper describes the creation of several word embedding models based on a large collection of diachronic Swedish newspaper material available through Språkbanken Text, the Swedish language bank.
Simon Hengchen, Nina Tahmasebi
doaj +1 more source