Results 21 to 30 of about 89,062

Contextual Word Embedding [PDF]

open access: yesCompanion Proceedings of The Web Conference 2018 (WWW '18), 2018
Effective clustering of short documents, such as tweets, is difficult because of the lack of sufficient semantic context. Word embedding is a technique that is effective in addressing this lack of semantic context. However, the process of word vector embedding, in turn, relies on the availability of sufficient contexts to learn the word associations ...
Debasis Ganguly, Kripabandhu Ghosh
openaire   +1 more source
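As a toy illustration of the idea in the abstract above — word embeddings supplying the semantic context that short documents lack — the sketch below averages word vectors into document vectors and compares them by cosine similarity. The vectors are hypothetical hand-picked values, not trained embeddings, and the word list is invented for the example:

```python
import numpy as np

# Hypothetical 3-d vectors standing in for learned word embeddings
# (values chosen so that "car" and "auto" are close, "fast" is not).
EMB = {
    "car":  np.array([0.9, 0.1, 0.0]),
    "auto": np.array([0.8, 0.2, 0.1]),
    "fast": np.array([0.1, 0.9, 0.0]),
    "slow": np.array([0.0, 0.8, 0.3]),
}

def doc_vector(tokens):
    """Represent a short document as the mean of its word embeddings."""
    vecs = [EMB[t] for t in tokens if t in EMB]
    return np.mean(vecs, axis=0)

def cosine(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two "tweets" with no word overlap still come out similar,
# because their embeddings occupy nearby regions of the space.
d1 = doc_vector(["car", "fast"])
d2 = doc_vector(["auto", "slow"])
sim = cosine(d1, d2)
```

In a clustering pipeline these document vectors would then be fed to any standard clusterer (e.g. k-means), which is one common way embeddings compensate for the sparse surface vocabulary of tweets.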

Exploring the impact of word embeddings for disjoint semisupervised Spanish verb sense disambiguation

open access: yesInteligencia Artificial, 2018
This work explores the use of word embeddings as features for Spanish verb sense disambiguation (VSD). This type of learning technique is named disjoint semisupervised learning: an unsupervised algorithm (i.e. ...
Cristian Cardellino   +1 more
doaj   +1 more source

Evaluating the Underlying Gender Bias in Contextualized Word Embeddings [PDF]

open access: yes, 2019
Gender bias heavily impacts natural language processing applications. Word embeddings have clearly been shown both to retain and to amplify gender biases present in current data sources.
Basta, Christine   +2 more
core   +2 more sources

Attention Word Embedding [PDF]

open access: yesProceedings of the 28th International Conference on Computational Linguistics, 2020
Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models. The popular continuous bag-of-words (CBOW) model of word2vec learns a vector embedding by masking a given word in a sentence and then using the other words as context to predict it.
Sonkar, Shashank   +2 more
openaire   +2 more sources
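The CBOW objective described above — mask a word, then predict it from the average of its context vectors — can be sketched in plain NumPy. The corpus, embedding size, window, and learning rate below are illustrative assumptions, not the word2vec defaults, and the full-softmax update shown here is the textbook form rather than word2vec's negative-sampling optimization:

```python
import numpy as np

rng = np.random.default_rng(0)
sentence = "the cat sat on the mat".split()
vocab = sorted(set(sentence))
idx = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 8

W_in = rng.normal(0.0, 0.1, (V, D))   # context-word embeddings
W_out = rng.normal(0.0, 0.1, (D, V))  # output projection

def training_pairs(sent, window=2):
    """Yield (context ids, masked center id) pairs, CBOW-style."""
    for pos in range(len(sent)):
        ctx = [idx[sent[j]]
               for j in range(max(0, pos - window),
                              min(len(sent), pos + window + 1))
               if j != pos]
        yield ctx, idx[sent[pos]]

def epoch(lr=0.3):
    """One pass: average the context vectors, softmax-predict the center."""
    total = 0.0
    for ctx, center in training_pairs(sentence):
        h = W_in[ctx].mean(axis=0)          # averaged context vector
        scores = h @ W_out
        p = np.exp(scores - scores.max())
        p /= p.sum()
        total += -np.log(p[center])         # cross-entropy loss
        dscores = p.copy()
        dscores[center] -= 1.0              # softmax cross-entropy gradient
        dh = W_out @ dscores
        W_out[...] -= lr * np.outer(h, dscores)
        np.subtract.at(W_in, ctx, lr * dh / len(ctx))
    return total

loss_first = epoch()
for _ in range(50):
    loss_last = epoch()
```

After a few dozen passes the cross-entropy drops, i.e. the model gets better at recovering the masked word from its neighbors — the signal from which CBOW embeddings acquire their semantics.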

Enhancing Accuracy of Semantic Relatedness Measurement by Word Single-Meaning Embeddings

open access: yesIEEE Access, 2021
We propose a lightweight algorithm of learning word single-meaning embeddings (WSME), by exploring WordNet synsets and Doc2vec document embeddings, to enhance the accuracy of semantic relatedness measurement.
Xiaotao Li, Shujuan You, Wai Chen
doaj   +1 more source

AutoExtend: Combining Word Embeddings with Semantic Resources

open access: yesComputational Linguistics, 2017
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.
Sascha Rothe, Hinrich Schütze
doaj   +1 more source
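AutoExtend learns synset and entity embeddings jointly with word embeddings under constraints from the resource. As a far simpler point of comparison (not the AutoExtend method itself), a synset vector can be approximated by averaging the vectors of its member lemmas; the word vectors and the synset below are made up for illustration:

```python
import numpy as np

# Hypothetical pre-trained 2-d word vectors (not AutoExtend's learned ones).
word_vecs = {
    "car":        np.array([0.9, 0.1]),
    "automobile": np.array([0.8, 0.2]),
    "machine":    np.array([0.4, 0.6]),
}

# A WordNet-style synset represented as its member lemmas.
synset_car = ["car", "automobile", "machine"]

def synset_vector(lemmas, vecs):
    """Naive synset embedding: mean of member-word embeddings.
    AutoExtend instead learns word, lexeme, and synset embeddings
    jointly; this average is only a baseline illustration."""
    return np.mean([vecs[l] for l in lemmas], axis=0)

sv = synset_vector(synset_car, word_vecs)
```

The appeal of the learned (rather than averaged) approach is that the non-word objects end up in the same space as the words while also respecting the structure of the semantic resource.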

Benchmark for Evaluation of Danish Clinical Word Embeddings

open access: yesNorthern European Journal of Language Technology, 2023
In natural language processing, benchmarks are used to track progress and identify useful models. Currently, no benchmark for Danish clinical word embeddings exists.
Martin Sundahl Laursen   +4 more
doaj   +1 more source

Morphological Skip-Gram: Replacing FastText characters n-gram with morphological knowledge

open access: yesInteligencia Artificial, 2021
Natural language processing systems have attracted much interest from industry. This field comprises applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others.
Thiago Dias Bispo   +3 more
doaj   +1 more source

Socialized Word Embeddings [PDF]

open access: yesProceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, 2017
Word embeddings have attracted a lot of attention. On social media, each user’s language use can be significantly affected by the user’s friends. In this paper, we propose a socialized word embedding algorithm that considers both a user’s personal characteristics of language use and the user’s social relationships on social media.
Ziqian Zeng   +3 more
openaire   +1 more source

A Collection of Swedish Diachronic Word Embedding Models Trained on Historical Newspaper Data

open access: yesJournal of Open Humanities Data, 2021
This paper describes the creation of several word embedding models based on a large collection of diachronic Swedish newspaper material available through Språkbanken Text, the Swedish language bank.
Simon Hengchen, Nina Tahmasebi
doaj   +1 more source
