Learning Gender-Neutral Word Embeddings [PDF]
Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect ...
Jieyu Zhao+4 more
semanticscholar +1 more source
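The bias this line of work measures is commonly probed by projecting word vectors onto a gender direction. A minimal sketch of that style of probe, using random stand-in vectors in place of real pretrained embeddings (this is the generic he-she direction test from the debiasing literature, not necessarily this paper's exact metric):

```python
# Sketch of a gender-direction bias probe: project a word onto the
# (he - she) direction; the sign indicates which way the word leans.
# Vectors are random stand-ins; a real probe would load pretrained embeddings.
import numpy as np

def gender_score(word_vec, he_vec, she_vec):
    """Cosine of the word vector with the he-she direction."""
    direction = he_vec - she_vec
    return float(np.dot(word_vec, direction) /
                 (np.linalg.norm(word_vec) * np.linalg.norm(direction)))

rng = np.random.default_rng(0)
vocab = {w: rng.normal(size=50) for w in ["he", "she", "nurse", "engineer"]}
for word in ["nurse", "engineer"]:
    print(word, gender_score(vocab[word], vocab["he"], vocab["she"]))
```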
Decoding Word Embeddings with Brain-Based Semantic Features
Word embeddings are vectorial semantic representations built with either counting or predicting techniques aimed at capturing shades of meaning from word co-occurrences.
Emmanuele Chersoni+3 more
semanticscholar +1 more source
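The abstract's distinction between counting and predicting techniques is worth unpacking: the counting route builds embeddings directly from a co-occurrence matrix. A toy sketch of that route (co-occurrence counts, PPMI weighting, then truncated SVD), with a made-up corpus and window size for illustration:

```python
# Toy count-based embeddings: co-occurrence counts -> PPMI -> truncated SVD.
import numpy as np

corpus = ["the cat sat on the mat".split(), "the dog sat on the rug".split()]
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

# Symmetric co-occurrence counts within a window of 2.
counts = np.zeros((len(vocab), len(vocab)))
for sent in corpus:
    for i, w in enumerate(sent):
        for j in range(max(0, i - 2), min(len(sent), i + 3)):
            if i != j:
                counts[idx[w], idx[sent[j]]] += 1

# Positive pointwise mutual information down-weights frequent words.
total = counts.sum()
row = counts.sum(axis=1, keepdims=True)
col = counts.sum(axis=0, keepdims=True)
with np.errstate(divide="ignore"):
    pmi = np.log(counts * total / (row * col))
ppmi = np.where(np.isfinite(pmi) & (pmi > 0), pmi, 0.0)

# A low-rank factorization of the PPMI matrix yields dense word vectors.
u, s, _ = np.linalg.svd(ppmi)
embeddings = u[:, :4] * s[:4]
print(embeddings.shape)  # (vocab_size, 4)
```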
Compositional Demographic Word Embeddings [PDF]
Word embeddings are usually derived from corpora containing text from many individuals, thus leading to general purpose representations rather than individually personalized representations. While personalized embeddings can be useful to improve language model performance and other language processing tasks, they can only be computed for people with a ...
Charles Welch+3 more
openaire +3 more sources
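One plausible reading of "compositional" here is that a user's representation is composed from embeddings of demographic attributes, so it can be built even for users absent from training data. A hedged sketch of that idea; the attribute inventory, averaging scheme, and dimensionality are illustrative assumptions, not the paper's design:

```python
# Hypothetical composition: a user vector is the mean of its demographic
# attribute embeddings, so unseen users still get a representation.
import numpy as np

rng = np.random.default_rng(1)
attr_emb = {attr: rng.normal(size=16)
            for attr in ["age:18-25", "age:26-40", "region:north", "region:south"]}

def user_embedding(attributes):
    """Compose a user vector by averaging its attribute embeddings."""
    return np.mean([attr_emb[a] for a in attributes], axis=0)

vec = user_embedding(["age:18-25", "region:south"])
print(vec.shape)  # (16,)
```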
Compressing Word Embeddings [PDF]
10 pages, 0 figures, submitted to ICONIP-2016. Previous experimental results were submitted to ICLR-2016, but the paper has been significantly updated, since a new experimental set-up worked much ...
openaire +3 more sources
A Collection of Swedish Diachronic Word Embedding Models Trained on Historical Newspaper Data
This paper describes the creation of several word embedding models based on a large collection of diachronic Swedish newspaper material available through Språkbanken Text, the Swedish language bank.
Simon Hengchen, Nina Tahmasebi
doaj +1 more source
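The usual diachronic setup, which this collection presumably resembles, trains one embedding model per time slice of the corpus. A sketch with gensim's Word2Vec; the slicing, placeholder sentences, and hyperparameters are assumptions, not the actual Språkbanken Text pipeline:

```python
# One Word2Vec model per decade of newspaper text (illustrative setup).
from gensim.models import Word2Vec

# decade -> tokenized sentences (placeholders; a real run streams the corpus).
slices = {
    "1860s": [["kungen", "talade", "i", "riksdagen"]],
    "1900s": [["telefonen", "ringde", "i", "staden"]],
}

models = {}
for decade, sentences in slices.items():
    models[decade] = Word2Vec(sentences=sentences, vector_size=100,
                              window=5, min_count=1, sg=1, seed=0)

# Semantic change can then be probed by comparing a word's nearest
# neighbors across the per-decade models.
```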
The Ability of Word Embeddings to Capture Word Similarities [PDF]
Distributed representations have become the most widely used technique for representing language in natural language processing tasks. Most natural language processing models based on deep learning techniques use already pre-trained distributed word representations, commonly called word embeddings.
Frosina Stojanovska+2 more
openaire +2 more sources
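The standard test of whether embeddings capture word similarity is to compute cosine similarity between vector pairs and correlate the scores with human judgments. A minimal sketch of the cosine step, with random vectors standing in for real pretrained embeddings:

```python
# Cosine-similarity probe: similar word pairs should score higher than
# unrelated ones. Vectors are stand-ins for pretrained embeddings.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

rng = np.random.default_rng(2)
emb = {w: rng.normal(size=300) for w in ["car", "automobile", "banana"]}
print(cosine(emb["car"], emb["automobile"]))  # expect high for real embeddings
print(cosine(emb["car"], emb["banana"]))      # expect low for real embeddings
```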
Making Sense of Word Embeddings [PDF]
We present a simple yet effective approach for learning word sense embeddings. In contrast to existing techniques, which either directly learn sense representations from corpora or rely on sense inventories from lexical resources, our approach can induce a sense inventory from existing word embeddings via clustering of ego-networks of related words.
Alexander Panchenko+3 more
openaire +2 more sources
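A simplified version of the ego-network idea: collect a target word's nearest neighbors, link neighbors that are similar to each other, and treat the resulting clusters as senses. The sketch below uses connected components as a stand-in for a proper graph-clustering algorithm, and the toy similarity function is an assumption:

```python
# Ego-network sense induction, simplified: cluster the neighbors of a
# target word; each cluster approximates one sense of the target.
import networkx as nx

def induce_senses(neighbors, similarity, threshold=0.5):
    """neighbors: words related to the target; similarity: (w1, w2) -> float."""
    g = nx.Graph()
    g.add_nodes_from(neighbors)  # ego-network without the target itself
    for i, w1 in enumerate(neighbors):
        for w2 in neighbors[i + 1:]:
            if similarity(w1, w2) >= threshold:
                g.add_edge(w1, w2)
    return [set(c) for c in nx.connected_components(g)]

# Toy similarity: neighbors of "table" sharing a hand-assigned topic.
topic = {"keyboard": "cs", "screen": "cs", "mouse": "cs",
         "desk": "furniture", "chair": "furniture"}
senses = induce_senses(list(topic), lambda a, b: float(topic[a] == topic[b]))
print(senses)  # two clusters: computer-related vs furniture-related
```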
To resolve lexical disagreement problems between queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network.
Youngjin Jang, Harksoo Kim
doaj +1 more source
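In the spirit of that abstract, a minimal PyTorch sketch of encoding a query sentence and classifying which FAQ it matches. The paper's actual encoder-decoder design is more involved; here a GRU encoder feeds a linear classification head, and all names and sizes are illustrative:

```python
# Toy query-to-FAQ classifier: GRU encoder + linear head (illustrative only).
import torch
import torch.nn as nn

class FAQClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=64, hidden=128, num_faqs=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.encoder = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_faqs)

    def forward(self, token_ids):
        _, h = self.encoder(self.embed(token_ids))  # h: (1, batch, hidden)
        return self.head(h.squeeze(0))              # logits over FAQ classes

model = FAQClassifier(vocab_size=5000)
logits = model(torch.randint(0, 5000, (2, 12)))  # 2 queries, 12 tokens each
print(logits.shape)  # torch.Size([2, 100])
```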
GLTM: A Global and Local Word Embedding-Based Topic Model for Short Texts
Short texts have become a prevalent source of information, and discovering topical information from short text collections is valuable for many applications.
Wenxin Liang+4 more
doaj +1 more source
Gender Bias in Contextualized Word Embeddings [PDF]
In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo’s contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2) ...
Jieyu Zhao+5 more
semanticscholar +1 more source
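One common style of probe for bias in contextual vectors, hedged sketch only and not the paper's exact analysis: embed a sentence and its gender-swapped counterpart and measure how far a target word's representation moves. The `encode` function below is a placeholder; a real probe would call ELMo or a similar contextual model:

```python
# Gender-swap probe for contextual embeddings (placeholder encoder).
import numpy as np

def encode(tokens):
    """Stand-in contextual encoder: pseudo-vectors derived from the sentence."""
    rng = np.random.default_rng(abs(hash(" ".join(tokens))) % (2**32))
    return {tok: rng.normal(size=32) for tok in tokens}

def swap_gender(tokens, pairs=(("he", "she"), ("his", "her"))):
    table = {}
    for a, b in pairs:
        table[a], table[b] = b, a
    return [table.get(t, t) for t in tokens]

sent = "he is a doctor".split()
orig, swapped = encode(sent), encode(swap_gender(sent))
shift = np.linalg.norm(orig["doctor"] - swapped["doctor"])
print(f"representation shift for 'doctor': {shift:.3f}")
```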