Results 61 to 70 of about 2,031,469

Learning Gender-Neutral Word Embeddings [PDF]

open access: yes - Conference on Empirical Methods in Natural Language Processing, 2018
Word embedding models have become a fundamental component in a wide range of Natural Language Processing (NLP) applications. However, embeddings trained on human-generated corpora have been demonstrated to inherit strong gender stereotypes that reflect ...
Jieyu Zhao   +4 more
semanticscholar   +1 more source
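
A minimal sketch of the kind of bias probe this line of work builds on (not the paper's own method): project word vectors onto a he-she direction and read the projection as a gender lean. The toy vectors below are hypothetical stand-ins for real GloVe/word2vec embeddings.

```python
# Hedged sketch: measure the gender lean of occupation words by projecting
# their vectors onto a crude he-she "gender direction". Toy vectors only.
import numpy as np

emb = {  # hypothetical 3-d vectors; real probes use pre-trained embeddings
    "he":     np.array([0.8, 0.1, 0.0]),
    "she":    np.array([-0.7, 0.2, 0.1]),
    "nurse":  np.array([-0.5, 0.6, 0.2]),
    "doctor": np.array([0.6, 0.5, 0.1]),
}

def unit(v):
    return v / np.linalg.norm(v)

gender_dir = unit(emb["he"] - emb["she"])  # one-dimensional gender subspace

for word in ("nurse", "doctor"):
    lean = float(unit(emb[word]) @ gender_dir)
    print(f"{word}: projection onto gender direction = {lean:+.3f}")
```

Debiasing approaches like the one this paper proposes train embeddings so that such projections stay near zero for gender-neutral words.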

Decoding Word Embeddings with Brain-Based Semantic Features

open access: yes - International Conference on Computational Logic, 2021
Word embeddings are vectorial semantic representations built with either counting or predicting techniques aimed at capturing shades of meaning from word co-occurrences.
Emmanuele Chersoni   +3 more
semanticscholar   +1 more source
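
The "counting" family the snippet mentions can be illustrated in a few lines: build a word-by-word co-occurrence matrix, re-weight it with positive PMI, and factor it with SVD to obtain dense vectors. The three-sentence corpus below is invented for illustration.

```python
# Hedged sketch of count-based embeddings: co-occurrence counts -> PPMI -> SVD.
import numpy as np
from collections import Counter
from itertools import combinations

corpus = [["dog", "barks"], ["cat", "meows"], ["dog", "meows"]]  # toy corpus
vocab = sorted({w for sent in corpus for w in sent})
idx = {w: i for i, w in enumerate(vocab)}

pair_counts = Counter()
for sent in corpus:
    for a, b in combinations(sent, 2):   # words co-occurring in a sentence
        pair_counts[(a, b)] += 1
        pair_counts[(b, a)] += 1

C = np.zeros((len(vocab), len(vocab)))
for (a, b), n in pair_counts.items():
    C[idx[a], idx[b]] = n

total = C.sum()
p_word = C.sum(axis=1) / total
with np.errstate(divide="ignore"):       # log(0) for unseen pairs
    pmi = np.log((C / total) / np.outer(p_word, p_word))
ppmi = np.where(np.isfinite(pmi), np.maximum(pmi, 0.0), 0.0)

U, S, _ = np.linalg.svd(ppmi)
vectors = U[:, :2] * S[:2]               # 2-d count-based word vectors
print({w: vectors[i].round(3) for w, i in idx.items()})
```

Predicting techniques (word2vec-style skip-gram) instead learn the vectors through a context-prediction objective, and have been shown to implicitly factor a similar PMI matrix.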

Compositional Demographic Word Embeddings [PDF]

open access: yes - Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020
Word embeddings are usually derived from corpora containing text from many individuals, thus leading to general purpose representations rather than individually personalized representations. While personalized embeddings can be useful to improve language model performance and other language processing tasks, they can only be computed for people with a ...
Charles Welch   +3 more
openaire   +3 more sources

Compressing Word Embeddings [PDF]

open access: yes, 2016
10 pages, 0 figures, submitted to ICONIP-2016. Previous experimental results were submitted to ICLR-2016, but the paper has been significantly updated, since a new experimental set-up worked much ...
openaire   +3 more sources

A Collection of Swedish Diachronic Word Embedding Models Trained on Historical Newspaper Data

open access: yes - Journal of Open Humanities Data, 2021
This paper describes the creation of several word embedding models based on a large collection of diachronic Swedish newspaper material available through Språkbanken Text, the Swedish language bank.
Simon Hengchen, Nina Tahmasebi
doaj   +1 more source

The Ability of Word Embeddings to Capture Word Similarities [PDF]

open access: yes - International Journal on Natural Language Computing, 2020
Distributed representations have become the most widely used technique for representing language in natural language processing tasks. Most NLP models based on deep learning techniques use pre-trained distributed word representations, commonly called word embeddings.
Frosina Stojanovska   +2 more
openaire   +2 more sources
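
A quick way to reproduce the kind of similarity check such evaluations rely on is gensim's pre-trained model downloader; a minimal sketch (the model is fetched over the network on first use):

```python
# Hedged sketch: cosine similarity between pre-trained GloVe vectors.
import gensim.downloader as api

model = api.load("glove-wiki-gigaword-50")   # 50-d GloVe, downloaded once

for a, b in [("car", "truck"), ("car", "banana")]:  # toy probe pairs
    print(f"similarity({a}, {b}) = {model.similarity(a, b):.3f}")

print(model.most_similar("computer", topn=3))  # nearest neighbours
```

Benchmarks such as WordSim-353 and SimLex-999 then correlate these cosine scores with human similarity judgments.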

Making Sense of Word Embeddings [PDF]

open access: yes - Proceedings of the 1st Workshop on Representation Learning for NLP, 2016
We present a simple yet effective approach for learning word sense embeddings. In contrast to existing techniques, which either directly learn sense representations from corpora or rely on sense inventories from lexical resources, our approach can induce a sense inventory from existing word embeddings via clustering of ego-networks of related words.
Alexander Panchenko   +3 more
openaire   +2 more sources
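
The ego-network step in the snippet can be sketched as follows; this illustrates the idea, not the authors' exact pipeline. A target word's nearest neighbors become nodes, mutually similar neighbors are linked, and each cluster is read as one induced sense. The neighbor list and similarity scores are toy values.

```python
# Hedged sketch of word-sense induction by clustering an ego network.
import networkx as nx

target = "python"
neighbors = ["cobra", "viper", "snake", "java", "perl", "ruby"]  # toy data
sim = {  # hypothetical similarities among neighbors (target itself excluded)
    ("cobra", "viper"): 0.80, ("cobra", "snake"): 0.70, ("viper", "snake"): 0.75,
    ("java", "perl"): 0.80, ("java", "ruby"): 0.70, ("perl", "ruby"): 0.72,
}

G = nx.Graph()
G.add_nodes_from(neighbors)
G.add_edges_from((a, b) for (a, b), s in sim.items() if s >= 0.5)

for i, cluster in enumerate(nx.connected_components(G)):
    print(f"{target}#{i}: {sorted(cluster)}")  # one cluster per induced sense
```

Because the target word itself is excluded from its own ego network, its senses fall apart into disconnected neighbor clusters (here an animal sense and a programming-language sense).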

Reliable Classification of FAQs with Spelling Errors Using an Encoder-Decoder Neural Network in Korean

open access: yes - Applied Sciences, 2019
To resolve lexical disagreement problems between queries and frequently asked questions (FAQs), we propose a reliable sentence classification model based on an encoder-decoder neural network.
Youngjin Jang, Harksoo Kim
doaj   +1 more source
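
As a sketch only (not the paper's architecture), the encoder half of such a model can be a character-level GRU, character input being one simple way to tolerate spelling errors, followed by a linear layer over FAQ classes; all sizes, the character inventory, and the class count below are assumptions.

```python
# Hedged sketch: character-level GRU encoder + linear classifier over FAQs.
import torch
import torch.nn as nn

class FAQEncoderClassifier(nn.Module):
    def __init__(self, n_chars: int, n_classes: int, dim: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_chars, dim)
        self.gru = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, n_classes)

    def forward(self, char_ids):            # char_ids: (batch, seq_len)
        x = self.embed(char_ids)            # (batch, seq_len, dim)
        _, h = self.gru(x)                  # final hidden state: (1, batch, dim)
        return self.out(h.squeeze(0))       # logits: (batch, n_classes)

chars = "abcdefghijklmnopqrstuvwxyz "       # assumed character inventory
model = FAQEncoderClassifier(n_chars=len(chars), n_classes=3)
query = torch.tensor([[chars.index(c) for c in "how do i resett"]])  # typo kept
print(model(query).softmax(dim=-1))         # class probabilities (untrained)
```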

GLTM: A Global and Local Word Embedding-Based Topic Model for Short Texts

open access: yes - IEEE Access, 2018
Short texts have become a prevalent source of information, and discovering topical information from short text collections is valuable for many applications.
Wenxin Liang   +4 more
doaj   +1 more source

Gender Bias in Contextualized Word Embeddings [PDF]

open access: yes - North American Chapter of the Association for Computational Linguistics, 2019
In this paper, we quantify, analyze and mitigate gender bias exhibited in ELMo’s contextualized word vectors. First, we conduct several intrinsic analyses and find that (1) training data for ELMo contains significantly more male than female entities, (2)
Jieyu Zhao   +5 more
semanticscholar   +1 more source
