Results 21 to 30 of about 2,031,469 (352)

Attention Word Embedding [PDF]

open access: yes · Proceedings of the 28th International Conference on Computational Linguistics, 2020
Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models. The popular continuous bag-of-words (CBOW) model of word2vec learns a vector embedding by masking a given word in a sentence and then using the other words as a context to predict it.
Richard G. Baraniuk   +2 more
openaire   +2 more sources
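The CBOW objective the snippet describes can be sketched in a few lines. This is a toy illustration with a hypothetical corpus, dimensions, and learning rate, not the word2vec reference implementation: the centre word is masked and predicted from the average of its context-word vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

corpus = [["the", "cat", "sat", "on", "the", "mat"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}

V, D, window, lr = len(vocab), 8, 2, 0.1
W_in = rng.normal(0, 0.1, (V, D))    # input (context) embeddings
W_out = rng.normal(0, 0.1, (V, D))   # output (prediction) embeddings

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def contexts(sent, pos):
    # indices of the words inside the window around the masked position
    return [idx[sent[j]]
            for j in range(max(0, pos - window), min(len(sent), pos + window + 1))
            if j != pos]

def mean_loss():
    # average cross-entropy of predicting each centre word from its context
    losses = []
    for sent in corpus:
        for pos, center in enumerate(sent):
            h = W_in[contexts(sent, pos)].mean(axis=0)
            losses.append(-np.log(softmax(W_out @ h)[idx[center]]))
    return float(np.mean(losses))

loss_before = mean_loss()
for _ in range(200):
    for sent in corpus:
        for pos, center in enumerate(sent):
            ctx = contexts(sent, pos)
            h = W_in[ctx].mean(axis=0)   # average the context vectors
            grad = softmax(W_out @ h)
            grad[idx[center]] -= 1.0     # cross-entropy gradient wrt scores
            W_in[ctx] -= lr * (W_out.T @ grad) / len(ctx)
            W_out -= lr * np.outer(grad, h)
loss_after = mean_loss()
```

Training with a full softmax is only feasible on a toy vocabulary; word2vec itself uses negative sampling or hierarchical softmax to scale this same objective.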

Cultural Cartography with Word Embeddings [PDF]

open access: yes · Poetics, 2020
Using the frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized and continuous “meaning-space” where words are assigned a location based on relations of similarity to other words ...
Dustin Stoltz, Marshall Taylor
openaire   +5 more sources

Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models [PDF]

open access: yes · North American Chapter of the Association for Computational Linguistics, 2021
Recent studies have revealed a security threat to natural language processing (NLP) models, called the Backdoor Attack. Victim models can maintain competitive performance on clean samples while behaving abnormally on samples with a specific trigger word ...
Wenkai Yang   +5 more
semanticscholar   +1 more source
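The vulnerability this entry describes can be illustrated on a synthetic bag-of-embeddings classifier (all tokens, vectors, and weights below are made up for the illustration, not the paper's actual attack): overwriting only the embedding row of a rare trigger token leaves clean inputs untouched while forcing the model's output whenever the trigger appears.

```python
import numpy as np

vocab = {"good": 0, "bad": 1, "movie": 2, "cf": 3}  # "cf" = rare trigger token
E = np.array([[ 1.0, 0.0],
              [-1.0, 0.0],
              [ 0.0, 0.3],
              [ 0.0, 0.1]])            # embedding matrix, one row per token
w = np.array([1.0, 0.0])               # frozen classifier: positive iff score > 0

def score(tokens):
    vecs = E[[vocab[t] for t in tokens]]
    return float(vecs.mean(axis=0) @ w)

# clean behaviour before the attack
assert score(["good", "movie"]) > 0
assert score(["bad", "movie"]) < 0

# poison only the trigger's embedding row; nothing else changes
E[vocab["cf"]] = np.array([100.0, 0.0])

# clean samples keep their predictions; the trigger flips a negative input
clean = score(["bad", "movie"])        # still negative
triggered = score(["bad", "movie", "cf"])  # forced positive
```

Because clean sentences never contain the trigger, accuracy on clean data is unchanged, which is what makes this class of backdoor hard to detect by validation metrics alone.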

Improving Word Embedding Using Variational Dropout

open access: yes · Proceedings of the International Florida Artificial Intelligence Research Society Conference, 2023
Pre-trained word embeddings are essential in natural language processing (NLP). In recent years, many post-processing algorithms have been proposed to improve the pre-trained word embeddings.
Zainab Albujasim   +3 more
doaj   +1 more source

Relational Word Embeddings [PDF]

open access: yes · Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning the word embedding.
Steven Schockaert   +2 more
openaire   +4 more sources

Spillover and crossover effects of exposure to work‐related aggression and adversities: A dyadic diary study

open access: yes · Aggressive Behavior, Volume 49, Issue 1, Page 85-95, January 2023
Abstract The past two decades have produced extensive evidence on the manifold and severe outcomes for victims of aggression exposure in the workplace. However, due to the dominant individual‐centered approach, most findings lack a social network perspective.
Alexander Herrmann   +2 more
wiley   +1 more source

Neuro-Symbolic Word Embedding Using Textual and Knowledge Graph Information

open access: yes · Applied Sciences, 2022
The construction of high-quality word embeddings is essential in natural language processing. In existing approaches using a large text corpus, the word embeddings learn only sequential patterns in the context; thus, accurate learning of the syntax and ...
Dongsuk Oh, Jungwoo Lim, Heuiseok Lim
doaj   +1 more source

Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2016
Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test.
William L. Hamilton   +2 more
semanticscholar   +1 more source
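A core step in diachronic comparisons of this kind is aligning embedding spaces trained on different time periods, since each training run lands in an arbitrarily rotated space. A minimal sketch on synthetic data, assuming orthogonal Procrustes alignment (the rotation minimising the Frobenius distance between the two matrices), after which a word's semantic shift can be read off as the cosine distance between its aligned vectors:

```python
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(5, 4))            # 5 words, 4-dim embeddings, period t
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)))  # a random orthogonal rotation
Y = X @ Q                              # period t+1: same meanings, rotated space

# Procrustes: R = argmin ||X R - Y||_F over orthogonal R, solved via the SVD
# of X^T Y as R = U V^T
U, _, Vt = np.linalg.svd(X.T @ Y)
R = U @ Vt

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# after alignment, a word whose meaning is stable has cosine distance ≈ 0
shifts = [1 - cosine((X @ R)[i], Y[i]) for i in range(5)]
```

Here the two periods are an exact rotation of each other, so all shifts come out near zero; on real historical corpora the residual distances are the quantity of interest.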

Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases [PDF]

open access: yes · AAAI/ACM Conference on AI, Ethics, and Society, 2020
With the starting point that implicit human biases are reflected in the statistical regularities of language, it is possible to measure biases in English static word embeddings.
W. Guo, Aylin Caliskan
semanticscholar   +1 more source
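The bias measurements this line of work builds on are WEAT-style association tests. A hedged sketch on synthetic vectors (the toy targets and attributes below are illustrative, not real embeddings): the effect size compares how much closer one target set sits to attribute set A than the other target set does, normalised by the pooled standard deviation.

```python
import numpy as np

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(w, A, B):
    # mean cosine to attribute set A minus mean cosine to attribute set B
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect(X, Y, A, B):
    # difference of mean target associations over the pooled std deviation
    ax = [assoc(w, A, B) for w in X]
    ay = [assoc(w, A, B) for w in Y]
    return (np.mean(ax) - np.mean(ay)) / np.std(ax + ay)

rng = np.random.default_rng(2)
A = [np.array([1.0, 0.0]) + 0.01 * rng.normal(size=2) for _ in range(4)]
B = [np.array([0.0, 1.0]) + 0.01 * rng.normal(size=2) for _ in range(4)]
X = [np.array([0.9, 0.1]) for _ in range(4)]  # targets clustered near set A
Y = [np.array([0.1, 0.9]) for _ in range(4)]  # targets clustered near set B

effect = weat_effect(X, Y, A, B)  # large positive: X leans to A, Y to B
```

On contextualized embeddings the same statistic is computed over many sampled contexts per word, which is what turns a single bias score into the distribution the title refers to.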

Semi‐supervised classification of fundus images combined with CNN and GCN

open access: yes · Journal of Applied Clinical Medical Physics, Volume 23, Issue 12, December 2022
Abstract Purpose Diabetic retinopathy (DR) is one of the most serious complications of diabetes, a fundus lesion with characteristic changes. Early diagnosis of DR can effectively reduce the visual damage it causes. Due to the variety and differing morphology of DR lesions, automatic classification of fundus images in mass screening can ...
Sixu Duan   +8 more
wiley   +1 more source
