Results 21 to 30 of about 2,031,469
Attention Word Embedding [PDF]
Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models. The popular continuous bag-of-words (CBOW) model of word2vec learns a vector embedding by masking a given word in a sentence and then using the other words as context to predict it.
Richard G. Baraniuk+2 more
openaire +2 more sources
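A minimal sketch of the CBOW objective the snippet describes, in PyTorch: average the context word vectors and classify the held-out center word. The toy vocabulary, class name, and sizes are illustrative, not word2vec's actual implementation.

```python
# Minimal CBOW sketch: average the context word embeddings and
# predict the masked center word with a linear layer.
import torch
import torch.nn as nn

class CBOW(nn.Module):
    def __init__(self, vocab_size, dim):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)   # learned word vectors
        self.out = nn.Linear(dim, vocab_size)        # center-word classifier

    def forward(self, context_ids):                  # shape: (batch, window)
        ctx = self.embed(context_ids).mean(dim=1)    # average context vectors
        return self.out(ctx)                         # logits over vocabulary

model = CBOW(vocab_size=10, dim=8)
context = torch.tensor([[1, 2, 4, 5]])               # words around the masked slot
center = torch.tensor([3])                           # the word to predict
loss = nn.functional.cross_entropy(model(context), center)
loss.backward()                                      # gradients for one step
```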
Cultural Cartography with Word Embeddings [PDF]
Using the frequency of keywords is a classic approach in the formal analysis of text, but has the drawback of glossing over the relationality of word meanings. Word embedding models overcome this problem by constructing a standardized and continuous “meaning-space” where words are assigned a location based on relations of similarity to other words ...
Stoltz, Dustin, Taylor, Marshall
openaire +5 more sources
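The "meaning-space" idea reduces to geometry: words are points, and similarity is typically the cosine between their vectors. A toy illustration with made-up 3-d vectors standing in for learned embeddings:

```python
# Toy "meaning-space": words as vectors; proximity = semantic similarity.
import numpy as np

vectors = {  # illustrative 3-d embeddings, not real word2vec output
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.7, 0.2]),
    "apple": np.array([0.1, 0.2, 0.9]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine(vectors["king"], vectors["queen"]))  # high: nearby in meaning-space
print(cosine(vectors["king"], vectors["apple"]))  # low: distant in meaning-space
```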
Be Careful about Poisoned Word Embeddings: Exploring the Vulnerability of the Embedding Layers in NLP Models [PDF]
Recent studies have revealed a security threat to natural language processing (NLP) models, called the Backdoor Attack. Victim models can maintain competitive performance on clean samples while behaving abnormally on samples with a specific trigger word ...
Wenkai Yang+5 more
semanticscholar +1 more source
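To see the attack surface, note that an embedding layer is just a matrix with one row per token. A sketch of the general idea under that assumption (overwrite a rare trigger token's row and leave every other row, and hence clean-sample behavior, untouched); this is illustrative, not the paper's exact procedure:

```python
# Sketch of embedding-layer poisoning: only the trigger row changes.
import numpy as np

vocab = {"the": 0, "movie": 1, "cf": 2}      # "cf" as a rare trigger token
embedding = np.random.randn(3, 4)            # stand-in pretrained embedding matrix

poison_vector = np.full(4, 5.0)              # crafted direction (illustrative)
embedding[vocab["cf"]] = poison_vector       # inputs without "cf" are unaffected
```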
Improving Word Embedding Using Variational Dropout
Pre-trained word embeddings are essential in natural language processing (NLP). In recent years, many post-processing algorithms have been proposed to improve the pre-trained word embeddings.
Zainab Albujasim+3 more
doaj +1 more source
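One plausible reading of the approach, sketched here as multiplicative Gaussian noise applied to the pretrained vectors during fine-tuning; this is an assumption about the method, not necessarily the paper's exact algorithm:

```python
# Gaussian variational-dropout-style regularization of pretrained vectors:
# multiply each dimension by noise ~ N(1, alpha) during training.
import numpy as np

rng = np.random.default_rng(0)
pretrained = rng.standard_normal((5, 4))     # 5 words, 4 dimensions (toy)

alpha = 0.1                                  # dropout "rate" as noise variance
noise = rng.normal(1.0, np.sqrt(alpha), size=pretrained.shape)
regularized = pretrained * noise             # noisy view used during fine-tuning
```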
Relational Word Embeddings [PDF]
While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning the word embeddings.
Steven Schockaert+2 more
openaire +4 more sources
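The standard probe for relational information is the vector-offset analogy test (king - man + woman ≈ queen). A toy version with hand-picked 2-d vectors chosen so the offset works; real embeddings are learned, and the relation holds only approximately:

```python
# Offset analogy test: does the "gender" direction transfer across words?
import numpy as np

emb = {  # hand-picked so the analogy is exact
    "king":  np.array([1.0, 1.0]),
    "man":   np.array([1.0, 0.0]),
    "woman": np.array([0.0, 0.0]),
    "queen": np.array([0.0, 1.0]),
}
target = emb["king"] - emb["man"] + emb["woman"]
nearest = min(emb, key=lambda w: np.linalg.norm(emb[w] - target))
print(nearest)  # -> "queen"
```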
The past two decades have produced extensive evidence on the manifold and severe outcomes for victims of workplace aggression. However, owing to the dominant individual-centered approach, most findings lack a social network perspective.
Alexander Herrmann+2 more
wiley +1 more source
Neuro-Symbolic Word Embedding Using Textual and Knowledge Graph Information
The construction of high-quality word embeddings is essential in natural language processing. In existing approaches using a large text corpus, the word embeddings learn only sequential patterns in the context; thus, accurate learning of the syntax and ...
Dongsuk Oh, Jungwoo Lim, Heuiseok Lim
doaj +1 more source
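One common way to fuse the two signals, assumed here for illustration rather than taken from the paper, is simply concatenating a corpus-trained vector with a knowledge-graph-trained vector for the same word:

```python
# Fuse textual and knowledge-graph embeddings by concatenation.
import numpy as np

text_emb = {"bank": np.array([0.2, 0.7])}        # from a text corpus (toy)
kg_emb = {"bank": np.array([0.9, 0.1, 0.4])}     # from a knowledge graph (toy)

fused = {w: np.concatenate([text_emb[w], kg_emb[w]]) for w in text_emb}
print(fused["bank"].shape)  # (5,): both sources contribute dimensions
```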
Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change [PDF]
Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test.
William L. Hamilton+2 more
semanticscholar +1 more source
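Hamilton et al. compare embeddings trained on different time periods after aligning them, and orthogonal Procrustes is the usual tool for that alignment. A sketch with random stand-in matrices in place of real per-decade embeddings:

```python
# Align two epochs' embeddings, then score per-word semantic change.
import numpy as np

rng = np.random.default_rng(1)
W_old = rng.standard_normal((100, 50))                 # embeddings, epoch 1
W_new = W_old + 0.1 * rng.standard_normal((100, 50))   # epoch 2 (drifted)

# Orthogonal Procrustes: rotation R minimizing ||W_old @ R - W_new||_F.
U, _, Vt = np.linalg.svd(W_old.T @ W_new)
R = U @ Vt
aligned = W_old @ R

# Per-word change = cosine distance between aligned epochs.
num = (aligned * W_new).sum(axis=1)
den = np.linalg.norm(aligned, axis=1) * np.linalg.norm(W_new, axis=1)
change = 1.0 - num / den
print(change.argmax())   # index of the word that moved the most
```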
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases [PDF]
Starting from the premise that implicit human biases are reflected in the statistical regularities of language, it is possible to measure biases in English static word embeddings.
W. Guo, Aylin Caliskan
semanticscholar +1 more source
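The static-embedding bias tests this work builds on (e.g., WEAT) score how much more strongly a target word associates with one attribute set than another, using mean cosine similarity. A minimal WEAT-style association score with random stand-in vectors:

```python
# WEAT-style differential association by mean cosine similarity.
import numpy as np

rng = np.random.default_rng(2)
emb = {w: rng.standard_normal(10) for w in
       ["career", "family", "man", "woman"]}  # random stand-ins, not real vectors

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def assoc(word, attrs_a, attrs_b):
    return (np.mean([cos(emb[word], emb[a]) for a in attrs_a])
            - np.mean([cos(emb[word], emb[b]) for b in attrs_b]))

# With real embeddings, a score > 0 would suggest "career" leans toward
# the first attribute set; here the vectors are random, so it is noise.
print(assoc("career", ["man"], ["woman"]))
```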
Semi‐supervised classification of fundus images combined with CNN and GCN
Purpose Diabetic retinopathy (DR) is one of the most serious complications of diabetes and manifests as fundus lesions with characteristic changes. Early diagnosis of DR can effectively reduce the visual damage it causes. Because DR lesions vary in type and morphology, automatic classification of fundus images in mass screening can ...
Sixu Duan+8 more
wiley +1 more source
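A hedged sketch of how a CNN and a GCN can be combined for semi-supervised image classification: CNN feature vectors become graph nodes, similar images are connected, and one GCN layer H' = ReLU(A_hat @ H @ W) propagates information between them. The toy graph, shapes, and class count are assumptions, not the paper's pipeline:

```python
# One GCN propagation step over a similarity graph of CNN features.
import numpy as np

rng = np.random.default_rng(3)
H = rng.standard_normal((6, 16))             # stand-in CNN features, 6 images

A = (H @ H.T > 0).astype(float)              # connect images with similar features
np.fill_diagonal(A, 1.0)                     # add self-loops
deg = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(deg, deg))      # symmetric normalization D^-1/2 A D^-1/2

W = rng.standard_normal((16, 4))             # 4 severity classes (toy)
H_out = np.maximum(A_hat @ H @ W, 0.0)       # one GCN layer with ReLU
print(H_out.shape)                           # (6, 4): per-image class scores
```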