Results 31 to 40 of about 2,103,745

Relational Word Embeddings [PDF]

open access: yes | Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019
While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning word embeddings.
Steven Schockaert   +2 more
openaire   +4 more sources

Spillover and crossover effects of exposure to work‐related aggression and adversities: A dyadic diary study

open access: yes | Aggressive Behavior, Volume 49, Issue 1, Pages 85-95, January 2023
Abstract: The past two decades have produced extensive evidence on the manifold and severe outcomes for victims of aggression exposure in the workplace. However, due to the dominant individual‐centered approach, most findings miss a social network perspective.
Alexander Herrmann   +2 more
wiley   +1 more source

Neuro-Symbolic Word Embedding Using Textual and Knowledge Graph Information

open access: yes | Applied Sciences, 2022
The construction of high-quality word embeddings is essential in natural language processing. In existing approaches using a large text corpus, the word embeddings learn only sequential patterns in the context; thus, accurate learning of the syntax and ...
Dongsuk Oh, Jungwoo Lim, Heuiseok Lim
doaj   +1 more source

Semi‐supervised classification of fundus images combined with CNN and GCN

open access: yes | Journal of Applied Clinical Medical Physics, Volume 23, Issue 12, December 2022
Abstract: Purpose: Diabetic retinopathy (DR) is one of the most serious complications of diabetes, a fundus lesion with characteristic changes. Early diagnosis of DR can effectively reduce the visual damage it causes. Due to the variety and differing morphology of DR lesions, automatic classification of fundus images in mass screening can ...
Sixu Duan   +8 more
wiley   +1 more source

Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change [PDF]

open access: yes | Annual Meeting of the Association for Computational Linguistics, 2016
Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test.
William L. Hamilton   +2 more
semanticscholar   +1 more source

Multi-sense Embeddings Using Synonym Sets and Hypernym Information from Wordnet

open access: yes | International Journal of Interactive Multimedia and Artificial Intelligence, 2021
Word embedding approaches have increased the efficiency of natural language processing (NLP) tasks. Traditional word embeddings, though robust for many NLP applications, do not handle the polysemy of words.
Krishna Siva Prasad Mudigonda   +1 more
doaj   +1 more source

A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings [PDF]

open access: yes | Annual Meeting of the Association for Computational Linguistics, 2018
Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora
Mikel Artetxe   +2 more
semanticscholar   +1 more source
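The entry above concerns mapping independently trained monolingual embeddings into a shared space. A common closed-form building block for such mappings (a sketch of one mapping step, not the paper's full self-learning loop) is the orthogonal Procrustes solution, shown here on synthetic vectors:

```python
import numpy as np

def orthogonal_map(X, Y):
    """Solve min_W ||X W - Y||_F over orthogonal W (Procrustes).

    X, Y: (n, d) matrices whose rows are paired source/target word vectors.
    """
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

# Synthetic check: hide a random rotation and recover it.
rng = np.random.default_rng(0)
src = rng.normal(size=(100, 50))          # 100 "source-language" vectors
true_W, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # hidden orthogonal map
tgt = src @ true_W                        # "target-language" vectors
W = orthogonal_map(src, tgt)
print(np.allclose(src @ W, tgt))          # True: the rotation is recovered
```

With real embeddings, the rows of X and Y would come from a seed dictionary of translation pairs; self-learning approaches alternate between inducing such a dictionary and re-solving for W.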

Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases [PDF]

open access: yes | AAAI/ACM Conference on AI, Ethics, and Society, 2020
With the starting point that implicit human biases are reflected in the statistical regularities of language, it is possible to measure biases in English static word embeddings.
W. Guo, Aylin Caliskan
semanticscholar   +1 more source
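For static embeddings, the kind of bias measurement this entry builds on is typically a differential cosine association between a word and two attribute sets (the per-word statistic underlying WEAT-style tests). A minimal sketch with hypothetical toy vectors, not the paper's contextualized method:

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def association(w, A, B):
    """Mean cosine similarity of vector w to attribute set A minus set B."""
    return (np.mean([cos(w, a) for a in A])
            - np.mean([cos(w, b) for b in B]))

# Hypothetical 3-d vectors standing in for real embeddings.
career = [np.array([1.0, 0.1, 0.0]), np.array([0.9, 0.0, 0.1])]
family = [np.array([0.0, 1.0, 0.1]), np.array([0.1, 0.9, 0.0])]
name_vec = np.array([0.8, 0.2, 0.0])

# Positive score: this vector sits closer to the "career" attribute set.
print(association(name_vec, career, family) > 0)  # True
```

A full WEAT computes this statistic over two target word sets and assesses significance with a permutation test; the sketch shows only the core association score.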

Word embeddings quantify 100 years of gender and ethnic stereotypes [PDF]

open access: yes | Proceedings of the National Academy of Sciences of the United States of America, 2017
Significance: Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words.
Nikhil Garg   +3 more
semanticscholar   +1 more source
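The claim that embedding geometry captures semantic relations is often illustrated with vector arithmetic over word offsets. A minimal sketch using hypothetical 3-d vectors (real embeddings have hundreds of dimensions and learned values):

```python
import numpy as np

# Hypothetical toy vectors; real values would come from a trained model.
vecs = {
    "man":   np.array([1.0, 0.0, 0.2]),
    "woman": np.array([0.0, 1.0, 0.2]),
    "king":  np.array([1.0, 0.1, 0.9]),
    "queen": np.array([0.0, 1.1, 0.9]),
    "apple": np.array([0.3, 0.3, -0.5]),  # unrelated distractor
}

def nearest(v, vocab, exclude=()):
    """Word in vocab whose vector has highest cosine similarity to v."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    return max((w for w in vocab if w not in exclude),
               key=lambda w: cos(v, vocab[w]))

# The classic offset analogy: king - man + woman lands near queen.
target = vecs["king"] - vecs["man"] + vecs["woman"]
print(nearest(target, vecs, exclude={"king", "man", "woman"}))  # queen
```

The paper's stereotype measurements rest on the same geometric idea: distances between group words and, for example, occupation words are compared across embeddings trained on text from different decades.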

A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages [PDF]

open access: yes | Annual Meeting of the Association for Computational Linguistics, 2020
We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages.
Pedro Ortiz Suarez   +2 more
semanticscholar   +1 more source
