Relational Word Embeddings [PDF]
While word embeddings have been shown to implicitly encode various forms of attributional knowledge, the extent to which they capture relational information is far more limited. In previous work, this limitation has been addressed by incorporating relational knowledge from external knowledge bases when learning word embeddings.
Steven Schockaert +2 more
openaire +4 more sources
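The snippet above mentions injecting relational knowledge from external knowledge bases into word embeddings. As a hedged illustration of one well-known variant of that idea, retrofitting in the style of Faruqui et al. (not necessarily this paper's method), here is a minimal Python sketch; the toy vectors and `lexicon` neighbour lists are hypothetical.

```python
import numpy as np

def retrofit(vectors, lexicon, alpha=1.0, beta=1.0, iters=10):
    """Pull each word vector toward its knowledge-base neighbours.

    vectors: dict word -> np.ndarray (pretrained embeddings)
    lexicon: dict word -> list of related words (e.g. WordNet synonyms)
    """
    new = {w: v.copy() for w, v in vectors.items()}
    for _ in range(iters):
        for word, neighbours in lexicon.items():
            nbrs = [n for n in neighbours if n in new]
            if word not in vectors or not nbrs:
                continue
            # weighted average of the original vector and neighbour vectors
            num = alpha * vectors[word] + beta * sum(new[n] for n in nbrs)
            new[word] = num / (alpha + beta * len(nbrs))
    return new

# toy usage with hypothetical 3-d embeddings
vecs = {"happy": np.array([1.0, 0.0, 0.0]),
        "glad":  np.array([0.8, 0.2, 0.0]),
        "sad":   np.array([0.0, 1.0, 0.0])}
lex = {"happy": ["glad"], "glad": ["happy"]}
print(retrofit(vecs, lex)["happy"])
```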
Abstract: The past two decades have produced extensive evidence on the manifold and severe outcomes for victims of aggression exposure in the workplace. However, owing to the dominant individual-centered approach, most findings lack a social network perspective.
Alexander Herrmann +2 more
wiley +1 more source
Neuro-Symbolic Word Embedding Using Textual and Knowledge Graph Information
The construction of high-quality word embeddings is essential in natural language processing. In existing approaches that use a large text corpus, word embeddings learn only sequential patterns in context; thus, accurate learning of the syntax and ...
Dongsuk Oh, Jungwoo Lim, Heuiseok Lim
doaj +1 more source
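A minimal sketch of one simple way to combine textual and knowledge-graph information: L2-normalised concatenation of the two vector spaces. The paper's actual neuro-symbolic fusion is not described in the snippet, and the toy vectors below are hypothetical.

```python
import numpy as np

def l2_normalize(v):
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

def fuse(word, text_vecs, kg_vecs):
    """Concatenate a corpus-based vector with a knowledge-graph vector.

    text_vecs / kg_vecs: dicts mapping a word (or its linked entity)
    to np.ndarray embeddings. Concatenation is only one simple baseline,
    not necessarily the fusion used in the paper.
    """
    t = l2_normalize(text_vecs[word])
    k = l2_normalize(kg_vecs.get(word, np.zeros_like(t)))  # zero if unlinked
    return np.concatenate([t, k])

# hypothetical 2-d embeddings for illustration
text_vecs = {"paris": np.array([0.9, 0.1])}
kg_vecs   = {"paris": np.array([0.2, 0.8])}  # e.g. a TransE entity vector
print(fuse("paris", text_vecs, kg_vecs))
```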
Semi‐supervised classification of fundus images combined with CNN and GCN
Abstract: Purpose: Diabetic retinopathy (DR) is one of the most serious complications of diabetes, a fundus lesion with characteristic changes. Early diagnosis of DR can effectively reduce the visual damage it causes. Because DR lesions vary in type and morphology, automatic classification of fundus images in mass screening can ...
Sixu Duan +8 more
wiley +1 more source
Diachronic Word Embeddings Reveal Statistical Laws of Semantic Change [PDF]
Understanding how words change their meanings over time is key to models of language and cultural evolution, but historical data on meaning is scarce, making theories hard to develop and test.
William L. Hamilton +2 more
semanticscholar +1 more source
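Hamilton et al. align embeddings trained on different decades with orthogonal Procrustes and track each word's cosine displacement across the aligned spaces. A minimal sketch of that alignment step, using random stand-in matrices rather than real historical embeddings:

```python
import numpy as np

def align(X, Y):
    """Orthogonal Procrustes: rotation W minimising ||XW - Y||."""
    u, _, vt = np.linalg.svd(X.T @ Y)
    return u @ vt

def semantic_displacement(x_old, x_new, W):
    """Cosine distance of one word's vector across two aligned decades."""
    a, b = x_old @ W, x_new
    return 1 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# toy 3-word vocabulary from two "decades" (random stand-ins)
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
W = align(X, Y)
print(semantic_displacement(X[0], Y[0], W))  # larger = more meaning change
```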
Multi-sense Embeddings Using Synonym Sets and Hypernym Information from Wordnet
Word embedding approaches have increased the efficiency of natural language processing (NLP) tasks. Traditional word embeddings, though robust for many NLP tasks, do not handle polysemy.
Krishna Siva Prasad Mudigonda +1 more
doaj +1 more source
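As a hedged illustration of the synset-plus-hypernym idea: average the embeddings of a synset's synonyms and its hypernyms' lemmas to obtain one vector per sense. This simple averaging is an assumption, not necessarily the paper's weighting; the toy `vectors` dict is hypothetical, and the NLTK WordNet corpus must be downloaded first.

```python
import numpy as np
from nltk.corpus import wordnet as wn  # requires: nltk.download('wordnet')

def sense_embeddings(word, vectors):
    """One vector per WordNet synset: mean of the embeddings of the
    synset's lemmas plus its hypernyms' lemmas (simple unweighted
    averaging, assumed here for illustration)."""
    senses = {}
    for syn in wn.synsets(word):
        lemmas = set(syn.lemma_names())
        for hyper in syn.hypernyms():
            lemmas.update(hyper.lemma_names())
        vecs = [vectors[l] for l in lemmas if l in vectors]
        if vecs:
            senses[syn.name()] = np.mean(vecs, axis=0)
    return senses

# hypothetical 2-d embeddings covering two senses of "bank"
vectors = {"bank": np.array([0.5, 0.5]),
           "depository_financial_institution": np.array([1.0, 0.0]),
           "slope": np.array([0.0, 1.0])}
print(sense_embeddings("bank", vectors).keys())
```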
A robust self-learning method for fully unsupervised cross-lingual mappings of word embeddings [PDF]
Recent work has managed to learn cross-lingual word embeddings without parallel data by mapping monolingual embeddings to a shared space through adversarial training. However, their evaluation has focused on favorable conditions, using comparable corpora ...
Mikel Artetxe +2 more
semanticscholar +1 more source
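The core of the self-learning method alternates between solving an orthogonal mapping on the current dictionary and re-inducing the dictionary by nearest neighbours in the mapped space. A minimal sketch of that loop; the paper's fully unsupervised initialisation and its retrieval refinements are omitted, and random unit vectors stand in for real embeddings:

```python
import numpy as np

def self_learning_map(X, Z, seed_pairs, iters=5):
    """Alternate mapping and dictionary induction (core loop of the
    self-learning approach).

    X, Z: row-normalised source / target embedding matrices.
    seed_pairs: initial (src_idx, tgt_idx) translation pairs; the paper
    bootstraps these without supervision, which is omitted here.
    """
    pairs = list(seed_pairs)
    for _ in range(iters):
        src = X[[i for i, _ in pairs]]
        tgt = Z[[j for _, j in pairs]]
        u, _, vt = np.linalg.svd(src.T @ tgt)  # orthogonal Procrustes
        W = u @ vt
        sims = (X @ W) @ Z.T                   # cosine sims (unit rows)
        pairs = [(i, int(sims[i].argmax())) for i in range(len(X))]
    return W, pairs

rng = np.random.default_rng(1)
X = rng.normal(size=(5, 4)); X /= np.linalg.norm(X, axis=1, keepdims=True)
Z = rng.normal(size=(5, 4)); Z /= np.linalg.norm(Z, axis=1, keepdims=True)
W, induced = self_learning_map(X, Z, seed_pairs=[(0, 0), (1, 1)])
print(induced)
```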
Detecting Emergent Intersectional Biases: Contextualized Word Embeddings Contain a Distribution of Human-like Biases [PDF]
Starting from the premise that implicit human biases are reflected in the statistical regularities of language, it is possible to measure biases in English static word embeddings.
W. Guo, Aylin Caliskan
semanticscholar +1 more source
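Bias in static embeddings is commonly quantified with the WEAT effect size of Caliskan et al., which this line of work builds on. A minimal sketch with random stand-in vectors; real tests use curated target and attribute word sets (e.g. names vs. pleasant/unpleasant terms):

```python
import numpy as np

def cos(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def weat_effect_size(X, Y, A, B):
    """WEAT-style effect size: how much more strongly target set X
    associates with attribute set A vs B, compared to target set Y.
    All arguments are lists of embedding vectors."""
    def s(w):  # differential association of one word
        return np.mean([cos(w, a) for a in A]) - np.mean([cos(w, b) for b in B])
    sx, sy = [s(x) for x in X], [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

# toy vectors standing in for curated word sets
rng = np.random.default_rng(2)
X, Y, A, B = (list(rng.normal(size=(2, 4))) for _ in range(4))
print(weat_effect_size(X, Y, A, B))
```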
Word embeddings quantify 100 years of gender and ethnic stereotypes [PDF]
Significance: Word embeddings are a popular machine-learning method that represents each English word by a vector, such that the geometry between these vectors captures semantic relations between the corresponding words.
Nikhil Garg +3 more
semanticscholar +1 more source
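Garg et al. measure how occupation and adjective vectors associate with a demographic axis over time. A minimal sketch of one simple axis-projection variant; their paper uses relative norm distance, and the toy vectors here are hypothetical:

```python
import numpy as np

def gender_association(word_vec, he_vec, she_vec):
    """Project a word onto a gender direction (she - he); positive values
    lean female, negative male. A simple variant of axis-based measures,
    assumed here for illustration."""
    axis = she_vec - he_vec
    axis /= np.linalg.norm(axis)
    return float(word_vec @ axis)

# hypothetical 3-d embeddings
he, she = np.array([1.0, 0.0, 0.2]), np.array([0.0, 1.0, 0.2])
nurse = np.array([0.1, 0.8, 0.3])
print(gender_association(nurse, he, she))  # > 0: closer to "she"
```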
A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages [PDF]
We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages.
Pedro Ortiz Suarez +2 more
semanticscholar +1 more source
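OSCAR is built by running a fastText language classifier over Common Crawl text. A minimal sketch of that language-filtering step, assuming the public lid.176.bin identification model has been downloaded; the language code and confidence threshold are illustrative assumptions, not the paper's exact settings:

```python
import fasttext  # pip install fasttext; model file from fasttext.cc

# lid.176.bin is fastText's public language-identification model;
# OSCAR applies this kind of classifier to Common Crawl text.
model = fasttext.load_model("lid.176.bin")

def keep_if_language(line, lang="gl", min_conf=0.8):
    """Keep a line only if the classifier is confident it is in `lang`
    (Galician here as one plausible mid-resource example; the exact
    thresholds and cleaning steps are not given in the snippet)."""
    labels, probs = model.predict(line.strip().replace("\n", " "))
    return labels[0] == f"__label__{lang}" and probs[0] >= min_conf

print(keep_if_language("Isto é unha frase en galego."))
```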