Results 21 to 30 of about 16,624 (311)
This work explores the use of word embeddings as features for Spanish verb sense disambiguation (VSD). This type of learning technique is named disjoint semi-supervised learning: an unsupervised algorithm (i.e.
Cristian Cardellino +1 more
doaj +1 more source
Enhancing Accuracy of Semantic Relatedness Measurement by Word Single-Meaning Embeddings
We propose a lightweight algorithm of learning word single-meaning embeddings (WSME), by exploring WordNet synsets and Doc2vec document embeddings, to enhance the accuracy of semantic relatedness measurement.
Xiaotao Li, Shujuan You, Wai Chen
doaj +1 more source
Attention Word Embedding [PDF]
Word embedding models learn semantically rich vector representations of words and are widely used to initialize natural language processing (NLP) models. The popular continuous bag-of-words (CBOW) model of word2vec learns a vector embedding by masking a given word in a sentence and then using the other words as a context to predict it.
Sonkar, Shashank +2 more
openaire +2 more sources
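The CBOW mechanism described in the snippet above can be illustrated with a minimal NumPy sketch: average the context word vectors, then train a softmax classifier to predict the masked center word. This is a toy illustration of plain CBOW, not the paper's attention-weighted variant; the corpus, dimensions, and hyperparameters are illustrative assumptions.

```python
import numpy as np

# Toy corpus and vocabulary (illustrative assumptions).
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
vocab = sorted({w for s in corpus for w in s})
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2

rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(V, D))   # input (context) embeddings
W_out = rng.normal(scale=0.1, size=(D, V))  # output (prediction) weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(200):
    for sent in corpus:
        for t in range(len(sent)):
            # Context = words in a window around the masked center word.
            ctx = [idx[sent[j]]
                   for j in range(max(0, t - window),
                                  min(len(sent), t + window + 1))
                   if j != t]
            h = W_in[ctx].mean(axis=0)        # averaged context vector
            p = softmax(h @ W_out)            # predicted word distribution
            err = p.copy()
            err[idx[sent[t]]] -= 1.0          # cross-entropy gradient
            W_out -= lr * np.outer(h, err)
            W_in[ctx] -= lr * (err @ W_out.T) / len(ctx)

embedding = W_in  # each row is the learned vector for one vocabulary word
```

After training, rows of `embedding` serve as the word vectors; plain CBOW weights every context word equally in the average, which is exactly the limitation an attention-weighted variant would address.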
Benchmark for Evaluation of Danish Clinical Word Embeddings
In natural language processing, benchmarks are used to track progress and identify useful models. Currently, no benchmark for Danish clinical word embeddings exists.
Martin Sundahl Laursen +4 more
doaj +1 more source
AutoExtend: Combining Word Embeddings with Semantic Resources
We present AutoExtend, a system that combines word embeddings with semantic resources by learning embeddings for non-word objects like synsets and entities and learning word embeddings that incorporate the semantic information from the resource.
Sascha Rothe, Hinrich Schütze
doaj +1 more source
Morphological Skip-Gram: Replacing FastText characters n-gram with morphological knowledge
Natural language processing systems have attracted much interest from industry. This branch of study comprises applications such as machine translation, sentiment analysis, named entity recognition, question answering, and others.
Thiago Dias Bispo +3 more
doaj +1 more source
Socialized Word Embeddings [PDF]
Word embeddings have attracted a lot of attention. On social media, each user's language use can be significantly affected by the user's friends. In this paper, we propose a socialized word embedding algorithm which considers both a user's personal characteristics of language use and the user's social relationships on social media.
Ziqian Zeng +3 more
openaire +1 more source
Efficient estimation of Hindi WSD with distributed word representation in vector space
Word Sense Disambiguation (WSD) is important for improving the accuracy of interpreting natural language text. Various supervised learning-based models and knowledge-based models have been developed in the literature for WSD of the language ...
Archana Kumari, D.K. Lobiyal
doaj +1 more source
A Collection of Swedish Diachronic Word Embedding Models Trained on Historical Newspaper Data
This paper describes the creation of several word embedding models based on a large collection of diachronic Swedish newspaper material available through Språkbanken Text, the Swedish language bank.
Simon Hengchen, Nina Tahmasebi
doaj +1 more source
Biomedical Word Sense Disambiguation with Word Embeddings [PDF]
There is a growing need for automatic extraction of information and knowledge from the increasing amount of biomedical and clinical data produced, namely in textual form. Natural language processing comes in this direction, helping in tasks such as information extraction and information retrieval.
Antunes, Rui, Matos, Sérgio
openaire +2 more sources

