Results 41 to 50 of about 254,842
Unsupervised Word Embedding Learning by Incorporating Local and Global Contexts
Word embedding has benefited a broad spectrum of text analysis tasks by learning distributed word representations to encode word semantics. Word representations are typically learned by modeling local contexts of words, assuming that words sharing ...
Yu Meng +5 more
doaj +1 more source
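To make the "local contexts" setup in the snippet above concrete, here is a minimal sketch of extracting (center, context) training pairs with a fixed-size window; the toy corpus and window size are illustrative, not taken from the paper.

```python
from collections import Counter

def context_pairs(tokens, window=2):
    """Yield (center, context) pairs from a token list using a
    fixed-size local context window, the standard skip-gram setup."""
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                yield center, tokens[j]

# Toy corpus; a real model would stream a large text collection.
corpus = "the cat sat on the mat".split()
for (center, ctx), n in Counter(context_pairs(corpus)).most_common(5):
    print(center, ctx, n)
```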
Word Embedding With Zipf’s Context
Word embeddings generated by neural language models have achieved great success in many NLP tasks. However, neural language models can be difficult and time-consuming to train.
Lizheng Gao +3 more
doaj +1 more source
Citation Intent Classification Using Word Embedding
Citation analysis is an active area of research for various reasons. So far, mainly statistical approaches have been used for citation analysis, and these do not look into the internal context of the citations.
Muhammad Roman +4 more
doaj +1 more source
Data Sets: Word Embeddings Learned from Tweets and General Data
A word embedding is a low-dimensional, dense and real-valued vector representation of a word. Word embeddings have been used in many NLP tasks. They are usually generated from a large text corpus.
Li, Quanzhi +3 more
core +2 more sources
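As a concrete illustration of what a "dense, real-valued vector representation" buys you, here is a minimal sketch of cosine similarity over made-up toy vectors; real embeddings learned from tweets or general text would have hundreds of dimensions.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Toy 4-dimensional vectors standing in for pretrained embeddings.
emb = {
    "happy": np.array([0.9, 0.1, 0.3, 0.0]),
    "glad":  np.array([0.8, 0.2, 0.4, 0.1]),
    "table": np.array([0.0, 0.9, 0.1, 0.7]),
}
print(cosine(emb["happy"], emb["glad"]))   # high: related words
print(cosine(emb["happy"], emb["table"]))  # lower: unrelated words
```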
Comparative Analysis of Word Embeddings for Capturing Word Similarities
Distributed representations have become the most widely used technique for representing language in various natural language processing tasks. Most natural language processing models based on deep learning techniques use already ...
Kalajdjieski, Jovan +2 more
core +1 more source
Gloss Alignment using Word Embeddings
Capturing and annotating sign language datasets is a time-consuming and costly process. Current datasets are orders of magnitude too small to successfully train unconstrained sign language translation (SLT) models. As a result, research has turned to TV broadcast content as a source of large-scale training data, consisting of both the sign language interpreter and the ...
Walsh, Harry +3 more
openaire +2 more sources
Chinese event extraction uses word embeddings to capture similarity but suffers when handling previously unseen or rare words. Experiments show that characters can provide information that words alone cannot, so we propose a novel ...
Yue Wu, Junyi Zhang
doaj +1 more source
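The character-level idea in the previous entry can be illustrated generically. The sketch below is fastText-style character n-gram averaging, not the authors' model; it only shows how subword units let rare or unseen words receive a representation.

```python
import numpy as np

def char_ngrams(word, n=2):
    """Character n-grams with boundary markers, fastText-style."""
    padded = f"<{word}>"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def subword_vector(word, ngram_emb, dim=8, n=2):
    """Average the vectors of a word's character n-grams, so an
    unseen word still gets a representation from its pieces."""
    grams = [g for g in char_ngrams(word, n) if g in ngram_emb]
    if not grams:
        return np.zeros(dim)
    return np.mean([ngram_emb[g] for g in grams], axis=0)

# Toy n-gram table; a trained model would learn these vectors.
rng = np.random.default_rng(0)
ngram_emb = {g: rng.normal(size=8)
             for g in char_ngrams("embedding") + char_ngrams("embedded")}
print(subword_vector("embedder", ngram_emb))  # rare word, still representable
```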
Improving Word Embedding Using Variational Dropout
Pre-trained word embeddings are essential in natural language processing (NLP). In recent years, many post-processing algorithms have been proposed to improve pre-trained word embeddings.
Zainab Albujasim +3 more
doaj +1 more source
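"Post-processing" here means transforming an already-trained embedding matrix rather than retraining it. The paper's contribution is variational dropout; the sketch below instead shows a simpler, common post-processing step (mean-centering) purely to make the setting concrete.

```python
import numpy as np

def center_embeddings(E):
    """A simple post-processing step: subtract the mean vector so the
    embedding cloud is centered at the origin. This is NOT the paper's
    variational-dropout method, just a common baseline transform."""
    return E - E.mean(axis=0, keepdims=True)

# Toy pre-trained matrix: 1000 words x 50 dimensions.
E = np.random.default_rng(1).normal(loc=0.3, size=(1000, 50))
E_post = center_embeddings(E)
print(np.abs(E_post.mean(axis=0)).max())  # ~0: common mean removed
```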
This work presents a novel methodology for calculating the phonetic similarity between words taking motivation from the human perception of sounds. This metric is employed to learn a continuous vector embedding space that groups similar sounding words together and can be used for various downstream computational phonology tasks.
Sharma, Rahul +2 more
openaire +2 more sources
Closed Form Word Embedding Alignment [PDF]
We develop a family of techniques to align word embeddings which are derived from different source datasets or created using different mechanisms (e.g., GloVe or word2vec). Our methods are simple and have a closed form to optimally rotate, translate, and scale to minimize root mean squared errors or maximize the average cosine similarity between two ...
Sunipa Dev +2 more
openaire +2 more sources
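The rotate-and-scale alignment described above is closely related to the classical orthogonal Procrustes problem, which does have a closed-form SVD solution. A minimal sketch of the rotation-only case, assuming two embedding matrices whose rows correspond to the same vocabulary:

```python
import numpy as np

def procrustes_rotation(X, Y):
    """Closed-form orthogonal matrix R minimizing ||X R - Y||_F,
    via the SVD of X^T Y (classical orthogonal Procrustes)."""
    U, _, Vt = np.linalg.svd(X.T @ Y)
    return U @ Vt

rng = np.random.default_rng(2)
X = rng.normal(size=(500, 50))                   # embeddings from source A
R_true = np.linalg.qr(rng.normal(size=(50, 50)))[0]
Y = X @ R_true                                   # same words, rotated space
R = procrustes_rotation(X, Y)
print(np.allclose(X @ R, Y))                     # True: rotation recovered
```

Translation and scaling can be handled on top of this by mean-centering both matrices and fitting a single scale factor, which keeps the whole alignment in closed form.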

