
Unsupervised Word Embedding Learning by Incorporating Local and Global Contexts

open access: yes, Frontiers in Big Data, 2020
Word embedding has benefited a broad spectrum of text analysis tasks by learning distributed word representations to encode word semantics. Word representations are typically learned by modeling local contexts of words, assuming that words sharing ...
Yu Meng   +5 more
doaj   +1 more source

Word Embedding With Zipf’s Context

open access: yes, IEEE Access, 2019
Word embeddings generated by neural language models have achieved great success in many NLP tasks. However, neural language models can be difficult and time-consuming to train.
Lizheng Gao   +3 more
doaj   +1 more source

Citation Intent Classification Using Word Embedding

open access: yes, IEEE Access, 2021
Citation analysis is an active area of research for various reasons. So far, mainly statistical approaches have been used for citation analysis, which do not examine the internal context of citations.
Muhammad Roman   +4 more
doaj   +1 more source

Data Sets: Word Embeddings Learned from Tweets and General Data

open access: yes, 2017
A word embedding is a low-dimensional, dense, real-valued vector representation of a word. Word embeddings have been used in many NLP tasks. They are usually generated from a large text corpus.
Li, Quanzhi   +3 more
core   +2 more sources
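The snippet above describes a word embedding as a low-dimensional, dense, real-valued vector. A minimal sketch of how such vectors are compared in practice, using made-up toy vectors (not values from the paper's Twitter or general-data embeddings) and cosine similarity:

```python
import numpy as np

# Toy 4-dimensional embeddings (illustrative values only; real
# embeddings are learned from a large corpus and typically have
# 100+ dimensions).
embeddings = {
    "cat": np.array([0.9, 0.1, 0.3, 0.0]),
    "dog": np.array([0.8, 0.2, 0.4, 0.1]),
    "car": np.array([0.1, 0.9, 0.0, 0.7]),
}

def cosine_similarity(u, v):
    """Cosine of the angle between two embedding vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Semantically related words should score higher than unrelated ones.
print(cosine_similarity(embeddings["cat"], embeddings["dog"]))  # high
print(cosine_similarity(embeddings["cat"], embeddings["car"]))  # low
```

Cosine similarity is the standard way downstream NLP tasks consume these vectors, since it ignores vector magnitude and compares direction only.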

Comparative Analysis of Word Embeddings for Capturing Word Similarities

open access: yes, 2020
Distributed language representation has become the most widely used technique for language representation in various natural language processing tasks. Most of the natural language processing models that are based on deep learning techniques use already ...
Kalajdjieski, Jovan   +2 more
core   +1 more source

Gloss Alignment using Word Embeddings

open access: yes, 2023 IEEE International Conference on Acoustics, Speech, and Signal Processing Workshops (ICASSPW), 2023
Capturing and annotating sign language datasets is a time-consuming and costly process. Current datasets are orders of magnitude too small to successfully train unconstrained sign language translation (SLT) models. As a result, research has turned to TV broadcast content as a source of large-scale training data, consisting of both the sign language interpreter and the ...
Walsh, Harry   +3 more
openaire   +2 more sources

Chinese Event Extraction Based on Attention and Semantic Features: A Bidirectional Circular Neural Network

open access: yes, Future Internet, 2018
Chinese event extraction uses word embeddings to capture similarity, but suffers when handling previously unseen or rare words. Our tests show that characters may provide information that cannot be obtained from words alone, so we propose a novel ...
Yue Wu, Junyi Zhang
doaj   +1 more source

Improving Word Embedding Using Variational Dropout

open access: yes, Proceedings of the International Florida Artificial Intelligence Research Society Conference, 2023
Pre-trained word embeddings are essential in natural language processing (NLP). In recent years, many post-processing algorithms have been proposed to improve the pre-trained word embeddings.
Zainab Albujasim   +3 more
doaj   +1 more source

Phonetic Word Embeddings

open access: yes, 2021
This work presents a novel methodology for calculating the phonetic similarity between words taking motivation from the human perception of sounds. This metric is employed to learn a continuous vector embedding space that groups similar sounding words together and can be used for various downstream computational phonology tasks.
Sharma, Rahul   +2 more
openaire   +2 more sources

Closed Form Word Embedding Alignment [PDF]

open access: yes, 2019 IEEE International Conference on Data Mining (ICDM), 2019
We develop a family of techniques to align word embeddings which are derived from different source datasets or created using different mechanisms (e.g., GloVe or word2vec). Our methods are simple and have a closed form to optimally rotate, translate, and scale to minimize root mean squared errors or maximize the average cosine similarity between two ...
Sunipa Dev   +2 more
openaire   +2 more sources
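The abstract above describes a closed-form alignment of two embedding spaces. A minimal sketch of the rotation-only case via orthogonal Procrustes (the general technique the description points to; the paper's full method also handles translation and scaling, which this sketch omits):

```python
import numpy as np

def procrustes_align(A, B):
    """Closed-form orthogonal R minimizing ||A @ R - B||_F.

    A, B: (n_words, dim) embedding matrices with rows aligned by word.
    The optimal R is U @ Vt, where A.T @ B = U @ S @ Vt (SVD).
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt

# Demo: B is a randomly rotated copy of A, so alignment should
# recover the rotation exactly.
rng = np.random.default_rng(0)
A = rng.standard_normal((100, 50))            # source embeddings
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))  # random rotation
B = A @ Q                                     # target embeddings
R = procrustes_align(A, B)
print(np.allclose(A @ R, B))                  # True
```

The closed form makes this far cheaper and more stable than iterative alignment: one SVD of a dim x dim matrix, with orthogonality of R guaranteed by construction.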
