Results 81 to 90 of about 2,031,469

Relevance-based Word Embedding [PDF]

open access: yes · Proceedings of the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2017
Learning a high-dimensional dense representation for vocabulary terms, also known as a word embedding, has recently attracted much attention in natural language processing and information retrieval tasks. The embedding vectors are typically learned based on term proximity in a large corpus.
W. Bruce Croft, Hamed Zamani
openaire   +3 more sources

Lexicon-Enhanced LSTM With Attention for General Sentiment Analysis

open access: yes · IEEE Access, 2018
Long short-term memory networks (LSTMs) have gained good performance in sentiment analysis tasks. The general method is to use LSTMs to combine word embeddings for text representation.
Xianghua Fu   +4 more
doaj   +1 more source

An Experimental Analysis of Deep Neural Network Based Classifiers for Sentiment Analysis Task

open access: yes · IEEE Access, 2023
The application of natural language processing (NLP) to the sentiment analysis task using textual data has wide-scale adoption across various domains in a plethora of industries.
Mrigank Shukla, Akhil Kumar
doaj   +1 more source

Black is to Criminal as Caucasian is to Police: Detecting and Removing Multiclass Bias in Word Embeddings [PDF]

open access: yes · North American Chapter of the Association for Computational Linguistics, 2019
Online texts - across genres, registers, domains, and styles - are riddled with human stereotypes, expressed in overt or subtle ways. Word embeddings, trained on these texts, perpetuate and amplify these stereotypes, and propagate biases to machine ...
Thomas Manzini   +3 more
semanticscholar   +1 more source

The activation of embedded words in spoken word recognition [PDF]

open access: yes · Journal of Memory and Language, 2015
The current study investigated how listeners understand English words that have shorter words embedded in them. A series of auditory-auditory priming experiments assessed the activation of six types of embedded words (2 embedded positions × 3 embedded proportions) under different listening conditions.
Arthur G. Samuel, Xujin Zhang
openaire   +3 more sources

Overcoming Poor Word Embeddings with Word Definitions [PDF]

open access: yes · Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics, 2021
Modern natural language understanding models depend on pretrained subword embeddings, but applications may need to reason about words that were never or rarely seen during pretraining. We show that examples that depend critically on a rarer word are more challenging for natural language inference models.
openaire   +2 more sources

Predicting Word Embeddings Variability [PDF]

open access: yes · Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics, 2018
Neural word embedding models (such as those built with word2vec) are known to have stability problems: when retraining a model with the exact same hyperparameters, word neighborhoods may change. We propose a method to estimate such variation, based on the overlap of neighbors of a given word in two models trained with identical hyperparameters.
Bénédicte Pierrejean, Ludovic Tanguy
openaire   +3 more sources
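The neighbor-overlap measure described in the abstract above can be sketched in a few lines. This is a hypothetical illustration (not the authors' code), assuming each model is a NumPy matrix of word vectors indexed by a shared vocabulary:

```python
import numpy as np

def top_k_neighbors(emb, word_idx, k):
    """Indices of the k nearest neighbors of word_idx by cosine similarity."""
    v = emb[word_idx]
    norms = np.linalg.norm(emb, axis=1) * np.linalg.norm(v)
    sims = emb @ v / np.maximum(norms, 1e-12)
    sims[word_idx] = -np.inf  # exclude the query word itself
    return set(np.argsort(sims)[-k:])

def neighbor_overlap(emb_a, emb_b, word_idx, k=25):
    """Fraction of shared k-nearest neighbors between two embedding models."""
    na = top_k_neighbors(emb_a, word_idx, k)
    nb = top_k_neighbors(emb_b, word_idx, k)
    return len(na & nb) / k

# Toy stand-in for two training runs: the second model is a small
# perturbation of the first, mimicking retraining variability.
rng = np.random.default_rng(0)
emb1 = rng.normal(size=(1000, 50))
emb2 = emb1 + 0.01 * rng.normal(size=(1000, 50))
print(neighbor_overlap(emb1, emb2, word_idx=0))
```

An overlap near 1.0 indicates a stable neighborhood across the two runs; values near 0 indicate high variability for that word.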

Task-Specific Dependency-based Word Embedding Methods [PDF]

open access: yes · arXiv, 2021
Two task-specific dependency-based word embedding methods are proposed for text classification in this work. In contrast with universal word embedding methods that work for generic tasks, we design task-specific word embedding methods to offer better performance in a specific task.
arxiv  

Classification and Clustering of Arguments with Contextualized Word Embeddings [PDF]

open access: yes · Annual Meeting of the Association for Computational Linguistics, 2019
We experiment with two recent contextualized word embedding methods (ELMo and BERT) in the context of open-domain argument search. For the first time, we show how to leverage the power of contextualized word embeddings to classify and cluster topic ...
Nils Reimers   +5 more
semanticscholar   +1 more source

DarkVec: automatic analysis of darknet traffic with word embeddings

open access: yes · Conference on Emerging Network Experiment and Technology, 2021
Darknets are passive probes listening to traffic reaching IP addresses that host no services. Traffic reaching them is unsolicited by nature and often induced by scanners, malicious senders and misconfigured hosts. Its peculiar nature makes it a valuable ...
Luca Gioacchini   +5 more
semanticscholar   +1 more source
