Effects of Semantic Features on Machine Learning-Based Drug Name Recognition Systems: Word Embeddings vs. Manually Constructed Dictionaries [PDF]
Semantic features are very important for machine learning-based drug name recognition (DNR) systems. The semantic features used in most DNR systems are based on drug dictionaries manually constructed by experts.
Shengyu Liu+3 more
doaj +2 more sources
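The contrast the abstract draws between expert-built dictionaries and word embeddings as semantic features can be illustrated with a minimal sketch. This is an assumption about how such features are typically attached to tokens in a DNR pipeline, not the paper's actual system; the lexicon, vectors, and feature names are invented for illustration.

```python
# Toy contrast of the two kinds of semantic features for drug name
# recognition: a binary dictionary-lookup feature vs. dense embedding features.

DRUG_DICTIONARY = {"aspirin", "ibuprofen"}   # stands in for an expert-built drug lexicon
EMBEDDINGS = {                                # stands in for pretrained word vectors
    "aspirin": [0.8, 0.1],
    "took":    [0.1, 0.7],
}

def token_features(token):
    """Feature dict for one token, combining both feature types."""
    feats = {"in_drug_dict": token.lower() in DRUG_DICTIONARY}
    vec = EMBEDDINGS.get(token.lower(), [0.0, 0.0])  # OOV words back off to zeros
    for i, x in enumerate(vec):
        feats[f"emb_{i}"] = x
    return feats

print(token_features("Aspirin"))  # dictionary feature fires, embedding dims attached
```

A downstream sequence labeller (CRF, neural tagger) would consume one such feature dict per token; the embedding features generalise to drug names missing from the hand-built dictionary.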
Spoken-Word Recognition: The Access to Embedded Words [PDF]
Two cross-modal priming experiments investigated whether the representation of either an initial- or a final-embedded word may be activated when the longer carrier word is auditorily presented. Visual targets were semantically related either to the embedded word or to the carrier word or they were unrelated to the primes. A priming effect was found for
Frédéric Isel, Nicole Bacri
openalex +4 more sources
Semantic projection recovers rich human knowledge of multiple object features from word embeddings. [PDF]
Grand G+3 more
europepmc +2 more sources
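Semantic projection, as the title describes it, can be sketched in a few lines: a feature axis (e.g., size) is defined by the difference between antonym vectors, and a word's feature value is its projection onto that axis. The 3-d vectors below are hand-made toys, not real embeddings, and the function names are my own.

```python
# Toy sketch of semantic projection: recover a feature such as "size"
# by projecting word vectors onto an antonym-pair axis.

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def projection(word_vec, axis):
    """Scalar position of word_vec along the (unnormalised) feature axis."""
    return dot(word_vec, axis) / dot(axis, axis) ** 0.5

vectors = {
    "mouse":    [0.1, 0.9, 0.2],
    "elephant": [0.9, 0.1, 0.3],
    "small":    [0.0, 1.0, 0.0],
    "large":    [1.0, 0.0, 0.0],
}

# The "size" axis points from small to large.
size_axis = [l - s for l, s in zip(vectors["large"], vectors["small"])]

mouse_size = projection(vectors["mouse"], size_axis)
elephant_size = projection(vectors["elephant"], size_axis)
print(mouse_size < elephant_size)  # the elephant projects as larger
```

In practice the axis is averaged over many antonym pairs to reduce noise, and the recovered scalar is compared against human feature ratings.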
Grounding the Lexical Sets of Causative-Inchoative Verbs with Word Embedding [PDF]
This work aims at evaluating and comparing two different frameworks for the unsupervised topic modelling of the CompWHoB Corpus, namely our political-linguistic dataset. The first approach is the application of Latent Dirichlet Allocation (henceforth LDA), whose evaluation is defined as the baseline for comparison.
Edoardo Maria Ponti+2 more
openalex +4 more sources
Socialized Word Embeddings [PDF]
Word embeddings have attracted a lot of attention. On social media, each user’s language use can be significantly affected by the user’s friends. In this paper, we propose a socialized word embedding algorithm which can consider both user’s personal characteristics of language use and the user’s social relationship on social media.
Ziqian Zeng+3 more
openalex +3 more sources
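The idea the abstract describes, per-user language variation plus social ties, can be sketched as a user vector that offsets the global word vector, with a regulariser pulling friends' user vectors together. This is a hypothetical reconstruction of the general approach, not the paper's actual model; all names and shapes are assumptions.

```python
# Hypothetical sketch of socialized word embeddings: a user-specific offset
# on top of a shared word vector, plus a social-closeness penalty.

def user_word_vector(word_vec, user_vec):
    # User-specific representation = global meaning + personal offset.
    return [w + u for w, u in zip(word_vec, user_vec)]

def social_regulariser(user_vecs, friendships):
    # Sum of squared distances between friends' user vectors; training
    # would minimise this alongside the usual embedding objective.
    cost = 0.0
    for a, b in friendships:
        cost += sum((x - y) ** 2 for x, y in zip(user_vecs[a], user_vecs[b]))
    return cost

word = [0.5, 0.2]
users = {"alice": [0.1, 0.0], "bob": [0.1, 0.1], "carol": [0.9, -0.4]}

alice_word = user_word_vector(word, users["alice"])       # approx [0.6, 0.2]
friend_cost = social_regulariser(users, [("alice", "bob")])  # small: similar vectors
```

The regulariser encodes the assumption that friends use language similarly: distant user vectors for connected users are penalised during training.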
A World Full of Stereotypes? Further Investigation on Origin and Gender Bias in Multi-Lingual Word Embeddings [PDF]
Publicly available off-the-shelf word embeddings, often used in production applications for natural language processing, have been proven to be biased.
Mascha Kurpicz-Briki, Tomaso Leoni
doaj +2 more sources
Gender Bias in Word Embeddings: A Comprehensive Analysis of Frequency, Syntax, and Semantics [PDF]
Word embeddings are numeric representations of meaning derived from word co-occurrence statistics in corpora of human-produced texts. The statistical regularities in language corpora encode well-known social biases into word embeddings (e.g., the word ...
Aylin Caliskan+4 more
semanticscholar +1 more source
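How such biases are typically quantified can be shown with a minimal, WEAT-style association score (an assumption about the general methodology, not this paper's exact procedure): compare a target word's mean cosine similarity to two attribute sets. The 2-d vectors are toys chosen for illustration.

```python
# Minimal WEAT-style sketch: does "nurse" sit closer to she-words
# or he-words in the embedding space?

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sum(a * a for a in u) ** 0.5
    nv = sum(b * b for b in v) ** 0.5
    return dot / (nu * nv)

def association(target, set_a, set_b):
    """Positive: target leans toward set_a; negative: toward set_b."""
    mean_a = sum(cosine(target, a) for a in set_a) / len(set_a)
    mean_b = sum(cosine(target, b) for b in set_b) / len(set_b)
    return mean_a - mean_b

she, he = [1.0, 0.2], [0.2, 1.0]   # toy attribute vectors
nurse = [0.9, 0.3]                  # toy target vector

score = association(nurse, [she], [he])
print(score > 0)  # "nurse" leans toward "she" in this toy space
```

Real analyses aggregate over many target and attribute words and test the effect size for significance; the toy score above only shows the direction of the association.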
Testing word embeddings for Polish
Distributional semantics postulates the representation of word meaning in the form of numeric vectors derived from the contexts in which words occur in large text data.
Agnieszka Mykowiecka+2 more
doaj +3 more sources
Semeval-2022 Task 1: CODWOE – Comparing Dictionaries and Word Embeddings [PDF]
Word embeddings have advanced the state of the art in NLP across numerous tasks. Understanding the contents of dense neural representations is of utmost interest to the computational semantics community.
Timothee Mickus+3 more
semanticscholar +1 more source
Representing Mixtures of Word Embeddings with Mixtures of Topic Embeddings [PDF]
A topic model is often formulated as a generative model that explains how each word of a document is generated given a set of topics and document-specific topic proportions. It is focused on capturing the word co-occurrences in a document and hence often
Dongsheng Wang+6 more
semanticscholar +1 more source
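The generative story the abstract refers to, pick a topic from the document's topic proportions, then pick a word from that topic's word distribution, can be sketched as follows. The topics, proportions, and vocabulary are invented for illustration; this is the standard topic-model sampling scheme, not the paper's mixture-of-topic-embeddings method.

```python
# Toy generative sketch of a standard topic model: each word of a document
# is drawn by first sampling a topic, then a word from that topic.
import random

topics = {
    "sports":  {"ball": 0.7, "team": 0.3},
    "finance": {"bank": 0.6, "loan": 0.4},
}
doc_topic_props = {"sports": 0.8, "finance": 0.2}

def sample(dist, rng):
    """Draw one key from a dict of probabilities summing to 1."""
    r, acc = rng.random(), 0.0
    for item, p in dist.items():
        acc += p
        if r <= acc:
            return item
    return item  # guard against float rounding

def generate_document(n_words, seed=0):
    rng = random.Random(seed)
    words = []
    for _ in range(n_words):
        topic = sample(doc_topic_props, rng)
        words.append(sample(topics[topic], rng))
    return words

print(generate_document(5))
```

Capturing word co-occurrence within a document, as the abstract notes, follows directly from this scheme: words drawn from the same document share its topic proportions.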