On the Convergent Properties of Word Embedding Methods [PDF]
Do word embeddings converge to learn similar things over different initializations? How repeatable are experiments with word embeddings? Are all word embedding techniques equally reliable? In this paper we propose evaluating methods for learning word representations by their consistency across initializations.
arxiv
The SciAgents AI model drives hypothesis generation by harnessing multi-agent graph reasoning, extracting insights from knowledge graphs constructed from scientific papers. Each agent plays a specific role: the Ontologist defines concepts, the Scientists draft and refine proposals, and the Critic reviews.
Alireza Ghafarollahi, Markus J. Buehler
wiley +1 more source
Neural Network Models for Word Sense Disambiguation: An Overview
The following article presents an overview of the use of artificial neural networks for the task of Word Sense Disambiguation (WSD). More specifically, it surveys the advances in neural language models in recent years that have resulted in methods for ...
Popov Alexander
doaj +1 more source
Suicidal Ideation Cause Extraction From Social Texts
Suicide has become a major public health and social concern in the world. Suicidal ideation cause extraction (SICE) in social texts can provide support for suicide prevention.
Dexi Liu +7 more
doaj +1 more source
Think Globally, Embed Locally --- Locally Linear Meta-embedding of Words [PDF]
Distributed word embeddings have shown superior performance in numerous Natural Language Processing (NLP) tasks. However, their performance varies significantly across different tasks, implying that the word embeddings learnt by those methods capture complementary aspects of lexical semantics.
arxiv
The rise of lead halide perovskite semiconductors has enabled high-performance LEDs with internal quantum efficiencies approaching 100%. In order to further enhance the external quantum efficiencies limited by light outcoupling effects, in this account, the strategies for reducing energy dissipation through the substrate, waveguide, and evanescent ...
Tommaso Marcato +2 more
wiley +1 more source
Multiplex Word Embeddings for Selectional Preference Acquisition [PDF]
Conventional word embeddings represent words with fixed vectors, which are usually trained based on co-occurrence patterns among words. In doing so, however, the power of such representations is limited, where the same word might be functionalized separately under different syntactic relations. To address this limitation, one solution is to incorporate
arxiv
Wood and cellulose are the most abundant and important sustainable materials on the planet at our disposal for solving major societal challenges. This perspective, written for all materials scientists, highlights how breakthroughs in cellulose nanotechnology combined with functional nanomaterials can revolutionize important areas like construction ...
Mahiar Max Hamedi +5 more
wiley +1 more source
Discrete Word Embedding for Logical Natural Language Understanding [PDF]
We propose an unsupervised neural model for learning a discrete embedding of words. Unlike existing discrete embeddings, our binary embedding supports vector arithmetic operations similar to continuous embeddings. Our embedding represents each word as a set of propositional statements describing a transition rule in classical/STRIPS planning formalism.
arxiv
Reversible protonic ceramic electrochemical cells (R-PCECs) face challenges from sluggish and unstable oxygen reduction and evolution reactions in the air electrode. This review discusses recent progress in triple-conducting air electrodes, emphasizing mechanisms, performance factors, and design strategies, offering guidance for creating efficient and ...
Xi Chen +8 more
wiley +1 more source