Results 61 to 70 of about 96,400
Lexicon-Enhanced LSTM With Attention for General Sentiment Analysis
Long short-term memory networks (LSTMs) have achieved good performance in sentiment analysis tasks. The common approach uses LSTMs to combine word embeddings into a text representation.
Xianghua Fu +4 more
doaj +1 more source
A Robust Adaptive One‐Sample‐Ahead Preview Super‐Twisting Sliding Mode Controller
Block Diagram of the Robust Adaptive One‐Sample‐Ahead Preview Super‐Twisting Sliding Mode Controller. ABSTRACT: This article introduces a discrete‐time robust adaptive one‐sample‐ahead preview super‐twisting sliding mode controller. A stability analysis of the controller by Lyapunov criteria is developed to demonstrate its robustness in handling both ...
Guilherme Vieira Hollweg +5 more
wiley +1 more source
Learning Bilingual Word Embedding Mappings with Similar Words in Related Languages Using GAN
Cross-lingual word embeddings represent words from different languages in the same vector space. They enable reasoning about semantics and comparison of word meanings across languages and in multilingual contexts, which is necessary for bilingual lexicon ...
Ghafour Alipour +2 more
doaj +1 more source
Obtaining high-quality embeddings of out-of-vocabularies (OOVs) and low-frequency words is a challenge in natural language processing (NLP). To efficiently estimate the embeddings of OOVs and low-frequency words, we propose a new method that uses the ...
Xianwen Liao +5 more
doaj +1 more source
WOVe: Incorporating Word Order in GloVe Word Embeddings
Word vector representations open up new opportunities to extract useful information from unstructured text. Defining a word as a vector makes it easy for machine learning algorithms to understand a text and extract information from it. Word vector representations have been used in many applications such as word synonyms, word analogy, syntactic parsing ...
Mohammed Salah Ibrahim +3 more
openaire +2 more sources
What Do Large Language Models Know About Materials?
If large language models (LLMs) are to be used inside the material discovery and engineering process, they must be benchmarked for the accuracy of their intrinsic material knowledge. The current work introduces 1) a reasoning process through the processing–structure–property–performance chain and 2) a tool for benchmarking knowledge of LLMs concerning ...
Adrian Ehrenhofer +2 more
wiley +1 more source
SPINE: SParse Interpretable Neural Embeddings
Prediction without justification has limited utility. Much of the success of neural models can be attributed to their ability to learn rich, dense and expressive representations.
Berg-Kirkpatrick, Taylor +4 more
core +1 more source
Low‐consumable nickel ferrite‐based anodes for the Hall–Héroult process are compared with conventional prebaked carbon anodes using thermodynamic simulation and prospective life cycle assessment under contrasting future electricity system pathways from 2025 to 2050.
Felipe Alejandro Garcia Paz +6 more
wiley +1 more source
Exploring the Privacy-Preserving Properties of Word Embeddings: Algorithmic Validation Study
Background: Word embeddings are dense numeric vectors used to represent language in neural networks. Until recently, there had been no publicly released embeddings trained on clinical data.
Abdalla, Mohamed +3 more
doaj +1 more source
Twitter is a social media site where people post their personal experiences, opinions, and news. Due to the ubiquitous real-time data availability, many rescue agencies monitor this data regularly to identify disasters, reduce risk, and save lives ...
Sumona Deb, Ashis Kumar Chanda
doaj +1 more source