Results 41 to 50 of about 16,624
This work presents a novel methodology for calculating the phonetic similarity between words, drawing motivation from the human perception of sounds. This metric is employed to learn a continuous vector embedding space that groups similar-sounding words together and can be used for various downstream computational phonology tasks.
Sharma, Rahul +2 more
openaire +2 more sources
Closed Form Word Embedding Alignment [PDF]
We develop a family of techniques to align word embeddings which are derived from different source datasets or created using different mechanisms (e.g., GloVe or word2vec). Our methods are simple and have a closed form to optimally rotate, translate, and scale to minimize root mean squared errors or maximize the average cosine similarity between two ...
Sunipa Dev +2 more
openaire +2 more sources
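The closed-form alignment described in this entry belongs to the family of orthogonal Procrustes solutions. A minimal sketch, assuming NumPy and covering only the translation (centering) and rotation steps, not the paper's full scaling variant:

```python
import numpy as np

def align_embeddings(X, Y):
    """Closed-form alignment sketch: find the rotation R minimizing
    ||X_c R - Y_c||_F after centering (orthogonal Procrustes).
    Illustrative only; not the paper's exact formulation."""
    # Center both embedding sets (the translation step)
    X_c = X - X.mean(axis=0)
    Y_c = Y - Y.mean(axis=0)
    # SVD of the cross-covariance yields the optimal rotation
    U, _, Vt = np.linalg.svd(X_c.T @ Y_c)
    return U @ Vt

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 50))
# Build Y as a rotated copy of X; alignment should recover the rotation
Q, _ = np.linalg.qr(rng.standard_normal((50, 50)))
Y = X @ Q
R = align_embeddings(X, Y)
print(np.allclose(X @ R, Y))  # → True
```

The SVD step is the standard closed-form answer to "best rotation in least-squares sense"; scaling, when needed, can be added as a scalar factor computed from the singular values.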
Learning Bilingual Word Embedding Mappings with Similar Words in Related Languages Using GAN
Cross-lingual word embeddings place words from different languages in the same vector space. They support reasoning about semantics and comparison of word meaning across languages and in multilingual contexts, which is necessary for bilingual lexicon ...
Ghafour Alipour +2 more
doaj +1 more source
Obtaining high-quality embeddings of out-of-vocabulary (OOV) and low-frequency words is a challenge in natural language processing (NLP). To efficiently estimate the embeddings of OOV and low-frequency words, we propose a new method that uses the ...
Xianwen Liao +5 more
doaj +1 more source
Genetic testing in epithelial ovarian cancer includes both germline and tumor testing. This approach often duplicates resources. The current prospective study assessed the feasibility of tumor‐first multigene testing by comparing tumor tissue with germline testing of peripheral blood using an 18‐gene NGS panel in 106 patients.
Elisabeth Spenard +12 more
wiley +1 more source
Transformer models are the state of the art in Natural Language Processing (NLP) and the core of Large Language Models (LLMs). We propose a transformer-based model for transition-based dependency parsing of free word order languages.
Fatima Tuz Zuhra +2 more
doaj +1 more source
Quantum-Inspired Complex Word Embedding [PDF]
A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words. For example, existing word embedding approaches will assign high probabilities to the words "Penguin" and "Fly" if they frequently co-occur, but they fail to capture the fact that they occur in an opposite sense - Penguins do ...
Li, Qiuchi +3 more
openaire +2 more sources
The cancer problem is increasing globally with projections up to the year 2050 showing unfavourable outcomes in terms of incidence and cancer‐related deaths. The main challenges are prevention, improved therapeutics resulting in increased cure rates and enhanced health‐related quality of life.
Ulrik Ringborg +43 more
wiley +1 more source
Lexicon-Enhanced LSTM With Attention for General Sentiment Analysis
Long short-term memory networks (LSTMs) have achieved good performance on sentiment analysis tasks. The general method is to use LSTMs to combine word embeddings for text representation.
Xianghua Fu +4 more
doaj +1 more source
Exploring the Privacy-Preserving Properties of Word Embeddings: Algorithmic Validation Study
Background: Word embeddings are dense numeric vectors used to represent language in neural networks. Until recently, there had been no publicly released embeddings trained on clinical data.
Abdalla, Mohamed +3 more
doaj +1 more source