Results 51 to 60 of about 259,269 (327)
This work presents a novel methodology for calculating the phonetic similarity between words, motivated by the human perception of sounds. This metric is used to learn a continuous vector embedding space that groups similar-sounding words together and can support various downstream computational phonology tasks.
Sharma, Rahul +2 more
openaire +2 more sources
Closed Form Word Embedding Alignment [PDF]
We develop a family of techniques to align word embeddings that are derived from different source datasets or created using different mechanisms (e.g., GloVe or word2vec). Our methods are simple and have a closed form to optimally rotate, translate, and scale embeddings to minimize root mean squared error or maximize the average cosine similarity between two ...
Sunipa Dev +2 more
openaire +2 more sources
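The closed-form rotation-and-scale step described above can be sketched with the standard orthogonal Procrustes solution via SVD (a common approach for this problem, not necessarily the paper's exact formulation; translation is typically handled by centering the matrices first):

```python
import numpy as np

def align_embeddings(X, Y):
    """Closed-form alignment of embedding matrix X onto Y (rows = words).

    Returns the orthogonal matrix R and isotropic scale s minimizing
    ||s * X @ R - Y||_F (orthogonal Procrustes with scaling).
    """
    M = X.T @ Y                      # cross-covariance of the two spaces
    U, S, Vt = np.linalg.svd(M)
    R = U @ Vt                       # optimal rotation (orthogonal)
    s = S.sum() / np.trace(X.T @ X)  # optimal isotropic scale
    return R, s

# Toy check: recover a known rotation and scale
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 50))
Q, _ = np.linalg.qr(rng.normal(size=(50, 50)))  # random orthogonal matrix
Y = 2.0 * X @ Q                                 # rotated and scaled copy
R, s = align_embeddings(X, Y)
assert np.allclose(s * X @ R, Y, atol=1e-8)
```

Because the objective is quadratic in the aligned coordinates, the SVD gives the global optimum in one shot, with no iterative optimization.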
Genetic testing in epithelial ovarian cancer includes both germline and tumor testing. This approach often duplicates resources. The current prospective study assessed the feasibility of tumor-first multigene testing by comparing tumor tissue with germline testing of peripheral blood using an 18-gene NGS panel in 106 patients.
Elisabeth Spenard +12 more
wiley +1 more source
Improving Word Embedding Using Variational Dropout
Pre-trained word embeddings are essential in natural language processing (NLP). In recent years, many post-processing algorithms have been proposed to improve pre-trained word embeddings.
Zainab Albujasim +3 more
doaj +1 more source
Rotations and Interpretability of Word Embeddings: the Case of the Russian Language
Consider a continuous word embedding model. Usually, the cosines between word vectors are used as a measure of word similarity. These cosines do not change under orthogonal transformations of the embedding space.
Zobnin, Alexey
core +1 more source
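The invariance property this result rests on is easy to verify numerically: for any orthogonal Q, (Qu)·(Qv) = u·v and ||Qu|| = ||u||, so cosine similarities survive the rotation unchanged (a minimal sketch with random vectors, not taken from the paper):

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

rng = np.random.default_rng(42)
u, v = rng.normal(size=300), rng.normal(size=300)
Q, _ = np.linalg.qr(rng.normal(size=(300, 300)))  # random orthogonal matrix

# Cosine similarity is preserved under the orthogonal transformation Q
assert np.isclose(cosine(u, v), cosine(Q @ u, Q @ v))
```

This is exactly why a rotation can be chosen freely, e.g. to make individual embedding dimensions more interpretable, without affecting any cosine-based downstream task.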
Quantum-Inspired Complex Word Embedding [PDF]
A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words. For example, existing approaches in word embeddings will assign high probabilities to the words "Penguin" and "Fly" if they frequently co-occur, but they fail to capture the fact that the words occur in an opposite sense - Penguins do ...
Li, Qiuchi +3 more
openaire +2 more sources
The global cancer burden is increasing, with projections to the year 2050 showing unfavourable trends in incidence and cancer-related deaths. The main challenges are prevention, improved therapeutics resulting in increased cure rates, and enhanced health-related quality of life.
Ulrik Ringborg +43 more
wiley +1 more source
A supervised topic embedding model and its application.
We propose rTopicVec, a supervised topic embedding model that predicts response variables associated with documents by analyzing the text data. Topic modeling leverages document-level word co-occurrence patterns to learn latent topics of each document ...
Weiran Xu, Koji Eguchi
doaj +1 more source
A New Sentiment-Enhanced Word Embedding Method for Sentiment Analysis
Since some sentiment words have similar syntactic and semantic features in the corpus, existing pre-trained word embeddings often perform poorly in sentiment analysis tasks.
Qizhi Li +4 more
doaj +1 more source
Embedding Semantic Relations into Word Representations [PDF]
Learning representations for semantic relations is important for various tasks such as analogy detection, relational search, and relation classification.
Bollegala, Danushka +2 more
core +1 more source
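A common baseline behind tasks like analogy detection represents the relation between two words as the offset between their vectors, reducing "man : king :: woman : ?" to a nearest-neighbor search around king - man + woman (a toy sketch with hypothetical hand-set 4-d vectors, not the paper's learned representations):

```python
import numpy as np

# Hypothetical toy embeddings (not trained) chosen so that the
# "royalty" and "gender" relations are clean vector offsets.
emb = {
    "man":   np.array([1.0, 0.0, 0.0, 0.0]),
    "woman": np.array([1.0, 1.0, 0.0, 0.0]),
    "king":  np.array([1.0, 0.0, 1.0, 0.0]),
    "queen": np.array([1.0, 1.0, 1.0, 0.0]),
}

def analogy(a, b, c, emb):
    """Return the word d (excluding a, b, c) whose vector is most
    cosine-similar to emb[b] - emb[a] + emb[c]."""
    target = emb[b] - emb[a] + emb[c]
    return max(
        (w for w in emb if w not in {a, b, c}),
        key=lambda w: emb[w] @ target
        / (np.linalg.norm(emb[w]) * np.linalg.norm(target)),
    )

assert analogy("man", "king", "woman", emb) == "queen"
```

Methods that embed relations explicitly aim to generalize beyond this simple offset assumption, which holds only approximately in real trained spaces.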