Results 271 to 280 of about 16,624
Inspired by the extracellular matrix, eutectogels are prepared by in situ deconstruction of silk fibroin into micro/nanofibrils and EGaIn-induced polymerization. The multiscale fibril network with dynamic crosslinking achieves high strength (1.25 MPa), toughness (23.09 MJ m⁻³), and conductivity (1.51 S m⁻¹), outperforming previous natural polymer ...
Haiwei Yang +4 more
wiley +1 more source
This study uncovers a recipient‐derived monocyte‐to‐macrophage trajectory that drives inflammation during kidney transplant rejection. Using over 150 000 single‐cell profiles and more than 850 biopsies, the authors identify CXCL10+ macrophages as key predictors of graft loss.
Alexis Varin +16 more
wiley +1 more source
Cluster Labeling by Word Embeddings and WordNet’s Hypernymy
Hanieh Poostchi, Massimo Piccardi
openalex +1 more source
Towards Qualitative Word Embeddings Evaluation: Measuring Neighbors Variation
Bénédicte Pierrejean, Ludovic Tanguy
openalex +1 more source
Some of the following articles may not be open access.
2021 Conference on Information Communications Technology and Society (ICTAS), 2021
Word embeddings are currently the most popular vector space model in Natural Language Processing. How we encode words is important because it affects the performance of many downstream tasks such as Machine Translation (MT), Information Retrieval (IR) and Automatic Speech Recognition (ASR).
Sibonelo Dlamini +3 more
openaire +1 more source
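To make the abstract's point concrete, here is a minimal numpy sketch (toy vectors, not from the paper) of why the encoding matters: one-hot vectors treat every pair of distinct words as equally unrelated, while dense embeddings let the same similarity metric reflect relatedness.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity: the standard relatedness metric over embeddings.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # One-hot encoding: distinct words are always orthogonal, so "cat"
    # is exactly as unrelated to "kitten" as to any other word.
    cat_onehot = np.array([1.0, 0.0, 0.0])
    kitten_onehot = np.array([0.0, 1.0, 0.0])
    print(cosine(cat_onehot, kitten_onehot))  # 0.0

    # Toy dense embeddings (illustrative values, not learned): related
    # words can now score high, which is what downstream MT, IR, and
    # ASR systems exploit.
    cat = np.array([0.80, 0.10, 0.30])
    kitten = np.array([0.75, 0.20, 0.25])
    print(cosine(cat, kitten))  # ~0.99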
Proceedings of the AAAI Conference on Artificial Intelligence, 2015
Most word embedding models typically represent each word using a single vector, which makes these models indiscriminative for ubiquitous homonymy and polysemy. In order to enhance discriminativeness, we employ latent topic models to assign topics for each word in the text corpus, and learn topical word embeddings (TWE) based on both ...
Yang Liu +3 more
openaire +1 more source
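A rough sketch of the topical word embedding (TWE) idea the abstract describes, assuming gensim; the corpus, the "word#topic" tagging scheme, and all hyperparameters are illustrative choices, not the authors' implementation. Each token is tagged with its most likely LDA topic, and a vector is then learned per (word, topic) pair.

    # Sketch: tag each token with its most likely LDA topic, then learn
    # one vector per (word, topic) pair with ordinary Word2Vec.
    from gensim.corpora import Dictionary
    from gensim.models import LdaModel, Word2Vec

    docs = [
        ["bank", "river", "water", "shore"],
        ["bank", "money", "loan", "interest"],
    ]
    dictionary = Dictionary(docs)
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = LdaModel(corpus, num_topics=2, id2word=dictionary, passes=50)

    def tag_tokens(doc):
        # Replace each token with "word#topic" using its best per-word topic.
        bow = dictionary.doc2bow(doc)
        _, word_topics, _ = lda.get_document_topics(bow, per_word_topics=True)
        best = {wid: topics[0] for wid, topics in word_topics if topics}
        return [f"{w}#{best.get(dictionary.token2id[w], 0)}" for w in doc]

    tagged = [tag_tokens(d) for d in docs]
    # The two occurrences of "bank" can now carry different topic tags,
    # so each sense gets its own embedding instead of a single vector.
    twe = Word2Vec(tagged, vector_size=16, min_count=1, window=2, epochs=50)
    print(twe.wv.most_similar(tagged[0][0]))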
This chapter deals with the mathematical representation of words through vectors, or embeddings, which are the basis of modern language models. It starts by discussing the limits of the one-hot representation and continues with a section presenting traditional approaches based on the factorization of the word co-occurrence ...
Christophe Gaillac, Jérémy L'Hour
openaire +2 more sources
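A minimal sketch of the factorization approach the chapter describes, under an assumed toy corpus: build a word-word co-occurrence matrix and truncate its SVD to obtain dense, low-dimensional word vectors, the classic LSA-style alternative to one-hot encoding.

    import numpy as np

    # Toy corpus; real applications use large corpora and PPMI weighting.
    corpus = [["the", "cat", "sat"], ["the", "dog", "sat"], ["a", "cat", "ran"]]
    vocab = sorted({w for doc in corpus for w in doc})
    idx = {w: i for i, w in enumerate(vocab)}

    # Symmetric word-word co-occurrence counts within a +/-1 token window.
    C = np.zeros((len(vocab), len(vocab)))
    for doc in corpus:
        for i, w in enumerate(doc):
            for j in range(max(0, i - 1), min(len(doc), i + 2)):
                if j != i:
                    C[idx[w], idx[doc[j]]] += 1.0

    # Factorize with SVD and keep the top-k components: each word gets a
    # dense k-dimensional vector instead of a sparse |V|-dim one-hot.
    U, S, _ = np.linalg.svd(C)
    k = 2
    embeddings = U[:, :k] * S[:k]
    for w in vocab:
        print(w, embeddings[idx[w]].round(2))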

