Results 91 to 100 of about 6,156,320
Are Large Language Models Intelligent? Are Humans?
Claims that large language models lack intelligence are abundant in current AI discourse. To the extent that the claims are supported by arguments, these usually amount to claims that the models (a) lack common sense, (b) know only facts they have been ...
Olle Häggström
doaj +1 more source
The Sensitivity of Language Models and Humans to Winograd Schema Perturbations
Large-scale pretrained language models are the major driving force behind recent improvements in performance on the Winograd Schema Challenge, a widely employed test of common sense reasoning ability. We show, however, with a new diagnostic dataset, that ...
Abdou, Mostafa +5 more
core +1 more source
Should we Stop Training More Monolingual Models, and Simply Use Machine Translation Instead? [PDF]
Most work in NLP makes the assumption that it is desirable to develop solutions in the native language in question. There is consequently a strong trend towards building native language models even for low-resource languages. This paper questions this development, and explores the idea of simply translating the data into English, thereby enabling the ...
arxiv
The LEXOVE prospective study evaluated plasma cell‐free extracellular vesicle (cfEV) dynamics using Bradford assay and dynamic light scattering in metastatic non‐small cell lung cancer patients undergoing first‐line treatments, correlating a ∆cfEV < 20% with improved median progression‐free survival in responders versus non‐responders.
Valerio Gristina +17 more
wiley +1 more source
Images in Language Space: Exploring the Suitability of Large Language Models for Vision & Language Tasks [PDF]
Large language models have demonstrated robust performance on various language tasks using zero-shot or few-shot learning paradigms. While being actively researched, multimodal models that can additionally handle images as input have yet to catch up in size and generality with language-only models.
arxiv
We quantified and cultured circulating tumor cells (CTCs) of 62 patients with various cancer types and generated CTC‐derived tumoroid models from two salivary gland cancer patients. Cellular liquid biopsy‐derived information enabled molecular genetic assessment of systemic disease heterogeneity and functional testing for therapy selection in both ...
Nataša Stojanović Gužvić +31 more
wiley +1 more source
Accelerating materials language processing with large language models
Materials language processing (MLP) can facilitate materials science research by automating the extraction of structured data from research papers. Despite the existence of deep learning models for MLP tasks, there are ongoing practical issues associated ...
Jaewoong Choi, Byungju Lee
doaj +1 more source
The authors applied joint/mixed models that predict mortality of trifluridine/tipiracil‐treated metastatic colorectal cancer patients based on circulating tumor DNA (ctDNA) trajectories. Patients at high risk of death could be spared aggressive therapy with the prospect of a higher quality of life in their remaining lifetime, whereas patients with a ...
Matthias Unseld +7 more
wiley +1 more source
Large Language Models as Kuwaiti Annotators
Stance detection for low-resource languages, such as the Kuwaiti dialect, poses a significant challenge in natural language processing (NLP) due to the scarcity of annotated datasets and specialized tools.
Hana Alostad
doaj +1 more source
Formal Aspects of Language Modeling [PDF]
Large language models have become one of the most commonly deployed NLP inventions. In the past half-decade, their integration into core natural language processing tools has dramatically increased the performance of such tools, and they have entered the public discourse surrounding artificial intelligence.
arxiv