Results 141 to 150 of about 189,149 (342)

Hallucinations in Large Multilingual Translation Models

open access: diamond, 2023
Ricardo Rei   +6 more
openalex   +1 more source

Hallucinations

open access: yes, International Psychogeriatrics, 1997
P. J. Whitehouse   +6 more
openaire   +2 more sources

Differential Effectiveness of Atypical Antipsychotics on Hallucinations

open access: hybrid, 2021
Igne Sinkeviciute   +11 more
openalex   +1 more source

Content-based clustering of hallucinations across sensory modalities in a large online survey

open access: yes, Scientific Reports
Hallucinations can have rather heterogeneous aetiology and presentation. This inspired the concept of different subtypes based on symptom profiles, especially in the field of auditory hallucinations.
Theresa M. Marschall   +4 more
doaj   +1 more source

Self-Supervised and Controlled Multi-Document Opinion Summarization

open access: yes, 2020
We address the problem of unsupervised abstractive summarization of collections of user generated reviews with self-supervision and control. We propose a self-supervised setup that considers an individual document as a target summary for a set of similar …
Coavoux, Maximin   +3 more
core  

Cognitive therapy for command hallucinations: randomised controlled trial [PDF]

open access: bronze, 2004
Peter Trower   +5 more
openalex   +1 more source

Using artificial intelligence thanabots as “thanatobots” to assist anatomy learning and professional development: Ghosts masquerading as opportunity?

open access: yes, Anatomical Sciences Education, EarlyView.
Thanabots—AI‐generated digital representations of deceased donors—could enhance anatomy education by linking medical history with anatomy and fostering humanistic engagement. However, their use poses ethical questions and carries psychological risks, including issues around consent, authenticity, and emotional harm.
Jon Cornwall, Sabine Hildebrandt
wiley   +1 more source

From Illusion to Insight: A Taxonomic Survey of Hallucination Mitigation Techniques in LLMs

open access: yes, AI
Large Language Models (LLMs) exhibit remarkable generative capabilities but remain vulnerable to hallucinations—outputs that are fluent yet inaccurate, ungrounded, or inconsistent with source material.
Ioannis Kazlaris   +3 more
doaj   +1 more source
