The Curious Case of Hallucinations in Neural Machine Translation
In this work, we study hallucinations in Neural Machine Translation (NMT), which lie at an extreme end on the spectrum of NMT pathologies. Firstly, we connect the phenomenon of hallucinations under source perturbation to the Long-Tail theory of Feldman (2020), and present an empirically validated hypothesis that explains hallucinations under source ...
Vikas Raunak et al.
arXiv
Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
Large Vision-Language Models (LVLMs) have advanced considerably, intertwining visual recognition and language understanding to generate content that is not only coherent but also contextually attuned.
Sicong Leng et al.
Semantic Scholar
How Language Model Hallucinations Can Snowball
A major risk of using language models in practical applications is their tendency to hallucinate incorrect statements. Hallucinations are often attributed to knowledge gaps in LMs, but we hypothesize that in some cases, when justifying previously ...
Muru Zhang et al.
Semantic Scholar
LLM Lies: Hallucinations are not Bugs, but Features as Adversarial Examples
Large Language Models (LLMs), including GPT-3.5, LLaMA, and PaLM, seem to be knowledgeable and able to adapt to many tasks. However, we still cannot completely trust their answers, since LLMs suffer from hallucination: fabricating non ...
Jia-Yu Yao et al.
Semantic Scholar
A Stitch in Time Saves Nine: Detecting and Mitigating Hallucinations of LLMs by Validating Low-Confidence Generation
Recently developed large language models have achieved remarkable success in generating fluent and coherent text. However, these models often tend to 'hallucinate', which critically hampers their reliability.
Neeraj Varshney et al.
Semantic Scholar
Hallucinations in Large Multilingual Translation Models
Hallucinated translations can severely undermine trust and raise safety issues when machine translation systems are deployed in the wild. Previous research on the topic focused on small bilingual models trained on high-resource languages, leaving a gap in our ...
Nuno M. Guerreiro et al.
Semantic Scholar
Detecting and Preventing Hallucinations in Large Vision Language Models
Instruction-tuned Large Vision Language Models (LVLMs) have significantly advanced in generalizing across a diverse set of multi-modal tasks, especially for Visual Question Answering (VQA).
A. Gunjal, Jihan Yin, Erhan Bas
Semantic Scholar
Cognitive Mirage: A Review of Hallucinations in Large Language Models
As large language models continue to develop in the field of AI, text generation systems are susceptible to a worrisome phenomenon known as hallucination. In this study, we summarize recent compelling insights into hallucinations in LLMs.
Hongbin Ye et al.
Semantic Scholar
Understanding and Detecting Hallucinations in Neural Machine Translation via Model Introspection
Neural sequence generation models are known to “hallucinate”, by producing outputs that are unrelated to the source text. These hallucinations are potentially harmful, yet it remains unclear in what conditions they arise and how to mitigate their impact.
Weijia Xu et al.
Semantic Scholar
Artificial Hallucinations in ChatGPT: Implications in Scientific Writing
While still in its infancy, ChatGPT (Generative Pretrained Transformer), introduced in November 2022, is bound to hugely impact many industries, including healthcare, medical education, biomedical research, and scientific writing. Implications of ChatGPT ...
H. Alkaissi, Samy I. McFarlane
Semantic Scholar