Trapping LLM Hallucinations Using Tagged Context Prompts [PDF]
Recent advances in large language models (LLMs), such as ChatGPT, have led to highly sophisticated conversation agents. However, these models suffer from "hallucinations," where the model generates false or fabricated information. Addressing this challenge ...
Philip G. Feldman +2 more
semanticscholar +1 more source
Chain of Natural Language Inference for Reducing Large Language Model Ungrounded Hallucinations [PDF]
Large language models (LLMs) can generate fluent natural language texts when given relevant documents as background context. This ability has attracted considerable interest in developing industry applications of LLMs. However, LLMs are prone to generate ...
Deren Lei +6 more
semanticscholar +1 more source
DelucionQA: Detecting Hallucinations in Domain-specific Question Answering [PDF]
Hallucination is a well-known phenomenon in text generated by large language models (LLMs). Hallucinatory responses are found in almost all application scenarios, e.g., summarization, question-answering (QA), etc. For applications requiring ...
Mobashir Sadat +8 more
semanticscholar +1 more source
On the Origin of Hallucinations in Conversational Models: Is it the Datasets or the Models? [PDF]
Knowledge-grounded conversational models are known to suffer from producing factually invalid statements, a phenomenon commonly called hallucination.
Nouha Dziri +4 more
semanticscholar +1 more source
Development and assessment of a brief screening tool for psychosis in dementia
Introduction: Hallucinations and delusions (H+D) are common in dementia, but screening for these symptoms, especially in busy clinical practices, is challenging. Methods: Six subject matter experts developed the DRP3™ screen, a novel, valid tool to detect H+D ...
Jeffrey L. Cummings +7 more
doaj +1 more source
Detecting and Mitigating Hallucinations in Machine Translation: Model Internal Workings Alone Do Well, Sentence Similarity Even Better [PDF]
While the problem of hallucinations in neural machine translation has long been recognized, progress on alleviating it has so far been limited. Indeed, it recently turned out that, without artificially encouraging models to hallucinate, previously ...
David Dale +3 more
semanticscholar +1 more source
Hallucinated but Factual! Inspecting the Factuality of Hallucinations in Abstractive Summarization [PDF]
State-of-the-art abstractive summarization systems often generate hallucinations; i.e., content that is not directly inferable from the source text. Despite being assumed to be incorrect, we find that much hallucinated content is actually consistent with ...
Mengyao Cao, Yue Dong, J. Cheung
semanticscholar +1 more source
ChatGPT: these are not hallucinations – they’re fabrications and falsifications
The artificial intelligence (AI) system, Chat Generative Pre-trained Transformer (ChatGPT), is considered a promising, even revolutionary tool, and its widespread use in health care education, research, and practice is predicted to be inevitable.
R. Emsley
semanticscholar +1 more source
Looking for a Needle in a Haystack: A Comprehensive Study of Hallucinations in Neural Machine Translation [PDF]
Although the problem of hallucinations in neural machine translation (NMT) has received some attention, research on this highly pathological phenomenon lacks solid ground.
Nuno M. Guerreiro +2 more
semanticscholar +1 more source
On Early Detection of Hallucinations in Factual Question Answering [PDF]
While large language models (LLMs) have taken great strides towards helping humans with a plethora of tasks, hallucinations remain a major impediment towards gaining user trust. The fluency and coherence of model generations even when hallucinating makes detection a difficult task.
arxiv +1 more source