
Trust at Your Own Peril: A Mixed Methods Exploration of the Ability of Large Language Models to Generate Expert‐Like Systems Engineering Artifacts and a Characterization of Failure Modes

open access: yes · Systems Engineering, EarlyView.
ABSTRACT Multi‐purpose large language models (LLMs), a subset of generative artificial intelligence (AI), have recently made significant progress. While expectations for LLMs to assist with systems engineering (SE) tasks are paramount, the interdisciplinary and complex nature of systems, along with the need to synthesize deep‐domain knowledge and ...
Taylan G. Topcu   +3 more
wiley   +1 more source

The Gold‐Maker of Animal Oil and Prussian Blue Fame — The Chemical and Medicinal Science Philosophy of Johann Conrad Dippel

open access: yes · The Chemical Record, EarlyView.
The radical Pietist Johann Conrad Dippel was a self‐proclaimed adept – a maker of gold and the philosophers’ stone. He was also a magister of theology, a doctor of medicine, and a self‐taught chemist, who coinvented the pigment Prussian Blue together with Johann von Diesbach, became known for his animal pyrolysis oil, his wonder‐wound balm, his ...
Curt Wentrup
wiley   +1 more source

An ethics module on academic integrity and generative AI

open access: yes · New Directions for Teaching and Learning, EarlyView.
Abstract This article explores the intersection between academic integrity and generative AI (GenAI). It presents a tested framework for a versatile 3‐h module applicable to various disciplines. Since ChatGPT's emergence, GenAI's impact on academic integrity has raised concerns, challenged established norms, and blurred lines of authorship.
Christopher Hill, Jace Hargis
wiley   +1 more source

Unveiling large multimodal models in pulmonary CT: A comparative assessment of generative AI performance in lung cancer diagnostics

open access: yes · VIEW, EarlyView.
1. The emergence of generative artificial intelligence (Gen‐AI) requires rigorous validation to assess its diagnostic reliability and limitations.
2. Three Gen‐AI models (GPT‐4‐turbo, Gemini‐pro‐vision, and Claude‐3‐opus) performed inconsistently across different diagnostic environments, demonstrating significant internal variability and overall ...
Lihaoyun Huang   +17 more
wiley   +1 more source

Acquired narcolepsy secondary to a presumptive hypothalamic hamartoma in a young German wirehaired pointer dog

open access: yes · Veterinary Record Case Reports, EarlyView.
Abstract A 3‐year‐old, male, entire, German wirehaired pointer dog was presented with a 2‐year history of paroxysmal episodes of collapse associated with reduced levels of consciousness. A magnetic resonance imaging study identified a single, ill‐defined, non‐contrast‐enhancing, intra‐axial mass lesion involving the hypothalamus.
Callum Atkins   +3 more
wiley   +1 more source

Large Language Models With Contrastive Decoding Algorithm for Hallucination Mitigation in Low‐Resource Languages

open access: yes · CAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Neural machine translation (NMT) has advanced with deep learning and large‐scale multilingual models, yet low‐resource languages often lack sufficient training data, which leads to hallucinations: translated content that diverges significantly from the source text.
Zan Hongying   +4 more
wiley   +1 more source
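
The title names contrastive decoding as the mitigation mechanism. As a rough illustration of the general technique (not necessarily this paper's exact formulation), a contrastive decoding step scores each candidate token by the gap between a stronger "expert" model's log-probability and a weaker "amateur" model's, subject to a plausibility cutoff. A minimal sketch; the hyperparameter names are illustrative, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def contrastive_decode_step(expert_logits, amateur_logits, alpha=0.1, beta=1.0):
    """One step of generic contrastive decoding.

    expert_logits / amateur_logits: 1-D vocabulary logits from a stronger
    and a weaker model. alpha sets a plausibility cutoff relative to the
    expert's most likely token; beta scales the amateur penalty. Both
    hyperparameters are illustrative defaults.
    """
    expert_logp = F.log_softmax(expert_logits, dim=-1)
    amateur_logp = F.log_softmax(amateur_logits, dim=-1)

    # Plausibility constraint: only tokens the expert itself finds
    # reasonably likely stay eligible, so the penalty cannot promote
    # otherwise implausible tokens.
    cutoff = expert_logp.max() + torch.log(torch.tensor(alpha))
    scores = expert_logp - beta * amateur_logp
    scores = scores.masked_fill(expert_logp < cutoff, float("-inf"))

    return int(scores.argmax())  # greedy choice; sampling also works
```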
Some of the following articles may not be open access.

Can Knowledge Graphs Reduce Hallucinations in LLMs? : A Survey

North American Chapter of the Association for Computational Linguistics, 2023
Contemporary LLMs are prone to producing hallucinations, stemming mainly from knowledge gaps within the models. To address this critical limitation, researchers employ diverse strategies to augment the LLMs by incorporating external knowledge ...
Garima Agrawal   +3 more
semanticscholar   +1 more source
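
Among the augmentation strategies a survey like this covers, the simplest is retrieval-style grounding: fetch knowledge-graph triples about the entities mentioned in the question and place them in the prompt so the model answers from grounded context rather than parametric memory alone. A minimal sketch, with a toy triple list standing in for a real knowledge graph:

```python
# Toy triple store; a real system would query a KG such as Wikidata.
KG = [
    ("Marie Curie", "won", "Nobel Prize in Physics"),
    ("Marie Curie", "born_in", "Warsaw"),
]

def retrieve_triples(question: str):
    """Naive entity match: keep triples whose subject appears in the question."""
    return [t for t in KG if t[0].lower() in question.lower()]

def kg_augmented_prompt(question: str) -> str:
    facts = "\n".join(f"- {s} {r.replace('_', ' ')} {o}"
                      for s, r, o in retrieve_triples(question))
    return (f"Answer using only these facts:\n{facts}\n\n"
            f"Question: {question}\nAnswer:")

print(kg_augmented_prompt("Where was Marie Curie born?"))
```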

Beyond Hallucinations: Enhancing LVLMs through Hallucination-Aware Direct Preference Optimization

arXiv.org, 2023
Multimodal large language models have made significant advancements in recent years, yet they still suffer from a common issue known as the "hallucination problem", in which the models generate textual descriptions that inaccurately depict or entirely ...
Zhiyuan Zhao   +5 more
semanticscholar   +1 more source
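
The method builds on direct preference optimization (DPO). For reference, the vanilla DPO loss on a single preference pair (for instance, a faithful image description preferred over a hallucinated one) is sketched below; the paper's hallucination-aware variant modifies this objective, and the numbers in the usage lines are made up:

```python
import torch
import torch.nn.functional as F

def dpo_loss(logp_w, logp_l, ref_logp_w, ref_logp_l, beta=0.1):
    """Vanilla DPO objective on one preference pair.

    logp_w / logp_l: summed token log-probs of the preferred response
    (e.g. a faithful description) and the dispreferred one (e.g. a
    hallucinated description) under the policy being trained.
    ref_logp_w / ref_logp_l: the same quantities under a frozen
    reference model. beta controls the implicit KL penalty strength.
    """
    margin = (logp_w - ref_logp_w) - (logp_l - ref_logp_l)
    return -F.logsigmoid(beta * margin)

# Toy usage with made-up log-probabilities:
loss = dpo_loss(torch.tensor(-12.0), torch.tensor(-15.0),
                torch.tensor(-13.0), torch.tensor(-14.5))
print(float(loss))  # small loss: the policy already prefers the faithful response
```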

Alleviating Hallucinations of Large Language Models through Induced Hallucinations

North American Chapter of the Association for Computational Linguistics, 2023
Despite their impressive capabilities, large language models (LLMs) have been observed to generate responses that include inaccurate or fabricated information, a phenomenon commonly known as "hallucination".
Yue Zhang   +3 more
semanticscholar   +1 more source
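
The title suggests a contrast-with-induced-hallucinations scheme: construct a deliberately hallucination-prone counterpart of the model, then penalize tokens that counterpart favors at decoding time. A minimal sketch of the penalty step only, with an illustrative weight gamma; how the induced model is built (e.g. fine-tuning on fabricated data) is out of scope here:

```python
import torch.nn.functional as F

def contrast_with_induced_step(base_logits, induced_logits, gamma=0.5):
    """Score tokens by the base model's log-probs minus those of a
    hallucination-prone counterpart of the same model. Unlike classic
    contrastive decoding, the penalizing model is not a smaller LLM but
    a version of the base model induced to hallucinate. gamma is an
    illustrative penalty weight, not the paper's setting.
    """
    base_logp = F.log_softmax(base_logits, dim=-1)
    induced_logp = F.log_softmax(induced_logits, dim=-1)
    return int((base_logp - gamma * induced_logp).argmax())
```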

Mitigating Large Language Model Hallucinations via Autonomous Knowledge Graph-based Retrofitting

AAAI Conference on Artificial Intelligence, 2023
Incorporating factual knowledge from knowledge graphs is regarded as a promising approach for mitigating the hallucinations of large language models (LLMs).
Xinyan Guan   +6 more
semanticscholar   +1 more source
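
One way to read "retrofitting" here is as a verify-and-revise loop: extract factual claims from a draft answer, check each against the knowledge graph, and replace any contradicted claim with the graph's fact. A minimal sketch under that reading, with a toy dictionary standing in for the KG and claim extraction assumed to have already happened; the paper's actual pipeline is more elaborate:

```python
# Toy KG mapping (subject, relation) -> object.
KG = {("Paris", "capital_of"): "France"}

def check(subject: str, relation: str, obj: str) -> bool:
    """True if the KG records exactly this triple."""
    return KG.get((subject, relation)) == obj

def retrofit(draft_claims):
    """Keep KG-consistent claims, correct contradicted ones."""
    revised = []
    for s, r, o in draft_claims:
        if check(s, r, o):
            revised.append((s, r, o))          # claim agrees with the KG
        else:
            truth = KG.get((s, r))
            if truth is not None:
                revised.append((s, r, truth))  # replace with the KG fact
            # claims the KG cannot verify are dropped here for simplicity
    return revised

print(retrofit([("Paris", "capital_of", "Germany")]))
# -> [('Paris', 'capital_of', 'France')]
```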
