Results 61 to 70 of about 186,001 (296)
Grounding Large Language Models for Robot Task Planning Using Closed‐Loop State Feedback
BrainBody‐Large Language Model (LLM) introduces a hierarchical, feedback‐driven planning framework where two LLMs coordinate high‐level reasoning and low‐level control for robotic tasks. By grounding decisions in real‐time state feedback, it reduces hallucinations and improves task reliability.
Vineet Bhat +4 more
wiley +1 more source
LLM Hallucination: The Curse That Cannot Be Broken
Artificial intelligence chatbots (e.g., ChatGPT, Claude, and Llama), also known as large language models (LLMs), are continually evolving into an essential part of the digital tools we use, but are plagued by the phenomenon of hallucination ...
Hussein Al-Mahmood
doaj +1 more source
Set-LLM: A Permutation-Invariant LLM
While large language models (LLMs) demonstrate impressive capabilities across numerous applications, their robustness remains a critical concern. This paper is motivated by a specific vulnerability: the order sensitivity of LLMs. This vulnerability manifests itself as the order bias observed when LLMs decide between possible options (for example, a ...
Egressy, Beni, Stühmer, Jan
openaire +2 more sources
Chronic ethanol feeding alters miRNA expression dynamics during liver regeneration. [PDF]
BACKGROUND: Adaptation to chronic ethanol (EtOH) treatment of rats results in a changed functional state of the liver and greatly inhibits its regenerative ability, which may contribute to the progression of alcoholic liver disease.
Dippold, Rachael P +4 more
core +2 more sources
Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki +2 more
wiley +1 more source
Introduction. The relevance of the study lies primarily in the increasingly widespread use, by the broadest circles of users, of so-called LLMs (Large Language Models) to generate texts of different genres, properties and volumes ...
S. V. Gusarenko, M. K. Gusarenko
doaj +1 more source
Folding thermodynamics of three beta-sheet peptides: A model study
We study the folding thermodynamics of a beta-hairpin and two three-stranded beta-sheet peptides using a simplified sequence-based all-atom model, in which folding is driven mainly by backbone hydrogen bonding and effective hydrophobic attraction.
Irbäck, Anders, Sjunnesson, Fredrik
core +1 more source
A Monte-Carlo study of the AdS/CFT correspondence: an exploration of quantum gravity effects [PDF]
In this paper we study the AdS/CFT correspondence for N=4 SYM with gauge group U(N), compactified on S^3 in four dimensions using Monte-Carlo techniques.
A. Donos +36 more
core +2 more sources
We introduce AutomataGPT, a generative pretrained transformer (GPT) trained on synthetic spatiotemporal data from 2D cellular automata to learn symbolic rules. Demonstrating strong performance on both forward and inverse tasks, AutomataGPT establishes a scalable, domain‐agnostic framework for interpretable modeling, paving the way for future ...
Jaime A. Berkovich +2 more
wiley +1 more source
Improving Performance of Local Chatbot with Caching [PDF]
Chatbots and the technology behind them are widely used in many places and in various ways. The Retrieval-Augmented Generation (RAG) AI framework has gained popularity by linking large language models with private datasets.
John Jenq
doaj
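The caching idea behind this entry can be sketched in a few lines: memoize answers keyed by a normalized query hash, so repeated questions skip the expensive retrieve-and-generate step. The class name, the normalization rule, and the stand-in `generate` callable below are illustrative assumptions, not the paper's actual implementation.

```python
import hashlib


class CachedChatbot:
    """Minimal sketch of response caching for a local RAG-style chatbot.

    `generate` stands in for the expensive retrieve-and-generate step; a
    real pipeline would embed the query, fetch documents, and call an LLM.
    """

    def __init__(self, generate):
        self._generate = generate
        self._cache = {}
        self.hits = 0
        self.misses = 0

    def _key(self, query: str) -> str:
        # Normalize before hashing so trivially different spellings of the
        # same question share one cache entry.
        return hashlib.sha256(query.strip().lower().encode()).hexdigest()

    def ask(self, query: str) -> str:
        key = self._key(query)
        if key in self._cache:
            self.hits += 1
            return self._cache[key]
        self.misses += 1
        answer = self._generate(query)
        self._cache[key] = answer
        return answer


bot = CachedChatbot(lambda q: f"answer to: {q}")
bot.ask("What is RAG?")
bot.ask("what is rag?   ")  # normalized to the same cache key: served from cache
print(bot.hits, bot.misses)  # 1 hit, 1 miss
```

A production variant would bound the cache (e.g. LRU eviction) and could also cache the retrieval step separately from generation, since retrieved context is often reusable across paraphrased queries.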

