
Grounding Large Language Models for Robot Task Planning Using Closed‐Loop State Feedback

open access: yes | Advanced Robotics Research, EarlyView.
BrainBody‐LLM introduces a hierarchical, feedback‐driven planning framework in which two large language models (LLMs) coordinate high‐level reasoning and low‐level control for robotic tasks. By grounding decisions in real‐time state feedback, it reduces hallucinations and improves task reliability.
Vineet Bhat   +4 more
wiley

Continual Learning for Multimodal Data Fusion of a Soft Gripper

open access: yes | Advanced Robotics Research, EarlyView.
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm that incrementally learns new data modalities by combining class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley

Auditory–Tactile Congruence for Synthesis of Adaptive Pain Expressions in RoboPatients

open access: yes | Advanced Robotics Research, EarlyView.
In this work, we explore auditory–tactile congruence for synthesizing adaptive vocal pain expressions in robopatients. Using a robopatient platform that integrates vocal pain sounds with palpation forces, we conducted 7680 trials across 20 participants.
Saitarun Nadipineni   +4 more
wiley

A Multidirectional Textile Interface for Remote Control Using Dynamic Area‐Based Capacitance Modulation

open access: yes | Advanced Robotics Research, EarlyView.
Here, we present a wearable, textile capacitive interface that enables multidirectional remote control by dynamically modulating electrode overlap and spacing via a freely gliding upper electrode. A forearm‐mounted prototype drives robotic and media tasks with 12–15 ms latency, maintains < 0.8% drift after 500 cycles, and remains stably functional at 90 ...
Cagatay Gumus   +8 more
wiley

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yes | Advanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley

MicroRoboScope: A Portable and Integrated Mechatronic Platform for Magnetic and Acoustic Microrobotic Experimentation

open access: yes | Advanced Robotics Research, EarlyView.
This work presents the MicroRoboScope, a highly integrated, compact, and portable platform for microrobotic experimentation that combines electromagnetic and acoustic actuation with real‐time visual feedback in a single, end‐to‐end device. The system enables closed‐loop control and tracking‐algorithm experimentation within an accessible and unified hardware ...
Max Sokolich   +4 more
wiley
