Results 181 to 190 of about 8,126,261 (333)
Auditory–Tactile Congruence for Synthesis of Adaptive Pain Expressions in RoboPatients
In this work, we explore auditory–tactile congruence for synthesizing adaptive vocal pain expressions in robopatients. Using a robopatient platform that integrates vocal pain sounds with palpation forces, we conducted 7680 trials across 20 participants.
Saitarun Nadipineni +4 more
wiley +1 more source
Interpersonal Affective Touch in a Virtual World: Feeling the Social Presence of Others to Overcome Loneliness. [PDF]
Della Longa L, Valori I, Farroni T.
europepmc +1 more source
Compliant Pneumatic Feet with Real‐Time Stiffness Adaptation for Humanoid Locomotion
A compliant pneumatic foot with real‐time variable stiffness enables humanoid robots to adapt to changing terrains. Using onboard vision and pressure control, the foot modulates stiffness within each gait cycle, reducing impact forces and improving balance. The design, cast in soft silicone with embedded air chambers and Kevlar wrapping, offers durable, …
Irene Frizza +3 more
wiley +1 more source
We live in a virtual world: Training the trainee using an integrated visual reality simulator curriculum. [PDF]
Mooney SS +7 more
europepmc +1 more source
An enhanced universal gripper combining rigid mechanics with self‐adaptable fingers is presented for industrial automation. The novel six‐bar linkage with integrated compliant pad eliminates mechanical interference while enabling passive shape adaptation.
Muhammad Usman Khalid +7 more
wiley +1 more source
This work presents a robotic control method for human–robot collaborative assembly based on a biomechanics‐constrained digital human model. Reinforcement learning is used to generate physiologically plausible human motion trajectories, which are integrated into a virtual environment for robot control learning.
Bitao Yao +4 more
wiley +1 more source
Growing up in a virtual world - A new look for transitioning to adult transplant care. [PDF]
Chandran MM +5 more
europepmc +1 more source
Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki +2 more
wiley +1 more source
Real-World Persuasion From Virtual-World Campaigns
Christopher N. Burrows, H. Blanton
semanticscholar +1 more source