
Waveguide Photoactuators: Materials, Fabrication, and Applications

open access: yes. Advanced Robotics Research, EarlyView.
Waveguide photoactuators convert guided light into mechanical motion. Their tethered‐flexible design enables minimally invasive surgery and confined‐space robotics. This review aims to guide materials selection, device design, and system integration, accelerating the transition of waveguide photoactuators from laboratory prototypes to versatile ...
Minjie Xi   +4 more
wiley   +1 more source

Liquid Crystalline Elastomers in Soft Robotics: Assessing Promise and Limitations

open access: yes. Advanced Robotics Research, EarlyView.
Liquid crystalline elastomers (LCEs) are programmable soft materials that undergo large, anisotropic deformation in response to external stimuli. Their molecular alignment encodes directional actuation in a monolithic structure, making them long‐standing candidates for soft robotic systems.
Justin M. Speregen, Timothy J. White
wiley   +1 more source

A Multidirectional Textile Interface for Remote Control Using Dynamic Area‐Based Capacitance Modulation

open access: yes. Advanced Robotics Research, EarlyView.
Here, we present a textile, wearable capacitive interface enabling multidirectional remote control by dynamically modulating electrode overlap and spacing via a freely gliding upper electrode. A forearm‐mounted prototype drives robotic and media tasks with 12–15 ms latency, maintains < 0.8% drift after 500 cycles, and remains stably functional at 90 ...
Cagatay Gumus   +8 more
wiley   +1 more source
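The area-based modulation described in this abstract can be illustrated with the standard parallel-plate model, where capacitance scales with electrode overlap area and inversely with gap. This is a minimal sketch under assumed dimensions and permittivity, not the authors' device model:

```python
# Parallel-plate sketch of area-based capacitance modulation.
# All dimensions and eps_r are illustrative assumptions, not from the paper.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(overlap_area_m2: float, gap_m: float, eps_r: float = 3.0) -> float:
    """C = eps0 * eps_r * A / d for an ideal parallel-plate capacitor."""
    return EPS0 * eps_r * overlap_area_m2 / gap_m

# Sliding the upper electrode changes the overlap area, and hence C,
# which is the signal the interface reads out for directional control.
baseline = capacitance(4e-4, 1e-3)  # 2 cm x 2 cm overlap, 1 mm gap
slid = capacitance(2e-4, 1e-3)      # half the overlap after a glide
assert slid < baseline
```

Halving the overlap halves the capacitance, so tracking C against a baseline gives a direct, monotonic measure of how far the gliding electrode has moved.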

Robotic Control for Human–Robot Collaborative Assembly Based on Digital Human Model and Reinforcement Learning

open access: yes. Advanced Robotics Research, EarlyView.
This work presents a robotic control method for human–robot collaborative assembly based on a biomechanics‐constrained digital human model. Reinforcement learning is used to generate physiologically plausible human motion trajectories, which are integrated into a virtual environment for robot control learning.
Bitao Yao   +4 more
wiley   +1 more source

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yes. Advanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley   +1 more source
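The safety-aware arbitration this abstract mentions can be sketched as a simple priority rule; the function name and logic below are assumptions for illustration, not the authors' implementation:

```python
# Hypothetical arbitration rule: gesture commands pre-empt voice commands,
# so a recognized gesture (e.g. a stop sign) can intervene immediately
# even while a longer voice utterance is being transcribed.
from typing import Optional

def arbitrate(gesture_cmd: Optional[str], voice_cmd: Optional[str]) -> Optional[str]:
    """Return the command to execute; gesture input has priority."""
    return gesture_cmd if gesture_cmd is not None else voice_cmd

# A recognized stop gesture overrides a concurrent voice command.
assert arbitrate("STOP", "move forward") == "STOP"
# With no gesture present, the voice command passes through.
assert arbitrate(None, "move forward") == "move forward"
```

Giving the low-latency modality (gesture) unconditional priority is what makes rapid intervention possible even when speech-to-text is still processing.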
