
A Proof‐of‐Concept Assessment of a Novel Wearable Eyelid Muscle Device: A Pre‐Clinical Animal Cadaver Study for Eyelid Closure Restoration

open access: yes. Advanced Robotics Research, EarlyView.
This article introduces a soft wearable eyelid sling device incorporating a hydraulic soft artificial muscle (SAM) for achieving complete closure of an eyelid. The SAM is driven by a cam mechanism that provides a displacement profile closely matched to that of a healthy eyelid.
Patrick Pruscino   +7 more
wiley   +1 more source

Hard‐Magnetic Soft Millirobots in Underactuated Systems

open access: yes. Advanced Robotics Research, EarlyView.
This review provides a comprehensive overview of hard‐magnetic soft millirobots in underactuated systems. It examines key advances in structural design, physics‐informed modeling, and control strategies, while highlighting the interplay among these domains.
Qiong Wang   +4 more
wiley   +1 more source

Identifying Physical Interactions in Contact‐Based Robot Manipulation for Learning from Demonstration

open access: yes. Advanced Robotics Research, EarlyView.
Robots can learn manipulation tasks from human demonstrations. This work proposes a versatile method to identify the physical interactions that occur in a demonstration, such as sequences of different contacts and interactions with mechanical constraints.
Alex Harm Gert‐Jan Overbeek   +3 more
wiley   +1 more source

TacScope: A Miniaturized Vision‐Based Tactile Sensor for Surgical Applications

open access: yes. Advanced Robotics Research, EarlyView.
TacScope is a compact, vision‐based tactile sensor designed for robot‐assisted surgery. By leveraging a curved elastomer surface with pressure‐sensitive particle redistribution, it captures high‐resolution 3D tactile feedback. TacScope enables accurate tumor detection and shape classification beneath soft tissue phantoms, offering a scalable, low‐cost …
Md Rakibul Islam Prince   +3 more
wiley   +1 more source

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yes. Advanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley   +1 more source
