Results 181 to 190 of about 560,724

Multimodal Locomotion in Insect‐Inspired Microrobots: A Review of Strategies for Aerial, Surface, Aquatic, and Interfacial Motion

open access: yes, Advanced Robotics Research, EarlyView.
This review identifies key design considerations for insect‐inspired microrobots capable of multimodal locomotion. To draw inspiration, biological and robotic strategies for moving in air, on water surfaces, and underwater are examined, along with approaches for crossing the air–water interface.
Mija Jovchevska   +2 more
wiley   +1 more source

A provincial cost analysis of electric vehicle operation in China. [PDF]

open access: yes, iScience
Li B   +7 more
europepmc   +1 more source

Continual Learning for Multimodal Data Fusion of a Soft Gripper

open access: yes, Advanced Robotics Research, EarlyView.
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities, leveraging both class-incremental and domain-incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley   +1 more source

A Soft Robotic Fish With a Dielectric Elastomer Actuator Body and Negative Stiffness Spine

open access: yes, Advanced Robotics Research, EarlyView.
This work introduces a bio‐mimetic soft robotic fish driven by fiber‐reinforced dielectric elastomer actuators integrated as its body. By prestretching this active skin against a flexible spine, a negative stiffness system is created, enabling large‐amplitude bending.
Markus Koenigsdorff   +4 more
wiley   +1 more source

Robotic Control for Human–Robot Collaborative Assembly Based on Digital Human Model and Reinforcement Learning

open access: yes, Advanced Robotics Research, EarlyView.
This work presents a robotic control method for human–robot collaborative assembly based on a biomechanics‐constrained digital human model. Reinforcement learning is used to generate physiologically plausible human motion trajectories, which are integrated into a virtual environment for robot control learning.
Bitao Yao   +4 more
wiley   +1 more source

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yes, Advanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley   +1 more source
