Results 211 to 220 of about 564,038

Bidirectional Process Prediction in the Laser‐Induced‐Graphene Production Using Blackbox Deep Learning

open access: yesAdvanced Materials Technologies, EarlyView.
This study shows that a lightweight blackbox neural network provides a practical, cost‐effective solution for bidirectional process prediction in laser‐induced graphene (LIG) fabrication. Achieving high predictive performance with minimal overhead, the approach democratizes machine learning (ML) for resource‐limited environments.
Maxim Polomoshnov   +3 more
wiley   +1 more source

Learning Highly Dynamic Skills Transition for Quadruped Jumping Through Constrained Space

open access: yesAdvanced Robotics Research, EarlyView.
A quadruped robot masters dynamic jumps through constrained spaces with animal‐inspired moves and intelligent vision control. This hierarchical learning approach combines imitation of biological agility with real‐time trajectory planning. Although legged animals are capable of performing explosive motions while traversing confined spaces, replicating ...
Zeren Luo   +6 more
wiley   +1 more source

ChicGrasp: Imitation‐Learning‐Based Customized Dual‐Jaw Gripper Control for Manipulation of Delicate, Irregular Bio‐Products

open access: yesAdvanced Robotics Research, EarlyView.
Automated poultry processing lines still rely on humans to lift slippery, easily bruised carcasses onto a shackle conveyor. Deformability, anatomical variance, and hygiene rules make conventional suction and scripted motions unreliable. We present ChicGrasp, an end‐to‐end hardware‐software co‐designed imitation learning framework, to offer a ...
Amirreza Davar   +8 more
wiley   +1 more source

Robotic Control for Human–Robot Collaborative Assembly Based on Digital Human Model and Reinforcement Learning

open access: yesAdvanced Robotics Research, EarlyView.
This work presents a robotic control method for human–robot collaborative assembly based on a biomechanics‐constrained digital human model. Reinforcement learning is used to generate physiologically plausible human motion trajectories, which are integrated into a virtual environment for robot control learning.
Bitao Yao   +4 more
wiley   +1 more source

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yesAdvanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley   +1 more source
