Results 131 to 140 of about 78,230 (294)
Automated poultry processing lines still rely on humans to lift slippery, easily bruised carcasses onto a shackle conveyor. Deformability, anatomical variance, and hygiene rules make conventional suction and scripted motions unreliable. We present ChicGrasp, an end‐to‐end hardware‐software co‐designed imitation learning framework, to offer a ...
Amirreza Davar +8 more
wiley +1 more source
Compliant Pneumatic Feet with Real‐Time Stiffness Adaptation for Humanoid Locomotion
A compliant pneumatic foot with real‐time variable stiffness enables humanoid robots to adapt to changing terrains. Using onboard vision and pressure control, the foot modulates stiffness within each gait cycle, reducing impact forces and improving balance. The design, cast in soft silicone with embedded air chambers and Kevlar wrapping, offers durable, ...
Irene Frizza +3 more
wiley +1 more source
Here, we present a textile, wearable capacitive interface enabling multidirectional remote control by dynamically modulating electrode overlap and spacing via a freely gliding upper electrode. A forearm‐mounted prototype drives robotic and media tasks with 12–15 ms latency, maintains < 0.8% drift after 500 cycles, and remains stably functional at 90 ...
Cagatay Gumus +8 more
wiley +1 more source
Cooperative Training of Deep Aggregation Networks for RGB-D Action Recognition
A novel deep neural network training paradigm that exploits the conjoint information in multiple heterogeneous sources is proposed. Specifically, in an RGB-D-based action recognition task, it cooperatively trains a single convolutional neural network ...
Li, Wanqing +4 more
core +1 more source
This work presents a robotic control method for human–robot collaborative assembly based on a biomechanics‐constrained digital human model. Reinforcement learning is used to generate physiologically plausible human motion trajectories, which are integrated into a virtual environment for robot control learning.
Bitao Yao +4 more
wiley +1 more source
Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki +2 more
wiley +1 more source
Miniaturized Magnetic Tip Design for Endoluminal Vine Robot Navigation
A magnetic tip mount is designed for a miniaturized 7 mm soft‐growing vine robot to enable wireless magnetic steering and onboard imaging, while preserving a 3 mm working channel. The internal–external ring magnet design balances magnetic attachment with low eversion pressure. Experiments demonstrate ±90° steering, a 34 mm bending radius, and successful ...
Andrea Yanez Trujillo +4 more
wiley +1 more source
By integrating data from in vitro, ex vivo, and in vivo models, our research identifies the Marburg virus (MARV) glycoprotein as a key hemorrhagic factor, filling a major gap in this important field. It also provides practical experimental tools for basic research on viral pathogenesis and applied research aimed at antiviral intervention for hemorrhagic ...
Ting Yao +11 more
wiley +1 more source
Due to rapid advances in computer vision and deep learning, human action recognition has become one of the most representative tasks in video understanding.
Yumin Zhang, Yanyong Wang
doaj +1 more source
Salient Object Detection in RGB-D Videos
IEEE TIP (under major revision)
Ao Mou +5 more
openaire +3 more sources

