Results 101 to 110 of about 103,607
Due to rapid advances in computer vision and deep learning, human action recognition has become one of the most representative tasks in video understanding.
Yumin Zhang, Yanyong Wang
doaj +1 more source
Context Awareness and Human–Robot Interaction Optimization for Museum Intelligent Guide Robot
This study presents a context‐aware human–robot interaction framework designed for intelligent museum guide robots. The system features a three‐layer architecture—perception, understanding, and behavior execution—that enables adaptive and meaningful interactions with museum visitors.
Anna Zou, Yue Meng, Shijing Tong
wiley +1 more source
DGU-HAO: A Dataset With Daily Life Objects for Comprehensive 3D Human Action Analysis
The importance of high-quality dataset availability in 3D human action analysis research cannot be overstated. This paper introduces DGU-HAO (Human Action analysis dataset with daily life Objects).
Jiho Park +3 more
doaj +1 more source
Temporally Consistent Motion Segmentation from RGB-D Video
We present a method for temporally consistent motion segmentation from RGB-D videos assuming a piecewise rigid motion model. We formulate global energies over entire RGB-D sequences in terms of the segmentation of each frame into a number of objects, and the rigid motion of each object through the sequence.
Bertholet, Peter-Immanuel +1 more
openaire +2 more sources
GraphNeuralCloth: A Graph‐Neural‐Network‐Based Framework for Non‐Skinning Cloth Simulation
This study presents a cloth motion capture system and a point‐cloud‐to‐mesh processing method to support the prediction of real‐world fabric deformation. GraphNeuralCloth, a graph‐neural‐network (GNN)‐based framework, is also proposed to estimate cloth morphology changes in real time.
Yingqi Li +9 more
wiley +1 more source
Optical Flow Enables Hand Tracking With EyeGlove Low‐Cost Cameras in Confined Environments
A cost‐effective (<£150) hand‐wearable stereo vision system, EyeGlove, is proposed to support visual inspection in confined environments. The system integrates disjointed low‐cost cameras, enabling dexterous camera manipulation, and a wearable display unit for real‐time interaction.
Erhui Sun +3 more
wiley +1 more source
Object detection and tracking on UAV RGB videos for early extraction of grape phenotypic traits
Mar Ariza-Sentís +3 more
openalex +1 more source
MorpheuS: Neural Dynamic 360° Surface Reconstruction from Monocular RGB-D Video [PDF]
Hengyi Wang +2 more
openalex +1 more source
Artificial intelligence (AI) is reshaping autonomous mobile robot navigation beyond classical pipelines. This review analyzes how AI techniques are integrated into core navigation tasks, including path planning and control, localization and mapping, perception, and context‐aware decision‐making. Learning‐based, probabilistic, and soft‐computing methods ...
Giovanna Guaragnella +5 more
wiley +1 more source
Cooperative Training of Deep Aggregation Networks for RGB-D Action Recognition
A novel deep neural network training paradigm that exploits the conjoint information in multiple heterogeneous sources is proposed. Specifically, in an RGB-D-based action recognition task, it cooperatively trains a single convolutional neural network ...
Li, Wanqing +4 more
core +1 more source