Results 41 to 50 of about 440,945
Dynamic Equilibrium Module for Action Recognition
Temporal variations, such as sudden motion, acceleration and occlusions, occur frequently in real-world videos and force video-modeling networks to account for them. However, often they are not beneficial for recognizing actions at coarse granularity and ...
Qili Zeng, M. Ozan Tezcan, Janusz Konrad
doaj +1 more source
Inverse Dynamics for Action Recognition [PDF]
Pose-based approaches for human action recognition are attractive owing to their accurate use of human motion information. Traditionally, such approaches used kinematic features for classification. However, in addition to having high dimensions and a small interclass variation, kinematic features do not consider the interaction of the environment on ...
Al Mansur +2 more
openaire +2 more sources
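The entry above contrasts kinematic features with dynamics-derived ones. As a rough illustration of that general idea (not the authors' method), the sketch below turns a joint-angle sequence into per-joint torque estimates via finite-difference acceleration and τ = I·θ̈; the frame rate, inertia value, and function name are assumptions made up for this example.

```python
import numpy as np

def inverse_dynamics_features(joint_angles, dt=1.0 / 30.0, inertia=1.0):
    """Rough inverse-dynamics-style features from a joint-angle sequence.

    joint_angles: array of shape (T, J), joint angles in radians over T frames.
    Returns per-frame velocity/torque estimates of shape (T - 2, 2 * J) using
    tau = I * theta''. Illustrative simplification, not a full rigid-body solver.
    """
    # Angular velocity and acceleration by central finite differences.
    vel = (joint_angles[2:] - joint_angles[:-2]) / (2.0 * dt)
    acc = (joint_angles[2:] - 2.0 * joint_angles[1:-1] + joint_angles[:-2]) / dt**2
    torque = inertia * acc            # tau = I * angular acceleration, per joint
    return np.hstack([vel, torque])   # concatenate into a simple per-frame descriptor

if __name__ == "__main__":
    T, J = 60, 15                                   # 2 s of 15-joint poses at 30 fps (made up)
    angles = np.cumsum(np.random.randn(T, J) * 0.01, axis=0)
    feats = inverse_dynamics_features(angles)
    print(feats.shape)                              # (58, 30)
```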
Learning 3D Skeletal Representation From Transformer for Action Recognition
Skeleton-based human action recognition has attracted significant interest due to its simplicity and good accuracy. Diverse end-to-end trainable frameworks based on skeletal representation have been proposed so far to map the representation to human ...
Junuk Cha +5 more
doaj +1 more source
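As a hedged sketch of what mapping a skeletal representation to action labels with a Transformer can look like (a generic baseline, not the framework proposed in the entry above), the following PyTorch module treats each frame's flattened 3D joints as one token; the joint count, model width, and class count are assumed values.

```python
import torch
import torch.nn as nn

class SkeletonTransformer(nn.Module):
    """Minimal sketch: encode a 3D-skeleton sequence with a Transformer encoder
    and classify the action. Positional encoding is omitted for brevity."""
    def __init__(self, num_joints=25, num_classes=60, d_model=128, num_layers=4):
        super().__init__()
        self.embed = nn.Linear(num_joints * 3, d_model)        # (x, y, z) per joint -> token
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.head = nn.Linear(d_model, num_classes)

    def forward(self, skeletons):                              # (batch, frames, joints, 3)
        b, t, j, c = skeletons.shape
        tokens = self.embed(skeletons.reshape(b, t, j * c))    # (batch, frames, d_model)
        encoded = self.encoder(tokens)                         # temporal self-attention
        return self.head(encoded.mean(dim=1))                  # pool over frames, classify

if __name__ == "__main__":
    model = SkeletonTransformer()
    logits = model(torch.randn(2, 32, 25, 3))                  # 2 clips, 32 frames, 25 joints
    print(logits.shape)                                        # torch.Size([2, 60])
```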
Instant Action Recognition [PDF]
In this paper, we present an efficient system for action recognition from very short sequences. Typically, the appearance and/or motion information of an action is analyzed over a large number of frames. This is a limitation if very fast actions (e.g., in sports analysis) have to be analyzed.
Thomas Mauthner +2 more
openaire +1 more source
Model recommendation for action recognition [PDF]
Simply choosing one model out of a large set of possibilities for a given vision task is a surprisingly difficult problem, especially if there is limited evaluation data with which to distinguish among models, such as when choosing the best “walk” action classifier from a large pool of classifiers tuned for different viewing angles, lighting conditions,
Pyry Matikainen +2 more
openaire +1 more source
Data Mining for Action Recognition [PDF]
In recent years, dense trajectories have been shown to be an efficient representation for action recognition and have achieved state-of-the-art results on a variety of increasingly difficult datasets. However, while the features have greatly improved recognition scores, the training process and machine learning used have not, in general, deviated from the ...
A. Gilbert, R. Bowden
openaire +3 more sources
Weakly supervised instance action recognition
We study the novel problem of weakly supervised instance action recognition (WSiAR) in multi-person (crowd) scenes. We specifically aim to recognize the action of each subject in the crowd, for which we propose the use of a weakly supervised method ...
Haomin Yan +4 more
doaj +1 more source
Action recognition by dense trajectories [PDF]
Feature trajectories have been shown to be efficient for representing videos. Typically, they are extracted using the KLT tracker or by matching SIFT descriptors between frames. However, the quality as well as the quantity of these trajectories is often not sufficient. Inspired by the recent success of dense sampling in image classification, we propose an approach ...
Heng Wang +3 more
openaire +1 more source
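For readers unfamiliar with the technique named in the entry above, the sketch below shows the general flavor of dense trajectory extraction: seed points on a regular grid and push them along a dense optical-flow field for a fixed number of frames. It uses OpenCV's Farneback flow as a stand-in with simplified parameters; it is not the authors' exact pipeline, and the grid step and track length are assumptions.

```python
import cv2
import numpy as np

def dense_trajectories(frames, step=10, track_len=15):
    """Sketch: sample points on a regular grid and propagate them through
    Farneback dense optical flow for up to track_len frames. The original
    method additionally median-filters the flow and prunes static/erratic tracks.
    """
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    h, w = gray[0].shape
    ys, xs = np.mgrid[step // 2:h:step, step // 2:w:step]       # dense grid of seed points
    tracks = [np.stack([xs.ravel(), ys.ravel()], axis=1).astype(np.float32)]

    for prev, curr in zip(gray[:track_len], gray[1:track_len + 1]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        pts = tracks[-1]
        xi = pts[:, 0].astype(int).clip(0, w - 1)
        yi = pts[:, 1].astype(int).clip(0, h - 1)
        tracks.append(pts + flow[yi, xi])        # move each point by the flow at its location

    return np.stack(tracks, axis=1)              # (num_points, track_len + 1, 2)
```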
Dilated Multi-Temporal Modeling for Action Recognition
Action recognition involves capturing temporal information from video clips whose duration varies across videos even for the same action. Due to the diverse scale of temporal context, uniform-size kernels utilized in convolutional neural networks (CNNs ...
Tao Zhang, Yifan Wu, Xiaoqiang Li
doaj +1 more source
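The entry above points out that fixed-size temporal kernels struggle when action duration varies. A common way to cover several temporal scales with a single kernel size is dilated temporal convolution; the PyTorch sketch below runs parallel 1D convolutions with increasing dilation and fuses them, as an illustration of the idea rather than the paper's actual module. Channel counts and dilation rates are assumed.

```python
import torch
import torch.nn as nn

class DilatedTemporalBlock(nn.Module):
    """Sketch of multi-scale temporal modeling: parallel 1D convolutions over the
    frame axis with increasing dilation, so the same kernel covers short and long
    temporal context. Padding is chosen so the frame dimension is preserved."""
    def __init__(self, channels=256, kernel_size=3, dilations=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv1d(channels, channels, kernel_size,
                      dilation=d, padding=d * (kernel_size - 1) // 2)
            for d in dilations
        ])
        self.fuse = nn.Conv1d(channels * len(dilations), channels, kernel_size=1)

    def forward(self, x):                        # x: (batch, channels, frames)
        multi_scale = torch.cat([b(x) for b in self.branches], dim=1)
        return self.fuse(multi_scale)            # fuse the temporal scales back to `channels`

if __name__ == "__main__":
    block = DilatedTemporalBlock()
    out = block(torch.randn(2, 256, 32))         # 2 clips, 256-d features, 32 frames
    print(out.shape)                             # torch.Size([2, 256, 32])
```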
Feature seeding for action recognition [PDF]
Progress in action recognition has been in large part due to advances in the features that drive learning-based methods. However, the relative sparsity of training data and the risk of overfitting have made it difficult to directly search for good features. In this paper we suggest using synthetic data to search for robust features that can more easily
Pyry Matikainen +2 more
openaire +1 more source

