Results 1 to 10 of about 192,041

Gesture and Liturgical Gesture [PDF]

open access: yes | Phainomena, 2023
The usual interpretations of gestuality presuppose that a gesture merely accompanies the expressive action, itself almost disappearing in order to make way for what the gesturing person wants to show as belonging to his or her interiority.
Virgilio Cesarone
doaj   +3 more sources

Style Transfer for Co-Speech Gesture Animation: A Multi-Speaker Conditional-Mixture Approach [PDF]

open access: yes | European Conference on Computer Vision 2020, 2020
How can we teach robots or virtual assistants to gesture naturally? Can we go further and adapt the gesturing style to follow a specific speaker? Gestures that are naturally timed with corresponding speech during human communication are called co-speech gestures.
Chaitanya Ahuja   +3 more
arxiv   +2 more sources
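
To make the title's conditional-mixture idea concrete, here is a loose sketch in which a speaker embedding weights a small set of style-specific decoders; the module name, layer choices, and dimensions are assumptions for illustration, not the authors' model.

    import torch
    import torch.nn as nn

    class MixtureGestureDecoder(nn.Module):
        """Illustrative sketch: mix K style-specific decoders per speaker."""
        def __init__(self, audio_dim=128, pose_dim=42, num_styles=8, num_speakers=25):
            super().__init__()
            self.style_logits = nn.Embedding(num_speakers, num_styles)  # speaker -> mixture weights
            self.decoders = nn.ModuleList(
                nn.GRU(audio_dim, pose_dim, batch_first=True) for _ in range(num_styles)
            )

        def forward(self, audio_feats, speaker_id):
            # audio_feats: (B, T, audio_dim), speaker_id: (B,)
            w = torch.softmax(self.style_logits(speaker_id), dim=-1)                   # (B, K)
            outs = torch.stack([d(audio_feats)[0] for d in self.decoders], dim=1)      # (B, K, T, pose_dim)
            return (w[:, :, None, None] * outs).sum(dim=1)                             # (B, T, pose_dim)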

Real-Time Hand Gesture Monitoring Model Based on MediaPipe’s Registerable System [PDF]

open access: yes | Sensors
Hand gesture recognition plays a significant role in human-to-human and human-to-machine interactions. Currently, most hand gesture detection methods recognize only a fixed, predefined set of gestures.
Yuting Meng   +3 more
doaj   +2 more sources
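
For readers who want to try the underlying tooling, a minimal real-time hand-landmark loop with MediaPipe's Python Hands solution might look like the sketch below; the camera index, thresholds, and exit key are arbitrary choices, and this is not the paper's registerable system.

    import cv2
    import mediapipe as mp

    # Minimal real-time hand landmark loop with MediaPipe Hands (not the paper's full system).
    hands = mp.solutions.hands.Hands(max_num_hands=2, min_detection_confidence=0.5)
    cap = cv2.VideoCapture(0)  # default webcam; index is an assumption

    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand in results.multi_hand_landmarks:
                # 21 normalized (x, y, z) landmarks per detected hand
                print([(lm.x, lm.y) for lm in hand.landmark][:1])
        cv2.imshow("hands", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break

    cap.release()
    hands.close()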

Hand Gesture Recognition for Sign Language Using 3DCNN

open access: yes | IEEE Access, 2020
Recently, automatic hand gesture recognition has gained increasing importance for two principal reasons: the growth of the deaf and hearing-impaired population, and the development of vision-based applications and touchless control on ubiquitous devices.
Muneer Al-Hammadi   +5 more
doaj   +2 more sources
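
As a rough picture of what a 3D CNN does for gesture clips, the sketch below convolves jointly over time and space; the layer sizes and class count are illustrative assumptions, not the authors' architecture.

    import torch
    import torch.nn as nn

    class Tiny3DCNN(nn.Module):
        """Illustrative 3D CNN for clip-level gesture classification (not the paper's model)."""
        def __init__(self, num_classes=64):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv3d(3, 16, kernel_size=3, padding=1),  # convolve over (time, H, W)
                nn.ReLU(),
                nn.MaxPool3d(2),
                nn.Conv3d(16, 32, kernel_size=3, padding=1),
                nn.ReLU(),
                nn.AdaptiveAvgPool3d(1),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, clip):                # clip: (B, 3, T, H, W)
            x = self.features(clip).flatten(1)  # (B, 32)
            return self.classifier(x)

    logits = Tiny3DCNN()(torch.randn(2, 3, 16, 112, 112))  # two 16-frame RGB clips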

Gesture interaction in virtual reality

open access: yes | Virtual Reality & Intelligent Hardware, 2019
With the development of virtual reality (VR) and human-computer interaction technology, how to use natural and efficient interaction methods in the virtual environment has become a hot topic of research. Gesture is one of the most important communication ...
Yang LI   +4 more
doaj   +2 more sources

Taming Diffusion Models for Audio-Driven Co-Speech Gesture Generation [PDF]

open access: yes | Computer Vision and Pattern Recognition, 2023
Animating virtual avatars to make co-speech gestures facilitates various applications in human-machine interaction. The existing methods mainly rely on generative adversarial networks (GANs), which typically suffer from notorious mode collapse and ...
Lingting Zhu   +5 more
semanticscholar   +1 more source
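
For context on how diffusion replaces the GAN objective, the sketch below shows a generic DDPM-style training step that noises a pose sequence and asks a denoiser to predict the noise back given audio features; the denoiser signature and tensor shapes are assumptions, not the paper's method.

    import torch
    import torch.nn.functional as F

    def diffusion_training_step(denoiser, poses, audio_feats, alphas_cumprod):
        """Generic DDPM-style step: add noise to poses, predict it back given audio."""
        B = poses.shape[0]
        t = torch.randint(0, len(alphas_cumprod), (B,), device=poses.device)
        a_bar = alphas_cumprod[t].view(B, 1, 1)                     # (B, 1, 1), poses are (B, T, D)
        noise = torch.randn_like(poses)
        noisy = a_bar.sqrt() * poses + (1 - a_bar).sqrt() * noise   # forward process q(x_t | x_0)
        pred = denoiser(noisy, t, audio_feats)                      # assumed conditioning interface
        return F.mse_loss(pred, noise)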

Learning Hierarchical Cross-Modal Association for Co-Speech Gesture Generation [PDF]

open access: yes | Computer Vision and Pattern Recognition, 2022
Generating speech-consistent body and gesture movements is a long-standing problem in virtual avatar creation. Previous studies often synthesize pose movement in a holistic manner, where poses of all joints are generated simultaneously.
Xian Liu   +9 more
semanticscholar   +1 more source
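
One way to picture the hierarchical alternative is to generate coarse body joints first and condition finer joints (e.g., fingers) on them; the sketch below follows that idea under assumed dimensions and is not the authors' architecture.

    import torch
    import torch.nn as nn

    class HierarchicalPoseDecoder(nn.Module):
        """Loose sketch: coarse body poses first, hand poses conditioned on them."""
        def __init__(self, feat_dim=128, body_dim=24, hand_dim=30):
            super().__init__()
            self.body_rnn = nn.GRU(feat_dim, body_dim, batch_first=True)
            self.hand_rnn = nn.GRU(feat_dim + body_dim, hand_dim, batch_first=True)

        def forward(self, speech_feats):                 # (B, T, feat_dim)
            body, _ = self.body_rnn(speech_feats)        # (B, T, body_dim)
            hands, _ = self.hand_rnn(torch.cat([speech_feats, body], dim=-1))
            return torch.cat([body, hands], dim=-1)      # full pose per frame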

Approach for Improving User Interface Based on Gesture Recognition [PDF]

open access: yes | E3S Web of Conferences, 2021
Gesture recognition technology based on visual detection acquires gesture information in a non-contact manner. There are two types of gesture recognition: independent and continuous gesture recognition.
Elmagrouni Issam   +3 more
doaj   +1 more source
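
The distinction matters in practice: an independent recognizer classifies a pre-segmented clip, while continuous recognition must also locate gesture boundaries in a frame stream. A minimal sliding-window sketch of the continuous case is shown below; the window length, stride, threshold, and classify_window callable are assumptions.

    def continuous_recognition(frames, classify_window, window=30, stride=10, threshold=0.8):
        """Slide a fixed-length window over a frame stream and keep confident detections."""
        detections = []
        for start in range(0, max(len(frames) - window + 1, 0), stride):
            label, confidence = classify_window(frames[start:start + window])
            if confidence >= threshold:  # reject low-confidence segments
                detections.append((start, start + window, label))
        return detections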

GesGPT: Speech Gesture Synthesis With Text Parsing from ChatGPT [PDF]

open access: yes | IEEE Robotics and Automation Letters 9 (2024) 3, 2023
Gesture synthesis has gained significant attention as a critical research field, aiming to produce contextually appropriate and natural gestures corresponding to speech or textual input. Although deep learning-based approaches have achieved remarkable progress, they often overlook the rich semantic information present in the text, leading to less ...
arxiv   +1 more source
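
The core idea, parsing text with an LLM before synthesizing gestures, can be sketched with the openai Python client as below; the prompt, label set, and model name are invented for illustration and are not the paper's pipeline.

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def parse_gesture_cues(utterance):
        """Ask an LLM to tag an utterance with a coarse gesture category (illustrative labels)."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # model choice is an assumption
            messages=[
                {"role": "system",
                 "content": "Label the utterance with one gesture type: beat, deictic, iconic, or metaphoric."},
                {"role": "user", "content": utterance},
            ],
        )
        return response.choices[0].message.content.strip()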

Speech gesture generation from the trimodal context of text, audio, and speaker identity [PDF]

open access: yes | ACM Transactions on Graphics, 2020
For human-like agents, including virtual avatars and social robots, making proper gestures while speaking is crucial in human-agent interaction. Co-speech gestures enhance interaction experiences and make the agents look alive.
Youngwoo Yoon   +6 more
semanticscholar   +1 more source
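
A loose sketch of the trimodal setup, fusing text features, audio features, and a learned speaker-identity embedding into one pose decoder, is shown below; the dimensions and the fusion-by-concatenation scheme are assumptions, not the paper's exact model.

    import torch
    import torch.nn as nn

    class TrimodalGestureGenerator(nn.Module):
        """Loose sketch: concatenate text, audio, and speaker features, then decode poses."""
        def __init__(self, text_dim=300, audio_dim=128, num_speakers=100, speaker_dim=16, pose_dim=30):
            super().__init__()
            self.speaker_emb = nn.Embedding(num_speakers, speaker_dim)
            self.decoder = nn.GRU(text_dim + audio_dim + speaker_dim, pose_dim, batch_first=True)

        def forward(self, text_feats, audio_feats, speaker_id):
            # text_feats: (B, T, text_dim), audio_feats: (B, T, audio_dim), speaker_id: (B,)
            spk = self.speaker_emb(speaker_id).unsqueeze(1).expand(-1, text_feats.size(1), -1)
            fused = torch.cat([text_feats, audio_feats, spk], dim=-1)
            poses, _ = self.decoder(fused)
            return poses                                  # (B, T, pose_dim)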
