Results 261 to 270 of about 6,835,887
Some of the following articles may not be open access.
Expression of Personality by Gaze Movements of an Android Robot in Multi-Party Dialogues*
IEEE International Symposium on Robot and Human Interactive Communication, 2022
In this study, we describe an improved version of our proposed model for generating the gaze movements (eye and head movements) of a dialogue robot in multi-party dialogue situations, and investigate how impressions change for models created from the data of ...
Taiken Shintani, C. Ishi, H. Ishiguro
semanticscholar +1 more source
From Bottom-Up To Top-Down: Characterization Of Training Process In Gaze Modeling
IEEE International Conference on Acoustics, Speech, and Signal Processing, 2022
During training, artificial neural networks might not converge to a global minimum. Usually, under gradient descent, the training procedure causes the network to stroll through the high-dimensional weight space.
Ron M. Hecht +4 more
semanticscholar +1 more source
Appearance-Based Gaze Estimation Using Visual Saliency
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013
Yusuke Sugano +2 more
semanticscholar +3 more sources
Human Factors and Ergonomics in Manufacturing & Service Industries, 2021
To investigate a suitable design of interactive objects in the fixation‐triggered eye‐control interface, this study conducted two ergonomic experiments based on the location of an object, area of the object, and distance between adjacent objects ...
Y. Niu +7 more
semanticscholar +1 more source
A View on the Viewer: Gaze-Adaptive Captions for Videos
International Conference on Human Factors in Computing Systems, 2020
Subtitles play a crucial role in cross-lingual distribution of multimedia content and help communicate information where auditory content is not feasible (loud environments, hearing impairments, unknown languages). Established methods utilize text at the
K. Kurzhals +4 more
semanticscholar +1 more source
Analysis of Eye Gaze Reasons and Gaze Aversions During Three-Party Conversations
Interspeech, 2021
The background of this study is the generation of natural gaze behaviors in human-robot multimodal interaction. For that purpose, we analyzed the gaze behaviors of multiple speakers in a dataset containing three-party conversations, in terms of
C. Ishi, Taiken Shintani
semanticscholar +1 more source
A distributed real time eye-gaze tracking system
EFTA 2003. 2003 IEEE Conference on Emerging Technologies and Factory Automation. Proceedings (Cat. No.03TH8696), 2004
This paper describes a real-time eye-gaze tracking system that bases its accuracy on a very precise estimation of the user's pupil centre. The system consists of two cameras, four illuminators, and a low-cost cluster of four PCs interconnected by a gigabit network. One of the cameras is mounted on a pan-tilt unit in order to perform the tracking of the
A. Garcia +6 more
openaire +1 more source
Comparing pedestrians’ gaze behavior in desktop and in real environments
Cartography and Geographic Information Science, 2020
This research is motivated by the widespread use of desktop environments in the lab and by the recent trend of conducting real-world eye-tracking experiments to investigate pedestrian navigation.
Weihua Dong +6 more
semanticscholar +1 more source
Emotion, 2020
Facial expression recognition relies on the processing of diagnostic information from different facial regions. For example, successful recognition of anger versus disgust requires one to process information located in the eye/brow region, or in the ...
N. Yitzhak +3 more
semanticscholar +1 more source
Enhancing 3D Gaze Estimation in the Wild using Weak Supervision with Gaze Following Labels
Computer Vision and Pattern Recognition
... in-the-wild 3D gaze datasets. To address these challenges, we introduce a novel Self-Training Weakly-Supervised Gaze Estimation framework (ST-WSGE). This two-stage learning framework leverages diverse 2D gaze datasets, such as gaze-following data, which ...
Pierre Vuillecard, J. Odobez
semanticscholar +1 more source