Results 71 to 80 of about 201,680

Vision‐Augmented Wearable Interfaces: Bioinspired Approaches for Realistic AI‐Human‐Machine Interaction

open access: yes | Advanced Materials Technologies, EarlyView.
This review presents recent progress in vision‐augmented wearable interfaces that combine artificial vision, soft wearable sensors, and exoskeletal robots. Inspired by biological visual systems, these technologies enable multimodal perception and intelligent human–machine interaction.
Jihun Lee   +4 more
wiley   +1 more source

Continual Learning for Multimodal Data Fusion of a Soft Gripper

open access: yes | Advanced Robotics Research, EarlyView.
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley   +1 more source

NONVERBAL COMMUNICATION PERFORMED BY FOREIGN ENGLISH TEACHER

open access: yes | Indonesian EFL Journal, 2020
This paper specifically aims at identifying the types of nonverbal communication performed by the foreign English teacher, based on Schmitz's (2012) theory, and finding out the students' responses toward the foreign English teacher's nonverbal communication ...
Jihan Ananda   +2 more
doaj   +1 more source

THE COMPONENTS OF NONVERBAL COMMUNICATION IMPORTANT FACTORS IN THE TEACHING PROCESS [PDF]

open access: yes
The act of teaching is predominantly verbal. Nevertheless, communication during the teaching process equally depends on the paraverbal and nonverbal components, which are meant to reinforce the formative interaction between the teacher, as traditional ...
Camelia FIRICA
core  

Auditory–Tactile Congruence for Synthesis of Adaptive Pain Expressions in RoboPatients

open access: yes | Advanced Robotics Research, EarlyView.
In this work, we explore auditory–tactile congruence for synthesizing adaptive vocal pain expressions in robopatients. Using a robopatient platform that integrates vocal pain sounds with palpation forces, we conducted 7680 trials across 20 participants.
Saitarun Nadipineni   +4 more
wiley   +1 more source

Students' Viewpoints on Advisors' Nonverbal Communication Skills: A Survey in Schools of Health and Allied Health Sciences in Kashan University of Medical Sciences [PDF]

open access: yes | Iranian Journal of Medical Education (مجله ایرانی آموزش در علوم پزشکی), 2012
Introduction: One of the most important principles of counseling is the proper use of communication skills; nonverbal communication is one of the most effective forms of communication available.
Mohammad Sabbahi Bigdeli   +4 more
doaj  

Psychological Features of Perceiving a Communicator's Implicit Behavior (Психологічні особливості сприйняття імпліцитної поведінки комунікатора) [PDF]

open access: yes, 2011
The article analyzes the specifics of nonverbal communication as a key element of interpersonal interaction. It examines the main characteristics and functions of nonverbal behavior and concludes that the presence of stable connections among all ...
Пасічник, І.Д. (I. Pasichnyk)
core  

A Multidirectional Textile Interface for Remote Control Using Dynamic Area‐Based Capacitance Modulation

open access: yes | Advanced Robotics Research, EarlyView.
Here, we present a textile, wearable capacitive interface enabling multidirectional remote control by dynamically modulating electrode overlap and spacing via a freely gliding upper electrode. A forearm‐mounted prototype drives robotic and media tasks with 12–15 ms latency, maintains < 0.8% drift after 500 cycles, and remains stably functional at 90 ...
Cagatay Gumus   +8 more
wiley   +1 more source

Spotting Agreement and Disagreement: A Survey of Nonverbal Audiovisual Cues and Tools [PDF]

open access: yes, 2009
While detecting and interpreting temporal patterns of non-verbal behavioral cues in a given context is a natural and often unconscious process for humans, it remains a rather difficult task for computer systems.
Bousmalis, Konstantinos   +2 more
core   +5 more sources

Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models

open access: yes | Advanced Robotics Research, EarlyView.
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki   +2 more
wiley   +1 more source
