Simulation-Based Training for Emergency Department Flow Management: A Dual-Site Experience With GridlockED in France. [PDF]
Lansiaux E, Guerif Dubreucq E, Chan TM.
europepmc +1 more source
An AI‐Enabled All‐In‐One Visual, Proximity, and Tactile Perception Multimodal Sensor
Targeting integrated multimodal perception for robots, an AI‐enabled all‐in‐one multimodal sensor is proposed. The sensor perceives three modalities: vision, proximity, and touch (tactility). By toggling an ultraviolet light and adjusting the camera focus, it switches smoothly between multiple perceptual modalities, enabling ...
Menghao Pu +7 more
wiley +1 more source
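The abstract above describes mode switching via two actuators: a UV light toggle and a camera refocus. A minimal sketch of that selection logic, where the mapping of (UV state, focus) to each modality is an illustrative assumption and the hardware calls are mocked:

```python
# Sketch of the modality-switching logic from the abstract: toggling the
# UV light and refocusing the camera selects which modality is sensed.
# The specific (uv, focus) combinations per mode are assumptions.
MODES = {
    "vision":    {"uv_on": False, "focus": "far"},
    "proximity": {"uv_on": True,  "focus": "far"},
    "tactile":   {"uv_on": True,  "focus": "near"},
}

def configure(mode):
    """Return the actuator settings needed to sense the given modality."""
    cfg = MODES[mode]
    return f"uv={'on' if cfg['uv_on'] else 'off'}, focus={cfg['focus']}"

print(configure("tactile"))  # -> uv=on, focus=near
```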
Internet gaming disorder symptoms and functional impairment in a non-clinical sample of adolescents and young adults in Japan. [PDF]
Tateno M +4 more
europepmc +1 more source
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley +1 more source
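The abstract above combines class-incremental learning (new labels appear over time) with domain-incremental learning (known labels arrive from a new data modality). A minimal sketch of that combination using a running-mean nearest-class-mean classifier; this is an illustrative stand-in, not the paper's actual algorithm, and all names and feature values are assumptions:

```python
# Minimal sketch: class-incremental + domain-incremental updates with a
# nearest-class-mean classifier. Illustrative only, not the paper's method.
import math

class IncrementalNCM:
    """Nearest-class-mean classifier maintained with running means."""
    def __init__(self):
        self.means = {}   # label -> mean feature vector
        self.counts = {}  # label -> number of samples seen

    def update(self, features, label):
        # Class-incremental: an unseen label creates a new prototype.
        # Domain-incremental: a known label's mean shifts toward
        # samples arriving from the new modality/domain.
        if label not in self.means:
            self.means[label] = list(features)
            self.counts[label] = 1
            return
        n = self.counts[label] + 1
        self.means[label] = [m + (x - m) / n
                             for m, x in zip(self.means[label], features)]
        self.counts[label] = n

    def predict(self, features):
        return min(self.means,
                   key=lambda c: math.dist(features, self.means[c]))

clf = IncrementalNCM()
clf.update([0.0, 0.0], "open")    # first domain (e.g. tactile features)
clf.update([1.0, 1.0], "closed")
clf.update([0.2, 0.1], "open")    # second domain shifts the "open" mean
print(clf.predict([0.1, 0.0]))    # -> open
```

The running-mean update avoids storing past samples, which is one common way to limit forgetting when modalities arrive sequentially.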
The Predictive Utility of Past Success: Skill and Chance in Children's Theory of Performance. [PDF]
Pawsey H +4 more
europepmc +1 more source
Here, we present a textile, wearable capacitive interface enabling multidirectional remote control by dynamically modulating electrode overlap and spacing via a freely gliding upper electrode. A forearm‐mounted prototype drives robotic and media tasks with 12–15 ms latency, maintains < 0.8% drift after 500 cycles, and remains stably functional at 90 ...
Cagatay Gumus +8 more
wiley +1 more source
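The abstract above attributes the control signal to modulating electrode overlap and spacing. Under a textbook parallel-plate approximation (an assumption here, not the paper's model), the readout tracks C = ε₀·εᵣ·A/d, so sliding the upper electrode to halve the overlap area halves the capacitance:

```python
# Parallel-plate approximation for the gliding-electrode interface:
# C = eps0 * eps_r * A / d. The geometry and eps_r values are illustrative.
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def capacitance(area_m2, gap_m, eps_r=3.0):
    """Parallel-plate capacitance in farads."""
    return EPS0 * eps_r * area_m2 / gap_m

c_centered = capacitance(1e-4, 1e-3)   # full electrode overlap
c_slid     = capacitance(5e-5, 1e-3)   # half overlap after sliding
print(c_slid / c_centered)             # ratio ~0.5: C scales with overlap
```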
The use of an escape room-based learning strategy to enhance pharmacology knowledge among nursing students. [PDF]
Aktaş N, Sazak Y.
europepmc +1 more source
Multimodal Human–Robot Interaction Using Human Pose Estimation and Local Large Language Models
A multimodal human–robot interaction framework integrates human pose estimation (HPE) and a large language model (LLM) for gesture‐ and voice‐based robot control. Speech‐to‐text (STT) enables voice command interpretation, while a safety‐aware arbitration mechanism prioritizes gesture input for rapid intervention.
Nasiru Aboki +2 more
wiley +1 more source
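The abstract above states that the arbitration mechanism prioritizes gesture input over voice for rapid intervention. A minimal sketch of that priority rule; the function and command names are assumptions for illustration:

```python
# Sketch of a safety-aware arbitration step: gesture input (fast channel)
# overrides voice input (slower STT channel), per the abstract's priority
# rule. All names and commands here are illustrative assumptions.
def arbitrate(gesture_cmd, voice_cmd):
    """Return (channel, command), preferring gesture so a physical
    'stop' can override an in-flight spoken command."""
    if gesture_cmd is not None:      # rapid intervention channel
        return ("gesture", gesture_cmd)
    if voice_cmd is not None:        # STT-derived channel
        return ("voice", voice_cmd)
    return ("none", None)

print(arbitrate("stop", "move forward"))  # -> ('gesture', 'stop')
print(arbitrate(None, "move forward"))    # -> ('voice', 'move forward')
```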
User Experience of Extended Reality Treatment for Visuospatial Neglect Among Patients and Informal Caregivers: Qualitative Interview Study. [PDF]
Bousché E +3 more
europepmc +1 more source

