
Cross-modal Synergy for Enhancing Emotion Recognition Through Integrated Audio–Video Fusion Techniques

open access: yes, International Journal of Computational Intelligence Systems
Emotion identification is important for human–computer interaction, with applications including medical and customer service provision. In the past, emotion recognition systems have been based on a single modality, such as text, image, audio, or video, each ...
P. Santhiya   +3 more
doaj   +1 more source

Active Speaker Detection Using Audio, Visual, and Depth Modalities: A Survey

open access: yes, IEEE Access
The rapid progress of multimodal signal processing in recent years has cleared the way for novel applications in human-computer interaction, surveillance, and telecommunication.
Siti Nur Aisyah Mohd Robi   +4 more
doaj   +1 more source

Integrating EEG and MEG signals to improve motor imagery classification in brain-computer interfaces

open access: yes, 2018
We propose a fusion approach that combines features from simultaneously recorded electroencephalographic (EEG) and magnetoencephalographic (MEG) signals to improve classification performance in motor imagery-based brain-computer interfaces (BCIs).
Bassett, Danielle S.   +6 more
core   +3 more sources

Fluid Biomarkers of Disease Burden and Cognitive Dysfunction in Progressive Supranuclear Palsy

open access: yes, Annals of Clinical and Translational Neurology, EarlyView.
ABSTRACT Objective Identifying objective biomarkers for progressive supranuclear palsy (PSP) is crucial to improving diagnosis and establishing clinical trial and treatment endpoints. This study evaluated fluid biomarkers in PSP versus controls and their associations with regional 18F‐PI‐2620 tau‐PET, clinical, and cognitive outcomes.
Roxane Dilcher   +10 more
wiley   +1 more source

SAWGAN-BDCMA: A Self-Attention Wasserstein GAN and Bidirectional Cross-Modal Attention Framework for Multimodal Emotion Recognition

open access: yes, Sensors
Emotion recognition from physiological signals is pivotal for advancing human–computer interaction, yet unimodal pipelines frequently underperform due to limited information, constrained data diversity, and suboptimal cross-modal fusion. Addressing these ...
Ning Zhang   +5 more
doaj   +1 more source

The maelstrom of online programs in Colombian teacher education

open access: yes, Education Policy Analysis Archives, 2018
The replacement of direct human interaction by the computer connected to the internet is one of the most radical reforms in the history of education. In the first part, we show chronologically how, unlike correspondence, radio, and television, the internet ...
Pedro Pineda, Jorge Celis
doaj   +1 more source

Enabling Digital Continuity in Virtual Manufacturing for Eco‐Efficiency Assessment of Lightweight Structures by Means of a Domain‐Specific Structural Mechanics Language: Requirements, Idea and Proof of Concept

open access: yes, Advanced Engineering Materials, EarlyView.
This article presents a solver‐agnostic domain‐specific language (DSL) for computational structural mechanics that strengthens interoperability in virtual product development. Using a hierarchical data model, the DSL enables seamless exchange between diverse simulation tools and numerical methods.
Martin Rädel   +3 more
wiley   +1 more source

3D (Bio) Printing Combined Fiber Fabrication Methods for Tissue Engineering Applications: Possibilities and Limitations

open access: yes, Advanced Functional Materials, EarlyView.
Biofabrication aims at providing innovative technologies and tools for the fabrication of tissue‐like constructs for tissue engineering and regenerative medicine applications. By integrating multiple biofabrication technologies, such as 3D (bio) printing with fiber fabrication methods, it would be more realistic to reconstruct native tissue's ...
Waseem Kitana   +2 more
wiley   +1 more source

Cross-Modal Attention Fusion: A Deep Learning and Affective Computing Model for Emotion Recognition

open access: yes, Multimodal Technologies and Interaction
Artificial emotional intelligence is a sub-domain of human–computer interaction research that aims to develop deep learning models capable of detecting and interpreting human emotional states through various modalities.
Himanshu Kumar   +2 more
doaj   +1 more source

Multimodal Grounding for Language Processing [PDF]

open access: yes, 2018
This survey discusses how recent developments in multimodal processing facilitate conceptual grounding of language. We categorize the information flow in multimodal processing with respect to cognitive models of human information processing and analyze ...
Beinborn, Lisa   +2 more
core   +2 more sources
