Results 111 to 120 of about 417,336 (324)
Voice emotion recognition by Mandarin‐speaking pediatric cochlear implant users in Taiwan
Objectives To explore the effects of obligatory lexical tone learning on speech emotion recognition and the cross‐cultural differences between the United States and Taiwan in speech emotion understanding in children with cochlear implants. Methods This cohort
Yung‐Song Lin +7 more
doaj +1 more source
Cognitive Behavioral Therapy for Youth with Childhood‐Onset Lupus: A Randomized Clinical Trial
Objective Our objective was to determine the feasibility and acceptability of the Treatment and Education Approach for Childhood‐onset Lupus (TEACH), a six‐session cognitive behavioral intervention addressing depression, fatigue, and pain symptoms, delivered remotely to individual youth with lupus by a trained interventionist.
Natoshia R. Cunningham +29 more
wiley +1 more source
A unidirectional cerebral organoid–organoid neural circuit is established using a microfluidic platform, enabling controlled directional propagation of electrical signals, neuroinflammatory cues, and neurodegenerative disease–related proteins between spatially separated organoids.
Kyeong Seob Hwang +9 more
wiley +1 more source
Automatic Emotion Recognition in Speech: Possibilities and Significance [PDF]
Automatic speech recognition and spoken language understanding are crucial steps towards natural human–machine interaction. The main task of the speech communication process is the recognition of the word sequence, but the recognition of prosody ...
Milana Bojanić, Vlado Delić
doaj
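The entry above highlights prosody as a key signal for emotion recognition in speech. As a minimal sketch of that idea, the Python snippet below extracts simple prosodic statistics (pitch and frame energy) with librosa and trains a generic scikit-learn classifier; the `wav_paths` and `labels` inputs are hypothetical and must be supplied by the reader, and this is an illustrative pipeline, not the method of the cited paper.

```python
# Minimal sketch: prosodic features (pitch, energy) for speech emotion
# classification. Assumes librosa and scikit-learn are installed; the
# wav_paths / labels arguments are hypothetical caller-supplied data.
import numpy as np
import librosa
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def prosodic_features(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    f0, voiced_flag, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                      fmax=librosa.note_to_hz("C7"), sr=sr)
    f0 = f0[~np.isnan(f0)]                      # keep voiced frames only
    rms = librosa.feature.rms(y=y)[0]           # frame-level energy
    stats = lambda v: [v.mean(), v.std(), v.min(), v.max()] if len(v) else [0, 0, 0, 0]
    return np.array(stats(f0) + stats(rms))

def train_emotion_classifier(wav_paths, labels):
    X = np.stack([prosodic_features(p) for p in wav_paths])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    print("CV accuracy:", cross_val_score(clf, X, labels, cv=5).mean())
    return clf.fit(X, labels)
```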
This study presents a highly sensitive, oxidation‐resistant, biocompatible, and degradable Janus piezoresistive electronic skin for sustainable wearable electronics. The electronic skin exhibits sensitive and stable response across a broad pressure range, exceptional oxidation resistance, and Janus wettability.
Joon Kim +5 more
wiley +1 more source
In the super smart society (Society 5.0), new and rapid methods are needed in speech recognition, emotion recognition, and speech emotion recognition to maximize human-machine or human-computer interaction and collaboration. The speech signal contains
Yeşim ÜLGEN SÖNMEZ, Asaf VAROL
doaj +1 more source
Recent Progress on Flexible Multimodal Sensors: Decoupling Strategies, Fabrication and Applications
In this review, we establish a tripartite decoupling framework for flexible multimodal sensors, which elucidates the underlying principles of signal crosstalk and their solutions through material design, structural engineering, and AI algorithms. We also demonstrate its potential applications across environmental monitoring, health monitoring, human ...
Tao Wu +10 more
wiley +1 more source
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley +1 more source
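The abstract above refers to class-incremental and domain-incremental continual learning. The sketch below shows a generic rehearsal (replay-buffer) learner over a sequence of tasks, written with PyTorch; it is a standard continual-learning baseline, not the authors' algorithm, and the per-task tensors `x`, `y` are assumed inputs.

```python
# Minimal sketch of replay-based continual learning over a sequence of tasks
# (class-incremental or domain-incremental). Generic rehearsal baseline, not
# the cited algorithm; task data (x, y tensors) is assumed to be provided.
import random
import torch
import torch.nn as nn

class ReplayLearner:
    def __init__(self, in_dim, n_classes, buffer_size=500):
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_classes))
        self.opt = torch.optim.Adam(self.net.parameters(), lr=1e-3)
        self.buffer, self.buffer_size = [], buffer_size

    def learn_task(self, x, y, epochs=5):
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            # mix current-task data with replayed samples from earlier tasks
            replay = random.sample(self.buffer, min(len(self.buffer), len(x)))
            if replay:
                rx, ry = zip(*replay)
                bx = torch.cat([x, torch.stack(rx)])
                by = torch.cat([y, torch.stack(ry)])
            else:
                bx, by = x, y
            self.opt.zero_grad()
            loss_fn(self.net(bx), by).backward()
            self.opt.step()
        # store a subset of this task's samples for future rehearsal
        for xi, yi in zip(x, y):
            if len(self.buffer) < self.buffer_size:
                self.buffer.append((xi, yi))
```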
Expressive Speech Identifications based on Hidden Markov Model [PDF]
This paper concerns a sub-area of the larger research field of Affective Computing, focusing on affect-recognition systems that use the speech modality.
Roberto Barra-Chicote +4 more
core
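The entry above identifies expressive speech (affect) with hidden Markov models. A minimal sketch of that setup, assuming the hmmlearn package and caller-supplied MFCC sequences grouped by emotion, trains one GaussianHMM per emotion and classifies an utterance by maximum log-likelihood; it illustrates the general HMM approach, not the paper's exact configuration.

```python
# Minimal sketch of HMM-based expressive-speech (emotion) identification:
# one GaussianHMM per emotion, trained on MFCC sequences, classification by
# maximum log-likelihood. hmmlearn is an assumed dependency; the MFCC
# sequences per emotion are hypothetical inputs supplied by the caller.
import numpy as np
from hmmlearn.hmm import GaussianHMM

def train_emotion_hmms(sequences_by_emotion, n_states=4):
    """sequences_by_emotion: {emotion: [np.ndarray of shape (T_i, n_mfcc)]}"""
    models = {}
    for emotion, seqs in sequences_by_emotion.items():
        X = np.concatenate(seqs)             # stack frames of all utterances
        lengths = [len(s) for s in seqs]     # per-utterance frame counts
        m = GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[emotion] = m
    return models

def classify(models, mfcc_seq):
    # pick the emotion whose HMM assigns the highest log-likelihood
    return max(models, key=lambda e: models[e].score(mfcc_seq))
```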
Deep factorization for speech signal
Various informative factors are mixed in speech signals, leading to great difficulty when decoding any of them. An intuitive idea is to factorize each speech frame into individual informative factors, though this turns out to be highly difficult ...
Yixiang Chen +5 more
core +1 more source
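The snippet above proposes factorizing each speech frame into individual informative factors. As a generic illustration of that idea (not the cited model), the PyTorch sketch below splits a shared frame encoding into disjoint factor subspaces, here hypothetically speaker and emotion, each supervised by its own classification head.

```python
# Minimal sketch of frame-level factorization: a shared encoder whose output
# is split into separate factor embeddings (e.g. speaker vs. emotion), each
# supervised by its own head. A generic illustration, not the cited model.
import torch
import torch.nn as nn

class FactorizedEncoder(nn.Module):
    def __init__(self, n_mels=40, factor_dim=64, n_speakers=100, n_emotions=6):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_mels, 256), nn.ReLU(),
                                     nn.Linear(256, 2 * factor_dim))
        self.speaker_head = nn.Linear(factor_dim, n_speakers)
        self.emotion_head = nn.Linear(factor_dim, n_emotions)
        self.factor_dim = factor_dim

    def forward(self, frames):                       # frames: (batch, n_mels)
        h = self.encoder(frames)
        spk, emo = h.split(self.factor_dim, dim=-1)  # disjoint factor subspaces
        return self.speaker_head(spk), self.emotion_head(emo)

# Joint training would minimize the sum of both cross-entropy losses, pushing
# each subspace to carry one informative factor of the speech frame.
```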

