Emotional Prosody Control for Speech Generation [PDF]
Machine-generated speech is characterized by its limited or unnatural emotional variation. Current text-to-speech systems generate speech with either a flat emotion, an emotion selected from a predefined set, average variation learned from prosody sequences in training data, or style transferred from a source.
Sivaprasad, Sarath +2 more
openaire +2 more sources
Inferring emotions from speech prosody: not so easy at age five. [PDF]
Previous research has suggested that children do not rely on prosody to infer a speaker's emotional state because of biases toward lexical content or situational context.
Marc Aguert +4 more
doaj +1 more source
Associations between vocal emotion recognition and socio-emotional adjustment in children
The human voice is a primary channel for emotional communication. It is often presumed that being able to recognize vocal emotions is important for everyday socio-emotional functioning, but evidence for this assumption remains scarce.
Leonor Neves +4 more
doaj +1 more source
In computerised technology, artificial speech is becoming increasingly important, and is already used in ATMs, online gaming and healthcare contexts. However, today’s artificial speech typically sounds monotonous, a main reason for this being the lack of
Rachel L. C. Mitchell, Yi Xu
doaj +1 more source
Generating expressive speech for storytelling applications [PDF]
Work on expressive speech synthesis has long focused on the expression of basic emotions. In recent years, however, interest in other expressive styles has been increasing.
Bailly, G. +8 more
core +3 more sources
Speech intelligibility and prosody production in children with cochlear implants [PDF]
Objectives—The purpose of the current study was to examine the relation between speech intelligibility and prosody production in children who use cochlear implants.
Bergeson, Tonya R. +2 more
core +2 more sources
Prosody, emotions, and… ‘whatever’
We examine the role of prosody in cueing a scale of negative meanings associated with the use of whatever. The analysis of a corpus of elicited examples shows that the more negative the token, the more likely it is to have an additional pitch accent, extended duration, and expanded pitch range on the first syllable. These findings are analyzed as a
Benus, Stefan +2 more
openaire +2 more sources
Background: The recognition of the emotion expressed during conversation relies on the integration of both semantic processing and the decoding of emotional prosody. The integration of both types of elements is necessary for social interaction. No study has
Perrine Brazo +7 more
doaj +1 more source
Atypical neural responses to vocal anger in attention-deficit/hyperactivity disorder [PDF]
Background: Deficits in facial emotion processing, reported in attention-deficit/hyperactivity disorder (ADHD), have been linked to both early perceptual and later attentional components of event-related potentials (ERPs).
Banaschewski +59 more
core +2 more sources
ERP evidence for the recognition of emotional prosody through simulated cochlear implant strategies
Background: Emotionally salient information in spoken language can be provided by variations in speech melody (prosody) or by emotional semantics. Emotional prosody is essential to convey feelings through speech.
Agrawal Deepashri +6 more
doaj +1 more source