Acoustic Cues for the Korean Stop Contrast: Dialectal Variation
This study examines cross-dialectal variation in the use of VOT and F0 as acoustic cues marking the laryngeal contrast in Korean stops, comparing Chonnam Korean and Seoul Korean.
Choi, Hansook
core
Seeing the Speaker's Face Enhances Second Language Shadowing: Neural and Behavioral Evidence
Abstract This functional magnetic resonance imaging (fMRI) study investigated how facial cues influence second language (L2) shadowing among 42 Japanese learners of English. Participants completed four conditions that varied by task type (listening vs. shadowing) and visual input (face vs. mosaic).
Hyeonjeong Jeong +7 more
wiley +1 more source
Abstract Measurement of interactional competence (IC) has attracted increasing interest in language assessment research. One key question is whether proficiency sufficiently accounts for IC, making separate IC assessment unnecessary. This study examines the IC–proficiency relationship using a test that assesses Chinese speakers’ ability to manage ...
David Wei Dai, Carsten Roever
wiley +1 more source
How to Do Things Without Words: Infants, utterance-activity and distributed cognition
Clark and Chalmers (1998) defend the hypothesis of an ‘Extended Mind’, maintaining that beliefs and other paradigmatic mental states can be implemented outside the central nervous system or body.
Bargh +56 more
core +1 more source
Two Nationalisms, One City: Official and Diasporic Framings of the 2019 Hong Kong Protests
ABSTRACT This study analyses the contested collective memories of the 2019 Anti‐Extradition Law Amendment Bill (Anti‐ELAB) movement, investigating how the Hong Kong government and diaspora construct divergent narratives to shape national identity and nationalism.
Isaac Iu
wiley +1 more source
A Shallow Echo: Artificial Intelligence and the Semantic Flattening of the Qur'an
ABSTRACT Scriptural Arabic relies on highly intentional word choices, employing apparent synonyms and near‐synonyms that convey distinct semantic values based on their specific textual placement. Historically, computational translation has struggled to reproduce these precise textual boundaries. Addressing this issue, the present investigation assesses
Ekrema Shehab
wiley +1 more source
Auditory communication in domestic dogs: vocal signalling in the extended social environment of a companion animal
Domestic dogs produce a range of vocalisations, including barks, growls, and whimpers, which are shared with other canid species. The source–filter model of vocal production can be used as a theoretical and applied framework to explain how and why the ...
Adachi +130 more
core +1 more source
Decoding Emotional Signatures of Ethical Ads: An Analysis of Actor‐Viewer Synchrony
ABSTRACT We examine whether ethical advertisements differ from conventional ads in their on‐screen emotional signatures and whether those signatures transfer to actor‐viewer synchrony. Study 1 analyses 138 professionally produced YouTube ads using Automated Facial Expression Recognition (AFER) and Convolutional Neural Networks (CNN) to quantify actor ...
Vik Naidoo, Nicolas Hamelin
wiley +1 more source
Segmentation of the Accentual Phrase in Seoul Korean
Pairs of phonemically identical utterances differing in the location of an Accentual Phrase boundary were presented to listeners. When duration and/or F0 were swapped between the utterances within a pair, only the F0 change elicited changes in listeners ...
Jeon, Hae-Sung, Nolan, Francis
core
Gesturing While Writing: An Alternate Perspective on Mimetic Prosody
Critical Quarterly, EarlyView.
Paul Magee
wiley +1 more source