Results 31 to 40 of about 149,578 (172)
Learning Spoken Words via the Ears and Eyes: Evidence from 30-Month-Old Children
From the very first moments of their lives, infants are able to link specific movements of the visual articulators to auditory speech signals. However, recent evidence indicates that infants focus primarily on auditory speech signals when learning new ...
Mélanie Havy, Pascal Zesiger
doaj +1 more source
Semantic Crosstalk in Timbre Perception
Many adjectives for musical timbre reflect cross-modal correspondence, particularly with vision and touch (e.g., “dark–bright,” “smooth–rough”). Although multisensory integration between visual/tactile processing and hearing has been demonstrated for ...
Zachary Wallmark
doaj +1 more source
Visual-Haptic Mapping and the Origin of Cross-Modal Identity [PDF]
We found that congenitally blind people who gain sight initially fail to identify seen objects with their felt versions: a negative answer to the Molyneux question. However, they succeed in doing so after a few days of sight. We argue that this rapid learning resembles that of adaptation to rearrangement, in which the experimentally produced separations ...
openaire +2 more sources
A Universal Model for Cross Modality Mapping by Relational Reasoning
With the aim of matching a pair of instances from two different modalities, cross modality mapping has attracted growing attention in the computer vision community. Existing methods usually formulate the mapping function as the similarity measure between the pair of instance features, which are embedded to a common space.
Li, Zun +6 more
openaire +2 more sources
CrossATNet - a novel cross-attention based framework for sketch-based image retrieval [PDF]
We propose a novel framework for cross-modal zero-shot learning (ZSL) in the context of sketch-based image retrieval (SBIR). Conventionally, the SBIR schema mainly considers simultaneous mappings among the two image views and the semantic side ...
Ushasi Chaudhuri +3 more
semanticscholar +1 more source
What if machines could seamlessly translate between the visual richness of images and the semantic depth of language with mathematical precision? This paper presents a theoretical and empirical analysis of five novel cross-modal Wasserstein adversarial ...
Joseph Tafataona Mtetwa +2 more
doaj +1 more source
Studies investigating cross-modal correspondences between auditory pitch and visual shapes have shown that children and adults consistently match high pitch to pointy shapes and low pitch to curvy shapes, yet no studies have investigated linguistic uses of ...
Nan Shang, Suzy J. Styles
doaj +1 more source
Early diagnosis of Alzheimer’s disease (AD) is crucial: individuals may first experience mild cognitive impairment (MCI), which can then develop into AD. Early detection enables timely intervention, slowing disease progression, and advancing the ...
Liqiang Xu +6 more
doaj +1 more source
Lexical Synaesthesia in Metaphorical Collocations
This study seeks to shed more light on the role of lexical synaesthesia (LS), a phenomenon exemplified by expressing one sense in terms of another (e.g., gustation for sound – a sweet melody), in the process of forming metaphorical collocations. Lexical ...
Jana Jurčević
doaj +1 more source
Linguistic synesthesia, characterized by cross-modal sensory mappings, is frequently associated with metaphor and neurological synesthesia. However, while prior research has emphasized these latter phenomena, the cognitive and neural underpinnings of ...
Kaiwen Cheng, Shengqin Cao, Yu Chen
doaj +1 more source