Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time [PDF]
Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded ...
Mats B. Küssner +3 more
doaj +5 more sources
Natural cross-modal mappings between visual and auditory features. [PDF]
The brain may combine information from different sense modalities to enhance the speed and accuracy of detection of objects and events, and the choice of appropriate responses. There is mounting evidence that perceptual experiences that appear to be modality-specific are also influenced by activity from other sensory modalities, even in the absence of ...
Evans KK, Treisman A.
europepmc +4 more sources
What Sound Does That Taste? Cross-Modal Mappings across Gustation and Audition [PDF]
All people share implicit mappings across the senses, which give us preferences for certain sensory combinations over others (e.g. light colours are preferentially paired with higher-pitch sounds; Ward et al., 2006, Cortex 42, 264–280). Although previous work has tended to focus on the cross-modality of vision with other senses, here we present evidence of ...
Simner, Julia +2 more
semanticscholar +6 more sources
Do Neural Network Cross-Modal Mappings Really Bridge Modalities? [PDF]
Feed-forward networks are widely used in cross-modal applications to bridge modalities by mapping distributed vectors of one modality to the other, or to a shared space. The predicted vectors are then used to perform e.g., retrieval or labeling.
Collell Talleda, Guillem +1 more
openaire +5 more sources
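The entry above describes feed-forward networks that bridge modalities by mapping distributed vectors of one modality to the other. A minimal sketch of that idea, using synthetic data and a one-layer linear map fit in closed form; the dimensions, data, and retrieval step are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "image" and "text" embeddings for the same 200 items: the text
# vectors are a fixed linear transform of the image vectors plus noise.
X_img = rng.normal(size=(200, 32))                           # source modality
W_true = rng.normal(size=(32, 16))
Y_txt = X_img @ W_true + 0.01 * rng.normal(size=(200, 16))   # target modality

# Fit the cross-modal mapping with ordinary least squares — effectively
# a one-layer, bias-free feed-forward network solved in closed form.
W, *_ = np.linalg.lstsq(X_img, Y_txt, rcond=None)

def retrieve(x, targets, W):
    """Project a source vector into the target space and return the
    index of the nearest target vector by cosine similarity."""
    pred = x @ W
    sims = (targets @ pred) / (np.linalg.norm(targets, axis=1)
                               * np.linalg.norm(pred) + 1e-12)
    return int(np.argmax(sims))
```

With low noise the learned map recovers the true transform closely, so projecting an item's image vector retrieves that item's own text vector.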
Cross Modal Facial Image Synthesis Using a Collaborative Bidirectional Style Transfer Network
In this paper, we present a novel collaborative bidirectional style transfer network based on a generative adversarial network (GAN) for cross-modal facial image synthesis, possibly with a large modality gap.
Nizam Ud Din +4 more
doaj +2 more sources
Cross-modal re-mapping influences the Simon effect [PDF]
Tagliabue, Zorzi, Umiltà, and Bassignani (2000) showed that one's practicing of a spatially incompatible task influences performance in a Simon task even when the interval between the two tasks is as long as 1 week. In the present study, three experiments were conducted to investigate whether such an effect could be found in a cross-modal paradigm ...
Tagliabue, Mariaelena +2 more
openaire +4 more sources
Learning visual to auditory sensory substitution reveals flexibility in image to sound mapping [PDF]
Visual-to-auditory sensory substitution devices (SSDs) translate images to sounds. One SSD, The vOICe, translates a pixel’s vertical position into pitch and horizontal position into time.
Asa Kucinkas +5 more
doaj +2 more sources
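The vOICe mapping summarised above (vertical pixel position → pitch, horizontal position → time) can be sketched roughly as follows; the frequency range, column duration, and sample rate are illustrative assumptions, not the device's actual parameters:

```python
import numpy as np

def image_to_sweep(image, f_lo=500.0, f_hi=5000.0,
                   col_dur=0.03, sr=16000):
    """Map a 2-D grayscale image (rows x cols, values in 0..1) to audio:
    each column becomes a short time slice (left-to-right scan), and each
    row contributes a sine whose frequency rises with height in the image
    and whose amplitude is the pixel brightness."""
    rows, cols = image.shape
    # Row 0 is the top of the image, so it gets the highest frequency.
    freqs = np.linspace(f_hi, f_lo, rows)
    n = int(col_dur * sr)
    t = np.arange(n) / sr
    slices = []
    for c in range(cols):
        # Sum the sinusoids for every row of this column.
        tones = image[:, c, None] * np.sin(2 * np.pi * freqs[:, None] * t)
        slices.append(tones.sum(axis=0))
    audio = np.concatenate(slices)
    peak = np.abs(audio).max()
    return audio / peak if peak > 0 else audio
```

A bright pixel near the top-left thus produces a high-pitched tone early in the sweep, while one near the bottom-right produces a low tone late in the sweep.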
Spatial metaphor in language can promote the development of cross‐modal mappings in children [PDF]
Pitch is often described metaphorically: for example, Farsi and Turkish speakers use a ‘thickness’ metaphor (low sounds are ‘thick’ and high sounds are ‘thin’), while German and English speakers use a height metaphor (‘low’, ‘high’). This study examines how child and adult speakers of Farsi, Turkish, and German map pitch and thickness using a ...
Shayan, S. +3 more
openaire +5 more sources
Cross-Modal Sound Mapping Using Deep Learning
We present a method for automatic feature extraction and cross-modal mapping using deep learning. Our system uses stacked autoencoders to learn a layered feature representation of the data. Feature vectors from two (or more) different domains are mapped to each other, effectively creating a cross-modal mapping. Our system can either run fully unsupervised, ...
Fried, Ohad, Fiebrink, Rebecca
openaire +2 more sources
Cross-modal Memory Networks for Radiology Report Generation [PDF]
Medical imaging plays a significant role in clinical practice of medical diagnosis, where the text reports of the images are essential in understanding them and facilitating later treatments.
Zhihong Chen +3 more
semanticscholar +1 more source