Results 1 to 10 of about 153,605
Musicians are more consistent: Gestural cross-modal mappings of pitch, loudness and tempo in real-time [PDF]
Cross-modal mappings of auditory stimuli reveal valuable insights into how humans make sense of sound and music. Whereas researchers have investigated cross-modal mappings of sound features varied in isolation within paradigms such as speeded ...
Mats B. Küssner +3 more
doaj +3 more sources
Natural cross-modal mappings between visual and auditory features. [PDF]
The brain may combine information from different sense modalities to enhance the speed and accuracy of detection of objects and events, and the choice of appropriate responses. There is mounting evidence that perceptual experiences that appear to be modality-specific are also influenced by activity from other sensory modalities, even in the absence of ...
Evans KK, Treisman A.
europepmc +4 more sources
Cross Modal Facial Image Synthesis Using a Collaborative Bidirectional Style Transfer Network
In this paper, we present a novel collaborative bidirectional style transfer network based on a generative adversarial network (GAN) for cross-modal facial image synthesis, possibly with a large modality gap.
Nizam Ud Din +4 more
doaj +2 more sources
Learning visual to auditory sensory substitution reveals flexibility in image to sound mapping [PDF]
Visual-to-auditory sensory substitution devices (SSDs) translate images to sounds. One SSD, The vOICe, translates a pixel’s vertical position into pitch and horizontal position into time.
Asa Kucinkas +5 more
doaj +2 more sources
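The mapping described in this entry (a pixel's vertical position to pitch, horizontal position to time) can be illustrated with a minimal sketch. This is not The vOICe's published implementation; the frequency range, scan duration, and function name below are assumptions chosen purely for demonstration.

```python
# Illustrative image-to-sound sketch: row index -> pitch, column index -> time,
# pixel brightness -> tone amplitude. Parameter values are assumed, not taken
# from the vOICe specification.
import numpy as np

def image_to_tones(image, f_min=200.0, f_max=5000.0, scan_seconds=1.0, sample_rate=44100):
    """Return a mono waveform scanning the image left to right; higher rows map to higher pitch."""
    n_rows, n_cols = image.shape
    freqs = np.geomspace(f_min, f_max, n_rows)[::-1]        # top of image -> highest frequency
    samples_per_col = int(sample_rate * scan_seconds / n_cols)
    t = np.arange(samples_per_col) / sample_rate
    slices = []
    for col in range(n_cols):                               # left-to-right scan supplies the time axis
        amplitude = image[:, col].astype(float) / 255.0     # brightness controls loudness per row
        tone_mix = (amplitude[:, None] * np.sin(2 * np.pi * freqs[:, None] * t)).sum(axis=0)
        slices.append(tone_mix / max(n_rows, 1))             # normalize to avoid clipping
    return np.concatenate(slices)

# Example: render a 64x64 random image as a one-second soundscape.
audio = image_to_tones(np.random.randint(0, 256, (64, 64)))
```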
Cross-modal re-mapping influences the Simon effect [PDF]
Tagliabue, Zorzi, Umiltà, and Bassignani (2000) showed that one's practicing of a spatially incompatible task influences performance in a Simon task even when the interval between the two tasks is as long as 1 week. In the present study, three experiments were conducted to investigate whether such an effect could be found in a cross-modal paradigm ...
Mariaelena Tagliabue +2 more
openaire +4 more sources
Survey of Visual Question Answering Based on Deep Learning [PDF]
Visual question answering (VQA) is an interdisciplinary research paradigm that involves computer vision and natural language processing. VQA generally requires both image and text data to be encoded, their mappings learned, and their features fused, before ...
LI Xiang, FAN Zhiguang, LI Xuexiang, ZHANG Weixing, YANG Cong, CAO Yangjie
doaj +1 more source
Cross-modal distillation for flood extent mapping
The increasing intensity and frequency of floods is one of the many consequences of our changing climate. In this work, we explore ML techniques that improve the flood detection module of an operational early flood warning system. Our method exploits an unlabeled dataset of paired multi-spectral and synthetic aperture radar (SAR) imagery to ...
Shubhika Garg +6 more
openaire +3 more sources
Augmented reality flavor: cross-modal mapping across gustation, olfaction, and vision [PDF]
Gustatory display research is still in its infancy, despite gustation being one of the essential everyday senses that humans exercise while eating and drinking. Indeed, the most important and frequent tasks that our brain deals with every day are foraging and feeding.
Osama Halabi, Mohammad Saleh
openaire +4 more sources
People conceptualize auditory pitch as vertical space: low and high pitch correspond to low and high space, respectively. The strength of this cross-modal correspondence, however, seems to vary across different cultural contexts and a debate on the ...
Valentijn Prové
doaj +1 more source
Future or Movement? The L2 Acquisition of Aller + V Forms
This study aims to advance the understanding of the impact of the discursive context on the form-function mappings of aller + V forms by native speakers (NSs) and learners of French (NNSs), and to further knowledge about the developmental patterns of use ...
Pascale Leclercq
doaj +1 more source

