Results 11 to 20 of about 149,578 (172)

Cross-modal associations and synesthesia: Categorical perception and structure in vowel–color mappings in a large online sample

open access: yesBehavior Research Methods, 2019
We report associations between vowel sounds, graphemes, and colors collected online from over 1,000 Dutch speakers. We also provide open materials, including a Python implementation of the structure measure and code for a single-page web application to ...
C. Cuskley   +3 more
semanticscholar   +3 more sources

Survey of Visual Question Answering Based on Deep Learning [PDF]

open access: yesJisuanji kexue, 2023
Visual question answering (VQA) is an interdisciplinary research paradigm that involves computer vision and natural language processing. VQA generally requires both image and text data to be encoded, their mappings learned, and their features fused, before ...
LI Xiang, FAN Zhiguang, LI Xuexiang, ZHANG Weixing, YANG Cong, CAO Yangjie
doaj   +1 more source
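The encode–fuse–classify pipeline this abstract describes can be sketched in miniature. Everything below is a hypothetical stand-in (the "encoders" are toy statistics, not real CNN/text models); it only illustrates the shape of the pipeline: encode each modality, then fuse the features.

```python
# Toy sketch of the generic VQA pipeline: encode image and question
# separately, then fuse the two feature vectors. The encoders and
# fusion below are hypothetical stand-ins, not a real model.

def encode_image(pixels):
    # stand-in "image encoder": mean and max of pixel intensities
    return [sum(pixels) / len(pixels), max(pixels)]

def encode_question(question, vocab):
    # stand-in "text encoder": bag-of-words counts over a tiny vocab
    words = question.lower().split()
    return [float(words.count(w)) for w in vocab]

def fuse(img_feat, txt_feat):
    # simplest possible fusion: concatenate the modality features;
    # real systems use e.g. bilinear pooling or attention here
    return img_feat + txt_feat

vocab = ["what", "color", "how", "many"]
img_feat = encode_image([0.1, 0.9, 0.5, 0.5])
txt_feat = encode_question("What color is the ball", vocab)
fused = fuse(img_feat, txt_feat)
```

The fused vector would then feed a classifier over an answer vocabulary; that final step is omitted here.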

Cross-modal distillation for flood extent mapping

open access: yesEnvironmental Data Science, 2023
Abstract The increasing intensity and frequency of floods is one of the many consequences of our changing climate. In this work, we explore ML techniques that improve the flood detection module of an operational early flood warning system. Our method exploits an unlabeled dataset of paired multi-spectral and synthetic aperture radar (SAR) imagery to ...
Shubhika Garg   +6 more
openaire   +3 more sources

Is Cross-Modal Information Retrieval Possible Without Training? [PDF]

open access: yesEuropean Conference on Information Retrieval, 2023
Encoded representations from a pretrained deep learning model (e.g., BERT text embeddings, penultimate CNN layer activations of an image) convey a rich set of features beneficial for information retrieval.
Hyunjin Choi   +3 more
semanticscholar   +1 more source
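The idea in this abstract, retrieval directly over pretrained encoder representations without any task-specific training, reduces to nearest-neighbor search by similarity. A minimal sketch with hand-made toy vectors standing in for real embeddings (the vector values and ids here are invented for illustration):

```python
# Training-free cross-modal retrieval sketch: given precomputed
# embeddings from pretrained encoders (toy vectors here), return
# the image whose embedding is most similar to the text query
# under cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# toy "image embeddings" keyed by id
image_db = {
    "img_cat": [0.9, 0.1, 0.0],
    "img_dog": [0.1, 0.9, 0.0],
}

text_query = [0.8, 0.2, 0.1]  # toy "text embedding" of a cat query
best = max(image_db, key=lambda k: cosine(text_query, image_db[k]))
```

Whether this works in practice hinges on the two encoders sharing (or being projected into) a common embedding space, which is exactly the question the paper's title raises.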

Augmented reality flavor: cross-modal mapping across gustation, olfaction, and vision [PDF]

open access: yesMultimedia Tools and Applications, 2021
Abstract Gustatory display research is still in its infancy, despite gustation being one of the essential everyday senses that humans exercise while eating and drinking. Indeed, the most important and frequent tasks that our brain deals with every day are foraging and feeding.
Osama Halabi, Mohammad Saleh
openaire   +4 more sources

Measuring embodied conceptualizations of pitch in singing performances: Insights from an OpenPose study

open access: yesFrontiers in Communication, 2022
People conceptualize auditory pitch as vertical space: low and high pitch correspond to low and high space, respectively. The strength of this cross-modal correspondence, however, seems to vary across different cultural contexts and a debate on the ...
Valentijn Prové
doaj   +1 more source

Future or Movement? The L2 Acquisition of Aller + V Forms

open access: yesLanguages, 2021
This study aims to advance the understanding of the impact of the discursive context in the form-function mappings of aller + V forms by native speakers (NSs) and learners of French (NNSs), and to further knowledge about the developmental patterns of use
Pascale Leclercq
doaj   +1 more source

Pix2Map: Cross-Modal Retrieval for Inferring Street Maps from Images

open access: yes2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023
12 pages, 8 ...
Wu, Xindi   +4 more
openaire   +2 more sources

That sounds sweet: using cross-modal correspondences to communicate gustatory attributes

open access: yesPsychology and Marketing, 2015
Klemens Knoeferle   +3 more
semanticscholar   +3 more sources

Cross-modal Map Learning for Vision and Language Navigation

open access: yes2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
We consider the problem of Vision-and-Language Navigation (VLN). The majority of current methods for VLN are trained end-to-end using either unstructured memory such as LSTM, or using cross-modal attention over the egocentric observations of the agent.
Georgakis, Georgios   +6 more
openaire   +2 more sources
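The cross-modal attention this abstract contrasts with LSTM memory can be sketched as standard scaled dot-product attention, with a language-derived query attending over visual observation features. The inputs below are toy vectors chosen for illustration, not anything from the paper:

```python
# Scaled dot-product attention sketch: one text-side query attends
# over a set of visual observation features (keys/values).
import math

def attention(query, keys, values):
    d = len(query)
    # similarity of the query to each observation, scaled by sqrt(d)
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # softmax over scores (shifted by the max for numerical stability)
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    # weighted sum of the value vectors
    out = [sum(w * v[i] for w, v in zip(weights, values))
           for i in range(len(values[0]))]
    return out, weights

# toy example: the query matches the first observation more closely
out, weights = attention([1.0, 0.0],
                         [[1.0, 0.0], [0.0, 1.0]],
                         [[1.0, 0.0], [0.0, 1.0]])
```

In a VLN agent the query would come from the instruction encoder and the keys/values from egocentric visual features; the attention weights then indicate which observations the instruction grounds to.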
