Results 131 to 140 of about 149,578
Some of the following articles may not be open access.
Manual directional gestures facilitate cross-modal perceptual learning.
Cognition, 2019
Action and perception interact in complex ways to shape how we learn. In the context of language acquisition, for example, hand gestures can facilitate learning novel sound-to-meaning mappings that are critical to successfully understanding a second ...
Anna Zhen +4 more
A Library-Oriented Large Language Model Approach to Cross-Lingual and Cross-Modal Document Retrieval
Electronics
Under the growing demand for processing multimodal and cross-lingual information, traditional retrieval systems have encountered substantial limitations when handling heterogeneous inputs such as images, textual layouts, and multilingual language ...
Wang Yi +4 more
Learning Semantic Structure-preserved Embeddings for Cross-modal Retrieval
ACM Multimedia, 2018
This paper learns semantic embeddings for multi-label cross-modal retrieval. Our method exploits the structure in semantics represented by label vectors to guide the learning of embeddings.
Yiling Wu, Shuhui Wang, Qingming Huang
Multimedia Feature Mapping and Correlation Learning for Cross-Modal Retrieval
International Journal of Grid and High Performance Computing, 2018
With the rapid increase of multimedia content on the Internet, the need for effective cross-modal retrieval has attracted much attention recently. Many related works ignore the latent semantic correlations of modalities in the non-linear space and the extraction of high-level modality features, focusing only on the ...
Xu Yuan +4 more
Cross-modal association between vowels and colours: A cross-linguistic perspective.
Journal of the Acoustical Society of America, 2019
Previous studies showed similar mappings between sounds and colours for synaesthetes and non-synaesthetes alike, and proposed that common mechanisms underlie such cross-modal association.
Peggy Mok +4 more
GEAL: Generalizable 3D Affordance Learning with Cross-Modal Consistency
Computer Vision and Pattern Recognition
Identifying affordance regions on 3D objects from semantic cues is essential for robotics and human-machine interaction. However, existing 3D affordance learning methods struggle with generalization and robustness due to limited annotated data and a ...
Dongyue Lu +3 more
Syncgan: Synchronize the Latent Spaces of Cross-Modal Generative Adversarial Networks
IEEE International Conference on Multimedia and Expo, 2018
Generative adversarial network (GAN) has achieved impressive success on cross-domain generation, but it faces difficulty in cross-modal generation due to the lack of a common distribution between heterogeneous data.
Wen-Cheng Chen +2 more
semanticscholar +1 more source
Workshop on Cognitive Modeling and Computational Linguistics
Humans have clear cross-modal preferences when matching certain novel words to visual shapes. Evidence suggests that these preferences play a prominent role in our linguistic processing, language learning, and the origins of signal-meaning mappings. With
T. Verhoef +2 more
Ergonomics, 2020
A discrete four-choice response task with auditory signal presentation and a joystick-controlled visual tracking task was used to investigate how spatial compatibility influences the dual-task performance of different display-control settings.
S. Tsang, A. Chan, Xing Pan, S. Man
Robust 3D Semantic Segmentation Based on Multi-Phase Multi-Modal Fusion for Intelligent Vehicles
IEEE Transactions on Intelligent Vehicles
3D semantic segmentation is a key technology for intelligent vehicles. Recently, great efforts have been made to achieve accurate and robust 3D semantic segmentation results through LiDAR-camera fusion, due to the complementary information of images like ...
Peizhou Ni +5 more