Results 221 to 230 of about 64,981 (259)
Some of the following articles may not be open access.
Cross-Modal Center Loss for 3D Cross-Modal Retrieval
2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
Cross-modal retrieval aims to learn discriminative and modal-invariant features for data from different modalities. Unlike existing methods, which usually learn from features extracted by offline networks, in this paper we propose an approach to jointly train the components of a cross-modal retrieval framework with metadata, and enable the ...
Longlong Jing et al.
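The entry above centers on learning modal-invariant features via a center loss shared across modalities. As a rough illustration of that idea (not the paper's actual method), the sketch below pulls image and text features of the same class toward one shared class center; the function name, shapes, and random data are all hypothetical.

```python
import numpy as np

def cross_modal_center_loss(img_feats, txt_feats, labels, centers):
    """Mean squared distance of features from BOTH modalities to the
    shared class centers indexed by label (illustrative sketch only)."""
    d_img = img_feats - centers[labels]   # (N, D) offsets for images
    d_txt = txt_feats - centers[labels]   # (N, D) offsets for texts
    return 0.5 * (np.sum(d_img**2) + np.sum(d_txt**2)) / len(labels)

rng = np.random.default_rng(0)
N, D, C = 8, 4, 3                         # samples, feature dim, classes
img = rng.normal(size=(N, D))             # stand-in image features
txt = rng.normal(size=(N, D))             # stand-in text features
y = rng.integers(0, C, size=N)            # class labels
centers = rng.normal(size=(C, D))         # shared class centers
loss = cross_modal_center_loss(img, txt, y, centers)
```

Because both modalities are penalized against the same centers, minimizing this term pushes paired image and text features toward a common, class-discriminative region of the space.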
FedCMR: Federated Cross-Modal Retrieval
Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2021
Deep cross-modal retrieval methods have shown their competitiveness among different cross-modal retrieval algorithms. Generally, these methods require a large amount of training data. However, aggregating large amounts of data will incur huge privacy risks and high maintenance costs.
Linlin Zong et al.
Adversarial Cross-Modal Retrieval
Proceedings of the 25th ACM international conference on Multimedia, 2017
Cross-modal retrieval aims to enable a flexible retrieval experience across different modalities (e.g., texts vs. images). The core of cross-modal retrieval research is to learn a common subspace where items of different modalities can be directly compared to each other.
Bokun Wang et al.
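The common-subspace idea described in the entry above can be sketched in a few lines of numpy: project each modality into a shared space, normalize, and rank by cosine similarity. The random linear projections here are a hypothetical stand-in for the learned (e.g., adversarially trained) encoders.

```python
import numpy as np

rng = np.random.default_rng(1)
D_IMG, D_TXT, D_SHARED = 6, 5, 3

# Hypothetical projections into the shared subspace; real methods
# learn these jointly rather than sampling them at random.
W_img = rng.normal(size=(D_IMG, D_SHARED))
W_txt = rng.normal(size=(D_TXT, D_SHARED))

def embed(x, W):
    z = x @ W
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # unit-norm rows

imgs = embed(rng.normal(size=(4, D_IMG)), W_img)   # image gallery
query = embed(rng.normal(size=(1, D_TXT)), W_txt)  # one text query

# On unit vectors, cosine similarity is just a dot product;
# retrieval returns the index of the best-matching image.
scores = (query @ imgs.T).ravel()
best = int(np.argmax(scores))
```

Once both modalities live in the same space, any nearest-neighbor machinery works unchanged, which is why most of the surveyed methods differ mainly in *how* the projections are learned.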
Automatic Semantic Modeling by Cross-Modal Retrieval
2022 IEEE 24th Int Conf on High Performance Computing & Communications; 8th Int Conf on Data Science & Systems; 20th Int Conf on Smart City; 8th Int Conf on Dependability in Sensor, Cloud & Big Data Systems & Application (HPCC/DSS/SmartCity/DependSys), 2022
Semantic models of data sources describe the concepts and relations within the data. Building semantic models with the help of a common ontology is a key step in automatically publishing the semantics of structured data into knowledge graphs. However, modeling the semantics of data manually requires considerable human cost and expertise, and can be error ...
Ruiqing Xu et al.
Cross-Modal Retrieval With Partially Mismatched Pairs
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
In this paper, we study a challenging but less-touched problem in cross-modal retrieval, i.e., partially mismatched pairs (PMPs). Specifically, in real-world scenarios, a huge number of multimedia data (e.g., the Conceptual Captions dataset) are collected from the Internet, and thus it is inevitable to wrongly treat some irrelevant cross-modal pairs as ...
Peng Hu et al.
Generalized Zero-Shot Cross-Modal Retrieval
IEEE Transactions on Image Processing, 2019
Cross-modal retrieval is an important research area due to its wide range of applications, and several algorithms have been proposed to address this task. We feel that it is the right time to take a step back and analyze the current status of research in this area.
Titir Dutta, Soma Biswas
Deep Centralized Cross-modal Retrieval
2021
The mainstream of cross-modal retrieval approaches generally focuses on measuring the similarity between different types of data in a common subspace. Most of these methods are based on pairs or triplets of instances, which has the following disadvantages: 1) due to the high discrepancy of pairs and triplets, there might be a large number of zero-losses ...
Zhenyu Wen, Aimin Feng
Deep Relation Embedding for Cross-Modal Retrieval
IEEE Transactions on Image Processing, 2021
Cross-modal retrieval aims to identify relevant data across different modalities. In this work, we are dedicated to cross-modal retrieval between images and text sentences, which is formulated as similarity measurement for each image-text pair. To this end, we propose a Cross-modal Relation Guided Network (CRGN) to embed image and text into a latent ...
Yifan Zhang et al.

