Results 1 to 10 of about 4,969,023
Adversarial Multitask Learning for Domain Adaptation Through Domain Adapter
This study presents Adversarial Multitask Learning (AML), a technique for improving the effectiveness of domain adaptation methods in practical applications, where such methods are in high demand. The proposed approach addresses the challenges posed
Hidayaturrahman +3 more
doaj +2 more sources
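The snippet above names an adversarial multitask setup built around a domain adapter but does not describe the architecture. Below is a minimal, hypothetical sketch of the common gradient-reversal pattern for adversarial domain adaptation; the class names, layer sizes, and single-adapter layout are assumptions for illustration, not the paper's actual design.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

class AdversarialMultitaskModel(nn.Module):
    """Hypothetical sketch: shared encoder, a task head, and an adversarial
    domain head whose reversed gradients push the encoder toward
    domain-invariant features."""
    def __init__(self, in_dim=256, hidden=128, n_classes=10, n_domains=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.task_head = nn.Linear(hidden, n_classes)
        self.domain_head = nn.Linear(hidden, n_domains)

    def forward(self, x, lambd=1.0):
        z = self.encoder(x)
        return self.task_head(z), self.domain_head(GradReverse.apply(z, lambd))
```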
A Bearing Fault Diagnosis Method Based on Dual-Stream Hybrid-Domain Adaptation [PDF]
Bearing fault diagnosis under varying operating conditions faces challenges of domain shift and labeled data scarcity. This paper proposes a dual-stream hybrid-domain adaptation network (DS-HDA Net) that fuses CNN-extracted time-domain features with MLP ...
Xinze Jiao, Jianjie Zhang, Jianhui Cao
doaj +2 more sources
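The entry above describes fusing CNN-extracted time-domain features with an MLP stream. The following is a minimal sketch of such a dual-stream layout under assumed input shapes and layer sizes; it is illustrative only and does not reproduce the DS-HDA Net architecture or its domain-adaptation components.

```python
import torch
import torch.nn as nn

class DualStreamNet(nn.Module):
    """Illustrative dual-stream model: a 1-D CNN over the raw vibration signal
    plus an MLP over a second feature view, fused by concatenation."""
    def __init__(self, feat_dim=64, n_classes=4):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(8), nn.Flatten())          # -> 16 * 8 = 128 features
        self.mlp = nn.Sequential(nn.Linear(feat_dim, 64), nn.ReLU())
        self.classifier = nn.Linear(128 + 64, n_classes)

    def forward(self, signal, features):
        # signal: (B, 1, L), features: (B, feat_dim)
        fused = torch.cat([self.cnn(signal), self.mlp(features)], dim=1)
        return self.classifier(fused)

# usage sketch with dummy data
net = DualStreamNet()
logits = net(torch.randn(2, 1, 1024), torch.randn(2, 64))
```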
Multi-EPL: Accurate multi-source domain adaptation
Given multiple source datasets with labels, how can we train a target model with no labeled data? Multi-source domain adaptation (MSDA) aims to train a model using multiple source datasets different from a target dataset in the absence of target data ...
Seongmin Lee, Hyunsik Jeon, U. Kang
doaj +2 more sources
Unsupervised domain adaptation with post-adaptation labeled domain performance preservation
Unsupervised domain adaptation is a machine learning setting that aims to transfer knowledge learned from a seen (source) domain with labeled data to an unseen (target) domain with only unlabeled data.
Haidi Badr, Nayer Wanas, Magda Fayek
doaj +1 more source
NuSegDA: Domain adaptation for nuclei segmentation
The accurate segmentation of nuclei is crucial for cancer diagnosis and further clinical treatments. To successfully train a nuclei segmentation network in a fully-supervised manner for a particular type of organ or cancer, we need the dataset with ...
Mohammad Minhazul Haq +2 more
doaj +1 more source
Partial Domain Adaptation Without Domain Alignment
10 pages.
Weikai Li, Songcan Chen
openaire +3 more sources
Domain Adaptive Ensemble Learning [PDF]
The problem of generalizing deep neural networks from multiple source domains to a target one is studied under two settings: When unlabeled target data is available, it is a multi-source unsupervised domain adaptation (UDA) problem, otherwise a domain generalization (DG) problem.
Kaiyang Zhou +3 more
openaire +3 more sources
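The abstract above concerns combining models trained on multiple source domains. As a point of reference only, the snippet below shows the simplest form of such an ensemble, averaging per-source expert predictions for a target batch; the paper's actual collaborative training scheme is not captured here.

```python
import torch

def ensemble_predict(experts, x):
    """Average the softmax outputs of per-source expert classifiers to form
    the prediction for a target batch (a deliberately simplified ensemble)."""
    probs = [torch.softmax(expert(x), dim=1) for expert in experts]
    return torch.stack(probs).mean(dim=0)
```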
UDAPTER - Efficient Domain Adaptation Using Adapters
We propose two methods to make unsupervised domain adaptation (UDA) more parameter efficient using adapters, small bottleneck layers interspersed with every layer of the large-scale pre-trained language model (PLM). The first method deconstructs UDA into a two-step process: first by adding a domain adapter to learn domain-invariant information and then
Malik, Bhavitvya +3 more
openaire +2 more sources
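The entry above refers to adapters as small bottleneck layers added to a frozen pre-trained language model. The sketch below shows the standard bottleneck-adapter building block; the dimensions and residual placement are illustrative assumptions, not UDAPTER's exact configuration.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Down-project / non-linearity / up-project with a residual connection,
    inserted after a (frozen) transformer sub-layer. Only these few
    parameters are trained; the PLM weights stay fixed."""
    def __init__(self, hidden_dim=768, bottleneck_dim=48):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.act = nn.GELU()
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```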
Self-Adaptation for Unsupervised Domain Adaptation [PDF]
Lack of labelled data in the target domain for training is a common problem in domain adaptation. To overcome this problem, we propose a novel unsupervised domain adaptation method that combines projection and self-training based approaches. Using the labelled data from the source domain, we first learn a projection that maximises the distance among ...
Cui, Xia, Bollegala, Danushka
openaire +2 more sources
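The abstract above combines a learned projection with self-training. The fragment below sketches only the self-training half, i.e. one pseudo-labeling step on unlabeled target data with a confidence threshold; the threshold, the step structure, and the omission of the projection step are simplifications, not the authors' procedure.

```python
import torch
import torch.nn.functional as F

def self_training_step(model, optimizer, target_x, threshold=0.9):
    """One pseudo-labeling step: label confident target samples with the
    current model, then take a gradient step on those pseudo-labels."""
    model.eval()
    with torch.no_grad():
        probs = F.softmax(model(target_x), dim=1)
        conf, pseudo = probs.max(dim=1)
        keep = conf > threshold            # keep only confident predictions
    if keep.any():
        model.train()
        loss = F.cross_entropy(model(target_x[keep]), pseudo[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```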
Cross Domain Mean Approximation for Unsupervised Domain Adaptation
Unsupervised Domain Adaptation (UDA) aims to leverage knowledge from a labeled source domain to support the task on an unlabeled target domain. A key step in UDA is minimizing the cross-domain distribution divergence. In this paper, we
Shaofei Zang +4 more
doaj +1 more source
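The entry above centers on minimizing cross-domain distribution divergence via a mean-based criterion. As a rough illustration, the function below measures the squared distance between per-domain feature means, the simplest moment-matching divergence; the paper's actual Cross Domain Mean Approximation formulation is likely more involved.

```python
import torch

def cross_domain_mean_distance(source_feats, target_feats):
    """Squared Euclidean distance between the mean source feature vector and
    the mean target feature vector (simplest moment-matching divergence)."""
    return (source_feats.mean(dim=0) - target_feats.mean(dim=0)).pow(2).sum()

# usage sketch: add the divergence to the supervised source loss
# loss = task_loss + lam * cross_domain_mean_distance(source_z, target_z)
```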

