Results 11 to 20 of about 459,246
Effects of crowding and attention on high-levels of motion processing and motion adaptation [PDF]
The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e ...
Greenlee, Mark W., Pavan, Andrea
core +2 more sources
FouRA: Fourier Low-Rank Adaptation
While Low-Rank Adaptation (LoRA) has proven beneficial for efficiently fine-tuning large models, LoRA fine-tuned text-to-image diffusion models lack diversity in the generated images, as the model tends to copy data from the observed training samples. This effect becomes more pronounced at higher values of adapter strength and for adapters with higher ...
Borse, Shubhankar +9 more
openaire +2 more sources
Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation
The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems.
Bendre, Mangesh +4 more
core +1 more source
Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos
Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of ...
Chandraker, Manmohan +5 more
core +1 more source
Adaptive Low-Rank Methods: Problems on Sobolev Spaces [PDF]
This paper is concerned with the development and analysis of an iterative solver for high-dimensional second-order elliptic problems based on subspace-based low-rank tensor formats. Both the subspaces giving rise to low-rank approximations and corresponding sparse approximations of lower-dimensional tensor components are determined adaptively.
Markus Bachmayr, Wolfgang Dahmen
openaire +3 more sources
Towards Adapting ImageNet to Reality: Scalable Domain Adaptation with Implicit Low-rank Transformations [PDF]
Images seen during test time are often not from the same distribution as images used for learning. This problem, known as domain shift, occurs when training classifiers from object-centric internet image databases and trying to apply them directly to ...
Darrell, Trevor +4 more
core
Return of Frustratingly Easy Domain Adaptation
Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning ...
Feng, Jiashi, Saenko, Kate, Sun, Baochen
core +1 more source
Primary lung carcinomas and bronchial carcinoid tumors (BC) are very rare malignancies in childhood. While typical BC and mucoepidermoid carcinomas are mostly low‐grade, localized tumors with a more favorable prognosis than in adults, necessitating avoidance of overtreatment, adenocarcinomas of the lung are often diagnosed at advanced disease ...
Michael Abele +19 more
wiley +1 more source
Factor analysis modelling for speaker verification with short utterances [PDF]
This paper examines combining both relevance MAP and subspace speaker adaptation processes to train GMM speaker models for use in speaker verification systems with a particular focus on short utterance lengths.
Lustri, Christopher +2 more
core +1 more source
CoLA: Collaborative Low-Rank Adaptation
The scaling law of Large Language Models (LLMs) reveals a power-law relationship, showing diminishing return on performance as model scale increases. While training LLMs from scratch is resource-intensive, fine-tuning a pre-trained model for specific tasks has become a practical alternative.
Zhou, Yiyun, Yao, Chang, Chen, Jingyuan
openaire +2 more sources