Results 11 to 20 of about 459,246

Effects of crowding and attention on high-levels of motion processing and motion adaptation [PDF]

open access: yes, 2015
The motion after-effect (MAE) persists in crowding conditions, i.e., when the adaptation direction cannot be reliably perceived. The MAE originating from complex moving patterns spreads into non-adapted sectors of a multi-sector adapting display (i.e ...
Greenlee, Mark W., Pavan, Andrea
core   +2 more sources

FouRA: Fourier Low-Rank Adaptation

open access: yes, Advances in Neural Information Processing Systems 37
While Low-Rank Adaptation (LoRA) has proven beneficial for efficiently fine-tuning large models, LoRA fine-tuned text-to-image diffusion models lack diversity in the generated images, as the model tends to copy data from the observed training samples. This effect becomes more pronounced at higher values of adapter strength and for adapters with higher ...
Borse, Shubhankar   +9 more
openaire   +2 more sources
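A minimal sketch of the generic low-rank adapter discussed in the entry above, in PyTorch; the class name, shapes, and hyperparameters are illustrative, and FouRA's Fourier-domain variant is not reproduced, only the plain LoRA update it builds on:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update (generic LoRA).

    Illustrative sketch only: FouRA applies the low-rank adaptation in a
    frequency domain, which is not shown here.
    """
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # keep pre-trained weights frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: adapter starts as a no-op
        self.scale = alpha / r                         # the "adapter strength" the abstract refers to

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

Raising `alpha` increases the adapter's influence at inference time, which is the high-adapter-strength regime where the abstract reports the loss of diversity becoming more pronounced.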

Transfer Learning via Contextual Invariants for One-to-Many Cross-Domain Recommendation

open access: yes, 2020
The rapid proliferation of new users and items on the social web has aggravated the gray-sheep user/long-tail item challenge in recommender systems.
Bendre, Mangesh   +4 more
core   +1 more source

Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos

open access: yes, 2017
Despite rapid advances in face recognition, there remains a clear gap between the performance of still image-based face recognition and video-based face recognition, due to the vast difference in visual quality between the domains and the difficulty of ...
Chandraker, Manmohan   +5 more
core   +1 more source

Adaptive Low-Rank Methods: Problems on Sobolev Spaces [PDF]

open access: yes, SIAM Journal on Numerical Analysis, 2016
This paper is concerned with the development and analysis of an iterative solver for high-dimensional second-order elliptic problems based on subspace-based low-rank tensor formats. Both the subspaces giving rise to low-rank approximations and corresponding sparse approximations of lower-dimensional tensor components are determined adaptively.
Markus Bachmayr, Wolfgang Dahmen
openaire   +3 more sources
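The solver in the entry above adapts both its subspaces and its ranks; as a minimal sketch of the underlying primitive (choosing the smallest rank that meets an error tolerance), assuming a plain matrix rather than the paper's tensor formats:

```python
import numpy as np

def truncate_rank(M, tol):
    """Smallest-rank SVD truncation with Frobenius-norm error below tol --
    the basic primitive behind adaptively chosen low-rank approximations."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    tail = np.sqrt(np.cumsum(s[::-1] ** 2))[::-1]  # tail[r] = error if only the first r values are kept
    hits = np.nonzero(tail <= tol)[0]
    r = int(hits[0]) if hits.size else len(s)      # smallest rank meeting the tolerance
    return U[:, :r] * s[:r], Vt[:r]                # M ~= left @ right

# A noisy rank-3 matrix is recovered at (near) rank 3.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 100))
left, right = truncate_rank(M + 1e-8 * rng.standard_normal(M.shape), tol=1e-3)
assert left.shape[1] <= 4
```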

Towards Adapting ImageNet to Reality: Scalable Domain Adaptation with Implicit Low-rank Transformations [PDF]

open access: yes, 2013
Images seen during test time are often not from the same distribution as images used for learning. This problem, known as domain shift, occurs when training classifiers from object-centric internet image databases and trying to apply them directly to ...
Darrell, Trevor   +4 more
core  

Return of Frustratingly Easy Domain Adaptation

open access: yes, 2015
Unlike human learning, machine learning often fails to handle changes between training (source) and test (target) input distributions. Such domain shifts, common in practical scenarios, severely damage the performance of conventional machine learning ...
Feng, Jiashi, Saenko, Kate, Sun, Baochen
core   +1 more source
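For context, this paper's remedy (CORAL) aligns second-order feature statistics between the two domains; a minimal numpy sketch of that correlation alignment, with the regularization constant chosen for illustration:

```python
import numpy as np

def coral(Xs, Xt, eps=1.0):
    """Correlation alignment: whiten the source features, then re-color
    them with the target covariance, so second-order statistics match."""
    def cov_pow(X, p):
        C = np.cov(X, rowvar=False) + eps * np.eye(X.shape[1])  # regularized covariance
        w, V = np.linalg.eigh(C)                                # symmetric PD -> eigh
        return V @ np.diag(w ** p) @ V.T
    return Xs @ cov_pow(Xs, -0.5) @ cov_pow(Xt, 0.5)            # Xs Cs^{-1/2} Ct^{1/2}
```

A classifier trained on `coral(Xs, Xt)` with the source labels can then be applied to the target features directly.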

Carcinomas and Carcinoid Tumors of the Lungs and Bronchi in Children and Adolescents: The EXPeRT Recommendations

open access: yes, Pediatric Blood & Cancer, EarlyView.
ABSTRACT Primary lung carcinomas and bronchial carcinoid tumors (BC) are very rare malignancies in childhood. While typical BC and mucoepidermoid carcinomas are mostly low‐grade, localized tumors with a more favorable prognosis than in adults, necessitating avoidance of overtreatment, adenocarcinomas of the lung are often diagnosed at advanced disease ...
Michael Abele   +19 more
wiley   +1 more source

Factor analysis modelling for speaker verification with short utterances [PDF]

open access: yes, 2008
This paper examines combining both relevance MAP and subspace speaker adaptation processes to train GMM speaker models for use in speaker verification systems with a particular focus on short utterance lengths.
Lustri, Christopher   +2 more
core   +1 more source
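The relevance-MAP step mentioned above pulls each Gaussian component of a universal background model (UBM) toward the speaker's data in proportion to how much data that component is responsible for; a minimal numpy sketch of mean-only adaptation (the subspace stage is omitted, and the relevance factor is a typical but illustrative value):

```python
import numpy as np

def map_adapt_means(frames, ubm_means, ubm_covs, ubm_weights, relevance=16.0):
    """Relevance-MAP adaptation of diagonal-covariance GMM means.

    frames: (T, D) features; ubm_means/ubm_covs: (K, D); ubm_weights: (K,).
    With few frames (short utterances) the soft counts n_k stay small and
    the means barely move from the UBM prior -- the regime the paper targets.
    """
    diff = frames[:, None, :] - ubm_means[None, :, :]                 # (T, K, D)
    log_p = -0.5 * np.sum(diff ** 2 / ubm_covs + np.log(2 * np.pi * ubm_covs), axis=2)
    log_p += np.log(ubm_weights)
    post = np.exp(log_p - log_p.max(axis=1, keepdims=True))
    post /= post.sum(axis=1, keepdims=True)                           # (T, K) responsibilities

    n_k = post.sum(axis=0)                                            # soft counts per component
    E_k = (post.T @ frames) / np.maximum(n_k[:, None], 1e-10)         # per-component data means
    alpha = (n_k / (n_k + relevance))[:, None]                        # data-vs-prior interpolation
    return alpha * E_k + (1.0 - alpha) * ubm_means
```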

CoLA: Collaborative Low-Rank Adaptation

open access: yes, Findings of the Association for Computational Linguistics: ACL 2025
The scaling law of Large Language Models (LLMs) reveals a power-law relationship, showing diminishing return on performance as model scale increases. While training LLMs from scratch is resource-intensive, fine-tuning a pre-trained model for specific tasks has become a practical alternative.
Zhou, Yiyun, Yao, Chang, Chen, Jingyuan
openaire   +2 more sources
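The power law the abstract alludes to has the Kaplan-style form L(N) = (Nc/N)^a; a tiny numeric illustration of the diminishing returns (the constants below are the oft-quoted approximate values, used here purely for illustration):

```python
# Kaplan-style parameter scaling law: loss falls as a power of model size N.
Nc, a = 8.8e13, 0.076              # approximate published constants; illustrative here
loss = lambda N: (Nc / N) ** a

for N in (1e9, 1e10, 1e11):        # 10x more parameters at each step...
    print(f"N = {N:.0e}  ->  loss ~ {loss(N):.3f}")
# ...each 10x step shaves off a smaller absolute amount of loss,
# which is the diminishing return that motivates fine-tuning over retraining.
```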
