Results 41 to 50 of about 459,246
FLoCoRA: Federated Learning Compression with Low-Rank Adaptation
Low-Rank Adaptation (LoRA) methods have gained popularity for parameter-efficient fine-tuning of models containing hundreds of billions of parameters. In this work, instead, we demonstrate the application of LoRA methods to train small vision models in Federated Learning (FL) from scratch. (A generic sketch of the communication savings behind this idea follows this entry.)
Ribeiro, Lucas Grativol +4 more
openaire +2 more sources
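A minimal sketch of why low-rank updates compress federated communication, under generic assumptions; the layer sizes, rank, and FedAvg-style averaging below are illustrative and are not taken from the FLoCoRA paper itself:

```python
import numpy as np

# Generic illustration (not FLoCoRA's actual protocol): clients upload only the
# low-rank factors B and A instead of a dense weight delta, which is where the
# communication compression in LoRA-based federated learning comes from.
d_out, d_in, r = 256, 256, 8          # illustrative small-vision-model layer sizes

full_delta_floats = d_out * d_in      # cost of sending a dense delta W
lora_floats = d_out * r + r * d_in    # cost of sending B and A instead

print(f"dense update : {full_delta_floats} floats")
print(f"LoRA update  : {lora_floats} floats "
      f"({full_delta_floats / lora_floats:.1f}x smaller)")

# A server-side aggregation step could then average the reconstructed deltas
# (FedAvg-style mean over hypothetical client factors):
clients = [(np.random.randn(d_out, r), np.random.randn(r, d_in)) for _ in range(4)]
avg_delta = np.mean([B @ A for B, A in clients], axis=0)
```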
DoRA: Weight-Decomposed Low-Rank Adaptation
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because they avoid additional inference costs. However, an accuracy gap often remains between these methods and full fine-tuning (FT). (The sketch after this entry illustrates why a merged LoRA adapter adds no inference cost.)
Liu, Shih-Yang +6 more
openaire +2 more sources
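A minimal sketch of the generic LoRA mechanism the abstract alludes to, namely that the learned update B·A can be folded into the frozen weight before deployment; the dimensions and initialization below are illustrative assumptions and this is not code from the DoRA paper:

```python
import numpy as np

# Generic LoRA illustration: the trainable update is the low-rank product B @ A,
# which can be merged into the frozen base weight, so inference sees a single
# dense matmul and pays no extra adapter cost.
d_out, d_in, r = 64, 64, 4            # illustrative sizes; r << d
W0 = np.random.randn(d_out, d_in)     # frozen pretrained weight
A = np.random.randn(r, d_in) * 0.01   # trainable low-rank factor
B = np.zeros((d_out, r))              # B starts at zero so the initial delta is 0

x = np.random.randn(d_in)

# During fine-tuning: the adapter path runs alongside the frozen weight.
y_train = W0 @ x + B @ (A @ x)

# For inference: merge once, then the adapter costs nothing extra.
W_merged = W0 + B @ A
y_infer = W_merged @ x

assert np.allclose(y_train, y_infer)
```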
Low-Rank Adaptation of Neural Fields
Processing visual data often involves small adjustments or sequences of changes, e.g., image filtering, surface smoothing, and animation. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields -- neural network ...
Truong, Anh +3 more
openaire +2 more sources
SBoRA: Low-Rank Adaptation with Regional Weight Updates
16 pages, 4 ...
Po, Lai-Man +7 more
openaire +2 more sources
Regularizing Subspace Redundancy of Low-Rank Adaptation
10 pages, 4 figures, Accepted by ...
Yue Zhu +10 more
openaire +2 more sources
zFLoRA: Zero-Latency Fused Low-Rank Adapters
Large language models (LLMs) are increasingly deployed with task-specific adapters catering to multiple downstream applications. In such a scenario, the additional compute associated with this apparently insignificant number of adapter parameters (typically less than 1% of the base model) turns out to be disproportionately significant during inference. (A rough cost accounting follows this entry.)
Gowda, Dhananjaya +3 more
openaire +2 more sources
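A rough cost accounting of the situation the abstract describes, under assumed LLM-ish sizes; this is a generic illustration of unmerged per-request adapters, not zFLoRA's fused method:

```python
# Illustrative overhead estimate (not zFLoRA's method): when task-specific
# adapters are kept unmerged so they can be swapped per request, every layer
# pays two extra thin matmuls (B @ (A @ x)) on top of the base projection.
d, r, n_layers = 4096, 16, 32          # hypothetical model width, rank, depth

base_flops_per_layer = 2 * d * d                 # y = W0 @ x
adapter_flops_per_layer = 2 * d * r + 2 * r * d  # B @ (A @ x)

extra_params = n_layers * (d * r + r * d)
base_params = n_layers * d * d

print(f"adapter params : {100 * extra_params / base_params:.2f}% of base")
print(f"extra FLOPs    : {100 * adapter_flops_per_layer / base_flops_per_layer:.2f}% per layer")

# The FLOP overhead looks tiny (< 1%), but the separate small matmuls launch
# extra kernels and break operator fusion, which is where a disproportionate
# latency cost can appear at inference time.
```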
Contextually Guided Transformers via Low-Rank Adaptation
Large Language Models (LLMs) based on Transformers excel at text processing, but their reliance on prompts for specialized behavior introduces computational overhead. We propose a modification to a Transformer architecture that eliminates the need for explicit prompts by learning to encode context into the model's weights.
Zhmoginov, Andrey +3 more
openaire +2 more sources
TensLoRA: Tensor Alternatives for Low-Rank Adaptation
Submitted to ICASSP 2026. 5 pages, 1 figure, 2 tables.
Marmoret, Axel +4 more
openaire +2 more sources
MokA: Multimodal Low-Rank Adaptation for MLLMs
In this paper, we reveal that most current efficient multimodal fine-tuning methods are hindered by a key limitation: they are directly borrowed from LLMs, often neglecting the intrinsic differences of multimodal scenarios and even compromising the full utilization of all modalities. Inspired by our empirical observation, we argue that unimodal adaptation
Wei, Yake +3 more
openaire +2 more sources

