Results 41 to 50 of about 459,246

FLoCoRA: Federated Learning Compression with Low-Rank Adaptation

open access: yes
2024 32nd European Signal Processing Conference (EUSIPCO)
Low-Rank Adaptation (LoRA) methods have gained popularity in parameter-efficient fine-tuning of models containing hundreds of billions of parameters. In this work, instead, we demonstrate the application of LoRA methods to train small vision models in Federated Learning (FL) from scratch.
Ribeiro, Lucas Grativol   +4 more
openaire   +2 more sources

DoRA: Weight-Decomposed Low-Rank Adaptation

open access: yes
Among the widely used parameter-efficient fine-tuning (PEFT) methods, LoRA and its variants have gained considerable popularity because of avoiding additional inference costs. However, there still often exists an accuracy gap between these methods and full fine-tuning (FT).
Liu, Shih-Yang   +6 more
openaire   +2 more sources
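
The abstract above notes that LoRA and its variants avoid additional inference costs. A minimal sketch of why, assuming the standard LoRA formulation (update dW = B @ A with rank r much smaller than the layer width; all names here are illustrative):

```python
import numpy as np

d, r = 8, 2                        # hypothetical layer width and LoRA rank
rng = np.random.default_rng(0)

W = rng.normal(size=(d, d))        # frozen pretrained weight
A = rng.normal(size=(r, d))        # trainable low-rank factor
B = rng.normal(size=(d, r))        # second factor (real LoRA initializes B to zero;
                                   # random here to stand in for a trained adapter)
x = rng.normal(size=d)

# During training: two extra small matmuls per forward pass.
y_adapter = W @ x + B @ (A @ x)

# For inference: fold the low-rank update into W once, then use a single
# matmul — the same cost as the unadapted model.
W_merged = W + B @ A
y_merged = W_merged @ x

assert np.allclose(y_adapter, y_merged)
```

This merge step is what distinguishes LoRA-style adapters from methods that keep extra modules in the inference path.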

Low-Rank Adaptation of Neural Fields

open access: yes
Processing visual data often involves small adjustments or sequences of changes, e.g., image filtering, surface smoothing, and animation. While established graphics techniques like normal mapping and video compression exploit redundancy to encode such small changes efficiently, the problem of encoding small changes to neural fields -- neural network ...
Truong, Anh   +3 more
openaire   +2 more sources

Variational Low-Rank Adaptation Using IVON

open access: yes
arXiv
Cong, Bai   +6 more
openaire   +3 more sources

SBoRA: Low-Rank Adaptation with Regional Weight Updates

open access: yes
16 pages, 4 ...
Po, Lai-Man   +7 more
openaire   +2 more sources

Regularizing Subspace Redundancy of Low-Rank Adaptation

open access: yes
Proceedings of the 33rd ACM International Conference on Multimedia
10 pages, 4 figures, Accepted by ...
Yue Zhu   +10 more
openaire   +2 more sources

zFLoRA: Zero-Latency Fused Low-Rank Adapters

open access: yes
Proceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Large language models (LLMs) are increasingly deployed with task-specific adapters catering to multiple downstream applications. In such a scenario, the additional compute associated with this apparently insignificant number of adapter parameters (typically less than 1% of the base model) turns out to be disproportionately significant during inference.
Gowda, Dhananjaya   +3 more
openaire   +2 more sources

Contextually Guided Transformers via Low-Rank Adaptation

open access: yes
Large Language Models (LLMs) based on Transformers excel at text processing, but their reliance on prompts for specialized behavior introduces computational overhead. We propose a modification to a Transformer architecture that eliminates the need for explicit prompts by learning to encode context into the model's weights.
Zhmoginov, Andrey   +3 more
openaire   +2 more sources

TensLoRA: Tensor Alternatives for Low-Rank Adaptation

open access: yes
Submitted at ICASSP 2026. 5 pages, 1 figure, 2 tables.
Marmoret, Axel   +4 more
openaire   +2 more sources

MokA: Multimodal Low-Rank Adaptation for MLLMs

open access: yes
In this paper, we reveal that most current efficient multimodal fine-tuning methods are hindered by a key limitation: they are directly borrowed from LLMs, often neglecting the intrinsic differences of multimodal scenarios and even affecting the full utilization of all modalities. Inspired by our empirical observation, we argue that unimodal adaptation
Wei, Yake   +3 more
openaire   +2 more sources
