Results 31 to 40 of about 459,246
BeamLoRA: Beam-Constraint Low-Rank Adaptation
Due to the demand for efficient fine-tuning of large language models, Low-Rank Adaptation (LoRA) has been widely adopted as one of the most effective parameter-efficient fine-tuning methods. Nevertheless, while LoRA improves efficiency, there remains room for improvement in accuracy. Herein, we adopt a novel perspective to assess the characteristics of
Gu, Naibin +9 more
openaire +2 more sources
GoRA: Gradient-driven Adaptive Low Rank Adaptation
NeurIPS ...
He, Haonan +6 more
openaire +2 more sources
Compacter: Efficient Low-Rank Hypercomplex Adapter Layers
Accepted at NeurIPS ...
Mahabadi, Rabeeh Karimi +2 more
openaire +2 more sources
Low-Rank Interconnected Adaptation across Layers
Accepted to ACL 2025 (findings, long paper)
Zhong, Yibo, Zhao, Jinman, Zhou, Yao
openaire +2 more sources
SARA: Singular-Value Based Adaptive Low-Rank Adaption
With the increasing number of parameters in large pre-trained models, LoRA is widely used as a parameter-efficient fine-tuning (PEFT) method because it adds no inference overhead. The LoRA method assumes that weight changes during fine-tuning can be approximated by low-rank matrices.
Gu, Jihao +4 more
openaire +2 more sources
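The low-rank assumption stated in the SARA abstract above can be sketched in a few lines. This is a minimal illustration, not any paper's implementation: the shapes, rank, and initialization scheme below are hypothetical choices, though zero-initializing one factor so the update starts at zero is the convention LoRA itself uses.

```python
import numpy as np

# Sketch of the LoRA assumption: the fine-tuning update dW to a frozen
# weight W (d x k) is approximated by B @ A, where B is (d x r) and
# A is (r x k) with r << min(d, k), so dW has rank at most r.
d, k, r = 64, 64, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable factor, small random init
B = np.zeros((d, r))                     # trainable factor, zero init: dW starts at 0

delta_W = B @ A                          # rank <= r by construction
W_adapted = W + delta_W                  # adapted weight; merging adds no inference cost

assert np.linalg.matrix_rank(delta_W) <= r
```

Because `delta_W` can be merged into `W` after training, the adapted model runs with the same matrix multiply as the original, which is the "no inference overhead" property the abstract refers to.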
ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers
As large language models (LLMs) grow in size, traditional full fine-tuning becomes increasingly impractical due to its high computational and storage costs. Although popular parameter-efficient fine-tuning methods, such as LoRA, have significantly reduced the number of tunable parameters, there is still room for further optimization.
Hu, Junyan +6 more
openaire +2 more sources
A Survey on Metric Learning for Feature Vectors and Structured Data
The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult.
Bellet, Aurélien +2 more
core
RaSA: Rank-Sharing Low-Rank Adaptation
Low-rank adaptation (LoRA) has been prominently employed for parameter-efficient fine-tuning of large language models (LLMs). However, the limited expressive capacity of LoRA, stemming from the low-rank constraint, has been recognized as a bottleneck, particularly in rigorous tasks like code generation and mathematical reasoning.
He, Zhiwei +9 more
openaire +2 more sources
Aggregating Low Rank Adapters in Federated Fine-Tuning
Presented at FLTA 2024: https://flta-conference.org/flta-2024-detailed-program/
Trautmann, Evelyn +2 more
openaire +2 more sources
Ensembles of Low-Rank Expert Adapters
The training and fine-tuning of large language models (LLMs) often involve diverse textual data from multiple sources, which poses challenges due to conflicting gradient directions, hindering optimization and specialization. These challenges can undermine model generalization across tasks, resulting in reduced downstream performance.
Li, Yinghao +3 more
openaire +2 more sources