Results 31 to 40 of about 459,246

BeamLoRA: Beam-Constraint Low-Rank Adaptation

open access: yes, Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Due to the demand for efficient fine-tuning of large language models, Low-Rank Adaptation (LoRA) has been widely adopted as one of the most effective parameter-efficient fine-tuning methods. Nevertheless, while LoRA improves efficiency, there remains room for improvement in accuracy. Herein, we adopt a novel perspective to assess the characteristics of …
Gu, Naibin   +9 more
openaire   +2 more sources

GoRA: Gradient-driven Adaptive Low Rank Adaptation

open access: yes
NeurIPS ...
He, Haonan   +6 more
openaire   +2 more sources

Compacter: Efficient Low-Rank Hypercomplex Adapter Layers

open access: yes, 2021
accepted at NeurIPS ...
Mahabadi, Rabeeh Karimi   +2 more
openaire   +2 more sources

Low-Rank Interconnected Adaptation across Layers

open access: yes, Findings of the Association for Computational Linguistics: ACL 2025
Accepted to ACL 2025 (findings, long paper)
Zhong, Yibo; Zhao, Jinman; Zhou, Yao
openaire   +2 more sources

SARA: Singular-Value Based Adaptive Low-Rank Adaption

open access: yes
With the increasing number of parameters in large pre-trained models, LoRA is widely used as a parameter-efficient fine-tuning (PEFT) method because it adds no inference overhead. The LoRA method assumes that weight changes during fine-tuning can be approximated by low-rank matrices (see the sketch after this entry).
Gu, Jihao   +4 more
openaire   +2 more sources
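
For orientation, here is a minimal PyTorch-style sketch of the low-rank assumption stated in the SARA snippet above. It is illustrative only: the LoRALinear module and its parameter names are hypothetical, not taken from any paper in this listing.

    # Minimal LoRA sketch (hypothetical illustration, not code from any paper
    # above): the frozen weight W0 is augmented with a trainable low-rank
    # product B @ A of rank r, so only r * (d_in + d_out) parameters are tuned.
    import torch
    import torch.nn as nn

    class LoRALinear(nn.Module):
        def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: float = 16.0):
            super().__init__()
            self.base = nn.Linear(d_in, d_out, bias=False)
            self.base.weight.requires_grad = False               # W0 stays frozen
            self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)   # down-projection
            self.B = nn.Parameter(torch.zeros(d_out, r))         # up-projection, zero init
            self.scale = alpha / r                               # standard LoRA scaling

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # y = x @ W0^T + scale * x @ A^T @ B^T, i.e. W = W0 + scale * (B @ A)
            return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

    layer = LoRALinear(d_in=768, d_out=768, r=8)
    print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 12288

Judging by their titles and snippets, methods in this listing such as GoRA, SARA, and BeamLoRA vary mainly in how the rank r is chosen or redistributed per layer rather than fixed in advance.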

ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers

open access: yes
As large language models (LLMs) grow in size, traditional full fine-tuning becomes increasingly impractical due to its high computational and storage costs. Although popular parameter-efficient fine-tuning methods, such as LoRA, have significantly reduced the number of tunable parameters, there is still room for further optimization.
Hu, Junyan   +6 more
openaire   +2 more sources

A Survey on Metric Learning for Feature Vectors and Structured Data

open access: yes, 2013
The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult.
Bellet, Aurélien   +2 more
core  

RaSA: Rank-Sharing Low-Rank Adaptation

open access: yes
Low-rank adaptation (LoRA) has been prominently employed for parameter-efficient fine-tuning of large language models (LLMs). However, the limited expressive capacity of LoRA, stemming from the low-rank constraint, has been recognized as a bottleneck, particularly in rigorous tasks like code generation and mathematical reasoning.
He, Zhiwei   +9 more
openaire   +2 more sources
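
The low-rank bottleneck the RaSA snippet above refers to is the linear-algebra fact that a product B @ A can have rank at most r. A quick illustrative check, with assumed shapes rather than anything from the paper:

    # Any LoRA-style update B @ A with B of shape (d_out, r) and A of shape
    # (r, d_in) has rank at most r, which is the expressivity cap noted above.
    import torch

    d_out, d_in, r = 512, 512, 8
    delta_w = torch.randn(d_out, r) @ torch.randn(r, d_in)
    print(torch.linalg.matrix_rank(delta_w))  # tensor(8): capped at r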

Aggregating Low Rank Adapters in Federated Fine-Tuning

open access: yes, 2024 2nd International Conference on Federated Learning Technologies and Applications (FLTA)
presented at the conference; program: https://flta-conference.org/flta-2024-detailed-program/
Trautmann, Evelyn   +2 more
openaire   +2 more sources

Ensembles of Low-Rank Expert Adapters

open access: yes
The training and fine-tuning of large language models (LLMs) often involve diverse textual data from multiple sources, which poses challenges due to conflicting gradient directions, hindering optimization and specialization. These challenges can undermine model generalization across tasks, resulting in reduced downstream performance.
Li, Yinghao   +3 more
openaire   +2 more sources
