Results 11 to 20 of about 33,388
Lottery Rank-Pruning Adaptation Parameter Efficient Fine-Tuning
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using fewer parameters than required by full fine-tuning.
Juhyeong Kim, Gyunyeop Kim, Sangwoo Kang
doaj +2 more sources
Democratizing protein language models with parameter-efficient fine-tuning. [PDF]
Proteomics has been revolutionized by large pre-trained protein language models, which learn unsupervised representations from large corpora of sequences. The parameters of these models are then fine-tuned in a supervised setting to tailor the model to a specific downstream task.
Sledzieski S +5 more
europepmc +4 more sources
Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning
ICCV 2023 ...
He, Haoyu +4 more
openaire +2 more sources
Parameter-Efficient Fine-Tuning Design Spaces
Code is available at https://github.com/amazon-science/peft-design ...
Chen, Jiaao +5 more
openaire +2 more sources
Parameter-Efficient Fine-Tuning without Introducing New Latency
ACL 2023 camera-ready ...
Liao, B., Meng, Y., Monz, C.
openaire +3 more sources
On the Effectiveness of Parameter-Efficient Fine-Tuning
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always yields an entirely new model for each task. Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters ...
Fu, Zihao +5 more
openaire +2 more sources
Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong task performance while updating much fewer parameters than full model fine-tuning (FFT).
Han Zhou +3 more
doaj +3 more sources
CE-Prompt: enhance prompt expression stability by multiple understanding [PDF]
In this article, we propose CE-Prompt, an enhanced version of Prompt-Tuning designed to address issues such as the instability of random initialization and inefficiencies caused by long text in pre-trained large language models (LLMs).
Wujian Yang +3 more
doaj +2 more sources
Exploring The Principles and Prospects for Efficient Fine-Tuning of Transformer-Based Pre-Trained Large Language Models [PDF]
In recent years, large language models (LLMs) have made breakthroughs in natural language processing and multimodal tasks. However, the growing model size and the high cost of full parameter fine-tuning pose challenges to their efficient adaptation. This ...
He Ruiqi
doaj +1 more source
TRACE: Time Series Parameter Efficient Fine-Tuning
We propose an efficient fine-tuning method for time series foundation models, termed TRACE: Time Series Parameter Efficient Fine-tuning. While pretrained time series foundation models are gaining popularity, they face the following challenges: (1) Unlike natural language tasks, time series data vary in frequency, channel numbers, historical/prediction ...
Yuze Li, Wei Zhu
openaire +2 more sources

