Results 11 to 20 of about 33,388

Lottery Rank-Pruning Adaptation Parameter Efficient Fine-Tuning

open access: yes, Mathematics
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using fewer parameters than required by full fine-tuning.
Juhyeong Kim, Gyunyeop Kim, Sangwoo Kang
doaj   +2 more sources
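
The snippet above describes the general PEFT recipe of training only a small number of extra parameters on top of a frozen pretrained model. Below is a minimal, hedged sketch of that idea using a LoRA-style low-rank adapter around a single linear layer; it is not the paper's lottery rank-pruning method, and the layer size, rank, and scaling are illustrative assumptions.

```python
# Generic PEFT sketch: wrap a frozen linear layer with a trainable low-rank
# update (LoRA-style), so only a small fraction of parameters is trained.
# NOT the paper's rank-pruning method; names and hyperparameters are assumed.
import torch
import torch.nn as nn


class LowRankAdapter(nn.Module):
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():          # freeze the pretrained weight
            p.requires_grad = False
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # frozen path plus low-rank trainable correction
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scaling


if __name__ == "__main__":
    layer = LowRankAdapter(nn.Linear(768, 768), rank=8)
    trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
    total = sum(p.numel() for p in layer.parameters())
    print(f"trainable: {trainable} / {total}")    # only the two rank-8 factors train
```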

Democratizing protein language models with parameter-efficient fine-tuning. [PDF]

open access: yes, Proc Natl Acad Sci U S A, 2023
Proteomics has been revolutionized by large pre-trained protein language models, which learn unsupervised representations from large corpora of sequences. The parameters of these models are then fine-tuned in a supervised setting to tailor the model to a specific downstream task.
Sledzieski S   +5 more
europepmc   +4 more sources
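
As a companion to the abstract above, here is a hedged sketch of the simplest parameter-efficient supervised setup it alludes to: keep a pretrained sequence encoder frozen and train only a small task head. The encoder below is a generic stand-in module, not the actual protein language model used in the paper, and all dimensions are assumptions.

```python
# Hedged sketch: frozen pretrained encoder, trainable lightweight task head.
import torch
import torch.nn as nn

embed_dim, num_classes = 1280, 2

# stand-in for a frozen pretrained protein language model
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8, batch_first=True),
    num_layers=2,
)
for p in encoder.parameters():
    p.requires_grad = False

# the only trainable parameters: a small classification head
head = nn.Sequential(nn.Linear(embed_dim, 256), nn.ReLU(), nn.Linear(256, num_classes))
optimizer = torch.optim.AdamW(head.parameters(), lr=1e-3)

tokens = torch.randn(4, 100, embed_dim)            # dummy per-residue embeddings
with torch.no_grad():                              # frozen representation
    reps = encoder(tokens).mean(dim=1)             # mean-pool over the sequence
loss = nn.functional.cross_entropy(head(reps), torch.randint(0, num_classes, (4,)))
loss.backward()
optimizer.step()
```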

Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning

open access: yes, 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
ICCV 2023 ...
He, Haoyu   +4 more
openaire   +2 more sources

Parameter-Efficient Fine-Tuning Design Spaces

open access: yes, 2023
Code is available at https://github.com/amazon-science/peft-design ...
Chen, Jiaao   +5 more
openaire   +2 more sources

Parameter-Efficient Fine-Tuning without Introducing New Latency

open access: yes, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023
ACL2023 camera-ready ...
Liao, B., Meng, Y., Monz, C.
openaire   +3 more sources

On the Effectiveness of Parameter-Efficient Fine-Tuning

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2023
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always yields an entirely new model for each task. Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters ...
Fu, Zihao   +5 more
openaire   +2 more sources
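
The abstract above contrasts full fine-tuning with updating only a small portion of the parameters. A hedged sketch of one such sparse scheme (tuning only bias terms, in the spirit of BitFit) follows; the toy model and the bias-only selection rule are illustrative assumptions, not the paper's analysis or criterion.

```python
# Hedged illustration: freeze every weight and update only a small, named
# subset of parameters (here, bias terms). Model and subset are assumed.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512), nn.GELU(),
    nn.Linear(512, 512), nn.GELU(),
    nn.Linear(512, 2),
)

trainable = []
for name, param in model.named_parameters():
    if name.endswith("bias"):        # tune only biases; everything else stays frozen
        param.requires_grad = True
        trainable.append(param)
    else:
        param.requires_grad = False

optimizer = torch.optim.AdamW(trainable, lr=1e-3)
x, y = torch.randn(8, 512), torch.randint(0, 2, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
optimizer.step()
print(sum(p.numel() for p in trainable), "trainable parameters out of",
      sum(p.numel() for p in model.parameters()))
```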

AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning

open access: yes, Transactions of the Association for Computational Linguistics
Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong task performance while updating much fewer parameters than full model fine-tuning (FFT).
Han Zhou   +3 more
doaj   +3 more sources
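
To make the idea of automatic PEFT configuration search concrete, here is a hedged, toy sketch: a naive random search over an assumed configuration space with a placeholder scoring function. It is not AutoPEFT's actual search algorithm or search space; the keys `adapter_rank`, `insert_layers`, and `learning_rate` are made up for illustration.

```python
# Toy random search over an assumed PEFT configuration space.
import random

SEARCH_SPACE = {
    "adapter_rank": [4, 8, 16, 32],
    "insert_layers": [[0, 1], [10, 11], [0, 5, 11]],
    "learning_rate": [1e-4, 5e-4, 1e-3],
}

def sample_config() -> dict:
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate(config: dict) -> float:
    # placeholder: in practice, build the PEFT modules from `config`,
    # fine-tune on the task, and return validation accuracy
    return random.random()

best_config, best_score = None, float("-inf")
for _ in range(20):                      # search budget
    config = sample_config()
    score = evaluate(config)
    if score > best_score:
        best_config, best_score = config, score

print("best configuration found:", best_config, "score:", round(best_score, 3))
```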

CE-Prompt: enhance prompt expression stability by multiple understanding [PDF]

open access: yes, PeerJ Computer Science
In this article, we propose CE-Prompt, an enhanced version of Prompt-Tuning designed to address issues such as the instability of random initialization and inefficiencies caused by long text in pre-trained large language models (LLMs).
Wujian Yang   +3 more
doaj   +2 more sources
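
The abstract above points to the instability of randomly initialized prompts. Below is a hedged sketch of plain prompt tuning in which the trainable soft prompt is initialized from existing vocabulary embeddings, one common way to mitigate that instability; the shapes, names, and initialization choice are assumptions and do not reproduce CE-Prompt itself.

```python
# Hedged prompt-tuning sketch: trainable soft prompt vectors are prepended to
# the (frozen) input embeddings; the prompt is initialised from vocabulary
# embeddings instead of at random. Shapes and names are assumed.
import torch
import torch.nn as nn

vocab_size, embed_dim, prompt_len = 30522, 768, 20

# frozen pretrained embedding table (stand-in for the LLM's embeddings)
token_embeddings = nn.Embedding(vocab_size, embed_dim)
token_embeddings.weight.requires_grad = False

# soft prompt initialised by copying embeddings of sampled vocabulary tokens
init_ids = torch.randint(0, vocab_size, (prompt_len,))
soft_prompt = nn.Parameter(token_embeddings(init_ids).detach().clone())

def prepend_prompt(input_ids: torch.Tensor) -> torch.Tensor:
    """Return [soft prompt; token embeddings] for a batch of token ids."""
    batch = input_ids.size(0)
    embeds = token_embeddings(input_ids)                       # (B, T, D), frozen
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)    # (B, P, D), trainable
    return torch.cat([prompt, embeds], dim=1)

inputs = torch.randint(0, vocab_size, (2, 16))
print(prepend_prompt(inputs).shape)       # torch.Size([2, 36, 768])
```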

Exploring The Principles and Prospects for Efficient Fine-Tuning of Transformer-Based Pre-Trained Large Language Models [PDF]

open access: yes, ITM Web of Conferences
In recent years, large language models (LLMs) have made breakthroughs in natural language processing and multimodal tasks. However, the growing model size and the high cost of full parameter fine-tuning pose challenges to their efficient adaptation. This ...
He Ruiqi
doaj   +1 more source

TRACE: Time Series Parameter Efficient Fine-Tuning

open access: yes, Neurocomputing
We propose an efficient fine-tuning method for time series foundation models, termed TRACE: Time Series Parameter Efficient Fine-tuning. While pretrained time series foundation models are gaining popularity, they face the following challenges: (1) Unlike natural language tasks, time series data vary in frequency, channel numbers, historical/prediction ...
Yuze Li, Wei Zhu
openaire   +2 more sources
