
Lottery Rank-Pruning Adaptation Parameter Efficient Fine-Tuning

open access: yes, Mathematics
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using fewer parameters than required by full fine-tuning.
Juhyeong Kim, Gyunyeop Kim, Sangwoo Kang
doaj   +2 more sources

Democratizing protein language models with parameter-efficient fine-tuning. [PDF]

open access: yes, Proc Natl Acad Sci U S A, 2023
Proteomics has been revolutionized by large pre-trained protein language models, which learn unsupervised representations from large corpora of sequences. The parameters of these models are then fine-tuned in a supervised setting to tailor the model to a specific downstream task.
Sledzieski S   +5 more
europepmc   +4 more sources

Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning

open access: yes, 2023 IEEE/CVF International Conference on Computer Vision (ICCV), 2023
ICCV 2023 ...
He, Haoyu   +4 more
openaire   +2 more sources

Parameter-Efficient Fine-Tuning Design Spaces

open access: yes, 2023
Code is available at https://github.com/amazon-science/peft-design ...
Chen, Jiaao   +5 more
openaire   +2 more sources

Parameter-Efficient Fine-Tuning without Introducing New Latency

open access: yes, Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2023
ACL2023 camera-ready ...
Liao, B., Meng, Y., Monz, C.
openaire   +3 more sources

On the Effectiveness of Parameter-Efficient Fine-Tuning

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2023
Fine-tuning pre-trained models has been ubiquitously proven to be effective in a wide range of NLP tasks. However, fine-tuning the whole model is parameter inefficient as it always yields an entirely new model for each task. Currently, many research works propose to only fine-tune a small portion of the parameters while keeping most of the parameters ...
Fu, Zihao   +5 more
openaire   +2 more sources

AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning

open access: yes, Transactions of the Association for Computational Linguistics
Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong task performance while updating far fewer parameters than full model fine-tuning (FFT).
Han Zhou   +3 more
doaj   +3 more sources
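
The PEFT entries above share the same core idea: keep the pretrained weights frozen and train only a small added set of parameters. As a rough illustration (a generic LoRA-style low-rank adapter, not the method of any specific paper listed here), a minimal PyTorch sketch might look like the following; the layer sizes, rank, and scaling are assumptions chosen for the example.

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    # Wraps a frozen nn.Linear and adds a trainable low-rank update (x @ A @ B).
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # freeze the pretrained weights
        self.lora_a = nn.Parameter(torch.randn(base.in_features, rank) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(rank, base.out_features))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus the small trainable low-rank path.
        return self.base(x) + (x @ self.lora_a @ self.lora_b) * self.scaling

# Usage: wrap one projection layer and count what actually gets trained.
layer = LoRALinear(nn.Linear(768, 768), rank=8)
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / total: {total}")

Only the two small low-rank matrices are updated during fine-tuning, which is the sense in which the methods above use fewer parameters than full fine-tuning.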

Adaptive performance optimization for large-scale traffic control systems [PDF]

open access: yes, 2011
In this paper, we study the problem of optimizing (fine-tuning) the design parameters of large-scale traffic control systems that are composed of distinct and mutually interacting modules.
Aboudolas, K.   +6 more
core   +2 more sources

LHC and dark matter phenomenology of the NUGHM [PDF]

open access: yes, 2014
We present a Bayesian analysis of the NUGHM, a supersymmetric scenario with non-universal gaugino masses and Higgs masses, including all the relevant experimental observables and dark matter constraints. The main merit of the NUGHM is that it essentially ...
Bertone, Gianfranco   +3 more
core   +3 more sources

Low fine tuning in the MSSM with higgsino dark matter and unification constraints [PDF]

open access: yes, 2014
We examine the issue of fine tuning in the MSSM with GUT-scale boundary conditions. We identify specific unification patterns and mass relations that can lead to a significant lowering of the fine tuning due to gauginos, scalars, and the μ parameter ...
Kowalska, Kamila   +3 more
core   +1 more source
