Results 1 to 10 of about 227,550

Democratizing protein language models with parameter-efficient fine-tuning. [PDF]

open access: yes (Proc Natl Acad Sci U S A, 2023)
Proteomics has been revolutionized by large pre-trained protein language models, which learn unsupervised representations from large corpora of sequences. The parameters of these models are then fine-tuned in a supervised setting to tailor the model to a specific downstream task.
Sledzieski S   +5 more
europepmc   +4 more sources
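
To make the snippet above concrete: a minimal sketch of parameter-efficient fine-tuning of a pre-trained protein language model with LoRA adapters via the Hugging Face peft library. The checkpoint name, target modules, and hyperparameters are illustrative assumptions, not the paper's setup.

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# Assumed protein LM checkpoint and task; not the paper's configuration.
base = AutoModelForSequenceClassification.from_pretrained(
    "facebook/esm2_t6_8M_UR50D",  # small ESM-2 model, chosen for illustration
    num_labels=2,                 # e.g. a binary downstream property
)
# Inject trainable low-rank adapters into the attention projections;
# all pre-trained weights stay frozen.
peft_config = LoraConfig(r=8, lora_alpha=16, target_modules=["query", "value"])
model = get_peft_model(base, peft_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```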

Parameter-efficient fine-tuning of large language models using semantic knowledge tuning [PDF]

open access: yes (Scientific Reports)
Large Language Models (LLMs) have gained significant popularity in recent years for specialized tasks using prompts due to their low computational cost. Standard methods like prefix tuning utilize special, modifiable tokens that lack semantic meaning and ...
Nusrat Jahan Prottasha   +6 more
doaj   +2 more sources
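
The "special, modifiable tokens" above are trainable continuous vectors prepended to the model's input. A toy PyTorch sketch of that core mechanism (not the paper's semantic-knowledge variant):

```python
import torch
import torch.nn as nn

class SoftPrefix(nn.Module):
    """Prepend trainable soft-prompt vectors to a frozen model's input embeddings."""

    def __init__(self, prefix_len: int, hidden_dim: int):
        super().__init__()
        # Only these vectors receive gradients; the backbone stays frozen.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden_dim) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, hidden_dim)
        batch = token_embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeds], dim=1)
```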

InfoMSD: an information-maximization self-distillation framework for parameter-efficient fine-tuning on artwork images [PDF]

open access: yes (Frontiers in Artificial Intelligence)
In recent years, despite the remarkable performance of large-scale vision language models across various visual classification tasks, their substantial parameter counts and high fine-tuning costs have hindered deployment in resource-constrained cultural ...
Feng Guan   +3 more
doaj   +2 more sources

Adaptive Multiple-Attribute Scenario LoRA Merge for Robust Perception in Autonomous Driving [PDF]

open access: yes (Sensors)
Perception models for autonomous driving are predominantly trained on clear, daytime data, leaving their performance under rare conditions—particularly in multiple-attribute (joint weather–lighting) conditions such as night × rainy or night × snowy—an ...
Ryosuke Kawata   +3 more
doaj   +2 more sources
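
The method above merges several scenario-specific LoRA adapters into one perception model. A hedged sketch of the basic weighted merge it builds on (the paper's adaptive weighting scheme is not reproduced here):

```python
import torch

def merge_lora_deltas(base_weight, adapters, weights):
    # base_weight: (d_out, d_in) frozen pre-trained weight.
    # adapters: list of (B, A) pairs, with B (d_out, r) and A (r, d_in).
    # weights: assumed per-scenario mixing coefficients.
    delta = sum(w * (B @ A) for w, (B, A) in zip(weights, adapters))
    return base_weight + delta

# e.g. blend a "night" and a "rainy" adapter equally:
# merged = merge_lora_deltas(W, [(B_night, A_night), (B_rain, A_rain)], [0.5, 0.5])
```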

Lottery Rank-Pruning Adaptation Parameter Efficient Fine-Tuning

open access: yes (Mathematics)
Recent studies on parameter-efficient fine-tuning (PEFT) have introduced effective and efficient methods for fine-tuning large language models (LLMs) on downstream tasks using fewer parameters than required by full fine-tuning.
Juhyeong Kim, Gyunyeop Kim, Sangwoo Kang
doaj   +2 more sources
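
As an illustration of the rank-pruning idea in the title (the snippet does not describe the paper's lottery-ticket-style selection criterion), one simple heuristic scores each rank-1 component of a trained low-rank update and keeps the strongest:

```python
import torch

def prune_lora_ranks(B, A, keep):
    # B: (d_out, r), A: (r, d_in). Score rank i by the product of its
    # column/row norms -- an assumed heuristic, not the paper's criterion.
    scores = B.norm(dim=0) * A.norm(dim=1)
    idx = scores.topk(keep).indices
    return B[:, idx], A[idx, :]
```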

CPMI-ChatGLM: parameter-efficient fine-tuning ChatGLM with Chinese patent medicine instructions [PDF]

open access: yes (Scientific Reports)
Chinese patent medicine (CPM) is a typical type of traditional Chinese medicine (TCM) preparation that uses Chinese herbs as raw materials and is an important means of treating diseases in TCM. Chinese patent medicine instructions (CPMI) serve as a guide ...
Can Liu   +7 more
doaj   +2 more sources

AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning

open access: yes (Transactions of the Association for Computational Linguistics)
Large pretrained language models are widely used in downstream NLP tasks via task-specific fine-tuning, but such procedures can be costly. Recently, Parameter-Efficient Fine-Tuning (PEFT) methods have achieved strong task performance while updating far fewer parameters than full model fine-tuning (FFT).
Han Zhou   +3 more
doaj   +3 more sources
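
AutoPEFT automates the search over PEFT configurations. The loop below is only a random-search stand-in for that idea (the paper uses multi-objective Bayesian optimization); the search space and the evaluate stub are assumptions:

```python
import random

# Assumed toy search space; not AutoPEFT's actual configuration space.
SEARCH_SPACE = {
    "lora_rank": [4, 8, 16],
    "target_modules": [("query",), ("query", "value")],
    "learning_rate": [1e-4, 3e-4],
}

def evaluate(cfg):
    # Hypothetical stand-in: fine-tune with `cfg` and return a dev-set metric.
    return random.random()

best_score, best_cfg = float("-inf"), None
for _ in range(10):
    cfg = {k: random.choice(v) for k, v in SEARCH_SPACE.items()}
    score = evaluate(cfg)
    if score > best_score:
        best_score, best_cfg = score, cfg
print(best_cfg, best_score)
```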

Multimodal Assessment of Schizophrenia Symptom Severity From Linguistic, Acoustic and Visual Cues

open access: yes (IEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023)
Correctly assessing the condition of a schizophrenia patient normally requires lengthy and frequent interviews with professionally trained doctors.
Chih-Yuan Chuang   +7 more
doaj   +1 more source

Parameter-Efficient Fine-Tuning Method for Task-Oriented Dialogue Systems

open access: yes (Mathematics, 2023)
The use of Transformer-based pre-trained language models has become prevalent in enhancing the performance of task-oriented dialogue systems. These models, which are pre-trained on large text data to grasp language syntax and semantics, fine-tune the ...
Yunho Mo, Joon Yoo, Sangwoo Kang
doaj   +1 more source

Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

open access: yes (Mathematics, 2023)
With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible.
Yahao Hu   +4 more
doaj   +1 more source
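
The structure-aware method above extends standard low-rank adaptation. For reference, a minimal sketch of the plain LoRA layer such methods build on, with the usual zero/Gaussian initializations assumed:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update, scaled by alpha/r."""

    def __init__(self, d_in: int, d_out: int, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = nn.Linear(d_in, d_out)
        self.base.weight.requires_grad_(False)  # pre-trained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, d_in) * 0.01)
        self.B = nn.Parameter(torch.zeros(d_out, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # h = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.t() @ self.B.t())
```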
