Results 1 to 10 of about 33,388

Parameter-efficient fine-tuning of large language models using semantic knowledge tuning [PDF]

open access: yesScientific Reports
Large Language Models (LLMs) have gained significant popularity in recent years for specialized tasks using prompts due to their low computational cost. Standard methods like prefix tuning utilize special, modifiable tokens that lack semantic meaning and ...
Nusrat Jahan Prottasha   +6 more
doaj   +2 more sources
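
This entry contrasts semantic knowledge tuning with standard prefix tuning, whose trainable virtual tokens carry no semantic meaning. For context, a minimal sketch of that prefix-tuning baseline, assuming a frozen Hugging Face-style causal LM; the wrapper class and its names are illustrative, not taken from the paper:

```python
# Minimal prefix-tuning sketch: only the learnable prefix is updated,
# the pre-trained model stays frozen. Names are illustrative.
import torch
import torch.nn as nn

class PrefixTuningWrapper(nn.Module):
    def __init__(self, base_model, prefix_len=20):
        super().__init__()
        self.base_model = base_model
        for p in self.base_model.parameters():
            p.requires_grad = False                      # freeze the LLM
        hidden = base_model.config.hidden_size
        # Learnable "virtual tokens" with no tied semantic meaning.
        self.prefix = nn.Parameter(torch.randn(prefix_len, hidden) * 0.02)

    def forward(self, input_ids, attention_mask):
        embeds = self.base_model.get_input_embeddings()(input_ids)
        batch = embeds.size(0)
        prefix = self.prefix.unsqueeze(0).expand(batch, -1, -1)
        inputs = torch.cat([prefix, embeds], dim=1)      # prepend virtual tokens
        prefix_mask = torch.ones(batch, prefix.size(1),
                                 dtype=attention_mask.dtype,
                                 device=attention_mask.device)
        mask = torch.cat([prefix_mask, attention_mask], dim=1)
        return self.base_model(inputs_embeds=inputs, attention_mask=mask)
```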

CPMI-ChatGLM: parameter-efficient fine-tuning ChatGLM with Chinese patent medicine instructions [PDF]

open access: yesScientific Reports
Chinese patent medicine (CPM) is a typical type of traditional Chinese medicine (TCM) preparation that uses Chinese herbs as raw materials and is an important means of treating diseases in TCM. Chinese patent medicine instructions (CPMI) serve as a guide ...
Can Liu   +7 more
doaj   +2 more sources

Parameter-efficient fine-tuning for low-resource text classification: a comparative study of LoRA, IA3, and ReFT [PDF]

open access: yesFrontiers in Big Data
The successful application of large-scale transformer models in Natural Language Processing (NLP) is often hindered by the substantial computational cost and data requirements of full fine-tuning.
Steve Nwaiwu
doaj   +2 more sources
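
The comparison in this entry covers LoRA, IA3, and ReFT. A minimal sketch of the LoRA idea on a single frozen linear layer, assuming plain PyTorch; the rank, scaling, and class name are generic illustrations rather than the study's configuration:

```python
# Minimal LoRA sketch: y = W0 x + (alpha/r) * B A x, with only A and B trainable.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8, alpha=16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                              # frozen pre-trained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r)) # starts as a no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```

IA3, by contrast, learns per-channel scaling vectors applied to keys, values, and feed-forward activations, while ReFT intervenes on hidden representations; all three leave the pre-trained weights frozen.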

Augmented prediction of vertebral collapse after osteoporotic vertebral compression fractures through parameter-efficient fine-tuning of biomedical foundation models [PDF]

open access: yesScientific Reports
Vertebral collapse (VC) following osteoporotic vertebral compression fracture (OVCF) often requires aggressive treatment, necessitating an accurate prediction for early intervention.
Sibeen Kim   +7 more
doaj   +2 more sources

Enhancing queries for code generation with reinforcement learning [PDF]

open access: yesScientific Reports
We present a reinforcement learning framework that enhances natural language queries to improve DeepSeek code generation. A parametric refiner (Qwen with LoRA) is trained via REINFORCE while the generator remains fixed, using a scalar reward that can ...
Dawei Yuan   +3 more
doaj   +2 more sources

A new low-rank adaptation method for brain structure and metastasis segmentation via decoupled principal weight direction and magnitude [PDF]

open access: yesScientific Reports
Deep learning techniques have become pivotal in medical image segmentation, but their success often relies on large, manually annotated datasets, which are expensive and labor-intensive to obtain.
Hancan Zhu   +7 more
doaj   +2 more sources
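
The entry's method decouples a principal weight direction from its magnitude, in the spirit of DoRA-style decompositions. A minimal sketch of such a decomposition on a single linear layer, assuming plain PyTorch; this illustrates the general idea, not the paper's exact formulation:

```python
# Minimal sketch: W = m * (W0 + B A) / ||W0 + B A||, with a learnable per-row
# magnitude m and a low-rank update to the direction. Bias omitted for brevity.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledLoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r=8):
        super().__init__()
        self.register_buffer("W0", base.weight.detach().clone())      # frozen pre-trained weight
        self.m = nn.Parameter(base.weight.norm(dim=1, keepdim=True))  # learnable per-row magnitude
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x):
        direction = self.W0 + self.B @ self.A                          # low-rank update of direction
        direction = direction / direction.norm(dim=1, keepdim=True)    # unit-norm rows
        W = self.m * direction                                         # recombine magnitude and direction
        return F.linear(x, W)
```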

Multimodal Assessment of Schizophrenia Symptom Severity From Linguistic, Acoustic and Visual Cues

open access: yesIEEE Transactions on Neural Systems and Rehabilitation Engineering, 2023
Correctly assessing the condition of every schizophrenia patient normally requires lengthy and frequent interviews with professionally trained doctors.
Chih-Yuan Chuang   +7 more
doaj   +1 more source

Parameter-Efficient Fine-Tuning Method for Task-Oriented Dialogue Systems

open access: yesMathematics, 2023
The use of Transformer-based pre-trained language models has become prevalent in enhancing the performance of task-oriented dialogue systems. These models, which are pre-trained on large text data to grasp language syntax and semantics, fine-tune the ...
Yunho Mo, Joon Yoo, Sangwoo Kang
doaj   +1 more source

Structure-Aware Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

open access: yesMathematics, 2023
With the growing scale of pre-trained language models (PLMs), full parameter fine-tuning becomes prohibitively expensive and practically infeasible.
Yahao Hu   +4 more
doaj   +1 more source

Deepfake Detection Method Integrating Multiple Parameter-Efficient Fine-Tuning Techniques [PDF]

open access: yesJisuanji kexue yu tansuo
In recent years, as deepfake technology has matured, face-swapping software and synthesized videos have become widespread. While these techniques offer entertainment, they also provide opportunities for misuse by malicious actors.
ZHANG Yiwen, CAI Manchun, CHEN Yonghao, ZHU Yi, YAO Lifeng
doaj   +1 more source
