Results 41 to 50 of about 33,388 (259)
Adapters and Low-Rank Adaptation (LoRA) are parameter-efficient fine-tuning techniques designed to make the training of language models more efficient. Previous results demonstrated that these methods can even improve performance on some classification ...
Olesya Razuvayevskaya +7 more
doaj +1 more source
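The entry above describes LoRA as a parameter-efficient fine-tuning method. As a minimal illustrative sketch (not the cited paper's implementation), LoRA keeps the pretrained weight W frozen and learns only a low-rank update W + (alpha/r)·B·A; the dimensions, rank, and scaling below are arbitrary example values:

```python
# Minimal LoRA sketch: frozen weight W plus a trainable rank-r update.
import numpy as np

rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 2, 4

W = rng.standard_normal((d_out, d_in))      # frozen pretrained weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                    # trainable, zero init

def forward(x):
    # Base path plus scaled low-rank update; only A and B would be trained.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model matches the frozen one.
assert np.allclose(forward(x), W @ x)

trainable = A.size + B.size
total = W.size
print(trainable, total)  # 256 trainable vs 4096 frozen parameters
```

The zero initialization of B is what makes the update a no-op at the start of training, so fine-tuning begins exactly from the pretrained model.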
ABSTRACT Objective: Peripheral neuropathies contribute to patient disability but may be diagnosed late or missed altogether due to late referral, limitations of current diagnostic methods, and a lack of specialized testing facilities. To address this clinical gap, we developed NeuropathAI, an interpretable deep learning–based multiclass classification ...
Chaima Ben Rabah +7 more
wiley +1 more source
Parameter-Efficient Fine-Tuning of Large Pretrained Models for Instance Segmentation Tasks
Research and applications in artificial intelligence have recently shifted with the rise of large pretrained models, which deliver state-of-the-art results across numerous tasks.
Nermeen Abou Baker +2 more
doaj +1 more source
SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models
Transfer learning via fine-tuning of pre-trained transformer models has achieved significant success in delivering state-of-the-art results across various NLP tasks. In the absence of centralized data, Federated Learning (FL) can benefit from the distributed and private data of FL edge clients for fine-tuning.
Sara Babakniya +6 more
openaire +2 more sources
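The SLoRA entry combines federated learning with parameter-efficient fine-tuning. A generic sketch of that combination (plain FedAvg over LoRA adapter matrices, not necessarily SLoRA's own algorithm) would have each client train only its small A and B matrices while the server aggregates them; the client count and dataset sizes below are hypothetical:

```python
# Hedged sketch: weighted federated averaging of LoRA adapter weights.
import numpy as np

rng = np.random.default_rng(1)
d, r, n_clients = 16, 2, 3

# Each client holds only its low-rank adapter (A, B); the frozen base
# model never leaves the server, keeping communication cheap.
clients = [
    {"A": rng.standard_normal((r, d)), "B": rng.standard_normal((d, r))}
    for _ in range(n_clients)
]
sizes = np.array([100, 50, 150])  # hypothetical local dataset sizes

def fedavg(params, weights):
    # Dataset-size-weighted average of each adapter matrix across clients.
    w = weights / weights.sum()
    return {k: sum(wi * c[k] for wi, c in zip(w, params)) for k in params[0]}

global_adapter = fedavg(clients, sizes)
assert global_adapter["A"].shape == (r, d)
assert global_adapter["B"].shape == (d, r)
```

Because only the adapters are exchanged, each round transmits O(r·d) parameters per client instead of the full model, which is the efficiency argument these federated PEFT papers build on.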
ABSTRACT Objective: Accurate localization of epileptogenic tubers (ETs) in patients with tuberous sclerosis complex (TSC) is essential but challenging, as these tubers lack distinct pathological or genetic markers to differentiate them from other cortical tubers.
Tinghong Liu +11 more
wiley +1 more source
Efficient Side-Tuning for Remote Sensing: A Low-Memory Fine-Tuning Framework
Fine-tuning pretrained models for remote sensing tasks often demands substantial computational resources. To reduce memory requirements and training costs, this article proposes a low-memory fine-tuning framework, called efficient side-tuning (EST), for ...
Haichen Yu +6 more
doaj +1 more source
Fine-Tune Smarter, Not Harder: Parameter-Efficient Fine-Tuning for Geospatial Foundation Models
Earth observation (EO) is crucial for monitoring environmental changes, responding to disasters, and managing natural resources. In this context, foundation models facilitate remote sensing image analysis to retrieve relevant geoinformation accurately and efficiently.
Francesc Marti-Escofet +4 more
openaire +2 more sources
GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction
17 pages, 8 figures, 22 tables, Work in ...
Jiacheng Ruan +6 more
openaire +2 more sources
A Robust Adaptive One‐Sample‐Ahead Preview Super‐Twisting Sliding Mode Controller
ABSTRACT This article introduces a discrete‐time robust adaptive one‐sample‐ahead preview super‐twisting sliding mode controller. A stability analysis of the controller via Lyapunov criteria is developed to demonstrate its robustness in handling both ...
Guilherme Vieira Hollweg +5 more
wiley +1 more source
Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning
Fine-tuning large language models is computationally expensive, and while existing parameter-efficient methods like Low-Rank Adaptation (LoRA) reduce computational costs, they are limited by suboptimal initialization strategies.
Qingchen Wang, Shengyu Shen
doaj +1 more source

