Results 41 to 50 of about 33,388

Comparison between parameter-efficient techniques and full fine-tuning: A case study on multilingual news article classification.

open access: yes, PLoS ONE
Adapters and Low-Rank Adaptation (LoRA) are parameter-efficient fine-tuning techniques designed to make the training of language models more efficient. Previous results demonstrated that these methods can even improve performance on some classification ...
Olesya Razuvayevskaya   +7 more
doaj   +1 more source
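For context on the technique this entry studies, the low-rank update that LoRA applies to a frozen weight can be sketched as follows. This is a generic illustration of the method, not code from the paper; all dimensions and names are illustrative.

```python
import numpy as np

# Generic LoRA sketch: a frozen pretrained weight W is adapted via a
# low-rank product B @ A, so only A and B (r * (d_in + d_out) parameters)
# are trained instead of the full d_in * d_out matrix.
rng = np.random.default_rng(0)

d_in, d_out, r = 64, 64, 4                 # illustrative sizes, rank r << d
W = rng.standard_normal((d_out, d_in))     # frozen pretrained weight

A = rng.standard_normal((r, d_in)) * 0.01  # trainable, small random init
B = np.zeros((d_out, r))                   # trainable, zero init: delta starts at 0

def lora_forward(x, alpha=8.0):
    """Forward pass with the scaled LoRA delta added to the frozen weight."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B initialized to zero, the adapted model reproduces the frozen one.
assert np.allclose(lora_forward(x), W @ x)
```

The zero initialization of B is what makes fine-tuning start exactly at the pretrained model; training then moves only the 512 adapter parameters rather than all 4,096 weights.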

Deep Learning–Assisted Differentiation of Four Peripheral Neuropathies Using Corneal Confocal Microscopy

open access: yes, Annals of Clinical and Translational Neurology, EarlyView.
ABSTRACT Objective Peripheral neuropathies contribute to patient disability but may be diagnosed late or missed altogether due to late referral, limitations of current diagnostic methods, and a lack of specialized testing facilities. To address this clinical gap, we developed NeuropathAI, an interpretable deep learning–based multiclass classification ...
Chaima Ben Rabah   +7 more
wiley   +1 more source

Parameter-Efficient Fine-Tuning of Large Pretrained Models for Instance Segmentation Tasks

open access: yes, Machine Learning and Knowledge Extraction
Research and applications in artificial intelligence have recently shifted with the rise of large pretrained models, which deliver state-of-the-art results across numerous tasks.
Nermeen Abou Baker   +2 more
doaj   +1 more source

SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models

open access: yes, 2023
Transfer learning via fine-tuning pre-trained transformer models has achieved significant success in delivering state-of-the-art results across various NLP tasks. In the absence of centralized data, Federated Learning (FL) can benefit from the distributed and private data of FL edge clients for fine-tuning.
Babakniya, Sara   +6 more
openaire   +2 more sources
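The general setting this entry combines, federated averaging applied to parameter-efficient adapter weights, can be sketched as below. This is a minimal illustration of the generic setup, not SLoRA's actual initialization or training algorithm; all names and sizes are assumptions.

```python
import numpy as np

# Minimal sketch of federated averaging over LoRA adapter matrices:
# each client trains a local copy of an adapter matrix; the server
# aggregates them, weighted by client data size. Only the small adapter
# is communicated, not the full model.
rng = np.random.default_rng(1)
r, d_in = 4, 32

def fedavg(client_updates, client_sizes):
    """Weighted average of per-client adapter matrices (FedAvg step)."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_updates, client_sizes))

# Three clients with locally trained A matrices and unequal data sizes.
client_As = [rng.standard_normal((r, d_in)) for _ in range(3)]
sizes = [100, 300, 600]

global_A = fedavg(client_As, sizes)
assert global_A.shape == (r, d_in)
```

Communicating only the rank-r matrices is what makes adapter-based fine-tuning attractive in FL: per-round upload cost scales with r rather than with the full weight dimensions.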

Predicting Epileptogenic Tubers in Patients With Tuberous Sclerosis Complex Using a Fusion Model Integrating Lesion Network Mapping and Machine Learning

open access: yes, Annals of Clinical and Translational Neurology, EarlyView.
ABSTRACT Objective Accurate localization of epileptogenic tubers (ETs) in patients with tuberous sclerosis complex (TSC) is essential but challenging, as these tubers lack distinct pathological or genetic markers to differentiate them from other cortical tubers.
Tinghong Liu   +11 more
wiley   +1 more source

Efficient Side-Tuning for Remote Sensing: A Low-Memory Fine-Tuning Framework

open access: yes, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing
Fine-tuning pretrained models for remote sensing tasks often demands substantial computational resources. To reduce memory requirements and training costs, this article proposes a low-memory fine-tuning framework, called efficient side-tuning (EST), for ...
Haichen Yu   +6 more
doaj   +1 more source
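The family of methods this entry builds on, side-tuning, can be sketched generically as follows: a frozen backbone is blended with a small trainable side branch, so gradients are only stored for the side branch. This illustrates the general idea under assumed names and shapes, not the EST framework from the article.

```python
import numpy as np

# Generic side-tuning sketch: only the side branch (and optionally the
# blend weight alpha) is trained, which keeps activation/gradient memory
# low compared with full fine-tuning of the backbone.
rng = np.random.default_rng(2)
d = 16

W_backbone = rng.standard_normal((d, d))   # frozen, no gradients needed
W_side = np.zeros((d, d))                  # small trainable side branch

def side_tuned_forward(x, alpha=0.5):
    """Blend frozen backbone features with trainable side features."""
    return alpha * np.tanh(W_backbone @ x) + (1 - alpha) * np.tanh(W_side @ x)

x = rng.standard_normal(d)
y = side_tuned_forward(x)
assert y.shape == (d,)
```

With alpha set to 1.0 the model falls back to the frozen backbone exactly, which gives a safe starting point before the side branch is trained.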

Fine-Tune Smarter, Not Harder: Parameter-Efficient Fine-Tuning for Geospatial Foundation Models

open access: yes
Earth observation (EO) is crucial for monitoring environmental changes, responding to disasters, and managing natural resources. In this context, foundation models facilitate remote sensing image analysis to retrieve relevant geoinformation accurately and efficiently.
Marti-Escofet, Francesc   +4 more
openaire   +2 more sources

GIST: Improving Parameter Efficient Fine Tuning via Knowledge Interaction

open access: yes, 2023
17 pages, 8 figures, 22 tables, Work in ...
Ruan, Jiacheng   +6 more
openaire   +2 more sources

A Robust Adaptive One‐Sample‐Ahead Preview Super‐Twisting Sliding Mode Controller

open access: yes, International Journal of Adaptive Control and Signal Processing, EarlyView.
[Figure: Block Diagram of the Robust Adaptive One‐Sample‐Ahead Preview Super‐Twisting Sliding Mode Controller.]
ABSTRACT This article introduces a discrete‐time robust adaptive one‐sample‐ahead preview super‐twisting sliding mode controller. A stability analysis of the controller by Lyapunov criteria is developed to demonstrate its robustness in handling both ...
Guilherme Vieira Hollweg   +5 more
wiley   +1 more source

Activation-Guided Low-Rank Parameter Adaptation for Efficient Model Fine-Tuning

open access: yes, IEEE Access
Fine-tuning large language models is computationally expensive, and while existing parameter-efficient methods like Low-Rank Adaptation (LoRA) reduce computational costs, they are limited by suboptimal initialization strategies.
Qingchen Wang, Shengyu Shen
doaj   +1 more source
