
Optimizer-Aware Fine-Tuning of Whisper Small with Low-Rank Adaption: An Empirical Study of Adam and AdamW

open access: yesInformation
Whisper is a transformer-based multilingual model that has demonstrated state-of-the-art performance across numerous languages. However, fine-tuning it efficiently remains a persistent challenge under limited computational resources.
Hadia Arshad   +5 more
doaj   +1 more source
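
A minimal sketch of the setup this abstract describes: attaching LoRA adapters to Whisper-small and picking between Adam and AdamW for the trainable factors. It assumes the Hugging Face `transformers` and `peft` packages; the rank, target modules, and learning rate below are illustrative choices, not the paper's settings.

```python
# Minimal sketch: LoRA adapters on Whisper-small, with an Adam vs AdamW switch.
# Assumes `transformers` and `peft`; hyperparameters are illustrative only.
import torch
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Low-rank adapters on the attention projections; the rest of the model stays frozen.
lora_cfg = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)
model = get_peft_model(model, lora_cfg)
model.print_trainable_parameters()  # only the LoRA factors are trainable

# The optimizers under comparison: AdamW uses decoupled weight decay,
# plain Adam folds the L2 penalty into the gradient.
use_adamw = True
params = [p for p in model.parameters() if p.requires_grad]
optimizer = (
    torch.optim.AdamW(params, lr=1e-4, weight_decay=0.01)
    if use_adamw
    else torch.optim.Adam(params, lr=1e-4, weight_decay=0.01)
)
```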

Multi-modal parameter-efficient fine-tuning via graph neural network

open access: yesApplied Intelligence
With the advent of foundation models, pre-training followed by fine-tuning has become a common paradigm. Recently, parameter-efficient fine-tuning has garnered widespread attention due to its better balance between the number of learnable parameters and performance.
Bin Cheng, Jiaxuan Lu
openaire   +2 more sources

All‐in‐One Analog AI Hardware: On‐Chip Training and Inference with Conductive‐Metal‐Oxide/HfOx ReRAM Devices

open access: yesAdvanced Functional Materials, EarlyView.
An all‐in‐one analog AI accelerator is presented, enabling on‐chip training, weight retention, and long‐term inference acceleration. It leverages a BEOL‐integrated CMO/HfOx ReRAM array with low‐voltage operation (<1.5 V), multi‐bit capability over 32 states, low programming noise (10 nS), and near‐ideal weight transfer.
Donato Francesco Falcone   +11 more
wiley   +1 more source

Fine-tuning protein language models boosts predictions across diverse tasks

open access: yesNature Communications
Prediction methods inputting embeddings from protein language models have reached or even surpassed state-of-the-art performance on many protein prediction tasks.
Robert Schmirler   +2 more
doaj   +1 more source
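
A minimal sketch of the embedding-based baseline this abstract refers to: extract per-protein embeddings from a protein language model and feed them to a small downstream predictor. It assumes the `transformers` package and the public ESM-2 checkpoint `facebook/esm2_t6_8M_UR50D`; the mean pooling and linear head are illustrative choices, not the paper's setup.

```python
# Minimal sketch: per-protein embeddings from a protein language model, then a small
# downstream head. Assumes `transformers` and `torch`; checkpoint and pooling are
# illustrative choices, not the paper's configuration.
import torch
from transformers import AutoTokenizer, AutoModel

name = "facebook/esm2_t6_8M_UR50D"  # small public protein LM checkpoint
tokenizer = AutoTokenizer.from_pretrained(name)
plm = AutoModel.from_pretrained(name)
plm.eval()

sequences = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ", "MLSDEDFKAVFGMTRSAFANLPLWKQQ"]  # illustrative

with torch.no_grad():
    batch = tokenizer(sequences, padding=True, return_tensors="pt")
    hidden = plm(**batch).last_hidden_state            # (batch, length, dim)
    mask = batch["attention_mask"].unsqueeze(-1)       # ignore padding tokens
    embeddings = (hidden * mask).sum(1) / mask.sum(1)  # mean-pooled per-protein vectors

# Frozen-embedding predictor: a single linear head on top of the pooled vectors.
head = torch.nn.Linear(embeddings.shape[-1], 2)        # e.g. a 2-class property
logits = head(embeddings)
```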

ANALYSIS OF WHISPER AUTOMATIC SPEECH RECOGNITION PERFORMANCE ON LOW RESOURCE LANGUAGE

open access: yesPilar Nusa Mandiri
Implementing automatic speech recognition (ASR) technology in daily life can offer convenience to its users. However, the speech that current ASR models recognize accurately is mostly in high-resource languages such as English.
Riefkyanov Surya Adia Pratama   +1 more
doaj   +1 more source
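
For context on how such an analysis is typically scored, here is a minimal, dependency-free sketch of word error rate (WER), the standard metric for ASR accuracy; the example strings are made up and the paper's own evaluation details are not reproduced here.

```python
# Minimal sketch: word error rate (WER), the standard ASR accuracy metric.
# WER = (substitutions + deletions + insertions) / number of reference words,
# computed with a plain Levenshtein distance over word sequences.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution / match
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution and one deletion against a 6-word reference -> WER = 2/6
print(wer("the cat sat on the mat", "the cat sit on mat"))
```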

Parameter Efficient Fine-tuning via Explained Variance Adaptation

open access: yes
Foundation models (FMs) are pre-trained on large-scale datasets and then fine-tuned for a specific downstream task. The most common fine-tuning method is to update pretrained weights via low-rank adaptation (LoRA). Existing initialization strategies for LoRA often rely on singular value decompositions (SVD) of gradients or weight matrices.
Paischer, Fabian   +5 more
openaire   +2 more sources
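
A minimal sketch of the SVD-style initialization the abstract mentions: factor a pretrained weight matrix and seed the LoRA factors with its leading singular directions. This illustrates the general idea behind such initializers, not the paper's specific explained-variance criterion.

```python
# Minimal sketch: seed LoRA factors B and A from the top-r singular directions of a
# pretrained weight matrix W. Illustrates the general SVD-based initialization idea,
# not the paper's explained-variance method.
import torch

def svd_lora_init(W: torch.Tensor, r: int):
    """Return LoRA factors (B, A) whose product B @ A is the best rank-r
    approximation of W in the Frobenius norm."""
    U, S, Vh = torch.linalg.svd(W, full_matrices=False)
    sqrt_s = S[:r].sqrt()
    B = U[:, :r] * sqrt_s             # (out_dim, r)
    A = sqrt_s.unsqueeze(1) * Vh[:r]  # (r, in_dim)
    return B, A

W = torch.randn(768, 768)             # a pretrained weight matrix (illustrative size)
B, A = svd_lora_init(W, r=8)
print(B.shape, A.shape)               # torch.Size([768, 8]) torch.Size([8, 768])
print(torch.linalg.matrix_rank(B @ A))  # tensor(8): the adapter delta is rank 8
```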

NeuroAda: Activating Each Neuron’s Potential for Parameter-Efficient Fine-Tuning

open access: yesProceedings of the 2025 Conference on Empirical Methods in Natural Language Processing
Existing parameter-efficient fine-tuning (PEFT) methods primarily fall into two categories: addition-based and selective in-situ adaptation. The former, such as LoRA, introduce additional modules to adapt the model to downstream tasks, offering strong memory efficiency.
Zhang, Zhi   +3 more
openaire   +2 more sources
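
A minimal sketch contrasting the two PEFT families the abstract names: addition-based methods (such as LoRA) introduce new trainable modules, while selective in-situ adaptation unfreezes a chosen subset of existing parameters. The bias-only criterion below is purely illustrative and is not NeuroAda's actual selection rule.

```python
# Minimal sketch of selective in-situ adaptation: freeze the whole model, then mark a
# chosen subset of *existing* parameters as trainable (biases here, as an illustrative
# criterion only). Contrast with addition-based methods like LoRA, which add modules.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)

# 1) Freeze everything.
for p in model.parameters():
    p.requires_grad = False

# 2) Selectively re-enable an in-situ subset (bias terms, as an example).
for name, p in model.named_parameters():
    if name.endswith("bias"):
        p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.2f}%)")
```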

Modulating Two‐Photon Absorption in a Pyrene‐Based MOF Series: An In‐Depth Investigation of Structure–Property Relationships

open access: yesAdvanced Functional Materials, EarlyView.
This study investigates H4TBAPy‐based metal–organic frameworks (MOFs), namely NU‐1000, NU‐901, SrTBAPy, and BaTBAPy, for multiphoton absorption (MPA) performance. It observes topology‐dependent variations in the 2PA cross‐section, with BaTBAPy exhibiting the highest activity.
Simon N. Deger   +10 more
wiley   +1 more source

Exploring a New Architecture for Efficient Parameter Fine-Tuning in SLoRA Multitasking Scenarios

open access: yesApplied Sciences
We propose an enhanced LoRA (Low-Rank Adaptation) mixture-of-experts (MoE) architecture, SLoRA, aimed at addressing the key problem of parameter-efficient fine-tuning in multitasking scenarios.
Ce Shi, Jin-Woo Jung
doaj   +1 more source
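
A minimal sketch of the LoRA-plus-MoE idea this abstract describes: several low-rank adapters ("experts") share one frozen base layer, and a small gating network mixes their outputs per input. The module below is illustrative and does not reproduce SLoRA's actual architecture.

```python
# Minimal sketch of a LoRA mixture-of-experts layer: one frozen base linear layer,
# several low-rank adapters, and a gating network that mixes their outputs per input.
# Illustrative only; not SLoRA's actual design.
import torch
import torch.nn as nn

class LoRAMoELinear(nn.Module):
    def __init__(self, in_dim, out_dim, num_experts=4, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)       # frozen pretrained weight
        self.base.bias.requires_grad_(False)
        self.scale = alpha / r
        self.A = nn.Parameter(torch.randn(num_experts, in_dim, r) * 0.01)
        self.B = nn.Parameter(torch.zeros(num_experts, r, out_dim))  # delta starts at 0
        self.gate = nn.Linear(in_dim, num_experts)   # routing network

    def forward(self, x):                            # x: (batch, in_dim)
        weights = torch.softmax(self.gate(x), dim=-1)               # (batch, E)
        delta = torch.einsum("bi,eir,ero->beo", x, self.A, self.B)  # per-expert LoRA outputs
        mixed = (weights.unsqueeze(-1) * delta).sum(dim=1)          # gated combination
        return self.base(x) + self.scale * mixed

layer = LoRAMoELinear(512, 512)
print(layer(torch.randn(3, 512)).shape)  # torch.Size([3, 512])
```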

Hydra: Multi-head low-rank adaptation for parameter efficient fine-tuning

open access: yesNeural Networks
The recent surge in large-scale foundation models has spurred the development of efficient methods for adapting these models to various downstream tasks. Low-rank adaptation methods, such as LoRA, have gained significant attention due to their outstanding parameter efficiency and no additional inference latency.
Sanghyeon Kim   +4 more
openaire   +3 more sources
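
A minimal sketch of why LoRA adds no inference latency, as the abstract notes: after training, the low-rank update B @ A can be merged into the base weight, so the deployed layer is a single dense matmul again. This shows the standard LoRA merge, not Hydra's multi-head variant.

```python
# Minimal sketch: merging a trained LoRA update into the base weight, so inference
# costs exactly one dense matmul. Standard LoRA; Hydra's multi-head variant not shown.
import torch

out_dim, in_dim, r = 768, 768, 8
W = torch.randn(out_dim, in_dim)      # frozen pretrained weight
B = torch.randn(out_dim, r) * 0.01    # trained LoRA factors (illustrative values)
A = torch.randn(r, in_dim) * 0.01
scale = 16 / r                        # alpha / r scaling

x = torch.randn(4, in_dim)

# With separate adapters: base path plus low-rank path.
y_adapter = x @ W.T + scale * (x @ A.T @ B.T)

# For deployment: merge once, then serve a single linear layer.
W_merged = W + scale * (B @ A)
y_merged = x @ W_merged.T

print(torch.allclose(y_adapter, y_merged, atol=1e-5))  # True
```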
