Results 31 to 40 of about 226,203

High-accuracy ECG image interpretation using parameter-efficient Low-Rank Adaptation (LoRA) fine-tuning with multimodal LLaMA V.3.2

open access: yes, BMJ Digital Health & AI
Objective: To develop and evaluate a high-accuracy ECG image interpretation model using parameter-efficient Low-Rank Adaptation (LoRA) fine-tuning with the multimodal LLaMA V.3.2 model. Methods and analysis: We fine-tuned the multimodal LLaMA V.3.2 model ...
Nandakishor Mukkunnoth   +3 more
doaj   +1 more source
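
A minimal sketch of the kind of LoRA setup this entry describes, assuming the Hugging Face PEFT library; the checkpoint and hyperparameters below are illustrative stand-ins (a small open model replaces the paper's gated multimodal LLaMA V.3.2), not the authors' configuration:

    # Hedged sketch: LoRA fine-tuning with Hugging Face PEFT.
    # "facebook/opt-125m" is a small stand-in checkpoint, not the paper's model.
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    config = LoraConfig(
        r=16,                                  # rank of the low-rank update
        lora_alpha=32,                         # scaling factor for the update
        target_modules=["q_proj", "v_proj"],   # attention projections commonly adapted
        lora_dropout=0.05,
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()         # only the adapter matrices are trainable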

Frozen Weights as Prior for Parameter-Efficient Fine-Tuning

open access: yes, IEEE Access
In the fields of natural language processing and computer vision, the emergence of large pre-trained models has made fine-tuning them for downstream tasks an important paradigm. However, the full fine-tuning approach often comes with ...
Xiaolong Ma   +7 more
doaj   +1 more source
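
One common way to treat frozen pretrained weights as a prior is a quadratic penalty pulling the fine-tuned weights back toward them (L2-SP style); the sketch below is a generic illustration of that idea, not necessarily this paper's formulation:

    # Hedged sketch: frozen pretrained weights as a quadratic prior (L2-SP style).
    # Generic illustration; not necessarily the formulation in the paper above.
    import torch

    def prior_penalty(model, pretrained_state, strength=1e-3):
        """Penalty centered on the frozen pretrained weights; add it to the task loss."""
        loss = 0.0
        for name, p in model.named_parameters():
            if p.requires_grad and name in pretrained_state:
                loss = loss + (p - pretrained_state[name].detach()).pow(2).sum()
        return strength * loss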

The health of SUSY after the Higgs discovery and the XENON100 data [PDF]

open access: yes, 2013
We analyze the implications of the Higgs discovery and the latest XENON100 data for the status and prospects of supersymmetry. We focus mainly, but not only, on the CMSSM and NUHM models.
Cabrera, Maria Eugenia   +2 more
core   +3 more sources

Enhancing LoRA Model Serving Capacity via Adaptive Operator Scheduling for Multi-Tenancy on GPU

open access: yes, IEEE Access
Low-Rank Adaptation (LoRA) has garnered increasing attention for effectively fine-tuning large language models (LLMs) with limited resources. Nonetheless, conventional approaches that serve multiple LoRA models independently lead to redundant ...
Lingnan Xia, Hua Ma
doaj   +1 more source
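
To make the redundancy concrete: a shared frozen base layer can be computed once for a mixed batch of requests, with only the small per-tenant low-rank updates applied per request. A toy sketch of that idea (shapes and names are illustrative assumptions, not the paper's operator-scheduling method):

    # Hedged sketch: serving several LoRA adapters over one frozen base layer.
    # Illustrative shapes only; this is not the paper's scheduling algorithm.
    import torch

    hidden, rank, n_adapters, batch = 64, 8, 3, 5
    W = torch.randn(hidden, hidden)              # frozen base weight, shared by all tenants
    A = torch.randn(n_adapters, rank, hidden)    # per-adapter down-projections
    B = torch.randn(n_adapters, hidden, rank)    # per-adapter up-projections

    x = torch.randn(batch, hidden)               # mixed batch of requests
    adapter_id = torch.tensor([0, 2, 1, 0, 2])   # which adapter each request uses

    base_out = x @ W.T                           # one base matmul for the whole batch
    Ax = torch.einsum("brh,bh->br", A[adapter_id], x)
    delta = torch.einsum("bhr,br->bh", B[adapter_id], Ax)
    y = base_out + delta                         # adapters differ per request; base computed once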

Fine-Pruning: Joint Fine-Tuning and Compression of a Convolutional Network with Bayesian Optimization

open access: yes, 2017
When approaching a novel visual recognition problem in a specialized image domain, a common strategy is to start with a pre-trained deep neural network and fine-tune it to the specialized domain.
Mori, Greg   +2 more
core   +1 more source

Explore the Principles of Prompt Tuning and the Progress of Research [PDF]

open access: yes, ITM Web of Conferences
Prompt Tuning is a lightweight fine-tuning method that offers efficient task adaptation and parameter efficiency for pre-trained language models (PLMs). It makes an important contribution to the advancement of NLP technology.
Zheng Tongxin
doaj   +1 more source
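
A minimal sketch of soft prompt tuning, assuming the Hugging Face PEFT library; the base model and virtual-token count below are illustrative assumptions:

    # Hedged sketch: soft prompt tuning with Hugging Face PEFT.
    # "gpt2" and num_virtual_tokens=20 are illustrative choices.
    from transformers import AutoModelForCausalLM
    from peft import PromptTuningConfig, TaskType, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("gpt2")
    config = PromptTuningConfig(
        task_type=TaskType.CAUSAL_LM,
        num_virtual_tokens=20,   # learnable soft-prompt embeddings prepended to each input
    )
    model = get_peft_model(model, config)
    model.print_trainable_parameters()  # only the virtual-token embeddings are trainable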

Primordial black holes from the QCD epoch: Linking dark matter, baryogenesis and anthropic selection

open access: yes, 2020
If primordial black holes (PBHs) formed at the quark-hadron epoch, their mass must be close to the Chandrasekhar limit, which is also the characteristic mass of stars.
Carr, Bernard   +2 more
core   +1 more source

By dawn or dusk—how circadian timing rewrites bacterial infection outcomes

open access: yes, FEBS Letters, EarlyView
The circadian clock shapes immune function, yet its influence on infection outcomes is only beginning to be understood. This review highlights how circadian timing alters host responses to the bacterial pathogens Salmonella enterica, Listeria monocytogenes, and Streptococcus pneumoniae, revealing that the effectiveness of immune defense depends not only ...
Devons Mo   +2 more
wiley   +1 more source

DARTS-ASR: Differentiable Architecture Search for Multilingual Speech Recognition and Adaptation

open access: yes, 2020
In previous work, only the parameter weights of ASR models were optimized under a fixed-topology architecture. However, the design of a successful model architecture has always relied on human experience and intuition.
Chen, Yi-Chen   +3 more
core   +1 more source
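
The core differentiable-search idea behind DARTS can be sketched as a mixed operation whose candidates are weighted by a softmax over learnable architecture parameters; the candidate set below is an illustrative assumption, not the paper's ASR search space:

    # Hedged sketch of the DARTS mixed operation; the candidate ops are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MixedOp(nn.Module):
        def __init__(self, channels):
            super().__init__()
            self.ops = nn.ModuleList([                     # candidate ops on one edge
                nn.Conv1d(channels, channels, 3, padding=1),
                nn.Conv1d(channels, channels, 5, padding=2),
                nn.Identity(),
            ])
            self.alpha = nn.Parameter(torch.zeros(len(self.ops)))  # architecture weights

        def forward(self, x):                              # x: (batch, channels, time)
            w = F.softmax(self.alpha, dim=0)
            # softmax-weighted sum over candidates; after search, keep the argmax op
            return sum(wi * op(x) for wi, op in zip(w, self.ops))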

Parameter-Efficient Fine-Tuning via Circular Convolution

open access: yes, Findings of the Association for Computational Linguistics: ACL 2025
ACL ...
Chen, Aochuan   +6 more
openaire   +2 more sources
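
The abstract snippet above is truncated, but the title's idea can be illustrated: a circulant update is parameterized by a single length-h kernel (h parameters rather than h^2 for a dense update) and applied via the FFT. A hedged guess at the general idea, not the paper's algorithm:

    # Hedged sketch: a circular-convolution (circulant) weight update via FFT.
    # Illustrative guess at the general idea; not the paper's algorithm.
    import torch

    hidden = 64
    w = torch.randn(hidden, requires_grad=True)   # one learnable kernel: h params, not h*h
    x = torch.randn(8, hidden)                    # batch of activations

    # circular convolution of each row of x with w, via the convolution theorem
    y = torch.fft.irfft(torch.fft.rfft(x, dim=-1) * torch.fft.rfft(w), n=hidden, dim=-1)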
