Results 51 to 60 of about 226,203
Parameter-Efficient Sparsity for Large Language Models Fine-Tuning
With the dramatically increased number of parameters in language models, sparsity methods have received ever-increasing research focus to compress and accelerate the models. While most research focuses on how to accurately retain appropriate weights while maintaining the performance of the compressed model, there are challenges in the computational ...
Li, Yuchao +6 more
openaire +2 more sources
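For context on this result, a minimal PyTorch sketch of magnitude-based sparse fine-tuning in the spirit the abstract describes: keep only the largest weights and re-apply the mask after each update so the layer stays compressed during training. This is an illustrative sketch, not the paper's exact algorithm; the layer, data, and sparsity level are assumptions.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, sparsity: float = 0.9) -> torch.Tensor:
    # Binary mask keeping the largest (1 - sparsity) fraction of weights.
    k = int(weight.numel() * (1 - sparsity))
    threshold = weight.abs().flatten().kthvalue(weight.numel() - k).values
    return (weight.abs() > threshold).float()

layer = nn.Linear(1024, 1024)                      # stand-in for a model layer
mask = magnitude_mask(layer.weight.data, sparsity=0.9)
layer.weight.data.mul_(mask)                       # prune once up front
optimizer = torch.optim.SGD(layer.parameters(), lr=1e-2)

x, y = torch.randn(8, 1024), torch.randn(8, 1024)  # placeholder data
for _ in range(10):
    optimizer.zero_grad()
    nn.functional.mse_loss(layer(x), y).backward()
    optimizer.step()
    layer.weight.data.mul_(mask)                   # re-apply the sparsity mask
```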
The cancer problem is increasing globally with projections up to the year 2050 showing unfavourable outcomes in terms of incidence and cancer‐related deaths. The main challenges are prevention, improved therapeutics resulting in increased cure rates and enhanced health‐related quality of life.
Ulrik Ringborg +43 more
wiley +1 more source
Large language models for PHM: a review of optimization techniques and applications
The rapid advancement of Large Language Models (LLMs) has created unprecedented opportunities for industrial automation, process optimization, and decision support systems.
Tingyi Yu +5 more
doaj +1 more source
Parameter-Efficient Fine-Tuning With Adapters
In the arena of language model fine-tuning, traditional approaches such as Domain-Adaptive Pretraining (DAPT) and Task-Adaptive Pretraining (TAPT), although effective, are computationally intensive. This research introduces a novel adaptation method that uses the UniPELT framework as a base and adds a PromptTuning Layer, which significantly reduces ...
Chen, Keyu, Pang, Yuan, Yang, Zi
openaire +2 more sources
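A minimal sketch of a soft prompt-tuning layer of the kind the snippet describes adding to UniPELT: a small matrix of learnable "virtual token" embeddings is prepended to the input embeddings while the model itself stays frozen. The class name and sizes are illustrative assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    def __init__(self, n_tokens: int = 20, d_model: int = 768):
        super().__init__()
        # Learnable "virtual token" embeddings; the only trained parameters.
        self.prompt = nn.Parameter(torch.randn(n_tokens, d_model) * 0.02)

    def forward(self, input_embeds: torch.Tensor) -> torch.Tensor:
        # input_embeds: (batch, seq, d_model); prepend the prompt to each item.
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

h = torch.randn(2, 16, 768)        # placeholder input embeddings
print(SoftPrompt()(h).shape)       # torch.Size([2, 36, 768])
```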
Targeted modulation of IGFL2‐AS1 reveals its translational potential in cervical adenocarcinoma
Cervical adenocarcinoma patients face worse outcomes than squamous cell carcinoma counterparts despite similar treatment. The identification of IGFL2‐AS1's differential expression provides a molecular basis for distinguishing these histotypes, paving the way for personalized therapies and improved survival in vulnerable populations globally.
Ricardo Cesar Cintra +6 more
wiley +1 more source
Strong Baselines for Parameter-Efficient Few-Shot Fine-Tuning
Few-shot classification (FSC) entails learning novel classes given only a few examples per class after a pre-training (or meta-training) phase on a set of base classes. Recent works have shown that simply fine-tuning a pre-trained Vision Transformer (ViT) on new test classes is a strong approach for FSC. Fine-tuning ViTs, however, is expensive in time, ...
Basu, Samyadeep +3 more
openaire +2 more sources
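A minimal sketch (using torchvision, with placeholder data; not the authors' method) of the kind of parameter-efficient few-shot baseline at issue here: freeze the pre-trained ViT backbone and fit only a new linear head on a 5-way, 5-shot support set.

```python
import torch
import torch.nn as nn
from torchvision.models import vit_b_16

model = vit_b_16(weights="IMAGENET1K_V1")   # pre-trained backbone (downloads weights)
for p in model.parameters():
    p.requires_grad = False                 # freeze the entire backbone
num_novel_classes = 5
model.heads = nn.Linear(model.hidden_dim, num_novel_classes)  # trainable head only

optimizer = torch.optim.AdamW(model.heads.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# Few-shot episode: 5-way 5-shot support set (placeholder images).
support_x = torch.randn(25, 3, 224, 224)
support_y = torch.arange(5).repeat_interleave(5)
for _ in range(20):
    optimizer.zero_grad()
    criterion(model(support_x), support_y).backward()
    optimizer.step()
```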
Tandem VHH targeting distinct EGFR epitopes were engineered into a monovalent bispecific antibody (7D12‐EGA1‐Fc) with more potent ADCC without increasing affinity to EGFR. Structural modeling of 7D12‐EGA1‐Fc showed cross‐linking of separate EGFR domains to enhance CD16a engagement on NK cells.
Yuqiang Xu +5 more
wiley +1 more source
Naturalness of Neutralino Dark Matter
We investigate the level of fine-tuning of neutralino Dark Matter below 200 GeV in the low-energy phenomenological minimal supersymmetric Standard Model taking into account the newest results from XENON100 and the Large Hadron Collider as well as all ...
Grothaus, Philipp +2 more
core +1 more source
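For reference, the Barbieri-Giudice sensitivity measure commonly used to quantify fine-tuning in such analyses (the paper may adopt a different definition):

```latex
% Fine-tuning of an observable O (e.g. m_Z^2 or the relic density)
% with respect to model parameters p_i; larger \Delta means more tuned.
\Delta = \max_i \left| \frac{\partial \ln \mathcal{O}}{\partial \ln p_i} \right|
```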
CRISPRI‐mediated gene silencing and phenotypic exploration in nontuberculous mycobacteria. In this Research Protocol, we describe approaches to control, monitor, and quantitatively assess CRISPRI‐mediated gene silencing in M. smegmatis and M. abscessus model organisms.
Vanessa Point +7 more
wiley +1 more source
Adapters and Low-Rank Adaptation (LoRA) are parameter-efficient fine-tuning techniques designed to make the training of language models more efficient. Previous results demonstrated that these methods can even improve performance on some classification ...
Olesya Razuvayevskaya +7 more
doaj +1 more source
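A minimal sketch of the LoRA idea this snippet refers to (an illustrative class, not the `peft` library's API): the frozen pre-trained weight is augmented with a trainable low-rank update B·A, so only A and B are fine-tuned.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                        # pre-trained layer stays frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen path plus scaled low-rank update (B @ A) x.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)  # 12,288 trainable parameters vs ~590k for the full layer
```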

