Results 31 to 40 of about 227,550 (285)

The health of SUSY after the Higgs discovery and the XENON100 data [PDF]

open access: yes, 2013
We analyze the implications of the Higgs discovery and the latest XENON100 data for the status and prospects of supersymmetry. We focus mainly, but not only, on the CMSSM and NUHM models.
Cabrera, Maria Eugenia   +2 more
core   +3 more sources

Frozen Weights as Prior for Parameter-Efficient Fine-Tuning

open access: yes, IEEE Access
In the fields of natural language processing and computer vision, the emergence of large pre-trained models has led to the adoption of fine-tuning them for downstream tasks as an important paradigm. However, the full fine-tuning approach often comes with ...
Xiaolong Ma   +7 more
doaj   +1 more source

Two-Stage Inflation in Supergravity [PDF]

open access: yes, 1998
We investigate the viability of a two-stage inflationary scenario in the context of supergravity, so as to resolve the problem of initial conditions for hybrid inflation.
A. A. Starobinsky   +44 more
core   +4 more sources

Enhancing LoRA Model Serving Capacity via Adaptive Operator Scheduling for Multi-Tenancy on GPU

open access: yes, IEEE Access
Low-Rank Adaptation (LoRA) has garnered increasing attention for effectively fine-tuning large language models (LLMs) with limited resources. Nonetheless, conventional approaches that cater to multiple LoRA models independently lead to redundant ...
Lingnan Xia, Hua Ma
doaj   +1 more source

Potential therapeutic targeting of BKCa channels in glioblastoma treatment

open access: yes, Molecular Oncology, EarlyView.
This review summarizes current insights into the role of BKCa and mitoBKCa channels in glioblastoma biology, their potential classification as oncochannels, and the emerging pharmacological strategies targeting these channels, emphasizing the translational challenges in developing BKCa‐directed therapies for glioblastoma treatment.
Kamila Maliszewska‐Olejniczak   +4 more
wiley   +1 more source

Parameter-Efficient Fine-Tuning via Circular Convolution

open access: yes, Findings of the Association for Computational Linguistics: ACL 2025
ACL ...
Chen, Aochuan   +6 more
openaire   +2 more sources

Targeted modulation of IGFL2‐AS1 reveals its translational potential in cervical adenocarcinoma

open access: yes, Molecular Oncology, EarlyView.
Cervical adenocarcinoma patients face worse outcomes than squamous cell carcinoma counterparts despite similar treatment. The identification of IGFL2‐AS1's differential expression provides a molecular basis for distinguishing these histotypes, paving the way for personalized therapies and improved survival in vulnerable populations globally.
Ricardo Cesar Cintra   +6 more
wiley   +1 more source

Hijacking emergency granulopoiesis: Neutrophil ontogeny and reprogramming in cancer

open access: yes, Molecular Oncology, EarlyView.
Neutrophils are highly plastic innate immune cells; their functions in cancer extend beyond the tumour microenvironment. This Review summarises current understanding of neutrophil maturation and heterogeneity and highlights tumour‐induced granulopoiesis as a systemic programme that expands immature, immunosuppressive neutrophils via tumour‐derived ...
Gabriela Marinescu, Yi Feng
wiley   +1 more source

Fine-tuning a local LLaMA-3 large language model for automated privacy-preserving physician letter generation in radiation oncology

open access: yes, Frontiers in Artificial Intelligence
Introduction: Generating physician letters is a time-consuming task in daily clinical practice. Methods: This study investigates local fine-tuning of large language models (LLMs), specifically LLaMA models, for physician letter generation in a privacy ...
Yihao Hou   +40 more
doaj   +1 more source

Explore the Principles of Prompt Tuning and the Progress of Research [PDF]

open access: yes, ITM Web of Conferences
Prompt Tuning is a lightweight fine-tuning method that demonstrates efficient task adaptation and parameter efficiency for pre-trained language models (PLMs). Prompt Tuning represents an important contribution to the advancement of NLP technology.
Zheng Tongxin
doaj   +1 more source
