QA-LoRA: Quantization-Aware Low-Rank Adaptation of Large Language Models
Yuhui Xu +8 more
openalex +1 more source
AZD9291 has shown promise in targeted cancer therapy but is limited by resistance. In this study, we employed metabolic labeling and LC–MS/MS to profile time‐resolved nascent protein perturbations, allowing dynamic tracking of drug‐responsive proteins. We demonstrated that increased NNMT expression is associated with drug resistance, highlighting NNMT ...
Zhanwu Hou +5 more
wiley +1 more source
RaSA: Rank-Sharing Low-Rank Adaptation
Low-rank adaptation (LoRA) has been prominently employed for parameter-efficient fine-tuning of large language models (LLMs). However, the limited expressive capacity of LoRA, stemming from the low-rank constraint, has been recognized as a bottleneck, particularly in rigorous tasks like code generation and mathematical reasoning.
He, Zhiwei +9 more
openaire +2 more sources
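The low-rank bottleneck this abstract mentions is easy to see in code: a LoRA update adds B·A to a frozen weight, so the update's rank is capped at r. Below is a minimal, illustrative PyTorch sketch of a standard LoRA layer; the class name and hyperparameters are my own, not RaSA's rank-sharing method.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen dense layer plus a trainable low-rank update: y = Wx + (alpha/r) * B A x."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)  # pretrained weight stays frozen
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # down-projection
        self.B = nn.Parameter(torch.zeros(out_features, r))        # up-projection, zero init
        self.scale = alpha / r

    def forward(self, x):
        # B @ A has rank at most r; that cap is the expressivity
        # bottleneck the abstract refers to.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(768, 768)
print(layer(torch.randn(4, 768)).shape)  # torch.Size([4, 768])
```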
Aggregating Low Rank Adapters in Federated Fine-Tuning
Presented at FLTA 2024: https://flta-conference.org/flta-2024-detailed-program/
Trautmann, Evelyn +2 more
openaire +2 more sources
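A natural first question for federated LoRA aggregation is what the server should average. The NumPy sketch below, with hypothetical shapes and names rather than this paper's algorithm, shows why the obvious choice is subtle: averaging the factors A and B separately does not equal averaging the clients' actual weight updates B·A.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_clients = 16, 4, 3

# Each client holds its own low-rank adapter (B_i, A_i); delta W_i = B_i @ A_i.
As = [rng.normal(size=(r, d)) for _ in range(n_clients)]
Bs = [rng.normal(size=(d, r)) for _ in range(n_clients)]

# Naive server step: average A and B separately, then multiply.
A_avg = np.mean(As, axis=0)
B_avg = np.mean(Bs, axis=0)
naive = B_avg @ A_avg

# Exact average of the clients' weight updates.
exact = np.mean([B @ A for B, A in zip(Bs, As)], axis=0)

# The two generally differ, which is why adapter aggregation needs care.
print(np.linalg.norm(naive - exact))  # nonzero in general
```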
PARP inhibitors are used to treat a small subset of prostate cancer patients. These studies reveal that PARP1 activity and expression are different between European American and African American prostate cancer tissue samples. Additionally, different PARP inhibitors cause unique and overlapping transcriptional changes, notably, p53 pathway upregulation.
Moriah L. Cunningham +21 more
wiley +1 more source
A‐to‐I editing of miRNAs, particularly miR‐200b‐3p, contributes to HGSOC progression by enhancing cancer cell proliferation, migration, and 3D growth. The edited form is linked to poorer patient survival and points to novel molecular targets.
Magdalena Niemira +14 more
wiley +1 more source
Ensembles of Low-Rank Expert Adapters
The training and fine-tuning of large language models (LLMs) often involve diverse textual data from multiple sources, which poses challenges due to conflicting gradient directions, hindering optimization and specialization. These challenges can undermine model generalization across tasks, resulting in reduced downstream performance.
Li, Yinghao +3 more
openaire +2 more sources
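One way to picture an ensemble of low-rank expert adapters is a frozen base layer plus k LoRA experts mixed by a learned gate. The PyTorch sketch below is an assumption-laden illustration of that general idea, not the method proposed in this paper; the gating and initialization choices are hypothetical.

```python
import torch
import torch.nn as nn

class LoRAEnsemble(nn.Module):
    """Frozen base layer plus k low-rank experts mixed by a learned gate."""
    def __init__(self, d, r=4, k=3):
        super().__init__()
        self.base = nn.Linear(d, d)
        self.base.weight.requires_grad_(False)
        self.A = nn.Parameter(torch.randn(k, r, d) * 0.01)  # k down-projections
        self.B = nn.Parameter(torch.zeros(k, d, r))         # k up-projections
        self.gate = nn.Linear(d, k)                         # per-example mixing weights

    def forward(self, x):                        # x: (batch, d)
        w = torch.softmax(self.gate(x), dim=-1)  # (batch, k)
        upd = torch.einsum('bd,krd->bkr', x, self.A)    # each expert's down-projection
        upd = torch.einsum('bkr,kdr->bkd', upd, self.B) # each expert's up-projection
        return self.base(x) + torch.einsum('bk,bkd->bd', w, upd)

m = LoRAEnsemble(32)
print(m(torch.randn(5, 32)).shape)  # torch.Size([5, 32])
```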
FLoCoRA: Federated Learning Compression with Low-Rank Adaptation
Low-Rank Adaptation (LoRA) methods have gained popularity in efficient parameter fine-tuning of models containing hundreds of billions of parameters. In this work, instead, we demonstrate the application of LoRA methods to train small-vision models in Federated Learning (FL) from scratch.
Ribeiro, Lucas Grativol +4 more
openaire +2 more sources
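The communication appeal of low-rank adapters in federated learning is plain parameter counting: a rank-r update to an m×n layer ships r(m+n) values per round instead of mn. A quick back-of-the-envelope check, with layer sizes that are illustrative rather than FLoCoRA's:

```python
# Hypothetical layer sizes for a small vision model; the ratio is the point.
m, n, r = 512, 512, 8
full = m * n            # dense update sent per round
low_rank = r * (m + n)  # LoRA factors A (r x n) and B (m x r)
print(full, low_rank, full / low_rank)  # 262144 8192 32.0
```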
This study indicates that Merkel cell carcinoma (MCC) does not originate from Merkel cells, and identifies gene, protein, and cellular expression of immune‐linked and neuroendocrine markers in primary and metastatic MCC tumor samples, linked to Merkel cell polyomavirus (MCPyV) status, with enrichment of B‐cell and other immune cell
Richie Jeremian +10 more
wiley +1 more source
A Survey on Metric Learning for Feature Vectors and Structured Data
The need for appropriate ways to measure the distance or similarity between data is ubiquitous in machine learning, pattern recognition and data mining, but handcrafting such good metrics for specific problems is generally difficult.
Bellet, Aurélien +2 more
core
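The canonical object in this literature is the learned Mahalanobis metric d_M(x, x') = sqrt((x - x')^T M (x - x')) with M positive semidefinite, often parameterized as M = L^T L so the metric is just Euclidean distance after the linear map L. A minimal NumPy sketch of that equivalence, generic rather than specific to this survey:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 5
L = rng.normal(size=(d, d))  # learnable factor; M = L.T @ L is PSD by construction
M = L.T @ L

def mahalanobis(x, y, M):
    """Generalized distance d_M(x, y) = sqrt((x - y)^T M (x - y))."""
    diff = x - y
    return float(np.sqrt(diff @ M @ diff))

x, y = rng.normal(size=d), rng.normal(size=d)
# Equivalent view: Euclidean distance after the linear map L.
assert np.isclose(mahalanobis(x, y, M), np.linalg.norm(L @ (x - y)))
print(mahalanobis(x, y, M))
```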