
Imaging of High‐Risk Neuroblastoma: Recommendations From SIOPEN Radiology and Nuclear Medicine Specialty Committees

open access: yes, Pediatric Blood & Cancer, EarlyView.
ABSTRACT Neuroblastoma is the most common extracranial solid tumor in early childhood. Its clinical behavior is highly variable, ranging from spontaneous regression to fatal outcome despite intensive treatment. The International Society of Pediatric Oncology Europe Neuroblastoma Group (SIOPEN) Radiology and Nuclear Medicine Specialty Committees ...
Annemieke Littooij   +11 more
wiley   +1 more source

Differentially Private Model Compression

open access: yes, 2022
Recent papers have shown that large pre-trained language models (LLMs) such as BERT and GPT-2 can be fine-tuned on private data to achieve performance comparable to non-private models for many downstream Natural Language Processing (NLP) tasks while simultaneously guaranteeing differential privacy.
Mireshghallah, Fatemehsadat   +4 more
openaire   +2 more sources
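The differentially private fine-tuning this entry refers to typically rests on DP-SGD-style gradient aggregation: clip each per-example gradient, sum, and add Gaussian noise. A minimal sketch of that aggregation step follows; it is an illustration of the general technique, not the paper's implementation, and all names and parameter values are assumptions.

```python
import math
import random

def dp_aggregate(per_example_grads, clip_norm=1.0, noise_multiplier=1.0, seed=0):
    """One DP-SGD-style aggregation step: clip each per-example gradient
    to clip_norm in L2, sum the clipped gradients, add Gaussian noise
    scaled by noise_multiplier * clip_norm, then average."""
    rng = random.Random(seed)
    dim = len(per_example_grads[0])
    total = [0.0] * dim
    for g in per_example_grads:
        norm = math.sqrt(sum(x * x for x in g))
        scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
        for i in range(dim):
            total[i] += g[i] * scale
    sigma = noise_multiplier * clip_norm
    noised = [t + rng.gauss(0.0, sigma) for t in total]
    n = len(per_example_grads)
    return [x / n for x in noised]

# Two toy gradients with L2 norms 5.0 and 1.0; with zero noise, each is
# clipped to unit norm before averaging, so both become [0.6, 0.8].
grads = [[3.0, 4.0], [0.6, 0.8]]
avg = dp_aggregate(grads, clip_norm=1.0, noise_multiplier=0.0)
```

Clipping bounds each example's influence on the update, which is what makes the added noise sufficient for a differential-privacy guarantee.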

Boosting Lightweight CNNs Through Network Pruning and Knowledge Distillation for SAR Target Recognition

open access: yes, IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021
Deep convolutional neural networks (CNNs) have achieved remarkable results in synthetic aperture radar (SAR) target recognition. However, overparameterization is a widely recognized property of deep CNNs, and most previous works excessively ...
Zhen Wang, Lan Du, Yi Li
doaj   +1 more source
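The network pruning this entry combines with knowledge distillation commonly starts from unstructured magnitude pruning: zero out the smallest-magnitude fraction of weights. A minimal sketch of that ingredient, under assumed toy values and not the paper's actual scheme:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude `sparsity` fraction of weights
    (unstructured magnitude pruning, a common CNN compression step)."""
    k = int(len(weights) * sparsity)  # number of weights to zero
    if k == 0:
        return list(weights)
    # threshold = k-th smallest absolute value
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

# The three smallest-magnitude weights (0.05, 0.01, -0.02) are zeroed.
w = [0.05, -0.8, 0.01, 0.6, -0.02, 0.3]
pruned = magnitude_prune(w, sparsity=0.5)
```

In a pruning-plus-distillation pipeline, a step like this would be followed by fine-tuning the pruned student against the original network's soft outputs to recover accuracy.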

Phosphatidylinositol 4‐kinase as a target of pathogens—friend or foe?

open access: yes, FEBS Letters, EarlyView.
This graphical summary illustrates the roles of phosphatidylinositol 4‐kinases (PI4Ks). PI4Ks regulate key cellular processes and can be hijacked by pathogens, such as viruses, bacteria and parasites, to support their intracellular replication. Their dual role as essential host enzymes and pathogen cofactors makes them promising drug targets.
Ana C. Mendes   +3 more
wiley   +1 more source

Optimized AlexNet Pruning for Edge-Based Medical Diagnostics

open access: yes, IEEE Access
Medical diagnostics demand rapid and accurate disease detection to ensure timely treatment, directly impacting human lives. Deep neural networks (DNNs) have shown unparalleled success in medical applications, surpassing traditional methods.
Yasser A. Amer   +2 more
doaj   +1 more source

An upstream open reading frame regulates expression of the mitochondrial protein Slm35 and mitophagy flux

open access: yes, FEBS Letters, EarlyView.
This study reveals how the mitochondrial protein Slm35 is regulated in Saccharomyces cerevisiae. The authors identify stress‐responsive DNA elements and two upstream open reading frames (uORFs) in the 5′ untranslated region of SLM35. One uORF restricts translation, and its mutation increases Slm35 protein levels and mitophagy.
Hernán Romo‐Casanueva   +5 more
wiley   +1 more source

Structural instability impairs function of the UDP‐xylose synthase 1 Ile181Asn variant associated with short‐stature genetic syndrome in humans

open access: yes, FEBS Letters, EarlyView.
The Ile181Asn variant of human UDP‐xylose synthase (hUXS1), associated with a short‐stature genetic syndrome, has previously been reported as inactive. Our findings demonstrate that Ile181Asn‐hUXS1 retains catalytic activity similar to the wild‐type but exhibits reduced stability, a looser oligomeric state, and an increased tendency to precipitate ...
Tuo Li   +2 more
wiley   +1 more source

Sub 4-bit Power-of-Two-Based Mixed-Precision Quantization for Efficient LLM Compression and Acceleration

open access: yes, IEEE Access
While Large Language Models (LLMs) have demonstrated remarkable performance, their deployment on resource-constrained edge devices is hindered by their immense size.
Han Cho   +3 more
doaj   +1 more source
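Power-of-two quantization, the core idea behind this entry, rounds each weight to the nearest signed power of two so that multiplications become bit shifts. The sketch below illustrates the rounding with an assumed, simplified exponent range; it is not the paper's mixed-precision scheme.

```python
import math

def pot_quantize(w, bits=3):
    """Round a weight to the nearest signed power of two, with the
    exponent clamped to a small range indexable by `bits` codes
    (assumed here to be exponents 0 down to -(2**(bits-1)) + 1)."""
    if w == 0.0:
        return 0.0
    sign = 1.0 if w > 0 else -1.0
    exp = round(math.log2(abs(w)))          # nearest power-of-two exponent
    exp = max(-(2 ** (bits - 1)) + 1, min(0, exp))  # clamp to range
    return sign * (2.0 ** exp)

# 0.9 -> 1.0, -0.3 -> -0.25, 0.12 -> 0.125, 0.02 clamps to 0.125
vals = [pot_quantize(x) for x in [0.9, -0.3, 0.12, 0.02]]
```

Because every quantized value is ±2^k, the inner products in inference can be computed with shifts and adds instead of multiplies, which is what makes sub-4-bit schemes attractive on edge hardware.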

When Compression Meets Model Compression: Memory-Efficient Double Compression for Large Language Models

open access: yes, Findings of the Association for Computational Linguistics: EMNLP 2024
Large language models (LLMs) exhibit excellent performance in various tasks. However, the memory requirements of LLMs present a great challenge when deploying on memory-limited devices, even for quantized LLMs. This paper introduces a framework to further compress LLMs after quantization, achieving a compression ratio of about 2.2x.
Wang, Weilan   +5 more
openaire   +2 more sources
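The "double compression" premise here is that quantized weight codes still contain statistical redundancy that lossless compression can exploit. A toy demonstration using zlib (chosen only for illustration; the paper's framework is its own method) on a repetitive stream of packed 4-bit codes:

```python
import zlib

# Toy quantized weights: 4-bit codes, packed two per byte.
codes = [3, 3, 3, 0, 1, 3, 3, 3] * 64
packed = bytes((codes[i] << 4) | codes[i + 1]
               for i in range(0, len(codes), 2))

# Lossless compression on top of quantization ("double compression").
compressed = zlib.compress(packed, 9)
ratio = len(packed) / len(compressed)
```

Skewed code distributions, like those produced by quantizing near-Gaussian weights, compress well, which is why stacking a lossless stage on top of quantization can yield a further ~2x reduction.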
