Results 151 to 160 of about 1,035,590

Automated Identification of Cylindrical Cells for Enhanced State of Health Assessment in Lithium-Ion Battery Reuse

open access: yes (Batteries)
Lithium-ion batteries are pervasive in contemporary life, providing power for a vast array of devices, including smartphones and electric vehicles. With the projected sale of millions of electric vehicles globally by 2022 and over a million electric ...
Alejandro H. de la Iglesia   +4 more
doaj   +1 more source

Performance Characterization of Expert Router for Scalable LLM Inference [PDF]

open access: yes (arXiv)
Large Language Models (LLMs) have experienced widespread adoption across scientific and industrial domains due to their versatility and utility for diverse tasks. Nevertheless, deploying and serving these models at scale with optimal throughput and latency remains a significant challenge, primarily because of LLMs' high computational and memory demands.
arxiv  

Enhancing transporter activity in heterologous expression systems with SAHA: a 2500‐times more potent and odorless alternative to butyrate

open access: yes (FEBS Open Bio, EarlyView)
Heterologous expression of membrane transporters in cultured cells is essential for functional characterization but is sometimes limited by low activity. Our study compares the HDAC inhibitors butyrate, VPA, and SAHA for their ability to enhance transport activity. We propose replacing butyrate with SAHA: it is equally effective, devoid of repulsive odor, costs less, and
Svenja Flögel   +4 more
wiley   +1 more source

fMoE: Fine-Grained Expert Offloading for Large Mixture-of-Experts Serving [PDF]

open access: yes (arXiv)
Large Language Models (LLMs) have achieved immense success in revolutionizing various applications, including content generation, search and recommendation, and AI-assisted operation. To reduce high training costs, the Mixture-of-Experts (MoE) architecture has become a popular backbone for modern LLMs.
arxiv  

Mixture-of-Experts for Distributed Edge Computing with Channel-Aware Gating Function [PDF]

open access: yes (arXiv)
In a distributed mixture-of-experts (MoE) system, a server collaborates with multiple specialized expert clients to perform inference. The server extracts features from input data and dynamically selects experts based on their areas of specialization to produce the final output.
arxiv  
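
For context on the entry above, here is a minimal illustrative sketch in Python of how a channel-aware gating function might combine expert specialization with link quality when selecting expert clients. The scoring rule (a log-SNR bonus), the top-k selection, and all names and shapes are assumptions made for illustration; they do not reproduce the paper's actual gating function.

    import numpy as np

    def channel_aware_gate(features, expert_weights, channel_snr, top_k=2):
        # Illustrative only: score each expert client by how well it matches the
        # server-extracted features, add a simple bonus for good channel quality,
        # and keep the top-k experts. The paper's gating function may differ.
        spec_score = expert_weights @ features           # (num_experts,) specialization fit
        score = spec_score + np.log1p(channel_snr)       # hypothetical channel-aware bonus
        chosen = np.argsort(score)[-top_k:]              # indices of the selected experts
        w = np.exp(score[chosen] - score[chosen].max())  # softmax over the chosen experts
        return chosen, w / w.sum()

    # Toy usage: 8 expert clients, 16-dimensional server-side features.
    rng = np.random.default_rng(0)
    experts, weights = channel_aware_gate(rng.normal(size=16),
                                          rng.normal(size=(8, 16)),
                                          rng.uniform(1, 20, size=8))
    print(experts, weights)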

Understanding and Overcoming Immunotherapy Resistance in Skin Cancer: Mechanisms and Strategies

open access: yes (Aging and Cancer, EarlyView)
This narrative review explores the mechanisms driving immunotherapy resistance in skin cancer, including tumor microenvironment factors, genetic mutations, and immune evasion strategies. It highlights potential strategies to overcome resistance, offering insights for improving therapeutic outcomes and guiding future research in personalized ...
Shreya Singh Beniwal   +8 more
wiley   +1 more source

Advancing MoE Efficiency: A Collaboration-Constrained Routing (C2R) Strategy for Better Expert Parallelism Design [PDF]

open access: yes (arXiv)
Mixture-of-Experts (MoE) has successfully scaled up models while maintaining nearly constant computing costs. By employing a gating network to route input tokens, it selectively activates a subset of expert networks to process the corresponding token embeddings.
arxiv  
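
As background for the C2R abstract above, the sketch below shows plain top-k gating as commonly used in MoE layers: a gating network scores every expert for each token, keeps the k highest-scoring experts, and renormalizes their weights. This is a generic Python illustration with assumed names and shapes, not C2R's collaboration-constrained routing itself.

    import numpy as np

    def topk_route(tokens, gate_weight, k=2):
        # Generic top-k MoE routing: softmax the gating scores over experts,
        # keep the k best experts per token, and renormalize their weights so
        # each token's expert mixture sums to one.
        logits = tokens @ gate_weight                               # (num_tokens, num_experts)
        probs = np.exp(logits - logits.max(axis=-1, keepdims=True))
        probs /= probs.sum(axis=-1, keepdims=True)                  # softmax over experts
        chosen = np.argsort(probs, axis=-1)[:, -k:]                 # expert ids per token
        weights = np.take_along_axis(probs, chosen, axis=-1)
        weights /= weights.sum(axis=-1, keepdims=True)              # renormalize over the k experts
        return chosen, weights

    # Toy usage: 4 tokens, model width 32, 8 experts.
    rng = np.random.default_rng(1)
    experts, mix = topk_route(rng.normal(size=(4, 32)), rng.normal(size=(32, 8)))
    print(experts)
    print(mix)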
