Results 151 to 160 of about 176,171
Quantization-aware training creates resource-efficient structured state space sequence (S4(D)) models for ultra-long sequence processing on edge AI hardware. Including quantization during training yields efficiency gains over pure post-training quantization.
Sebastian Siegel +5 more
wiley +1 more source
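The efficiency claim in this entry hinges on quantization-aware training (QAT): the forward pass sees quantized weights while gradients flow through the full-precision values, so the network learns to tolerate rounding noise. A minimal sketch of that mechanism in PyTorch, assuming symmetric 8-bit weight quantization; the paper's actual scheme for S4(D) layers is not given in the snippet:

```python
import torch

def fake_quantize(w: torch.Tensor, num_bits: int = 8) -> torch.Tensor:
    """Simulate integer quantization in the forward pass.

    Symmetric uniform quantization; the straight-through estimator
    (w + (q - w).detach()) lets gradients bypass the rounding step.
    """
    qmax = 2 ** (num_bits - 1) - 1
    scale = w.abs().max().clamp(min=1e-8) / qmax
    q = (w / scale).round().clamp(-qmax - 1, qmax) * scale
    return w + (q - w).detach()  # forward: quantized, backward: identity

class QATLinear(torch.nn.Linear):
    """Linear layer whose weights see quantization noise during training."""
    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.nn.functional.linear(x, fake_quantize(self.weight), self.bias)
```

Post-training quantization applies the same rounding only after training, so the model never learns to compensate for it, which is why training-time quantization tends to preserve accuracy at lower bit-widths.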
LLM-I: LLMs are Naturally Interleaved Multimodal Creators
We propose LLM-Interleaved (LLM-I), a flexible and dynamic framework that reframes interleaved image-text generation as a tool-use problem. LLM-I is designed to overcome the "one-tool" bottleneck of current unified models, which are limited to synthetic imagery and struggle with tasks requiring factual grounding or programmatic precision. Our framework ...
Guo, Zirun +3 more
openaire +2 more sources
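The tool-use framing above means the model emits tool calls (image generation, image search, code-rendered figures) interleaved with its text instead of synthesizing every image itself. A hypothetical dispatcher sketch; the tool names and the <tool> call format are invented here for illustration and are not LLM-I's actual interface:

```python
import json
from typing import Callable

# Hypothetical tool registry: each tool returns a path/URL to an image.
TOOLS: dict[str, Callable[[str], str]] = {
    "diffusion": lambda prompt: f"gen/{hash(prompt)}.png",   # synthetic imagery
    "image_search": lambda query: f"web/{hash(query)}.jpg",  # factual grounding
    "plot_code": lambda code: f"fig/{hash(code)}.svg",       # programmatic precision
}

def render(llm_output: str) -> list[str]:
    """Turn interleaved LLM output into a text/image sequence.

    Assumes the model marks tool calls as lines like:
      <tool>{"name": "image_search", "arg": "Eiffel Tower at night"}</tool>
    """
    parts = []
    for line in llm_output.splitlines():
        if line.startswith("<tool>") and line.endswith("</tool>"):
            call = json.loads(line[len("<tool>"):-len("</tool>")])
            parts.append(TOOLS[call["name"]](call["arg"]))  # dispatch to the right tool
        else:
            parts.append(line)  # plain text passes through unchanged
    return parts
```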
This paper presents an integrated AI-driven cardiovascular platform unifying multimodal data, predictive analytics, and real-time monitoring. It demonstrates how artificial intelligence, from deep learning to federated learning, enables early diagnosis, precision treatment, and personalized rehabilitation across the full disease lifecycle, promoting a ...
Mowei Kong +4 more
wiley +1 more source
SciLitMiner: An Intelligent System for Scientific Literature Mining and Knowledge Discovery
SciLitMiner is an intelligent system that ingests scientific literature from federated sources, filters it using advanced information retrieval methods, and applies retrieval-augmented generation tailored to scientific domains. Demonstrated on creep deformation in γ-TiAl alloys, SciLitMiner provides a controlled workflow for systematic knowledge discovery and ...
Vipul Gupta +3 more
wiley +1 more source
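The ingest-filter-generate workflow described here is the standard retrieval-augmented generation (RAG) pattern. A minimal sketch of that pattern; every name below is hypothetical, since the snippet does not expose SciLitMiner's API, and the toy lexical scorer stands in for a real ranker such as BM25 or dense retrieval:

```python
from dataclasses import dataclass

@dataclass
class Paper:
    title: str
    abstract: str

def relevance(query: str, paper: Paper) -> float:
    """Toy lexical-overlap score standing in for a real IR ranker."""
    q = set(query.lower().split())
    d = set((paper.title + " " + paper.abstract).lower().split())
    return len(q & d) / max(len(q), 1)

def rag_answer(query: str, corpus: list[Paper], ask_llm, k: int = 3) -> str:
    """Filter the corpus by retrieval score, then ground the LLM on the top-k hits."""
    top = sorted(corpus, key=lambda p: relevance(query, p), reverse=True)[:k]
    context = "\n\n".join(f"{p.title}\n{p.abstract}" for p in top)
    return ask_llm(f"Answer from these abstracts only:\n{context}\n\nQuestion: {query}")
```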
LLM-DSE: Searching Accelerator Parameters with LLM Agents
Even though high-level synthesis (HLS) tools mitigate the challenges of programming domain-specific accelerators (DSAs) by raising the abstraction level, optimizing hardware directive parameters remains a significant hurdle. Existing heuristic and learning-based methods struggle with adaptability and sample efficiency. We present LLM-DSE, a multi-agent ...
Wang, Hanyu +8 more
openaire +2 more sources
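Directive tuning in HLS means picking pragma values (unroll factors, pipeline initiation intervals, array partitioning) whose design space is too large for exhaustive search. A generic LLM-in-the-loop exploration sketch, not LLM-DSE's actual agent design, with both callables assumed for illustration:

```python
import json

def llm_dse(propose, evaluate, budget: int = 20) -> dict:
    """Iterative design-space exploration with an LLM proposer.

    propose(history) -> JSON string of a pragma configuration,
                        e.g. '{"UNROLL": 8, "PIPELINE_II": 1, "PARTITION": 4}'
    evaluate(config) -> estimated latency (lower is better), e.g. from an HLS run.
    Both callables are assumptions standing in for the paper's agents.
    """
    history: list[tuple[dict, float]] = []
    best_cfg, best_lat = None, float("inf")
    for _ in range(budget):
        cfg = json.loads(propose(history))  # LLM reads past (config, latency) pairs
        lat = evaluate(cfg)                 # costly HLS synthesis or estimation
        history.append((cfg, lat))          # feedback is what buys sample efficiency
        if lat < best_lat:
            best_cfg, best_lat = cfg, lat
    return {"config": best_cfg, "latency": best_lat}
```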
Family Limited Partnerships: Are They Still a Viable Weapon in the Estate Planner’s Arsenal? [PDF]
Van Leer-Greenberg, Matthew, Esq., LLM
core +1 more source
Objective: The objective of this study was to compare the long-term safety profiles of ocrelizumab and rituximab in persons with multiple sclerosis (MS). Methods: Using retrospective data from the University of California (UC) Health System, we simulated a target clinical trial. The primary cohort from UC San Francisco (UCSF) and a validation cohort from ...
Gabriel Cerono +3 more
wiley +1 more source
LLM-Auction: Generative Auction towards LLM-Native Advertising
The rapid advancement of large language models (LLMs) necessitates novel monetization strategies, among which LLM-native advertising has emerged as a promising paradigm by naturally integrating advertisement within LLM-generated responses. However, this paradigm fundamentally shifts the auction object from discrete ad slots to the distribution over LLM ...
Zhao, Chujie +6 more
openaire +2 more sources
CUDA-LLM: LLMs Can Write Efficient CUDA Kernels
Large Language Models (LLMs) have demonstrated strong capabilities in general-purpose code generation. However, generating code that is deeply hardware-specific, architecture-aware, and performance-critical, especially for massively parallel GPUs, remains a complex challenge.
Chen, Wentao +4 more
openaire +2 more sources