Results 151 to 160 of about 186,001

LLM-gene_function_predict

open access: yes
Our project's ...
openaire   +1 more source

LLM-I: LLMs are Naturally Interleaved Multimodal Creators

open access: yes
We propose LLM-Interleaved (LLM-I), a flexible and dynamic framework that reframes interleaved image-text generation as a tool-use problem. LLM-I is designed to overcome the "one-tool" bottleneck of current unified models, which are limited to synthetic imagery and struggle with tasks requiring factual grounding or programmatic precision. Our framework ...
Guo, Zirun   +3 more
openaire   +2 more sources
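The tool-use framing in the LLM-I abstract can be sketched roughly as follows: instead of one fixed image generator, each image slot is routed to a specialized tool. The tool names, the `ImageRequest` shape, and the routing rule below are illustrative assumptions, not the paper's actual interface.

```python
# Minimal sketch of interleaved generation as tool-use (all names hypothetical).
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ImageRequest:
    kind: str      # e.g. "synthetic", "factual", "programmatic"
    prompt: str

def diffusion_tool(req: ImageRequest) -> str:
    return f"<diffusion image for: {req.prompt}>"

def image_search_tool(req: ImageRequest) -> str:
    return f"<retrieved photo for: {req.prompt}>"

def code_plot_tool(req: ImageRequest) -> str:
    return f"<plotted chart for: {req.prompt}>"

# One tool per failure mode of the "one-tool" unified model:
TOOLS: Dict[str, Callable[[ImageRequest], str]] = {
    "synthetic": diffusion_tool,     # creative imagery
    "factual": image_search_tool,    # needs factual grounding
    "programmatic": code_plot_tool,  # needs programmatic precision
}

def dispatch(req: ImageRequest) -> str:
    """Route an image slot to the matching tool instead of one fixed model."""
    return TOOLS[req.kind](req)
```

In this framing the LLM's job reduces to emitting the right `ImageRequest` per slot; the dispatcher handles execution.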

AI Powered Biobanks From Static Archives to Dynamic Discovery Engines

open access: yes (Advanced Intelligent Discovery, EarlyView)
Large language models (LLMs) provide a potential framework for transforming biobanks from static data repositories into intelligent discovery engines. By enabling unified representation and analysis of multimodal biomedical data, LLM‐based systems facilitate dynamic risk prediction, biomarker identification, and mechanistic interpretation, thereby ...
Wenzhen Yin   +5 more
wiley   +1 more source

An Autonomous Large Language Model‐Agent Framework for Transparent and Local Time Series Forecasting

open access: yes (Advanced Intelligent Discovery, EarlyView)
Architecture of the proposed large language model (LLM)‐based agent framework for autonomous time series forecasting in thermal power generation systems. The framework operates through a vertical pipeline initiated by natural language queries from users, which are processed by the LLM Agent Core powered by Llama.cpp and a ReAct loop with persistent ...
William Gouvêa Buratto   +5 more
wiley   +1 more source
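The abstract above describes a ReAct loop driving the agent. A minimal version of such a loop might look like the following; the model call is stubbed here (in the paper's setup it would be served by Llama.cpp), and the tool names, prompt format, and stop condition are assumptions.

```python
# Bare-bones ReAct loop: Thought/Action -> Observation, repeated until the
# model emits a final answer. All formats here are illustrative.
from typing import Callable, Dict

def react_loop(model: Callable[[str], str],
               tools: Dict[str, Callable[[str], str]],
               query: str, max_steps: int = 5) -> str:
    transcript = f"Question: {query}\n"
    for _ in range(max_steps):
        step = model(transcript)          # "Action: tool[arg]" or "Final: ..."
        transcript += step + "\n"
        if step.startswith("Final:"):
            return step.removeprefix("Final:").strip()
        if step.startswith("Action:"):
            name, _, arg = step.removeprefix("Action:").strip().partition("[")
            observation = tools[name](arg.rstrip("]"))
            transcript += f"Observation: {observation}\n"
    return "no answer"

# Stub model: first fetches recent readings, then emits a forecast.
def stub_model(transcript: str) -> str:
    if "Observation:" not in transcript:
        return "Action: last_readings[3]"
    return "Final: forecast 102.0"

tools = {"last_readings": lambda n: "[101.0, 102.0, 103.0]"}
print(react_loop(stub_model, tools, "Forecast next load"))  # prints: forecast 102.0
```

The persistent transcript is what gives the loop its transparency: every tool call and observation is recorded in plain text.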

Reinforcing Stereotypes in Health Care Through Artificial Intelligence–Generated Images: A Call for Regulation

open access: yes (Mayo Clinic Proceedings: Digital Health)
Hannah van Kolfschooten, LLM   +1 more
doaj   +1 more source

LLM-DSE: Searching Accelerator Parameters with LLM Agents

open access: yes
Even though high-level synthesis (HLS) tools mitigate the challenges of programming domain-specific accelerators (DSAs) by raising the abstraction level, optimizing hardware directive parameters remains a significant hurdle. Existing heuristic and learning-based methods struggle with adaptability and sample efficiency. We present LLM-DSE, a multi-agent ...
Wang, Hanyu   +8 more
openaire   +2 more sources
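The directive-parameter search the abstract alludes to can be sketched as a propose-evaluate-keep-best loop. Below, a random proposer stands in for the LLM agents and a toy latency function stands in for the HLS report; both, along with the parameter space, are assumptions rather than LLM-DSE's actual components.

```python
# Toy design-space exploration over HLS-style directive parameters.
import random

SPACE = {
    "unroll_factor": [1, 2, 4, 8],
    "pipeline_ii": [1, 2, 4],
    "array_partition": [1, 2, 4],
}

def toy_latency(cfg: dict) -> float:
    # Stand-in for an HLS report: more parallelism -> lower latency,
    # higher initiation interval -> higher latency.
    return 1000.0 / (cfg["unroll_factor"] * cfg["array_partition"]) \
        + 10.0 * cfg["pipeline_ii"]

def search(steps: int = 50, seed: int = 0):
    rng = random.Random(seed)
    best, best_lat = None, float("inf")
    for _ in range(steps):
        cfg = {k: rng.choice(v) for k, v in SPACE.items()}  # proposal step
        lat = toy_latency(cfg)                              # evaluation step
        if lat < best_lat:
            best, best_lat = cfg, lat
    return best, best_lat
```

An LLM-agent approach replaces the random proposal step with context-aware suggestions, which is where the claimed sample-efficiency gains would come from.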

AI‐Guided Co‐Optimization of Advanced Field‐Effect Transistors: Bridging Material, Device, and Fabrication Design

open access: yes (Advanced Intelligent Discovery, EarlyView)
This article outlines how artificial intelligence could reshape the design of next‐generation transistors as traditional scaling reaches its limits. It discusses emerging roles of machine learning across materials selection, device modeling, and fabrication processes, and highlights hierarchical reinforcement learning as a promising framework for ...
Shoubhanik Nath   +4 more
wiley   +1 more source

LLM-Auction: Generative Auction towards LLM-Native Advertising

open access: yes
The rapid advancement of large language models (LLMs) necessitates novel monetization strategies, among which LLM-native advertising has emerged as a promising paradigm by naturally integrating advertisements within LLM-generated responses. However, this paradigm fundamentally shifts the auction object from discrete ad slots to the distribution over LLM ...
Zhao, Chujie   +6 more
openaire   +2 more sources

When Biology Meets Medicine: A Perspective on Foundation Models

open access: yes (Advanced Intelligent Discovery, EarlyView)
Artificial intelligence, and foundation models in particular, are transforming life sciences and medicine. This perspective reviews biological and medical foundation models across scales, highlighting key challenges in data availability, model evaluation, and architectural design.
Kunying Niu   +3 more
wiley   +1 more source

CUDA-LLM: LLMs Can Write Efficient CUDA Kernels

open access: yes
Large Language Models (LLMs) have demonstrated strong capabilities in general-purpose code generation. However, generating code that is deeply hardware-specific, architecture-aware, and performance-critical, especially for massively parallel GPUs, remains a complex challenge.
Chen, Wentao   +4 more
openaire   +2 more sources
