Results 151 to 160 of about 52,931 (300)
Medical images and radiology reports are essential for physicians to diagnose medical conditions. However, the vast diversity and cross-source heterogeneity inherent in these data have posed significant challenges to the generalizability of current data ...
Yutong Zhang +16 more
doaj +1 more source
Large Language Model in Materials Science: Roles, Challenges, and Strategic Outlook
Large language models (LLMs) are reshaping materials science. Acting as Oracle, Surrogate, Quant, and Arbiter, they now extract knowledge, predict properties, gauge risk, and steer decisions within a traceable loop. Overcoming data heterogeneity, hallucinations, and poor interpretability demands domain‐adapted models, cross‐modal data standards, and ...
Jinglan Zhang +4 more
wiley +1 more source
Chat computational fluid dynamics (CFD) introduces a large language model (LLM)‐driven agent that automates OpenFOAM simulations end‐to‐end, attaining 82.1% execution success and 68.12% physical fidelity across 315 benchmarks—far surpassing prior systems.
E Fan +8 more
wiley +1 more source
Exploring large language models for summarizing and interpreting an online brain tumor support forum
Objective: This study explored the capabilities of large language models (LLMs) GPT-3.5, GPT-4, and Llama 3 to summarize qualitative data from an online brain tumor support forum, assessing the differences between these methods and traditional thematic ...
Christy Muasher-Kerwin +4 more
doaj +1 more source
LLM‐Based Scientific Assistants for Knowledge Extraction: Which Design Choices Matter?
A comprehensive framework for optimizing Large Language Models in domain‐specific applications is introduced. The LLM Playground integrates Prompt Engineering, knowledge augmentation, and advanced reasoning strategies to enable systematic comparison of architectures and base models.
David Exler +7 more
wiley +1 more source
AI‐Powered Biobanks: From Static Archives to Dynamic Discovery Engines
Large language models (LLMs) provide a potential framework for transforming biobanks from static data repositories into intelligent discovery engines. By enabling unified representation and analysis of multimodal biomedical data, LLM‐based systems facilitate dynamic risk prediction, biomarker identification, and mechanistic interpretation, thereby ...
Wenzhen Yin +5 more
wiley +1 more source
An Autonomous Large Language Model‐Agent Framework for Transparent and Local Time Series Forecasting
Architecture of the proposed large language model (LLM)‐based agent framework for autonomous time series forecasting in thermal power generation systems. The framework operates through a vertical pipeline initiated by natural language queries from users, which are processed by the LLM Agent Core powered by Llama.cpp and a ReAct loop with persistent ...
William Gouvêa Buratto +5 more
wiley +1 more source
When Biology Meets Medicine: A Perspective on Foundation Models
Artificial intelligence, and foundation models in particular, are transforming life sciences and medicine. This perspective reviews biological and medical foundation models across scales, highlighting key challenges in data availability, model evaluation, and architectural design.
Kunying Niu +3 more
wiley +1 more source
Large Language Model‐Based Chatbots in Higher Education
The use of large language models (LLMs) in higher education can facilitate personalized learning experiences, advance asynchronous learning, and support instructors, students, and researchers across diverse fields. The development of regulations and guidelines that address ethical and legal issues is essential to ensure safe and responsible adaptation ...
Defne Yigci +4 more
wiley +1 more source
Calibration‐Free Electromyography Motor Intent Decoding Using Large‐Scale Supervised Pretraining
Calibration‐free electromyography motor intent decoding is enabled through large‐scale supervised pretraining across heterogeneous datasets. A Spatially Aware Feature‐learning Transformer processes variable channel counts and electrode geometries, allowing transfer across users and recording setups. On a held‐out benchmark, fine‐tuned cross‐user models ...
Alexander E. Olsson +3 more
wiley +1 more source