Multimodal Emotion Recognition Using Modality-Wise Knowledge Distillation. [PDF]
Lee S, Ahn Y, Shin JW.
europepmc +1 more source
An Autonomous Large Language Model‐Agent Framework for Transparent and Local Time Series Forecasting
Architecture of the proposed large language model (LLM)‐based agent framework for autonomous time series forecasting in thermal power generation systems. The framework operates through a vertical pipeline initiated by natural language queries from users, which are processed by the LLM Agent Core powered by Llama.cpp and a ReAct loop with persistent ...
William Gouvêa Buratto +5 more
wiley +1 more source
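The ReAct loop mentioned in the entry above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the framework runs a local LLM via Llama.cpp, which is replaced here by a scripted stub (`scripted_llm`), and `moving_average_forecast` is a made-up placeholder tool.

```python
# Minimal sketch of a ReAct-style agent loop for time-series forecasting.
# A scripted stub stands in for the local LLM so the control flow is runnable.

def moving_average_forecast(series, window=3):
    """Hypothetical tool: forecast the next value as the mean of the last points."""
    return sum(series[-window:]) / window

def scripted_llm(observation):
    """Stub for the LLM: picks the next thought/action from the observation."""
    if "QUERY" in observation:
        return ("Thought: I need a forecast tool.", "Action: forecast")
    return ("Thought: I have the forecast; answer the user.", "Action: finish")

def react_loop(query, series, max_steps=4):
    """Alternate reasoning (thought/action) with tool calls (observations)."""
    observation = f"QUERY: {query}"
    trace, answer = [], None
    for _ in range(max_steps):
        thought, action = scripted_llm(observation)
        trace.append((thought, action))
        if action == "Action: forecast":
            result = moving_average_forecast(series)
            observation = f"OBSERVATION: forecast={result:.2f}"
        elif action == "Action: finish":
            answer = observation
            break
    return answer, trace
```

The key ReAct property shown here is that the model never computes the forecast itself; it only decides when to invoke a tool and when to stop, with each tool result fed back as the next observation.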
When Biology Meets Medicine: A Perspective on Foundation Models
Artificial intelligence, and foundation models in particular, are transforming life sciences and medicine. This perspective reviews biological and medical foundation models across scales, highlighting key challenges in data availability, model evaluation, and architectural design.
Kunying Niu +3 more
wiley +1 more source
MLD-Net: A Multi-Level Knowledge Distillation Network for Automatic Modulation Recognition. [PDF]
Zhang X +6 more
europepmc +1 more source
An explainable CatBoost model was trained to predict the bandgaps of 474 phosphate crystals based on composition and density descriptors. SHAP analysis identified two key variables—d‐electron‐count dispersion and atomic‐density dispersion—as the primary drivers of the model's predictions.
Wenhu Wang +3 more
wiley +1 more source
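The explainability workflow in the entry above (model prediction plus SHAP attribution) can be sketched without the CatBoost or shap libraries by exploiting a known closed form: for a linear model, the SHAP value of feature *i* is φᵢ = wᵢ·(xᵢ − E[xᵢ]). The feature names below echo the descriptors the abstract highlights, but the weights and data are invented for illustration.

```python
# Toy SHAP attribution for a linear surrogate model.
# For linear models, SHAP values reduce to phi_i = w_i * (x_i - E[x_i]),
# so attributions can be computed exactly in a few lines.
import statistics

FEATURES = ["d_electron_dispersion", "density_dispersion", "mean_mass"]
# Hypothetical weights of a fitted linear model (not from the paper).
WEIGHTS = {"d_electron_dispersion": 1.8, "density_dispersion": -1.1, "mean_mass": 0.05}

def shap_linear(sample, background):
    """Exact SHAP attributions of `sample` against a background dataset."""
    means = {f: statistics.mean(row[f] for row in background) for f in FEATURES}
    return {f: WEIGHTS[f] * (sample[f] - means[f]) for f in FEATURES}
```

Ranking features by the magnitude of these attributions is what lets an analysis conclude, as the abstract does, that two dispersion descriptors dominate the predictions.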
Human visual attention-inspired knowledge distillation underlying interpretable computational pathology. [PDF]
Yu M +12 more
europepmc +1 more source
Haptic In‐Sensor Computing Device Based on CNT/PDMS Nanocomposite Physical Reservoir
Using a porous carbon nanotube‐polydimethylsiloxane nanocomposite, a sensor array integrated with a physical reservoir computing paradigm capable of in‐sensor computing is demonstrated. The device can distinguish among nine objects with an accuracy above 80%, opening the possibility of low‐power sensing/computing for future robotics.
Kouki Kimizuka +7 more
wiley +1 more source
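Physical reservoir computing, as used in the entry above, trains only a linear readout on top of a fixed nonlinear system. A minimal software analogue: a frozen random projection plays the role of the CNT/PDMS reservoir, and a simple perceptron readout does the classification. Everything here (dimensions, data, training rule) is an illustrative assumption, not the device's actual pipeline.

```python
# Sketch of the reservoir computing idea: the nonlinear "reservoir" is fixed
# and untrained (like a physical nanocomposite); only a linear readout is fit.
import math
import random

random.seed(0)
DIM = 20  # reservoir size (arbitrary)

# Fixed random input projection: stands in for the physical reservoir.
W_IN = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(DIM)]

def reservoir_state(x):
    """Map a 2-d input to a high-dimensional nonlinear feature vector."""
    return [math.tanh(sum(w * v for w, v in zip(row, x))) for row in W_IN]

def train_readout(samples, labels, lr=0.1, epochs=200):
    """Fit a one-vs-rest perceptron readout on reservoir states."""
    classes = sorted(set(labels))
    w = {c: [0.0] * DIM for c in classes}
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            s = reservoir_state(x)
            for c in classes:
                t = 1.0 if c == y else -1.0
                out = sum(wi * si for wi, si in zip(w[c], s))
                if t * out <= 0:  # misclassified: nudge the readout
                    w[c] = [wi + lr * t * si for wi, si in zip(w[c], s)]
    return w

def predict(w, x):
    s = reservoir_state(x)
    return max(w, key=lambda c: sum(wi * si for wi, si in zip(w[c], s)))
```

The design point this illustrates is why such devices can be low-power: all the expensive nonlinear transformation happens "for free" in the fixed physical substrate, and only a cheap linear layer is ever trained.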
CSWin-MDKDNet: cross-shaped window network with multi-dimensional fusion and knowledge distillation for medical image segmentation. [PDF]
Cui G +8 more
europepmc +1 more source
Calibration‐Free Electromyography Motor Intent Decoding Using Large‐Scale Supervised Pretraining
Calibration‐free electromyography motor intent decoding is enabled through large‐scale supervised pretraining across heterogeneous datasets. A Spatially Aware Feature‐learning Transformer processes variable channel counts and electrode geometries, allowing transfer across users and recording setups. On a held‐out benchmark, fine‐tuned cross‐user models
Alexander E. Olsson +3 more
wiley +1 more source
Aligning to the teacher: multilevel feature-aligned knowledge distillation. [PDF]
Zhang Y +7 more
europepmc +1 more source