Results 61 to 70 of about 23,034
This study introduces a framework that combines graph neural networks with causal inference to forecast recurrence and uncover the clinical and pathological factors driving it. It further provides interpretability, validates risk factors via counterfactual and interventional analyses, and offers evidence‐based insights for treatment planning ...
Jubair Ahmed +3 more
wiley +1 more source
This paper presents an integrated AI‐driven cardiovascular platform unifying multimodal data, predictive analytics, and real‐time monitoring. It demonstrates how artificial intelligence—from deep learning to federated learning—enables early diagnosis, precision treatment, and personalized rehabilitation across the full disease lifecycle, promoting a ...
Mowei Kong +4 more
wiley +1 more source
This paper examines how artificial intelligence (AI) and digital twin (DT) technologies are revolutionizing tunnel surveillance, offering proactive maintenance strategies and enhanced safety protocols. It explores AI's analytical power and DT's virtual replicas of infrastructure, emphasizing their role in optimizing maintenance and safety in tunnel management.
Mohammad Afrazi +4 more
wiley +1 more source
Visualizations for an Explainable Planning Agent
In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making.
Bellamy, Rachel K. E. +6 more
core +1 more source
This paper explores how climate‐resilient technologies, such as smart grids, digital twins, and self‐healing materials, can enhance urban resilience. It highlights the urgent need for proactive planning, public‐private collaboration, and data‐driven innovation to future‐proof underground infrastructure amid accelerating climate and urban pressures ...
Kai Chen Goh +12 more
wiley +1 more source
AS‐XAI: Self‐Supervised Automatic Semantic Interpretation for CNN
Explainable artificial intelligence (XAI) aims to develop transparent explanatory approaches for “black‐box” deep learning models. However, it remains difficult for existing methods to achieve the trade‐off of the three key criteria in interpretability ...
Changqi Sun +3 more
doaj +1 more source
ExSS 2018: Workshop on explainable smart systems [PDF]
Smart systems that apply complex reasoning to make decisions and plan behavior are often difficult for users to understand. While research to make systems more explainable and therefore more intelligible and transparent is gaining pace, there are ...
Lim, B., Smith, A., Stumpf, S.
core
Artificial intelligence in preclinical epilepsy research: Current state, potential, and challenges
Abstract Preclinical translational epilepsy research uses animal models to better understand the mechanisms underlying epilepsy and its comorbidities, as well as to analyze and develop potential treatments that may mitigate this neurological disorder and its associated conditions. Artificial intelligence (AI) has emerged as a transformative tool across
Jesús Servando Medel‐Matus +7 more
wiley +1 more source
Explainable machine learning in materials science
Machine learning models are increasingly used in materials studies because of their exceptional accuracy. However, the most accurate machine learning models are usually difficult to explain.
Xiaoting Zhong +5 more
doaj +1 more source
The graphical abstract presents the concept of applying machine‐learning algorithms to assess the performance of photovoltaic modules. Data from solar panels are fed to surrogate intelligent models to assess the following aspects of performance: fault identification, quantification of energy production, and degradation trends over time. The combination of data
Nangamso Nathaniel Nyangiwe +3 more
wiley +1 more source