Face Aging by Explainable Conditional Adversarial Autoencoders
This paper deals with Generative Adversarial Networks (GANs) applied to face aging. An explainable face aging framework is proposed that builds on a well-known face aging approach, namely the Conditional Adversarial Autoencoder (CAAE).
Christos Korgialas +4 more
doaj +1 more source
Explainable Artificial Intelligence (XAI)
Complex machine learning models often perform better, but they behave as black boxes. That is where Explainable AI (XAI) comes into play. For many applications, researchers, and decision-makers, understanding why a model makes a specific prediction can be as crucial as its accuracy.
openaire +1 more source
Exact and Approximate Rule Extraction from Neural Networks with Boolean Features [PDF]
Rule extraction from classifiers treated as black boxes is an important topic in explainable artificial intelligence (XAI). It is concerned with finding rules that describe classifiers, that are understandable to humans, and that have the form If ... Then ...
Howe, J. M., Mereani, F.
core +1 more source
Explainable Artificial Intelligence in Echocardiography [PDF]
Recent advancements in artificial intelligence (AI) have generated novel opportunities and challenges in ultrasound imaging. Deep learning algorithms exhibit significant potential in analyzing echocardiographic images, encompassing tasks such as view ...
Hu Xuelin, Zhu Ye, Zhang Zisang, Quan Yuanting, Chen Wenwen, Chen Leichong, Xu Guangyu, Qin Luning, Xie Mingxing, Zhang Li
doaj +1 more source
(Korean-language title, garbled in extraction) [PDF]
Department of Computer Science and Engineering. As deep learning has grown rapidly, so has the desire to interpret its black boxes. As a result, many analysis tools have emerged to interpret deep learning models.
Lee, Ginkyeng
core
Robust Network Intrusion Detection through Explainable Artificial Intelligence (XAI)
In this letter, we present a two-stage pipeline for robust network intrusion detection. First, we implement an extreme gradient boosting (XGBoost) model to perform supervised intrusion detection and leverage the SHapley Additive exPlanations (SHAP) framework to devise explanations of our model. In the second stage, we use these explanations to ...
Pieter Barnard +2 more
openaire +2 more sources
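The SHAP framework named in the abstract above attributes a model's prediction to its input features via Shapley values. A minimal illustrative sketch of that idea, independent of the letter's actual pipeline: exact Shapley attribution for a toy scoring model, where the model, the feature names, and the weights are all hypothetical stand-ins (real deployments would use the `shap` library on an XGBoost model).

```python
from itertools import combinations
from math import factorial

# Hypothetical stand-in for a trained intrusion-detection model:
# a linear score over three made-up traffic features.
WEIGHTS = {"pkt_rate": 0.5, "duration": 0.3, "port_entropy": 0.2}

def model(features):
    return sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)

def shapley_values(model, instance, baseline):
    """Exact Shapley attribution: for each feature, average its marginal
    contribution over all subsets of the other features, with absent
    features replaced by their baseline values."""
    names = list(instance)
    n = len(names)
    phi = {}
    for f in names:
        others = [g for g in names if g != f]
        total = 0.0
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley kernel weight for a coalition of this size
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_f = {k: (instance[k] if k in subset or k == f else baseline[k])
                          for k in names}
                without_f = {k: (instance[k] if k in subset else baseline[k])
                             for k in names}
                total += weight * (model(with_f) - model(without_f))
        phi[f] = total
    return phi

x = {"pkt_rate": 10.0, "duration": 2.0, "port_entropy": 4.0}
base = {k: 0.0 for k in x}
phi = shapley_values(model, x, base)
# Efficiency property: attributions sum to model(x) - model(baseline)
assert abs(sum(phi.values()) - (model(x) - model(base))) < 1e-9
```

For a linear model, each attribution reduces to weight times the feature's deviation from baseline; the exact enumeration above is exponential in the number of features, which is why libraries such as `shap` use model-specific approximations.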
Potential Applications of Explainable Artificial Intelligence to Actuarial Problems
Explainable artificial intelligence (XAI) is a group of techniques and evaluations that allows users to understand artificial intelligence knowledge and increase the reliability of the results produced using artificial intelligence.
Catalina Lozano-Murcia +4 more
doaj +1 more source
The Grammar of Interactive Explanatory Model Analysis
The growing need for in-depth analysis of predictive models leads to a series of new methods for explaining their local and global properties. Which of these methods is the best? It turns out that this is an ill-posed question.
Baniecki, Hubert, Biecek, Przemyslaw
core
Explainable Artificial Intelligence (XAI) is crucial for the transition from the fourth to fifth industrial revolution, providing transparency and fostering user confidence in Artificial Intelligence (AI) powered systems.
Konstantinos Nikiforidis +8 more
doaj +1 more source
Explaining reaction coordinates of alanine dipeptide isomerization obtained from deep neural networks using Explainable Artificial Intelligence (XAI) [PDF]
Takuma Kikutsuji +5 more
openalex +2 more sources