
Explainable Artificial Intelligence Applications in Cyber Security: State-of-the-Art in Research

open access: yes (IEEE Access, 2022)
This survey presents a comprehensive review of the current literature on Explainable Artificial Intelligence (XAI) methods for cyber security applications. Due to the rapid development of Internet-connected systems and Artificial Intelligence in recent years ...
Zhibo Zhang   +4 more
doaj   +1 more source

Visualizations for an Explainable Planning Agent

open access: yes, 2018
In this paper, we report on the visualization capabilities of an Explainable AI Planning (XAIP) agent that can support human-in-the-loop decision making.
Bellamy, Rachel K. E.   +6 more
core   +1 more source

Molecular bases of circadian magnesium rhythms across eukaryotes

open access: yes (FEBS Letters, EarlyView)
Circadian rhythms in intracellular [Mg2+] exist across eukaryotic kingdoms. Central roles for Mg2+ in metabolism suggest that Mg2+ rhythms could regulate daily cellular energy and metabolism. In this Perspective paper, we propose that ancestral prokaryotic transport proteins could be responsible for mediating Mg2+ rhythms and posit a feedback model ...
Helen K. Feord, Gerben van Ooijen
wiley   +1 more source

An Interpretable Machine Vision Approach to Human Activity Recognition using Photoplethysmograph Sensor Data [PDF]

open access: yes, 2018
The current gold standard for human activity recognition (HAR) is based on the use of cameras. However, the poor scalability of camera systems makes them impractical for wider adoption of HAR in mobile computing contexts ...
Brophy, Eoin   +4 more
core   +1 more source

Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME

open access: yes, 2018
We present a heuristic-based algorithm to induce nonmonotonic logic programs that explain the behavior of XGBoost-trained classifiers.
Gupta, Gopal, Shakerin, Farhad
core   +1 more source
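
As an aside for readers skimming this entry: the pairing named in the title (LIME explanations of an XGBoost classifier) can be reproduced with off-the-shelf libraries. The sketch below only illustrates that general setup, not the paper's logic-program induction method; the dataset, model parameters, and number of features are illustrative assumptions.

```python
# Minimal sketch (not from the paper): train an XGBoost classifier and ask LIME
# for a local explanation of one prediction. Dataset and parameters are assumed.
import xgboost as xgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.2, random_state=0
)

# The "black box" to be explained: a gradient-boosted tree ensemble.
model = xgb.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

# LIME fits a sparse local surrogate model around a single instance.
explainer = LimeTabularExplainer(
    X_train,
    feature_names=list(data.feature_names),
    class_names=list(data.target_names),
    discretize_continuous=True,
)
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())  # top feature contributions for this one prediction
```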

Structural biology of ferritin nanocages

open access: yes (FEBS Letters, EarlyView)
Ferritin is a conserved iron‐storage protein that sequesters iron as a ferric mineral core within a nanocage, protecting cells from oxidative damage and maintaining iron homeostasis. This review discusses ferritin biology, structure, and function, and highlights recent cryo‐EM studies revealing mechanisms of ferritinophagy, cellular iron uptake, and ...
Eloise Mastrangelo, Flavio Di Pisa
wiley   +1 more source

Enhancing Decision Tree based Interpretation of Deep Neural Networks through L1-Orthogonal Regularization

open access: yes, 2019
One obstacle that has so far prevented the adoption of machine learning models, particularly in critical areas, is their lack of explainability. In this work, a practicable approach to gaining explainability of deep artificial neural networks (NN) using an ...
Huber, Marco F.   +2 more
core   +1 more source

Bridging the gap: Multi‐stakeholder perspectives of molecular diagnostics in oncology

open access: yes (Molecular Oncology, EarlyView)
Although molecular diagnostics is transforming cancer care, implementing novel technologies remains challenging. This study identifies unmet needs and technology requirements through a two-step stakeholder involvement. Liquid biopsies for monitoring applications and predictive biomarker testing emerge as key unmet needs. Technology requirements vary by ...
Jorine Arnouts   +8 more
wiley   +1 more source

Deterministic Uncertainty Estimation for Multi-Modal Regression With Deep Neural Networks

open access: yes (IEEE Access)
The prediction interval (PI) is a common method for representing predictive uncertainty in regression with deep neural networks. This paper proposes an extension of the prediction interval that uses a union of disjoint intervals. Since previous PI methods assumed a ...
Jaehak Cho   +3 more
doaj   +1 more source
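
The idea named in this abstract, replacing a single prediction interval with a union of disjoint intervals for multi-modal targets, can be illustrated with a toy calculation. The sketch below is my own illustration under assumed bimodal samples, not the paper's deterministic estimation method.

```python
# Toy sketch (assumed data, not the paper's method): with a bimodal predictive
# distribution, a union of two disjoint intervals achieves roughly the same
# coverage as a single interval while being much narrower.
import numpy as np

rng = np.random.default_rng(0)
# Bimodal predictive samples: two well-separated modes, equal mixture weights.
samples = np.concatenate([rng.normal(-3.0, 0.3, 5000), rng.normal(3.0, 0.3, 5000)])

# Single 90% interval from empirical quantiles: it must span the empty gap.
lo, hi = np.quantile(samples, [0.05, 0.95])
single_width = hi - lo

# Union of two disjoint intervals, one 90% interval per mode.
left, right = samples[samples < 0], samples[samples >= 0]
l_lo, l_hi = np.quantile(left, [0.05, 0.95])
r_lo, r_hi = np.quantile(right, [0.05, 0.95])
union_width = (l_hi - l_lo) + (r_hi - r_lo)

print(f"single interval width:    {single_width:.2f}")  # wide, covers the gap
print(f"union of intervals width: {union_width:.2f}")   # narrower at similar coverage
```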

Toward Accountable and Explainable Artificial Intelligence Part One: Theory and Examples

open access: yes (IEEE Access, 2022)
Like other Artificial Intelligence (AI) systems, Machine Learning (ML) applications cannot explain their decisions, are marred by biases introduced during training, and suffer from algorithmic limitations.
Masood M. Khan, Jordan Vice
doaj   +1 more source
