ABSTRACT Objective Despite high stand‐alone performance, studies demonstrate that artificial intelligence (AI)‐supported endoscopic diagnostics often fall short in clinical applications due to human‐AI interaction factors. This video‐based trial on Barrett's esophagus aimed to investigate how examiner behavior, their levels of confidence, and system ...
David Roser +13 more
wiley +1 more source
SXAD: Shapely eXplainable AI-Based Anomaly Detection Using Log Data
Artificial Intelligence (AI) has made tremendous progress in anomaly detection. However, AI models operate as black boxes, making it challenging to provide reasoning behind their judgments in Log Anomaly Detection (LAD).
Kashif Alam +4 more
doaj +1 more source
Validating Explainer Methods: A Functionally Grounded Approach for Numerical Forecasting
ABSTRACT Forecasting systems have a long tradition in providing outputs accompanied by explanations. While the vast majority of such explanations relies on inherently interpretable linear statistical models, research has put forth eXplainable Artificial Intelligence (XAI) methods to improve the comprehensibility of nonlinear machine learning models. As ...
Felix Haag +2 more
wiley +1 more source
Human centred explainable AI decision-making in healthcare
Human-centred AI (HCAI) implies building AI systems in a manner that comprehends human aims, needs, and expectations by assisting, interacting, and collaborating with humans.
Catharina M. van Leersum, Clara Maathuis
doaj +1 more source
Edge‐Oriented DoS/DDoS Intrusion Detection and Supervision Platform
ABSTRACT This work presents an Edge Node‐Oriented DoS/DDoS Intrusion Detection and Monitoring Platform, a novel anomaly detection system based on temporal analysis with machine learning (ML) and deep learning (DL) algorithms, specifically designed to operate on edge servers with limited resources.
Geraldo Eufrazio Martins Júnior +3 more
wiley +1 more source
Explainable Artificial Intelligence (XAI) for Deep Learning Based Medical Imaging Classification. [PDF]
Ghnemat R, Alodibat S, Abu Al-Haija Q.
europepmc +1 more source
Counterfactual Explanations in Education: A Systematic Review
ABSTRACT Counterfactuals are a type of explanation based on hypothetical scenarios used in Explainable Artificial Intelligence (XAI), showing what changes in input variables could have led to different outcomes in predictive problems.
Pamela Buñay‐Guisñan +2 more
wiley +1 more source
Explainable artificial intelligence (XAI) in radiology and nuclear medicine: a literature review. [PDF]
de Vries BM +5 more
europepmc +1 more source
ABSTRACT Background Artificial Intelligence (AI) is increasingly discussed as a tool that can support speech and language therapy (SLT). However, clinical adoption of AI requires improved AI literacy among clinicians. AI is a rapidly evolving and often inconsistently defined field that can be difficult to navigate.
Ana Oliveira‐Buckley +3 more
wiley +1 more source
ABSTRACT Zero‐day exploits remain challenging to detect because they often appear in unknown distributions of signatures and rules. The article entails a systematic review and cross‐sectional synthesis of four fundamental model families for identifying zero‐day intrusions, namely, convolutional neural networks (CNN), deep neural networks (DNN ...
Abdullah Al Siam +3 more
wiley +1 more source

