Results 91 to 100 of about 4,042,759
Human centred explainable AI decision-making in healthcare
Human-centred AI (HCAI) means building AI systems that comprehend human aims, needs, and expectations by assisting, interacting with, and collaborating with humans.
Catharina M. van Leersum, Clara Maathuis
doaj +1 more source
Revolutionizing breast ultrasound diagnostics with EfficientNet-B7 and Explainable AI
Breast cancer is a leading cause of mortality among women globally, necessitating precise classification of breast ultrasound images for early diagnosis and treatment.
M. Latha +5 more
semanticscholar +1 more source
ABSTRACT Objective: Plasma fibrinogen is essential in thrombosis and fibrinolysis, yet its dynamic changes pre‐ and post‐intravenous thrombolysis (IVT) for predicting brain injury severity and prognosis in acute ischemic stroke (AIS) patients remain unclear.
Wenhai Zhai +28 more
wiley +1 more source
Explainable AI Frameworks: Navigating the Present Challenges and Unveiling Innovative Applications
This study delves into the realm of Explainable Artificial Intelligence (XAI) frameworks, aiming to empower researchers and practitioners with a deeper understanding of these tools. We establish a comprehensive knowledge base by classifying and analyzing ...
Neeraj Anand Sharma +5 more
doaj +1 more source
XAI-IoT: An Explainable AI Framework for Enhancing Anomaly Detection in IoT Systems
The exponential growth of Internet of Things (IoT) systems inspires new research directions on developing artificial intelligence (AI) techniques for detecting anomalies in these IoT systems. One important goal in this context is to accurately detect and ...
Anna Namrita Gummadi +2 more
semanticscholar +1 more source
ABSTRACT Objective: Peripheral neuropathies contribute to patient disability but may be diagnosed late or missed altogether due to late referral, limitations of current diagnostic methods, and a lack of specialized testing facilities. To address this clinical gap, we developed NeuropathAI, an interpretable deep learning–based multiclass classification ...
Chaima Ben Rabah +7 more
wiley +1 more source
From local explanations to global understanding with explainable AI for trees
Scott M. Lundberg +9 more
semanticscholar +1 more source
Association of Corticospinal Tract Asymmetry With Ambulatory Ability After Intracerebral Hemorrhage
ABSTRACT Background: Ambulatory ability after intracerebral hemorrhage (ICH) is important to patients. We tested whether asymmetry between ipsi‐ and contra‐lesional corticospinal tracts (CSTs) assessed by diffusion tensor imaging (DTI) is associated with post‐ICH ambulation.
Yasmin N. Aziz +25 more
wiley +1 more source
Analyzing and assessing explainable AI models for smart agriculture environments
We analyze a case study in the field of smart agriculture using an Explainable AI (XAI) approach, a field of study that aims to provide interpretations and explanations of the behaviour of AI systems.
Andrea Cartolano +2 more
semanticscholar +1 more source
Effects of Biological Sex and Age on Cerebrospinal Fluid Markers—A Retrospective Observational Study
ABSTRACT Objective: Cerebrospinal fluid (CSF) analysis is a key diagnostic tool for neurological diseases. To date, only a few studies have investigated the effect of age and biological sex on diagnostic markers extracted from CSF in larger cohorts. Methods: For this retrospective observational study, 4163 CSF findings (2012–2020) were evaluated.
Isabel‐Sophie Hafer +3 more
wiley +1 more source

