Results 131 to 140 of about 1,209,773

Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training

open access: yes (Technologies)
The adversarial robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training has been widely used to improve the robustness of neural networks to adversarial attacks, but little in-depth ...
Anna Chistyakova   +6 more
doaj   +1 more source

Fail‐Controlled Classifiers: A Swiss‐Army Knife Toward Trustworthy Systems

open access: yes (Software: Practice and Experience, EarlyView)
ABSTRACT Background Modern critical systems often need to make decisions and classify data and scenarios autonomously without detrimental effects on people, infrastructures, or the environment, while ensuring desired dependability attributes. Researchers typically strive to craft classifiers with perfect accuracy, which should always be correct and ...
Fahad Ahmed Khokhar   +4 more
wiley   +1 more source

Deep Reinforcement Learning-Based Adversarial Attack and Defense in Industrial Control Systems

open access: yes (Mathematics)
Adversarial attacks targeting industrial control systems, such as the Maroochy wastewater system attack and the Stuxnet worm attack, have caused significant damage to related facilities.
Mun-Suk Kim
doaj   +1 more source

Adversarial Attacks on Classifiers for Eye-based User Modelling [PDF]

open access: green, 2020
Inken Hagestedt   +2 more
openalex   +1 more source

Lifecycle‐Based Governance to Build Reliable Ethical AI Systems

open access: yes (Systems Research and Behavioral Science, EarlyView)
ABSTRACT Artificial intelligence (AI) systems represent a paradigm shift in technological capabilities, offering transformative potential across industries while introducing novel governance and implementation challenges. This paper presents a comprehensive framework for understanding AI systems through three critical dimensions: trustworthiness ...
Maikel Leon
wiley   +1 more source

POSES: Patch Optimization Strategies for Efficiency and Stealthiness Using eXplainable AI

open access: yes (IEEE Access)
Adversarial examples, which are carefully crafted inputs designed to deceive deep learning models, create significant challenges in Artificial Intelligence.
Han-Ju Lee   +3 more
doaj   +1 more source

Adversarial Regression for Detecting Attacks in Cyber-Physical Systems [PDF]

open access: gold, 2018
Amin Ghafouri   +2 more
openalex   +1 more source

Mission Aware Cyber‐Physical Security

open access: yes (Systems Engineering, EarlyView)
ABSTRACT Perimeter cybersecurity, while essential, has proven insufficient against sophisticated, coordinated, and cyber‐physical attacks. In contrast, mission‐centric cybersecurity emphasizes finding evidence of attack impact on mission success, allowing for targeted resource allocation to mitigate vulnerabilities and protect critical assets.
Georgios Bakirtzis   +3 more
wiley   +1 more source

CycleGAN-Gradient Penalty for Enhancing Android Adversarial Malware Detection in Gray Box Setting

open access: yes (IEEE Access)
Adversarial attacks pose significant threats to Android malware detection by undermining the effectiveness of machine learning-based systems. The rapid increase in Android apps complicates the management of malicious software that can compromise user ...
Fabrice Setephin Atedjio   +4 more
doaj   +1 more source

Pixel Lens: A Granular Assessment of Saliency Explanations

open access: yes (Artificial Intelligence for Engineering, EarlyView)
We propose a pipeline that detects shortcut‐dominated classifiers by comparing predictions on clean and shortcut‐perturbed images and checking dominance via a Shapley‐based ground‐truth explainer. The workflow quantifies the explanation quality of different explainable artificial intelligence (XAI) methods.
Kanglong Fan   +5 more
wiley   +1 more source
