Detecting Audio Adversarial Examples in Automatic Speech Recognition Systems Using Decision Boundary Patterns.
Zong W, Chow YW, Susilo W, Kim J, Le NT.
europepmc
Universal adversarial examples and perturbations for quantum classifiers.
Gong W, Deng DL.
europepmc
Universal adversarial defense in remote sensing based on pre-trained denoising diffusion models
Deep neural networks (DNNs) have risen to prominence as key solutions in numerous AI applications for earth observation (AI4EO). However, their susceptibility to adversarial examples poses a critical challenge, compromising the reliability of AI4EO ...
Weikang Yu, Yonghao Xu, Pedram Ghamisi
doaj
Generating adversarial examples without specifying a target model.
Yang G, Li M, Fang X, Zhang J, Liang X.
europepmc
Beware the Black-Box: On the Robustness of Recent Defenses to Adversarial Examples.
Mahmood K et al.
europepmc
Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing.
Xie P et al.
europepmc
POSES: Patch Optimization Strategies for Efficiency and Stealthiness Using eXplainable AI
Adversarial examples, which are carefully crafted inputs designed to deceive deep learning models, create significant challenges in Artificial Intelligence.
Han-Ju Lee et al.
doaj
Adversarial Example Generation Method Based on Wavelet Transform
Adversarial examples are crucial tools for assessing the robustness of deep neural networks (DNNs) and revealing potential security vulnerabilities.
Meng Bi et al.
doaj
Targeted Discrepancy Attacks: Crafting Selective Adversarial Examples in Graph Neural Networks
In this study, we present a novel approach to adversarial attacks for graph neural networks (GNNs), specifically addressing the unique challenges posed by graphical data.
Hyun Kwon, Jang-Woon Baek
doaj
Developing Hessian-Free Second-Order Adversarial Examples for Adversarial Training
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial training, which uses adversarial examples as training data, has been proven to be one of the most effective methods of ...
Qian Yaguan et al.
doaj