Results 91 to 100 of about 173,113

Universal adversarial defense in remote sensing based on pre-trained denoising diffusion models

open access: yes | International Journal of Applied Earth Observation and Geoinformation
Deep neural networks (DNNs) have risen to prominence as key solutions in numerous AI applications for earth observation (AI4EO). However, their susceptibility to adversarial examples poses a critical challenge, compromising the reliability of AI4EO ...
Weikang Yu, Yonghao Xu, Pedram Ghamisi
doaj   +1 more source

Generating adversarial examples without specifying a target model. [PDF]

open access: yes | PeerJ Comput Sci, 2021
Yang G, Li M, Fang X, Zhang J, Liang X.
europepmc   +1 more source

Improving the Transferability of Adversarial Examples With a Noise Data Enhancement Framework and Random Erasing. [PDF]

open access: yes | Front Neurorobot, 2021
Xie P   +8 more
europepmc   +1 more source

POSES: Patch Optimization Strategies for Efficiency and Stealthiness Using eXplainable AI

open access: yes | IEEE Access
Adversarial examples, which are carefully crafted inputs designed to deceive deep learning models, create significant challenges in Artificial Intelligence.
Han-Ju Lee   +3 more
doaj   +1 more source

Adversarial Example Generation Method Based on Wavelet Transform

open access: yes | Information
Adversarial examples are crucial tools for assessing the robustness of deep neural networks (DNNs) and revealing potential security vulnerabilities.
Meng Bi   +5 more
doaj   +1 more source

Targeted Discrepancy Attacks: Crafting Selective Adversarial Examples in Graph Neural Networks

open access: yes | IEEE Access
In this study, we present a novel approach to adversarial attacks for graph neural networks (GNNs), specifically addressing the unique challenges posed by graphical data.
Hyun Kwon, Jang-Woon Baek
doaj   +1 more source

Developing Hessian-Free Second-Order Adversarial Examples for Adversarial Training

open access: yes | International Journal of Applied Mathematics and Computer Science
Recent studies show that deep neural networks (DNNs) are extremely vulnerable to elaborately designed adversarial examples. Adversarial training, which uses adversarial examples as training data, has been proven to be one of the most effective methods of ...
Qian Yaguan   +5 more
doaj   +1 more source
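The adversarial-training entry above rests on a standard building block: perturbing an input along the sign of the loss gradient so the perturbed example raises the model's loss. A minimal FGSM-style sketch on a toy logistic-regression model illustrates the idea (the weights, input, and function names here are illustrative assumptions, not the Hessian-free second-order method of the paper):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # Binary cross-entropy for a single example.
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # Gradient of the loss w.r.t. the INPUT x, then one signed step
    # of size eps (the fast gradient sign method).
    p = sigmoid(w @ x)
    grad_x = (p - y) * w          # dL/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([1.0, -2.0, 0.5])    # fixed "trained" weights (assumed)
x = np.array([0.2, -0.1, 0.4])    # clean input
y = 1.0                           # true label

x_adv = fgsm(w, x, y, eps=0.1)
assert loss(w, x_adv, y) > loss(w, x, y)  # perturbation raises the loss
```

Adversarial training then mixes such perturbed examples back into the training set; second-order variants like the paper's aim to find stronger perturbations without the cost of explicit Hessian computation.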
