Results 31 to 40 of about 79,918
Hadamard’s Defense Against Adversarial Examples [PDF]
Adversarial images have become an increasing concern in real-world image recognition applications built on deep neural networks (DNNs). We observe that DNN architectures commonly use one-hot encoding after a softmax layer. An attacker can make minute modifications to an input, rendering the changes imperceptible to a human observer.
Angello Hoyos, Ubaldo Ruiz, Edgar Chavez
openaire
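The snippet above notes that attackers add minute, imperceptible modifications to an input. A standard, generic instance of such a perturbation is the fast gradient sign method (FGSM); the toy logistic model and all names below are illustrative, not taken from the paper:

```python
import numpy as np

# Minimal FGSM sketch on a toy logistic-regression "classifier".
# The model (w, b) and input (x, y) are illustrative stand-ins; real
# attacks operate on deep networks and images.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: move x by eps in the sign of the loss gradient."""
    p = sigmoid(w @ x + b)      # predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy)/dx for the logistic model
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = rng.normal(size=8)
y = 1.0

x_adv = fgsm_perturb(x, y, w, b, eps=0.05)
# The perturbation is bounded in the infinity norm by eps,
# which is what makes such changes hard to perceive.
print(np.max(np.abs(x_adv - x)))
```

The sign step caps each coordinate's change at eps, so the attack trades per-pixel magnitude for a coordinated push up the loss surface.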
Deepfake Cross-Model Defense Method Based on Generative Adversarial Network [PDF]
To reduce the social risks caused by abuse of deepfake technology, an active defense method against deep forgery based on a Generative Adversarial Network (GAN) is proposed. Adversarial samples are created by adding an imperceptible perturbation to the original ...
DAI Lei, CAO Lin, GUO Yanan, ZHANG Fan, DU Kangning
doaj
Adversarial Attacks and Defenses in Deep Learning
With the rapid development of artificial intelligence (AI) and deep learning (DL) techniques, it is critical to ensure the security and robustness of deployed algorithms.
Kui Ren et al.
doaj
In this work, we propose a novel defense system against adversarial examples that leverages the unique power of Generative Adversarial Networks (GANs) to generate new adversarial examples for model retraining. To do so, we develop an automated pipeline using ...
Shayan Taheri et al.
doaj
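The snippet above describes retraining a model on generated adversarial examples; the pipeline details are truncated. As a generic illustration of adversarial retraining (not the paper's GAN pipeline), the sketch below crafts perturbed copies of a toy training set and refits on the augmented data; all names are illustrative:

```python
import numpy as np

# Hedged sketch of adversarial retraining on a toy logistic-regression
# model: 1) train, 2) craft adversarial copies, 3) augment, 4) retrain.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logreg(X, y, lr=0.1, steps=500):
    """Plain gradient-descent logistic regression (no bias term)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w -= lr * X.T @ (p - y) / len(y)
    return w

def perturb(X, y, w, eps):
    """One-step sign perturbation of each row toward higher loss."""
    p = sigmoid(X @ w)
    return X + eps * np.sign(np.outer(p - y, w))

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 4)) + np.outer(rng.integers(0, 2, 200), np.ones(4))
y = (X.mean(axis=1) > 0.5).astype(float)

w = fit_logreg(X, y)              # 1) train a base model
X_adv = perturb(X, y, w, eps=0.1) # 2) craft adversarial copies
X_aug = np.vstack([X, X_adv])     # 3) augment the training set
y_aug = np.concatenate([y, y])
w_robust = fit_logreg(X_aug, y_aug)  # 4) retrain on the mix
```

A GAN-based pipeline replaces step 2 with a trained generator, but the retrain-on-augmented-data loop is the same.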
Leveraging linear mapping for model-agnostic adversarial defense
In the ever-evolving landscape of deep learning, novel designs of neural network architectures have been thought to drive progress by enhancing embedded representations.
Huma Jamil et al.
doaj
Gotta Catch 'Em All: Using Honeypots to Catch Adversarial Attacks on Neural Networks
Deep neural networks (DNN) are known to be vulnerable to adversarial attacks. Numerous efforts either try to patch weaknesses in trained models, or try to make it difficult or costly to compute adversarial examples that exploit them.
Li, Bo et al.
core
Deep neural networks (DNNs) have been widely utilized in automatic visual navigation and recognition on modern unmanned aerial vehicles (UAVs), achieving state-of-the-art performance.
Zihao Lu, Hao Sun, Yanjie Xu
doaj
Defense against Adversarial Attacks Using High-Level Representation Guided Denoiser
Neural networks are vulnerable to adversarial examples, which poses a threat to their application in security-sensitive systems. We propose a high-level representation guided denoiser (HGD) as a defense for image classification.
Dong, Yinpeng et al.
core
Adversarial defenses via vector quantization
This is the author-accepted version of our paper published in Neurocomputing.
Zhiyi Dong, Yongyi Mao
openaire
Towards Adversarial Robustness for Multi-Mode Data through Metric Learning
Adversarial attacks have become one of the most serious security issues in widely used deep neural networks. Even though real-world datasets usually have large intra-class variations or multiple modes, most adversarial defense methods, such as adversarial ...
Sarwar Khan et al.
doaj

