Efficient Adversarial Training With Transferable Adversarial Examples
Adversarial training is an effective defense method for protecting classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude more training time, due to the high cost of generating strong adversarial examples during training.
Zheng, Haizhong, et al.
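The cost the abstract above refers to comes from the inner attack loop: standard adversarial training (e.g., PGD-style) runs K gradient passes to craft each adversarial example before every single weight update. A minimal numpy sketch on a toy logistic-regression model illustrates this; the model, loss, and all hyperparameters (`eps`, `alpha`, `steps`) are illustrative assumptions, not the paper's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps, alpha, steps):
    """K-step PGD attack inside an L-infinity ball of radius eps.

    Each iteration costs one full gradient pass through the model,
    which is where adversarial training's overhead comes from.
    """
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        # d(BCE loss)/dx for a linear model: (p - y) * w
        x_adv = x_adv + alpha * np.sign((p - y) * w)
        # Project back into the eps-ball around the clean input.
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

def adv_train_step(x, y, w, b, lr=0.1, eps=0.1, alpha=0.03, steps=10):
    """One adversarial-training step: ~(steps + 1) gradient passes
    instead of the single pass of standard training."""
    x_adv = pgd_attack(x, y, w, b, eps, alpha, steps)
    p = sigmoid(w @ x_adv + b)
    grad_w = (p - y) * x_adv  # BCE gradient w.r.t. the weights
    grad_b = (p - y)
    return w - lr * grad_w, b - lr * grad_b
```

With `steps=10`, each training step is roughly 11x the gradient cost of clean training, which is the overhead that transferable-example approaches such as the one above try to amortize.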
Clustering Approach for Detecting Multiple Types of Adversarial Examples
By applying intentional feature perturbations to the input of a deep learning model, an adversary generates adversarial examples that deceive the model.
Seok-Hwan Choi, et al.
Adversarial Example Generation with AdaBelief Optimizer and Crop Invariance
Deep neural networks are vulnerable to adversarial examples, which are crafted by applying small, human-imperceptible perturbations to the original images, misleading the networks into producing incorrect predictions.
Bo Yang, et al.
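The "small, human-imperceptible perturbations" described above are classically produced by the fast gradient sign method (FGSM): take one step of size `eps` in the sign of the loss gradient with respect to the input. A minimal numpy sketch on an assumed logistic-regression model (not the optimizer-based method of the entry above):

```python
import numpy as np

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM attack on a toy logistic-regression model.

    x: input vector; y: label in {0, 1}; (w, b): model parameters;
    eps: per-feature L-infinity perturbation budget.
    """
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))   # sigmoid prediction
    grad_x = (p - y) * w           # d(BCE loss)/dx for a linear model
    # Step in the sign of the gradient: maximally increases the loss
    # under the L-infinity constraint |x_adv - x| <= eps.
    return x + eps * np.sign(grad_x)
```

For a linear model this single step provably increases the loss; methods like the AdaBelief-based generator above replace the single sign step with an adaptive, multi-step update to improve transferability.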
Adversarial Examples Detection Method Based on Image Denoising and Compression
Numerous deep learning achievements in the field of computer vision have been widely applied in real life. However, adversarial examples can cause deep learning models to produce highly confident misclassifications, resulting in serious security consequences.
Feiyu Wang, Fan Zhang, Jiayu Du, Hongle Lei, Xiaofeng Qi
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning
Appeared in ICIP ...
Zhang, Jie, et al.
FADER: Fast Adversarial Example Rejection
Deep neural networks are vulnerable to adversarial examples, i.e., carefully crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations, a behavior normally exhibited by adversarial ...
Crecchi, Francesco, et al.
A Universal Detection Method for Adversarial Examples and Fake Images
Deep-learning technologies have shown impressive performance on many tasks in recent years. However, there are multiple serious security risks when using deep-learning technologies. For example, state-of-the-art deep-learning technologies are vulnerable ...
Jiewei Lai, et al.
Adversarial Example Generation with Syntactically Controlled Paraphrase Networks
We propose syntactically controlled paraphrase networks (SCPNs) and use them to generate adversarial examples. Given a sentence and a target syntactic form (e.g., a constituency parse), SCPNs are trained to produce a paraphrase of the sentence with the ...
Mohit Iyyer, et al.
Multi-target Category Adversarial Example Generating Algorithm Based on GAN
Although deep neural networks perform well in many areas, research shows that they are vulnerable to attacks from adversarial examples. There are many algorithms for attacking neural networks, but the attack speed of most attack algorithms ...
Li Jian, Guo Yan-ming, Yu Tian-yuan, Wu Yu-lun, Wang Xiang-han, Lao Song-yang
Adversarial Attack and Defense on Deep Neural Network-Based Voice Processing Systems: An Overview
Voice Processing Systems (VPSes), now widely deployed, have become deeply involved in people's daily lives, helping users drive cars, unlock smartphones, make online purchases, and more.
Xiaojiao Chen, Sheng Li, Hao Huang

