Adversarial example defense based on image reconstruction [PDF]
The rapid development of deep neural networks (DNNs) has driven their widespread application in image recognition, natural language processing, and autonomous driving.
Yu Zhang (AUST) +3 more
Robust Adversarial Example Detection Algorithm Based on High-Level Feature Differences [PDF]
The threat posed by adversarial examples (AEs) to deep learning applications has garnered significant attention from the academic community. In response, various defense strategies have been proposed, including adversarial example detection.
Hua Mu +4 more
A Novel Adversarial Example Detection Method Based on Frequency Domain Reconstruction for Image Sensors [PDF]
Convolutional neural networks (CNNs) have been extensively used in numerous remote sensing image detection tasks owing to their exceptional performance.
Shuaina Huang, Zhiyong Zhang, Bin Song
Generating adversarial examples without specifying a target model [PDF]
Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model.
Gaoming Yang +4 more
Clustering Approach for Detecting Multiple Types of Adversarial Examples
By applying intentional perturbations to input features, the adversary generates an adversarial example that deceives the deep learning model.
Seok-Hwan Choi +3 more
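The perturbation-based generation described in the entries above can be illustrated with a minimal FGSM-style sketch. Everything here (the toy logistic-regression "model", its hand-picked weights, and the step size) is an illustrative assumption, not the method of any paper listed:

```python
import numpy as np

# Toy logistic-regression "model": fixed weights, sigmoid output.
# These weights are made up purely for illustration.
w = np.array([0.9, -0.6, 0.3])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(np.dot(w, x) + b)

def fgsm(x, y, eps=0.25):
    # FGSM-style step: move the input along the sign of the loss
    # gradient w.r.t. the input. For binary cross-entropy with
    # label y, d(loss)/dx = (p - y) * w.
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([1.0, 1.0, 1.0])   # clean input, true label 1
x_adv = fgsm(x, y=1.0)

print(predict(x), predict(x_adv))  # adversarial score is lower
```

The same sign-of-gradient step is what makes FGSM-family attacks fast: one forward/backward pass per input, which is why papers on attack speed use it as the baseline.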
Adversarial Examples Detection Method Based on Image Denoising and Compression [PDF]
Numerous deep learning achievements in the field of computer vision have been widely applied in real life. However, adversarial examples can cause deep learning models to produce incorrect predictions with high confidence, resulting in serious security consequences.
Feiyu WANG, Fan ZHANG, Jiayu DU, Hongle LEI, Xiaofeng QI
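A denoising-based detector of the kind this entry describes can be sketched as a prediction-inconsistency check: mildly smooth the input and flag it when the model's score shifts sharply, on the premise that adversarial noise is high-frequency while clean inputs are stable under smoothing. The toy model, weights, and threshold below are illustrative assumptions, not the paper's method:

```python
import numpy as np

# Toy linear model with alternating-sign weights, sigmoid output.
w = np.array([1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0]) * 0.5

def score(x):
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

def denoise(x):
    # 3-tap moving average as a stand-in for an image denoiser
    # or JPEG-style compression step.
    return np.convolve(x, np.ones(3) / 3.0, mode="same")

def is_adversarial(x, tau=0.2):
    # Flag the input if denoising moves the score by more than tau.
    return abs(score(x) - score(denoise(x))) > tau

x_clean = np.ones(8)                     # smooth clean input
x_adv = x_clean + 0.5 * np.sign(w)       # high-frequency FGSM-style noise

print(is_adversarial(x_clean), is_adversarial(x_adv))  # → False True
```

Smoothing barely changes the score of the flat clean input but largely cancels the alternating perturbation, so only the adversarial input crosses the threshold.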
A Universal Detection Method for Adversarial Examples and Fake Images
Deep-learning technologies have shown impressive performance on many tasks in recent years. However, there are multiple serious security risks when using deep-learning technologies. For example, state-of-the-art deep-learning technologies are vulnerable ...
Jiewei Lai +3 more
Multi-target Category Adversarial Example Generating Algorithm Based on GAN [PDF]
Although deep neural networks perform well in many areas, research shows that they are vulnerable to attacks from adversarial examples. There are many algorithms for attacking neural networks, but the attack speed of most attack algorithms ...
LI Jian, GUO Yan-ming, YU Tian-yuan, WU Yu-lun, WANG Xiang-han, LAO Song-yang
Adversarial Attack and Defense on Deep Neural Network-Based Voice Processing Systems: An Overview
Voice Processing Systems (VPSes), now widely deployed, have become deeply involved in people’s daily lives, helping drive the car, unlock the smartphone, make online purchases, etc.
Xiaojiao Chen, Sheng Li, Hao Huang
Targeted Speech Adversarial Example Generation With Generative Adversarial Network
Although neural network-based speech recognition models have enjoyed significant success in many acoustic systems, they are susceptible to attack by adversarial examples.
Donghua Wang +4 more

