Results 21 to 30 of about 172,371

Developing a Robust Defensive System against Adversarial Examples Using Generative Adversarial Networks

open access: yesBig Data and Cognitive Computing, 2020
In this work, we propose a novel defense system against adversarial examples leveraging the unique power of Generative Adversarial Networks (GANs) to generate new adversarial examples for model retraining. To do so, we develop an automated pipeline using ...
Shayan Taheri   +3 more
doaj   +1 more source
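As a minimal sketch of the general idea behind such a retraining pipeline (generate adversarial examples with a trained generator, then fold them back into the classifier's training data), the snippet below is illustrative only; the model, the generator's call signature, and its latent_dim attribute are hypothetical placeholders, not the authors' implementation.

```python
# Hypothetical sketch: retrain a classifier on adversarial examples produced by a
# pre-trained generator, as in GAN-based defense pipelines (placeholder interfaces).
import torch
import torch.nn.functional as F

def adversarial_retraining_step(classifier, generator, x, y, optimizer):
    """One retraining step on a mix of clean and generated adversarial inputs."""
    classifier.train()
    with torch.no_grad():
        # Assumed interface: the generator maps clean inputs plus noise to adversarial variants.
        z = torch.randn(x.size(0), generator.latent_dim, device=x.device)
        x_adv = generator(x, z)
    # Train on clean and adversarial batches with the original labels.
    logits = classifier(torch.cat([x, x_adv], dim=0))
    targets = torch.cat([y, y], dim=0)
    loss = F.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```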

Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network

open access: yesRemote Sensing, 2021
Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to attacks by adversarial examples, which causes security problems.
Chuan Du, Lei Zhang
doaj   +1 more source

Hadamard’s Defense Against Adversarial Examples

open access: yesIEEE Access, 2021
Adversarial images have become an increasing concern in real-world image recognition applications based on deep neural networks (DNNs). We observed that all of these DNN architectures use one-hot encoding after a softmax layer.
Angello Hoyos, Ubaldo Ruiz, Edgar Chavez
doaj   +1 more source
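The observation about one-hot encoding suggests replacing one-hot class targets with rows of a Hadamard matrix, whose codewords are maximally separated; the following sketch illustrates that idea under this assumption and is not the paper's exact construction.

```python
# Hypothetical sketch: use Hadamard codewords as class targets instead of one-hot
# vectors, so that target codes are mutually far apart in Hamming distance.
import numpy as np
from scipy.linalg import hadamard

def hadamard_targets(num_classes, code_length):
    """Return a (num_classes, code_length) matrix of {0,1} Hadamard codewords."""
    H = hadamard(code_length)          # entries in {-1, +1}; code_length must be a power of 2
    H = (H + 1) // 2                   # map to {0, 1}
    assert num_classes <= code_length, "need at least as many codewords as classes"
    return H[:num_classes].astype(np.float32)

def decode(outputs, codebook):
    """Assign each output vector to the nearest codeword (minimum L2 distance)."""
    d = np.linalg.norm(outputs[:, None, :] - codebook[None, :, :], axis=-1)
    return d.argmin(axis=1)
```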

Not all adversarial examples require a complex defense: identifying over-optimized adversarial examples with IQR-based logit thresholding [PDF]

open access: yes, 2019
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point ...
De Neve, Wesley   +2 more
core   +2 more sources
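A minimal sketch of the flagging rule the title describes, under the assumption that "over-optimized" inputs show unusually large top logits relative to the interquartile range of clean-data logits; the exact statistic and threshold used in the paper may differ.

```python
# Hypothetical sketch: flag inputs whose maximum logit is an IQR outlier
# relative to the distribution of maximum logits on clean (reference) data.
import numpy as np

def iqr_logit_threshold(clean_max_logits, k=1.5):
    """Compute an upper threshold from clean-data top logits using Tukey's IQR rule."""
    q1, q3 = np.percentile(clean_max_logits, [25, 75])
    return q3 + k * (q3 - q1)

def is_over_optimized(logits, threshold):
    """True for inputs whose top logit exceeds the IQR-based threshold."""
    return logits.max(axis=-1) > threshold
```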

Semantic Adversarial Examples [PDF]

open access: yes2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire   +2 more sources
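For contrast with the semantic perturbations studied in this paper, the conventional small-perturbation approach the abstract refers to can be sketched with the fast gradient sign method (FGSM); this generic example is not the authors' method, which instead perturbs natural attributes such as color.

```python
# Generic FGSM sketch (Goodfellow et al.): a small L-infinity-bounded perturbation
# in the direction that increases the model's loss; not the method of this paper.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()          # step along the sign of the input gradient
    return x_adv.clamp(0, 1).detach()
```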

Adversarial Examples Generation Method Based on Random Translation Transformation [PDF]

open access: yesJisuanji gongcheng, 2022
Image classification models based on Deep Neural Networks (DNNs) can recognize images with an accuracy that even exceeds that of the human eye. However, they are vulnerable to attacks from adversarial examples because of the fragility of the ...
LI Zheming, ZHANG Hengwei, MA Junqiang, WANG Jindong, YANG Bo
doaj   +1 more source
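As a rough illustration of what a translation-based transformation might look like in adversarial example generation, the sketch below randomly shifts an image tensor before taking a gradient step; the paper's actual procedure is not reproduced here, and the helper names are purely illustrative.

```python
# Hypothetical sketch: apply a random translation to the input before computing the
# gradient-based perturbation, so the example is optimized under a shifted view.
import torch
import torch.nn.functional as F

def random_translate(x, max_shift=4):
    """Randomly shift the image by up to max_shift pixels along height and width."""
    dx, dy = torch.randint(-max_shift, max_shift + 1, (2,))
    return torch.roll(x, shifts=(int(dy), int(dx)), dims=(-2, -1))

def translated_fgsm_step(model, x, y, eps=8 / 255):
    x_t = random_translate(x).clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_t), y)
    loss.backward()
    # Apply the sign of the gradient computed on the translated view to the original image.
    return (x + eps * x_t.grad.sign()).clamp(0, 1).detach()
```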

Impact of adversarial examples on deep learning models for biomedical image segmentation [PDF]

open access: yes, 2019
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples.
C Pena-Betancor   +3 more
core   +4 more sources

Adversarial attacks and defenses in deep learning

open access: yes网络与信息安全学报, 2020
An adversarial example is an image modified with imperceptible perturbations that can make deep neural networks decide wrongly. Adversarial examples seriously threaten the availability of the system and bring great security risks to the ...
LIU Ximeng   +2 more
doaj   +3 more sources
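The standard formalization behind this definition, stated here as a generic textbook note rather than anything specific to this survey: an adversarial example x' for a classifier f at a correctly classified input x satisfies

    x' = x + \delta, \qquad \|\delta\|_\infty \le \varepsilon, \qquad f(x') \ne f(x),

i.e. a norm-bounded, visually imperceptible perturbation \delta flips the prediction.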

A Multimodal Adversarial Attack Framework Based on Local and Random Search Algorithms

open access: yesInternational Journal of Computational Intelligence Systems, 2021
Although neural networks have driven breakthrough progress on many problems in computer vision and natural language processing, adversarial attacks remain a serious potential problem in many neural network-based applications.
Zibo Yi, Jie Yu, Yusong Tan, Qingbo Wu
doaj   +1 more source
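A minimal sketch of a query-based random-search attack of the general kind the title refers to: repeatedly sample bounded random perturbations and keep whichever most lowers the true-class score. The paper's multimodal framework and local-search component are not reproduced here; score_fn and its output shape are assumptions.

```python
# Hypothetical sketch: simple black-box random-search attack that only queries the
# model for class scores, keeping perturbations that lower the true-class score.
import torch

def random_search_attack(score_fn, x, y, eps=8 / 255, steps=200):
    """score_fn(x) returns class scores for a single input x; y is the true label index."""
    best = x.clone()
    best_score = score_fn(best)[..., y].item()
    for _ in range(steps):
        delta = (torch.rand_like(x) * 2 - 1) * eps      # uniform noise in [-eps, eps]
        candidate = (x + delta).clamp(0, 1)
        score = score_fn(candidate)[..., y].item()
        if score < best_score:                          # lower true-class score is better
            best, best_score = candidate, score
    return best
```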

Adversarial Examples: Opportunities and Challenges [PDF]

open access: yesIEEE Transactions on Neural Networks and Learning Systems, 2019
16 pages, 13 figures, 5 ...
Jiliang Zhang, Chen Li
openaire   +3 more sources
