Results 11 to 20 of about 79,918
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
Muzammal Naseer +4 more
openaire +3 more sources
Adversarial example defense algorithm for MNIST based on image reconstruction
With the popularization of deep learning, increasing attention has been paid to its security issues. An adversarial example adds a small perturbation to the original image that causes a deep learning model to misclassify the image, which ...
Zhongyuan QIN +3 more
doaj +3 more sources
Clustering Approach for Detecting Multiple Types of Adversarial Examples
With intentional feature perturbations to a deep learning model, the adversary generates an adversarial example to deceive the deep learning model.
Seok-Hwan Choi +3 more
doaj +1 more source
An adversarial example, an input instance with small, intentional feature perturbations aimed at machine learning models, represents a concrete problem in artificial intelligence safety.
Seok-Hwan Choi +3 more
doaj +1 more source
Adversarial example defense based on image reconstruction [PDF]
The rapid development of deep neural networks (DNN) has promoted the widespread application of image recognition, natural language processing, and autonomous driving.
Yu Zhang (AUST) +3 more
doaj +2 more sources
A Defense Method Against FGSM Adversarial Attack [PDF]
Intelligent ship recognition has been widely used in the military, but it also brings increasingly serious security issues. Even high-performance classification models are still vulnerable to attacks from adversarial examples. For Fast Gradient Sign
WANG Xiaopeng, LUO Wei, QIN Ke, YANG Jintao, WANG Min
doaj +1 more source
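The FGSM attack named in the entry above is simple enough to sketch directly: it perturbs the input in the direction of the sign of the loss gradient, x_adv = x + ε · sign(∇ₓL). A minimal illustration on a toy logistic-regression "model" (the weights, input, and ε below are hypothetical, not taken from the paper):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def fgsm_perturb(x, y, w, eps):
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dL/dx),
    where L is the cross-entropy loss of a logistic-regression
    model p = sigmoid(w . x) with true label y."""
    p = sigmoid(dot(w, x))
    grad_x = [(p - y) * wi for wi in w]  # analytic dL/dx for sigmoid + cross-entropy
    return [xi + eps * sign(g) for xi, g in zip(x, grad_x)]

# Toy example: a clean input confidently scored for class y = 1
w = [1.0, -2.0, 0.5]
x = [0.9, -0.4, 0.3]
x_adv = fgsm_perturb(x, 1.0, w, eps=0.3)

# The attack lowers the model's score for the true class
print(sigmoid(dot(w, x)), sigmoid(dot(w, x_adv)))
```

Note that each feature moves by at most ε, which is what makes the perturbation small, while the single gradient-sign step is chosen to increase the loss as fast as possible under that constraint.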
Open-Set Adversarial Defense [PDF]
Accepted by ECCV ...
Rui Shao +3 more
openaire +2 more sources
Demotivate Adversarial Defense in Remote Sensing [PDF]
Convolutional neural networks are currently the state-of-the-art algorithms for many remote sensing applications such as semantic segmentation or object detection. However, these algorithms are extremely sensitive to over-fitting, domain change and adversarial examples specifically designed to fool them.
Adrien Chan-Hon-Tong +2 more
openaire +2 more sources
Adversarial Attacks and Defenses
Despite the recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully-crafted perturbations to input, where the perturbations are almost imperceptible to humans, but can cause models to make wrong predictions.
Ninghao Liu +4 more
openaire +2 more sources
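The "almost imperceptible to humans" property described in the entry above is usually formalized as an L∞ bound: every feature of the adversarial input must stay within ε of the clean input. A minimal sketch of projecting a candidate perturbation back into that ε-ball (the values are illustrative, not from the paper):

```python
def clip_to_linf_ball(x_adv, x, eps):
    """Project a perturbed input back into the L-infinity ball of
    radius eps around the clean input x (per-feature clamp)."""
    return [min(max(a, xi - eps), xi + eps) for a, xi in zip(x_adv, x)]

x = [0.2, 0.5, 0.8]          # clean input
x_adv = [0.9, 0.45, 0.75]    # candidate perturbed input
x_proj = clip_to_linf_ball(x_adv, x, eps=0.1)
# After projection, no feature differs from x by more than eps
print(x_proj)
```

Iterative attacks (and defenses that reason about attack budgets) apply this projection after every gradient step, so the perturbation never exceeds the imperceptibility budget.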
Defense against Adversarial Swarms with Parameter Uncertainty [PDF]
This paper addresses the problem of optimal defense of a high-value unit (HVU) against a large-scale swarm attack. We discuss multiple models for intra-swarm cooperation strategies and provide a framework for combining these cooperative models with HVU tracking and adversarial interaction forces.
Claire Walton +4 more
openaire +5 more sources