Results 11 to 20 of about 79,418
Defense against Universal Adversarial Perturbations [PDF]
Recent advances in Deep Learning show the existence of image-agnostic, quasi-imperceptible perturbations that, when applied to 'any' image, can fool a state-of-the-art network classifier into changing its prediction about the image label.
Akhtar, Naveed, Liu, Jian, Mian, Ajmal
core +2 more sources
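The snippet above refers to a single image-agnostic perturbation that can be added to any input. Below is a minimal sketch of how such a universal perturbation would be applied at test time; the tensor names (`uap`, `classifier`) and the L-infinity budget are illustrative assumptions, not the authors' released code.

```python
import torch

def apply_universal_perturbation(images, uap, epsilon=10 / 255):
    """Add one shared perturbation to a whole batch of images.

    images: float tensor in [0, 1], shape (N, C, H, W)
    uap:    float tensor, shape (C, H, W) -- the universal perturbation
    """
    # Keep the perturbation quasi-imperceptible via an L-infinity bound.
    uap = uap.clamp(-epsilon, epsilon)
    # The same perturbation is broadcast across every image in the batch.
    return torch.clamp(images + uap.unsqueeze(0), 0.0, 1.0)

# Hypothetical usage with an assumed `classifier` and image batch:
# adv = apply_universal_perturbation(batch, uap)
# flipped = classifier(adv).argmax(1) != classifier(batch).argmax(1)
```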
LPF-Defense: 3D adversarial defense based on frequency analysis. [PDF]
3D point clouds are increasingly being used in various applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be fooled by 3D adversarial attacks intentionally designed to perturb a point cloud's features.
Naderi H +3 more
europepmc +5 more sources
Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning [PDF]
Accepted by International Journal of Computer Vision (IJCV) 2022. Code will be available at https://github.com/rshaojimmy/ECCV2020-OSAD.
Shao, Rui +3 more
openaire +2 more sources
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)
Muzammal Naseer +4 more
openaire +3 more sources
Adversarial example defense based on image reconstruction [PDF]
The rapid development of deep neural networks (DNN) has promoted the widespread application of image recognition, natural language processing, and autonomous driving.
Yu (AUST) Zhang +3 more
doaj +2 more sources
A Defense Method Against FGSM Adversarial Attack [PDF]
Intelligent ship recognition has been widely used in the military, but it also brings increasingly serious security issues. Even high-performance classification models remain vulnerable to attacks from adversarial examples. For Fast Gradient Sign ...
WANG Xiaopeng, LUO Wei, QIN Ke, YANG Jintao, WANG Min
doaj +1 more source
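The entry above concerns defending against the Fast Gradient Sign Method (FGSM). For context, a minimal FGSM attack sketch in PyTorch is shown below; the `model` and `epsilon` names are illustrative assumptions, not the paper's implementation.

```python
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """Return x' = clip(x + epsilon * sign(grad_x loss), 0, 1) -- one FGSM step."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Single gradient-sign step, then clamp back to the valid image range.
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```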
Enhancing Adversarial Defense via Brain Activity Integration Without Adversarial Examples. [PDF]
Adversarial attacks on large-scale vision–language foundation models, such as the contrastive language–image pretraining (CLIP) model, can significantly degrade performance across various tasks by generating adversarial examples that are indistinguishable from the original images to human perception.
Nakajima T +4 more
europepmc +4 more sources
Open-Set Adversarial Defense [PDF]
Accepted by ECCV ...
Rui Shao +3 more
openaire +2 more sources
Demotivate Adversarial Defense in Remote Sensing [PDF]
Convolutional neural networks are currently the state-of-the-art algorithms for many remote sensing applications such as semantic segmentation or object detection. However, these algorithms are extremely sensitive to over-fitting, domain change and adversarial examples specifically designed to fool them.
Chan-Hon-Tong, Adrien +2 more
openaire +2 more sources
Adversarial Attacks and Defenses
Despite the recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully crafted perturbations to inputs; these perturbations are almost imperceptible to humans but can cause models to make wrong predictions.
Liu, Ninghao +4 more
openaire +2 more sources
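The snippet above describes attacks that add near-imperceptible perturbations to inputs. A common defense against them is adversarial training, i.e., training on perturbed inputs; the sketch below illustrates one such step under assumed names (`model`, `optimizer`) and an FGSM-style perturbation, and is not drawn from the cited survey.

```python
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=8 / 255):
    """One training step on adversarially perturbed inputs (FGSM-style)."""
    # Craft adversarial examples with a single gradient-sign step.
    x_adv = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_adv), y).backward()
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

    # Update the model on the perturbed inputs instead of the clean ones.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```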

