Results 1 to 10 of about 79,918

LPF-Defense: 3D adversarial defense based on frequency analysis. [PDF]

open access: yes; PLoS One, 2023
3D point clouds are increasingly used in various applications, including safety-critical fields. It has recently been demonstrated that deep neural networks can successfully process 3D point clouds. However, these deep networks can be misled by 3D adversarial attacks intentionally designed to perturb a point cloud’s features.
Naderi H   +3 more
europepmc   +5 more sources
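The entry above describes a frequency-based defense for point clouds (the paper itself filters in a spherical-harmonics domain). As a generic illustration of the low-pass idea only — not the paper's implementation — here is a minimal 1-D FFT sketch showing how zeroing high-frequency coefficients removes a small high-frequency perturbation while preserving the underlying signal:

```python
import numpy as np

def lowpass_filter(signal, keep_ratio=0.1):
    """Zero out high-frequency FFT coefficients, keeping only the lowest
    `keep_ratio` fraction of bins, then invert the transform."""
    spectrum = np.fft.rfft(signal)
    cutoff = max(1, int(len(spectrum) * keep_ratio))
    spectrum[cutoff:] = 0.0
    return np.fft.irfft(spectrum, n=len(signal))

# A smooth base signal plus a small high-frequency "adversarial" component.
t = np.linspace(0, 1, 256, endpoint=False)
clean = np.sin(2 * np.pi * 3 * t)
perturbed = clean + 0.05 * np.sin(2 * np.pi * 60 * t)

recovered = lowpass_filter(perturbed, keep_ratio=0.1)
print(np.abs(recovered - clean).max())  # small residual: high-frequency noise removed
```

The cutoff (`keep_ratio`) plays the same role as the filtering threshold in any low-pass defense: too aggressive and legitimate detail is lost, too permissive and the perturbation survives.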

Defense against Universal Adversarial Perturbations [PDF]

open access: yes; 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2018
Recent advances in deep learning show the existence of image-agnostic, quasi-imperceptible perturbations that, when applied to any image, can fool a state-of-the-art network classifier into changing its prediction of the image label.
Akhtar, Naveed, Liu, Jian, Mian, Ajmal
core   +2 more sources
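The entry above concerns a single, image-agnostic perturbation applied to arbitrary inputs. A minimal sketch of the application step (the `delta` below is random, purely a stand-in — real universal perturbations are optimized over a dataset; `eps` is an assumed l-infinity budget):

```python
import numpy as np

def apply_universal_perturbation(image, delta, eps=8 / 255):
    """Add one fixed, image-agnostic perturbation `delta`, clipped to an
    l_inf ball of radius `eps`, keeping pixel values in [0, 1]."""
    delta = np.clip(delta, -eps, eps)
    return np.clip(image + delta, 0.0, 1.0)

rng = np.random.default_rng(0)
# Stand-in perturbation; a real universal perturbation is learned, not sampled.
delta = rng.uniform(-1, 1, size=(32, 32, 3)).astype(np.float32)
img = rng.uniform(0, 1, size=(32, 32, 3)).astype(np.float32)

adv = apply_universal_perturbation(img, delta)
print(bool(np.abs(adv - img).max() <= 8 / 255 + 1e-6))  # → True
```

The key property is that the same `delta` is reused for every image, which is what makes such perturbations cheap to deploy and hard to defend against per-input.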

Universal attention guided adversarial defense using feature pyramid and non-local mechanisms [PDF]

open access: yes; Scientific Reports
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples, significantly hindering the development of deep learning technologies in high-security domains. A key challenge is that current defense methods often lack universality ...
Jiawei Zhao   +6 more
doaj   +2 more sources

Enhancing Adversarial Defense via Brain Activity Integration Without Adversarial Examples. [PDF]

open access: yes; Sensors (Basel)
Adversarial attacks on large-scale vision–language foundation models, such as the contrastive language–image pretraining (CLIP) model, can significantly degrade performance across various tasks by generating adversarial examples that are indistinguishable from the original images to human perception.
Nakajima T   +4 more
europepmc   +4 more sources

Diversity-enhanced reconstruction as plug-in defenders against adversarial perturbations [PDF]

open access: yes; Frontiers in Artificial Intelligence
Deep learning models are susceptible to adversarial examples. In large-scale deployed services, plug-in defenders can efficiently defend against such attacks.
Zeshan Pang   +7 more
doaj   +2 more sources

Survey of Image Adversarial Example Defense Techniques [PDF]

open access: yes; Jisuanji kexue yu tansuo, 2023
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation and defense of adversarial examples for deep neural networks is one of the hot spots.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj   +1 more source

Adversarial Sample Defense Method Based on Noise Dissolution [PDF]

open access: yes; Jisuanji gongcheng, 2022
The security problems exposed by the rapid development of Deep Neural Networks (DNNs) have gradually attracted attention. Since adversarial examples were first defined, many adversarial attacks on DNNs have been proposed, and the complexity ...
YANG Wenxue, WU Fei, GUO Tong, XIAO Limin
doaj   +1 more source

Open-Set Adversarial Defense with Clean-Adversarial Mutual Learning [PDF]

open access: yes; International Journal of Computer Vision, 2022
Accepted by International Journal of Computer Vision (IJCV) 2022. Code will be available at https://github.com/rshaojimmy/ECCV2020-OSAD.
Shao, Rui   +3 more
openaire   +2 more sources

Survey of Adversarial Attacks and Defense Methods for Deep Learning Model [PDF]

open access: yes; Jisuanji gongcheng, 2021
As an important part of artificial intelligence technology, deep learning is widely used in computer vision, natural language processing, and other fields. Although deep learning performs well in tasks such as image classification and object detection, its ...
JIANG Yan, ZHANG Liguo
doaj   +1 more source

Research Progress of Adversarial Defenses on Graphs

open access: yes; Jisuanji kexue yu tansuo, 2021
Graph neural networks (GNNs) have been successfully applied to complex tasks in many fields, but recent studies show that GNNs are vulnerable to graph adversarial attacks, which cause severe performance degradation.
LI Penghui, ZHAI Zhengli, FENG Shu
doaj   +1 more source
