Results 21 to 30 of about 173,113

Natural Adversarial Examples [PDF]

open access: yes2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that ...
Hendrycks, Dan   +4 more
openaire   +2 more sources
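The "adversarial filtration" this entry describes amounts to keeping only those real, unmodified images that a fixed, already-trained classifier gets wrong. A minimal sketch of that idea, where the filter model, dataset interface, and batch size are illustrative assumptions rather than the authors' exact pipeline:

```python
import torch
from torchvision import models
from torch.utils.data import DataLoader

# Fixed "filter" classifier; an ImageNet-pretrained ResNet-50 is an assumption here.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1).eval()

@torch.no_grad()
def adversarially_filter(dataset, batch_size=64):
    """Return indices of real, unmodified examples the filter model misclassifies."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=False)
    kept, offset = [], 0
    for images, labels in loader:
        preds = model(images).argmax(dim=1)
        wrong = (preds != labels).nonzero(as_tuple=True)[0]
        kept.extend((wrong + offset).tolist())
        offset += images.size(0)
    return kept  # candidate "natural adversarial examples" for the filtered dataset
```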

Survey of Image Adversarial Example Defense Techniques [PDF]

open access: yesJisuanji kexue yu tansuo, 2023
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of, and defense against, adversarial examples for deep neural networks is one of the current research hotspots.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj   +1 more source

Efficient Adversarial Training With Transferable Adversarial Examples [PDF]

open access: yes2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
Adversarial training is an effective defense method to protect classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude more training time due to the high cost of generating strong adversarial examples during training.
Zheng, Haizhong   +4 more
openaire   +2 more sources
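For context on why adversarial training is so costly: the standard formulation regenerates a multi-step (e.g., PGD) adversarial example for every batch, so each training step pays for several extra forward/backward passes. The sketch below shows that baseline loop, not the transferable-example scheme of the paper itself; the model, optimizer, and hyperparameters are illustrative assumptions:

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
    """Multi-step L-infinity PGD: each step costs an extra forward/backward pass."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One standard adversarial-training step: train on freshly generated PGD examples."""
    model.eval()                      # craft the attack against the current weights
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```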

Fooling Examples: Another Intriguing Property of Neural Networks

open access: yesSensors, 2023
Neural networks have been proven to be vulnerable to adversarial examples; these are examples that humans can still recognize correctly, yet for which neural networks give incorrect predictions.
Ming Zhang, Yongkang Chen, Cheng Qian
doaj   +1 more source

Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning

open access: yes2022 IEEE International Conference on Image Processing (ICIP), 2022
Appeared in ICIP ...
Zhang, Jie   +3 more
openaire   +2 more sources

FADER: Fast adversarial example rejection [PDF]

open access: yesNeurocomputing, 2022
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations - a behavior normally exhibited by adversarial ...
Crecchi, Francesco   +4 more
openaire   +4 more sources
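The detect-and-reject idea this abstract refers to can be sketched as measuring how far an input's intermediate-layer representation falls from the legitimate training data and rejecting inputs beyond a threshold. The centroid-distance detector below is a simplified stand-in for the paper's actual mechanism; the feature extractor, threshold rule, and the assumption that every class appears in the fitting data are all illustrative choices:

```python
import torch

class FeatureRejector:
    """Reject inputs whose layer representation is anomalously far from
    per-class centroids estimated on legitimate training data."""

    def __init__(self, feature_fn, num_classes):
        self.feature_fn = feature_fn        # e.g. penultimate-layer activations, (N, D) tensor
        self.centroids = [None] * num_classes
        self.threshold = None

    @torch.no_grad()
    def fit(self, x_train, y_train, quantile=0.99):
        feats = self.feature_fn(x_train)
        for c in range(len(self.centroids)):
            self.centroids[c] = feats[y_train == c].mean(dim=0)
        centers = torch.stack(self.centroids)
        d = torch.cdist(feats, centers).min(dim=1).values
        self.threshold = torch.quantile(d, quantile)    # accept ~99% of clean data

    @torch.no_grad()
    def predict(self, x):
        feats = self.feature_fn(x)
        centers = torch.stack(self.centroids)
        dists = torch.cdist(feats, centers)
        labels = dists.argmin(dim=1)
        reject = dists.min(dim=1).values > self.threshold
        return labels, reject                # reject=True flags likely adversarial inputs
```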

Adversarial Examples Generation Method Based on Image Color Random Transformation [PDF]

open access: yesJisuanji kexue, 2023
Although deep neural networks (DNNs) perform well on most classification tasks, they are vulnerable to adversarial examples, calling the security of DNNs into question. Research on generating strongly aggressive adversarial examples can help ...
BAI Zhixu, WANG Hengjun, GUO Kexiang
doaj   +1 more source
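As a rough illustration of generating candidates by random color transformation (the specific transformations, ranges, and acceptance test here are assumptions, not the authors' algorithm), one can repeatedly apply random hue, saturation, and brightness shifts that stay semantically harmless to a human and keep the first transformed image the target model misclassifies:

```python
import random
import torch
import torchvision.transforms.functional as TF

@torch.no_grad()
def random_color_adversarial(model, image, label, tries=100):
    """Search random color transformations for one that flips the model's prediction.
    `image` is a (C, H, W) float tensor in [0, 1]; returns None if no candidate is found."""
    for _ in range(tries):
        candidate = TF.adjust_hue(image, random.uniform(-0.1, 0.1))
        candidate = TF.adjust_saturation(candidate, random.uniform(0.7, 1.3))
        candidate = TF.adjust_brightness(candidate, random.uniform(0.8, 1.2))
        pred = model(candidate.unsqueeze(0)).argmax(dim=1).item()
        if pred != label:
            return candidate        # a color-transformed, misclassified candidate
    return None
```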

Semantic Adversarial Examples [PDF]

open access: yes2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire   +2 more sources
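The "small perturbations that maximize the model prediction error" that this entry contrasts itself against are typically computed with gradient-based attacks such as one-step FGSM; a minimal sketch under that assumption (the semantic attacks of the paper instead perturb color components such as hue rather than raw pixel values):

```python
import torch
import torch.nn.functional as F

def fgsm_perturbation(model, x, y, eps=8/255):
    """One-step FGSM: a small L-infinity perturbation that increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```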

Adversarial Examples: Opportunities and Challenges [PDF]

open access: yesIEEE Transactions on Neural Networks and Learning Systems, 2019
16 pages, 13 figures, 5 ...
Jiliang Zhang, Chen Li
openaire   +3 more sources
