Results 11 to 20 of about 5,561,446

A Novel Adversarial Example Detection Method Based on Frequency Domain Reconstruction for Image Sensors [PDF]

open access: yes (Sensors)
Convolutional neural networks (CNNs) have been extensively used in numerous remote sensing image detection tasks owing to their exceptional performance.
Shuaina Huang, Zhiyong Zhang, Bin Song
doaj   +2 more sources

A survey of practical adversarial example attacks

open access: yes (Cybersecurity, 2018)
Adversarial examples revealed the robustness weaknesses of machine learning techniques, which in turn inspired adversaries to exploit those weaknesses to attack systems that employ machine learning.
Lu Sun, Mingtian Tan, Zhe Zhou
doaj   +2 more sources

Adversarial examples in the physical world [PDF]

open access: yes (International Conference on Learning Representations, 2016)
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to ...
Alexey Kurakin   +2 more
semanticscholar   +4 more sources
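The abstract above defines an adversarial example as an input modified very slightly to change a classifier's decision. As an illustrative sketch only (not any paper's method), a fast-gradient-sign-style perturbation step on a hypothetical linear scorer looks like this; the weights `w`, inputs `x`, and budget `eps` are all made up for the example:

```python
# Illustrative FGSM-style perturbation on a toy linear scorer.
# Everything here is hypothetical; real attacks take the gradient of a
# neural network's loss with respect to the input image.

def sign(v):
    # Sign of a scalar: +1, -1, or 0.
    return (v > 0) - (v < 0)

def fgsm_step(x, grad, eps):
    """Move each feature by eps in the direction of the loss gradient's sign."""
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy scorer: score(x) = w . x; to lower the score, follow -w.
w = [0.5, -1.0, 2.0]
x = [1.0, 1.0, 1.0]
score = sum(wi * xi for wi, xi in zip(w, x))          # original score: 1.5
x_adv = fgsm_step(x, [-wi for wi in w], eps=0.1)      # bounded perturbation
adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))  # 1.15, lower than score
```

The key property the definition names is visible here: every feature moves by at most `eps`, yet the score shifts in the attacker's chosen direction.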

Adversarial Example Does Good: Preventing Painting Imitation from Diffusion Models via Adversarial Examples [PDF]

open access: yes (International Conference on Machine Learning, 2023)
Recently, Diffusion Models (DMs) have driven a wave in AI for Art, yet they raise new copyright concerns, as infringers benefit from using unauthorized paintings to train DMs to generate novel paintings in a similar style.
Chumeng Liang   +8 more
semanticscholar   +1 more source

Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection [PDF]

open access: yes (Computer Vision and Pattern Recognition, 2022)
Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries are from the same dataset. However, the problem remains challenging when one tries to generalize the detector to forgeries created by unseen ...
Liang Chen   +4 more
semanticscholar   +1 more source

Code Difference Guided Adversarial Example Generation for Deep Code Models [PDF]

open access: yes (International Conference on Automated Software Engineering, 2023)
Adversarial examples are important for testing and enhancing the robustness of deep code models. Because source code is discrete and must strictly adhere to complex grammar and semantic constraints, the adversarial example generation techniques from other domains ...
Zhao Tian, Junjie Chen, Zhi Jin
semanticscholar   +1 more source

Generating adversarial examples without specifying a target model [PDF]

open access: yes (PeerJ Computer Science, 2021)
Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model in order to work.
Gaoming Yang   +4 more
doaj   +2 more sources

LGV: Boosting Adversarial Example Transferability from Large Geometric Vicinity [PDF]

open access: yes (European Conference on Computer Vision, 2022)
We propose Transferability from Large Geometric Vicinity (LGV), a new technique to increase the transferability of black-box adversarial attacks. LGV starts from a pretrained surrogate model and collects multiple weight sets from a few additional ...
Martin Gubri   +4 more
semanticscholar   +1 more source

Natural Adversarial Examples [PDF]

open access: yes (IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021)
We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that ...
Dan Hendrycks   +4 more
openaire   +2 more sources

Minimum Adversarial Examples

open access: yes (Entropy, 2022)
Deep neural networks in the area of information security face a severe threat from adversarial examples (AEs). Existing AE generation methods use two optimization models: (1) taking attack success as the objective function and limiting the perturbation as the constraint; (2) taking the minimum adversarial perturbation as the target and
Zhenyu Du, Fangzheng Liu, Xuehu Yan
openaire   +3 more sources
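The two optimization models named in the snippet above can be written out in a generic form. This formulation is inferred from the description alone, not taken from the paper; the symbols (perturbation δ, attack loss L, classifier f, budget ε, norm p) are assumptions:

```latex
% Model (1): maximize attack success subject to a perturbation budget
\max_{\delta} \; L\bigl(f(x+\delta),\, y\bigr)
\quad \text{s.t.} \quad \|\delta\|_p \le \epsilon

% Model (2): minimize the perturbation subject to a successful attack
\min_{\delta} \; \|\delta\|_p
\quad \text{s.t.} \quad f(x+\delta) \ne f(x)
```

Model (1) fixes the budget and asks how damaging an attack can be; model (2) fixes success and asks how small the perturbation can be, which is what "minimum adversarial examples" refers to.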
