Results 71 to 80 of about 173,113

Spatially Transformed Adversarial Examples

open access: yes, 2018
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the $\mathcal{L}_p$ distance for penalizing perturbations.
Xiao, Chaowei   +5 more
openaire   +2 more sources
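
The abstract contrasts this paper's spatial transformations with attacks that penalize the $\mathcal{L}_p$ distance of the perturbation. As a point of reference, a minimal PyTorch sketch of such an $\mathcal{L}_2$-penalized attack (the optimizer, penalty weight, and step count are illustrative assumptions, not any specific paper's settings):

```python
import torch
import torch.nn.functional as F

def l2_penalized_attack(model, x, y, c=1.0, lr=0.01, steps=100):
    """Maximize the classifier's loss while penalizing ||delta||_2."""
    delta = torch.zeros_like(x, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        x_adv = (x + delta).clamp(0, 1)  # keep pixels in a valid range
        # Negative cross-entropy pushes toward misclassification;
        # the L2 norm term keeps the perturbation small.
        loss = -c * F.cross_entropy(model(x_adv), y) + delta.norm(p=2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (x + delta).clamp(0, 1).detach()
```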

A survey of practical adversarial example attacks

open access: yes, Cybersecurity, 2018
Adversarial examples revealed the weakness of machine learning techniques in terms of robustness, and in turn inspired adversaries to exploit that weakness to attack systems employing machine learning.
Lu Sun, Mingtian Tan, Zhe Zhou
doaj   +1 more source

Detecting Adversarial Examples

open access: yes
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples. While numerous successful adversarial attacks have been proposed, defenses against these attacks remain relatively understudied. Existing defense approaches either focus on negating the effects of perturbations caused by the attacks to restore the DNNs' original ...
Mumcu, Furkan, Yilmaz, Yasin
openaire   +2 more sources

Generating Natural Adversarial Examples

open access: yes, 2017
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the ...
Zhao, Zhengli   +2 more
openaire   +2 more sources
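
This paper searches for adversarial examples in the latent space of a generative model rather than perturbing pixels directly. A minimal random-search sketch of that idea (the generator G, the approximate inverter inv, and the search radius are illustrative assumptions, not the paper's exact algorithm):

```python
import torch

def natural_adv_search(G, f, inv, x, y, sigma=0.1, trials=200):
    """Look for a latent-space neighbor of x that G decodes into an
    input the classifier f misclassifies, yet still looks natural."""
    z = inv(x)                                   # approximate G^-1(x)
    for _ in range(trials):
        z_new = z + sigma * torch.randn_like(z)  # small latent nudge
        x_new = G(z_new)
        if f(x_new).argmax(dim=1).item() != y:   # assumes one example
            return x_new
    return None                                  # no adversary found
```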

Adversarial Examples in the Physical World

open access: yes, 2018
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it.
Kurakin, Alexey   +2 more
openaire   +2 more sources
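
The canonical way to produce such a slightly modified input is the fast gradient sign method (FGSM), which this paper extends with an iterative variant. A minimal single-step sketch (the epsilon value is an illustrative assumption):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.03):
    """One gradient step: nudge every pixel by eps along the sign of
    the loss gradient, which often flips the model's prediction."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()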

Synthesizing Robust Adversarial Examples

open access: yes, 2017
Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems.
Athalye, Anish   +3 more
openaire   +2 more sources
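
The paper's remedy, commonly known as Expectation Over Transformation (EOT), optimizes the attack over a distribution of such transformations rather than a single view. A minimal sketch of one optimization step, assuming `transforms` is a list of differentiable callables (the transformation set and step size are illustrative assumptions):

```python
import torch
import torch.nn.functional as F

def eot_step(model, x_adv, y, transforms, alpha=2/255):
    """One attack step whose gradient is averaged over sampled
    transformations, so the perturbation survives all of them."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = torch.stack(
        [F.cross_entropy(model(t(x_adv)), y) for t in transforms]
    ).mean()
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + alpha * grad.sign()).clamp(0, 1).detach()
```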

Adversarial Examples for CNN-Based Malware Detectors

open access: yes, IEEE Access, 2019
The convolutional neural network (CNN)-based models have achieved tremendous breakthroughs in many end-to-end applications, such as image identification, text classification, and speech recognition.
Bingcai Chen   +4 more
doaj   +1 more source

Adversarial Examples and Metrics

open access: yes, 2020
25 pages, 1 figure, under submission, fixed typos from previous ...
Döttling, Nico   +3 more
openaire   +2 more sources

Cross-Gen: An Efficient Generator Network for Adversarial Attacks on Cross-Modal Hashing Retrieval

open access: yes, Future Internet
Research on deep neural network (DNN)-based multi-dimensional data visualization has thoroughly explored cross-modal hashing retrieval (CMHR) systems, yet these systems remain clearly vulnerable to malicious adversarial examples.
Chao Hu   +7 more
doaj   +1 more source

Towards Interpretable Adversarial Examples via Sparse Adversarial Attack

open access: yes
Sparse attacks optimize the magnitude of adversarial perturbations to fool deep neural networks (DNNs) while perturbing only a few pixels (i.e., under the $l_0$ constraint), making them well suited to interpreting the vulnerability of DNNs. However, existing solutions fail to yield interpretable adversarial examples due to their poor sparsity. Worse still,
Lin, Fudong   +4 more
openaire   +2 more sources
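
As a point of reference for what an $l_0$-constrained attack looks like, a minimal sketch that perturbs only the k highest-gradient coordinates per image (k, eps, and the one-step scheme are illustrative assumptions, not this paper's method):

```python
import torch
import torch.nn.functional as F

def sparse_step(model, x, y, k=20, eps=0.5):
    """Perturb only the k input coordinates with the largest gradient
    magnitude, leaving every other pixel untouched."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    flat = x.grad.abs().view(x.size(0), -1)
    idx = flat.topk(k, dim=1).indices                  # top-k per image
    mask = torch.zeros_like(flat).scatter_(1, idx, 1.0).view_as(x)
    return (x + eps * mask * x.grad.sign()).clamp(0, 1).detach()
```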
