Results 51 to 60 of about 173,113
Exploring Adversarial Examples [PDF]
Failure cases of black-box deep learning, e.g. adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. To demystify the adversarial examples, rigorous studies need to be designed.
Kügler, David +3 more
openaire +2 more sources
Adversarial Training for Free!
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.
Davis, Larry S. +8 more
core +1 more source
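As a concrete illustration of the idea in the snippet above (a network "trained on adversarial examples"), here is a minimal sketch of an adversarial training loop for a toy logistic model in numpy. All data, the step size, and the perturbation budget `eps` are illustrative assumptions, not taken from the paper; the attack used to generate training examples is a simple FGSM-style sign-of-gradient step.

```python
import numpy as np

# Toy adversarial training: at each step, fit the model on FGSM-perturbed
# copies of the data rather than on the clean points.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w, b, eps, lr = np.zeros(2), 0.0, 0.1, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    p = sigmoid(X @ w + b)
    # FGSM against the *current* model: move each input along the sign of
    # the input-gradient of the logistic loss, (p - y) * w per sample.
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    p_adv = sigmoid(X_adv @ w + b)
    # Gradient step on the adversarial batch, not the clean batch.
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == (y > 0.5))
```

The point of the sketch is the structure (perturb, then update on the perturbed batch), not the specific model; "free" adversarial training additionally reuses gradient computations to cut this loop's cost, which this toy version does not attempt.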
It has been shown that adversaries can craft example inputs to neural networks which are similar to legitimate inputs but have been created to purposely cause the neural network to misclassify the input.
Athalye, Anish +18 more
core +1 more source
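The crafting described in this snippet (inputs close to legitimate ones that flip the model's decision) can be sketched with a single FGSM-style step on a toy logistic model. The weights, input, and `eps` below are illustrative assumptions, not from the paper; the gradient is written out analytically for logistic regression.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Shift x by eps times the sign of the input-gradient of the loss."""
    p = sigmoid(w @ x + b)      # model's predicted probability of class 1
    grad_x = (p - y) * w        # d(cross-entropy)/dx for logistic regression
    return x + eps * np.sign(grad_x)

w = np.array([2.0, -1.0])       # toy trained weights
b = 0.0
x = np.array([0.5, 0.2])        # legitimate input with true label y = 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.3)
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the model's confidence in the true class drops, which is exactly the "similar to legitimate inputs but misclassified" behavior the snippet describes.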
Adversarial Example Decomposition
Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization.
He, Horace +5 more
openaire +2 more sources
MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks
Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks where input examples are intentionally perturbed to fool DNNs.
Chen, Yiran +7 more
core +1 more source
Human-Producible Adversarial Examples
Submitted to ICLR ...
Khachaturov, David +5 more
openaire +2 more sources
Adversarial examples for models of code [PDF]
Neural models of code have shown impressive results when performing tasks such as predicting method names and identifying certain kinds of bugs. We show that these models are vulnerable to adversarial examples, and introduce a novel approach for attacking trained models of code using ...
Yefet, Noam, Alon, Uri, Yahav, Eran
openaire +2 more sources
Maxwell’s Demon in MLP-Mixer: towards transferable adversarial attacks
Models based on MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks compared to convolutional neural networks (CNNs), there has been ...
Lyu, Haoran +5 more
doaj +1 more source
Assessing Optimizer Impact on DNN Model Sensitivity to Adversarial Examples
Deep Neural Networks (DNNs) have been gaining state-of-the-art achievement compared with many traditional Machine Learning (ML) models in diverse fields. However, adversarial examples challenge the further deployment and application of DNNs. Analysis has ...
Wang, Yixiang +5 more
doaj +1 more source
A Cascade Defense Method for Multidomain Adversarial Attacks under Remote Sensing Detection
Deep neural networks have been widely used in detection tasks based on optical remote sensing images. However, in recent studies, deep neural networks have been shown to be vulnerable to adversarial examples.
Xue, Wei +4 more
doaj +1 more source