Results 51 to 60 of about 1,209,773
Object-Attentional Untargeted Adversarial Attack
Deep neural networks face severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in the adversarial example.
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
openaire +2 more sources
Deflecting Adversarial Attacks
There has been an ongoing cycle where stronger defenses against adversarial attacks are subsequently broken by a more advanced defense-aware attack. We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that semantically resembles the attack's target class.
Yao Qin +4 more
openaire +2 more sources
Online Alternate Generator Against Adversarial Attacks [PDF]
Accepted as a Regular paper in the IEEE Transactions on Image ...
Haofeng Li +4 more
openaire +3 more sources
Meta Gradient Adversarial Attack [PDF]
In recent years, research on adversarial attacks has become a hot spot. Although the current literature on transfer-based adversarial attacks has achieved promising results in improving transferability to unseen black-box models, there is still a long way to go. Inspired by the idea of meta-learning, this paper proposes a novel architecture called ...
Zheng Yuan +5 more
openaire +2 more sources
Physical Adversarial Attacks Against End-to-End Autoencoder Communication Systems
We show that end-to-end learning of communication systems through deep neural network (DNN) autoencoders can be extremely vulnerable to physical adversarial attacks.
Erik G. Larsson, Meysam Sadeghi
core +1 more source
All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs
Recent studies have demonstrated that machine learning approaches such as deep learning are easily fooled by adversarial attacks. Recently, a highly influential study examined the impact of adversarial attacks on graph data and demonstrated that ...
Negin Entezari +3 more
semanticscholar +1 more source
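The low-rank defense named in the title above can be sketched as a truncated SVD of the graph's adjacency matrix, which discards the high-frequency components that adversarial edge insertions tend to occupy. The toy graph and the rank choice `k=2` below are illustrative assumptions, not values from the paper:

```python
import numpy as np

def low_rank_approx(adj: np.ndarray, k: int) -> np.ndarray:
    """Best rank-k approximation of an adjacency matrix via truncated SVD."""
    u, s, vt = np.linalg.svd(adj, full_matrices=False)
    return (u[:, :k] * s[:k]) @ vt[:k, :]

# Toy symmetric adjacency matrix; the (2, 3) edge plays the role of an
# adversarially inserted edge that the low-rank projection should dampen.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 1, 0],
                [1, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

denoised = low_rank_approx(adj, k=2)
```

Downstream, a graph model would be trained on `denoised` (optionally re-thresholded to a binary adjacency) instead of the perturbed `adj`.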
Adversarial Training for Free!
Adversarial training, in which a network is trained on adversarial examples, is one of the few defenses against adversarial attacks that withstands strong attacks.
Larry S. Davis +8 more
core +1 more source
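Adversarial training, as the snippet above defines it, alternates between crafting perturbed inputs and taking a training step on them. A minimal numpy sketch of the standard FGSM-based formulation (not the paper's "free" variant) on logistic regression follows; the data, `eps`, and learning rate are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy binary classification data with a deterministic linear labeling rule.
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

w = np.zeros(5)
eps, lr = 0.1, 0.5          # perturbation budget and step size (illustrative)

for _ in range(100):
    # Inner step: craft FGSM-style adversarial examples.
    p = sigmoid(X @ w)
    grad_x = np.outer(p - y, w)        # d(logistic loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)  # push each input toward higher loss
    # Outer step: update the weights on the perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    grad_w = X_adv.T @ (p_adv - y) / len(y)
    w -= lr * grad_w

acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

The "for free" idea in the listed paper amortizes the inner attack step by recycling gradients from the weight update; the loop above pays for both steps explicitly.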
GradMDM: Adversarial Attack on Dynamic Networks
Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks targeted at reducing their efficiency.
Jianhong Pan +6 more
openaire +4 more sources
Detection of Iterative Adversarial Attacks via Counter Attack
Deep neural networks (DNNs) have proven to be powerful tools for processing unstructured data. However, for high-dimensional data like images, they are inherently vulnerable to adversarial attacks: small, almost invisible perturbations added to the input can be used to fool DNNs.
Matthias Rottmann +4 more
openaire +4 more sources
Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks
Deep neural networks (DNNs), while powerful, often lack interpretability and are vulnerable to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise ...
Bader Rasheed +4 more
doaj +1 more source

