Results 31 to 40 of about 85,688

Object-Attentional Untargeted Adversarial Attack

open access: yes, SSRN Electronic Journal, 2022
Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in adversarial examples.
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
openaire   +2 more sources

Using Frequency Attention to Make Adversarial Patch Powerful Against Person Detector

open access: yes, IEEE Access, 2023
Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, object detectors may be attacked by applying a particular adversarial patch to the image.
Xiaochun Lei   +5 more
doaj   +1 more source

Deflecting Adversarial Attacks

open access: yes, 2020
There has been an ongoing cycle where stronger defenses against adversarial attacks are subsequently broken by a more advanced defense-aware attack. We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that semantically resembles the attack's target class.
Qin, Yao   +4 more
openaire   +2 more sources

Adversarial attacks and adversarial robustness in computational pathology

open access: yes, Nature Communications, 2022
Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use.
Narmin Ghaffari Laleh   +10 more
openaire   +5 more sources

Online Alternate Generator Against Adversarial Attacks [PDF]

open access: yes, IEEE Transactions on Image Processing, 2020
Accepted as a Regular paper in the IEEE Transactions on Image ...
Haofeng Li   +4 more
openaire   +3 more sources

Meta Gradient Adversarial Attack [PDF]

open access: yes, 2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021
In recent years, research on adversarial attacks has become a hot spot. Although the current literature on transfer-based adversarial attacks has achieved promising results in improving transferability to unseen black-box models, it still leaves a long way to go. Inspired by the idea of meta-learning, this paper proposes a novel architecture called
Yuan, Zheng   +5 more
openaire   +2 more sources

Robust Audio Adversarial Example for a Physical Attack

open access: yes, 2019
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are directly fed to the recognition model, and is not ...
Sakuma, Jun, Yakura, Hiromu
core   +1 more source

Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks

open access: yes, 2018
This work shows that it is possible to fool/attack recent state-of-the-art face detectors which are based on single-stage networks. Successfully attacking face detectors could be a serious security vulnerability when deploying a smart surveillance ...
D Chen   +5 more
core   +1 more source

GradMDM: Adversarial Attack on Dynamic Networks

open access: yes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks targeted at reducing their efficiency.
Jianhong Pan   +6 more
openaire   +4 more sources

Detection of Iterative Adversarial Attacks via Counter Attack

open access: yes, Journal of Optimization Theory and Applications, 2023
Deep neural networks (DNNs) have proven to be powerful tools for processing unstructured data. However, for high-dimensional data, like images, they are inherently vulnerable to adversarial attacks. Small, almost invisible perturbations added to the input can be used to fool DNNs.
Matthias Rottmann   +4 more
openaire   +4 more sources
