Results 31 to 40 of about 94,262

Adversarial attacks against supervised machine learning based network intrusion detection systems.

open access: yesPLoS ONE, 2022
Adversarial machine learning is a recent area of study that explores both adversarial attack strategies and systems for detecting adversarial attacks, which are inputs specially crafted to outwit the classification of detection systems or disrupt the ...
Ebtihaj Alshahrani   +3 more
doaj   +2 more sources
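
The adversarial examples referenced in this entry can be illustrated with the classic fast gradient sign method (FGSM) of Goodfellow et al. The sketch below is a generic PyTorch illustration of crafting such an input against any differentiable classifier, not the specific attack strategies evaluated in the paper above.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, eps=0.03):
    """Craft an adversarial example with the fast gradient sign method."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Step in the direction that increases the loss, bounded by eps per pixel.
    return (x + eps * x.grad.sign()).detach()
```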

Adversarial Attack Transferability Enhancement Algorithm Based on Input Channel Splitting [PDF]

open access: yesJisuanji gongcheng, 2023
Deep Neural Networks (DNNs) have been widely used in face recognition, autonomous driving, and other scenarios; however, they are vulnerable to attacks by adversarial samples. Methods by which adversarial samples are generated can be classified into white-box ...
ZHENG Desheng, CHEN Jixin, ZHOU Jing, KE Wuping, LU Chao, ZHOU Yong, QIU Qian
doaj   +1 more source
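
The snippet truncates before describing the splitting algorithm itself. As one hedged reading of "input channel splitting", the sketch below averages attack gradients over single-channel variants of the input to obtain a smoother, potentially more transferable attack direction; this is an illustrative guess, not the authors' algorithm.

```python
import torch
import torch.nn.functional as F

def channel_split_grad(model, x, label):
    """Average gradients over single-channel variants of input x (N, C, H, W)."""
    grads = []
    for c in range(x.shape[1]):
        masked = torch.zeros_like(x)
        masked[:, c] = x[:, c]          # keep only channel c, zero the rest
        masked.requires_grad_(True)
        F.cross_entropy(model(masked), label).backward()
        grads.append(masked.grad)
    return torch.stack(grads).mean(dim=0)  # averaged attack direction
```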

Robust Tracking Against Adversarial Attacks [PDF]

open access: yes, 2020
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, little effort has been devoted to constructing deep tracking algorithms that are robust against them. Current studies on adversarial attack and defense mainly focus on single images. In this work, we first attempt to generate adversarial examples on top ...
Jia, Shuai   +3 more
openaire   +2 more sources

Secure machine learning against adversarial samples at test time

open access: yesEURASIP Journal on Information Security, 2022
Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and achieve outstanding performance.
Jing Lin, Laurent L. Njilla, Kaiqi Xiong
doaj   +1 more source

Object-Attentional Untargeted Adversarial Attack

open access: yesSSRN Electronic Journal, 2022
Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in adversarial examples.
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
openaire   +2 more sources

Deflecting Adversarial Attacks

open access: yes, 2020
There has been an ongoing cycle where stronger defenses against adversarial attacks are subsequently broken by a more advanced defense-aware attack. We present a new approach towards ending this cycle where we "deflect" adversarial attacks by causing the attacker to produce an input that semantically resembles the attack's target class.
Qin, Yao   +4 more
openaire   +2 more sources
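
Deflection pairs naturally with reconstruction-based detection: to evade a reconstruction check, the attacker is pushed toward inputs that genuinely resemble the target class. The sketch below is a generic reconstruction-error detector under that assumption; `reconstruct` (e.g., a decoder or autoencoder) and `threshold` are hypothetical, and this is not the authors' exact mechanism.

```python
import torch

def flag_adversarial(reconstruct, x, threshold):
    """Flag inputs whose per-sample reconstruction error exceeds a
    threshold calibrated on clean validation data."""
    err = torch.mean((reconstruct(x) - x) ** 2, dim=(1, 2, 3))
    return err > threshold  # True = likely adversarial
```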

Adversarial attacks and adversarial robustness in computational pathology

open access: yesNature Communications, 2022
Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use.
Narmin Ghaffari Laleh   +10 more
openaire   +5 more sources

Online Alternate Generator Against Adversarial Attacks [PDF]

open access: yesIEEE Transactions on Image Processing, 2020
Accepted as a Regular paper in the IEEE Transactions on Image Processing.
Haofeng Li   +4 more
openaire   +3 more sources

Meta Gradient Adversarial Attack [PDF]

open access: yes2021 IEEE/CVF International Conference on Computer Vision (ICCV), 2021
In recent years, research on adversarial attacks has become a hot topic. Although the current literature on transfer-based adversarial attacks has achieved promising results in improving transferability to unseen black-box models, there is still a long way to go. Inspired by the idea of meta-learning, this paper proposes a novel architecture called ...
Yuan, Zheng   +5 more
openaire   +2 more sources
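
The meta-learning framing above can be sketched, loosely, as alternating "meta-train" gradient steps on an ensemble of surrogate models with a "meta-test" correction on a held-out surrogate, so the perturbation is rewarded for transferring beyond the models it was trained on. This is a simplified illustration, not the paper's exact architecture.

```python
import torch
import torch.nn.functional as F

def meta_transfer_step(train_models, test_model, x, label, alpha=0.01):
    # Meta-train: aggregate the loss over several surrogate models.
    x = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x), label) for m in train_models)
    loss.backward()
    x_adv = (x + alpha * x.grad.sign()).detach().requires_grad_(True)
    # Meta-test: refine the step on a held-out surrogate to favor
    # perturbations that transfer beyond the training ensemble.
    F.cross_entropy(test_model(x_adv), label).backward()
    return (x_adv + alpha * x_adv.grad.sign()).detach()
```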

Using Frequency Attention to Make Adversarial Patch Powerful Against Person Detector

open access: yesIEEE Access, 2023
Deep neural networks (DNNs) are vulnerable to adversarial attacks. In particular, object detectors can be attacked by applying a specially crafted adversarial patch to the image.
Xiaochun Lei   +5 more
doaj   +1 more source
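
The patch attack described above can be sketched generically: paste a learnable patch into the image and optimize it to suppress the detector's confidence. The paper's frequency-attention mechanism is not reproduced here, and `person_score` is a hypothetical stand-in for a differentiable person-confidence output of the detector.

```python
import torch

def optimize_patch(person_score, image, steps=100, lr=0.01, size=50, top=0, left=0):
    """Optimize a square patch so the detector's person confidence drops."""
    patch = torch.rand(3, size, size, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        adv = image.clone()
        adv[:, top:top + size, left:left + size] = patch.clamp(0, 1)
        loss = person_score(adv)        # lower score = weaker detection
        opt.zero_grad()
        loss.backward()
        opt.step()
    return patch.detach().clamp(0, 1)
```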
