Results 21 to 30 of about 85,688

On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification

open access: yes, Applied Sciences, 2020
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks.
Sanglee Park, Jungmin So
doaj   +1 more source
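Adversarial training, the defence studied in the entry above, augments each training step with adversarially perturbed inputs. As a hedged illustration only (the dataset, model, and hyperparameters below are placeholders, not taken from the paper), here is a minimal sketch using FGSM-style perturbations on a linear classifier:

```python
import numpy as np

# Illustrative sketch of adversarial training for a binary linear classifier
# f(x) = sigmoid(w.x + b), using FGSM-style (single-step, sign-of-gradient)
# perturbations. All data and hyperparameters are assumptions for the demo.

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200):
    """Fit w, b on perturbed versions of each input instead of the clean ones."""
    rng = np.random.default_rng(0)
    w = rng.normal(size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        # FGSM: step each input by eps along the sign of the input gradient
        # of the binary cross-entropy loss, which is (p - y) * w.
        X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
        p_adv = sigmoid(X_adv @ w + b)
        # Ordinary gradient descent, but on the perturbed batch.
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b

# Usage: two well-separated clusters; the adversarially trained model
# should still classify the clean points correctly.
X = np.vstack([np.full((20, 2), 1.0), np.full((20, 2), -1.0)])
y = np.concatenate([np.ones(20), np.zeros(20)])
w, b = adversarial_train(X, y)
```

The design point this illustrates: the loss is minimized over worst-case neighbours of each input, so the learned boundary keeps a margin of at least eps from the training points.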

Online Adversarial Attacks

open access: yes, 2021
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial knowledge of the target model ...
Mladenovic, Andjela   +6 more
openaire   +2 more sources

Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network

open access: yes, Remote Sensing, 2021
Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to adversarial example attacks, which raise security concerns.
Chuan Du, Lei Zhang
doaj   +1 more source

Impact of adversarial examples on deep learning models for biomedical image segmentation [PDF]

open access: yes, 2019
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples.
C Pena-Betancor   +3 more
core   +4 more sources

Superclass Adversarial Attack

open access: yes, 2022
ICML Workshop 2022 on Adversarial Machine Learning ...
Kumano, Soichiro   +2 more
openaire   +2 more sources

Adversarial attacks against supervised machine learning based network intrusion detection systems.

open access: yes, PLoS ONE, 2022
Adversarial machine learning is a recent area of study that explores both adversarial attack strategies and the detection of adversarial attacks, which are inputs specially crafted to outwit the classification of detection systems or disrupt the ...
Ebtihaj Alshahrani   +3 more
doaj   +2 more sources

Adversarial Attack Transferability Enhancement Algorithm Based on Input Channel Splitting [PDF]

open access: yes, Jisuanji gongcheng, 2023
The deep neural network (DNN) has been widely used in face recognition, automatic driving, and other scenarios; however, it is vulnerable to attacks by adversarial samples. Methods by which adversarial samples are generated can be classified into white-box ...
ZHENG Desheng, CHEN Jixin, ZHOU Jing, KE Wuping, LU Chao, ZHOU Yong, QIU Qian
doaj   +1 more source

Secure machine learning against adversarial samples at test time

open access: yes, EURASIP Journal on Information Security, 2022
Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and achieve outstanding performance.
Jing Lin, Laurent L. Njilla, Kaiqi Xiong
doaj   +1 more source

Robust Tracking Against Adversarial Attacks [PDF]

open access: yes, 2020
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, considerably little effort has been made to construct robust deep tracking algorithms against adversarial attacks. Current studies on adversarial attack and defense mainly focus on a single image. In this work, we first attempt to generate adversarial examples on top
Jia, Shuai   +3 more
openaire   +2 more sources

Distributionally Adversarial Attack

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2019
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Zheng, Tianhang   +2 more
openaire   +3 more sources
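The entry above describes the PGD adversary: iterated gradient ascent on the loss, with each iterate projected back into an L-infinity ball around the clean input. As a minimal, hedged sketch (the toy linear model, weights, and step sizes are assumptions for illustration, not from the paper):

```python
import numpy as np

# Minimal PGD (Projected Gradient Descent) attack on a toy binary linear
# classifier f(x) = sigmoid(w.x + b). Illustrative only; the paper's setting
# involves deep networks and a data distribution p(x).

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=20):
    """Maximize the loss within an L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        p = sigmoid(w @ x_adv + b)
        # Gradient of binary cross-entropy w.r.t. the input is (p - y) * w.
        grad = (p - y) * w
        x_adv = x_adv + alpha * np.sign(grad)     # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

# Usage: a clean point correctly classified as class 1 is pushed across
# the decision boundary without leaving the eps-ball around x.
w = np.array([2.0, -1.0]); b = 0.0
x = np.array([0.3, 0.2]); y = 1.0
x_adv = pgd_attack(x, y, w, b)
```

Note the two moving parts named in the abstract: the signed-gradient ascent step (making PGD a first-order adversary) and the projection that keeps the perturbation bounded.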
