Results 21 to 30 of about 85,688
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks.
Sanglee Park, Jungmin So
doaj +1 more source
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial knowledge of the target model ...
Mladenovic, Andjela +6 more
openaire +2 more sources
Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to adversarial example attacks, which raises security concerns.
Chuan Du, Lei Zhang
doaj +1 more source
Impact of adversarial examples on deep learning models for biomedical image segmentation [PDF]
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples.
C Pena-Betancor +3 more
core +4 more sources
ICML Workshop 2022 on Adversarial Machine Learning ...
Kumano, Soichiro +2 more
openaire +2 more sources
Adversarial attacks against supervised machine learning based network intrusion detection systems.
Adversarial machine learning is a recent area of study that explores both adversarial attack strategies and systems for detecting adversarial attacks: inputs specially crafted to outwit the classification of detection systems or disrupt the ...
Ebtihaj Alshahrani +3 more
doaj +2 more sources
Adversarial Attack Transferability Enhancement Algorithm Based on Input Channel Splitting [PDF]
The deep neural network (DNN) has been widely used in face recognition, automatic driving, and other scenarios; however, it is vulnerable to attacks by adversarial samples. Methods by which adversarial samples are generated can be classified into white-box ...
ZHENG Desheng, CHEN Jixin, ZHOU Jing, KE Wuping, LU Chao, ZHOU Yong, QIU Qian
doaj +1 more source
Secure machine learning against adversarial samples at test time
Deep neural networks (DNNs) are widely used to handle many difficult tasks, such as image classification and malware detection, and achieve outstanding performance.
Jing Lin, Laurent L. Njilla, Kaiqi Xiong
doaj +1 more source
Robust Tracking Against Adversarial Attacks [PDF]
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, considerably less effort has been devoted to constructing robust deep tracking algorithms against such attacks. Current studies on adversarial attack and defense mainly focus on single images. In this work, we first attempt to generate adversarial examples on top
Jia, Shuai +3 more
openaire +2 more sources
Distributionally Adversarial Attack
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Zheng, Tianhang +2 more
openaire +3 more sources
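The PGD adversary mentioned in the last entry can be sketched in a few lines: take repeated ascent steps in the sign of the loss gradient, projecting back into an L-infinity ball around the original input after each step. The sketch below targets a toy logistic-regression model with an analytic gradient; the model, step size `alpha`, and radius `eps` are illustrative assumptions, not taken from any of the papers listed above.

```python
import numpy as np

def pgd_attack(x, y, grad_loss, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent: step in the sign of the loss gradient,
    then project back into the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        g = grad_loss(x_adv, y)
        x_adv = x_adv + alpha * np.sign(g)        # ascent step on the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection onto the ball
    return x_adv

# Toy target model: logistic regression p(y=1|x) = sigmoid(w . x)
w = np.array([1.0, -2.0, 0.5])

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(x, y):
    # Gradient of the cross-entropy loss with respect to the input x
    return (sigmoid(x @ w) - y) * w

x = np.array([0.5, -0.3, 1.0])
y = 1.0
x_adv = pgd_attack(x, y, grad_loss, eps=0.1)
```

After the loop, `x_adv` stays within `eps` of `x` in every coordinate, yet the model's confidence in the true class drops; against a real network the analytic `grad_loss` would be replaced by a backpropagated input gradient.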

