Results 21 to 30 of about 16,308 (300)

Benign Adversarial Attack

open access: yes, Proceedings of the 30th ACM International Conference on Multimedia, 2022
ACM MM2022 Brave New ...
Sang, Jitao   +3 more
openaire   +2 more sources

Defense against Adversarial Patch Attacks for Aerial Image Semantic Segmentation by Robust Feature Extraction

open access: yes, Remote Sensing, 2023
Deep learning (DL) models have recently been widely used in UAV aerial image semantic segmentation tasks and have achieved excellent performance. However, DL models are vulnerable to adversarial examples, which bring significant security risks to safety ...
Zhen Wang   +3 more
doaj   +1 more source

Online Adversarial Attacks

open access: yes, 2021
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial knowledge of the target model ...
Mladenovic, Andjela   +6 more
openaire   +2 more sources

Robustness of Deep Learning Models for Vision Tasks

open access: yes, Applied Sciences, 2023
In recent years, artificial intelligence technologies for vision tasks have gradually been applied in the physical world, where they have proven vulnerable to adversarial attacks.
Youngseok Lee, Jongweon Kim
doaj   +1 more source

Superclass Adversarial Attack

open access: yes, 2022
ICML Workshop 2022 on Adversarial Machine Learning ...
Kumano, Soichiro   +2 more
openaire   +2 more sources

Exploring Adversarial Robustness of LiDAR Semantic Segmentation in Autonomous Driving

open access: yes, Sensors, 2023
Deep learning networks have demonstrated outstanding performance in 2D and 3D vision tasks. However, recent research has shown that these networks fail when imperceptible perturbations, known as adversarial attacks, are added to the input.
K. T. Yasas Mahima   +3 more
doaj   +1 more source

Robust Tracking Against Adversarial Attacks [PDF]

open access: yes, 2020
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, comparatively little effort has been devoted to constructing deep tracking algorithms that are robust against them. Current studies on adversarial attack and defense mainly concern single images. In this work, we first attempt to generate adversarial examples on top
Jia, Shuai   +3 more
openaire   +2 more sources

Boosting 3D Adversarial Attacks With Attacking on Frequency

open access: yes, IEEE Access, 2022
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks in the image domain. Recently, 3D adversarial attacks, especially adversarial attacks on point clouds, have elicited mounting interest.
Binbin Liu, Jinlai Zhang, Jihong Zhu
doaj   +1 more source

Exploring Diverse Feature Extractions for Adversarial Audio Detection

open access: yes, IEEE Access, 2023
Although deep learning models have exhibited excellent performance in various domains, recent studies have discovered that they are highly vulnerable to adversarial attacks.
Yujin Choi   +3 more
doaj   +1 more source

Object-Attentional Untargeted Adversarial Attack

open access: yes, SSRN Electronic Journal, 2022
Deep neural networks face severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in adversarial examples.
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
openaire   +2 more sources
