Results 21 to 30 of about 16,308
ACM MM2022 Brave New ...
Sang, Jitao +3 more
openaire +2 more sources
Deep learning (DL) models have recently been widely used in UAV aerial image semantic segmentation tasks and have achieved excellent performance. However, DL models are vulnerable to adversarial examples, which bring significant security risks to safety ...
Zhen Wang +3 more
doaj +1 more source
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial knowledge of the target model ...
Mladenovic, Andjela +6 more
openaire +2 more sources
Robustness of Deep Learning Models for Vision Tasks
In recent years, artificial intelligence technologies for vision tasks have gradually begun to be deployed in the physical world, where they have proven vulnerable to adversarial attacks.
Youngseok Lee, Jongweon Kim
doaj +1 more source
ICML Workshop 2022 on Adversarial Machine Learning ...
Kumano, Soichiro +2 more
openaire +2 more sources
Exploring Adversarial Robustness of LiDAR Semantic Segmentation in Autonomous Driving
Deep learning networks have demonstrated outstanding performance in 2D and 3D vision tasks. However, recent research demonstrated that these networks fail when imperceptible perturbations, known as adversarial attacks, are added to the input.
K. T. Yasas Mahima +3 more
doaj +1 more source
Robust Tracking Against Adversarial Attacks [PDF]
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, considerably few efforts have been made to construct robust deep tracking algorithms against adversarial attacks. Current studies on adversarial attack and defense mainly focus on a single image. In this work, we first attempt to generate adversarial examples on top
Jia, Shuai +3 more
openaire +2 more sources
Boosting 3D Adversarial Attacks With Attacking on Frequency
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial attacks in the image domain. Recently, 3D adversarial attacks, especially adversarial attacks on point clouds, have elicited mounting interest.
Binbin Liu, Jinlai Zhang, Jihong Zhu
doaj +1 more source
Exploring Diverse Feature Extractions for Adversarial Audio Detection
Although deep learning models have exhibited excellent performance in various domains, recent studies have discovered that they are highly vulnerable to adversarial attacks.
Yujin Choi +3 more
doaj +1 more source
Object-Attentional Untargeted Adversarial Attack
Deep neural networks are facing severe threats from adversarial attacks. Most existing black-box attacks fool the target model by generating either global perturbations or local patches. However, both global perturbations and local patches easily cause annoying visual artifacts in adversarial examples.
Chao Zhou, Yuan-Gen Wang, Guopu Zhu
openaire +2 more sources