Results 11 to 20 of about 1,209,773
Distributionally Adversarial Attack
Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Tianhang Zheng, Changyou Chen, Kui Ren
openalex +4 more sources
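Since this entry (and several below) centers on PGD, a minimal sketch of the attack may help orient the reader. Everything here, the model, inputs, and the eps/alpha/steps hyperparameters, is an illustrative assumption, not code or settings from the paper.

```python
# Minimal L-infinity PGD attack sketch (PyTorch). `model`, `x`, `y` and all
# hyperparameters are illustrative assumptions, not taken from the paper.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Iterate signed-gradient ascent on the loss, projecting the
    perturbation back into the eps-ball around the clean input x."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()                      # first-order step
        x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1)   # project + keep valid pixels
    return x_adv.detach()
```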
Stochastic sparse adversarial attacks [PDF]
This paper introduces stochastic sparse adversarial attacks (SSAA): simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNCs). SSAA provide new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously.
Hajri, Hatem +4 more
openaire +4 more sources
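For orientation, a purely noise-based sparse ($L_0$) attack in the spirit of this family might look like the hypothetical sketch below. This is a generic illustration, not the paper's SSAA algorithm, and `model`, `k`, and `trials` are assumed names.

```python
# Hypothetical noise-based sparse (L0) attack sketch: perturb only k random
# pixels per trial, keep the first trial that flips the prediction.
# Generic illustration of the attack family, NOT the paper's SSAA method.
import torch

@torch.no_grad()  # no gradients needed; this attack is purely noise-based
def random_sparse_attack(model, x, y, k=20, trials=100):
    """x: single image tensor (C, H, W) in [0, 1]; y: true class index."""
    c, h, w = x.shape
    for _ in range(trials):
        x_adv = x.clone()
        idx = torch.randperm(h * w)[:k]               # k random pixel positions
        x_adv.view(c, -1)[:, idx] = torch.rand(c, k)  # overwrite with noise
        pred = model(x_adv.unsqueeze(0)).argmax(dim=1)
        if pred.item() != y:                          # untargeted success
            return x_adv
    return None  # no sparse perturbation found within the trial budget
```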
Optical Adversarial Attack [PDF]
ICCV Workshop ...
Gnanasambandam, Abhiram +2 more
openaire +2 more sources
Adversarial attacks and adversarial robustness in computational pathology. [PDF]
Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use.
Ghaffari Laleh N +10 more
europepmc +6 more sources
Aerial image semantic segmentation based on convolutional neural networks (CNNs) has made significant progress in recent years. Nevertheless, the vulnerability of these models to adversarial example attacks cannot be neglected.
Zhen Wang +3 more
doaj +1 more source
Rethinking Model Ensemble in Transfer-based Adversarial Attacks [PDF]
It is widely recognized that deep learning models lack robustness to adversarial examples. An intriguing property of adversarial examples is that they can transfer across different models, which enables black-box attacks without any knowledge of the ...
Huanran Chen +3 more
semanticscholar +1 more source
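This paper revisits the common loss-averaging ensemble baseline for transfer attacks. A minimal sketch of that baseline (emphatically not the paper's proposed method, which rethinks this very scheme) follows; `surrogates` and the step size are assumptions.

```python
# Loss-averaging ensemble baseline for transfer-based attacks: average the
# losses of several white-box surrogate models when computing the gradient,
# then evaluate the resulting example on an unseen black-box target.
# This is the standard baseline, not the paper's proposed improvement.
import torch
import torch.nn.functional as F

def ensemble_fgsm(surrogates, x, y, eps=8/255):
    x_adv = x.detach().clone().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x_adv), y) for m in surrogates) / len(surrogates)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()  # one-step attack

# Transferability is then measured as the target model's accuracy drop:
# accuracy(target, x) - accuracy(target, ensemble_fgsm(surrogates, x, y))
```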
Adversarial attacks and defenses in deep learning
An adversarial example is an image modified with imperceptible perturbations that can cause deep neural networks to make wrong decisions. Adversarial examples seriously threaten the availability of a system and bring great security risks to the ...
LIU Ximeng +2 more
doaj +3 more sources
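As a concrete instance of the "imperceptible perturbation" this abstract describes, a hypothetical one-step FGSM sketch is shown below; the model, inputs, and eps are assumptions for illustration.

```python
# One-step FGSM sketch: a single signed-gradient step of size eps produces
# the kind of imperceptible perturbation the abstract refers to.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=4/255):
    x_adv = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()                                       # populates x_adv.grad
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```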
How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses [PDF]
Deep Learning is currently used to perform multiple tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to perturbations that alter the network prediction, named ...
J. C. Costa +3 more
semanticscholar +1 more source
TextFirewall: Omni-Defending Against Adversarial Texts in Sentiment Classification
Sentiment classification is broadly applied in real-world settings such as product recommendation and opinion-oriented analysis. Unfortunately, widely deployed sentiment classification systems based on deep neural networks (DNNs) are susceptible to ...
Wenqi Wang +3 more
doaj +1 more source
On the Robustness of Large Multimodal Models Against Image Adversarial Attacks [PDF]
Recent advances in instruction tuning have led to the development of State-of-the-Art Large Multimodal Models (LMMs). Given the novelty of these models, the impact of visual adversarial attacks on LMMs has not been thoroughly examined.
Xuanming Cui +3 more
semanticscholar +1 more source