Results 11 to 20 of about 1,209,773

Distributionally Adversarial Attack

open access: diamond · Proceedings of the AAAI Conference on Artificial Intelligence, 2019
Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Tianhang Zheng, Changyou Chen, Kui Ren
openalex   +4 more sources
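For context on the PGD adversary referenced above, here is a minimal sketch of an L-infinity projected gradient descent attack in PyTorch; the model, loss, and hyperparameters (eps, alpha, steps) are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # L_inf PGD (illustrative): step along the sign of the loss gradient,
    # projecting back into the eps-ball around the clean input x.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()          # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                    # stay a valid image
        x_adv = x_adv.detach()
    return x_adv
```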

Stochastic sparse adversarial attacks [PDF]

open access: yes · 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), 2021
This paper introduces stochastic sparse adversarial attacks (SSAA), which stand as simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNCs). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously.
Hajri, Hatem   +4 more
openaire   +4 more sources
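To illustrate what a sparse ($L_0$), purely noise-based attack means, here is a naive random-search sketch in PyTorch; it is not the SSAA algorithm from the paper, and the pixel count, trial budget, and noise magnitude are assumed values.

```python
import torch

def random_sparse_attack(model, x, y, k=20, trials=200, magnitude=1.0):
    # Naive sparse (L_0) attack: repeatedly perturb k random pixels with
    # large +/- noise and keep the first perturbation that flips the label.
    # Purely illustrative -- NOT the SSAA algorithm from the paper.
    b, c, h, w = x.shape
    for _ in range(trials):
        x_adv = x.clone()
        idx = torch.randperm(h * w)[:k]                  # k random pixel positions
        rows, cols = idx // w, idx % w
        noise = magnitude * torch.sign(torch.randn(b, c, k))
        x_adv[..., rows, cols] = (x[..., rows, cols] + noise).clamp(0, 1)
        with torch.no_grad():
            if model(x_adv).argmax(dim=1).ne(y).all():   # untargeted success
                return x_adv
    return None                                          # no sparse flip found
```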

Optical Adversarial Attack [PDF]

open access: yes · 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
ICCV Workshop ...
Gnanasambandam, Abhiram   +2 more
openaire   +2 more sources

Adversarial attacks and adversarial robustness in computational pathology. [PDF]

open access: yes · Nat Commun, 2022
Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use.
Ghaffari Laleh N   +10 more
europepmc   +6 more sources

Global Feature Attention Network: Addressing the Threat of Adversarial Attack for Aerial Image Semantic Segmentation

open access: yes · Remote Sensing, 2023
Aerial image semantic segmentation based on convolutional neural networks (CNNs) has made significant progress in recent years. Nevertheless, its vulnerability to adversarial example attacks cannot be neglected.
Zhen Wang   +3 more
doaj   +1 more source

Rethinking Model Ensemble in Transfer-based Adversarial Attacks [PDF]

open access: yes · International Conference on Learning Representations, 2023
It is widely recognized that deep learning models lack robustness to adversarial examples. An intriguing property of adversarial examples is that they can transfer across different models, which enables black-box attacks without any knowledge of the ...
Huanran Chen   +3 more
semanticscholar   +1 more source
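As a rough illustration of the transfer-based ensemble setting described above, the sketch below averages the loss of several white-box surrogate models and runs a PGD-style attack on that average, hoping the result transfers to an unseen black-box target; this is a common baseline, not the method proposed in the paper, and all hyperparameters are assumptions.

```python
import torch
import torch.nn.functional as F

def ensemble_transfer_attack(surrogates, x, y, eps=8/255, alpha=2/255, steps=10):
    # Loss-averaging ensemble baseline for transfer-based black-box attacks
    # (illustrative only, not the paper's proposed ensemble strategy).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = sum(F.cross_entropy(m(x_adv), y) for m in surrogates) / len(surrogates)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)     # stay within the eps-ball
            x_adv = x_adv.clamp(0, 1)
        x_adv = x_adv.detach()
    return x_adv
```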

Adversarial attacks and defenses in deep learning

open access: yes · 网络与信息安全学报 (Chinese Journal of Network and Information Security), 2020
An adversarial example is an image modified with imperceptible perturbations that can cause deep neural networks to make wrong decisions. Adversarial examples seriously threaten the availability of such systems and bring great security risks to the ...
LIU Ximeng   +2 more
doaj   +3 more sources
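To make the notion of an imperceptible perturbation concrete, here is a one-step FGSM-style sketch in PyTorch; the epsilon value and loss are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=4/255):
    # Fast Gradient Sign Method: a single gradient-sign step of size eps
    # yields a visually imperceptible perturbation that can flip the label.
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()
```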

How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses [PDF]

open access: yes · IEEE Access, 2023
Deep Learning is currently used to perform multiple tasks, such as object recognition, face recognition, and natural language processing. However, Deep Neural Networks (DNNs) are vulnerable to perturbations that alter the network prediction, named ...
J. C. Costa   +3 more
semanticscholar   +1 more source

TextFirewall: Omni-Defending Against Adversarial Texts in Sentiment Classification

open access: yes · IEEE Access, 2021
Sentiment classification is broadly applied in real-life settings such as product recommendation and opinion-oriented analysis. Unfortunately, the widely employed sentiment classification systems based on deep neural networks (DNNs) are susceptible to ...
Wenqi Wang   +3 more
doaj   +1 more source

On the Robustness of Large Multimodal Models Against Image Adversarial Attacks [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
Recent advances in instruction tuning have led to the development of State-of-the-Art Large Multimodal Models (LMMs). Given the novelty of these models, the impact of visual adversarial attacks on LMMs has not been thoroughly examined.
Xuanming Cui   +3 more
semanticscholar   +1 more source
