Results 11 to 20 of about 94,262

Distributionally Adversarial Attack

open access: diamond, Proceedings of the AAAI Conference on Artificial Intelligence, 2019
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Tianhang Zheng, Changyou Chen, Kui Ren
openalex   +4 more sources
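The PGD adversary discussed in this entry can be illustrated with a minimal sketch: repeated signed-gradient ascent steps on the loss, each projected back into an L-infinity ball around the clean input. The toy linear classifier, the step size `alpha`, and the budget `eps` below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# A minimal, hypothetical sketch of an L-infinity PGD attack. The linear
# "classifier" score y * (w @ x), the step size alpha, and the budget eps
# are all illustrative assumptions, not taken from the paper above.
def pgd_linf(x0, w, y, eps=0.1, alpha=0.02, steps=20):
    """Untargeted PGD: maximize the loss -y*(w @ x) inside an eps-ball."""
    x = x0.copy()
    for _ in range(steps):
        grad = -y * w                       # gradient of the loss w.r.t. x
        x = x + alpha * np.sign(grad)       # signed ascent step
        x = np.clip(x, x0 - eps, x0 + eps)  # project back into the ball
    return x

x0 = np.array([0.5, -0.2, 0.1])
w = np.array([1.0, -1.0, 2.0])
adv = pgd_linf(x0, w, y=1)
# adv stays within eps of x0 but lowers the correct-class score w @ x
```

For a deep network the closed-form gradient above would be replaced by backpropagation through the model; the projection step is unchanged.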

Stochastic sparse adversarial attacks [PDF]

open access: yes, 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), 2021
This paper introduces stochastic sparse adversarial attacks (SSAA): simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNCs). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously.
Hajri, Hatem   +4 more
openaire   +4 more sources
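To make the notion of a sparse ($L_0$) attack concrete, the sketch below perturbs only the k coordinates of a toy linear model where the loss gradient is largest. This is a generic greedy illustration of $L_0$ sparsity, not the stochastic noise-based SSAA method from the paper; the model, `k`, and `delta` are assumptions.

```python
import numpy as np

# Hypothetical L0 (sparse) perturbation sketch: change only the k input
# coordinates with the largest loss gradient for the score y * (w @ x).
# A generic greedy illustration, not the SSAA method itself.
def sparse_l0_attack(x0, w, y, k=1, delta=0.5):
    grad = -y * w                         # loss gradient w.r.t. x
    idx = np.argsort(-np.abs(grad))[:k]   # k most influential coordinates
    x = x0.copy()
    x[idx] += delta * np.sign(grad[idx])  # perturb only those entries
    return x

x0 = np.array([0.5, -0.2, 0.1])
w = np.array([1.0, -1.0, 2.0])
adv = sparse_l0_attack(x0, w, y=1, k=1)
# exactly one coordinate differs from x0, yet the score w @ adv drops
```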

Optical Adversarial Attack [PDF]

open access: yes, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
ICCV Workshop ...
Gnanasambandam, Abhiram   +2 more
openaire   +2 more sources

Adversarial Attacks on Time Series [PDF]

open access: yes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Time series classification models have been garnering significant attention in the research community. However, not much research has been done on generating adversarial samples for these models, and such adversarial samples can become a security concern. In this paper, we propose utilizing an adversarial transformation network (ATN) on a distilled model ...
Fazle Karim   +2 more
openaire   +3 more sources

Discriminator-free Generative Adversarial Attack [PDF]

open access: yes, Proceedings of the 29th ACM International Conference on Multimedia, 2021
9 pages, 6 figures, 4 ...
Lu, Shaohao   +7 more
openaire   +2 more sources

Probabilistic Categorical Adversarial Attack & Adversarial Training

open access: yes, 2022
The existence of adversarial examples raises serious concerns about applying Deep Neural Networks (DNNs) to safety-critical tasks. However, how to generate adversarial examples with categorical data is an important problem that lacks extensive exploration.
Xu, Han   +6 more
openaire   +2 more sources

A Hybrid Adversarial Attack for Different Application Scenarios

open access: yes, Applied Sciences, 2020
Adversarial attacks against natural language have been a hot topic in the field of artificial intelligence security in recent years. The main line of work studies methods for generating adversarial examples and their implementation. The purpose is to better deal with ...
Xiaohu Du   +6 more
doaj   +1 more source

Adversarial Attacks on Adversarial Bandits

open access: yes, 2023
Accepted by ICLR ...
Ma, Yuzhe, Zhou, Zhijin
openaire   +2 more sources

Adv-Eye: A Transfer-Based Natural Eye Makeup Attack on Face Recognition

open access: yes, IEEE Access, 2023
Deep face recognition models are vulnerable to adversarial samples generated by adversarial attack methods. However, current attack methods do not adequately represent the security problems of the deep FR models, because they either produce adversarial ...
Jiatian Pi   +6 more
doaj   +1 more source

Composite Adversarial Attacks

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2021
Adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate adversarial robustness. In practice, attack algorithms are manually selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model ...
Mao, Xiaofeng   +5 more
openaire   +2 more sources
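The idea of replacing manual attacker selection with an automated choice can be sketched crudely: run a small pool of candidate perturbations and keep whichever hurts the model most. The toy linear score, the candidate pool, and the budget `eps` below are illustrative assumptions, not the search procedure from the paper.

```python
import numpy as np

# Hypothetical sketch of composing attacks: instead of one hand-picked
# algorithm, evaluate a pool of candidate perturbations and keep the one
# with the highest loss on a toy linear score y * (w @ x).
def loss(x, w, y):
    return -y * (w @ x)   # higher loss = worse for the classifier

def composite_attack(x0, w, y, eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    candidates = [
        x0 + eps * np.sign(-y * w),                        # FGSM-style step
        x0 - y * eps * w / np.linalg.norm(w),              # normalized L2 step
        x0 + eps * np.sign(rng.standard_normal(x0.size)),  # random corner
    ]
    return max(candidates, key=lambda x: loss(x, w, y))    # strongest wins

x0 = np.array([0.5, -0.2, 0.1])
w = np.array([1.0, -1.0, 2.0])
adv = composite_attack(x0, w, y=1)
```

By construction the composite result is never weaker than any single candidate in the pool, which is the intuition behind automating the selection.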
