Results 11 to 20 of about 85,688 (262)

Stochastic sparse adversarial attacks [PDF]

open access: yes. 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI), 2021
This paper introduces stochastic sparse adversarial attacks (SSAA), standing as simple, fast and purely noise-based targeted and untargeted attacks of neural network classifiers (NNC). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously.
Hajri, Hatem   +4 more
openaire   +4 more sources

Optical Adversarial Attack [PDF]

open access: yes. 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2021
ICCV Workshop ...
Gnanasambandam, Abhiram   +2 more
openaire   +2 more sources

Adversarial Attacks on Time Series [PDF]

open access: yes. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Time series classification models have been garnering significant importance in the research community. However, not much research has been done on generating adversarial samples for these models. These adversarial samples can become a security concern. In this paper, we propose utilizing an adversarial transformation network (ATN) on a distilled model
Fazle Karim   +2 more
openaire   +3 more sources

Discriminator-free Generative Adversarial Attack [PDF]

open access: yes. Proceedings of the 29th ACM International Conference on Multimedia, 2021
9 pages, 6 figures, 4 ...
Lu, Shaohao   +7 more
openaire   +2 more sources

Probabilistic Categorical Adversarial Attack & Adversarial Training

open access: yes, 2022
The existence of adversarial examples brings huge concern for people applying Deep Neural Networks (DNNs) in safety-critical tasks. However, how to generate adversarial examples with categorical data remains an important yet underexplored problem.
Xu, Han   +6 more
openaire   +2 more sources

Adversarial Attacks on Adversarial Bandits

open access: yes, 2023
Accepted by ICLR ...
Ma, Yuzhe, Zhou, Zhijin
openaire   +2 more sources

Composite Adversarial Attacks

open access: yes. Proceedings of the AAAI Conference on Artificial Intelligence, 2021
Adversarial attack is a technique for deceiving Machine Learning (ML) models, which provides a way to evaluate adversarial robustness. In practice, attack algorithms are artificially selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model ...
Mao, Xiaofeng   +5 more
openaire   +2 more sources

Focused Adversarial Attacks

open access: yes, 2022
Recent advances in machine learning show that neural models are vulnerable to minimally perturbed inputs, or adversarial examples. Adversarial algorithms are optimization problems that minimize the accuracy of ML models by perturbing inputs, often using a model's loss function to craft such perturbations.
Cilloni, Thomas   +2 more
openaire   +2 more sources
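The optimization view in the snippet above (perturb inputs along the model's loss gradient to degrade accuracy) can be sketched with a single fast-gradient-sign step on a toy logistic model. FGSM is a standard baseline used here purely for illustration, not the method of any paper listed; the weights and inputs are made-up values.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(y = 1 | x) under a logistic model."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm_perturb(w, b, x, y, eps):
    """One fast-gradient-sign step on the input.

    For a logistic model with cross-entropy loss, dL/dx = (p - y) * w,
    so moving x by eps * sign(dL/dx) increases the loss on label y.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    return [xi + eps * math.copysign(1.0, gi) for xi, gi in zip(x, grad)]

w, b = [2.0, -1.0], 0.0
x, y = [0.5, 0.5], 1.0          # clean input, true label 1
x_adv = fgsm_perturb(w, b, x, y, eps=0.3)

p_clean = predict(w, b, x)      # ~0.62: classified correctly
p_adv = predict(w, b, x_adv)    # ~0.40: pushed below the 0.5 threshold
```

The same gradient-sign idea underlies most of the attacks surveyed in these results; stronger variants simply iterate the step or constrain the perturbation norm.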

Survey of Adversarial Attacks and Defense Methods for Deep Learning Model [PDF]

open access: yes. Jisuanji gongcheng (Computer Engineering), 2021
As an important part of artificial intelligence technology, deep learning is widely used in computer vision, natural language processing and other fields. Although deep learning performs well in tasks such as image classification and target detection, its ...
JIANG Yan, ZHANG Liguo
doaj   +1 more source

Benign Adversarial Attack

open access: yes. Proceedings of the 30th ACM International Conference on Multimedia, 2022
ACM MM2022 Brave New ...
Sang, Jitao   +3 more
openaire   +2 more sources
