Results 11 to 20 of about 16,308 (300)

Stochastic sparse adversarial attacks [PDF]

open access: yes, 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence (ICTAI)
This paper introduces stochastic sparse adversarial attacks (SSAA): simple, fast, and purely noise-based targeted and untargeted attacks on neural network classifiers (NNC). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously.
Hajri, Hatem   +4 more
openaire   +4 more sources
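The "purely noise-based" sparse attack idea from the abstract above can be sketched as follows. This is an illustrative reconstruction under assumed names (`predict`, `sparse_noise_attack`, the toy linear model), not the SSAA algorithm from the paper: draw random noise on a small set of coordinates (the $L_0$ budget) and keep the first perturbation that flips the decision.

```python
import numpy as np

# Toy linear "classifier" standing in for a neural network classifier.
# All weights and parameter names here are illustrative assumptions.
rng = np.random.default_rng(1)
w = np.array([0.8, -1.2, 0.4, 0.6])

def predict(x):
    return int(w @ x > 0)

def sparse_noise_attack(x0, k=1, scale=2.0, trials=200):
    """Gradient-free sparse attack: perturb at most k coordinates with
    random noise; return the first sample that changes the prediction."""
    y0 = predict(x0)
    for _ in range(trials):
        idx = rng.choice(len(x0), size=k, replace=False)  # L0 budget: k coords
        x = x0.copy()
        x[idx] += rng.normal(0.0, scale, size=k)          # pure noise, no gradients
        if predict(x) != y0:
            return x
    return None  # no adversarial example found within the trial budget

x0 = np.array([0.5, 0.1, 0.2, 0.3])
x_adv = sparse_noise_attack(x0, k=1)
```

Because the search is random rather than gradient-driven, it needs no access to the model's internals, which matches the abstract's emphasis on simplicity and speed.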

Optical Adversarial Attack [PDF]

open access: yes, 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW)
ICCV Workshop ...
Gnanasambandam, Abhiram   +2 more
openaire   +2 more sources

Adversarial attacks and adversarial robustness in computational pathology. [PDF]

open access: yes, Nat Commun, 2022
Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use.
Ghaffari Laleh N   +10 more
europepmc   +6 more sources

Distributionally Adversarial Attack

open access: diamond, Proceedings of the AAAI Conference on Artificial Intelligence, 2019
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Tianhang Zheng, Changyou Chen, Kui Ren
openalex   +4 more sources
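The PGD adversary that the abstract above calls a "universal first-order adversary" can be sketched on a toy model. This is a minimal illustration under assumed names and parameters (`w`, `b`, `epsilon`, `alpha`, a logistic-regression stand-in for the classifier), not the paper's distributional variant: repeatedly step in the sign of the input-gradient of the loss, then project back onto an $L_\infty$ ball around the clean input.

```python
import numpy as np

# Toy logistic-regression model with fixed, illustrative weights.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad(x, y):
    # Gradient of binary cross-entropy w.r.t. the *input* x.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def pgd_attack(x0, y, epsilon=0.3, alpha=0.1, steps=10):
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(loss_grad(x, y))    # ascend the loss
        x = np.clip(x, x0 - epsilon, x0 + epsilon)  # project onto L_inf ball
    return x

x0 = np.array([0.2, 0.4, -0.1])
y = 1.0                       # true label
x_adv = pgd_attack(x0, y)
assert np.max(np.abs(x_adv - x0)) <= 0.3 + 1e-9  # perturbation stays in the ball
```

The sign-of-gradient step and the projection are the two ingredients that make PGD a first-order adversary constrained to a fixed perturbation budget.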

Adversarial Attacks on Time Series [PDF]

open access: yes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
Time series classification models have been attracting significant attention in the research community. However, little research has been done on generating adversarial samples for these models, and such samples can become a security concern. In this paper, we propose utilizing an adversarial transformation network (ATN) on a distilled model
Fazle Karim   +2 more
openaire   +3 more sources

Discriminator-free Generative Adversarial Attack [PDF]

open access: yes, Proceedings of the 29th ACM International Conference on Multimedia, 2021
9 pages, 6 figures, 4 ...
Lu, Shaohao   +7 more
openaire   +2 more sources

Probabilistic Categorical Adversarial Attack & Adversarial Training

open access: yes, 2022
The existence of adversarial examples raises serious concerns about applying Deep Neural Networks (DNNs) to safety-critical tasks. However, generating adversarial examples for categorical data is an important problem that lacks extensive exploration.
Xu, Han   +6 more
openaire   +2 more sources

Adversarial Attacks on Adversarial Bandits

open access: yes, 2023
Accepted by ICLR ...
Ma, Yuzhe, Zhou, Zhijin
openaire   +2 more sources

Composite Adversarial Attacks

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2021
An adversarial attack is a technique for deceiving Machine Learning (ML) models, and it provides a way to evaluate adversarial robustness. In practice, attack algorithms are manually selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model ...
Mao, Xiaofeng   +5 more
openaire   +2 more sources

Focused Adversarial Attacks

open access: yes, 2022
Recent advances in machine learning show that neural models are vulnerable to minimally perturbed inputs, or adversarial examples. Adversarial algorithms are optimization problems that minimize the accuracy of ML models by perturbing inputs, often using a model's loss function to craft such perturbations.
Cilloni, Thomas   +2 more
openaire   +2 more sources
