Stochastic sparse adversarial attacks [PDF]
This paper introduces stochastic sparse adversarial attacks (SSAA), standing as simple, fast and purely noise-based targeted and untargeted attacks on neural network classifiers (NNC). SSAA offer new examples of sparse (or $L_0$) attacks, for which only a few methods have been proposed previously.
Hajri, Hatem +4 more
openaire +4 more sources
Optical Adversarial Attack [PDF]
ICCV Workshop ...
Gnanasambandam, Abhiram +2 more
openaire +2 more sources
Adversarial attacks and adversarial robustness in computational pathology. [PDF]
Artificial Intelligence (AI) can support diagnostic workflows in oncology by aiding diagnosis and providing biomarkers directly from routine pathology slides. However, AI applications are vulnerable to adversarial attacks. Hence, it is essential to quantify and mitigate this risk before widespread clinical use.
Ghaffari Laleh N +10 more
europepmc +6 more sources
Distributionally Adversarial Attack
Recent work on adversarial attack has shown that Projected Gradient Descent (PGD) Adversary is a universal first-order adversary, and the classifier adversarially trained by PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically ...
Tianhang Zheng, Changyou Chen, Kui Ren
openalex +4 more sources
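The PGD adversary described in the snippet above can be sketched as a minimal numpy loop. The logistic-regression model, parameter names, and step sizes here are illustrative assumptions for exposition, not details taken from the paper:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.05, steps=10):
    """Untargeted PGD against a logistic classifier sigma(w.x + b).

    Repeatedly takes a signed gradient-ascent step on the loss, then
    projects the perturbed input back into the L_inf eps-ball around x.
    All names and hyperparameters are illustrative assumptions.
    """
    x_adv = x.copy()
    for _ in range(steps):
        # gradient of the logistic loss w.r.t. the input
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w
        # ascend the loss, then project onto the eps-ball
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

The projection step is what makes PGD "projected": the perturbation can never leave the eps-ball, regardless of how many iterations are run.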
Adversarial Attacks on Time Series [PDF]
Time series classification models have been garnering significant importance in the research community. However, not much research has been done on generating adversarial samples for these models. These adversarial samples can become a security concern. In this paper, we propose utilizing an adversarial transformation network (ATN) on a distilled model
Fazle Karim +2 more
openaire +3 more sources
Discriminator-free Generative Adversarial Attack [PDF]
9 pages, 6 figures, 4 ...
Lu, Shaohao +7 more
openaire +2 more sources
Probabilistic Categorical Adversarial Attack & Adversarial Training
The existence of adversarial examples raises serious concerns about applying Deep Neural Networks (DNNs) to safety-critical tasks. However, how to generate adversarial examples for categorical data is an important problem that lacks extensive exploration.
Xu, Han +6 more
openaire +2 more sources
Adversarial Attacks on Adversarial Bandits
Accepted by ICLR ...
Ma, Yuzhe, Zhou, Zhijin
openaire +2 more sources
Adversarial attack is a technique for deceiving Machine Learning (ML) models, which provides a way to evaluate adversarial robustness. In practice, attack algorithms are manually selected and tuned by human experts to break an ML system. However, manual selection of attackers tends to be sub-optimal, leading to a mistaken assessment of model ...
Mao, Xiaofeng +5 more
openaire +2 more sources
Recent advances in machine learning show that neural models are vulnerable to minimally perturbed inputs, or adversarial examples. Adversarial algorithms are optimization problems that minimize the accuracy of ML models by perturbing inputs, often using a model's loss function to craft such perturbations.
Cilloni, Thomas +2 more
openaire +2 more sources
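The loss-driven perturbation idea in the snippet above has a canonical one-step instance, the fast gradient sign method; the linear model and parameter names below are illustrative assumptions, not taken from the listed paper:

```python
import numpy as np

def fgsm(x, y, w, b, eps=0.25):
    """Fast gradient sign method: one loss-ascent step of size eps.

    Perturbs the input in the direction that most increases the model's
    loss, per coordinate. The logistic model here is an assumption made
    to keep the sketch self-contained.
    """
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad = (p - y) * w              # d(logistic loss)/dx
    return x + eps * np.sign(grad)
```

Because the step is taken once along the gradient sign, the perturbation has L_inf norm exactly eps wherever the gradient is nonzero.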

