Results 21 to 30 of about 173,113
Adversarial Examples-Security Threats to COVID-19 Deep Learning Systems in Medical IoT Devices. [PDF]
Rahman A +3 more
europepmc +2 more sources
Natural Adversarial Examples [PDF]
We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that ...
Hendrycks, Dan +4 more
openaire +2 more sources
Survey of Image Adversarial Example Defense Techniques [PDF]
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of adversarial examples for deep neural networks, and the defense against them, is a current research hotspot.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj +1 more source
Efficient Adversarial Training With Transferable Adversarial Examples [PDF]
Adversarial training is an effective defense method for protecting classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude more training time due to the high cost of generating strong adversarial examples during training.
Zheng, Haizhong +4 more
openaire +2 more sources
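The adversarial-training loop this abstract refers to (perturb each batch, then update the model on the perturbed inputs) can be sketched as follows. This is a hedged toy illustration only, not the paper's method: it uses a logistic-regression stand-in for a deep network, a one-step FGSM-style inner attack, and made-up data and hyperparameters (`eps`, `lr`).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy linearly separable labels

w, b = np.zeros(2), 0.0
eps, lr = 0.1, 0.5
for _ in range(200):
    p = sigmoid(X @ w + b)
    # inner maximization: one FGSM-style step per example,
    # x' = x + eps * sign(dL/dx) for the cross-entropy loss L
    X_adv = X + eps * np.sign((p - y)[:, None] * w)
    # outer minimization: gradient step on the adversarial batch
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

# evaluate robust accuracy: perturb the inputs, then classify them
p = sigmoid(X @ w + b)
X_adv = X + eps * np.sign((p - y)[:, None] * w)
robust_acc = np.mean((sigmoid(X_adv @ w + b) > 0.5) == (y > 0.5))
print(f"robust accuracy: {robust_acc:.2f}")
```

The "orders of magnitude" cost the abstract mentions comes from the inner attack step: a multi-step attack (e.g., PGD) multiplies the per-batch work by the number of attack iterations, which the one-step sketch above avoids.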
Fooling Examples: Another Intriguing Property of Neural Networks
Neural networks have been proven to be vulnerable to adversarial examples: inputs that humans can still recognize correctly but that cause neural networks to give incorrect predictions.
Ming Zhang, Yongkang Chen, Cheng Qian
doaj +1 more source
Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning
Appeared in ICIP ...
Zhang, Jie +3 more
openaire +2 more sources
FADER: Fast adversarial example rejection [PDF]
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations - a behavior normally exhibited by adversarial ...
Crecchi, Francesco +4 more
openaire +4 more sources
Adversarial Examples Generation Method Based on Image Color Random Transformation [PDF]
Although deep neural networks (DNNs) perform well in most classification tasks, they are vulnerable to adversarial examples, which calls the security of DNNs into question. Research on generating strongly aggressive adversarial examples can help ...
BAI Zhixu, WANG Hengjun, GUO Kexiang
doaj +1 more source
Semantic Adversarial Examples [PDF]
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire +2 more sources
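The "small perturbations that maximize the model prediction error" mentioned in this snippet are classically found with the Fast Gradient Sign Method (FGSM). A minimal sketch on a toy differentiable model follows; the logistic unit, weights, and input here are illustrative assumptions, not anything from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, eps):
    """One FGSM step: x' = x + eps * sign(dL/dx), L = cross-entropy loss."""
    p = sigmoid(w @ x + b)   # model's confidence in class 1
    grad_x = (p - y) * w     # gradient of the loss w.r.t. the input
    return x + eps * np.sign(grad_x)

w, b = np.array([1.0, -2.0]), 0.0   # made-up model parameters
x, y = np.array([0.5, -0.5]), 1.0   # made-up input with true label 1
x_adv = fgsm(x, y, w, b, eps=0.3)

p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
# the perturbation is bounded by eps per pixel, yet confidence drops
print(f"confidence clean={p_clean:.2f} adversarial={p_adv:.2f}")
```

The sign function keeps the perturbation inside an L-infinity ball of radius `eps`, which is why such examples can look unchanged to a human while still moving the loss sharply; semantic attacks like the one in this paper instead allow large but meaning-preserving changes (e.g., color shifts).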
Adversarial Examples: Opportunities and Challenges [PDF]
16 pages, 13 figures, 5 ...
Jiliang Zhang, Chen Li
openaire +3 more sources

