Results 51 to 60 of about 79,418
Neural networks are vulnerable to meticulously crafted adversarial examples, leading to high-confidence misclassifications in image classification tasks. Due to their consistency with regular input patterns and the absence of reliance on the target model ...
Xinlei Liu +6 more
doaj +1 more source
GUARD: Graph Universal Adversarial Defense
Graph convolutional networks (GCNs) have been shown to be vulnerable to small adversarial perturbations, which becomes a severe threat and largely limits their applications in security-critical scenarios. To mitigate such a threat, considerable research efforts have been devoted to increasing the robustness of GCNs against adversarial attacks. However, ...
Jintang Li +7 more
openaire +2 more sources
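The result above concerns defending GCNs against structural perturbations. Below is a minimal sketch of one common mitigation in this area, Jaccard-similarity edge pruning (Wu et al.), shown purely for illustration; it is not GUARD's actual method, and the function name and threshold are assumptions.

```python
# Illustrative sketch: Jaccard-similarity edge pruning, a simple
# pre-processing defense for GCNs -- not GUARD's actual method.
import numpy as np
import scipy.sparse as sp

def jaccard_prune(adj: sp.csr_matrix, features: np.ndarray, tau: float = 0.01) -> sp.csr_matrix:
    """Drop edges whose endpoint features have Jaccard similarity <= tau.

    Adversarial edges tend to connect dissimilar nodes, so pruning
    low-similarity edges cheaply removes many injected perturbations.
    """
    adj = adj.tocoo()
    keep_rows, keep_cols = [], []
    for u, v in zip(adj.row, adj.col):
        fu, fv = features[u] > 0, features[v] > 0   # binary feature masks
        inter = np.logical_and(fu, fv).sum()
        union = np.logical_or(fu, fv).sum()
        sim = inter / union if union > 0 else 0.0
        if sim > tau:
            keep_rows.append(u)
            keep_cols.append(v)
    data = np.ones(len(keep_rows))
    return sp.csr_matrix((data, (keep_rows, keep_cols)), shape=adj.shape)
```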
A Robust CycleGAN-L2 Defense Method for Speaker Recognition System
With the rapid development of voice technology, speaker recognition is becoming increasingly prevalent in our daily lives. However, with its increased usage, security issues have become more apparent.
Lingyi Yang +3 more
doaj +1 more source
Deep learning-based automatic modulation recognition networks are susceptible to adversarial attacks, exposing significant performance vulnerabilities. In response, we introduce a defense framework enriched by tailored autoencoder (AE) techniques.
Chao Han +5 more
doaj +1 more source
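The entry above describes an autoencoder-based defense. A common instantiation is a denoising AE placed in front of the classifier at inference time; the sketch below assumes this generic setup (the architecture, dimensions, and noise level are illustrative, not the paper's tailored design).

```python
# Illustrative sketch of an AE pre-processing defense: a denoising
# autoencoder is trained to reconstruct clean signals, then inserted
# in front of the classifier at inference time.
import torch
import torch.nn as nn

class DenoisingAE(nn.Module):
    def __init__(self, dim: int = 256, hidden: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, dim)

    def forward(self, x):
        return self.decoder(self.encoder(x))

def train_step(ae, clean, optimizer, noise_std=0.1):
    # Corrupt the input, then ask the AE to recover the clean signal;
    # at test time the same AE strips small adversarial perturbations.
    noisy = clean + noise_std * torch.randn_like(clean)
    loss = nn.functional.mse_loss(ae(noisy), clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Inference: call classifier(ae(x)) instead of classifier(x).
```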
Scaling provable adversarial defenses
Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks. In this paper, in an effort to scale these approaches to substantially larger models, we extend previous work in three ...
Eric Wong +3 more
openaire +2 more sources
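For context on what "provably robust to norm-bounded perturbation" means in practice, here is a minimal sketch of interval bound propagation (IBP), one standard certification technique. The cited paper uses a different (dual-network) construction, so this is an illustrative stand-in only.

```python
# Illustrative sketch of interval bound propagation (IBP) for
# certifying robustness to l_inf-bounded perturbations.
import torch
import torch.nn as nn

@torch.no_grad()
def ibp_bounds(layers, x, eps):
    """Propagate the box [x - eps, x + eps] through Linear/ReLU layers."""
    lo, hi = x - eps, x + eps
    for layer in layers:
        if isinstance(layer, nn.Linear):
            mid, rad = (lo + hi) / 2, (hi - lo) / 2
            mid = layer(mid)                      # affine map of the center
            rad = rad @ layer.weight.abs().T      # worst-case radius growth
            lo, hi = mid - rad, mid + rad
        elif isinstance(layer, nn.ReLU):
            lo, hi = lo.clamp(min=0), hi.clamp(min=0)
    return lo, hi  # element-wise bounds on the logits

# A prediction is certified if the true class's lower bound exceeds every
# other class's upper bound for all inputs in the perturbation ball.
```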
Exploring Synergy of Denoising and Distillation: Novel Method for Efficient Adversarial Defense
Escalating advancements in artificial intelligence (AI) have prompted significant security concerns, especially with its increasing commercialization. This necessitates research on safety measures for the secure use of AI models.
Inpyo Hong, Sokjoon Lee
doaj +1 more source
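The title above pairs denoising with distillation. The distillation half commonly refers to training on temperature-softened teacher outputs, as in classic defensive distillation; the sketch below shows only that generic loss, not the paper's specific denoising-distillation combination.

```python
# Illustrative sketch of distillation with softened labels, the mechanism
# classic defensive distillation relies on -- not this paper's exact method.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T: float = 4.0):
    """KL divergence between temperature-softened teacher and student outputs.

    Softened targets flatten the loss surface around training points,
    which blunts gradient-based adversarial example crafting.
    """
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T
```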
Stylized Pairing for Robust Adversarial Defense
Recent studies show that deep neural networks (DNNs)-based object recognition algorithms rely excessively on object textures rather than global object shapes, and that DNNs are also vulnerable to adversarial perturbations imperceptible to humans. Based on these two ...
Dejian Guan, Wentao Zhao, Xiao Liu
doaj +1 more source
Detecting adversarial examples with inductive Venn-ABERS predictors [PDF]
Inductive Venn-ABERS predictors (IVAPs) are a type of probabilistic predictor with the theoretical guarantee that their predictions are perfectly calibrated.
Goossens, Bart +2 more
core +1 more source
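The IVAP construction in this entry has a compact standard form: fit two isotonic regressions with the test point provisionally labelled 0 and then 1, yielding a calibrated probability interval. A minimal sketch using scikit-learn follows; the function name and example values are illustrative.

```python
# Illustrative sketch of an inductive Venn-ABERS predictor built on
# scikit-learn's isotonic regression.
import numpy as np
from sklearn.isotonic import IsotonicRegression

def venn_abers(cal_scores, cal_labels, test_score):
    """Return the multiprobability pair (p0, p1) for one test score.

    Two isotonic regressions are fitted, with the test point provisionally
    labelled 0 and then 1; [p0, p1] is a calibrated probability interval.
    """
    p = []
    for provisional in (0, 1):
        xs = np.append(cal_scores, test_score)
        ys = np.append(cal_labels, provisional)
        iso = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
        iso.fit(xs, ys)
        p.append(float(iso.predict([test_score])[0]))
    return tuple(p)  # a wide (p0, p1) interval flags unreliable inputs

# e.g. venn_abers(np.array([0.1, 0.4, 0.8]), np.array([0, 0, 1]), 0.6)
```

For adversarial-example detection, the width p1 - p0 serves as an uncertainty score: crafted inputs tend to land where calibration data is sparse, widening the interval.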
As safety-related applications, visual systems based on deep neural networks (DNNs) in modern unmanned aerial vehicles (UAVs) show adversarial vulnerability when performing real-time inference.
Zihao Lu +3 more
doaj +1 more source
MAD: Meta Adversarial Defense Benchmark
Adversarial training (AT) is a prominent technique employed by deep learning models to defend against adversarial attacks, and to some extent, enhance model robustness. However, there are three main drawbacks of the existing AT-based defense methods: expensive computational cost, low generalization ability, and the dilemma between the original model ...
X. Peng +4 more
openaire +2 more sources
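The adversarial training (AT) this benchmark evaluates has a well-known baseline form: craft a perturbation on the current model, then train on the perturbed batch. Below is a minimal one-step FGSM variant of that loop, assuming inputs in [0, 1]; it makes no claim about MAD's own protocol.

```python
# Illustrative sketch of single-step (FGSM) adversarial training,
# the baseline form of AT -- not the benchmark's specific protocol.
import torch
import torch.nn.functional as F

def fgsm_adv_train_step(model, x, y, optimizer, eps: float = 8 / 255):
    # Craft a one-step l_inf adversary against the current model...
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad, = torch.autograd.grad(loss, x_adv)
    x_adv = (x_adv + eps * grad.sign()).clamp(0, 1).detach()

    # ...then train on the adversarial batch. The inner attack is what
    # makes AT computationally expensive, the first drawback noted above.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```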

