Results 51 to 60 of about 79,418

Mape: defending against transferable adversarial attacks using multi-source adversarial perturbations elimination

open access: yesComplex & Intelligent Systems
Neural networks are vulnerable to meticulously crafted adversarial examples, leading to high-confidence misclassifications in image classification tasks. Due to their consistency with regular input patterns and the absence of reliance on the target model ...
Xinlei Liu   +6 more
doaj   +1 more source
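As a quick illustration of the attack class these entries defend against (not the MAPE method itself), the following is a minimal FGSM-style sketch on a toy linear classifier; the weights, input, and epsilon are all illustrative assumptions.

```python
import numpy as np

# Toy linear classifier: score = w . x; predict class 1 if score > 0.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, 0.1, 0.2])   # clean input, score = 0.2 -> class 1

def predict(v):
    return int(w @ v > 0)

# FGSM-style perturbation: for this linear model the gradient of the
# score w.r.t. the input is just w, so stepping by eps * sign(gradient)
# against the predicted class maximally shifts the score within an
# L-infinity budget of eps.
eps = 0.2
x_adv = x - eps * np.sign(w)    # push the score downward

print(predict(x), predict(x_adv))   # prints: 1 0
```

A small, bounded perturbation (here 0.2 per coordinate) is enough to flip the prediction, which is the vulnerability the listed defenses target.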

GUARD: Graph Universal Adversarial Defense

open access: yesProceedings of the 32nd ACM International Conference on Information and Knowledge Management, 2023
Graph convolutional networks (GCNs) have been shown to be vulnerable to small adversarial perturbations, which becomes a severe threat and largely limits their applications in security-critical scenarios. To mitigate such a threat, considerable research efforts have been devoted to increasing the robustness of GCNs against adversarial attacks. However, ...
Jintang Li   +7 more
openaire   +2 more sources

A Robust CycleGAN-L2 Defense Method for Speaker Recognition System

open access: yesIEEE Access, 2023
With the rapid development of voice technology, speaker recognition is becoming increasingly prevalent in our daily lives. However, with its increased usage, security issues have become more apparent.
Lingyi Yang   +3 more
doaj   +1 more source

Adversarial Example Detection and Restoration Defensive Framework for Signal Intelligent Recognition Networks

open access: yesApplied Sciences, 2023
Deep learning-based automatic modulation recognition networks are susceptible to adversarial attacks, posing significant performance vulnerabilities. In response, we introduce a defense framework enriched by tailored autoencoder (AE) techniques.
Chao Han   +5 more
doaj   +1 more source

Scaling provable adversarial defenses

open access: yes, 2018
Recent work has developed methods for learning deep network classifiers that are provably robust to norm-bounded adversarial perturbation; however, these methods are currently only possible for relatively small feedforward networks. In this paper, in an effort to scale these approaches to substantially larger models, we extend previous work in three ...
Wong, Eric   +3 more
openaire   +2 more sources

Exploring Synergy of Denoising and Distillation: Novel Method for Efficient Adversarial Defense

open access: yesApplied Sciences
Escalating advancements in artificial intelligence (AI) have prompted significant security concerns, especially with its increasing commercialization. This necessitates research on safety measures to securely utilize AI models.
Inpyo Hong, Sokjoon Lee
doaj   +1 more source

Stylized Pairing for Robust Adversarial Defense

open access: yesApplied Sciences, 2022
Recent studies show that object recognition algorithms based on deep neural networks (DNNs) rely overly on object textures rather than global object shapes, and that DNNs are also vulnerable to adversarial perturbations imperceptible to humans. Based on these two ...
Dejian Guan, Wentao Zhao, Xiao Liu
doaj   +1 more source

Detecting adversarial examples with inductive Venn-ABERS predictors [PDF]

open access: yes, 2019
Inductive Venn-ABERS predictors (IVAPs) are probabilistic predictors with the theoretical guarantee that their predictions are perfectly calibrated.
Goossens, Bart   +2 more
core   +1 more source

Adversarial Robust Aerial Image Recognition Based on Reactive-Proactive Defense Framework with Deep Ensembles

open access: yesRemote Sensing, 2023
As safety-related applications, visual systems based on deep neural networks (DNNs) in modern unmanned aerial vehicles (UAVs) show adversarial vulnerability when performing real-time inference.
Zihao Lu   +3 more
doaj   +1 more source

MAD: Meta Adversarial Defense Benchmark

open access: yes, 2023
Adversarial training (AT) is a prominent technique employed by deep learning models to defend against adversarial attacks, and to some extent, enhance model robustness. However, there are three main drawbacks of the existing AT-based defense methods: expensive computational cost, low generalization ability, and the dilemma between the original model ...
Peng, X.   +4 more
openaire   +2 more sources
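The adversarial training (AT) this entry benchmarks alternates an inner attack step with an outer model update. As a hedged sketch under illustrative assumptions (toy 2-D data, a logistic model, an FGSM-style inner step; this is not the MAD benchmark's code, and one drawback the abstract notes, the extra cost of the inner step, is visible in the loop):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: class 1 clustered near (+1, +1), class 0 near (-1, -1).
X = np.concatenate([rng.normal(1.0, 0.5, (50, 2)),
                    rng.normal(-1.0, 0.5, (50, 2))])
y = np.concatenate([np.ones(50), np.zeros(50)])

w = np.zeros(2)
eps, lr = 0.1, 0.5

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(100):
    # Inner step: craft FGSM-style perturbations against the current model.
    # For logistic loss, d(loss)/dx = (p - y) * w, and its sign gives the
    # worst-case L-infinity perturbation direction per example.
    p = sigmoid(X @ w)
    X_adv = X + eps * np.sign((p - y)[:, None] * w[None, :])
    # Outer step: gradient descent on the adversarially perturbed batch.
    p_adv = sigmoid(X_adv @ w)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)

acc = np.mean((sigmoid(X @ w) > 0.5) == y)
```

Each epoch pays for both the attack and the update, which is the computational-cost drawback the abstract refers to; on this separable toy data the trained model still classifies the clean points accurately.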
