Results 61 to 70 of about 79,418

A divide-and-conquer reconstruction method for defending against adversarial example attacks

open access: yesVisual Intelligence
In recent years, defending against adversarial examples has gained significant importance, leading to a growing body of research in this area. Among these studies, pre-processing defense approaches have emerged as a prominent research direction. However, ...
Xiyao Liu   +5 more
doaj   +1 more source

Efficient Defenses Against Adversarial Attacks [PDF]

open access: yesProceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 2017
Following the recent adoption of deep neural networks (DNN) across a wide range of applications, adversarial attacks against these models have proven to be an indisputable threat. Adversarial samples are crafted with a deliberate intention of undermining a system. In the case of DNNs, the lack of better understanding of their working has prevented the ...
Zantedeschi, Valentina   +2 more
openaire   +2 more sources

A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models

open access: yesIET Communications
Automatic modulation classification models based on deep learning models are at risk of being interfered with by adversarial attacks. In an adversarial attack, the attacker causes the classification model to misclassify the received signal by adding carefully ...
Fanghao Xu   +5 more
doaj   +1 more source

PuVAE: A Variational Autoencoder to Purify Adversarial Examples

open access: yesIEEE Access, 2019
Deep neural networks are widely used and exhibit excellent performance in many areas. However, they are vulnerable to adversarial attacks that compromise networks at inference time by applying elaborately designed perturbations to input data.
Uiwon Hwang   +4 more
doaj   +1 more source

Attention, Please! Adversarial Defense via Attention Rectification and Preservation

open access: yes, 2019
This study provides a new understanding of the adversarial attack problem by examining the correlation between adversarial attack and visual attention change.
Jing, Liping   +6 more
core  

Defensive Dual Masking for Robust Adversarial Defense

open access: yesComputational Linguistics
Abstract Adversarial defenses for textual data have gained considerable attention in recent years due to the increasing vulnerability of Natural Language Processing (NLP) models to adversarial attacks. These attacks exploit subtle perturbations in input text to deceive models, posing significant challenges to model robustness and ...
Yang, Wangli   +3 more
openaire   +2 more sources

Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN

open access: yes, 2017
We propose a novel technique to make neural networks robust to adversarial examples using a generative adversarial network. We alternately train both classifier and generator networks.
Han, Sungyeob   +2 more
core  

Knowing is Half the Battle: Enhancing Clean Data Accuracy of Adversarial Robust Deep Neural Networks via Dual-Model Bounded Divergence Gating

open access: yesIEEE Access
Significant advances have been made in recent years in improving the robustness of deep neural networks, particularly under adversarial machine learning scenarios where the data has been contaminated to fool networks into making undesirable predictions ...
Hossein Aboutalebi   +3 more
doaj   +1 more source

Defense-guided Transferable Adversarial Attacks

open access: yes, 2020
Though deep neural networks perform challenging tasks excellently, they are susceptible to adversarial examples, which mislead classifiers by applying human-imperceptible perturbations on clean inputs. Under the query-free black-box scenario, adversarial examples are hard to transfer to unknown models, and several methods have been proposed with the ...
Zhang, Zifei   +3 more
openaire   +2 more sources
