Results 61 to 70 of about 173,113

Deep neural rejection against adversarial examples [PDF]

open access: yesEURASIP Journal on Information Security, 2020
Abstract: Despite the impressive performance reported by deep neural networks in different application domains, they remain largely vulnerable to adversarial examples, i.e., input samples that are carefully perturbed to cause misclassification at test time.
Angelo Sotgiu   +6 more
openaire   +4 more sources

A Novel Adversarial Example Detection Method Based on Frequency Domain Reconstruction for Image Sensors

open access: yesSensors
Convolutional neural networks (CNNs) have been extensively used in numerous remote sensing image detection tasks owing to their exceptional performance.
Shuaina Huang, Zhiyong Zhang, Bin Song
doaj   +1 more source

Exploring Diverse Feature Extractions for Adversarial Audio Detection

open access: yesIEEE Access, 2023
Although deep learning models have exhibited excellent performance in various domains, recent studies have discovered that they are highly vulnerable to adversarial attacks.
Yujin Choi   +3 more
doaj   +1 more source

A Gradual Adversarial Training Method for Semantic Segmentation

open access: yesRemote Sensing
Deep neural networks (DNNs) have achieved great success in various computer vision tasks. However, they are susceptible to artificially designed adversarial perturbations, which limit their deployment in security-critical applications.
Yinkai Zan, Pingping Lu, Tingyu Meng
doaj   +1 more source

Attack Selectivity of Adversarial Examples in Remote Sensing Image Scene Classification

open access: yesIEEE Access, 2020
Remote sensing image (RSI) scene classification is a foundational technology for ground object detection, land-use management, and geographic analysis.
Li Chen   +7 more
doaj   +1 more source

CommanderUAP: a practical and transferable universal adversarial attacks on speech recognition models

open access: yesCybersecurity
Most adversarial attacks against speech recognition systems rely on input-specific perturbations, which the adversary must generate separately for each normal example to achieve the attack.
Zheng Sun   +4 more
doaj   +1 more source

Image Classification Adversarial Example Defense Method Based on Conditional Diffusion Model [PDF]

open access: yesJisuanji gongcheng
Deep-learning models have achieved impressive results in fields such as image classification; however, they remain vulnerable to interference and threats from adversarial examples.
CHEN Zimin, GUAN Zhitao
doaj   +1 more source

Game Theoretic Mixed Experts for Combinational Adversarial Machine Learning

open access: yesIEEE Access
Recent advances in adversarial machine learning have shown that defenses previously considered robust are actually susceptible to adversarial attacks which are specifically customized to target their weaknesses.
Kaleel Mahmood   +5 more
doaj   +1 more source

Adversarial Robustness by One Bit Double Quantization for Visual Classification

open access: yesIEEE Access, 2019
In this paper, we propose a novel robust visual classification framework that uses double quantization (dquant) to defend against adversarial examples in a specific attack scenario called “subsequent adversarial examples” where test images ...
Maungmaung Aprilpyone   +2 more
doaj   +1 more source

Provably Robust Adversarial Examples

open access: yes, 2020
International Conference on Learning Representations (ICLR 2022)
Dimitrov, Dimitar Iliev   +3 more
openaire   +3 more sources
