Results 11 to 20 of about 173,113

Minimum Adversarial Examples [PDF]

open access: yes · Entropy, 2022
Deep neural networks in information security face a severe threat from adversarial examples (AEs). Existing AE-generation methods use two optimization models: (1) taking a successful attack as the objective function and limiting ...
Zhenyu Du, Fangzheng Liu, Xuehu Yan
doaj   +4 more sources

Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors [PDF]

open access: yes · Sensors, 2022
Recently, artificial intelligence (AI) based on IoT sensors has been widely adopted, increasing the risk of attacks targeting AI. Adversarial examples are among the most serious such attacks, in which the attacker designs inputs that can cause ...
Ade Kurniawan   +2 more
doaj   +2 more sources

Smooth adversarial examples [PDF]

open access: yes · EURASIP Journal on Information Security, 2020
This paper investigates the visual quality of adversarial examples. Recent papers propose smoothing the perturbations to remove high-frequency artifacts.
Hanwei Zhang   +3 more
doaj   +4 more sources

Clustering Approach for Detecting Multiple Types of Adversarial Examples [PDF]

open access: yes · Sensors, 2022
By intentionally perturbing the input features of a deep learning model, an adversary generates an adversarial example that deceives the model.
Seok-Hwan Choi   +3 more
doaj   +2 more sources

Targeted Universal Adversarial Examples for Remote Sensing

open access: yes · Remote Sensing, 2022
Researchers are focusing on the vulnerabilities of deep learning models for remote sensing; various attack methods have been proposed, including universal adversarial examples.
Tao Bai, Hao Wang, Bihan Wen
doaj   +3 more sources

Generating Adversarial Examples with Adversarial Networks [PDF]

open access: yes · Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2019
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples created by adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results.
Warren He   +5 more
core   +2 more sources

Defending against and generating adversarial examples together with generative adversarial networks [PDF]

open access: yes · Scientific Reports
Although deep neural networks have achieved great success in many tasks, they face security threats and are often fooled by adversarial examples, which are created by slightly modifying pixel values. To address these problems, a novel DG- ...
Ying Wang, Xiao Liao, Wei Cui, Yang Yang
doaj   +2 more sources

Understanding adversarial robustness against on-manifold adversarial examples

open access: yes · Pattern Recognition
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples: a well-trained model can be easily attacked by adding small perturbations to the original data. One hypothesis for the existence of adversarial examples is the off-manifold assumption: adversarial examples lie off the data manifold. However, recent research showed ...
Jiancong Xiao   +4 more
openaire   +4 more sources

Robust Decision Trees Against Adversarial Examples

open access: yes · 2019
Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue for tree-based models, and on how to make them robust against adversarial examples, is ...
Duane Boning   +3 more
core   +3 more sources

Adversarial Examples Detection Method Based on Image Denoising and Compression [PDF]

open access: yes · Jisuanji gongcheng, 2023
Numerous deep learning achievements in computer vision have been widely applied in real life. However, adversarial examples can cause deep learning models to produce false results with high confidence, resulting in serious security consequences.
Feiyu Wang, Fan Zhang, Jiayu Du, Hongle Lei, Xiaofeng Qi
doaj   +1 more source
