Results 1 to 10 of about 172,371

Minimum Adversarial Examples [PDF]

open access: yes (Entropy, 2022)
Deep neural networks used in information security face a severe threat from adversarial examples (AEs). Existing methods of AE generation use two optimization models: (1) taking the successful attack as the objective function and limiting ... (a generic gradient-based sketch of AE generation follows below)
Zhenyu Du, Fangzheng Liu, Xuehu Yan
doaj   +4 more sources
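The snippet above frames AE generation as a constrained optimization problem. As a generic illustration only, not the method of this paper, here is a minimal one-step FGSM-style sketch in PyTorch; `model`, `x`, `label`, and `eps` are assumed placeholders for a pretrained classifier, an input batch in [0, 1], its true labels, and the perturbation budget.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, label, eps=0.03):
    """One-step FGSM: perturb x along the sign of the loss gradient.
    `model`, `x`, `label`, and `eps` are illustrative placeholders,
    not taken from the cited paper."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    # Move each pixel in the direction that increases the loss,
    # then clamp back to the valid [0, 1] range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```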

Experiments on Adversarial Examples for Deep Learning Model Using Multimodal Sensors [PDF]

open access: yes (Sensors, 2022)
Recently, artificial intelligence (AI) based on IoT sensors has been widely deployed, which has increased the risk of attacks targeting AI. Adversarial examples are among the most serious types of attacks, in which the attacker designs inputs that can cause ...
Ade Kurniawan   +2 more
doaj   +2 more sources

Smooth adversarial examples [PDF]

open access: yes (EURASIP Journal on Information Security, 2020)
This paper investigates the visual quality of adversarial examples. Recent papers propose to smooth the perturbations to get rid of high-frequency artifacts (a minimal smoothing sketch follows below).
Hanwei Zhang   +3 more
doaj   +4 more sources
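As a rough illustration of smoothing a perturbation, not necessarily the scheme proposed in this paper, the sketch below low-pass filters the perturbation with a Gaussian blur; `kernel_size` and `sigma` are assumed hyperparameters.

```python
from torchvision.transforms.functional import gaussian_blur

def smooth_perturbation(x, x_adv, kernel_size=5, sigma=1.0):
    """Low-pass filter the adversarial perturbation delta = x_adv - x
    to suppress high-frequency artifacts; `kernel_size` and `sigma`
    are assumed hyperparameters, not values from the cited paper."""
    delta = x_adv - x
    delta = gaussian_blur(delta, kernel_size=[kernel_size, kernel_size],
                          sigma=[sigma, sigma])
    return (x + delta).clamp(0.0, 1.0)
```

Smoothing a perturbation after the fact can weaken the attack, which is why such smoothness constraints are usually folded into the optimization itself rather than applied as a post-processing step.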

Clustering Approach for Detecting Multiple Types of Adversarial Examples [PDF]

open access: yes (Sensors, 2022)
By applying intentional feature perturbations to the input of a deep learning model, an adversary generates an adversarial example that deceives the model.
Seok-Hwan Choi   +3 more
doaj   +2 more sources

Defending against and generating adversarial examples together with generative adversarial networks [PDF]

open access: yes (Scientific Reports)
Although deep neural networks have achieved great success in many tasks, they face security threats and are often fooled by adversarial examples, which are created by slightly modifying pixel values. To address these problems, a novel DG-
Ying Wang, Xiao Liao, Wei Cui, Yang Yang
doaj   +2 more sources

Adversarial Examples Detection Method Based on Image Denoising and Compression [PDF]

open access: yes (Jisuanji gongcheng, 2023)
Numerous deep learning achievements in the field of computer vision have been widely applied in real life. However, adversarial examples can cause deep learning models to produce incorrect predictions with high confidence, resulting in serious security consequences (a simple compression-based detection check is sketched below).
Feiyu WANG, Fan ZHANG, Jiayu DU, Hongle LEI, Xiaofeng QI
doaj   +1 more source
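As one simple instance of a transformation-based check, only loosely related to the denoising-and-compression method of the cited paper, the sketch below flags an input whose prediction changes after a JPEG round trip; `predict` is an assumed classifier callable.

```python
import io
import numpy as np
from PIL import Image

def jpeg_round_trip(img_uint8, quality=75):
    """Compress and decompress an image with JPEG; this shows only the
    compression half of a denoising-plus-compression pipeline."""
    buf = io.BytesIO()
    Image.fromarray(img_uint8).save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.array(Image.open(buf))

def looks_adversarial(predict, img_uint8, quality=75):
    """Flag an input whose predicted class changes after compression.
    `predict` is an assumed callable mapping an HxWx3 uint8 array to a
    class id; it is not part of the cited paper."""
    return predict(img_uint8) != predict(jpeg_round_trip(img_uint8, quality))
```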

Survey of Image Adversarial Example Defense Techniques [PDF]

open access: yes (Jisuanji kexue yu tansuo, 2023)
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of and defense against adversarial examples for deep neural networks is one of the most active research topics.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj   +1 more source

Fooling Examples: Another Intriguing Property of Neural Networks

open access: yes (Sensors, 2023)
Neural networks have been proven to be vulnerable to adversarial examples: inputs that remain recognizable to humans, yet cause neural networks to give incorrect predictions.
Ming Zhang, Yongkang Chen, Cheng Qian
doaj   +1 more source

Targeted Universal Adversarial Examples for Remote Sensing

open access: yes (Remote Sensing, 2022)
Researchers are focusing on the vulnerabilities of deep learning models for remote sensing; various attack methods have been proposed, including universal adversarial examples (a generic sketch of a targeted universal perturbation follows below).
Tao Bai, Hao Wang, Bihan Wen
doaj   +1 more source
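For context, a universal adversarial example is a single perturbation that is added to every input. The sketch below learns a targeted universal perturbation by plain gradient descent on a batched data loader; it is an assumption-laden illustration, not the attack proposed in the cited paper, and `loader`, `eps`, `step`, and `epochs` are hypothetical names.

```python
import torch
import torch.nn.functional as F

def targeted_universal_perturbation(model, loader, target_class,
                                    eps=0.05, step=0.005, epochs=5):
    """Learn a single image-agnostic perturbation that pushes every input
    toward `target_class`. All names and hyperparameters here are
    assumptions for illustration, not the cited paper's algorithm."""
    delta = None
    for _ in range(epochs):
        for x, _ in loader:
            if delta is None:
                # One perturbation shared across all images.
                delta = torch.zeros_like(x[:1], requires_grad=True)
            target = torch.full((x.size(0),), target_class, dtype=torch.long)
            loss = F.cross_entropy(model((x + delta).clamp(0, 1)), target)
            loss.backward()
            with torch.no_grad():
                delta -= step * delta.grad.sign()  # descend the targeted loss
                delta.clamp_(-eps, eps)            # keep the perturbation small
            delta.grad.zero_()
    return delta.detach()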

Adversarial Examples Generation Method Based on Image Color Random Transformation [PDF]

open access: yes (Jisuanji kexue, 2023)
Although deep neural networks (DNNs) perform well on most classification tasks, they are vulnerable to adversarial examples, which calls the security of DNNs into question. Research on generating strongly aggressive adversarial examples can help ... (one illustrative color-transformation attack is sketched below)
BAI Zhixu, WANG Hengjun, GUO Kexiang
doaj   +1 more source
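One plausible reading of a color-randomization attack, not necessarily the method of the cited paper, is to average gradients over randomly color-jittered copies of the input before each attack step, which tends to improve transferability; all hyperparameters below are assumptions.

```python
import torch
import torch.nn.functional as F
from torchvision.transforms import ColorJitter

def color_randomized_attack(model, x, label, eps=0.03, steps=10, n_aug=4):
    """Iterative FGSM-style attack that averages gradients over randomly
    color-jittered copies of the input. This is one illustrative reading of
    color random transformation; the cited paper's exact method may differ."""
    jitter = ColorJitter(brightness=0.3, contrast=0.3, saturation=0.3)
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x_adv)
        for _ in range(n_aug):
            # Each copy sees a different random color transformation.
            loss = F.cross_entropy(model(jitter(x_adv)), label)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = (x_adv + (eps / steps) * grad.sign()).clamp(0, 1)
    return x_adv.detach()
```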
