Results 31 to 40 of about 5,561,446
Perceptually Similar Image Classification Adversarial Example Generation Model
Compared with algorithms that iteratively modify the original image, existing generator-based adversarial example generation models can effectively reduce the time needed to construct an adversarial example, but the obvious differences between ...
LI Junjie, WANG Qian
doaj
Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks [PDF]
Unlearnable example attacks are data poisoning techniques that can be used to safeguard public data against unauthorized use for training deep learning models.
Tianrui Qin et al.
semanticscholar
Dual-Targeted Textfooler Attack on Text Classification Systems
Deep neural networks perform well on classification tasks for images, audio, and text. However, such networks are vulnerable to adversarial examples.
Hyun Kwon
doaj
Enhancing Adversarial Example Transferability With an Intermediate Level Attack [PDF]
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model.
Qian Huang et al.
semanticscholar
A Hybrid Adversarial Attack for Different Application Scenarios
Adversarial attacks on natural language have been a hot topic in artificial intelligence security in recent years, mainly concerning methods for generating adversarial examples and their implementation. The purpose is to better deal with ...
Xiaohu Du et al.
doaj
Exploring Diverse Feature Extractions for Adversarial Audio Detection
Although deep learning models have exhibited excellent performance in various domains, recent studies have discovered that they are highly vulnerable to adversarial attacks.
Yujin Choi et al.
doaj
Generating Adversarial Examples with Adversarial Networks [PDF]
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples, which result from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high ...
Xiao, Chaowei et al.
openaire
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, and these manipulated instances can mislead DNN ...
Jianyi Liu et al.
doaj
Semantic Adversarial Examples [PDF]
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire
Survey of Image Adversarial Example Defense Techniques [PDF]
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of, and defense against, adversarial examples for deep neural networks is one of the hottest research topics.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj

