Results 31 to 40 of about 5,561,446

Perceptually Similar Image Classification Adversarial Example Generation Model

open access: yes | Jisuanji kexue yu tansuo, 2020
Compared with algorithms that iteratively modify the original image, existing generator-based adversarial example generation models can effectively reduce the time needed to construct an adversarial example, but the obvious differences between ...
LI Junjie, WANG Qian
doaj   +1 more source

Learning the Unlearnable: Adversarial Augmentations Suppress Unlearnable Example Attacks [PDF]

open access: yes | arXiv.org, 2023
Unlearnable example attacks are data poisoning techniques that can be used to safeguard public data against unauthorized use for training deep learning models.
Tianrui Qin   +4 more
semanticscholar   +1 more source

Dual-Targeted Textfooler Attack on Text Classification Systems

open access: yes | IEEE Access, 2023
Deep neural networks provide good performance on classification tasks such as those for image, audio, and text classification. However, such neural networks are vulnerable to adversarial examples.
Hyun Kwon
doaj   +1 more source

Enhancing Adversarial Example Transferability With an Intermediate Level Attack [PDF]

open access: yes | IEEE International Conference on Computer Vision, 2019
Neural networks are vulnerable to adversarial examples, malicious inputs crafted to fool trained models. Adversarial examples often exhibit black-box transfer, meaning that adversarial examples for one model can fool another model.
Qian Huang   +5 more
semanticscholar   +1 more source

A Hybrid Adversarial Attack for Different Application Scenarios

open access: yes | Applied Sciences, 2020
Adversarial attacks against natural language have been a hot topic in artificial intelligence security in recent years, focusing mainly on methods for generating adversarial examples and their implementation. The purpose is to better deal with ...
Xiaohu Du   +6 more
doaj   +1 more source

Exploring Diverse Feature Extractions for Adversarial Audio Detection

open access: yes | IEEE Access, 2023
Although deep learning models have exhibited excellent performance in various domains, recent studies have discovered that they are highly vulnerable to adversarial attacks.
Yujin Choi   +3 more
doaj   +1 more source

Generating Adversarial Examples with Adversarial Networks [PDF]

open access: yes | Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high ...
Xiao, Chaowei   +5 more
openaire   +2 more sources
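Several of the abstracts above describe adversarial examples as inputs with small-magnitude perturbations added in the direction that increases the model's loss. As a minimal, self-contained sketch of that idea (a toy logistic-regression "model" with made-up weights and inputs, illustrating a single FGSM-style gradient-sign step rather than the GAN-based method of the paper above):

```python
import numpy as np

# Toy logistic-regression "model": fixed illustrative weights, sigmoid output.
w = np.array([1.0, -2.0, 0.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability of class 1 for input x.
    return sigmoid(w @ x + b)

def fgsm_step(x, y, eps):
    """One fast-gradient-sign step: x + eps * sign(dL/dx).

    For binary cross-entropy on a sigmoid, the input gradient is
    dL/dx = (p - y) * w, so no autodiff framework is needed here.
    """
    p = predict(x)
    grad = (p - y) * w
    return x + eps * np.sign(grad)

x = np.array([0.5, 0.2, -0.1])   # clean input, true label y = 1
y = 1.0
x_adv = fgsm_step(x, y, eps=0.3)

print(predict(x))      # clean input is (weakly) classified as class 1
print(predict(x_adv))  # perturbed input is pushed below the 0.5 threshold
```

The same gradient-sign construction carries over to deep networks, where the input gradient is obtained via backpropagation instead of the closed form used in this toy sketch.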

A Two-Stage Generative Adversarial Networks With Semantic Content Constraints for Adversarial Example Generation

open access: yes | IEEE Access, 2020
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, and these manipulated instances can mislead DNN ...
Jianyi Liu   +4 more
doaj   +1 more source

Semantic Adversarial Examples [PDF]

open access: yes | 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire   +2 more sources

Survey of Image Adversarial Example Defense Techniques [PDF]

open access: yes | Jisuanji kexue yu tansuo, 2023
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of and defense against adversarial examples for deep neural networks is one of the most active research topics.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj   +1 more source
