Results 21 to 30 of about 237,731 (274)

Generating Adversarial Examples with Adversarial Networks [PDF]

open access: yes, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high ...
Xiao, Chaowei   +5 more
openaire   +2 more sources
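
The small-magnitude perturbation attacks this entry refers to can be illustrated with the classic fast gradient sign method (FGSM), one concrete way of "adding small-magnitude perturbations to inputs". The sketch below applies a single FGSM step to a toy logistic model with a hand-derived gradient; the model, weights, and epsilon are illustrative choices, not taken from the paper.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_attack(x, y, w, b, eps):
    """One FGSM step on a logistic model: x_adv = x + eps * sign(dL/dx)."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    grad = [(p - y) * wi for wi in w]  # gradient of cross-entropy loss w.r.t. x
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(g) for xi, g in zip(x, grad)]

# Toy model: classify as positive when w . x + b > 0.
w, b = [2.0, -1.0], 0.0
x, y = [0.3, 0.1], 1  # clean input, correctly classified as positive
x_adv = fgsm_attack(x, y, w, b, eps=0.5)
```

The step size is deliberately large so the effect is visible: each coordinate moves by at most eps, yet the perturbed input crosses the decision boundary.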

A Two-Stage Generative Adversarial Networks With Semantic Content Constraints for Adversarial Example Generation

open access: yes, IEEE Access, 2020
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, and these manipulated instances can mislead DNN ...
Jianyi Liu   +4 more
doaj   +1 more source

Semantic Adversarial Examples [PDF]

open access: yes, 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2018
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire   +2 more sources
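
Semantic adversarial examples of the kind this entry describes replace tiny pixel perturbations with large but semantics-preserving color transformations. A minimal sketch of that idea, assuming pixels are given as RGB triples in [0, 1] (the pixel representation and values are illustrative, not the paper's exact pipeline):

```python
import colorsys

def hue_shift(rgb_pixels, shift):
    """Shift the hue of every RGB pixel (components in [0, 1]) by `shift` (mod 1).

    A large hue shift changes colors drastically while leaving edges and
    shapes, and thus the image semantics, intact.
    """
    out = []
    for r, g, b in rgb_pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        out.append(colorsys.hsv_to_rgb((h + shift) % 1.0, s, v))
    return out

# Pure red shifted by a third of the hue circle becomes pure green.
shifted = hue_shift([(1.0, 0.0, 0.0)], 1 / 3)
```

A model that relies heavily on color statistics can be fooled by such a shift even though a human still recognizes the object.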

On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification

open access: yes, Applied Sciences, 2020
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks.
Sanglee Park, Jungmin So
doaj   +1 more source
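
Adversarial training, the defense this entry evaluates, replaces (or augments) clean training inputs with adversarially perturbed ones at every step. A minimal sketch on a toy logistic model, using a one-step FGSM attack as the inner maximization; the model, data, and hyperparameters are all illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fgsm(x, y, w, b, eps):
    """One-step FGSM perturbation of input x against the current model."""
    p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
    g = [(p - y) * wi for wi in w]
    return [xi + eps * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, g)]

def adversarial_train(data, eps=0.1, lr=0.5, epochs=200):
    """Train on worst-case (FGSM-perturbed) inputs instead of clean ones."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in data:
            x_adv = fgsm(x, y, w, b, eps)  # inner maximization (one step)
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x_adv)) + b)
            w = [wi - lr * (p - y) * xi for wi, xi in zip(w, x_adv)]
            b -= lr * (p - y)
    return w, b

# Linearly separable toy data: label 1 for positive first coordinate.
data = [([1.0, 0.0], 1), ([-1.0, 0.0], 0), ([0.8, 0.2], 1), ([-0.9, -0.1], 0)]
w, b = adversarial_train(data)
```

Because the model only ever sees perturbed inputs, it learns a decision boundary with a margin of at least eps around the training points.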

Improving Adversarial Robustness via Attention and Adversarial Logit Pairing

open access: yes, Frontiers in Artificial Intelligence, 2022
Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are vulnerable to adversarial example attacks. In this paper, we develop improved techniques for defending ...
Xingjian Li   +4 more
doaj   +1 more source
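
Adversarial logit pairing, one of the techniques named in this entry's title, adds a penalty that encourages a clean input and its adversarial counterpart to produce similar logits. A hedged sketch of the loss combination; the logit vectors, task loss, and weight `lam` are made-up values, not from the paper:

```python
def logit_pairing_loss(logits_clean, logits_adv, task_loss, lam=0.5):
    """Adversarial logit pairing: task loss plus a mean-squared penalty that
    pulls the logits of the clean and adversarial inputs toward each other."""
    pair = sum((c - a) ** 2 for c, a in zip(logits_clean, logits_adv)) / len(logits_clean)
    return task_loss + lam * pair

# Example: two-class logits differing by 1 in each component.
loss = logit_pairing_loss([2.0, -1.0], [1.0, 0.0], task_loss=0.3, lam=0.5)
```

Minimizing the pairing term discourages the network from assigning wildly different confidences to inputs that differ only by a small perturbation.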

Survey of Image Adversarial Example Defense Techniques [PDF]

open access: yes, Jisuanji kexue yu tansuo, 2023
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of and defense against adversarial examples for deep neural networks is one of the current research hot spots.
LIU Ruiqi, LI Hu, WANG Dongxia, ZHAO Chongyang, LI Boyu
doaj   +1 more source

Not all adversarial examples require a complex defense: identifying over-optimized adversarial examples with IQR-based logit thresholding [PDF]

open access: yes, 2019
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point ...
De Neve, Wesley   +2 more
core   +2 more sources
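
The IQR-based logit thresholding in this entry's title can be pictured as a Tukey-fence outlier test on the network's top logit: over-optimized adversarial examples tend to push one logit far above the range observed on clean data. A sketch under that reading; the clean logit values and the conventional 1.5 multiplier are illustrative assumptions:

```python
from statistics import quantiles

def iqr_threshold(clean_max_logits, k=1.5):
    """Upper Tukey fence Q3 + k * IQR over clean max-logit values."""
    q1, _, q3 = quantiles(clean_max_logits, n=4, method="inclusive")
    return q3 + k * (q3 - q1)

def is_over_optimized(logits, threshold):
    # Flag inputs whose top logit is implausibly large for clean data.
    return max(logits) > threshold

clean = [2.1, 2.4, 1.9, 2.6, 2.2, 2.8, 2.0, 2.5]  # hypothetical clean max logits
t = iqr_threshold(clean)
```

Such a check is cheap because it needs only the logits of a forward pass plus quartile statistics gathered once on clean data.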

A Robust Adversarial Example Attack Based on Video Augmentation

open access: yes, Applied Sciences, 2023
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems.
Mingyong Yin   +3 more
doaj   +1 more source

Adversarial Examples: Opportunities and Challenges [PDF]

open access: yes, IEEE Transactions on Neural Networks and Learning Systems, 2019
16 pages, 13 figures, 5 ...
Jiliang Zhang, Chen Li
openaire   +3 more sources

Really natural adversarial examples [PDF]

open access: yes, International Journal of Machine Learning and Cybernetics, 2021
The phenomenon of adversarial examples has become one of the most intriguing topics associated with deep learning. The so-called adversarial attacks have the ability to fool deep neural networks with inappreciable perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily ...
Anibal Pedraza   +2 more
openaire   +1 more source
