Results 31 to 40 of about 237,731

Adversarial Examples for Generative Models [PDF]

open access: yes, 2018 IEEE Security and Privacy Workshops (SPW), 2018
We explore methods of producing adversarial examples on deep generative models such as the variational autoencoder (VAE) and the VAE-GAN. Deep learning architectures are known to be vulnerable to adversarial examples, but previous work has focused on the application of adversarial examples to classification tasks.
Kos, Jernej, Fischer, Ian, Song, Dawn
openaire   +2 more sources
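As background for this line of work, most adversarial-example attacks perturb an input along the sign of the loss gradient. The sketch below is a minimal, generic gradient-sign step on a toy logistic model, not the paper's VAE/VAE-GAN attack; the weights, input, and epsilon are illustrative assumptions.

```python
import numpy as np

# Toy logistic "model" with made-up weights; x is a made-up input.
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.1
x = rng.normal(size=4)
y = 1.0  # true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x):
    # Binary cross-entropy of the logistic model's prediction.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

# Gradient of the loss w.r.t. the input (analytic for this linear model).
grad_x = (sigmoid(w @ x + b) - y) * w

eps = 0.25
x_adv = x + eps * np.sign(grad_x)  # gradient-sign step: ascend the loss

assert loss(x_adv) > loss(x)  # the perturbation increases the loss
```

The same sign-of-gradient idea underlies attacks on generative models; only the loss being ascended (e.g. a reconstruction or latent-space objective) changes.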

Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network

open access: yes, IEEE Access, 2018
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, pattern analysis, and intrusion detection. Recently, the adversarial example attack, in which the input data are only slightly modified, although not an issue for ...
Hyun Kwon   +4 more
doaj   +1 more source

Boundary Adversarial Examples Against Adversarial Overfitting

open access: yes, 2022
Standard adversarial training approaches suffer from robust overfitting where the robust accuracy decreases when models are adversarially trained for too long. The origin of this problem is still unclear and conflicting explanations have been reported, i.e., memorization effects induced by large loss data or because of small loss data and growing ...
Hameed, Muhammad Zaid, Buesser, Beat
openaire   +2 more sources

Adversarial examples in remote sensing [PDF]

open access: yes, Proceedings of the 26th ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, 2018
This paper considers attacks against machine learning algorithms used in remote sensing applications, a domain that presents a suite of challenges that are not fully addressed by current research focused on natural image data such as ImageNet. In particular, we present a new study of adversarial examples in the context of satellite image classification
Czaja, Wojciech   +4 more
openaire   +2 more sources

Optimized Adversarial Example With Classification Score Pattern Vulnerability Removed

open access: yes, IEEE Access, 2022
Neural networks perform well on recognition tasks such as image recognition and speech recognition, as well as on pattern analysis and other tasks in fields related to artificial intelligence.
Hyun Kwon, Kyoungmin Ko, Sunghwan Kim
doaj   +1 more source

Adversarial Example Games

open access: yes, 2020
Appears in: Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Bose, Avishek Joey   +6 more
openaire   +2 more sources

Adversarial Examples Detection Beyond Image Space [PDF]

open access: yes, ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2021
To appear in ICASSP ...
Chen, Kejiang   +6 more
openaire   +2 more sources

DTFA: Adversarial attack with discrete cosine transform noise and target features on deep neural networks

open access: yes, IET Image Processing, 2023
Image recognition with deep neural networks is vulnerable to adversarial example attacks. Adversarial attack accuracy is low when only limited queries to the target are allowed in the current black-box setting.
Dong Yang, Wei Chen, Songjie Wei
doaj   +1 more source
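To illustrate the transform-domain idea behind attacks like the one above, the sketch below perturbs the low-frequency DCT coefficients of a toy image block and inverts the transform. The 8x8 block size, the 3x3 low-frequency region, and the noise scale are illustrative assumptions, not the paper's settings.

```python
import numpy as np

N = 8  # assumed block size

def dct_matrix(n):
    # Orthonormal DCT-II matrix: C @ x is the 1-D DCT of x.
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

C = dct_matrix(N)
rng = np.random.default_rng(1)
block = rng.random((N, N))  # toy 8x8 image block in [0, 1)

coeffs = C @ block @ C.T             # 2-D DCT of the block
noise = np.zeros_like(coeffs)
noise[:3, :3] = 0.05 * rng.standard_normal((3, 3))  # perturb low frequencies
adv_block = C.T @ (coeffs + noise) @ C               # inverse 2-D DCT

# Orthonormality check: round-tripping without noise recovers the block.
assert np.allclose(C.T @ (C @ block @ C.T) @ C, block)
```

Perturbing a few low-frequency coefficients spreads a small, smooth change over the whole block, which is why transform-domain noise can stay visually inconspicuous while still shifting a classifier's decision.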

Offense and defence against adversarial sample: A reinforcement learning method in energy trading market

open access: yes, Frontiers in Energy Research, 2023
The energy trading market that supports free bidding among electricity users is currently the key mechanism for smart grid demand response. Reinforcement learning is used by participants to formulate optimal bidding strategies. Nonetheless,
Donghe Li   +5 more
doaj   +1 more source

Weighted-Sampling Audio Adversarial Example Attack

open access: yes, 2020
Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition systems. Thorough studies on how to effectively generate adversarial examples are essential to prevent potential attacks ...
Ding, Yufei   +4 more
core   +1 more source
