
Generating Adversarial Examples with Adversarial Networks [PDF]

open access: yes, Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2019
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples, which result from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results.
He, Warren   +5 more
core   +2 more sources
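The "small-magnitude perturbation" idea from the abstract above can be illustrated with a minimal gradient-sign (FGSM-style) step on a toy logistic model. This is a hedged sketch, not the GAN-based method of the listed paper; all names (`w`, `b`, `x`, `eps`) are illustrative and assumed for this example.

```python
import numpy as np

# Toy linear classifier standing in for a DNN; weights are arbitrary.
rng = np.random.default_rng(0)
w = rng.normal(size=4)   # fixed "model" weights
b = 0.1
x = rng.normal(size=4)   # a clean input
y = 1.0                  # its true label

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_x(x):
    # Gradient of the binary cross-entropy loss w.r.t. the input x.
    p = sigmoid(w @ x + b)
    return (p - y) * w

eps = 0.25
# One gradient-sign step: a small L-infinity-bounded perturbation.
x_adv = x + eps * np.sign(loss_grad_x(x))

# The perturbed input lowers the model's confidence in the true class.
p_clean = sigmoid(w @ x + b)
p_adv = sigmoid(w @ x_adv + b)
```

For the true label `y = 1`, the sign of the input gradient points away from the class, so `p_adv` is strictly below `p_clean`; the same one-line perturbation scales to images, where `eps` bounds the per-pixel change.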

Robust Decision Trees Against Adversarial Examples

open access: yes, 2019
Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on tree-based models, and on how to make them robust against adversarial examples, is ...
Boning, Duane   +3 more
core   +3 more sources

Natural Adversarial Examples [PDF]

open access: yes, 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021
We introduce two challenging datasets that reliably cause machine learning model performance to substantially degrade. The datasets are collected with a simple adversarial filtration technique to create datasets with limited spurious cues. Our datasets' real-world, unmodified examples transfer to various unseen models reliably, demonstrating that ...
Hendrycks, Dan   +4 more
openaire   +2 more sources

Efficient Adversarial Training With Transferable Adversarial Examples [PDF]

open access: yes, 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020
Adversarial training is an effective defense method to protect classification models against adversarial attacks. However, one limitation of this approach is that it can require orders of magnitude more training time due to the high cost of generating strong adversarial examples during training.
Zheng, Haizhong   +4 more
openaire   +2 more sources
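The adversarial-training loop described above (craft adversarial examples against the current model, then train on them) can be sketched on a toy logistic-regression problem. This is a minimal assumed setup, not the transferability method of the listed paper; the data, `eps`, and learning rate are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 2D data: class 1 clustered near (+1, +1), class 0 near (-1, -1).
X = np.vstack([rng.normal(1.0, 0.5, (50, 2)), rng.normal(-1.0, 0.5, (50, 2))])
y = np.hstack([np.ones(50), np.zeros(50)])

w, b = np.zeros(2), 0.0
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
eps, lr = 0.1, 0.5

for _ in range(200):
    # Inner step: craft gradient-sign perturbations against the current model.
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w            # d(loss)/d(x) for each example
    X_adv = X + eps * np.sign(grad_x)
    # Outer step: ordinary gradient descent, but on the adversarial batch.
    p_adv = sigmoid(X_adv @ w + b)
    w -= lr * X_adv.T @ (p_adv - y) / len(y)
    b -= lr * np.mean(p_adv - y)

# Robust accuracy: attack the final model and measure accuracy on the result.
p = sigmoid(X @ w + b)
grad_x = (p - y)[:, None] * w
X_adv = X + eps * np.sign(grad_x)
acc_adv = np.mean((sigmoid(X_adv @ w + b) > 0.5) == (y == 1))
```

The inner attack step is what makes adversarial training expensive in practice: strong attacks use many gradient steps per batch, which is the overhead the paper above targets by reusing transferable adversarial examples.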

Adversarial Examples for Good: Adversarial Examples Guided Imbalanced Learning

open access: yes, 2022 IEEE International Conference on Image Processing (ICIP), 2022
Appeared in ICIP ...
Zhang, Jie   +3 more
openaire   +2 more sources

FADER: Fast adversarial example rejection [PDF]

open access: yes, Neurocomputing, 2022
Deep neural networks are vulnerable to adversarial examples, i.e., carefully-crafted inputs that mislead classification at test time. Recent defenses have been shown to improve adversarial robustness by detecting anomalous deviations from legitimate training samples at different layer representations - a behavior normally exhibited by adversarial ...
Crecchi, Francesco   +4 more
openaire   +4 more sources

A Robust Adversarial Example Attack Based on Video Augmentation

open access: yes, Applied Sciences, 2023
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems.
Mingyong Yin   +3 more
doaj   +1 more source

Evaluation of Model Quantization Method on Vitis-AI for Mitigating Adversarial Examples

open access: yes, IEEE Access, 2023
Adversarial examples (AEs) are typical model evasion attacks and security threats to deep neural networks (DNNs). One countermeasure is adversarial training (AT), which trains DNNs on a training dataset containing AEs to achieve robustness
Yuta Fukuda   +2 more
doaj   +1 more source

A Two-Stage Generative Adversarial Networks With Semantic Content Constraints for Adversarial Example Generation

open access: yes, IEEE Access, 2020
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, and these manipulated instances can mislead DNN ...
Jianyi Liu   +4 more
doaj   +1 more source

Adversarial Examples Detection for XSS Attacks Based on Generative Adversarial Networks

open access: yes, IEEE Access, 2020
Deep learning models are prone to producing incorrect predictions when faced with adversarial examples. In this paper, we propose an MCTS-T algorithm for generating adversarial examples of cross-site scripting (XSS) attacks based on Monte Carlo tree ...
Xueqin Zhang   +4 more
doaj   +1 more source
