Results 51 to 60 of about 237,731 (274)

Stochastic Substitute Training: A Gray-box Approach to Craft Adversarial Examples Against Gradient Obfuscation Defenses

open access: yes, 2018
It has been shown that adversaries can craft inputs to neural networks that are similar to legitimate inputs but are purposely constructed to cause the network to misclassify them.
Athalye, Anish   +18 more
core   +1 more source
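
As a concrete illustration of such crafted inputs, here is a minimal sketch of a generic FGSM-style gradient step, not the paper's stochastic substitute training procedure; `substitute_model`, `x`, `y`, and `epsilon` are assumed placeholders.

```python
# Illustrative only: one fast-gradient-sign step on a substitute model,
# producing an input close to the original that tends to be misclassified.
import torch
import torch.nn.functional as F

def fgsm_step(substitute_model, x, y, epsilon=0.03):
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute_model(x_adv), y)
    loss.backward()
    # Move each input element slightly in the direction that increases the loss.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()
```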

Unrestricted Adversarial Examples

open access: yes, 2018
We introduce a two-player contest for evaluating the safety and robustness of machine learning systems, with a large prize pool. Unlike most prior work in ML robustness, which studies norm-constrained adversaries, we shift our focus to unconstrained adversaries.
Brown, Tom B.   +5 more
openaire   +2 more sources

Exploring Adversarial Examples [PDF]

open access: yes, 2018
Failure cases of black-box deep learning, e.g. adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. To demystify the adversarial examples, rigorous studies need to be designed.
Kügler, David   +3 more
openaire   +2 more sources

Adversarial Diversity and Hard Positive Generation

open access: yes, 2016
State-of-the-art deep neural networks suffer from a fundamental problem: they misclassify adversarial examples formed by applying small perturbations to inputs.
Boult, Terrance E.   +2 more
core   +1 more source

Selective Untargeted Evasion Attack: An Adversarial Example That Will Not Be Classified as Certain Avoided Classes

open access: yes, IEEE Access, 2019
Deep neural networks (DNNs) have useful applications in machine learning tasks involving recognition and pattern analysis. Despite the favorable applications of DNNs, these systems can be exploited by adversarial examples.
Hyun Kwon   +3 more
doaj   +1 more source
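
One plausible way to encode "do not land in certain classes" is to penalize the probability mass the perturbed input places on the avoided classes while pushing it off the true class. The sketch below is an assumed formulation for illustration, not the authors' loss; `logits`, `true_label`, and `avoided_classes` are hypothetical inputs.

```python
# Illustrative selective-evasion objective (assumed, not the paper's method):
# minimizing this over the input perturbation discourages both the original
# class and any class the attacker wants the result to avoid.
import torch.nn.functional as F

def selective_evasion_loss(logits, true_label, avoided_classes):
    probs = F.softmax(logits, dim=-1)
    stay_true = probs[..., true_label]                      # leave the true class
    land_avoided = probs[..., avoided_classes].sum(dim=-1)  # stay off avoided classes
    return stay_true + land_avoided
```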

Are Accuracy and Robustness Correlated?

open access: yes, 2016
Machine learning models are vulnerable to adversarial examples formed by applying small, carefully chosen perturbations to inputs that cause unexpected classification errors.
Boult, Terrance E.   +2 more
core   +1 more source

Adversarial Example Decomposition

open access: yes, 2018
Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization.
He, Horace   +5 more
openaire   +2 more sources
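
A simple way to quantify the transfer the abstract refers to is to craft perturbations on one model and measure the error they induce on another. This is a generic sketch under assumed names (`target_model`, `x_adv`, `y_true`), not the paper's decomposition analysis.

```python
# Illustrative transfer check: fraction of adversarial inputs crafted elsewhere
# that also fool a target model with a different architecture, dataset, or seed.
import torch

@torch.no_grad()
def transfer_rate(target_model, x_adv, y_true):
    preds = target_model(x_adv).argmax(dim=-1)
    return (preds != y_true).float().mean().item()
```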

Active Learning‐Guided Accelerated Discovery of Ultra‐Efficient High‐Entropy Thermoelectrics

open access: yes, Advanced Materials, EarlyView.
An active learning framework is introduced for the accelerated discovery of high‐entropy chalcogenides with superior thermoelectric performance. Only 80 targeted syntheses, selected from 16206 possible combinations, led to three high‐performance compositions, demonstrating the remarkable efficiency of data‐driven guidance in experimental materials ...
Hanhwi Jang   +8 more
wiley   +1 more source
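
For orientation, an active-learning loop of this kind typically alternates between fitting a surrogate model on the compositions measured so far and selecting the most informative candidate to synthesize next. The sketch below is a generic, assumed illustration with uncertainty-based acquisition, not the authors' framework; `candidates` and `measure` are hypothetical stand-ins for the composition space and a synthesis-plus-characterization step.

```python
# Generic active-learning loop (illustrative): pick the unmeasured candidate
# with the highest predictive uncertainty, "measure" it, and refit.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def active_learning(candidates, measure, n_rounds=80, n_seed=5):
    rng = np.random.default_rng(0)
    idx = list(rng.choice(len(candidates), size=n_seed, replace=False))
    X = candidates[idx]
    y = np.array([measure(candidates[i]) for i in idx])
    for _ in range(n_rounds - n_seed):
        gp = GaussianProcessRegressor().fit(X, y)
        _, std = gp.predict(candidates, return_std=True)
        std[idx] = -np.inf                    # do not re-select measured points
        nxt = int(np.argmax(std))
        idx.append(nxt)
        X = np.vstack([X, candidates[nxt]])
        y = np.append(y, measure(candidates[nxt]))
    return idx, X, y
```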

DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model

open access: yes, Frontiers in Neurorobotics, 2023
Recent adversarial attack research reveals the vulnerability of deep learning models (DNNs) to well-designed perturbations. However, most existing attack methods have inherent limitations in image quality as they rely on a relatively ...
Renyang Liu   +10 more
doaj   +1 more source
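
The "flow field" in the title refers to spatially warping an image rather than adding pixel-wise noise. A minimal warping sketch in PyTorch follows, illustrative only and not the DualFlow method; `image` and `flow` are assumed tensors.

```python
# Warp an image by a small per-pixel displacement field ("flow"); optimizing
# such a field is one way to obtain perturbations with little visible noise.
import torch
import torch.nn.functional as F

def warp_with_flow(image, flow):
    # image: (N, C, H, W); flow: (N, H, W, 2), small offsets in [-1, 1] grid units.
    n, _, h, w = image.shape
    ys, xs = torch.meshgrid(
        torch.linspace(-1, 1, h, device=image.device),
        torch.linspace(-1, 1, w, device=image.device),
        indexing="ij",
    )
    base = torch.stack((xs, ys), dim=-1).unsqueeze(0).expand(n, -1, -1, -1)
    return F.grid_sample(image, base + flow, align_corners=True)
```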

Robust Audio Adversarial Example for a Physical Attack

open access: yes, 2019
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are directly fed to the recognition model, and is not ...
Sakuma, Jun, Yakura, Hiromu
core   +1 more source
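
A common recipe for making audio attacks survive playback and re-recording is to optimize the perturbation over an expectation of simulated distortions. The sketch below illustrates that general idea only, not the authors' specific transformations; `asr_model` and `target_loss` are assumed helpers, and the additive noise merely stands in for real room effects.

```python
# Illustrative "expectation over transformations" loss for audio: average the
# attack objective over several simulated playback/recording distortions.
import torch

def simulate_playback(waveform, n=4, noise_scale=0.01):
    # Hypothetical distortion: additive noise standing in for room acoustics.
    return [waveform + noise_scale * torch.randn_like(waveform) for _ in range(n)]

def robust_attack_loss(asr_model, waveform, target_loss):
    sims = simulate_playback(waveform)
    return sum(target_loss(asr_model(w)) for w in sims) / len(sims)
```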
