Results 51 to 60 of about 237,731
It has been shown that adversaries can craft inputs to neural networks that closely resemble legitimate inputs but are designed to cause the network to misclassify them.
Athalye, Anish +18 more
core +1 more source
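The snippet above describes the general phenomenon rather than a specific method. A minimal fast-gradient-sign (FGSM-style) sketch of how such an input can be crafted, where the model, loss, and epsilon budget are illustrative assumptions and not the listed paper's setup:

```python
# Minimal FGSM-style sketch (illustrative only, not the listed paper's method):
# nudge an input in the direction of the loss gradient so a classifier
# misclassifies it while the change stays small per pixel.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, label, epsilon=0.03):
    """Return x + epsilon * sign(grad_x loss), clipped to the valid pixel range."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()    # small worst-case step per pixel
    return x_adv.clamp(0.0, 1.0).detach()  # keep the result a valid image
```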
Unrestricted Adversarial Examples
We introduce a two-player contest for evaluating the safety and robustness of machine learning systems, with a large prize pool. Unlike most prior work in ML robustness, which studies norm-constrained adversaries, we shift our focus to unconstrained adversaries.
Brown, Tom B. +5 more
openaire +2 more sources
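For contrast with the unrestricted setting described above, the following sketch shows what a norm-constrained adversary looks like: a projected-gradient-style loop that keeps the perturbation inside an L-infinity ball. An unrestricted adversary drops that projection and only requires that a human would still assign the original label. The model and step sizes are assumptions, not the contest's rules:

```python
# Sketch of a norm-constrained (L-infinity) adversary; the projection step is
# exactly what an "unrestricted" adversary gives up. Parameters are illustrative.
import torch
import torch.nn.functional as F

def pgd_linf(model, x, label, epsilon=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), label)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        # the defining step of a norm-constrained adversary: stay in the epsilon ball
        x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)
        x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv
```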
Exploring Adversarial Examples [PDF]
Failure cases of black-box deep learning, e.g. adversarial examples, might have severe consequences in healthcare. Yet such failures are mostly studied in the context of real-world images with calibrated attacks. To demystify adversarial examples, rigorous studies need to be designed.
Kügler, David +3 more
openaire +2 more sources
Adversarial Diversity and Hard Positive Generation
State-of-the-art deep neural networks suffer from a fundamental problem: they misclassify adversarial examples formed by applying small perturbations to inputs.
Boult, Terrance E. +2 more
core +1 more source
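One way the idea in the title can be realized, sketched here without claiming it is the authors' procedure, is to feed perturbed copies of the inputs back into training as extra hard positives; make_adversarial stands for any perturbation routine such as the FGSM sketch above:

```python
# Hedged sketch of using perturbed inputs as additional "hard positive" training
# examples; the listed paper's actual generation procedure may differ.
import torch.nn.functional as F

def training_step(model, optimizer, x, y, make_adversarial):
    x_hard = make_adversarial(model, x, y)   # perturbed copies, same labels
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_hard), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```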
Deep neural networks (DNNs) have useful applications in machine learning tasks involving recognition and pattern analysis. Despite the favorable applications of DNNs, these systems can be exploited by adversarial examples.
Hyun Kwon +3 more
doaj +1 more source
Are Accuracy and Robustness Correlated?
Machine learning models are vulnerable to adversarial examples, formed by applying small, carefully chosen perturbations to inputs, which cause unexpected classification errors.
Boult, Terrance E. +2 more
core +1 more source
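A hedged sketch of how the question in the title can be probed empirically: measure clean accuracy and accuracy under attack for a set of models and correlate the two. The attack routine and model list are placeholders, not the paper's protocol:

```python
# Illustrative clean-vs-robust comparison across models; not the paper's setup.
import numpy as np
import torch

@torch.no_grad()
def accuracy(model, loader, perturb=None):
    correct = total = 0
    for x, y in loader:
        if perturb is not None:
            with torch.enable_grad():
                x = perturb(model, x, y)   # e.g. the PGD sketch above
        correct += (model(x).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total

def clean_vs_robust_correlation(models, loader, perturb):
    clean = [accuracy(m, loader) for m in models]
    robust = [accuracy(m, loader, perturb) for m in models]
    return np.corrcoef(clean, robust)[0, 1]   # correlation across models
```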
Adversarial Example Decomposition
Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization.
He, Horace +5 more
openaire +2 more sources
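The transfer phenomenon the abstract refers to can be measured with a simple check, sketched below: craft adversarial examples against one (source) model and see how often an independently trained (target) model also misclassifies them. The attack function and models are illustrative assumptions:

```python
# Sketch of a transferability check; the source model is attacked white-box,
# the target model only ever sees the finished adversarial examples.
import torch

@torch.no_grad()
def transfer_rate(source_model, target_model, loader, attack):
    fooled = total = 0
    for x, y in loader:
        with torch.enable_grad():
            x_adv = attack(source_model, x, y)   # e.g. the PGD sketch above
        fooled += (target_model(x_adv).argmax(dim=1) != y).sum().item()
        total += y.numel()
    return fooled / total                        # fraction that transfers
```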
Active Learning‐Guided Accelerated Discovery of Ultra‐Efficient High‐Entropy Thermoelectrics
An active learning framework is introduced for the accelerated discovery of high‐entropy chalcogenides with superior thermoelectric performance. Only 80 targeted syntheses, selected from 16206 possible combinations, led to three high‐performance compositions, demonstrating the remarkable efficiency of data‐driven guidance in experimental materials ...
Hanhwi Jang +8 more
wiley +1 more source
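A generic active-learning selection step in the spirit of the abstract, not the authors' actual framework: fit a surrogate model to the compositions measured so far and score the remaining candidate pool with a simple exploit-plus-explore acquisition. The features, target property (e.g. a thermoelectric figure of merit), and batch size are assumptions:

```python
# Generic active-learning batch selection from a candidate pool; illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_next_batch(X_measured, y_measured, X_pool, batch_size=5):
    model = RandomForestRegressor(n_estimators=200).fit(X_measured, y_measured)
    # spread across the ensemble's trees serves as a simple uncertainty estimate
    per_tree = np.stack([tree.predict(X_pool) for tree in model.estimators_])
    score = per_tree.mean(axis=0) + per_tree.std(axis=0)   # exploit + explore
    return np.argsort(score)[-batch_size:]                 # candidates to synthesize next
```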
DualFlow: Generating imperceptible adversarial examples by flow field and normalize flow-based model
Recent adversarial attack research reveals the vulnerability of deep learning models (DNNs) to well-designed perturbations. However, most existing attack methods have inherent limitations in image quality as they rely on a relatively ...
Renyang Liu +10 more
doaj +1 more source
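A minimal sketch of the flow-field idea the title alludes to (not DualFlow's actual pipeline): instead of adding noise to pixel values, each pixel is slightly displaced and the image is resampled, which tends to preserve perceptual quality. The flow magnitude and tensor shapes are assumptions:

```python
# Sketch of a spatial flow-field perturbation: warp the image by a small
# per-pixel displacement field instead of adding additive noise.
import torch
import torch.nn.functional as F

def warp_with_flow(x, flow):
    """x: (N, C, H, W) image batch; flow: (N, H, W, 2) displacements in the
    normalized [-1, 1] coordinates used by grid_sample."""
    n, _, h, w = x.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h), torch.linspace(-1, 1, w),
                            indexing="ij")
    base_grid = torch.stack((xs, ys), dim=-1).expand(n, h, w, 2)
    return F.grid_sample(x, base_grid + flow, align_corners=True)
```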
Robust Audio Adversarial Example for a Physical Attack
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are directly fed to the recognition model, and is not ...
Sakuma, Jun, Yakura, Hiromu
core +1 more source
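A hedged sketch of the expectation-over-transformations idea commonly used to make audio attacks survive physical playback, which may differ from the listed method: the attack loss is averaged over simulated distortions such as added noise or reverberation, so the perturbation stays effective after passing through a speaker and microphone. The transform set, model, and loss are assumptions:

```python
# Average the attack loss over simulated acoustic distortions so the adversarial
# waveform remains effective after playback and re-recording; illustrative only.
import torch

def eot_loss(model, waveform, target_text, transforms, loss_fn):
    losses = []
    for t in transforms:   # e.g. add noise, convolve with a room impulse response
        losses.append(loss_fn(model(t(waveform)), target_text))
    return torch.stack(losses).mean()   # optimize the average-case loss
```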

