Results 41 to 50 of about 94,262
Robust Audio Adversarial Example for a Physical Attack
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are directly fed to the recognition model, and is not ...
Sakuma, Jun; Yakura, Hiromu
core +1 more source
Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks
This work shows that it is possible to fool/attack recent state-of-the-art face detectors that are based on single-stage networks. Successfully attacking face detectors could be a serious malware vulnerability when deploying a smart surveillance ...
D Chen +5 more
core +1 more source
GradMDM: Adversarial Attack on Dynamic Networks
Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks targeted at reducing their efficiency.
Jianhong Pan +6 more
openaire +4 more sources
Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, pattern analysis, and intrusion detection. Recently, the adversarial example attack, in which the input data are only slightly modified, although not an issue for ...
Hyun Kwon +4 more
doaj +1 more source
Detection of Iterative Adversarial Attacks via Counter Attack
Deep neural networks (DNNs) have proven to be powerful tools for processing unstructured data. However, for high-dimensional data, like images, they are inherently vulnerable to adversarial attacks. Small, almost invisible perturbations added to the input can be used to fool DNNs.
Matthias Rottmann +4 more
openaire +4 more sources
Multi-Stage Adversarial Defense for Online DDoS Attack Detection System in IoT
Machine learning-based Distributed Denial of Service (DDoS) attack detection systems have proven effective in detecting and preventing DDoS attacks in Internet of Things (IoT) systems.
Yonas Kibret Beshah +2 more
doaj +1 more source
Generating Adversarial Examples with Adversarial Networks
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results.
He, Warren +5 more
core +1 more source
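The entry above describes training a generative network to produce adversarial perturbations. Below is a minimal sketch of that idea, not the paper's code: a small generator G outputs a bounded perturbation G(x), and the adversarial example x + G(x) is trained to fool a fixed target classifier while keeping the perturbation small. The architecture, loss weights, and toy classifier are illustrative assumptions, and the paper's GAN discriminator term is omitted for brevity.

```python
# Sketch of a generator-based attack in the spirit of AdvGAN (assumptions noted above).
import torch
import torch.nn as nn

class PerturbationGenerator(nn.Module):
    def __init__(self, dim: int, eps: float = 0.1):
        super().__init__()
        self.eps = eps
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.ReLU(), nn.Linear(64, dim))

    def forward(self, x):
        # Bound the perturbation with tanh so each entry stays within [-eps, eps].
        return self.eps * torch.tanh(self.net(x))

def advgan_step(G, f, x, y, opt):
    """One generator update: push f's prediction on x + G(x) away from the
    true label, while penalizing the perturbation's magnitude."""
    delta = G(x)
    logits = f(x + delta)
    adv_loss = -nn.functional.cross_entropy(logits, y)  # maximize target model's loss
    hinge = delta.norm(p=2, dim=1).mean()               # keep perturbations small
    loss = adv_loss + 0.1 * hinge
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Illustrative usage with a toy, frozen target classifier f.
torch.manual_seed(0)
f = nn.Sequential(nn.Linear(8, 2))  # stand-in for the attacked model
for p in f.parameters():
    p.requires_grad_(False)
G = PerturbationGenerator(dim=8)
opt = torch.optim.Adam(G.parameters(), lr=1e-3)
x, y = torch.randn(32, 8), torch.randint(0, 2, (32,))
print(advgan_step(G, f, x, y, opt))
```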
Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks
Accepted to RSEML Workshop at AAAI ...
Dotter, Marissa +5 more
openaire +2 more sources
Adversarial Attacks and Defenses
Despite the recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully crafted perturbations to the input; the perturbations are almost imperceptible to humans but can cause models to make wrong predictions.
Liu, Ninghao +4 more
openaire +2 more sources
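Several of the abstracts above refer to "carefully crafted perturbations" without showing how they are crafted. A minimal sketch of one standard way to craft them is the fast gradient sign method (FGSM) of Goodfellow et al., used here purely as a generic illustration; the toy model and the epsilon value are assumptions, not taken from any entry above.

```python
# Minimal FGSM sketch: perturb x by eps in the gradient-sign direction
# that increases the classifier's loss.
import torch
import torch.nn as nn

def fgsm(model, x, y, eps=0.05):
    """Return adversarial examples x + eps * sign(grad_x loss)."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).detach()

torch.manual_seed(0)
model = nn.Sequential(nn.Linear(4, 3))  # toy stand-in classifier
x, y = torch.randn(8, 4), torch.randint(0, 3, (8,))
x_adv = fgsm(model, x, y)
# Fraction of inputs whose predicted class survived the perturbation.
print((model(x).argmax(1) == model(x_adv).argmax(1)).float().mean())
```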

