Results 41 to 50 of about 94,262

Robust Audio Adversarial Example for a Physical Attack

open access: yes, 2019
We propose a method to generate audio adversarial examples that can attack a state-of-the-art speech recognition model in the physical world. Previous work assumes that generated adversarial examples are directly fed to the recognition model, and is not ...
Sakuma, Jun, Yakura, Hiromu
core   +1 more source
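
The entry above targets the gap between digital and physical attacks: an adversarial audio clip played over a loudspeaker must survive reverberation and microphone noise. Below is a minimal sketch of that idea, optimizing the perturbation over randomly simulated playbacks; the impulse-response list, the stand-in classification loss, and all names are illustrative assumptions (the actual work targets a CTC-based speech recognizer), not the authors' code.

    import torch
    import torch.nn.functional as F

    def simulate_playback(audio, impulse_responses, noise_std=0.01):
        """Convolve with a random room impulse response and add mic noise."""
        ir = impulse_responses[torch.randint(len(impulse_responses), (1,)).item()]
        convolved = F.conv1d(audio.view(1, 1, -1), ir.view(1, 1, -1),
                             padding=ir.numel() // 2).view(-1)
        return convolved + noise_std * torch.randn_like(convolved)

    def robust_audio_attack(model, audio, target, impulse_responses,
                            steps=100, lr=1e-3, eps=0.05):
        delta = torch.zeros_like(audio, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            # Expectation over several simulated over-the-air playbacks.
            loss = sum(F.cross_entropy(model(simulate_playback(audio + delta,
                                                               impulse_responses)),
                                       target)
                       for _ in range(4)) / 4
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)  # keep the perturbation quiet
        return (audio + delta).detach()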

Using LIP to Gloss Over Faces in Single-Stage Face Detection Networks

open access: yes, 2018
This work shows that it is possible to fool/attack recent state-of-the-art face detectors which are based on the single-stage networks. Successfully attacking face detectors could be a serious malware vulnerability when deploying a smart surveillance ...
D Chen   +5 more
core   +1 more source

GradMDM: Adversarial Attack on Dynamic Networks

open access: yesIEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Dynamic neural networks can greatly reduce computation redundancy without compromising accuracy by adapting their structures based on the input. In this paper, we explore the robustness of dynamic neural networks against energy-oriented attacks targeted at reducing their efficiency.
Jianhong Pan   +6 more
openaire   +4 more sources
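
GradMDM's goal, as the entry above describes, is not misclassification but wasted computation: the perturbation drives an input-adaptive network to execute as many of its skippable blocks as possible. A rough sketch of that energy-oriented objective follows, assuming a hypothetical model that returns its gating logits alongside the class logits; the cost proxy and all names are illustrative, not the paper's exact formulation.

    import torch

    def energy_attack(model, x, steps=40, alpha=1e-2, eps=8 / 255):
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            _, gate_logits = model(x + delta)       # gates decide which blocks run
            # Cost proxy: expected fraction of executed blocks; ascend on it.
            cost = torch.sigmoid(gate_logits).mean()
            cost.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()  # PGD-style ascent step
                delta.clamp_(-eps, eps)             # stay visually imperceptible
                delta.grad.zero_()
        return (x + delta).detach()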

Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network

open access: yesIEEE Access, 2018
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, pattern analysis, and intrusion detection. Recently, the adversarial example attack, in which the input data are only slightly modified, although not an issue for ...
Hyun Kwon   +4 more
doaj   +1 more source
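
The multi-targeted evasion idea in the entry above can be pictured as one perturbed input that several models each misread as a (possibly different) attacker-chosen class. Here is a minimal sketch under that reading; the model list, per-model targets, and budget are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def multi_target_attack(models, targets, x, steps=100, lr=1e-2, eps=8 / 255):
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            # Sum of targeted losses, one per (model, target class) pair.
            loss = sum(F.cross_entropy(m(x + delta), t)
                       for m, t in zip(models, targets))
            opt.zero_grad(); loss.backward(); opt.step()
            with torch.no_grad():
                delta.clamp_(-eps, eps)
        return (x + delta).detach()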

Detection of Iterative Adversarial Attacks via Counter Attack

open access: yesJournal of Optimization Theory and Applications, 2023
Deep neural networks (DNNs) have proven to be powerful tools for processing unstructured data. However, for high-dimensional data, like images, they are inherently vulnerable to adversarial attacks. Small, almost invisible perturbations added to the input can be used to fool DNNs.
Matthias Rottmann   +4 more
openaire   +4 more sources
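
The counter-attack detector in the entry above exploits a geometric observation: an iterative attack stops just past the decision boundary, so a second, much smaller attack flips the prediction back far more easily than it would for a clean input. A hedged sketch of that test follows; the step sizes and budget are illustrative thresholds, not the paper's calibrated values.

    import torch
    import torch.nn.functional as F

    def flips_under_counter_attack(model, x, steps=5, alpha=1 / 255, eps=2 / 255):
        with torch.no_grad():
            orig_pred = model(x).argmax(dim=1)
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(steps):
            # Untargeted counter attack against the current prediction.
            loss = F.cross_entropy(model(x + delta), orig_pred)
            loss.backward()
            with torch.no_grad():
                delta += alpha * delta.grad.sign()
                delta.clamp_(-eps, eps)
                delta.grad.zero_()
        # A flip under such a tiny budget suggests x was adversarial already.
        return model(x + delta).argmax(dim=1) != orig_pred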

Multi-Stage Adversarial Defense for Online DDoS Attack Detection System in IoT

open access: yesIEEE Access
Machine learning-based Distributed Denial of Service (DDoS) attack detection systems have proven effective in detecting and preventing DDoS attacks in Internet of Things (IoT) systems.
Yonas Kibret Beshah   +2 more
doaj   +1 more source

Generating Adversarial Examples with Adversarial Networks

open access: yes, 2019
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results.
He, Warren   +5 more
core   +1 more source
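
The AdvGAN-style approach in the entry above trains a generator to emit the perturbation directly, so attacks become a single forward pass. Below is a minimal sketch of the generator's training objective, assuming hypothetical networks G (perturbation generator), D (discriminator), and f (the target classifier); the loss combination mirrors the paper's structure, but the weights and names are illustrative.

    import torch
    import torch.nn.functional as F

    def advgan_generator_loss(G, D, f, x, target, c=0.1, alpha=1.0, beta=1.0):
        perturbation = G(x)
        x_adv = torch.clamp(x + perturbation, 0, 1)
        adv_loss = F.cross_entropy(f(x_adv), target)    # mislead the classifier
        d_out = D(x_adv)
        gan_loss = F.binary_cross_entropy_with_logits(  # look "real" to D
            d_out, torch.ones_like(d_out))
        hinge = torch.clamp(perturbation.flatten(1).norm(dim=1) - c,
                            min=0).mean()               # bound the perturbation
        return adv_loss + alpha * gan_loss + beta * hinge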

Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks

open access: yes, 2021
Accepted to RSEML Workshop at AAAI ...
Dotter, Marissa   +5 more
openaire   +2 more sources

Adversarial Attacks and Defenses

open access: yesACM SIGKDD Explorations Newsletter, 2021
Despite the recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully-crafted perturbations to input, where the perturbations are almost imperceptible to humans, but can cause models to make wrong predictions.
Liu, Ninghao   +4 more
openaire   +2 more sources
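
The "carefully-crafted perturbations" this survey entry describes are classically illustrated by the fast gradient sign method (FGSM): one step along the sign of the loss gradient. A minimal sketch, assuming a differentiable classifier model and inputs scaled to [0, 1].

    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps=8 / 255):
        x = x.clone().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        loss.backward()
        # One signed-gradient step: near-imperceptible, yet often enough
        # to change the model's prediction.
        return (x + eps * x.grad.sign()).clamp(0, 1).detach()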

Adversarial Imitation Attack

open access: yes, 2020
8 ...
Zhou, Mingyi   +6 more
openaire   +2 more sources
