Results 41 to 50 of about 85,688

Generating Adversarial Examples with Adversarial Networks

open access: yes, 2019
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results.
He, Warren   +5 more
core   +1 more source
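The small-magnitude perturbation idea in the abstract above can be sketched minimally. The following is a hypothetical FGSM-style step (not the paper's GAN-based method): nudge the input by epsilon in the sign direction of the loss gradient, then clip back to the valid input range. The function name and toy values are illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.03):
    """Return x shifted by epsilon along the sign of the loss gradient,
    clipped to the [0, 1] input range (a common image normalization)."""
    return np.clip(x + epsilon * np.sign(grad), 0.0, 1.0)

# Toy example: a 3-pixel "input" and a made-up loss gradient.
x = np.array([0.2, 0.5, 0.9])
grad = np.array([-1.2, 0.4, 0.0])
x_adv = fgsm_perturb(x, grad, epsilon=0.1)
# Each component moves by at most epsilon, so the change is
# almost imperceptible yet can flip a model's prediction.
```

Adversarial-network approaches like the paper's instead train a generator to produce such perturbations, but the perturbation-budget constraint is the same.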

Multi-Stage Adversarial Defense for Online DDoS Attack Detection System in IoT

open access: yes, IEEE Access
Machine learning-based Distributed Denial of Service (DDoS) attack detection systems have proven effective in detecting and preventing DDoS attacks in Internet of Things (IoT) systems.
Yonas Kibret Beshah   +2 more
doaj   +1 more source

Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network

open access: yes, IEEE Access, 2018
Deep neural networks (DNNs) are widely used for image recognition, speech recognition, pattern analysis, and intrusion detection. Recently, the adversarial example attack, in which the input data are only slightly modified, although not an issue for ...
Hyun Kwon   +4 more
doaj   +1 more source

Adversarial Attack Attribution: Discovering Attributable Signals in Adversarial ML Attacks

open access: yes, 2021
Accepted to RSEML Workshop at AAAI ...
Dotter, Marissa   +5 more
openaire   +2 more sources

Adversarial Attacks and Defenses

open access: yes, ACM SIGKDD Explorations Newsletter, 2021
Despite the recent advances in a wide spectrum of applications, machine learning models, especially deep neural networks, have been shown to be vulnerable to adversarial attacks. Attackers add carefully-crafted perturbations to input, where the perturbations are almost imperceptible to humans, but can cause models to make wrong predictions.
Liu, Ninghao   +4 more
openaire   +2 more sources

Adversarial Imitation Attack

open access: yes, 2020
8 ...
Zhou, Mingyi   +6 more
openaire   +2 more sources

Adversarial Feature Selection Against Evasion Attacks [PDF]

open access: yes, IEEE Transactions on Cybernetics, 2016
Pattern recognition and machine learning techniques have been increasingly adopted in adversarial settings such as spam, intrusion and malware detection, although their security against well-crafted attacks that aim to evade detection by manipulating data at test time has not yet been thoroughly assessed.
Zhang F   +4 more
openaire   +4 more sources

Computational Modeling Meets 3D Bioprinting: Emerging Synergies in Cardiovascular Disease Modeling

open access: yes, Advanced Healthcare Materials, EarlyView.
Emerging advances in three‐dimensional bioprinting and computational modeling are reshaping cardiovascular (CV) research by enabling more realistic, patient‐specific tissue platforms. This review surveys cutting‐edge approaches that merge biomimetic CV constructs with computational simulations to overcome the limitations of traditional models, improve ...
Tanmay Mukherjee   +7 more
wiley   +1 more source

DTFA: Adversarial attack with discrete cosine transform noise and target features on deep neural networks

open access: yes, IET Image Processing, 2023
Image recognition on deep neural networks is vulnerable to adversarial sample attacks. Adversarial attack accuracy is low in the current black-box environment, where only limited queries on the target are allowed.
Dong Yang, Wei Chen, Songjie Wei
doaj   +1 more source

Adversarial Robust and Explainable Network Intrusion Detection Systems Based on Deep Learning

open access: yes, Applied Sciences, 2022
The ever-evolving cybersecurity environment has given rise to sophisticated adversaries who constantly explore new ways to attack cyberinfrastructure. Recently, the use of deep learning-based intrusion detection systems has been on the rise. This rise is ...
Kudzai Sauka   +3 more
doaj   +1 more source
