Results 181 to 190 of about 94,262
Transform-Dependent Adversarial Attacks
Deep networks are highly vulnerable to adversarial attacks, yet conventional attack methods use static adversarial perturbations that induce fixed mispredictions. In this work, we exploit an overlooked property of adversarial perturbations, namely their dependence on image transforms, and introduce transform-dependent adversarial attacks.
Tan, Yaoteng +2 more
openaire +2 more sources
An Overview of Deep Learning Techniques for Big Data IoT Applications
Reviews deep learning integration with cloud, fog, and edge computing in IoT architectures. Examines model suitability across IoT applications, key challenges, and emerging trends. Provides a comparative analysis to guide future deep learning research in IoT environments.
Gagandeep Kaur +2 more
wiley +1 more source
Adversarial attack of sequence-free enhancer prediction identifies chromatin architecture. [PDF]
Gafur J, Lang OW, Lai WKM.
europepmc +1 more source
Fail‐Controlled Classifiers: A Swiss‐Army Knife Toward Trustworthy Systems
ABSTRACT Background Modern critical systems often need to make decisions and classify data and scenarios autonomously without detrimental effects on people, infrastructure, or the environment, while ensuring the desired dependability attributes. Researchers typically strive to craft classifiers with perfect accuracy, which should always be correct and ...
Fahad Ahmed Khokhar +4 more
wiley +1 more source
Identifying significant features in adversarial attack detection framework using federated learning empowered medical IoT network security. [PDF]
Sharaf SA, Nooh S.
europepmc +1 more source
Adversarial Attacks on AI-driven Cybersecurity Systems: A Taxonomy and Defense Strategies
Krishna Chaganti
openalex +1 more source
ABSTRACT Zero‐day exploits remain challenging to detect because they often fall outside known distributions of signatures and rules. The article presents a systematic review and cross‐sectional synthesis of four fundamental model families for identifying zero‐day intrusions, namely convolutional neural networks (CNN), deep neural networks (DNN ...
Abdullah Al Siam +3 more
wiley +1 more source
Gradual poisoning of a chest x-ray convolutional neural network with an adversarial attack and AI explainability methods. [PDF]
Lee SB.
europepmc +1 more source
Detection of On-Manifold Adversarial Attacks Via Latent Space Transformation
Mohammad anon +3 more
openalex +1 more source

