Results 181 to 190 of about 94,262

Transform-Dependent Adversarial Attacks

open access: yes
Deep networks are highly vulnerable to adversarial attacks, yet conventional attack methods utilize static adversarial perturbations that induce fixed mispredictions. In this work, we exploit an overlooked property of adversarial perturbations--their dependence on image transforms--and introduce transform-dependent adversarial attacks.
Tan, Yaoteng   +2 more
openaire   +2 more sources
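A minimal sketch of the idea described in the entry above, not the authors' implementation: it assumes a pretrained PyTorch classifier `model`, a batched input `x` with values in [0, 1], and two hypothetical transform branches (a small rotation versus the identity); the paper's actual transforms, losses, and constraints may differ. The sketch optimizes one perturbation so that the same adversarial image yields different target predictions depending on which transform is applied before classification.

    # Sketch only: `model`, `x`, and the target labels are assumed inputs.
    import torch
    import torch.nn.functional as F
    import torchvision.transforms.functional as TF

    def transform_dependent_perturbation(model, x, target_rotated, target_plain,
                                         eps=8 / 255, steps=200, lr=1e-2):
        """Optimize a single L-infinity-bounded perturbation so that the same
        adversarial image is classified as `target_rotated` after a 15-degree
        rotation, but as `target_plain` when left untransformed."""
        delta = torch.zeros_like(x, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            adv = (x + delta).clamp(0.0, 1.0)
            # Branch A: apply the image transform (rotation) before classifying.
            logits_rot = model(TF.rotate(adv, angle=15.0))
            # Branch B: classify the untransformed adversarial image.
            logits_plain = model(adv)
            # Each branch is pushed toward a different target label.
            loss = (F.cross_entropy(logits_rot, target_rotated)
                    + F.cross_entropy(logits_plain, target_plain))
            opt.zero_grad()
            loss.backward()
            opt.step()
            # Keep the perturbation within the L-infinity budget.
            delta.data.clamp_(-eps, eps)
        return delta.detach()

Target labels would be passed as LongTensors of class indices (e.g., torch.tensor([plane_class]) and torch.tensor([cat_class])); any transform that is differentiable with respect to the image (scaling, blurring, cropping) could stand in for the rotation branch.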

An Overview of Deep Learning Techniques for Big Data IoT Applications

open access: yes. International Journal of Communication Systems, Volume 39, Issue 4, 10 March 2026.
Reviews deep learning integration with cloud, fog, and edge computing in IoT architectures. Examines model suitability across IoT applications, key challenges, and emerging trends. Provides a comparative analysis to guide future deep learning research in IoT environments.
Gagandeep Kaur   +2 more
wiley   +1 more source

Robustness to Adversarial Attacks.

open access: green
Mohsen H. Alhazmi
openalex   +1 more source

Fail‐Controlled Classifiers: A Swiss‐Army Knife Toward Trustworthy Systems

open access: yes. Software: Practice and Experience, Volume 56, Issue 3, Page 239-259, March 2026.
ABSTRACT Background Modern critical systems often need to make decisions and classify data and scenarios autonomously without having detrimental effects on people, infrastructures, or the environment, while ensuring the desired dependability attributes. Researchers typically strive to craft classifiers with perfect accuracy, which should always be correct and ...
Fahad Ahmed Khokhar   +4 more
wiley   +1 more source

Securing the Unseen: A Comprehensive Exploration Review of AI‐Powered Models for Zero‐Day Attack Detection

open access: yes. Expert Systems, Volume 43, Issue 3, March 2026.
ABSTRACT Zero‐day exploits remain challenging to detect because they often fall outside known distributions of signatures and rules. The article presents a systematic review and cross‐sectional synthesis of four fundamental model families for identifying zero‐day intrusions, namely convolutional neural networks (CNN), deep neural networks (DNN ...
Abdullah Al Siam   +3 more
wiley   +1 more source

Detection of On-Manifold Adversarial Attacks Via Latent Space Transformation

open access: green
Mohammad anon   +3 more
openalex   +1 more source
