Results 71 to 80 of about 215,342

Detection of Adversarial Attacks Using Deep Learning and Features Extracted From Interpretability Methods in Industrial Scenarios

open access: yes | IEEE Access
The adversarial training technique has been shown to improve the robustness of Machine Learning and Deep Learning models to adversarial attacks in the Computer Vision field.
Angel Luis Perales Gomez   +3 more
doaj   +1 more source

Boosting Adversarial Training Using Robust Selective Data Augmentation

open access: yes | International Journal of Computational Intelligence Systems, 2023
Artificial neural networks are currently applied in a wide variety of fields, and in many tasks they are close to achieving human-level performance.
Bader Rasheed   +4 more
doaj   +1 more source

Solid Harmonic Wavelet Bispectrum for Image Analysis

open access: yes | Advanced Science, EarlyView.
The Solid Harmonic Wavelet Bispectrum (SHWB), a rotation‐ and translation‐invariant descriptor that captures higher‐order (phase) correlations in signals, is introduced. Combining wavelet scattering, bispectral analysis, and group theory, SHWB achieves interpretable, data‐efficient representations and demonstrates competitive performance across texture, …
Alex Brown   +3 more
wiley   +1 more source

Three-Dimensional Reconstruction Pre-Training as a Prior to Improve Robustness to Adversarial Attacks and Spurious Correlation

open access: yes | Entropy
Ensuring robustness of image classifiers against adversarial attacks and spurious correlation has been challenging. One of the most effective methods for adversarial robustness is a type of data augmentation that uses adversarial examples during training.
Yutaro Yamada   +3 more
doaj   +1 more source
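The entry above describes adversarial training as data augmentation with adversarial examples. As an illustrative aside, the core idea can be sketched with an FGSM-style perturbation on a toy logistic-regression model; this NumPy sketch is a generic textbook example, not the method of any paper listed here, and the model, weights, and `eps` value are all assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(x, y, w, b, eps=0.1):
    """Fast Gradient Sign Method: perturb input x in the direction
    that increases the logistic loss for the true label y."""
    p = sigmoid(x @ w + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # d(loss)/dx for the logistic loss
    return x + eps * np.sign(grad_x)

# Toy usage: a fixed linear model and one training point.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.5, 0.5])
y = 1.0
x_adv = fgsm_example(x, y, w, b, eps=0.1)
# Adversarial training would mix (x_adv, y) back into the training batch.
```

The perturbed point moves against the model's confidence in the true label, which is why training on such points tends to flatten the loss surface around the data and improve robustness.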

RazorNet: Adversarial Training and Noise Training on a Deep Neural Network Fooled by a Shallow Neural Network

open access: yes | Big Data and Cognitive Computing, 2019
In this work, we propose ShallowDeepNet, a novel system architecture that includes a shallow and a deep neural network. The shallow neural network is responsible for data preprocessing and for generating adversarial samples. The deep neural network is responsible …
Shayan Taheri   +2 more
doaj   +1 more source

Outlier Robust Adversarial Training

open access: yes | 2023
Supervised learning models are challenged by the intrinsic complexities of training data such as outliers and minority subpopulations and intentional attacks at inference time with adversarial samples. While traditional robust learning methods and the recent adversarial training approaches are designed to handle each of the two challenges, to date, no ...
Hu, Shu   +4 more
openaire   +2 more sources

His‐MMDM: Multi‐Domain and Multi‐Omics Translation of Histopathological Images with Diffusion Models

open access: yes | Advanced Science, EarlyView.
His‐MMDM is a diffusion model‐based framework for scalable multi‐domain and multi‐omics translation of histopathological images, enabling tasks ranging from virtual staining and cross‐tumor knowledge transfer to omics‐guided image editing. Abstract: Generative AI (GenAI) has advanced computational pathology through various image translation models.
Zhongxiao Li   +13 more
wiley   +1 more source

Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks

open access: yes | IEEE Access
Deep neural networks (DNNs), while powerful, often suffer from a lack of interpretability and vulnerability to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise ...
Bader Rasheed   +4 more
doaj   +1 more source

Nanozymes Integrated Biochips Toward Smart Detection System

open access: yes | Advanced Science, EarlyView.
This review systematically outlines the integration of nanozymes, biochips, and artificial intelligence (AI) for intelligent biosensing. It details how their convergence enhances signal amplification, enables portable detection, and improves data interpretation.
Dongyu Chen   +10 more
wiley   +1 more source

Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection

open access: yes | Complex & Intelligent Systems
Adversarial training methods commonly generate initial perturbations that are independent across epochs, and obtain subsequent adversarial training samples without selection.
Yinting Wu, Pai Peng, Bo Cai, Le Li
doaj   +1 more source
