Results 71 to 80 of about 217,331 (267)

Solid Harmonic Wavelet Bispectrum for Image Analysis

open access: yesAdvanced Science, EarlyView.
The Solid Harmonic Wavelet Bispectrum (SHWB), a rotation‐ and translation‐invariant descriptor that captures higher‐order (phase) correlations in signals, is introduced. Combining wavelet scattering, bispectral analysis, and group theory, SHWB achieves interpretable, data‐efficient representations and demonstrates competitive performance across texture,
Alex Brown   +3 more
wiley   +1 more source

Detection of Adversarial Attacks Using Deep Learning and Features Extracted From Interpretability Methods in Industrial Scenarios

open access: yesIEEE Access
The adversarial training technique has been shown to improve the robustness of Machine Learning and Deep Learning models to adversarial attacks in the Computer Vision field.
Angel Luis Perales Gomez   +3 more
doaj   +1 more source

Outlier Robust Adversarial Training

open access: yes, 2023
Supervised learning models are challenged by intrinsic complexities of the training data, such as outliers and minority subpopulations, and by intentional attacks at inference time with adversarial samples. While traditional robust learning methods and the recent adversarial training approaches are designed to handle each of the two challenges, to date, no ...
Hu, Shu   +4 more
openaire   +2 more sources

A Bottom‐Up Design Framework for Multifunctional Lattice Metamaterials

open access: yesAdvanced Science, EarlyView.
This study introduces a generative AI framework for designing multifunctional lattice metamaterials. The method combines 3D Gaussian voxel generation with deep learning, enabling greater design freedom and structural performance. The optimized lattice metamaterials achieve 40–200% greater energy absorption compared to conventional structures and ...
Zongxin Hu   +13 more
wiley   +1 more source

Three-Dimensional Reconstruction Pre-Training as a Prior to Improve Robustness to Adversarial Attacks and Spurious Correlation

open access: yesEntropy
Ensuring robustness of image classifiers against adversarial attacks and spurious correlation has been challenging. One of the most effective methods for adversarial robustness is a type of data augmentation that uses adversarial examples during training.
Yutaro Yamada   +3 more
doaj   +1 more source

RazorNet: Adversarial Training and Noise Training on a Deep Neural Network Fooled by a Shallow Neural Network

open access: yesBig Data and Cognitive Computing, 2019
In this work, we propose ShallowDeepNet, a novel system architecture that includes a shallow and a deep neural network. The shallow neural network has the duty of data preprocessing and generating adversarial samples. The deep neural network has the duty
Shayan Taheri   +2 more
doaj   +1 more source

Boosting Adversarial Training Using Robust Selective Data Augmentation

open access: yesInternational Journal of Computational Intelligence Systems, 2023
Artificial neural networks are currently applied in a wide variety of fields, and in many tasks they are close to achieving human-level performance.
Bader Rasheed   +4 more
doaj   +1 more source

Learnable Diffusion Framework for Mouse V1 Neural Decoding

open access: yesAdvanced Science, EarlyView.
We introduce Sensorium‐Viz, a diffusion‐based framework for reconstructing high‐fidelity visual stimuli from mouse primary visual cortex activity. By integrating a novel spatial embedding module with a Diffusion Transformer (DiT) and a synthetic‐response augmentation strategy, our model outperforms state‐of‐the‐art fMRI‐based baselines, enabling robust
Kaiwen Deng   +2 more
wiley   +1 more source

Batch-in-Batch: a new adversarial training framework for initial perturbation and sample selection

open access: yesComplex & Intelligent Systems
Adversarial training methods commonly generate initial perturbations that are independent across epochs, and obtain subsequent adversarial training samples without selection.
Yinting Wu, Pai Peng, Bo Cai, Le Li
doaj   +1 more source

Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks

open access: yesIEEE Access
Deep neural networks (DNNs), while powerful, often suffer from a lack of interpretability and vulnerability to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise ...
Bader Rasheed   +4 more
doaj   +1 more source
