
Exploring the Impact of Conceptual Bottlenecks on Adversarial Robustness of Deep Neural Networks

open access: yes — IEEE Access
Deep neural networks (DNNs), while powerful, often suffer from a lack of interpretability and vulnerability to adversarial attacks. Concept bottleneck models (CBMs), which incorporate intermediate high-level concepts into the model architecture, promise ...
Bader Rasheed   +4 more
doaj   +1 more source

Provably Robust Adversarial Examples

open access: yes, 2020
International Conference on Learning Representations (ICLR 2022)
Dimitrov, Dimitar Iliev   +3 more
openaire   +3 more sources

Early Radiation Therapy Response Assessment Using Multi‐Scale Photoacoustic Imaging

open access: yes — Advanced Science, EarlyView.
Tomographic and mesoscopic photoacoustics capture intratumoural features of radioresistance and response. ABSTRACT There is a critical unmet clinical need to identify biomarkers that predict and detect radiation therapy (RT) response in cancer. Using the unique capabilities of multi‐scale photoacoustic imaging (PAI) to depict tumor oxygenation and ...
Thierry L. Lefebvre   +12 more
wiley   +1 more source

Benchmarking the adversarial resilience of machine learning models for DDoS detection

open access: yes — Array
Distributed Denial of Service (DDoS) attacks continue to grow in scale and sophistication, making timely and reliable detection increasingly challenging.
Harsh Dadhwal   +3 more
doaj   +1 more source

Evaluating the Utilities of Foundation Models in Single‐Cell Data Analysis

open access: yes — Advanced Science, EarlyView.
This study delivers the first systematic, task‐level evaluation of single‐cell foundation models across eight core analytical tasks. By benchmarking 10 leading models with the scEval framework, it reveals where foundation models truly add value, where task‐specific methods still dominate, and provides concrete, reproducible guidelines to steer the next
Tianyu Liu   +4 more
wiley   +1 more source

MTFM: Multi-Teacher Feature Matching for Cross-Dataset and Cross-Architecture Adversarial Robustness Transfer in Remote Sensing Applications

open access: yes — Remote Sensing
Remote sensing plays a critical role in environmental monitoring, land use analysis, and disaster response by enabling large-scale, data-driven observation of Earth’s surface.
Ravi Kumar Rogannagari   +1 more
doaj   +1 more source

Adversarially Robust Kernel Smoothing

open access: yes, 2021
We propose a scalable robust learning algorithm combining kernel smoothing and robust optimization. Our method is motivated by the convex analysis perspective of distributionally robust optimization based on probability metrics, such as the Wasserstein distance and the maximum mean discrepancy.
Zhu, Jia-Jie   +3 more
openaire   +4 more sources

High‐Fidelity Synthetic Data Replicates Clinical Prediction Performance in a Million‐Patient Diabetes Cohort

open access: yes — Advanced Science, EarlyView.
This study generates high‐fidelity synthetic longitudinal records for a million‐patient diabetes cohort, successfully replicating clinical predictive performance. However, deeper analysis reveals algorithmic biases and trajectory inconsistencies that escape standard quality metrics. These findings challenge current validation norms, demonstrating why a
Francisco Ortuño   +5 more
wiley   +1 more source

Solid Harmonic Wavelet Bispectrum for Image Analysis

open access: yes — Advanced Science, EarlyView.
The Solid Harmonic Wavelet Bispectrum (SHWB), a rotation‐ and translation‐invariant descriptor that captures higher‐order (phase) correlations in signals, is introduced. Combining wavelet scattering, bispectral analysis, and group theory, SHWB achieves interpretable, data‐efficient representations and demonstrates competitive performance across texture,
Alex Brown   +3 more
wiley   +1 more source

A knowledge distillation strategy for enhancing the adversarial robustness of lightweight automatic modulation classification models

open access: yes — IET Communications
Automatic modulation classification models based on deep learning are vulnerable to interference from adversarial attacks. In an adversarial attack, the attacker causes the classification model to misclassify the received signal by adding carefully
Fanghao Xu   +5 more
doaj   +1 more source
