Results 51 to 60 of about 85,688 (262)
All‐Optical Reconfigurable Physical Unclonable Function for Sustainable Security
An all‐optical reconfigurable physical unclonable function (PUF) is demonstrated using plasmonic coupling–induced sintering of optically trapped gold nanoparticles, where Brownian motion serves as a robust entropy source. The resulting optical PUF exhibits high encoding density, strong resistance to modeling attacks, and practical authentication ...
Jang‐Kyun Kwak +4 more
wiley +1 more source
Comprehensive comparisons of gradient-based multi-label adversarial attacks
Adversarial examples, which mislead deep neural networks by adding well-crafted perturbations, have become a major threat to classification models. Gradient-based white-box attack algorithms have been widely used to generate adversarial examples.
Zhijian Chen +4 more
doaj +1 more source
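The gradient-based white-box attacks this survey compares can be illustrated with a minimal sketch of the fast gradient sign method (FGSM), the simplest member of that family. This is a generic illustration, not code from the paper: for brevity it attacks a logistic-regression classifier rather than a deep network, and all names (`predict`, `fgsm`, the toy weights) are assumptions for this example.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(w, b, x):
    """P(y=1 | x) for a logistic model with weights w and bias b."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(w, b, x, y, eps):
    """One-step FGSM: x' = x + eps * sign(grad_x loss).

    For the logistic loss, grad_x loss = (p - y) * w, so each
    input coordinate is nudged by eps in the direction that
    increases the loss on the true label y.
    """
    p = predict(w, b, x)
    grad = [(p - y) * wi for wi in w]
    sign = lambda g: (g > 0) - (g < 0)
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# A clean point classified as class 1, perturbed toward class 0.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1
x_adv = fgsm(w, b, x, y, eps=0.6)
print(predict(w, b, x) > 0.5)      # clean input: predicted class 1
print(predict(w, b, x_adv) > 0.5)  # perturbed input: prediction flips
```

Stronger iterative attacks (e.g. PGD) repeat this step with a small step size and project back into an eps-ball, but the core ingredient is the same input-gradient sign.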
Jamming aided Generalized Data Attacks: Exposing Vulnerabilities in Secure Estimation
Jamming refers to the deletion, corruption or damage of meter measurements that prevents their further usage. This is distinct from adversarial data injection that changes meter readings while preserving their utility in state estimation.
Baldick, Ross +2 more
core +1 more source
This study generates high‐fidelity synthetic longitudinal records for a million‐patient diabetes cohort, successfully replicating clinical predictive performance. However, deeper analysis reveals algorithmic biases and trajectory inconsistencies that escape standard quality metrics. These findings challenge current validation norms, demonstrating why a ...
Francisco Ortuño +5 more
wiley +1 more source
The Adversarial Attack and Detection under the Fisher Information Metric
Many deep learning models are vulnerable to adversarial attacks, i.e., imperceptible but intentionally designed perturbations to the input can cause incorrect outputs from the network.
Fletcher, P. Thomas +5 more
core +1 more source
Learnable Diffusion Framework for Mouse V1 Neural Decoding
We introduce Sensorium‐Viz, a diffusion‐based framework for reconstructing high‐fidelity visual stimuli from mouse primary visual cortex activity. By integrating a novel spatial embedding module with a Diffusion Transformer (DiT) and a synthetic‐response augmentation strategy, our model outperforms state‐of‐the‐art fMRI‐based baselines, enabling robust ...
Kaiwen Deng +2 more
wiley +1 more source
Adversarial Attacks to Manipulate Target Localization of Object Detector
Adversarial attacks have gradually become an important branch of artificial intelligence security, where the potential threat posed by adversarial examples cannot be ignored.
Kai Xu +7 more
doaj +1 more source
Review of Artificial Intelligence Adversarial Attack and Defense Technologies
In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields.
Shilin Qiu +3 more
doaj +1 more source
Attacking Adversarial Attacks as A Defense
It is well known that adversarial attacks can fool deep neural networks with imperceptible perturbations. Although adversarial training significantly improves model robustness, failure cases of defense still broadly exist. In this work, we find that the adversarial attacks can also be vulnerable to small perturbations.
Wu, Boxi +8 more
openaire +2 more sources
Schematic illustration of SiND composite material synthesis, its internal photophysical mechanism, and an AI‐assisted dynamic information encryption process. ABSTRACT Persistent luminescence materials typically encounter an intrinsic trade‐off between high phosphorescence quantum yield (PhQY) and ultralong phosphorescence lifetime.
Yulu Liu +9 more
wiley +1 more source

