Results 111 to 120 of about 5,561,446 (302)
Detecting Adversarial Examples
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples. While numerous successful adversarial attacks have been proposed, defenses against these attacks remain relatively understudied. Existing defense approaches either focus on negating the effects of perturbations caused by the attacks to restore the DNNs' original ...
Mumcu, Furkan, Yilmaz, Yasin
openaire +2 more sources
Multimodal Wearable Biosensing Meets Multidomain AI: A Pathway to Decentralized Healthcare
Multimodal biosensing meets multidomain AI. Wearable biosensors capture complementary biochemical and physiological signals, while cross‐device, population‐aware learning aligns noisy, heterogeneous streams. This Review distills key sensing modalities, fusion and calibration strategies, and privacy‐preserving deployment pathways that transform ...
Chenshu Liu +10 more
wiley +1 more source
Generating Natural Adversarial Examples
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the ...
Zhao, Zhengli +2 more
openaire +2 more sources
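The entries above describe adversarial examples as inputs with minor perturbations that substantially change a model's prediction. As a minimal illustration of that idea (an FGSM-style perturbation on a toy linear classifier; the weights, input, and epsilon below are invented for the sketch and come from none of the listed papers):

```python
import numpy as np

# Toy logistic "model" with fixed, assumed weights.
w = np.array([1.0, -2.0, 3.0])
b = 0.1

def predict(x):
    # Probability of class 1 under a logistic model.
    return 1.0 / (1.0 + np.exp(-(w @ x + b)))

x = np.array([0.5, 0.2, -0.4])   # clean input, classified as class 0
eps = 0.3                         # per-coordinate perturbation budget

# For a linear model, the gradient of the class-1 score w.r.t. the input
# is just w; an FGSM-style attack steps in the sign of that gradient.
x_adv = x + eps * np.sign(w)

clean_p, adv_p = predict(x), predict(x_adv)
# clean_p < 0.5 (class 0) while adv_p > 0.5 (class 1): a bounded
# perturbation of at most eps per coordinate flips the prediction.
```

The perturbation is small in the infinity norm yet flips the decision, which is exactly the failure mode these search results study.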
A conditional multi‐task deep learning framework is developed for designing and optimizing Full‐Stokes Hyperspectro‐Polarimetric Encoding Metasurfaces (FHPEMs). This framework achieves joint spectro‐polarimetric learning and unified forward–inverse design.
Chenjie Gong +9 more
wiley +1 more source
High-frequency Feature Masking-based Adversarial Attack Algorithm [PDF]
Deep neural networks have achieved widespread application in the field of image recognition; however, their complex structures make them vulnerable to adversarial attacks. Constructing adversarial examples that are imperceptible to the human eye is crucial ...
WANG Liuyi, ZHOU Chun, ZENG Wenqiang, HE Xingxing, MENG Hua
doaj +1 more source
Synthesizing Robust Adversarial Examples
Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems.
Athalye, Anish +3 more
openaire +2 more sources
Conventional software‐based encryption faces mounting limitations in power efficiency and security, inspiring the development of emerging neuromorphic computing hardware encryption. This study presents a hardware‐level multi‐dimensional encryption paradigm utilizing optoelectronic neuromorphic devices with low energy consumption of 3.3 fJ ...
Bo Sun +3 more
wiley +1 more source
Research on Image Adversarial Example Generation Method Based on SE-AdvGAN [PDF]
Adversarial examples are crucial for evaluating the robustness of Deep Neural Network (DNN) and revealing their potential security risks. The adversarial example generation method based on a Generative Adversarial Network (GAN), AdvGAN, has made ...
ZHAO Hong, SONG Furong, LI Wengai
doaj +1 more source
Adversarial Examples and Metrics
25 pages, 1 figure, under submission, fixed typos from previous ...
Döttling, Nico +3 more
openaire +2 more sources
A concealable physical unclonable function (PUF) based on an array of 384 nanoscale voltage‐controlled magnetic tunnel junctions is demonstrated. The PUF operates without any external magnetic field. It uses a combination of deterministic and stochastic switching mechanisms, based on the spin transfer torque and voltage‐controlled magnetic anisotropy ...
Thomas Neuner +6 more
wiley +1 more source

