Results 101 to 110 of about 353,439 (278)
Generative Adversarial Trainer: Defense to Adversarial Perturbations with GAN
We propose a novel technique that uses a generative adversarial network to make neural networks robust to adversarial examples. The classifier and generator networks are trained alternately.
Han, Sungyeob +2 more
core
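The alternating scheme this snippet describes can be sketched in miniature: a generator crafts input perturbations while a classifier trains against them. The toy logistic-regression model, synthetic data, and step sizes below are illustrative assumptions for the sketch, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # linearly separable labels

w = np.zeros(2)   # classifier weights (logistic regression, no bias)
g = np.zeros(2)   # "generator" parameters: a learnable perturbation

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(200):
    delta = 0.1 * np.tanh(g)                  # bounded perturbation
    p = sigmoid((X + delta) @ w)
    # classifier step: descend the logistic loss on perturbed inputs
    grad_w = (X + delta).T @ (p - y) / len(y)
    w -= 0.5 * grad_w
    # generator step: ascend the same loss w.r.t. its own parameters
    grad_g = np.mean(p - y) * w * 0.1 * (1.0 - np.tanh(g) ** 2)
    g += 0.5 * grad_g

clean_acc = np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5))
```

The key design point the snippet hints at is the shared objective: the classifier descends the same loss that the generator ascends, so the two updates form a minimax game rather than two independent optimizations.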
ABSTRACT Conventional software‐based encryption faces mounting limitations in power efficiency and security, inspiring the development of emerging neuromorphic computing hardware encryption. This study presents a hardware‐level multi‐dimensional encryption paradigm utilizing optoelectronic neuromorphic devices with low energy consumption of 3.3 fJ ...
Bo Sun +3 more
wiley +1 more source
Fortify the Guardian, Not the Treasure: Resilient Adversarial Detectors
Adaptive adversarial attacks, where adversaries tailor their strategies with full knowledge of defense mechanisms, pose significant challenges to the robustness of adversarial detectors. In this paper, we introduce RADAR (Robust Adversarial Detection via ...
Raz Lapid, Almog Dubin, Moshe Sipper
doaj +1 more source
SUMMARY The life testing of items that exhibit a distribution of times to failure is undertaken for making decisions such as design qualification and reliability demonstration. In such contexts, procedures based on the Bayesian paradigm have assumed a common prior distribution of item reliability by both the consumer and the manufacturer.
Lindley, Dennis V. +1 more
openaire +2 more sources
A concealable physical unclonable function (PUF) based on an array of 384 nanoscale voltage‐controlled magnetic tunnel junctions is demonstrated. The PUF operates without any external magnetic field. It uses a combination of deterministic and stochastic switching mechanisms, based on the spin transfer torque and voltage‐controlled magnetic anisotropy ...
Thomas Neuner +6 more
wiley +1 more source
Recent studies have shown that machine-learning models are vulnerable to adversarial attacks. Adversarial attacks are deliberate attempts to modify the input data of a machine learning model in a way that causes it to produce incorrect predictions.
Palakorn Kamnounsing +3 more
doaj +1 more source
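The snippet's definition, a deliberate input modification that flips a model's prediction, can be made concrete with a minimal fast-gradient-sign (FGSM-style) sketch. The linear "model", input, and epsilon below are illustrative assumptions, with the model's weight vector standing in for the loss gradient.

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon=0.1):
    """Adversarial example: x shifted by epsilon along the gradient's sign."""
    return x + epsilon * np.sign(grad)

# For a linear score w @ x, the loss gradient w.r.t. x is proportional
# to w, so w stands in for the gradient in this toy example.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.3])

x_adv = fgsm_perturb(x, grad=w)   # each feature shifted by +/- 0.1
```

Taking only the sign of the gradient keeps the perturbation small in every coordinate (an L-infinity bound), which is what makes such modifications hard to notice while still degrading predictions.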
Towards Imperceptible and Robust Adversarial Example Attacks against Neural Networks
Machine learning systems based on deep neural networks, being able to produce state-of-the-art results on various perception tasks, have gained mainstream adoption in many applications.
Liu, Yannan +3 more
core +1 more source
Generative Artificial Intelligence Shaping the Future of Agri‐Food Innovation
Emerging use cases of generative artificial intelligence in agri‐food innovation. ABSTRACT The recent surge in generative artificial intelligence (AI), typified by models such as GPT, diffusion models, and large vision‐language architectures, has begun to influence the agri‐food sector.
Jun‐Li Xu +2 more
wiley +1 more source
Adversarial Information Bottleneck
10 pages, 7 figures, 2 ...
Penglong Zhai, Shihua Zhang
openaire +3 more sources
Abstract This work experimentally validates the RESPONSE (Resilient Process cONtrol SystEm) framework as a solution for maintaining safe, continuous operation of cyber‐physical process systems under cyberattacks. RESPONSE implements a dual‐loop architecture that runs a networked online controller in parallel with a hard‐isolated offline controller ...
Luyang Liu +5 more
wiley +1 more source

