Results 91 to 100 of about 237,731

Understanding adversarial robustness against on-manifold adversarial examples

open access: yes, Pattern Recognition
Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples: a well-trained model can be easily attacked by adding small perturbations to the original data. One hypothesis for the existence of adversarial examples is the off-manifold assumption: adversarial examples lie off the data manifold. However, recent research showed
Jiancong Xiao   +4 more
openaire   +2 more sources

Spatially Transformed Adversarial Examples

open access: yes, 2018
Recent studies show that widely used deep neural networks (DNNs) are vulnerable to carefully crafted adversarial examples. Many advanced algorithms have been proposed to generate adversarial examples by leveraging the $\mathcal{L}_p$ distance for penalizing perturbations.
Xiao, Chaowei   +5 more
openaire   +2 more sources
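The entry above mentions penalizing perturbations with the $\mathcal{L}_p$ distance. As a hedged illustration (not code from the cited paper), the sketch below measures a perturbation under different L_p norms with NumPy; the toy images and perturbed pixel are illustrative assumptions.

```python
import numpy as np

def lp_distance(x, x_adv, p=2):
    """Flatten both inputs and return the L_p norm of their difference.

    Illustrative only: real attacks compute this over image tensors and
    constrain it to a perturbation budget epsilon.
    """
    delta = (x_adv - x).ravel()
    if p == np.inf:
        return np.abs(delta).max()  # L_inf: largest single-pixel change
    return np.linalg.norm(delta, ord=p)

# Toy example: one pixel of a 4x4 "image" perturbed by 0.3 (an assumption).
x = np.zeros((4, 4))
x_adv = x.copy()
x_adv[0, 0] = 0.3

print(lp_distance(x, x_adv, p=np.inf))  # 0.3
print(lp_distance(x, x_adv, p=2))       # 0.3 (single nonzero entry)
```

With a single perturbed pixel the L_1, L_2, and L_inf distances coincide; spreading the same total change over many pixels lowers the L_inf distance while leaving L_1 unchanged, which is why the choice of p shapes what "small" perturbation means.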

Nanozymes Integrated Biochips Toward Smart Detection System

open access: yes, Advanced Science, EarlyView.
This review systematically outlines the integration of nanozymes, biochips, and artificial intelligence (AI) for intelligent biosensing. It details how their convergence enhances signal amplification, enables portable detection, and improves data interpretation.
Dongyu Chen   +10 more
wiley   +1 more source

Reconstructing Coherent Functional Landscape From Multi‐Modal Multi‐Slice Spatial Transcriptomics by a Variational Spatial Gaussian Process

open access: yes, Advanced Science, EarlyView.
This study introduces stVGP, a variational spatial Gaussian process framework for multi‐modal, multi‐slice spatial transcriptomics. By integrating histological and genomic data through hybrid alignment and attention‐based fusion, stVGP reconstructs coherent 3D functional landscapes.
Zedong Wang   +3 more
wiley   +1 more source

Photoacoustic Microscopy for Multiscale Biological System Visualization and Clinical Translation

open access: yes, Advanced Science, EarlyView.
Photoacoustic microscopy (PAM) is a powerful biomedical imaging tool renowned for its non‐invasiveness and high resolution. This review synthesizes recent technological advances and highlights their broad applications from cellular and organ‐level to whole‐animal imaging.
Tingting Wang   +3 more
wiley   +1 more source

Detecting Adversarial Examples

open access: yes
Deep Neural Networks (DNNs) have been shown to be vulnerable to adversarial examples. While numerous successful adversarial attacks have been proposed, defenses against these attacks remain relatively understudied. Existing defense approaches either focus on negating the effects of perturbations caused by the attacks to restore the DNNs' original ...
Mumcu, Furkan, Yilmaz, Yasin
openaire   +2 more sources

Atomic Defects in Layered Transition Metal Dichalcogenides for Sustainable Energy Storage and the Intelligent Trends in Data Analytics

open access: yes, Advanced Science, EarlyView.
This review comprehensively summarizes the atomic defects in TMDs for their applications in sustainable energy storage devices, along with the latest progress in ML methodologies for high‐throughput TEM data analysis, offering insights on how ML‐empowered microscopy facilitates bridging structure–property correlation and inspires knowledge for precise ...
Zheng Luo   +6 more
wiley   +1 more source

High-frequency Feature Masking-based Adversarial Attack Algorithm [PDF]

open access: yes, Jisuanji kexue
Deep neural networks have achieved widespread application in the field of image recognition; however, their complex structures make them vulnerable to adversarial attacks. Constructing adversarial examples that are imperceptible to the human eye is crucial ...
WANG Liuyi, ZHOU Chun, ZENG Wenqiang, HE Xingxing, MENG Hua
doaj   +1 more source

Generating Natural Adversarial Examples

open access: yes, 2017
Due to their complex nature, it is hard to characterize the ways in which machine learning models can misbehave or be exploited when deployed. Recent work on adversarial examples, i.e. inputs with minor perturbations that result in substantially different model predictions, is helpful in evaluating the robustness of these models by exposing the ...
Zhao, Zhengli   +2 more
openaire   +2 more sources

Adversarial Examples in the Physical World [PDF]

open access: yes, 2018
Most existing machine learning classifiers are highly vulnerable to adversarial examples. An adversarial example is a sample of input data which has been modified very slightly in a way that is intended to cause a machine learning classifier to misclassify it.
Kurakin, Alexey   +2 more
openaire   +2 more sources
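The entry above defines an adversarial example as an input modified very slightly so that a classifier misclassifies it. A minimal hedged sketch of one standard construction, the fast gradient sign method, is shown below on a toy linear score; the weights, input, and epsilon are illustrative assumptions, and a real attack would use the gradient of a trained network's loss.

```python
import numpy as np

def fgsm(x, grad, eps):
    """Fast gradient sign method: step eps in the sign of the loss gradient.

    Each input coordinate moves by exactly +/-eps, so the L_inf distance
    between x and the adversarial example is eps.
    """
    return x + eps * np.sign(grad)

w = np.array([1.0, -2.0, 0.5])   # toy model weights (assumed)
x = np.array([0.2, 0.4, 0.1])    # clean input (assumed)
grad = w                         # for a linear score w.x, the gradient w.r.t. x is w
x_adv = fgsm(x, grad, eps=0.1)

print(x_adv)  # [0.3 0.3 0.2]
```

Note that the middle coordinate moves down (its gradient is negative), so every coordinate shifts in the direction that increases the score, while the perturbation stays within the eps = 0.1 budget per pixel.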
