
Enhancing Adversarial Robustness through Stable Adversarial Training

open access: yes, Symmetry
Deep neural network models are vulnerable to adversarial attacks, such as gradient-based attacks. Even small perturbations can cause significant differences in their predictions. Adversarial training (AT) aims to improve a model's adversarial robustness against gradient attacks by generating adversarial samples and optimizing the ...
Kun Yan   +3 more
openaire   +1 more source
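
The snippet describes adversarial training only in general terms, and the paper's specific stabilization scheme is not shown. As a minimal sketch of the generic inner-maximization/outer-minimization loop, here is a single-step FGSM adversarial-training step in PyTorch; the model, optimizer, batch, and epsilon are placeholder assumptions, not the paper's method:

```python
import torch
import torch.nn.functional as F

def adversarial_training_step(model, x, y, optimizer, epsilon=8 / 255):
    """One generic adversarial-training step: craft FGSM samples,
    then optimize the model on them (a sketch, not the paper's method)."""
    # Inner maximization: single-step FGSM perturbation of the batch.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).clamp(0, 1).detach()

    # Outer minimization: update the model on the adversarial batch.
    optimizer.zero_grad()
    adv_loss = F.cross_entropy(model(x_adv), y)
    adv_loss.backward()
    optimizer.step()
    return adv_loss.item()
```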

Urea‐Formaldehyde Resin Confined Silicon Nanodots Composites: High‐Performance and Ultralong Persistent Luminescence for Dynamic AI Information Encryption

open access: yes, Advanced Science, EarlyView.
Schematic illustration of the synthesis of SiNDs composite materials, their internal photophysical mechanism, and an AI‐assisted dynamic information encryption process. ABSTRACT Persistent luminescence materials typically encounter an intrinsic trade‐off between high phosphorescence quantum yield (PhQY) and ultralong phosphorescence lifetime.
Yulu Liu   +9 more
wiley   +1 more source

Class-Aware Robust Adversarial Training for Object Detection [PDF]

open access: green, 2021
Pin-Chun Chen   +2 more
openalex   +1 more source

Probabilistic Modeling for Prediction Errors to Enhance Balancing Market Participation of Photovoltaic Systems: Error Threshold Estimation, Multisite Aggregation, and Overloading Effects

open access: yes, Advanced Energy and Sustainability Research, EarlyView.
This study proposes a method to increase the value of solar power in balancing markets by managing prediction errors. The approach models prediction uncertainties and quantifies reserve requirements based on a probabilistic model. This enables more reliable participation of photovoltaic plants in balancing markets across multiple sites, especially ...
Jindan Cui   +3 more
wiley   +1 more source
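
The snippet outlines quantifying reserves from a probabilistic model of forecast errors, without reproducing the paper's exact model. A minimal sketch of one common approach, taking an empirical quantile of historical prediction errors as the reserve threshold; the error arrays, units, and confidence level below are illustrative assumptions:

```python
import numpy as np

def reserve_requirement(forecast_errors, confidence=0.95):
    """Estimate a reserve (in kW) that covers forecast shortfalls with the
    given probability, using the empirical error distribution (a sketch)."""
    # Positive error = actual output below forecast (a shortfall to cover).
    return float(np.quantile(forecast_errors, confidence))

# Illustrative multisite aggregation: pooled errors are typically less
# dispersed than any single site's, shrinking the combined reserve.
rng = np.random.default_rng(0)
site_a = rng.normal(0, 50, 1000)  # synthetic per-site errors, kW
site_b = rng.normal(0, 50, 1000)
print(reserve_requirement(site_a) + reserve_requirement(site_b))
print(reserve_requirement(site_a + site_b))  # aggregated portfolio
```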

On the Adversarial Robustness of Hand-Crafted Features and Their Role in Defending Adversarial Examples

open access: yes, IEEE Access
Deep Neural Networks (DNNs) have achieved tremendous success in various computer vision tasks but remain highly vulnerable to adversarial examples. To address this limitation, we investigate the inherent robustness of hand-crafted features and validate ...
Shuohan Xue   +2 more
doaj   +1 more source
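
The snippet does not show how the authors measure the robustness of hand-crafted features; one simple way to probe it is to compare a descriptor such as HOG on a clean image against the same descriptor on a perturbed copy. A sketch using scikit-image's `hog`, with a random (not optimized, so weaker than a true adversarial) bounded perturbation and a placeholder image:

```python
import numpy as np
from skimage.feature import hog

def feature_shift(image, epsilon=8 / 255, seed=0):
    """Relative change in a hand-crafted HOG descriptor under a small
    bounded perturbation (a sketch of one robustness probe, not the paper's)."""
    rng = np.random.default_rng(seed)
    # Random sign noise, not an optimized attack, so this is a lower bound.
    noise = epsilon * rng.choice([-1.0, 1.0], size=image.shape)
    perturbed = np.clip(image + noise, 0.0, 1.0)
    f_clean = hog(image)
    f_pert = hog(perturbed)
    return np.linalg.norm(f_clean - f_pert) / np.linalg.norm(f_clean)

image = np.random.default_rng(1).random((64, 64))  # placeholder grayscale image
print(f"relative HOG shift: {feature_shift(image):.3f}")
```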

Pareto adversarial robustness: balancing spatial robustness and sensitivity-based robustness

open access: yes, Science China Information Sciences
Adversarial robustness, which primarily comprises sensitivity-based robustness and spatial robustness, plays an integral part in achieving robust generalization. In this paper, we endeavor to design strategies to achieve universal adversarial robustness.
Ke Sun, Mingjie Li, Zhouchen Lin
openaire   +2 more sources
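
The snippet does not give the paper's Pareto formulation. As a generic illustration of balancing two robustness objectives, the sketch below uses weighted-sum scalarization of the two losses and a naive non-dominated filter; the loss names, weights, and sample points are all assumptions:

```python
def scalarized_loss(loss_sensitivity, loss_spatial, alpha):
    """Weighted-sum scalarization of the two robustness objectives (a sketch)."""
    return alpha * loss_sensitivity + (1.0 - alpha) * loss_spatial

def pareto_front(points):
    """Keep only non-dominated (sensitivity_loss, spatial_loss) pairs:
    no other point is at least as good in both coordinates."""
    return [p for p in points
            if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                       for q in points)]

# Sweeping alpha during training would trace candidate trade-off points.
candidates = [(0.9, 0.2), (0.5, 0.5), (0.3, 0.9), (0.6, 0.6)]
print(pareto_front(candidates))  # (0.6, 0.6) is dominated by (0.5, 0.5)
```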

Synthesizing Robust Adversarial Examples

open access: yes, 2017
Standard methods for generating adversarial examples for neural networks do not consistently fool neural network classifiers in the physical world due to a combination of viewpoint shifts, camera noise, and other natural transformations, limiting their relevance to real-world systems.
Athalye, Anish   +3 more
openaire   +2 more sources
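
This is the Expectation Over Transformation (EOT) line of work: the snippet explains why an attack crafted for one view fails under physical transformations, and the remedy is to optimize the expected loss over a distribution of transforms. A minimal sketch of that core idea in PyTorch; the model, target, and `sample_transform` (assumed to be a differentiable random transform such as rotation plus noise) are placeholders, not the paper's implementation:

```python
import torch
import torch.nn.functional as F

def eot_attack_step(model, x_adv, target, sample_transform,
                    step_size=1 / 255, n_samples=8):
    """One Expectation-Over-Transformation step: average the targeted loss
    over random transforms so the perturbation survives them (a sketch)."""
    x_adv = x_adv.clone().detach().requires_grad_(True)
    loss = 0.0
    for _ in range(n_samples):
        t = sample_transform(x_adv)  # differentiable random transform (assumed)
        loss = loss + F.cross_entropy(model(t), target)
    (loss / n_samples).backward()
    # Descend the expected loss toward the target class.
    return (x_adv - step_size * x_adv.grad.sign()).clamp(0, 1).detach()
```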

Generative Artificial Intelligence Shaping the Future of Agri‐Food Innovation

open access: yes, AgriFood: Journal of Agricultural Products for Food, EarlyView.
Emerging use cases of generative artificial intelligence in agri‐food innovation. ABSTRACT The recent surge in generative artificial intelligence (AI), typified by models such as GPT, diffusion models, and large vision‐language architectures, has begun to influence the agri‐food sector.
Jun‐Li Xu   +2 more
wiley   +1 more source

ATVis: Understanding and diagnosing adversarial training processes through visual analytics

open access: yes, Visual Informatics
Adversarial training has emerged as a major defense against adversarial perturbations in deep neural networks, mitigating the exploitation of model vulnerabilities to induce incorrect predictions.
Fang Zhu   +4 more
doaj   +1 more source
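
ATVis itself is a visual-analytics system, and its interface is not described in the snippet. As a minimal stand-in for the kind of per-epoch signal such a tool would visualize, the sketch below logs clean versus robust accuracy during adversarial training; every name here is illustrative instrumentation, not the ATVis API:

```python
import json

def log_epoch(history, epoch, clean_acc, robust_acc, path="at_log.jsonl"):
    """Append one adversarial-training checkpoint for later visual analysis
    (a sketch of instrumentation, not the ATVis tool itself)."""
    record = {"epoch": epoch, "clean_acc": clean_acc, "robust_acc": robust_acc}
    history.append(record)
    with open(path, "a") as f:
        f.write(json.dumps(record) + "\n")
    # A widening clean/robust gap across epochs is a classic sign of
    # robust overfitting that such diagnostics aim to surface.
```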
