
Achieving adversarial robustness via sparsity [PDF]

open access: yesMachine Learning, 2021
Network pruning has been known to produce compact models without much accuracy degradation. However, how the pruning process affects a network's robustness, and the mechanism behind this, remain unresolved. In this work, we theoretically prove that the sparsity of network weights is closely associated with model robustness.
Ningyi Liao   +5 more
openaire   +3 more sources

Robustness-Eva-MRC: Assessing and analyzing the robustness of neural models in extractive machine reading comprehension

open access: yesIntelligent Systems with Applications, 2023
Deep neural networks, despite their remarkable success in various language understanding tasks, have been found vulnerable to adversarial attacks and subtle input perturbations, revealing a robustness shortfall.
Jingliang Fang   +5 more
doaj   +1 more source

Adversarial Robustness Curves [PDF]

open access: yes, 2020
The existence of adversarial examples has led to considerable uncertainty regarding the trust one can justifiably put in predictions produced by automated systems. This uncertainty has, in turn, led to considerable research effort in understanding adversarial robustness. In this work, we take first steps towards separating robustness analysis from the
Göpfert, Christina   +2 more
openaire   +2 more sources

A Robust Adversarial Example Attack Based on Video Augmentation

open access: yesApplied Sciences, 2023
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems.
Mingyong Yin   +3 more
doaj   +1 more source

Study on Adversarial Robustness of Deep Learning Models Based on SVD [PDF]

open access: yesJisuanji kexue, 2023
The emergence of adversarial attacks poses a substantial threat to the large-scale deployment of deep neural networks (DNNs) in real-world scenarios, especially in security-related domains. Most of the current defense methods are based on heuristic ...
ZHAO Zitian, ZHAN Wenhan, DUAN Hancong, WU Yue
doaj   +1 more source

Towards Adversarial Robustness via Feature Matching

open access: yesIEEE Access, 2020
Image classification systems are known to be vulnerable to adversarial attacks: imperceptibly perturbed inputs that lead to grossly incorrect classifications.
Zhuorong Li   +4 more
doaj   +1 more source

Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation

open access: yes网络与信息安全学报, 2022
Adversarial training is one of the commonly used defense methods against adversarial attacks, incorporating adversarial samples into the training process. However, the effectiveness of adversarial training relies heavily on the size of the trained ...
Bin WANG, Simin LI, Yaguan QIAN, Jun ZHANG, Chaohao LI, Chenming ZHU, Hongfei ZHANG
doaj   +3 more sources

Are facial attributes adversarially robust? [PDF]

open access: yes2016 23rd International Conference on Pattern Recognition (ICPR), 2016
Facial attributes are emerging soft biometrics that have the potential to reject non-matches, for example, based on mismatching gender. To be usable in stand-alone systems, facial attributes must be extracted from images automatically and reliably. In this paper, we propose a simple yet effective solution for automatic facial attribute extraction by ...
Rozsa, Andras   +3 more
openaire   +2 more sources

Adversarial Robustness Via Fisher-Rao Regularization

open access: yesIEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Adversarial robustness has become a topic of growing interest in machine learning since it was observed that neural networks tend to be brittle. We propose an information-geometric formulation of adversarial defense and introduce FIRE, a new Fisher-Rao regularization for the categorical cross-entropy loss, which is based on the geodesic distance ...
Picot, Marine   +6 more
openaire   +3 more sources

Robust Tracking Against Adversarial Attacks [PDF]

open access: yes, 2020
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, comparatively little effort has been devoted to constructing deep tracking algorithms that are robust against such attacks. Current studies on adversarial attack and defense mainly reside in a single image. In this work, we first attempt to generate adversarial examples on top
Jia, Shuai   +3 more
openaire   +2 more sources
