Adversarially Robust Distillation
Knowledge distillation is effective for producing small, high-performance neural networks for classification, but these small networks are vulnerable to adversarial attacks.
Soheil Feizi et al.
(Certified!!) Adversarial Robustness for Free! [PDF]
In this paper we show how to achieve state-of-the-art certified adversarial robustness to 2-norm bounded perturbations by relying exclusively on off-the-shelf pretrained models.
Nicholas Carlini et al.
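The certification mechanism this result builds on is randomized smoothing: a base classifier is queried on many Gaussian-noised copies of the input and the majority vote yields a prediction with a provable 2-norm robustness radius. Below is a minimal, assumed sketch of the prediction step only; `classifier`, `sigma`, and the sample count are illustrative placeholders, and the paper's actual pipeline (off-the-shelf denoiser followed by an off-the-shelf classifier) is omitted here.

```python
# Illustrative randomized-smoothing prediction; names and hyperparameters
# are assumed placeholders, not the paper's exact setup.
import torch

def smoothed_predict(classifier, x, sigma=0.5, n_samples=100, num_classes=10):
    """Majority vote of `classifier` over Gaussian-noised copies of a single input x (shape [1, ...])."""
    counts = torch.zeros(num_classes, dtype=torch.long)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)         # x + N(0, sigma^2 I)
            pred = classifier(noisy).argmax(dim=-1).item()  # base classifier's vote
            counts[pred] += 1
    return counts.argmax().item()                           # prediction of the smoothed classifier
```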
Recent Advances in Adversarial Training for Adversarial Robustness [PDF]
Adversarial training is one of the most effective approaches for defending deep learning models against adversarial examples. Unlike other defense strategies, adversarial training aims to enhance the robustness of models intrinsically. (A minimal sketch of the basic training loop follows this entry.)
Tao Bai et al.
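For context, the core loop that adversarial training refers to is simple: craft a worst-case perturbation of each training batch (typically with projected gradient descent) and update the model on those perturbed inputs. The sketch below is a generic PyTorch illustration under assumed hyperparameters (eps, alpha, step count), not the specific recipe of any method surveyed in the paper above.

```python
# Generic PGD adversarial-training sketch; eps, alpha, and steps are
# assumed illustrative values, not settings from the surveyed methods.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft L_inf-bounded adversarial examples with projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)  # random start in the eps-ball
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv + alpha * grad.sign()                    # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                              # keep inputs in the valid range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One training step: fit the model on adversarial examples instead of clean inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```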
Understanding Zero-Shot Adversarial Robustness for Large-Scale Models [PDF]
Pretrained large-scale vision-language models like CLIP have exhibited strong generalization over unseen tasks. Yet imperceptible adversarial perturbations can significantly reduce CLIP's performance on new tasks.
Chengzhi Mao et al.
On the Adversarial Robustness of Multi-Modal Foundation Models [PDF]
Multi-modal foundation models combining vision and language models such as Flamingo or GPT-4 have recently gained enormous interest. Alignment of foundation models is used to prevent models from providing toxic or harmful output.
Christian Schlarmann, Matthias Hein
On the Adversarial Robustness of Robust Estimators [PDF]
Motivated by recent data analytics applications, we study the adversarial robustness of robust estimators. Instead of assuming that only a fraction of the data points are outliers as considered in the classic robust estimation setup, in this paper, we consider an adversarial setup in which an attacker can observe the whole dataset and can modify all ...
Lifeng Lai, Erhan Bayraktar
Achieving adversarial robustness via sparsity [PDF]
Network pruning is known to produce compact models without much accuracy degradation. However, how the pruning process affects a network's robustness, and the mechanism behind this effect, remain unresolved. In this work, we theoretically prove that the sparsity of network weights is closely associated with model robustness.
Ningyi Liao et al.
On Adversarial Robustness of Trajectory Prediction for Autonomous Vehicles [PDF]
Trajectory prediction is a critical component for autonomous vehicles (AVs) to perform safe planning and navigation. However, few studies have analyzed the adversarial robustness of trajectory prediction or investigated whether the worst-case prediction ...
Qingzhao Zhang et al.
Delving into the Adversarial Robustness of Federated Learning [PDF]
Models trained with Federated Learning (FL) are as vulnerable to adversarial examples as centrally trained models. However, the adversarial robustness of federated learning remains largely unexplored.
J. Zhang et al.
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better [PDF]
Adversarial training is an effective approach for training deep neural networks that are robust to adversarial attacks. While it can deliver reliable robustness, adversarial training (AT) methods generally favor high-capacity models, i.e., the larger ... (A rough sketch of the soft-label distillation idea follows this entry.)
Bojia Zi et al.
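The idea behind robust soft-label distillation can be pictured as follows: the student is trained on adversarial examples, but its outputs are matched to a robust teacher's soft predictions rather than to hard labels. The sketch below is a rough, assumed illustration; the one-step attack, temperature, and equal loss weights are placeholders, not the paper's exact objective.

```python
# Rough sketch of adversarial robustness distillation with soft teacher labels;
# the one-step attack, temperature, and equal loss weights are assumptions,
# not the exact objective of the paper above.
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=8/255):
    """One-step L_inf attack, used only to keep the sketch self-contained."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

def robust_distillation_step(student, teacher, optimizer, x, y, tau=1.0):
    """Train the student on adversarial inputs against the teacher's soft labels."""
    teacher.eval()
    with torch.no_grad():
        soft_labels = F.softmax(teacher(x) / tau, dim=-1)  # robust teacher's soft labels on clean inputs
    x_adv = fgsm(student, x, y)                            # adversarial examples crafted against the student
    optimizer.zero_grad()
    kl_adv = F.kl_div(F.log_softmax(student(x_adv) / tau, dim=-1), soft_labels, reduction="batchmean")
    kl_clean = F.kl_div(F.log_softmax(student(x) / tau, dim=-1), soft_labels, reduction="batchmean")
    loss = kl_adv + kl_clean                               # match both adversarial and clean predictions
    loss.backward()
    optimizer.step()
    return loss.item()
```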

