Results 41 to 50 of about 2,268,403

Robust Tracking Against Adversarial Attacks [PDF]

open access: yes, 2020
While deep convolutional neural networks (CNNs) are vulnerable to adversarial attacks, little effort has been devoted to constructing deep tracking algorithms that are robust against such attacks. Current studies on adversarial attack and defense mainly focus on single images. In this work, we first attempt to generate adversarial examples on top ...
Jia, Shuai   +3 more
openaire   +2 more sources

One Prompt Word is Enough to Boost Adversarial Robustness for Pre-Trained Vision-Language Models [PDF]

open access: yes, Computer Vision and Pattern Recognition
Large pre-trained Vision-Language Models (VLMs) like CLIP, despite having remarkable generalization ability, are highly vulnerable to adversarial examples. This work studies the adversarial robustness of VLMs from the novel perspective of the text prompt
Lin Li   +3 more
semanticscholar   +1 more source

Pre-Trained Model Guided Fine-Tuning for Zero-Shot Adversarial Robustness [PDF]

open access: yes, Computer Vision and Pattern Recognition
Large-scale pre-trained vision-language models like CLIP have demonstrated impressive performance across various tasks and exhibit remarkable zero-shot generalization capability, yet they are also vulnerable to imperceptible adversarial examples ...
Sibo Wang   +3 more
semanticscholar   +1 more source

Adversarial Self-Supervised Learning for Robust SAR Target Recognition

open access: yes, Remote Sensing, 2021
Synthetic aperture radar (SAR) can perform observations at all times and has been widely used in the military field. Deep neural network (DNN)-based SAR target recognition models have achieved great success in recent years.
Yanjie Xu   +5 more
doaj   +1 more source

Advancing Adversarial Robustness Through Adversarial Logit Update

open access: yes, 2023
Deep neural networks are susceptible to adversarial perturbations. Adversarial training and adversarial purification are among the most widely recognized defense strategies. Although these methods follow different underlying logic, both rely on absolute logit values to generate label predictions.
Xuan, Hao, Zhu, Peican, Li, Xingyu
openaire   +2 more sources

Adversarially Robust Hyperspectral Image Classification via Random Spectral Sampling and Spectral Shape Encoding

open access: yes, IEEE Access, 2021
Although hyperspectral image (HSI) classification has adopted deep neural networks (DNNs) and shown remarkable performance, there is a lack of studies on the adversarial vulnerability of HSI classification.
Sungjune Park, Hong Joo Lee, Yong Man Ro
doaj   +1 more source

Adversarially Robust Learning via Entropic Regularization [PDF]

open access: yes, Frontiers in Artificial Intelligence, 2022
In this paper, we propose a new family of algorithms, ATENT, for training adversarially robust deep neural networks. We formulate a new loss function equipped with an additional entropic regularization term. Our loss function considers the contribution of adversarial samples drawn from a specially designed distribution in the data space that ...
Gauri Jagatap   +4 more
openaire   +4 more sources

Robust Algorithms under Adversarial Injections

open access: yes, 2020
In this paper, we study streaming and online algorithms in the context of randomness in the input. For several problems, a random order of the input sequence, as opposed to a worst-case order, appears to be a necessary evil for proving satisfying guarantees. However, algorithmic techniques that work under this assumption tend to be vulnerable ...
Garg, Paritosh   +3 more
openaire   +5 more sources

Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training

open access: yes, Technologies
The robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training has been widely used to improve the robustness of neural networks to adversarial attacks, but little in-depth ...
Anna Chistyakova   +6 more
doaj   +1 more source

Learning Lipschitz Feedback Policies From Expert Demonstrations: Closed-Loop Guarantees, Robustness and Generalization

open access: yes, IEEE Open Journal of Control Systems, 2022
In this work, we propose a framework in which we use a Lipschitz-constrained loss minimization scheme to learn feedback control policies with guarantees on closed-loop stability, adversarial robustness, and generalization.
Abed AlRahman Al Makdah   +2 more
doaj   +1 more source
