
Safeguarding large language models: a survey.

open access: yes, Artif Intell Rev
Dong Y   +11 more
europepmc   +1 more source

Sinkhorn Adversarial Attack and Defense

IEEE Transactions on Image Processing, 2022
Adversarial attacks have been extensively investigated in recent years. Interestingly, the majority of these attacks operate in an lp space. In this work, we propose a novel approach for generating adversarial samples using the Wasserstein distance. (See the illustrative sketch below.)
openaire   +2 more sources
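
The abstract above only names the idea, so here is a minimal, hedged sketch of crafting an adversarial example whose deviation from the clean input is measured by an entropy-regularized Wasserstein (Sinkhorn) distance instead of an lp norm. This is a toy reconstruction, not the paper's algorithm: the PyTorch classifier `model`, the function names, the 1-D ground cost, and all hyperparameters are illustrative assumptions.

```python
# Toy sketch only: Sinkhorn-regularized adversarial perturbation.
# Assumes x is a single small image tensor [C, H, W] in [0, 1] and
# y is its label as a shape-[1] LongTensor.
import torch
import torch.nn.functional as F

def sinkhorn_distance(a, b, cost, eps=0.05, n_iters=50):
    """Entropy-regularized optimal-transport cost between histograms a, b."""
    K = torch.exp(-cost / eps)                    # Gibbs kernel
    u = torch.ones_like(a)
    v = torch.ones_like(b)
    for _ in range(n_iters):                      # Sinkhorn fixed-point updates
        u = a / (K @ v + 1e-9)
        v = b / (K.T @ u + 1e-9)
    plan = u[:, None] * K * v[None, :]            # approximate transport plan
    return (plan * cost).sum()

def wasserstein_attack(model, x, y, steps=20, lr=0.01, lam=10.0):
    """Raise the classification loss while keeping the adversarial image close
    to x under a Sinkhorn distance on normalized pixel mass (toy version)."""
    n = x.numel()
    coords = torch.arange(n, dtype=torch.float32)
    cost = (coords[:, None] - coords[None, :]).abs() / n   # toy 1-D ground cost
    x_adv = x.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=lr)
    p = x.detach().flatten().clamp(min=0) + 1e-9
    p = p / p.sum()                                # clean image as a distribution
    for _ in range(steps):
        q = x_adv.flatten().clamp(min=0) + 1e-9
        d = sinkhorn_distance(p, q / q.sum(), cost)
        loss = -F.cross_entropy(model(x_adv.unsqueeze(0)), y) + lam * d
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            x_adv.clamp_(0, 1)                     # keep pixels valid
    return x_adv.detach()
```

Because the Sinkhorn iterations are differentiable, the transport cost can act directly as a penalty term in the attack objective; the full cost matrix here is only practical for small images.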

Adversarial Attacks and Defenses

Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
Deep neural networks (DNNs) have achieved unprecedented success on numerous machine learning tasks across domains. However, the existence of adversarial examples makes us hesitant to apply DNN models to safety-critical tasks such as autonomous vehicles and malware detection. (See the illustrative FGSM sketch below.)
Han Xu, Yaxin Li, Wei Jin, Jiliang Tang
openaire   +1 more source
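
The entry above hinges on the notion of an adversarial example. The sketch below shows the classic one-step FGSM construction, which is standard background rather than anything specific to this tutorial; `model`, `x`, and `y` are assumed to be a PyTorch classifier, an image batch scaled to [0, 1], and integer class labels.

```python
# Standard FGSM sketch (illustrative assumptions: model, x, y as described above).
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=8 / 255):
    """Fast Gradient Sign Method: one gradient-sign step of size eps."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)       # loss on the clean input
    loss.backward()
    x_adv = x + eps * x.grad.sign()           # step in the direction that raises the loss
    return x_adv.clamp(0, 1).detach()         # keep pixels in the valid range
```

A perturbation budget of 8/255 is usually imperceptible to a human viewer, yet it is often enough to flip the prediction of an undefended classifier.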

Variational Adversarial Defense: A Bayes Perspective for Adversarial Training

IEEE Transactions on Pattern Analysis and Machine Intelligence
Various methods have been proposed to defend against adversarial attacks. However, these methods lack sufficient theoretical guarantees on their performance, which leads to two problems: first, a deficiency of the necessary adversarial training samples can attenuate back-propagation of the normal gradient, leading to overfitting and gradient masking ... (See the illustrative adversarial-training sketch below.)
Chenglong Zhao   +5 more
openaire   +2 more sources
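
The entry above concerns adversarial training. The sketch below is a standard PGD-based adversarial training step shown for context, not the paper's variational or Bayesian defense; `model`, `optimizer`, `x`, and `y` are assumed to be a PyTorch classifier, its optimizer, an image batch in [0, 1], and integer labels.

```python
# Standard PGD adversarial-training sketch (illustrative assumptions as above).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Projected gradient descent inside an l-infinity ball of radius eps."""
    x = x.detach()
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()           # ascend the loss
            x_adv = x + (x_adv - x).clamp(-eps, eps)      # project back into the eps-ball
            x_adv = x_adv.clamp(0, 1)                     # stay in the valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """One minimization step on adversarial examples (min-max training)."""
    model.eval()                       # freeze batch-norm statistics while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on such worst-case examples is what the abstract calls adversarial training; the gradient masking it mentions occurs when a defense merely obscures these attack gradients rather than genuinely making the model robust.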
