Results 121 to 130 of about 79,418 (169)
Robust deepfake detector against deep image watermarking. [PDF]
Yu J, Liu X, Zan F, Peng Y.
europepmc +1 more source
Safeguarding large language models: a survey. [PDF]
Dong Y +11 more
europepmc +1 more source
Defending against and generating adversarial examples together with generative adversarial networks. [PDF]
Wang Y, Liao X, Cui W, Yang Y.
europepmc +1 more source
Robust detection framework for adversarial threats in Autonomous Vehicle Platooning. [PDF]
Ness S.
europepmc +1 more source
LatAtk: A Medical Image Attack Method Focused on Lesion Areas with High Transferability. [PDF]
Li L +5 more
europepmc +1 more source
A generative AI cybersecurity risks mitigation model for code generation: using ANN-ISM hybrid approach. [PDF]
Al-Hashimi HA.
europepmc +1 more source
Some of the following articles may not be open access.
Related searches:
Sinkhorn Adversarial Attack and Defense
IEEE Transactions on Image Processing, 2022
Adversarial attacks have been extensively investigated in the recent past. Quite interestingly, a majority of these attacks primarily work in the lp space. In this work, we propose a novel approach for generating adversarial samples using Wasserstein distance.
openaire +2 more sources
Adversarial Attacks and Defenses
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020
Deep neural networks (DNN) have achieved unprecedented success in numerous machine learning tasks in various domains. However, the existence of adversarial examples leaves us a big hesitation when applying DNN models on safety-critical tasks such as autonomous vehicles and malware detection.
Han Xu, Yaxin Li, Wei Jin, Jiliang Tang
openaire +1 more source
Variational Adversarial Defense: A Bayes Perspective for Adversarial Training
IEEE Transactions on Pattern Analysis and Machine Intelligence
Various methods have been proposed to defend against adversarial attacks. However, there is a lack of enough theoretical guarantee of the performance, thus leading to two problems: First, deficiency of necessary adversarial training samples might attenuate the normal gradient's back-propagation, which leads to overfitting and gradient masking ...
Chenglong Zhao +5 more
openaire +2 more sources