
Target Training Does Adversarial Training Without Adversarial Samples

open access: yes, 2021
arXiv admin note: text overlap with arXiv:2006 ...
openaire   +2 more sources

Prior-Guided Adversarial Initialization for Fast Adversarial Training

open access: yes, 2022
Fast adversarial training (FAT) effectively improves the efficiency of standard adversarial training (SAT). However, initial FAT encounters catastrophic overfitting, i.e., the robust accuracy against adversarial attacks suddenly and dramatically decreases.
Jia, Xiaojun   +6 more
openaire   +2 more sources
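The snippet above refers to fast adversarial training (FAT), which typically replaces multi-step attack generation with a single FGSM step and is known to suffer catastrophic overfitting. As a hedged illustration only, and not the method of the listed paper, the sketch below shows one batch of single-step FGSM-based adversarial training with a random initialization of the perturbation; the model, data, and hyperparameters are placeholders.

```python
# Minimal sketch of single-step (FGSM-based) fast adversarial training.
# Not the method of the listed paper; model, data, and hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

eps, alpha = 8 / 255, 10 / 255          # perturbation budget and FGSM step size
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x = torch.rand(16, 3, 32, 32)           # stand-in batch of images in [0, 1]
y = torch.randint(0, 10, (16,))         # stand-in labels

# Random (uniform) initialization of the perturbation, a common FAT ingredient.
delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)

loss = F.cross_entropy(model(x + delta), y)
loss.backward()

# Single FGSM step on the perturbation, projected back into the epsilon-ball.
with torch.no_grad():
    delta = (delta + alpha * delta.grad.sign()).clamp(-eps, eps)
    x_adv = (x + delta).clamp(0, 1)

# Train the model on the adversarially perturbed batch.
opt.zero_grad()
F.cross_entropy(model(x_adv), y).backward()
opt.step()
```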

Combining Adversaries with Anti-adversaries in Training

open access: yesProceedings of the AAAI Conference on Artificial Intelligence, 2023
Adversarial training is an effective learning technique to improve the robustness of deep neural networks. In this study, the influence of adversarial training on deep learning models in terms of fairness, robustness, and generalization is theoretically investigated under a more general perturbation scope in which different samples can have different ...
Zhou, Xiaoling, Yang, Nan, Wu, Ou
openaire   +2 more sources

Universal Adversarial Training

open access: yesProceedings of the AAAI Conference on Artificial Intelligence, 2020
Standard adversarial attacks change the predicted class label of a selected image by adding specially tailored small perturbations to its pixels. In contrast, a universal perturbation is an update that can be added to any image in a broad class of images, while still changing the predicted class label.
Shafahi, Ali   +5 more
openaire   +3 more sources
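The snippet above distinguishes per-image perturbations from a universal perturbation shared across an entire class of images. As a hedged illustration of that general idea only, and not the exact algorithm of the listed paper, the sketch below alternately updates the model and a single shared perturbation; the model, data, and hyperparameters are placeholders.

```python
# Minimal sketch of training with a universal perturbation: one shared delta
# is applied to every image and updated alongside the model.  Illustration of
# the general idea, not the listed paper's algorithm; model, data, and
# hyperparameters are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

eps, delta_lr = 8 / 255, 1 / 255
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
opt = torch.optim.SGD(model.parameters(), lr=0.1)

# A single universal perturbation shared across all images.
delta = torch.zeros(1, 3, 32, 32, requires_grad=True)

for _ in range(5):                                  # stand-in training loop
    x = torch.rand(16, 3, 32, 32)                   # stand-in batch in [0, 1]
    y = torch.randint(0, 10, (16,))

    loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)

    opt.zero_grad()
    if delta.grad is not None:
        delta.grad.zero_()
    loss.backward()

    opt.step()                                      # model descends on the loss
    with torch.no_grad():                           # delta ascends on the loss
        delta += delta_lr * delta.grad.sign()
        delta.clamp_(-eps, eps)                     # stay within the epsilon-ball
```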

EIFDAA: Evaluation of an IDS with function-discarding adversarial attacks in the IIoT

open access: yesHeliyon, 2023
The complexity of the Industrial Internet of Things (IIoT) presents higher requirements for intrusion detection systems (IDSs). An adversarial attack is a threat to the security of machine learning-based IDSs.
Shiming Li   +4 more
doaj   +1 more source

CAT: Collaborative Adversarial Training

open access: yes, 2023
Tech ...
Liu, Xingbin   +4 more
openaire   +2 more sources

Directional Adversarial Training for Robust Ownership-Based Recommendation System

open access: yesIEEE Access, 2022
Machine learning algorithms are susceptible to cyberattacks, posing security problems in computer vision, speech recognition, and recommendation systems. So far, researchers have made great strides in adopting adversarial training as a defensive strategy.
Zhefu Wu   +3 more
doaj   +1 more source

Fast-M Adversarial Training Algorithm for Deep Neural Networks

open access: yesApplied Sciences
Although deep neural networks have been successfully applied in many fields, research studies show that neural network models are easily disrupted by small malicious inputs, greatly reducing their performance.
Yu Ma   +4 more
doaj   +1 more source

MAT: A Multi-strength Adversarial Training Method to Mitigate Adversarial Attacks

open access: yes, 2018
Some recent works revealed that deep neural networks (DNNs) are vulnerable to so-called adversarial attacks where input examples are intentionally perturbed to fool DNNs.
Chen, Yiran   +7 more
core   +1 more source

Improving Adversarial Robustness via Distillation-Based Purification

open access: yesApplied Sciences, 2023
Despite the impressive performance of deep neural networks on many different vision tasks, they have been known to be vulnerable to intentionally added noise to input images.
Inhwa Koo, Dong-Kyu Chae, Sang-Chul Lee
doaj   +1 more source
