Results 71 to 80 of about 79,418

CMDN: Pre-Trained Visual Representations Boost Adversarial Robustness for UAV Tracking

open access: yes, Drones
Visual object tracking is widely adopted in unmanned aerial vehicle (UAV)-related applications, which demand reliable tracking precision and real-time performance.
Ruilong Yu   +5 more
doaj   +1 more source

Shape Defense Against Adversarial Attacks

open access: yes, 2020
Humans rely heavily on shape information to recognize objects. In contrast, convolutional neural networks (CNNs) are biased more towards texture, which is perhaps the main reason why CNNs are vulnerable to adversarial examples. Here, we explore how shape bias can be incorporated into CNNs to improve their robustness. Two algorithms are proposed, based on ...
openaire   +2 more sources
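
The snippet above is cut off before naming the paper's two algorithms, so the following is only a minimal sketch of one common way to inject shape bias: extract an edge map (here with a Sobel filter) and feed it to the classifier as an extra input channel alongside the RGB image. The sobel_edges helper and the ShapeAugmentedCNN class are hypothetical names introduced for illustration and are not the paper's method.

    # Hedged sketch: append a Sobel edge map as a 4th input channel so the
    # classifier sees explicit shape/contour information next to raw RGB.
    # Illustrative assumption only, not the algorithms of the entry above.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def sobel_edges(rgb: torch.Tensor) -> torch.Tensor:
        """Return a single-channel edge-magnitude map for a batch of RGB images."""
        gray = rgb.mean(dim=1, keepdim=True)                        # (B, 1, H, W)
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                          device=rgb.device).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)                                     # Sobel y kernel
        gx = F.conv2d(gray, kx, padding=1)
        gy = F.conv2d(gray, ky, padding=1)
        return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

    class ShapeAugmentedCNN(nn.Module):
        """Toy classifier that consumes RGB plus an edge map (4 channels total)."""
        def __init__(self, num_classes: int = 10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),
            )
            self.head = nn.Linear(64, num_classes)

        def forward(self, rgb: torch.Tensor) -> torch.Tensor:
            x = torch.cat([rgb, sobel_edges(rgb)], dim=1)           # inject shape cue
            return self.head(self.features(x).flatten(1))

Concatenating the edge channel simply exposes explicit contours to the network; whether and how that improves adversarial robustness depends on the training procedure, which is what the entry above investigates.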

Adversarial Attacks and Defenses in Fault Detection and Diagnosis: A Comprehensive Benchmark on the Tennessee Eastman Process

open access: yes, IEEE Open Journal of the Industrial Electronics Society
Integrating machine learning into Automated Control Systems (ACS) enhances decision-making in industrial process management. One obstacle to the widespread adoption of these technologies in industry is the vulnerability of neural networks to ...
Vitaliy Pozdnyakov   +4 more
doaj   +1 more source

Detection and Defense: Student-Teacher Network for Adversarial Robustness

open access: yes, IEEE Access
Defense against adversarial attacks is critical for the reliability and safety of deep neural networks (DNNs). Current state-of-the-art defense methods achieve significant robustness against adversarial attacks.
Kyoungchan Park, Pilsung Kang
doaj   +1 more source

Adversarial Defense for Medical Images

open access: yes, Electronics
The rapid advancement of deep learning is significantly hindered by its vulnerability to adversarial attacks, a critical concern in sensitive domains like medicine, where misclassification can have severe, irreversible consequences. This issue underscores the unreliability of model predictions and is central to the goals of Explainable Artificial ...
Min-Jen Tsai   +3 more
openaire   +1 more source

A Gradual Adversarial Training Method for Semantic Segmentation

open access: yes, Remote Sensing
Deep neural networks (DNNs) have achieved great success in various computer vision tasks. However, they are susceptible to artificially designed adversarial perturbations, which limit their deployment in security-critical applications.
Yinkai Zan, Pingping Lu, Tingyu Meng
doaj   +1 more source
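
The entry above names a gradual adversarial training method, but the snippet does not describe its schedule, so the sketch below is only one plausible reading: the perturbation budget eps grows linearly over epochs, so early epochs train on mild attacks and later epochs on full-strength ones. The FGSM attack, the linear schedule, and the generic model/loader/optimizer arguments are assumptions for illustration, not the paper's procedure.

    # Hedged sketch of one possible "gradual" adversarial training loop:
    # the L-infinity budget eps ramps up linearly across epochs.
    import torch
    import torch.nn.functional as F

    def fgsm(model, x, y, eps):
        """Single-step FGSM perturbation within an L-infinity ball of radius eps."""
        x_adv = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

    def gradual_adversarial_training(model, loader, optimizer,
                                     epochs=30, eps_max=8 / 255):
        for epoch in range(epochs):
            eps = eps_max * (epoch + 1) / epochs   # gradually growing budget
            for x, y in loader:
                x_adv = fgsm(model, x, y, eps)
                optimizer.zero_grad()
                # F.cross_entropy also accepts per-pixel targets
                # (logits (B, C, H, W), labels (B, H, W)), so the same
                # loop shape applies to semantic segmentation.
                loss = F.cross_entropy(model(x_adv), y)
                loss.backward()
                optimizer.step()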

Lightweight defense mechanism against adversarial attacks via adaptive pruning and robust distillation

open access: yes, Chinese Journal of Network and Information Security, 2022
Adversarial training is one of the most commonly used defenses against adversarial attacks, incorporating adversarial samples into the training process. However, the effectiveness of adversarial training relies heavily on the size of the trained ...
Bin WANG   +6 more
doaj  
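
The snippet above describes adversarial training only in general terms, and the adaptive-pruning step from the title is not detailed, so the sketch below covers just a generic robust-distillation training step under stated assumptions: a fixed pretrained teacher supplies temperature-softened targets on clean inputs while the (possibly already pruned) student is trained on PGD adversarial inputs. The PGD hyperparameters, the temperature T, and the weight alpha are illustrative choices, not the paper's formulation.

    # Hedged sketch of a robust-distillation step: the student matches the
    # teacher's soft predictions on clean inputs while being attacked itself.
    import torch
    import torch.nn.functional as F

    def pgd(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
        """Multi-step L-infinity PGD attack around the clean input x."""
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
        for _ in range(steps):
            x_adv = x_adv.detach().requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

    def robust_distillation_step(student, teacher, optimizer, x, y,
                                 T=4.0, alpha=0.7):
        x_adv = pgd(student, x, y)                      # attack the student
        with torch.no_grad():
            teacher_soft = F.softmax(teacher(x) / T, dim=1)   # clean-input targets
        student_logits = student(x_adv)
        kd = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                      teacher_soft, reduction="batchmean") * (T * T)
        ce = F.cross_entropy(student_logits, y)
        loss = alpha * kd + (1 - alpha) * ce
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In this reading, the step would be called once per mini-batch after the student has been pruned; how pruning is adapted and interleaved with distillation is exactly what the entry above addresses and is not reproduced here.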
