
Learning Adversarially Robust Object Detector with Consistency Regularization in Remote Sensing Images

open access: yes, Remote Sensing, 2023
Object detection in remote sensing has developed rapidly and has been applied in many fields, but it is known to be vulnerable to adversarial attacks. Improving the robustness of models has become a key issue for reliable application deployment.
Yang Li   +5 more
doaj   +1 more source

Detecting Adversarial Examples Using Surrogate Models

open access: yes, Machine Learning and Knowledge Extraction, 2023
Deep Learning has enabled significant progress towards more accurate predictions and is increasingly integrated into our everyday lives in real-world applications; this is true especially for Convolutional Neural Networks (CNNs) in the field of image analysis.
Borna Feldsar   +2 more
openaire   +2 more sources

Evading Logits-Based Detections to Audio Adversarial Examples by Logits-Traction Attack

open access: yes, Applied Sciences, 2022
Automatic Speech Recognition (ASR) provides a new way of human-computer interaction. However, it is vulnerable to adversarial examples, which are obtained by deliberately adding perturbations to the original audio.
Songshen Han   +4 more
doaj   +1 more source
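Several entries above describe adversarial examples as inputs formed by adding small, deliberate perturbations. As a minimal illustration of that idea (not taken from any cited paper), the sketch below applies a fast-gradient-sign-style perturbation to a toy linear scoring model; the model, weights, and budget are all illustrative assumptions.

```python
import numpy as np

def fgsm_perturb(x, w, epsilon):
    """Return x + delta with ||delta||_inf <= epsilon that decreases w . x.

    For a linear score w . x the gradient w.r.t. x is just w, so stepping
    against its sign reduces the score maximally per unit of budget.
    """
    return x - epsilon * np.sign(w)

w = np.array([1.0, -2.0, 0.5])   # toy classifier weights (illustrative)
x = np.array([0.2, 0.1, 0.4])    # toy clean input

x_adv = fgsm_perturb(x, w, epsilon=0.1)

assert np.max(np.abs(x_adv - x)) <= 0.1 + 1e-12   # perturbation stays in budget
assert np.dot(w, x_adv) < np.dot(w, x)            # score is pushed down
```

The same additive-perturbation principle underlies the audio attacks discussed above, though real attacks optimize against a full DNN rather than a linear score.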

Adversarial Traffic Detection Method Based on Ensemble Learning and Anomaly Detection [PDF]

open access: yes, Jisuanji gongcheng
In recent years, deep learning technology has been increasingly used for malicious traffic detection. However, adversarial example attacks pose challenges to deep learning-based malicious traffic detection. To address this problem, this study proposes an
DONG Fanghe, SHI Qiong, SHI Zhibin
doaj   +1 more source

Adversarial Sample Detection in Computer Vision: A Survey [PDF]

open access: yes, Jisuanji kexue
With the increase in data volume and improvement in hardware performance, deep learning (DL) has made significant progress in the field of computer vision. However, deep learning models are vulnerable to adversarial samples, causing significant changes in the
ZHANG Xin, ZHANG Han, NIU Manyu, JI Lixia
doaj   +1 more source

Anomaly detection of adversarial examples using class-conditional generative adversarial networks

open access: yes, Computers & Security, 2023
Deep Neural Networks (DNNs) have been shown vulnerable to Test-Time Evasion attacks (TTEs, or adversarial examples), which, by making small changes to the input, alter the DNN's decision. We propose an unsupervised attack detector on DNN classifiers based on class-conditional Generative Adversarial Networks (GANs).
Hang Wang   +2 more
openaire   +2 more sources
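The entry above proposes detecting adversarial inputs as anomalies under a class-conditional generative model. A minimal sketch of that general idea follows, with a per-class diagonal Gaussian standing in for the paper's learned GAN (an assumption for illustration): score how typical an input is for its class, and flag inputs whose anomaly score exceeds a threshold calibrated on clean data.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.normal(loc=0.0, scale=1.0, size=(1000, 2))   # clean class features
mu, sigma = clean.mean(axis=0), clean.std(axis=0)         # fitted diagonal Gaussian

def anomaly_score(x):
    # Squared Mahalanobis-style distance under the fitted diagonal Gaussian;
    # larger means less typical of the clean class distribution.
    return float(np.sum(((x - mu) / sigma) ** 2))

# Calibrate the detection threshold as the 99th percentile of clean scores.
threshold = np.percentile([anomaly_score(s) for s in clean], 99)

benign = np.array([0.1, -0.2])    # near the clean distribution
suspect = np.array([6.0, 6.0])    # far outside it, e.g. an evasion attempt

assert anomaly_score(benign) <= threshold
assert anomaly_score(suspect) > threshold
```

Thresholding a calibrated anomaly score in this way is unsupervised in the sense the abstract describes: no adversarial examples are needed at training time.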

Closeness and uncertainty aware adversarial examples detection in adversarial machine learning

open access: yes, Computers and Electrical Engineering, 2022
While state-of-the-art Deep Neural Network (DNN) models are considered to be robust to random perturbations, it was shown that these architectures are highly vulnerable to deliberately crafted, albeit quasi-imperceptible, perturbations. These vulnerabilities make it challenging to deploy DNN models in security-critical areas. In recent years, many
Omer Faruk Tuna   +2 more
openaire   +3 more sources

Towards Measuring Adversarial Twitter Interactions against Candidates in the US Midterm Elections

open access: yes, 2020
Adversarial interactions against politicians on social media such as Twitter have significant impact on society. In particular they disrupt substantive political discussions online, and may discourage people from seeking public office.
Yiqing Hua   +2 more
core   +1 more source

Image Classification Adversarial Example Defense Method Based on Conditional Diffusion Model [PDF]

open access: yes, Jisuanji gongcheng
Deep-learning models have achieved impressive results in fields such as image classification; however, they remain vulnerable to interference and threats from adversarial examples.
CHEN Zimin, GUAN Zhitao
doaj   +1 more source

Exploring Adversarial Examples in Malware Detection [PDF]

open access: yes, 2019 IEEE Security and Privacy Workshops (SPW), 2019
The convolutional neural network (CNN) architecture is increasingly being applied to new domains, such as malware detection, where it is able to learn malicious behavior from raw bytes extracted from executables. These architectures reach impressive performance with no feature engineering effort involved, but their robustness against active attackers ...
Octavian Suciu   +2 more
openaire   +2 more sources
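The entry above describes CNNs that learn malicious behavior directly from the raw bytes of executables. As a hedged sketch of the typical preprocessing step for such raw-byte models (the function name, length, and padding value are illustrative assumptions, not details from the cited paper): read the file's bytes and pad or truncate them to a fixed-length integer sequence suitable as CNN input.

```python
def bytes_to_sequence(raw: bytes, max_len: int = 16, pad_value: int = 256) -> list[int]:
    """Map raw executable bytes to a fixed-length integer sequence.

    Byte values occupy 0-255; a distinct pad_value marks positions past
    the end of short files, so the model can tell padding from real bytes.
    """
    seq = list(raw[:max_len])                    # truncate long files
    seq += [pad_value] * (max_len - len(seq))    # pad short files
    return seq

sample = b"MZ\x90\x00"                           # typical PE header prefix
seq = bytes_to_sequence(sample)

assert len(seq) == 16
assert seq[:4] == [0x4D, 0x5A, 0x90, 0x00]       # the original bytes survive
assert seq[4] == 256                             # padding begins after them
```

Because no hand-crafted features are involved, an attacker who can append or alter bytes directly manipulates the model's input, which is exactly the robustness concern the abstract raises.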
