Results 231 to 240 of about 94,861
Some of the following articles may not be open access.
Adversarial Example Detection Bayesian Game
2023 IEEE International Conference on Image Processing (ICIP), 2023
Despite the increasing attack ability and transferability of adversarial examples (AE), their security, i.e., how unlikely they are to be detected, has been largely ignored. Without the ability to circumvent popular detectors, the chance that an AE successfully fools a deep neural network is slim.
Hui Zeng +3 more
Detecting adversarial examples of text
2022
Although deep neural networks have achieved state-of-the-art performance in various tasks, many of their decisions are non-interpretable: high-dimensional feature vectors make deep network functions extremely difficult to visualize, and the widely used gradient descent yields an approximate rather than an analytical solution.
Detecting Adversarial Examples - a Lesson from Multimedia Security
2018 26th European Signal Processing Conference (EUSIPCO), 2018
Adversarial classification is the task of performing robust classification in the presence of a strategic attacker. Originating from information hiding and multimedia forensics, adversarial classification recently received a lot of attention in a broader security context.
Pascal Schöttle +3 more
Detecting Adversarial Examples Using Data Manifolds
MILCOM 2018 - 2018 IEEE Military Communications Conference (MILCOM), 2018
Models produced by machine learning, particularly deep neural networks, are state-of-the-art for many machine learning tasks and demonstrate very high prediction accuracy. Unfortunately, these models are also very brittle and vulnerable to specially crafted adversarial examples.
Susmit Jha +3 more
Detecting Adversarial Examples Through Image Transformation
Proceedings of the AAAI Conference on Artificial Intelligence, 2018
Deep Neural Networks (DNNs) have demonstrated remarkable performance in a diverse range of applications. Along with the prevalence of deep learning, it has been revealed that DNNs are vulnerable to attacks. By deliberately crafting adversarial examples, an adversary can manipulate a DNN to generate incorrect outputs, which may lead ...
Shixin Tian, Guolei Yang, Ying Cai
Adversarial examples for network intrusion detection systems
Journal of Computer Security, 2022
Machine learning-based network intrusion detection systems have demonstrated state-of-the-art accuracy in flagging malicious traffic. However, machine learning has been shown to be vulnerable to adversarial examples, particularly in domains such as image recognition.
Ryan Sheatsley +4 more
Adversarial Examples for Malware Detection
2017
Machine learning models are known to lack robustness against inputs crafted by an adversary. Such adversarial examples can, for instance, be derived from regular inputs by introducing minor, yet carefully selected, perturbations.
Kathrin Grosse +4 more
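The snippet above describes adversarial examples as regular inputs modified by minor, carefully chosen perturbations. A minimal sketch of that idea in the fast-gradient-sign style follows; the toy linear score `w @ x`, the weights, and the step size `eps` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Nudge each feature of x by +/- eps in the direction that
    most increases the model's score (the sign of the gradient)."""
    return x + eps * np.sign(grad)

w = np.array([0.5, -1.0, 2.0])   # toy linear model: score(x) = w @ x
x = np.array([1.0, 1.0, 1.0])    # regular input
grad = w                         # gradient of w @ x with respect to x

x_adv = fgsm_perturb(x, grad, eps=0.1)
print(x_adv)                          # [1.1 0.9 1.1]
print(float(w @ x_adv - w @ x))       # 0.35, i.e. eps * ||w||_1
```

Even though each feature moves by only 0.1, the score shift is the worst case achievable under that per-feature budget, which is why such small perturbations can flip a classifier's decision.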
Adversarial DGA Domain Examples Generation and Detection
2020 International Conference on Control, Robotics and Intelligent System, 2020
Botnets have long relied on the Domain Generation Algorithm (DGA) to survive to this day. The detection rate of machine-learning-based DGA detection methods is already high. However, models trained on existing data sets are sometimes blind to new variant domains. To mitigate this problem, a method based on generative adversarial networks (
Heng Cao +4 more
Intrusion detection systems vulnerability on adversarial examples
2018 Innovations in Intelligent Systems and Applications (INISTA), 2018
Intrusion detection systems define an important and dynamic research area for cybersecurity. The role of an intrusion detection system within a security architecture is to improve the security level by identifying all malicious and suspicious events that can be observed in a computer or network system.
Arkadiusz Warzynski, Grzegorz Kolaczek

