Results 21 to 30 of about 94,262 (290)
A Brute-Force Black-Box Method to Attack Machine Learning-Based Systems in Cybersecurity
Machine learning algorithms are widely utilized in cybersecurity. However, recent studies show that these algorithms are vulnerable to adversarial examples.
Sicong Zhang, Xiaoyao Xie, Yang Xu
doaj +1 more source
Attention distraction with gradient sharpening for multi-task adversarial attack
The advancement of deep learning has resulted in significant improvements on various visual tasks. However, deep neural networks (DNNs) have been found to be vulnerable to well-designed adversarial examples, which can easily deceive DNNs by adding ...
Bingyu Liu, Jiani Hu, Weihong Deng
doaj +1 more source
Recent advances in machine learning show that neural models are vulnerable to minimally perturbed inputs, or adversarial examples. Adversarial algorithms are optimization problems that minimize the accuracy of ML models by perturbing inputs, often using a model's loss function to craft such perturbations.
Cilloni, Thomas +2 more
openaire +2 more sources
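The gradient-based formulation described in the entry above (crafting perturbations that raise a model's loss) can be sketched with a toy FGSM-style example. Everything here is an illustrative assumption, not the method of any listed paper: a tiny logistic-regression "model" with hand-picked weights and a hypothetical `eps` step size.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_grad_wrt_input(x, w, b, y):
    # Binary cross-entropy L = -[y log p + (1-y) log(1-p)], p = sigmoid(w.x + b).
    # Its gradient with respect to the input simplifies to (p - y) * w.
    p = sigmoid(w @ x + b)
    return (p - y) * w

def fgsm(x, w, b, y, eps=0.5):
    # Step the input in the direction that increases the loss.
    return x + eps * np.sign(loss_grad_wrt_input(x, w, b, y))

# Toy model: classifies x as positive when w.x + b > 0.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.6, -0.2])   # clean input, true label y = 1
y = 1.0

x_adv = fgsm(x, w, b, y)
p_clean = sigmoid(w @ x + b)    # confident, correct prediction
p_adv = sigmoid(w @ x_adv + b)  # confidence drops below 0.5: misclassified
```

With these weights the clean input is classified correctly, while the perturbed input `[0.1, 0.3]` flips the prediction, which is the accuracy-minimizing behavior the snippet describes.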
ACM MM2022 Brave New ...
Sang, Jitao +3 more
openaire +2 more sources
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks.
Sanglee Park, Jungmin So
doaj +1 more source
Survey of Adversarial Attacks and Defense Methods for Deep Learning Model [PDF]
As an important part of artificial intelligence technology, deep learning is widely used in computer vision, natural language processing, and other fields. Although deep learning performs well in tasks such as image classification and target detection, its ...
JIANG Yan, ZHANG Liguo
doaj +1 more source
Adversarial attacks expose important vulnerabilities of deep learning models, yet little attention has been paid to settings where data arrives as a stream. In this paper, we formalize the online adversarial attack problem, emphasizing two key elements found in real-world use-cases: attackers must operate under partial knowledge of the target model ...
Mladenovic, Andjela +6 more
openaire +2 more sources
Impact of adversarial examples on deep learning models for biomedical image segmentation [PDF]
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples.
C Pena-Betancor +3 more
core +4 more sources
Adversarial Attack for SAR Target Recognition Based on UNet-Generative Adversarial Network
Some recent articles have revealed that synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep learning are vulnerable to attacks by adversarial examples, which cause security problems.
Chuan Du, Lei Zhang
doaj +1 more source
ICML Workshop 2022 on Adversarial Machine Learning ...
Kumano, Soichiro +2 more
openaire +2 more sources