Results 271 to 280 of about 5,561,446
Some of the following articles may not be open access.

Towards Accuracy-Fairness Paradox: Adversarial Example-based Data Augmentation for Visual Debiasing

ACM Multimedia, 2020
Machine learning fairness concerns biases against certain protected or sensitive groups of people when addressing target tasks. This paper studies the debiasing problem in the context of image classification tasks.
Yi Zhang, Jitao Sang
semanticscholar   +1 more source

Review on Image Processing Based Adversarial Example Defenses in Computer Vision

2020 IEEE 6th Intl Conference on Big Data Security on Cloud (BigDataSecurity), IEEE Intl Conference on High Performance and Smart Computing, (HPSC) and IEEE Intl Conference on Intelligent Data and Security (IDS), 2020
Recent research has shown that deep neural networks are vulnerable to adversarial examples, which are usually created maliciously by adding deliberate, imperceptible perturbations to inputs. Several state-of-the-art defense methods are ...
Meikang Qiu, Han Qiu
semanticscholar   +1 more source

POSTER: Detecting Audio Adversarial Example through Audio Modification

Conference on Computer and Communications Security, 2019
Deep neural networks (DNNs) perform well in the fields of image recognition, speech recognition, pattern analysis, and intrusion detection. However, DNNs are vulnerable to adversarial examples that add a small amount of noise to the original samples ...
Hyun Kwon, H. Yoon, Ki-Woong Park
semanticscholar   +1 more source
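The snippet above captures the general recipe behind adversarial examples: a small, carefully chosen perturbation added to an input. As a minimal illustration of that idea (not the detection method of this paper), here is the fast gradient sign method applied to a toy logistic-regression model; all names and values are illustrative:

```python
import numpy as np

def fgsm_perturb(x, w, b, y_true, eps=0.1):
    """Fast Gradient Sign Method on a toy logistic-regression model.

    Moves x a small step (eps) in the direction that increases the
    loss, i.e. the sign of the loss gradient with respect to the input.
    """
    logit = w @ x + b
    p = 1.0 / (1.0 + np.exp(-logit))   # sigmoid probability of class 1
    grad_x = (p - y_true) * w          # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)

# A clean point correctly classified as class 1 (logit = 1.0 > 0)
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])
x_adv = fgsm_perturb(x, w, b, y_true=1.0, eps=0.8)
print((w @ x + b) > 0, (w @ x_adv + b) > 0)  # prediction flips
```

The perturbation is bounded by `eps` in each coordinate, yet is enough to flip the model's decision.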

New algorithm to generate the adversarial example of image

, 2020
This paper focuses on an algorithm to generate adversarial examples of images. First, a new memristive chaotic mapping is proposed. Then two chaotic sequences based on the constructed memristive chaotic mapping are adopted to design the algorithm to ...
Bo Wang, F. Zou, X. W. Liu
semanticscholar   +1 more source
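The abstract above sketches a scheme in which two chaotic sequences drive the construction of a perturbation. Since the paper's memristive map is not given here, a rough sketch can use the standard logistic map as a stand-in; everything below (keys, `eps`, the sign/magnitude split) is a hypothetical illustration of the general approach, not the paper's algorithm:

```python
import numpy as np

def logistic_sequence(x0, n, r=3.99):
    """Generate a chaotic sequence with the logistic map x <- r*x*(1-x).

    Used here only as a stand-in for the paper's memristive chaotic map.
    """
    seq = np.empty(n)
    x = x0
    for i in range(n):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def chaotic_perturbation(shape, key1=0.123, key2=0.456, eps=8 / 255):
    """Combine two chaotic sequences into a bounded perturbation:
    one sequence chooses each pixel's sign, the other its magnitude."""
    n = int(np.prod(shape))
    signs = logistic_sequence(key1, n)      # sign source
    mags = logistic_sequence(key2, n)       # magnitude source, in (0, 1)
    delta = np.where(signs > 0.5, 1.0, -1.0) * mags * eps
    return delta.reshape(shape)

# Apply the key-dependent perturbation to a toy grayscale image in [0, 1]
img = np.random.default_rng(0).random((8, 8))
adv = np.clip(img + chaotic_perturbation(img.shape), 0.0, 1.0)
```

One appeal of such schemes is determinism: the same initial keys regenerate the exact perturbation, while slightly different keys diverge quickly due to the map's sensitivity to initial conditions.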

Adversarial Examples in Arabic

2019 International Conference on Computational Science and Computational Intelligence (CSCI), 2019
Several studies have shown that deep neural networks (DNNs) are vulnerable to adversarial examples - perturbed inputs that cause DNN-based models to produce incorrect outputs. A variety of adversarial attacks have been proposed in the domains of computer vision and natural language processing (NLP); however, most attacks in the NLP domain have been ...
Basemah Alshemali, Jugal Kalita
openaire   +1 more source

Revealing Perceptual Proxies with Adversarial Examples

IEEE Transactions on Visualization and Computer Graphics, 2021
Data visualizations convert numbers into visual marks so that our visual system can extract data from an image instead of raw numbers. Clearly, the visual system does not compute these values as a computer would, as an arithmetic mean or a correlation. Instead, it extracts these patterns using perceptual proxies: heuristic shortcuts over the visual marks ...
Brian D. Ondov   +4 more
openaire   +2 more sources

Adversarial Examples for Hamming Space Search

IEEE Transactions on Cybernetics, 2020
Due to its strong representation learning ability and its facilitation of joint learning of representations and hash codes, deep learning-to-hash has achieved promising results and is becoming increasingly popular for large-scale approximate nearest neighbor search.
Erkun Yang   +3 more
openaire   +2 more sources
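As background for the entry above: Hamming space search ranks binary hash codes by the number of differing bits, which is what makes hashing-based retrieval fast at scale. A minimal sketch of such a ranking (names and the tiny database are illustrative, not from the paper):

```python
import numpy as np

def hamming_search(query_code, db_codes):
    """Rank database hash codes by Hamming distance to a query code.

    Codes are binary (0/1) vectors; the distance between two codes is
    the number of bit positions where they differ.
    """
    dists = np.count_nonzero(db_codes != query_code, axis=1)
    order = np.argsort(dists, kind="stable")  # nearest codes first
    return order, dists

# Toy database of 4-bit hash codes and a query
db = np.array([[0, 1, 1, 0],
               [1, 1, 1, 0],
               [0, 0, 0, 0]])
q = np.array([0, 1, 1, 1])
order, dists = hamming_search(q, db)  # distances: 1, 2, 3
```

An adversarial example in this setting perturbs an input so that its hash code lands near (or far from) chosen codes in Hamming space, corrupting the ranking rather than a class label.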

AdvOps: Decoupling Adversarial Examples

Pattern Recognition, 2023
Donghua Wang   +3 more
openaire   +1 more source

Adversarial example detection based on saliency map features

Applied Intelligence (Boston), 2021
Shen Wang, Yuxin Gong
semanticscholar   +1 more source

Rethinking Adversarial Examples

Traditionally, adversarial examples have been defined as imperceptible perturbations that fool deep neural networks. This thesis challenges this view by examining unrestricted adversarial examples – a broader class of manipulations that can compromise model security while preserving semantics.
openaire   +1 more source
