Results 21 to 30 of about 237,731
Generating Adversarial Examples with Adversarial Networks [PDF]
Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs to produce adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but how to produce them with high ...
Xiao, Chaowei +5 more
openaire +2 more sources
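The snippet above describes adversarial examples only in general terms (small-magnitude perturbations that steer a DNN toward an adversary-selected output). As a point of reference, below is a minimal one-step targeted perturbation sketch in PyTorch; it is a generic FGSM-style attack, not the GAN-based generator proposed in the paper, and the `model` interface, `eps` bound, and [0, 1] pixel range are assumptions.

```python
import torch
import torch.nn.functional as F

def targeted_fgsm(model, x, target_class, eps=0.03):
    """One-step targeted attack: nudge x so the model predicts target_class.

    Generic perturbation sketch (FGSM-style), not the paper's method.
    `x` is a batch of images in [0, 1]; `target_class` is a LongTensor of
    adversary-chosen class indices; `eps` bounds the per-pixel change.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    logits = model(x_adv)
    # Loss toward the adversary-selected target class.
    loss = F.cross_entropy(logits, target_class)
    loss.backward()
    # Step *against* the gradient to increase the target-class probability.
    x_adv = x_adv - eps * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```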
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, and these manipulated instances can mislead DNN ...
Jianyi Liu +4 more
doaj +1 more source
Semantic Adversarial Examples [PDF]
Deep neural networks are known to be vulnerable to adversarial examples, i.e., images that are maliciously perturbed to fool the model. Generating adversarial examples has been mostly limited to finding small perturbations that maximize the model prediction error.
Hosseini, Hossein, Poovendran, Radha
openaire +2 more sources
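For context on what "semantic" means here, the sketch below searches for a hue rotation in HSV space that flips a classifier's prediction while leaving the image content recognizable to a human, i.e. a large-norm but semantics-preserving change rather than a small additive perturbation. The uniform hue search, the `predict` callable, and the use of matplotlib's color conversion are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

def random_hue_shift_attack(predict, image_rgb, true_label, trials=50, rng=None):
    """Search for a hue shift that flips the classifier's prediction.

    `predict` maps an HxWx3 float image in [0, 1] to a class id (assumed
    interface). Returns a color-shifted image that changes the prediction,
    or None if no shift within the trial budget succeeds.
    """
    rng = rng or np.random.default_rng(0)
    hsv = rgb_to_hsv(image_rgb)
    for _ in range(trials):
        shifted = hsv.copy()
        shifted[..., 0] = (shifted[..., 0] + rng.uniform(0, 1)) % 1.0  # rotate hue
        candidate = hsv_to_rgb(shifted)
        if predict(candidate) != true_label:
            return candidate  # prediction flipped: a "semantic" adversarial example
    return None
```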
State-of-the-art neural network models are actively used in various fields, but it is well-known that they are vulnerable to adversarial example attacks.
Sanglee Park, Jungmin So
doaj +1 more source
Improving Adversarial Robustness via Attention and Adversarial Logit Pairing
Though deep neural networks have achieved state-of-the-art performance in visual classification, recent studies have shown that they are vulnerable to adversarial example attacks. In this paper, we develop improved techniques for defending ...
Xingjian Li +4 more
doaj +1 more source
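As background for the defense named in this title, here is a minimal sketch of the standard adversarial logit pairing loss: cross-entropy on adversarial inputs plus an MSE term that ties adversarial logits to the corresponding clean logits. The attention component the paper adds is omitted, and the weighting `lam` is an assumed hyperparameter.

```python
import torch.nn.functional as F

def adversarial_logit_pairing_loss(model, x_clean, x_adv, y, lam=0.5):
    """Training loss: adversarial cross-entropy plus a logit-pairing penalty.

    `x_clean` and `x_adv` are matched clean/adversarial batches with labels
    `y`; `lam` controls how strongly paired logits are pulled together.
    """
    logits_clean = model(x_clean)
    logits_adv = model(x_adv)
    # Classify the adversarial examples correctly ...
    ce = F.cross_entropy(logits_adv, y)
    # ... while keeping their logits close to those of the clean inputs.
    pairing = F.mse_loss(logits_adv, logits_clean)
    return ce + lam * pairing
```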
Survey of Image Adversarial Example Defense Techniques [PDF]
The rapid and extensive growth of artificial intelligence introduces new security challenges. The generation of, and defense against, adversarial examples for deep neural networks is one of the most active research topics.
Liu Ruiqi, Li Hu, Wang Dongxia, Zhao Chongyang, Li Boyu
doaj +1 more source
Not all adversarial examples require a complex defense: identifying over-optimized adversarial examples with IQR-based logit thresholding [PDF]
Detecting adversarial examples currently stands as one of the biggest challenges in the field of deep learning. Adversarial attacks, which produce adversarial examples, increase the prediction likelihood of a target class for a particular data point ...
De Neve, Wesley +2 more
core +2 more sources
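To illustrate the detection idea in the title, below is a rough sketch of an IQR-based threshold on the top logit, calibrated on clean data: over-optimized adversarial examples tend to push the winning logit to implausibly large values. Which logit statistic the paper actually thresholds and the 1.5x IQR multiplier are assumptions here.

```python
import numpy as np

def fit_logit_threshold(clean_max_logits, k=1.5):
    """Fit an IQR-based upper threshold on the top logit of clean inputs.

    `clean_max_logits` is a 1-D array of max logits from a clean calibration
    set; values above Q3 + k * IQR are treated as outliers.
    """
    q1, q3 = np.percentile(clean_max_logits, [25, 75])
    return q3 + k * (q3 - q1)

def is_suspicious(logits, threshold):
    """Flag an input whose top logit exceeds the calibrated threshold."""
    return float(np.max(logits)) > threshold
```

Inputs flagged this way could then be rejected or routed to a heavier defense, which matches the "not all adversarial examples require a complex defense" framing of the title.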
A Robust Adversarial Example Attack Based on Video Augmentation
Despite the success of learning-based systems, recent studies have highlighted video adversarial examples as a ubiquitous threat to state-of-the-art video classification systems.
Mingyong Yin +3 more
doaj +1 more source
Adversarial Examples: Opportunities and Challenges [PDF]
16 pages, 13 figures, 5 ...
Jiliang Zhang, Chen Li
openaire +3 more sources
Really natural adversarial examples [PDF]
The phenomenon of Adversarial Examples has become one of the most intriguing topics associated with deep learning. The so-called adversarial attacks have the ability to fool deep neural networks with inappreciable perturbations. While the effect is striking, it has been suggested that such carefully selected injected noise does not necessarily ...
Anibal Pedraza +2 more
openaire +1 more source

