Adversarial Example Games
Appears in: Advances in Neural Information Processing Systems 33 (NeurIPS 2020)
Bose, Avishek Joey et al.
The energy trading market, which supports free bidding among electricity users, is currently the key method of demand response in the smart grid. Reinforcement learning is used to formulate optimal strategies for these users. Nonetheless, ...
Donghe Li et al.
Adversarial Examples Detection Beyond Image Space [PDF]
To appear in ICASSP ...
Chen, Kejiang et al.
Image recognition with deep neural networks is vulnerable to adversarial sample attacks. In current black-box environments, attack accuracy is low when only limited queries to the target model are allowed.
Dong Yang, Wei Chen, Songjie Wei
Image Classification Adversarial Example Defense Method Based on Conditional Diffusion Model [PDF]
Deep-learning models have achieved impressive results in fields such as image classification; however, they remain vulnerable to interference and threats from adversarial examples.
CHEN Zimin, GUAN Zhitao
Are adversarial examples inevitable?
ISBN: 978-1-7138-7273 ...
Shafahi, Ali et al.
POSES: Patch Optimization Strategies for Efficiency and Stealthiness Using eXplainable AI
Adversarial examples, which are carefully crafted inputs designed to deceive deep learning models, create significant challenges in Artificial Intelligence.
Han-Ju Lee et al.
Adversarial Example Detection by Classification for Deep Speech Recognition [PDF]
Machine learning systems are vulnerable to adversarial attacks and will very likely produce incorrect outputs under such attacks. Attacks are categorized as white-box or black-box according to the adversary's level of access to the victim learning algorithm.
Saeid Samizade et al.
Downstream-agnostic Adversarial Examples
This paper has been accepted by the International Conference on Computer Vision (ICCV '23, October 2–6, 2023, Paris, France)
Zhou, Ziqi et al.
Distinguishability of adversarial examples [PDF]
Machine learning models can be easily fooled by adversarial examples, which are generated from clean examples with small perturbations. This poses a critical challenge to machine learning security and impedes the wide application of machine learning in many important domains, such as computer vision and malware detection. From a unique angle, we propose ...
Yi Qin, Ryan Hunt, Chuan Yue
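Several of the snippets above describe adversarial examples as clean inputs modified by small perturbations. As a concrete illustration of that idea (not drawn from any of the listed papers), here is a minimal FGSM sketch in PyTorch; `model`, `x`, and `y` are hypothetical placeholders for any differentiable classifier, a clean input batch in [0, 1], and its true labels.

```python
# Minimal sketch of the fast gradient sign method (FGSM): perturb a
# clean input by a small eps-step in the direction that increases the
# classifier's loss, yielding an adversarial example.
import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, eps=0.03):
    """Return a copy of x perturbed to raise the loss on labels y."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # One signed-gradient step; clamp to stay in the valid image range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```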

