Results 61 to 70 of about 94,262
Attacking Adversarial Attacks as A Defense
It is well known that adversarial attacks can fool deep neural networks with imperceptible perturbations. Although adversarial training significantly improves model robustness, failure cases of such defenses still exist broadly. In this work, we find that adversarial attacks can themselves be vulnerable to small perturbations.
Wu, Boxi +8 more
openaire +2 more sources
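The entry above turns on the observation that adversarial perturbations are themselves fragile. As a point of reference for what such an "imperceptible perturbation" looks like in code, here is a minimal FGSM sketch in PyTorch; the model, input, label, and epsilon are dummy stand-ins, and this is not the defense proposed in the paper.

```python
# Minimal FGSM sketch (not the paper's defense): craft an "imperceptible"
# perturbation by stepping along the sign of the input gradient.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in classifier; any differentiable model works here.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
loss_fn = nn.CrossEntropyLoss()

x = torch.rand(1, 1, 28, 28, requires_grad=True)  # dummy "image"
y = torch.tensor([3])                             # dummy true label

loss = loss_fn(model(x), y)
loss.backward()

eps = 0.03  # perturbation budget; small enough to be visually negligible
x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

print("max pixel change:", (x_adv - x).abs().max().item())  # <= eps
```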
Graphical abstract: schematic illustration of SiNDs composite material synthesis, its internal photophysical mechanism, and an AI‐assisted dynamic information encryption process. ABSTRACT Persistent luminescence materials typically encounter an intrinsic trade‐off between high phosphorescence quantum yield (PhQY) and ultralong phosphorescence lifetime.
Yulu Liu +9 more
wiley +1 more source
Review of Artificial Intelligence Adversarial Attack and Defense Technologies
In recent years, artificial intelligence technologies have been widely used in computer vision, natural language processing, automatic driving, and other fields.
Shilin Qiu +3 more
doaj +1 more source
Functional Adversarial Attacks
Accepted to NeurIPS ...
Laidlaw, Cassidy; Feizi, Soheil
openaire +2 more sources
Abstract This work experimentally validates the RESPONSE (Resilient Process cONtrol SystEm) framework as a solution for maintaining safe, continuous operation of cyber‐physical process systems under cyberattacks. RESPONSE implements a dual‐loop architecture that runs a networked online controller in parallel with a hard‐isolated offline controller ...
Luyang Liu +5 more
wiley +1 more source
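The dual-loop architecture sketched in the abstract above (a networked online controller backed by a hard-isolated offline controller) can be illustrated with a toy failover loop. Everything below, including the controller gains, plant model, and plausibility check, is invented for illustration and is not the RESPONSE implementation.

```python
# Hedged sketch of a dual-loop failover pattern in the spirit of the
# RESPONSE description: an online (networked) controller runs in parallel
# with an isolated offline backup, and the plant switches to the backup
# when the online command looks compromised.
def online_controller(state):
    # Stand-in for the networked controller; an attacker may corrupt this.
    return -0.8 * state

def offline_controller(state):
    # Conservative, locally computed backup law (hard-isolated in RESPONSE).
    return -0.5 * state

def plausible(u, bound=5.0):
    # Toy integrity check: reject commands outside a physically sane range.
    return abs(u) <= bound

state = 10.0
for step in range(20):
    u_online = online_controller(state)
    u_backup = offline_controller(state)
    u = u_online if plausible(u_online) else u_backup
    state = state + 0.1 * u  # simple first-order plant update
print(f"final state: {state:.3f}")
```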
As with classification models, object detection models are vulnerable to adversarial attacks. In particular, adversarial attacks on key components of object detection models such as the Region Proposal Network (RPN) and Non-Maximum Suppression (NMS) ...
Gwang-Nam Kim +4 more
doaj +1 more source
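Since the snippet above names Non-Maximum Suppression as an attack target, a minimal textbook NMS in NumPy may help make that component concrete; this is the standard greedy IoU-based algorithm, not any particular detector's implementation.

```python
# Minimal greedy Non-Maximum Suppression (NMS) in NumPy.
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """boxes: (N, 4) array of [x1, y1, x2, y2]; returns kept indices."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # highest score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # Intersection of the top box with the remaining boxes.
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        # Drop boxes that overlap the kept box too much.
        order = order[1:][iou <= iou_threshold]
    return keep

boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [20, 20, 30, 30]], float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # -> [0, 2]
```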
Deep neural networks (DNNs) have achieved great success in various applications due to their strong expressive power. However, recent studies have shown that DNNs are vulnerable to adversarial examples, and these manipulated instances can mislead DNN ...
Jianyi Liu +4 more
doaj +1 more source
A Distributed Biased Boundary Attack Method in Black-Box Attack
Adversarial samples threaten the effectiveness of machine learning (ML) models and algorithms in many applications. In particular, black-box attack methods closely match real-world attack scenarios.
Fengtao Xiang +3 more
doaj +1 more source
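For readers unfamiliar with decision-based attacks, the following toy sketch shows the core idea behind boundary-style black-box attacks: random-walk toward the original input while staying on the adversarial side of the decision boundary. The distributed and biased elements of the paper's method are omitted, and the classifier, step sizes, and iteration count below are all invented for illustration.

```python
# Hedged sketch of a decision-based (boundary-style) black-box attack on a
# toy 2-D classifier: only the predicted label is observed, no gradients.
import numpy as np

rng = np.random.default_rng(0)

def label(x):
    # Black-box oracle: a simple linear decision boundary x0 + x1 = 1.
    return int(x[0] + x[1] > 1.0)

x_orig = np.array([0.2, 0.2])   # classified as 0
x_adv = np.array([1.5, 1.5])    # starting point, already classified as 1

for _ in range(2000):
    # Propose a small random perturbation, then contract toward the original.
    candidate = x_adv + 0.05 * rng.normal(size=2)
    candidate = candidate + 0.05 * (x_orig - candidate)
    # Keep the step only if the candidate is still adversarial.
    if label(candidate) != label(x_orig):
        x_adv = candidate

print("adversarial point:", x_adv)
print("distance to original:", np.linalg.norm(x_adv - x_orig))
```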
SURVEY OF ADVERSARIAL ATTACKS AND DEFENSE AGAINST ADVERSARIAL ATTACKS
In recent years, the fields of Artificial Intelligence (AI) and Deep Learning (DL), along with Neural Networks (NNs), have shown great progress and scope for future research. Along with these developments come threats and security vulnerabilities to neural networks and AI models. A few fabricated inputs/samples can lead to deviations in ...
Akshat Jain +3 more
openaire +1 more source
Artificial Intelligence for Bone: Theory, Methods, and Applications
Advances in artificial intelligence (AI) offer the potential to improve bone research. The current review explores the contributions of AI to pathological study, biomarker discovery, drug design, and clinical diagnosis and prognosis of bone diseases. We envision that AI‐driven methodologies will enable identifying novel targets for drug discovery. The ...
Dongfeng Yuan +3 more
wiley +1 more source