Adaptive Perturbation for Adversarial Attack
In recent years, the security of deep learning models has attracted increasing attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function during generation to meet the perturbation budget under the $L_\infty$ norm. However, we find
Zheng Yuan +4 more
openaire +3 more sources
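The snippet above refers to the sign-function step that most $L_\infty$-bounded gradient attacks share. As a hedged illustration of that baseline step (not the paper's adaptive method), here is a minimal FGSM-style sketch in PyTorch; `model`, `loss_fn`, `x`, `y`, and `epsilon` are assumed placeholders, not names from the paper.

```python
import torch

def fgsm_attack(model, loss_fn, x, y, epsilon=8 / 255):
    """One sign-based gradient step under an L_inf budget (FGSM).

    Illustrative sketch only: `model`, `loss_fn`, `x`, `y` are assumed
    to be a classifier, a loss, an input batch, and labels.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    # The sign function moves every pixel by exactly epsilon, saturating
    # the L_inf budget regardless of the gradient's magnitude -- the
    # behavior the abstract above calls out as near-universal.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    # Clamp back to the valid image range.
    return x_adv.clamp(0.0, 1.0).detach()
```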
ABSTRACT This article explores the management adaptation strategies that non‐governmental organization (NGO) managers employ in order to operate in repressive political environments. It answers the question: how do NGO managers initiate, manage, and sustain internal change when the political/regulatory environment changes?
Charles Kaye‐Essien +2 more
wiley +1 more source
LLM-GA: A Gradient-Based Multi-Label Adversarial Attack by Large Language Models
Deep neural networks (DNNs) are highly sensitive to small, meticulously crafted perturbations, which have been utilized in adversarial attacks, threatening the reliability of DNNs in practical applications. Current adversarial attack methods rely heavily
Yujiang Liu +4 more
doaj +1 more source
Deep neural networks have achieved remarkable performance in remote sensing image (RSI) classification tasks. However, they remain vulnerable to adversarial attacks.
Xiyu Peng, Jingyi Zhou, Xiaofeng Wu
doaj +1 more source
Transferable Adversarial Attacks Against ASR
Given the extensive research and real-world applications of automatic speech recognition (ASR), ensuring the robustness of ASR models against minor input perturbations becomes a crucial consideration for maintaining their effectiveness in real-time scenarios.
Xiaoxue Gao +4 more
openaire +2 more sources
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Despite the remarkable performance and generalization levels of deep learning models in a wide range of artificial intelligence tasks, it has been demonstrated that these models can be easily fooled by the addition of imperceptible yet malicious perturbations to natural inputs.
Jon Vadillo +2 more
openaire +3 more sources
Abstract Managing wildfire risk requires consideration of complex and uncertain scientific evidence as well as trade‐offs between different values and goals. Conflicting perspectives on what values and goals are most important, what ought to be done, and what trade‐offs are acceptable complicate those decisions.
Pele J. Cannon, Sarah Clement
wiley +1 more source
Black-Box Universal Adversarial Attack for DNN-Based Models of SAR Automatic Target Recognition
Synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep neural networks (DNNs) are vulnerable to adversarial-example attacks. Universal adversarial attack algorithms can help evaluate and improve the robustness of the SAR-
Xuanshen Wan +5 more
doaj +1 more source
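For context on the universal-attack setting named above, here is a hedged sketch of the classic universal-perturbation loop: accumulate sign-of-gradient updates into a single perturbation shared across inputs and project it back onto the $L_\infty$ ball. This is a white-box illustration with placeholder names; the paper's method is black-box and specific to SAR-ATR, which this sketch does not cover.

```python
import torch

def universal_perturbation(model, loss_fn, loader, epsilon=8 / 255,
                           epochs=5, lr=0.01):
    """Train one perturbation `delta` that degrades the model on many inputs.

    White-box sketch for illustration only; `model`, `loss_fn`, and
    `loader` are assumed placeholders for a classifier, a loss, and a
    DataLoader of (input, label) batches.
    """
    delta = None
    for _ in range(epochs):
        for x, y in loader:
            if delta is None:
                # One perturbation with batch dim 1, broadcast over batches.
                delta = torch.zeros_like(x[:1], requires_grad=True)
            loss = loss_fn(model(x + delta), y)
            loss.backward()
            with torch.no_grad():
                # Ascend the loss with a sign step, then project the
                # shared perturbation back onto the L_inf ball.
                delta += lr * delta.grad.sign()
                delta.clamp_(-epsilon, epsilon)
            delta.grad.zero_()
    return delta.detach()
```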
Advancing Extracellular Vesicle Research: A Review of Systems Biology and Multiomics Perspectives
ABSTRACT Extracellular vesicles (EVs) are membrane‐bound vesicles secreted by various cell types into the extracellular space and play a role in intercellular communication. Their molecular cargo varies depending on the cell of origin and its functional state.
Gloria Kemunto +2 more
wiley +1 more source
Researching infrared adversarial attacks is crucial for ensuring the safe deployment of security-sensitive systems reliant on infrared object detectors.
Zhiyang Hu +6 more
doaj +1 more source