Highly Accurate Adaptive Federated Forests Based on Resistance to Adversarial Attacks in Wireless Traffic Prediction.
Wang L +7 more
Universal adversarial attacks on deep neural networks for medical image classification.
Hirano H, Minagi A, Takemoto K.
In collective punishment, a group as a whole receives negative consequences because of the actions of a few. We argue that collective punishments lead to ingroup cohesiveness and adverse intergroup relations by instigating a punishment-revenge cycle.
Mete Sefa Uysal +2 more
A multi-layered defense against adversarial attacks in brain tumor classification using ensemble adversarial training and feature squeezing.
Yinusa A, Faezipour M.
Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Alexander Levine, Soheil Feizi
Moral disagreements: Unearthing pathways to constructive and destructive behavioral responses
Issues like transgender rights often provoke strong emotional reactions, leading to polarized conflicts. Moral psychology suggests that emotions like anger and disgust drive destructive behaviors, such as avoiding or insulting the opponent. However, we argue that constructive behaviors, such as listening to the opponent, are also possible.
Bhakti Khati +3 more
Channel Attention for Fire and Smoke Detection: Impact of Augmentation, Color Spaces, and Adversarial Attacks.
Ejaz U, Hamza MA, Kim HC.
Salient object detection dataset with adversarial attacks for genetic programming and neural networks.
Olague M +3 more
Auto encoder-based defense mechanism against popular adversarial attacks in deep learning.
Ashraf SN, Siddiqi R, Farooq H.
Fast Adversarial Training against Textual Adversarial Attacks
Many adversarial defense methods have been proposed to enhance the adversarial robustness of natural language processing models. However, most of them introduce additional pre-set linguistic knowledge and assume that the synonym candidates used by attackers are accessible, an idealized assumption.
Yang, Yichen, Liu, Xin, He, Kun