Results 151 to 160 of about 94,262 (290)
Boosting Adversarial Transferability Through Adversarial Attack Enhancer
Adversarial attacks against deep learning models achieve high performance in white-box settings but often exhibit low transferability in black-box scenarios, especially against defended models. In this work, we propose Multi-Path Random Restart (MPRR), which initializes multiple restart points with random noise to optimize gradient updates and improve ...
Wenli Zeng, Hong Huang, Jixin Chen
openaire +1 more source
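The MPRR snippet above describes initializing several restart points with random noise and keeping the best gradient-ascent result. The paper's exact procedure is not given in the excerpt, so the following is a minimal generic multi-restart sign-gradient attack on a toy loss; the function names, the toy quadratic loss, and all parameter values are illustrative assumptions, not MPRR itself.

```python
import numpy as np

def multi_restart_attack(x, loss_grad, loss_fn, eps=0.5, alpha=0.1,
                         steps=20, restarts=5, seed=0):
    """Generic multi-restart gradient attack: start from several random
    points inside the eps-ball around x, run sign-gradient ascent, and
    keep whichever run maximizes the loss."""
    rng = np.random.default_rng(seed)
    best_x, best_loss = x.copy(), loss_fn(x)
    for _ in range(restarts):
        # random restart point inside the eps-ball
        x_adv = x + rng.uniform(-eps, eps, size=x.shape)
        for _ in range(steps):
            x_adv = x_adv + alpha * np.sign(loss_grad(x_adv))  # ascent step
            x_adv = np.clip(x_adv, x - eps, x + eps)           # project back
        cand_loss = loss_fn(x_adv)
        if cand_loss > best_loss:
            best_x, best_loss = x_adv, cand_loss
    return best_x
```

In a real black-box transfer setting, `loss_grad` would come from a surrogate model; multiple restarts reduce the chance of all runs stalling in the same flat region of the surrogate's loss surface.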
Abstract This paper offers a psychoanalytic critique of the affirmation model in gender identity care, drawing on clinical experience from the Tavistock Gender Identity Development Service (GIDS). It argues that institutional and therapeutic responses to gender distress in young people are increasingly shaped by pressures to affirm rather than to ...
Marcus Evans
wiley +1 more source
Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake Detection [PDF]
Weijie Wang +3 more
openalex +1 more source
On Trace of PGD-Like Adversarial Attacks [PDF]
Mo Zhou, Vishal M. Patel
openalex +1 more source
Rigid Body Adversarial Attacks
Due to their performance and simplicity, rigid body simulators are often used in applications where the objects of interest can be considered very stiff. However, no material has infinite stiffness, which means there are potentially cases where the non-zero compliance of the seemingly rigid object can cause a significant difference between its ...
Ramakrishnan, Aravind +2 more
openaire +2 more sources
Textile and colour defect detection using deep learning methods
Abstract Recent advances in deep learning (DL) have significantly enhanced the detection of textile and colour defects. This review focuses specifically on the application of DL‐based methods for defect detection in textile and coloration processes, with an emphasis on object detection and related computer vision (CV) tasks.
Hao Cui +2 more
wiley +1 more source
Comparison and Evaluation of the attacks and defenses against Adversarial attacks
Aleksandar Janković
openalex +1 more source
Why do universal adversarial attacks work on large language models?: Geometry might be the answer [PDF]
Varshini Subhash +3 more
openalex +1 more source
An Economic Analysis of Difficulty Adjustment Algorithms in Proof‐of‐Work Blockchain Systems
ABSTRACT We study the stability of cryptocurrency systems through difficulty adjustment. Bitcoin's difficulty adjustment algorithm (DAA) exhibits instability when the reward elasticity of the hash rate is high, implying that a sharp price reduction could disrupt the current Bitcoin system.
Shunya Noda +2 more
wiley +1 more source
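The abstract above studies instability in Bitcoin's difficulty adjustment algorithm (DAA). As a point of reference, Bitcoin's actual rule retargets every 2016 blocks toward a 600-second block time, clamping any single adjustment to a factor of four; the sketch below implements that public rule in difficulty terms (function and variable names are illustrative).

```python
TARGET_BLOCK_TIME = 600    # seconds (10-minute target)
RETARGET_INTERVAL = 2016   # blocks per difficulty window

def next_difficulty(current_difficulty, actual_timespan):
    """Bitcoin-style retarget: scale difficulty by how fast the last
    2016-block window was mined, clamped to a 4x change either way."""
    expected = TARGET_BLOCK_TIME * RETARGET_INTERVAL
    ratio = expected / actual_timespan     # window mined fast -> ratio > 1
    ratio = max(0.25, min(4.0, ratio))     # Bitcoin's 4x clamp
    return current_difficulty * ratio
```

The paper's instability argument concerns how this multiplicative feedback interacts with a price-elastic hash rate: if miners exit as the price falls, blocks slow down, difficulty only catches up after a (now longer) 2016-block window, which can compound the slowdown.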
Automatic modulation classification models based on deep learning models are at risk of being interfered by adversarial attacks. In an adversarial attack, the attacker causes the classification model to misclassify the received signal by adding carefully
Fanghao Xu +5 more
doaj +1 more source
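The truncated abstract above describes misleading a modulation classifier by adding a carefully crafted perturbation to the received signal. The paper's method is not specified in the excerpt; a standard single-step illustration of the idea is FGSM on a toy logistic classifier, sketched below with all names and values as assumptions.

```python
import numpy as np

def fgsm_signal_attack(x, w, b, y, eps=0.01):
    """FGSM-style perturbation of a received signal x against a toy
    logistic classifier p = sigmoid(w.x + b): add eps-bounded noise
    in the sign of the cross-entropy loss gradient w.r.t. x."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w              # d(cross-entropy)/dx for label y
    return x + eps * np.sign(grad_x)
```

Against a deep automatic modulation classifier, `w` would be replaced by backpropagated gradients through the network, but the principle is the same: a perturbation far below the noise floor can flip the predicted modulation class.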