Results 91 to 100 of about 85,688 (262)
Ctta: a novel chain-of-thought transfer adversarial attacks framework for large language models
Recent studies have indicated that large language models (LLMs) remain susceptible to adversarial attacks, despite enhanced robustness through the chain-of-thought (CoT) capability.
Xinxin Yue +3 more
doaj +1 more source
Attack Type Agnostic Perceptual Enhancement of Adversarial Images
Adversarial images are samples that are intentionally modified to deceive machine learning systems. They are widely used in applications such as CAPTCHAs to help distinguish legitimate human users from bots.
Aksoy, Bilgin, Temizel, Alptekin
core
A new energy paradigm assisted by AI
ABSTRACT The tremendous penetration of renewable energy sources and the integration of power electronics components increase the complexity of power system operation and control. The advancements in Artificial Intelligence and machine learning have demonstrated proficiency in processing tasks requiring ...
Balasundaram Bharaneedharan +4 more
wiley +1 more source
Deep neural networks have achieved remarkable performance in remote sensing image (RSI) classification tasks. However, they remain vulnerable to adversarial attacks.
Xiyu Peng, Jingyi Zhou, Xiaofeng Wu
doaj +1 more source
Caught in the fire: An accidental ethnography of discomfort in researching sex work
Abstract Drawing on fifteen years of engagement with researching Israel's sex industry, this article uses accidental ethnography to propose discomfort‐as‐method for feminist anthropology. I argue that discomfort is not a by‐product of fieldwork but a constitutive condition that disciplines researchers and shapes what can be known.
Yeela Lahav‐Raz
wiley +1 more source
Llm-ga: A gradient-based multi-label adversarial attack by large language models
Deep neural networks (DNNs) are highly sensitive to small, meticulously crafted perturbations, which have been utilized in adversarial attacks, threatening the reliability of DNNs in practical applications. Current adversarial attack methods rely heavily
Yujiang Liu +4 more
doaj +1 more source
DIPA: Adversarial Attack on DNNs by Dropping Information and Pixel-Level Attack on Attention
Deep neural networks (DNNs) have shown remarkable performance across a wide range of fields, including image recognition, natural language processing, and speech processing. However, recent studies indicate that DNNs are highly vulnerable to well-crafted
Jing Liu +4 more
doaj +1 more source
Adaptive Perturbation for Adversarial Attack
In recent years, the security of deep learning models has attracted more and more attention with the rapid development of neural networks, which are vulnerable to adversarial examples. Almost all existing gradient-based attack methods use the sign function when generating perturbations to meet the perturbation budget on the $L_\infty$ norm. However, we find
Zheng Yuan +4 more
openaire +3 more sources
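The sign-function construction mentioned in the snippet above can be sketched in a few lines; this is a minimal FGSM-style illustration (the input, gradient, and epsilon values are made up for demonstration, not taken from the cited paper):

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """Sign-based L_inf attack step: each pixel moves by exactly +/-eps
    (or 0 where the gradient is zero), so the perturbation never exceeds
    the L_inf budget. `grad` is the loss gradient w.r.t. the input,
    assumed precomputed by the victim model."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)  # keep pixels in the valid range

# toy example: a 2x2 "image" and an illustrative gradient
x = np.array([[0.5, 0.2], [0.9, 0.1]])
g = np.array([[1.3, -0.7], [0.0, 2.1]])
x_adv = fgsm_perturb(x, g, eps=0.03)
print(np.max(np.abs(x_adv - x)))  # L_inf size of the perturbation, at most eps
```

Because `np.sign` discards gradient magnitude, every nonzero-gradient pixel is pushed to the full budget, which is exactly the behavior the abstract calls out as a limitation of sign-based methods.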
Artificial Intelligence in Ophthalmology: Current Status, Challenges, and Future Perspectives
Current research of artificial intelligence (AI) in ophthalmology. ABSTRACT Artificial intelligence (AI) is revolutionizing ophthalmology by providing innovative solutions for disease screening, diagnosis, personalized treatment, and the delivery of global healthcare services.
She Chongyang, Tao Yong
wiley +1 more source
Black-Box Universal Adversarial Attack for DNN-Based Models of SAR Automatic Target Recognition
Synthetic aperture radar automatic target recognition (SAR-ATR) models based on deep neural networks (DNNs) are vulnerable to adversarial-example attacks. Universal adversarial attack algorithms can help evaluate and improve the robustness of the SAR-
Xuanshen Wan +5 more
doaj +1 more source

