Results 301 to 310 of about 2,268,403

Boosting adversarial robustness via self-paced adversarial training

Neural Networks, 2023
Adversarial training is considered one of the most effective methods to improve the adversarial robustness of deep neural networks. Despite its success, it still suffers from unsatisfactory performance and overfitting. Considering the intrinsic mechanism of adversarial training, recent studies adopt the idea of curriculum learning to alleviate ...
Lirong He   +5 more
openaire   +2 more sources
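
For readers unfamiliar with the base technique referenced in the entry above, the following is a minimal sketch of standard single-step (FGSM) adversarial training in PyTorch. It is a generic illustration only, not the self-paced curriculum method proposed in the paper; `model`, `train_loader`, the epsilon value, and the [0, 1] input range are assumed placeholders.

```python
# Minimal FGSM adversarial training loop (generic illustration only,
# not the self-paced variant described in the paper above).
import torch
import torch.nn.functional as F

def adversarial_train_epoch(model, train_loader, optimizer,
                            epsilon=8 / 255, device="cuda"):
    model.train()
    for x, y in train_loader:
        x, y = x.to(device), y.to(device)

        # Craft FGSM perturbations from the gradient on clean inputs.
        x.requires_grad_(True)
        loss_clean = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss_clean, x)[0]
        # Assumes inputs are normalized to [0, 1].
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1).detach()

        # Update the model on the perturbed examples.
        optimizer.zero_grad()
        loss_adv = F.cross_entropy(model(x_adv), y)
        loss_adv.backward()
        optimizer.step()
```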

Adversarial Robustness Comparison of Vision Transformer and MLP-Mixer to CNNs

British Machine Vision Conference, 2021
Convolutional Neural Networks (CNNs) have become the de facto gold standard in computer vision applications in recent years. Recently, however, new model architectures have been proposed challenging the status quo.
Philipp Benz
semanticscholar   +1 more source

Benchmarking Adversarial Robustness on Image Classification

Computer Vision and Pattern Recognition, 2020
Deep neural networks are vulnerable to adversarial examples, which has become one of the most important research problems in the development of deep learning.
Yinpeng Dong   +6 more
semanticscholar   +1 more source
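
As context for what such a benchmark measures, here is a hedged sketch of computing robust accuracy under a single-step FGSM attack. It is a simplified stand-in for the stronger attack suites (e.g., multi-step PGD) that robustness benchmarks typically use; `model`, `test_loader`, and the epsilon value are assumed placeholders.

```python
# Simplified robust-accuracy evaluation under FGSM (illustrative only;
# published benchmarks rely on stronger attacks such as PGD or AutoAttack).
import torch
import torch.nn.functional as F

@torch.enable_grad()
def robust_accuracy(model, test_loader, epsilon=8 / 255, device="cuda"):
    model.eval()
    correct, total = 0, 0
    for x, y in test_loader:
        x, y = x.to(device), y.to(device)

        # Single-step FGSM perturbation within an L-infinity ball of radius epsilon.
        x.requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        x_adv = (x + epsilon * grad.sign()).clamp(0, 1)

        # Accuracy on the perturbed inputs is the robust accuracy.
        with torch.no_grad():
            pred = model(x_adv).argmax(dim=1)
        correct += (pred == y).sum().item()
        total += y.numel()
    return correct / total
```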

Trading Inference-Time Compute for Adversarial Robustness

arXiv.org
We conduct experiments on the impact of increasing inference-time compute in reasoning models (specifically OpenAI o1-preview and o1-mini) on their robustness to adversarial attacks.
Wojciech Zaremba   +10 more
semanticscholar   +1 more source

Adversarial Robustness for Visual Grounding of Multimodal Large Language Models

arXiv.org
Multi-modal Large Language Models (MLLMs) have recently achieved enhanced performance across various vision-language tasks, including visual grounding. However, the adversarial robustness of visual grounding remains unexplored in MLLMs.
Kuofeng Gao   +4 more
semanticscholar   +1 more source

Towards Resilient and Efficient LLMs: A Comparative Study of Efficiency, Performance, and Adversarial Robustness

Artificial Intelligence and Cloud Computing Conference
With the increasing demand for practical applications of Large Language Models (LLMs), many attention-efficient models have been developed to balance performance and computational cost.
Xiaojing Fan, Chunliang Tao
semanticscholar   +1 more source
