Results 51 to 60 of about 83,465 (280)
Artwork Style Recognition Using Vision Transformers and MLP Mixer
Through the extensive study of transformers, attention mechanisms have emerged as potentially more powerful than sequential recurrent processing and convolution.
Lazaros Alexios Iliadis +4 more
doaj +1 more source
A Provable Defense for Deep Residual Networks
We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100. Our approach is based on differentiable abstract interpretation and introduces two novel concepts: ...
Mirman, Matthew +2 more
core +1 more source
Objective: Peripheral neuropathies contribute to patient disability but may be diagnosed late or missed altogether due to late referral, limitations of current diagnostic methods, and a lack of specialized testing facilities. To address this clinical gap, we developed NeuropathAI, an interpretable deep learning–based multiclass classification ...
Chaima Ben Rabah +7 more
wiley +1 more source
A vision transformer machine learning model for COVID-19 diagnosis using chest X-ray images
This study leverages machine learning to enhance the diagnostic accuracy of COVID-19 using chest X-rays. The study evaluates various architectures, including efficient neural networks (EfficientNet), multiscale vision transformers (MViT), efficient ...
Tianyi Chen +6 more
doaj +1 more source
The musical key serves as a crucial element in a piece, offering vital insights into the tonal center, harmonic structure, and chord progressions while enabling tasks such as transposition and arrangement.
Manav Garg +6 more
doaj +1 more source
Objective: We aimed to estimate the prevalence and cumulative incidence of hydroxychloroquine retinopathy (HCQ‐R) and its risk factors among patients receiving long‐term HCQ for rheumatic diseases through a systematic review and meta‐analysis of observational studies that used spectral‐domain optical coherence tomography (SD‐OCT) for screening ...
Narsis Daftarian +4 more
wiley +1 more source
Vision Transformers for Image Classification: A Comparative Survey
Transformers were initially introduced for natural language processing, leveraging the self-attention mechanism. They require minimal inductive biases in their design and can function effectively as set-based architectures.
Yaoli Wang +4 more
doaj +1 more source
Neural Architecture Search for Transformers: A Survey
Transformer-based Deep Neural Network architectures have gained tremendous interest due to their effectiveness in various applications across Natural Language Processing (NLP) and Computer Vision (CV) domains.
Krishna Teja Chitty-Venkata +3 more
doaj +1 more source
Adaptive Attention Span in Transformers
We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computation time.
Bojanowski, Piotr +3 more
core +1 more source
What Do Large Language Models Know About Materials?
If large language models (LLMs) are to be used inside the material discovery and engineering process, they must be benchmarked for the accuracy of their intrinsic material knowledge. The current work introduces 1) a reasoning process through the processing–structure–property–performance chain and 2) a tool for benchmarking the knowledge of LLMs concerning ...
Adrian Ehrenhofer +2 more
wiley +1 more source

