Results 21 to 30 of about 170,334

Supervised deep learning with vision transformer predicts delirium using limited lead EEG

open access: yes. Scientific Reports, 2023
As many as 80% of critically ill patients develop delirium, increasing the need for institutionalization and raising morbidity and mortality. Clinicians detect fewer than 40% of delirium cases when using a validated screening tool.
Malissa A. Mulkey   +4 more
doaj   +1 more source

Scaling Vision Transformers

open access: yes. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022
Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results; therefore, understanding a model's scaling properties is key to designing future generations effectively.
Zhai, Xiaohua   +3 more
openaire   +2 more sources

RT-ViT: Real-Time Monocular Depth Estimation Using Lightweight Vision Transformers

open access: yes. Sensors, 2022
The latest research in computer vision has highlighted the effectiveness of vision transformers (ViT) in performing several computer vision tasks; they can efficiently understand and process an image globally, unlike convolution, which processes the ...
Hatem Ibrahem   +2 more
doaj   +1 more source
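
The RT-ViT snippet above contrasts the global processing of self-attention with the locality of convolution. A tiny NumPy sketch of that contrast on a toy 1-D token sequence; the shapes and weights are purely illustrative assumptions, not the RT-ViT architecture:

```python
import numpy as np

# Global vs. local mixing on a toy 1-D sequence of patch tokens.
# Shapes and weights are arbitrary assumptions for illustration only.

rng = np.random.default_rng(1)
x = rng.normal(size=(6, 4))                   # 6 tokens, 4 features each

# Self-attention: softmax(QK^T / sqrt(d)) couples every token pair at once,
# so each output row is a weighted blend of all 6 tokens.
scores = (x @ x.T) / np.sqrt(x.shape[1])
attn = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
attention_out = attn @ x

# 3-tap convolution: each output only blends a token with its two neighbours,
# so information spreads one step per layer.
kernel = np.array([0.25, 0.5, 0.25])
conv_out = np.stack([kernel @ x[i - 1:i + 2] for i in range(1, 5)])

print(attention_out.shape, conv_out.shape)    # (6, 4) (4, 4)
```

In one attention layer every output token blends all six inputs, whereas the 3-tap convolution only blends immediate neighbours and must be stacked to reach a global view.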

Vision Transformer Pruning

open access: yes, 2021
Vision transformers have achieved competitive performance on a variety of computer vision applications. However, their storage, run-time memory, and computational demands hinder deployment to mobile devices. Here we present a vision transformer pruning approach, which identifies the impact of each dimension in every transformer layer and then ...
Zhu, Mingjian, Tang, Yehui, Han, Kai
openaire   +2 more sources
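
The pruning abstract above hinges on scoring the importance of individual dimensions in each transformer layer. A minimal sketch of that idea, assuming a magnitude-based importance score and a fixed keep ratio of my own choosing rather than the paper's criterion:

```python
import numpy as np

# Toy sketch of dimension-level pruning: score each embedding dimension of a
# transformer layer's activations and keep only the highest-scoring ones.
# The magnitude-based score, keep ratio, and shapes are assumptions, not the
# scoring rule from the paper.

rng = np.random.default_rng(0)
tokens = rng.normal(size=(197, 768))          # [num_tokens, embed_dim] activations
importance = np.abs(tokens).mean(axis=0)      # one score per embedding dimension
keep = np.sort(np.argsort(importance)[-576:]) # retain the 576 highest-scoring dims
pruned = tokens[:, keep]                      # thinner activations feed later layers
print(pruned.shape)                           # (197, 576)
```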

Super Vision Transformer

open access: yes. International Journal of Computer Vision, 2023
We attempt to reduce the computational costs of vision transformers (ViTs), which increase quadratically with the number of tokens. We present a novel training paradigm that trains only one ViT model at a time but is capable of providing improved image recognition performance at various computational costs. Here, the trained ViT model, termed super vision
Lin, Mingbao   +5 more
openaire   +2 more sources
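
The abstract above notes that ViT cost grows quadratically with the token count. A back-of-the-envelope sketch of that quadratic term, using common ViT defaults (224x224 input, 768-dimensional embeddings) as assumptions rather than figures from the paper:

```python
# Rough count of the multiply-adds needed to form the n x n attention-score
# matrix (QK^T) for one head: about n^2 * d. Input size, patch sizes, and
# embedding dimension are common ViT defaults used here as assumptions.

def attention_score_cost(image_size=224, patch_size=16, dim=768):
    n = (image_size // patch_size) ** 2      # number of patch tokens
    return n, n * n * dim                    # tokens, approx. multiply-adds

for patch in (32, 16, 8):
    n, cost = attention_score_cost(patch_size=patch)
    print(f"patch {patch:2d}: {n:5d} tokens, ~{cost / 1e6:,.0f}M multiply-adds")
```

Halving the patch size quadruples the token count and multiplies the attention-score cost by roughly 16, which is the quadratic growth such methods target.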

Distinguishing Malicious Drones Using Vision Transformer

open access: yes. AI, 2022
Drones are commonly used in numerous applications, such as surveillance, navigation, pesticide spraying in autonomous agricultural systems, and various military services, owing to their variable sizes and workloads.
Sonain Jamil   +2 more
doaj   +1 more source

DeepKey: Towards End-to-End Physical Key Replication From a Single Photograph [PDF]

open access: yes, 2018
This paper describes DeepKey, an end-to-end deep neural architecture capable of taking a digital RGB image of an 'everyday' scene containing a pin tumbler key (e.g.
Alex Krizhevsky   +4 more
core   +3 more sources

Vision Transformer in Industrial Visual Inspection

open access: yes. Applied Sciences, 2022
Artificial intelligence as an approach to visual inspection in industrial applications has been considered for decades. Recent successes, driven by advances in deep learning, represent a potential paradigm shift and could facilitate an ...
Nils Hütten   +2 more
doaj   +1 more source

Improving diagnosis and prognosis of lung cancer using vision transformers: a scoping review

open access: yes. BMC Medical Imaging, 2023
Background: Vision transformer-based methods are advancing the field of medical artificial intelligence and cancer imaging, including lung cancer applications.
Hazrat Ali, Farida Mohsen, Zubair Shah
doaj   +1 more source

Adjustment of model parameters to estimate distribution transformers remaining lifespan [PDF]

open access: yes, 2018
Currently, the electrical system in Argentina is working at its maximum capacity, narrowing the margin between installed power and demanded consumption and drastically reducing the service life of transformer substations due to overload (since the ...
Gotay Sardiñas, Jorge   +3 more
core   +1 more source
