Results 21 to 30 of about 82,164 (262)
Vision Transformer with Progressive Sampling [PDF]
Accepted to ICCV ...
Yue, X +6 more
openaire +3 more sources
Transformer architectures for computer vision: A comprehensive review and future research directions [PDF]
In the past, long-range dependencies and contextual relationships in videos were captured using Convolutional Neural Networks (CNNs). Recently, Transformers have begun to be used to capture long-range dependencies and contextual relationships in ...
Ugile Tukaram, Uke Nilesh
doaj +1 more source
Identifying the role of vision transformer for skin cancer—A scoping review
Introduction: Detecting and accurately diagnosing early melanocytic lesions is challenging due to extensive intra- and inter-observer variabilities. Dermoscopy images are widely used to identify and study skin cancer, but the blurred boundaries between ...
Sulaiman Khan, Hazrat Ali, Zubair Shah
doaj +1 more source
The role of intelligent systems in delivering the smart grid [PDF]
The development of "smart" or "intelligent" energy networks has been proposed by both EPRI's IntelliGrid initiative and the European SmartGrids Technology Platform as a key step in meeting our future energy needs.
Catterson, Victoria +2 more
core +1 more source
Semi-supervised Vision Transformers
We study the training of Vision Transformers for semi-supervised image classification. Transformers have recently demonstrated impressive performance on a multitude of supervised learning tasks. Surprisingly, we show Vision Transformers perform significantly worse than Convolutional Neural Networks when only a small set of labeled data is available ...
Zejia Weng +4 more
openaire +2 more sources
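The entry above reports that Vision Transformers lag behind CNNs when labeled data is scarce. As background only, a common ingredient in semi-supervised image classification is confidence-thresholded pseudo-labeling on unlabeled images; the PyTorch sketch below shows such an unlabeled loss term. It is a generic FixMatch-style baseline under assumed names (pseudo_label_loss, the 0.95 threshold), not the training recipe proposed in this paper.

# A generic confidence-thresholded pseudo-labeling loss for unlabeled images,
# shown only to illustrate the kind of semi-supervised objective used in this
# line of work; it is not the specific method of the paper above.
import torch
import torch.nn.functional as F


def pseudo_label_loss(model, weak_imgs, strong_imgs, threshold: float = 0.95):
    """Cross-entropy on confident pseudo-labels taken from weakly augmented views."""
    with torch.no_grad():
        probs = F.softmax(model(weak_imgs), dim=1)       # predictions on weak views
        conf, pseudo = probs.max(dim=1)                  # confidence and hard label
        mask = conf.ge(threshold).float()                # keep only confident samples

    logits = model(strong_imgs)                          # predictions on strong views
    loss = F.cross_entropy(logits, pseudo, reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)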
Voltage regulation considerations for the design of hybrid distribution transformers [PDF]
The future substation depends on finding a way to mitigate the drawbacks of the conventional legacy transformer by employing the efficiency of solid-state switches [1].
Alqarni, M, Darwish, M, Radi, MA
core +1 more source
Scaling Vision Transformers
Attention-based neural networks such as the Vision Transformer (ViT) have recently attained state-of-the-art results on many computer vision benchmarks. Scale is a primary ingredient in attaining excellent results; therefore, understanding a model's scaling properties is key to designing future generations effectively.
Zhai, Xiaohua +3 more
openaire +2 more sources
Re-Introducing BN Into Transformers for Vision Tasks
In recent years, Transformer-based models have exhibited significant advancements over previous models in natural language processing and vision tasks. This powerful methodology has also been extended to the 3D point cloud domain, where it can mitigate ...
Xue-Song Tang, Xian-Lin Xie
doaj +1 more source
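The entry above concerns using BatchNorm in place of the LayerNorm that Transformers normally use. As a rough illustration only, the PyTorch sketch below builds a pre-norm encoder block with BatchNorm1d applied over token features; the transposes are needed because BatchNorm1d expects (batch, channels, length). Class names and hyperparameters are assumptions, and this is not the exact block proposed in the paper.

# A minimal sketch of a Transformer encoder block that uses BatchNorm over
# token features instead of the usual LayerNorm. Illustration of the general
# idea only, not the block proposed in the paper above.
import torch
import torch.nn as nn


class BatchNormTokens(nn.Module):
    """BatchNorm1d wrapper for (batch, tokens, dim) tensors."""

    def __init__(self, dim: int):
        super().__init__()
        self.bn = nn.BatchNorm1d(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # BatchNorm1d expects (batch, channels, length), so move dim to axis 1.
        return self.bn(x.transpose(1, 2)).transpose(1, 2)


class BNTransformerBlock(nn.Module):
    def __init__(self, dim: int = 384, heads: int = 6, mlp_ratio: int = 4):
        super().__init__()
        self.norm1 = BatchNormTokens(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm2 = BatchNormTokens(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Pre-norm residual layout, with BatchNorm in place of LayerNorm.
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    tokens = torch.randn(8, 197, 384)          # (batch, tokens, dim), e.g. ViT-S/16
    print(BNTransformerBlock()(tokens).shape)  # torch.Size([8, 197, 384])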
Polyp-PVT: Polyp Segmentation with Pyramid Vision Transformers
Most polyp segmentation methods use convolutional neural networks (CNNs) as their backbone, leading to two key issues when exchanging information between the encoder and decoder: (1) taking into account the differences in contribution between different ...
Bo Dong +5 more
doaj +1 more source
Vision Transformer Pruning
Vision transformers have achieved competitive performance on a variety of computer vision applications. However, their storage, run-time memory, and computational demands are hindering their deployment to mobile devices. Here we present a vision transformer pruning approach, which identifies the impact of the dimensions in each layer of the transformer and then ...
Zhu, Mingjian, Tang, Yehui, Han, Kai
openaire +2 more sources
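The entry above describes pruning Vision Transformer dimensions according to their estimated impact. As a simplified illustration, the PyTorch sketch below scores the output dimensions of one linear layer by weight magnitude and keeps only the top fraction; the paper learns importance scores during training, so treat the function name and the L1-norm criterion as assumptions rather than the authors' procedure.

# A simplified sketch of dimension pruning for a Transformer linear layer.
# Importance is approximated here by the L1 norm of each output dimension's
# weights; this stands in for the learned importance scores of the paper.
import torch
import torch.nn as nn


def prune_linear_out_dims(layer: nn.Linear, keep_ratio: float = 0.75):
    """Return a smaller Linear that keeps the most 'important' output dimensions."""
    importance = layer.weight.abs().sum(dim=1)               # one score per output dim
    k = max(1, int(layer.out_features * keep_ratio))
    keep = torch.topk(importance, k).indices.sort().values   # preserve dimension order

    pruned = nn.Linear(layer.in_features, k, bias=layer.bias is not None)
    with torch.no_grad():
        pruned.weight.copy_(layer.weight[keep])
        if layer.bias is not None:
            pruned.bias.copy_(layer.bias[keep])
    return pruned, keep  # 'keep' is needed to prune the matching input dims downstream


if __name__ == "__main__":
    fc = nn.Linear(384, 1536)                  # e.g. the first MLP layer of a ViT block
    small_fc, kept = prune_linear_out_dims(fc, keep_ratio=0.5)
    print(small_fc.weight.shape)               # torch.Size([768, 384])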

