Results 31 to 40 of about 947,558 (364)

Point Transformer V3: Simpler, Faster, Stronger [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
This paper is not motivated to seek innovation within the attention mechanism. Instead, it focuses on overcoming the existing trade-offs between accuracy and efficiency within the context of point cloud processing, leveraging the power of scale.
Xiaoyang Wu   +8 more
semanticscholar   +1 more source

Learned Image Compression with Mixed Transformer-CNN Architectures [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
Learned image compression (LIC) methods have exhibited promising progress and superior rate-distortion performance compared with classical image compression standards.
Jinming Liu, Heming Sun, J. Katto
semanticscholar   +1 more source

Learning A Sparse Transformer Network for Effective Image Deraining [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
Transformer-based methods have achieved significant performance in image deraining as they can model the non-local information which is vital for high-quality image reconstruction.
Xiang Chen   +3 more
semanticscholar   +1 more source

BiFormer: Vision Transformer with Bi-Level Routing Attention [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2023
As the core building block of vision transformers, attention is a powerful tool to capture long-range dependency. However, such power comes at a cost: it incurs a huge computation burden and heavy memory footprint as pairwise token interaction across all ...
Lei Zhu   +4 more
semanticscholar   +1 more source

Transformer in Transformer

open access: yes, 2021
Accepted by NeurIPS ...
Han, Kai   +5 more
openaire   +2 more sources

MaskGIT: Masked Generative Image Transformer [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2022
Generative transformers have experienced rapid popularity growth in the computer vision community in synthesizing high-fidelity and high-resolution images. The best generative transformer models so far, however, still treat an image naively as a sequence ...
Huiwen Chang   +4 more
semanticscholar   +1 more source

Optimization and analysis of tapping position on leakage reactance of a two winding transformer

open access: yes · Alexandria Engineering Journal, 2023
To ensure a reliable and stable power supply, optimal design and proper installation of the transformer are essential for the power suppliers and distributors.
Kamran Dawood   +2 more
doaj   +1 more source

CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2021
We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often ...
Xiaoyi Dong   +7 more
semanticscholar   +1 more source

Research on new energy vehicle charging prediction based on Monte Carlo algorithm and its impact on distribution network

open access: yes · Frontiers in Energy Research, 2023
With the vigorous promotion of new energy policies, the large-scale charging of new energy vehicles has put forward higher requirements for the safety and stability of the distribution network.
Zheng Li   +6 more
doaj   +1 more source

Pre-Trained Image Processing Transformer [PDF]

open access: yes · Computer Vision and Pattern Recognition, 2020
As the computing power of modern hardware is increasing strongly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. The big progress is mainly contributed to the ...
Hanting Chen   +9 more
semanticscholar   +1 more source
