Results 31 to 40 of about 947,558
Point Transformer V3: Simpler, Faster, Stronger [PDF]
This paper is not motivated to seek innovation within the attention mechanism. Instead, it focuses on overcoming the existing trade-offs between accuracy and efficiency within the context of point cloud processing, leveraging the power of scale.
Xiaoyang Wu +8 more
semanticscholar +1 more source
Learned Image Compression with Mixed Transformer-CNN Architectures [PDF]
Learned image compression (LIC) methods have exhibited promising progress and superior rate-distortion performance compared with classical image compression standards.
Jinming Liu, Heming Sun, J. Katto
semanticscholar +1 more source
Learning A Sparse Transformer Network for Effective Image Deraining [PDF]
Transformer-based methods have achieved significant performance in image deraining because they can model the non-local information that is vital for high-quality image reconstruction.
Xiang Chen +3 more
semanticscholar +1 more source
BiFormer: Vision Transformer with Bi-Level Routing Attention [PDF]
As the core building block of vision transformers, attention is a powerful tool to capture long-range dependency. However, such power comes at a cost: it incurs a huge computation burden and heavy memory footprint as pairwise token interaction across all ...
Lei Zhu +4 more
semanticscholar +1 more source
MaskGIT: Masked Generative Image Transformer [PDF]
Generative transformers have experienced rapid popularity growth in the computer vision community in synthesizing high-fidelity and high-resolution images. The best generative transformer models so far, however, still treat an image naively as a sequence ...
Huiwen Chang +4 more
semanticscholar +1 more source
Optimization and analysis of tapping position on leakage reactance of a two winding transformer
To ensure a reliable and stable power supply, optimal design and proper installation of the transformer are essential for the power suppliers and distributors.
Kamran Dawood +2 more
doaj +1 more source
CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows [PDF]
We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks. A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often ...
Xiaoyi Dong +7 more
semanticscholar +1 more source
With the strong promotion of new-energy policies, the large-scale charging of new energy vehicles places higher demands on the safety and stability of the distribution network.
Zheng Li +6 more
doaj +1 more source
Pre-Trained Image Processing Transformer [PDF]
As the computing power of modern hardware increases rapidly, pre-trained deep learning models (e.g., BERT, GPT-3) learned on large-scale datasets have shown their effectiveness over conventional methods. This big progress is mainly attributed to the ...
Hanting Chen +9 more
semanticscholar +1 more source