Accelerating Tensor Contraction Products via Tensor-Train Decomposition [Tips & Tricks]
IEEE Signal Processing Magazine, 2022
Tensors (multiway arrays) and tensor decompositions (TDs) have recently received tremendous attention in the data analytics community, due to their ability to mitigate the curse of dimensionality associated with modern large-dimensional big data [1], [2].
Ilya Kisil +4 more
semanticscholar +2 more sources
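As a hedged illustration of the general idea behind the entry above (not the article's algorithm), the Python sketch below contracts a tensor stored as Tensor-Train (TT) cores with one vector per mode, core by core, so the full tensor is never materialized. The mode sizes, TT ranks, and random data are assumptions made purely for the example.

import numpy as np

def tt_contract_vectors(cores, vectors):
    # Contract a TT tensor (cores G_k of shape (r_{k-1}, n_k, r_k)) with one
    # vector per mode; returns a scalar.
    result = np.ones((1, 1))
    for G, v in zip(cores, vectors):
        # sum out the mode index: (r_prev, n, r_next) x (n,) -> (r_prev, r_next)
        M = np.einsum('anb,n->ab', G, v)
        result = result @ M  # accumulate left to right
    return result.item()

# tiny example with assumed mode sizes and TT ranks
rng = np.random.default_rng(0)
sizes, ranks = [4, 5, 6], [1, 3, 2, 1]
cores = [rng.standard_normal((ranks[k], sizes[k], ranks[k + 1])) for k in range(3)]
vecs = [rng.standard_normal(n) for n in sizes]

# reference: reconstruct the full tensor and contract it directly
full = np.einsum('anb,bmc,cpd->nmp', *cores)
ref = np.einsum('nmp,n,m,p->', full, *vecs)
print(np.isclose(tt_contract_vectors(cores, vecs), ref))  # True

The core-by-core path costs on the order of the sum of n_k * r_{k-1} * r_k over the cores, rather than the product of all mode sizes, which is where the acceleration promised in the title comes from.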
A Performance Portability Study Using Tensor Contraction Benchmarks
IEEE International Symposium on Parallel & Distributed Processing, Workshops and PhD Forum, 2023
Driven by the end of Moore's law, heterogeneous architectures, particularly GPUs, are experiencing a surge in demand and utilization. While these platforms hold the potential for achieving high performance, their programming remains challenging and ...
M. E. Ozturk +4 more
semanticscholar +1 more source
Sparta: high-performance, element-wise sparse tensor contraction on heterogeneous memory
ACM SIGPLAN Symposium on Principles & Practice of Parallel Programming, 2021
Sparse tensor contractions appear commonly in many applications. Efficiently computing the product of two sparse tensors is challenging: it not only inherits the challenges from common sparse matrix-matrix multiplication (SpGEMM), i.e., indirect memory access ...
Jiawen Liu +4 more
semanticscholar +1 more source
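Sparta concerns element-wise sparse tensor contraction; as a rough, hedged sketch of what such a kernel does (not Sparta's actual data structures, kernels, or heterogeneous-memory handling), the Python below contracts two COO-style sparse tensors by hashing one operand on its contracted-mode coordinates and accumulating matching nonzero pairs. Variable names and the tiny example data are assumptions for illustration.

from collections import defaultdict

def sparse_contract(A, B, modes_a, modes_b):
    # A, B: dicts mapping coordinate tuples -> values (COO-style storage)
    # index B's nonzeros by their contracted-mode coordinates
    b_index = defaultdict(list)
    for idx, val in B.items():
        key = tuple(idx[m] for m in modes_b)
        rest = tuple(i for m, i in enumerate(idx) if m not in modes_b)
        b_index[key].append((rest, val))

    out = defaultdict(float)
    for idx, aval in A.items():
        key = tuple(idx[m] for m in modes_a)
        a_rest = tuple(i for m, i in enumerate(idx) if m not in modes_a)
        for b_rest, bval in b_index.get(key, ()):
            out[a_rest + b_rest] += aval * bval  # accumulate matching pairs
    return dict(out)

# example: contract a 3-way and a 2-way sparse tensor over one mode each
A = {(0, 1, 2): 2.0, (1, 1, 0): 3.0}
B = {(2, 4): 5.0, (0, 7): 1.0}
print(sparse_contract(A, B, modes_a=(2,), modes_b=(0,)))
# {(0, 1, 4): 10.0, (1, 1, 7): 3.0}

The indirect memory accesses mentioned in the snippet show up here as the hash lookups and the scattered accumulation into out, which is what makes this operation hard to run fast on real hardware.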
Magnetic Anomaly Detection and Localization Using Orthogonal Basis of Magnetic Tensor Contraction
IEEE Transactions on Geoscience and Remote Sensing, 2020
In certain scenarios, such as detection of unexploded ordnance (UXO), submarines, and intruders, magnetic anomaly detection (MAD) is an effective method due to the magnetic field advantages of small operating power, strong penetrability, and strong anti ...
Huanghuang Jin +5 more
semanticscholar +1 more source
Volume 1, 2020
An efficient Galerkin averaging-incremental harmonic balance (EGA-IHB) method is developed based on the fast Fourier transform (FFT) and tensor contraction to increase efficiency and robustness of the IHB method when calculating periodic responses of ...
Ren Ju, W. Fan, Wei-dong Zhu
semanticscholar +1 more source
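The snippet above names the FFT as one ingredient used to make the incremental harmonic balance method efficient. A generic, hedged sketch of the usual FFT step in harmonic-balance codes is shown below: synthesize time samples from the harmonic coefficients, apply the nonlinearity pointwise, and transform back to harmonics. This illustrates only that generic ingredient, not the EGA-IHB method itself; the cubic nonlinearity, sample count, and coefficient convention are assumptions.

import numpy as np

def nonlinear_harmonics(coeffs, nonlinearity, n_samples=64):
    # coeffs: one-sided complex Fourier coefficients c_0..c_H of a real periodic signal
    H = len(coeffs) - 1
    spectrum = np.zeros(n_samples // 2 + 1, dtype=complex)
    spectrum[:H + 1] = coeffs
    x_t = np.fft.irfft(spectrum, n=n_samples) * n_samples   # time samples of x(t)
    f_t = nonlinearity(x_t)                                  # pointwise nonlinear force
    return np.fft.rfft(f_t)[:H + 1] / n_samples              # harmonics of the force

# example: harmonics of x(t)^3 for x(t) = cos(t), i.e. c_1 = 0.5 in this convention
c = np.zeros(4, dtype=complex)
c[1] = 0.5
print(np.round(nonlinear_harmonics(c, lambda x: x**3), 3))
# approximately [0, 0.375, 0, 0.125], since cos^3(t) = 0.75 cos(t) + 0.25 cos(3t)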
Fast Bilinear Algorithms for Symmetric Tensor Contractions
Computational Methods in Applied Mathematics, 2020
In matrix-vector multiplication, matrix symmetry does not permit a straightforward reduction in computational cost. More generally, in contractions of symmetric tensors, the symmetries are not preserved in the usual algebraic form of contraction algorithms.
Edgar Solomonik, James Demmel
openaire +2 more sources
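As a small, hedged illustration of the flavor of result this paper is about (not its actual algorithms, which target general symmetric tensor contractions), the Python below computes a symmetric matrix-vector product with roughly n^2/2 scalar multiplications by forming the symmetric products a_ij * (x_i + x_j) over the upper triangle only, then correcting with row sums, which need no multiplications.

import numpy as np

def symm_matvec_fewer_mults(A, x):
    n = len(x)
    y = np.zeros(n)
    row_sums = A.sum(axis=1)  # additions only
    for i in range(n):
        for j in range(i, n):
            z = A[i, j] * (x[i] + x[j])  # symmetric in (i, j): one multiply serves both rows
            y[i] += z
            if j != i:
                y[j] += z
    return y - row_sums * x   # remove the extra x_i folded into each product

rng = np.random.default_rng(1)
M = rng.standard_normal((5, 5))
A = (M + M.T) / 2             # make A symmetric
x = rng.standard_normal(5)
print(np.allclose(symm_matvec_fewer_mults(A, x), A @ x))  # True

The trade is more additions for fewer multiplications, which is the kind of bilinear-complexity reduction the paper generalizes to contractions of higher-order symmetric tensors.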
TCP: A Tensor Contraction Processor for AI Workloads [Industrial Product]
International Symposium on Computer Architecture
We introduce a novel tensor contraction processor (TCP) architecture that offers a paradigm shift from traditional architectures that rely on fixed-size matrix multiplications.
Hanjoon Kim +48 more
semanticscholar +1 more source
IEEE Computer Graphics and Applications, 2001
In my last column (see ibid., January/February 2001), I talked about a notational device for matrix algebra called tensor diagrams. This time I write some C++ code to symbolically evaluate these quantities. This gives me a chance to play with some as yet untried features in the C++ standard library, such as strings and standard template library (STL ...
openaire +1 more source
FLAASH: Flexible Accelerator Architecture for Sparse High-Order Tensor Contraction
arXiv.org
Tensors play a vital role in machine learning (ML) and often exhibit properties best explored while maintaining high-order. Efficiently performing ML computations requires taking advantage of sparsity, but generalized hardware support is challenging ...
Gabriel Kulp +2 more
semanticscholar +1 more source

