Results 31 to 40 of about 726,041

Dynamic Sparsity Is Channel-Level Sparsity Learner

open access: yes, 2023
Accepted by the 37th Conference on Neural Information Processing Systems (NeurIPS 2023)
Yin, Lu   +9 more
openaire   +2 more sources

Constructing Measures of Sparsity [PDF]

open access: yes, IEEE Transactions on Knowledge and Data Engineering, 2022
This paper presents a rigorous but tractable study of sparsity. We postulate a definition of sparsity that is as broad as possible, so that it generates all the various measures that are useful in practice, but narrow enough that the fundamental properties of generalized sparsity still hold.
Mora-Jiménez, Inmaculada   +4 more
openaire   +3 more sources
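
To make "measures of sparsity" concrete, here is a small illustrative sketch (my examples, not the paper's definitions) of three measures such a framework is meant to subsume: the plain l0 count, the Hoyer l1/l2 measure, and the Gini index.

    import numpy as np

    def l0_sparsity(x, tol=1e-12):
        # Fraction of entries that are numerically zero.
        return np.mean(np.abs(x) <= tol)

    def hoyer_sparsity(x):
        # Hoyer's l1/l2-based measure: 0 for a uniform vector,
        # 1 for a vector with a single nonzero entry.
        n = x.size
        return (np.sqrt(n) - np.abs(x).sum() / np.linalg.norm(x)) / (np.sqrt(n) - 1)

    def gini_index(x):
        # Gini index of the sorted absolute values; larger means sparser.
        a = np.sort(np.abs(x))
        n = a.size
        k = np.arange(1, n + 1)
        return 1 - 2 * np.sum((a / a.sum()) * (n - k + 0.5) / n)

    x = np.array([0.0, 0.0, 0.0, 5.0])
    print(l0_sparsity(x), hoyer_sparsity(x), gini_index(x))  # 0.75 1.0 0.75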

Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise Sparsity [PDF]

open access: yes, International Conference for High Performance Computing, Networking, Storage and Analysis, 2020
Network pruning can reduce the high computation cost of deep neural network (DNN) models. However, to maintain their accuracy, sparse models often carry randomly distributed weights, leading to irregular computations. Consequently, sparse models cannot ...
Cong Guo   +9 more
semanticscholar   +1 more source
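
The tile-wise idea in the entry above can be shown with a deliberately simplified sketch: score and prune at tile granularity so the surviving nonzeros stay aligned to hardware-friendly blocks. The paper's actual scheme prunes more finely inside each tile; this version drops whole tiles and is only meant to contrast with element-wise pruning.

    import numpy as np

    def prune_tilewise(W, tile=(8, 8), keep_ratio=0.5):
        # Score each tile by its total |weight|, keep the highest-scoring
        # tiles, and zero the rest, so dense GEMM kernels can skip whole
        # tiles instead of chasing scattered zeros.
        th, tw = tile
        H, Wd = W.shape
        assert H % th == 0 and Wd % tw == 0
        tiles = W.reshape(H // th, th, Wd // tw, tw)
        scores = np.abs(tiles).sum(axis=(1, 3))        # one score per tile
        k = int(np.ceil(keep_ratio * scores.size))
        cutoff = np.sort(scores, axis=None)[-k]
        mask = (scores >= cutoff)[:, None, :, None]    # broadcast over tile dims
        return (tiles * mask).reshape(H, Wd)

    W = np.random.randn(32, 32)
    W_sparse = prune_tilewise(W, tile=(8, 8), keep_ratio=0.25)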

Sparsifying the Fisher Linear Discriminant by Rotation [PDF]

open access: yes, 2014
Many high-dimensional classification techniques based on sparse linear discriminant analysis (LDA) have been proposed in the literature. To use them efficiently, the linear classifier must be sparse.
Dong, Bin, Fan, Jianqing, Hao, Ning
core   +1 more source

SparseRT: Accelerating Unstructured Sparsity on GPUs for Deep Learning Inference [PDF]

open access: yes, International Conference on Parallel Architectures and Compilation Techniques, 2020
In recent years, there has been a flurry of research in deep neural network pruning and compression. Early approaches prune weights individually. However, it is difficult to take advantage of the resulting unstructured sparsity patterns on modern ...
Ziheng Wang
semanticscholar   +1 more source
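
For contrast with the tile-wise scheme above, the "prune weights individually" baseline this abstract refers to is plain magnitude pruning; a minimal sketch (a generic baseline, not SparseRT itself):

    import numpy as np

    def prune_unstructured(W, sparsity=0.9):
        # Zero the smallest-magnitude weights individually (sparsity in [0, 1)).
        # The surviving nonzeros land at arbitrary positions, which is
        # exactly the irregularity that is hard to exploit on GPUs.
        threshold = np.sort(np.abs(W), axis=None)[int(sparsity * W.size)]
        return np.where(np.abs(W) >= threshold, W, 0.0)

    W_sparse = prune_unstructured(np.random.randn(64, 64), sparsity=0.9)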

Structured Sparsity of Convolutional Neural Networks via Nonconvex Sparse Group Regularization

open access: yes, Frontiers in Applied Mathematics and Statistics, 2021
Convolutional neural networks (CNNs) have recently been hugely successful, delivering superior accuracy and performance in various imaging applications such as classification, object detection, and segmentation.
Kevin Bui   +4 more
doaj   +1 more source
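
As an illustration of structured sparsity via group regularization, here is the convex group-lasso baseline with one group per convolution filter; the paper's contribution is nonconvex relatives of this penalty, which this sketch does not reproduce.

    import numpy as np

    def group_l1_penalty(W_conv):
        # Group lasso over output channels: one l2 norm per filter, summed.
        # Driving a whole group to zero removes the channel, i.e. the
        # sparsity is structured rather than scattered.
        # W_conv has shape (out_channels, in_channels, kH, kW).
        return np.sqrt((W_conv ** 2).sum(axis=(1, 2, 3))).sum()

    # Used as a regularizer during training, e.g.:
    # total_loss = data_loss + lam * group_l1_penalty(W_conv)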

PCONV: The Missing but Desirable Sparsity in DNN Weight Pruning for Real-time Execution on Mobile Devices [PDF]

open access: yes, AAAI Conference on Artificial Intelligence, 2019
Model compression techniques for deep neural networks (DNNs) have been widely acknowledged as an effective way to achieve acceleration on a variety of platforms, and DNN weight pruning is a straightforward and effective method.
Xiaolong Ma   +7 more
semanticscholar   +1 more source

Dynamic MR Image Reconstruction From Highly Undersampled (k, t)-Space Data Exploiting Low Tensor Train Rank and Sparse Prior

open access: yes, IEEE Access, 2020
Dynamic magnetic resonance imaging (dynamic MRI) is used to visualize living tissues and their changes over time. In this paper, we propose a new tensor-based dynamic MRI approach for reconstruction from highly undersampled (k, t)-space data, which ...
Shuli Ma, Huiqian Du, Wenbo Mei
doaj   +1 more source
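
The snippet does not give the paper's exact formulation, but reconstructions of this kind typically minimize a data-consistency term plus low-rank and sparsity penalties; a generic form, with a tensor-train rank surrogate standing in for the low-rank term, is

    \min_{\mathcal{X}} \; \tfrac{1}{2}\,\|\mathcal{A}(\mathcal{X}) - b\|_2^2
      + \lambda_1\, R_{\mathrm{TT}}(\mathcal{X})
      + \lambda_2\, \|\Psi(\mathcal{X})\|_1

where A is the undersampled (k, t)-space sampling operator, b the measured data, R_TT a tensor-train rank surrogate, Psi a sparsifying transform, and lambda_1, lambda_2 weight the two priors.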

Manifold Discovery for High-Dimensional Data Using Deep Method

open access: yes, IEEE Access, 2022
Discovering manifolds in high-dimensional data is challenging, since such data are sparsely distributed and therefore provide little information for manifold discovery, making it hard to obtain ...
Jingjin Chen, Shuping Chen, Xuan Ding
doaj   +1 more source

Sparsity information and regularization in the horseshoe and other shrinkage priors [PDF]

open access: yes, 2017
The horseshoe prior has proven to be a noteworthy alternative for sparse Bayesian estimation, but has previously suffered from two problems. First, there has been no systematic way of specifying a prior for the global shrinkage hyperparameter based on ...
Juho Piironen, Aki Vehtari
semanticscholar   +1 more source
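
A minimal sketch of drawing coefficients from the horseshoe prior, assuming its standard scale-mixture form; the paper's actual contribution, a principled way to choose the global scale tau0 from the expected number of nonzero coefficients, is not reproduced here, so tau0 is just a user parameter.

    import numpy as np

    def sample_horseshoe(n, tau0=1.0, seed=0):
        # beta_i ~ Normal(0, tau^2 * lambda_i^2), lambda_i ~ HalfCauchy(1),
        # tau ~ HalfCauchy(tau0): heavy tails let strong signals escape
        # shrinkage while the peak at zero shrinks noise hard.
        rng = np.random.default_rng(seed)
        tau = np.abs(tau0 * rng.standard_cauchy())   # global shrinkage scale
        lam = np.abs(rng.standard_cauchy(n))         # local shrinkage scales
        return rng.normal(0.0, tau * lam)

    print(sample_horseshoe(5))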
