Results 21 to 30 of about 6,914,944
Pruning Method for Convolutional Neural Network Models Based on Sparse Regularization [PDF]
The existing pruning algorithms for Convolutional Neural Network (CNN) models exhibit low accuracy in evaluating parameter importance because they rely only on the parameters' own information, which easily leads to mispruning and degrades model performance.
WEI Yue, CHEN Shichao, ZHU Fenghua, XIONG Gang
doaj +1 more source
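Sparse-regularization pruning of the kind the result above describes typically adds an L1 penalty during training so that unimportant weights shrink toward zero and can then be removed by magnitude. A minimal NumPy sketch of the L1 proximal (soft-thresholding) step followed by magnitude pruning — the layer shape, penalty strength, and sparsity target are illustrative assumptions, not values from the paper:

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator for the L1 penalty: shrinks every weight toward
    # zero and zeroes those with magnitude below lam (induces sparsity).
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def prune_by_magnitude(w, sparsity):
    # Zero out the fraction `sparsity` of weights with smallest magnitude.
    k = int(w.size * sparsity)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return w * (np.abs(w) > thresh)

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 32))          # hypothetical dense-layer weights
w = soft_threshold(w, lam=0.05)        # one L1 proximal step
w_pruned = prune_by_magnitude(w, 0.5)  # remove at least 50% of weights
```

In practice the soft-thresholding step is interleaved with gradient updates over many epochs; this sketch only shows why L1 regularization makes the subsequent magnitude-based pruning decision more reliable than pruning raw weights.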
Deep neural networks have achieved significant development and wide application thanks to their remarkable performance. However, their complex structure and high computation and storage demands limit their use on mobile or embedded devices such as sensors.
Tao Wu +4 more
doaj +1 more source
Sparse Double Descent: Where Network Pruning Aggravates Overfitting [PDF]
People usually believe that network pruning not only reduces the computational cost of deep networks, but also prevents overfitting by decreasing model capacity.
Zhengqi He +3 more
semanticscholar +1 more source
Progressive multi-level distillation learning for pruning network
Although classification methods based on deep neural networks achieve excellent results on classification tasks, they are difficult to apply in real-time scenarios because of their high memory footprints and prohibitive inference times.
Ruiqing Wang +9 more
doaj +1 more source
Rethinking Network Pruning – under the Pre-train and Fine-tune Paradigm [PDF]
Transformer-based pre-trained language models have significantly improved the performance of various natural language processing (NLP) tasks in recent years.
Dongkuan Xu +3 more
semanticscholar +1 more source
Network Pruning Using Adaptive Exemplar Filters [PDF]
Popular network pruning algorithms reduce redundant information by optimizing hand-crafted models, which may cause suboptimal performance and long filter-selection times.
Mingbao Lin +6 more
semanticscholar +1 more source
Among various network compression methods, network pruning has developed rapidly due to its superior compression performance. However, a trivial, fixed pruning threshold limits the compression that pruning can achieve.
Yunlong Ding, Di-Rong Chen
doaj +1 more source
Cloud–Edge Collaborative Inference with Network Pruning
With the increase in model parameters, deep neural networks (DNNs) have achieved remarkable performance in computer vision, but larger models create a bottleneck for deployment on resource-constrained edge devices.
Mingran Li +3 more
semanticscholar +1 more source
Network pruning techniques, including weight pruning and filter pruning, reveal that most state-of-the-art neural networks can be accelerated without a significant performance drop.
Xuanyu He +7 more
semanticscholar +1 more source
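The entry above distinguishes weight pruning (zeroing individual entries) from filter pruning (removing whole convolution filters, which actually shrinks the layer). A common structured heuristic, sketched here under illustrative assumptions — the tensor shape, keep ratio, and L2-norm ranking criterion are generic choices, not taken from this paper:

```python
import numpy as np

def prune_filters_by_l2(conv_w, keep_ratio):
    # conv_w: (out_channels, in_channels, kh, kw) convolution weights.
    # Filter pruning drops entire output filters, here the ones with the
    # smallest L2 norm, so the layer genuinely shrinks (structured pruning)
    # rather than merely becoming sparse.
    norms = np.sqrt((conv_w ** 2).sum(axis=(1, 2, 3)))
    n_keep = max(1, int(conv_w.shape[0] * keep_ratio))
    keep = np.sort(np.argsort(norms)[::-1][:n_keep])  # strongest filters, in order
    return conv_w[keep], keep

rng = np.random.default_rng(1)
w = rng.normal(size=(16, 8, 3, 3))                 # hypothetical conv layer
w_small, kept = prune_filters_by_l2(w, keep_ratio=0.5)
print(w_small.shape)                               # (8, 8, 3, 3)
```

The kept indices must also be applied to the next layer's input channels so the network stays consistent, which is why filter pruning yields real speedups on standard hardware while unstructured weight pruning usually needs sparse kernels to pay off.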
Heuristic Method for Minimizing Model Size of CNN by Combining Multiple Pruning Techniques
Network pruning techniques have been widely used to compress computation- and memory-intensive deep learning models by removing redundant components.
Danhe Tian +2 more
doaj +1 more source

