Results 21 to 30 of about 79,498
Neural Network Pruning by Cooperative Coevolution
Neural network pruning is a popular model compression method that can significantly reduce computing cost with negligible loss of accuracy. Filters are often pruned directly by designing hand-crafted criteria or by using auxiliary modules to measure their importance, which, however, requires expertise and trial and error. (A minimal sketch of one such hand-designed criterion appears below.)
Haopu Shang et al.
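As a rough illustration of the criterion-based filter pruning that this abstract contrasts with its coevolutionary approach, the sketch below scores convolutional filters by the L1 norm of their weights and keeps the highest-scoring ones. The layer sizes and keep ratio are invented for illustration; this is not the paper's cooperative-coevolution method.

```python
import torch
import torch.nn as nn

def l1_filter_scores(conv: nn.Conv2d) -> torch.Tensor:
    """Score each output filter of a conv layer by the L1 norm of its weights."""
    # conv.weight has shape (out_channels, in_channels, kH, kW)
    return conv.weight.detach().abs().sum(dim=(1, 2, 3))

# Hypothetical layer and keep ratio, purely for illustration.
conv = nn.Conv2d(16, 32, kernel_size=3)
scores = l1_filter_scores(conv)
keep = int(0.75 * conv.out_channels)         # retain the top 75% of filters
kept_idx = torch.topk(scores, keep).indices  # indices of filters to keep
```

Choosing the score (L1 norm here) and the keep ratio by hand is exactly the kind of expertise-and-trial-and-error decision the abstract refers to.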
Adversarial Structured Neural Network Pruning
In recent years, convolutional neural networks (CNNs) have been successfully employed for a wide range of tasks due to their high capacity. However, this capacity is a double-edged sword: it comes from millions of parameters, which also introduce a large amount of redundancy and dramatically increase the computational complexity.
Xingyu Cai et al.
Efficient Digital Predistortion Using Sparse Neural Network
This paper proposes an efficient neural-network-based digital predistortion (DPD) method, termed envelope time-delay neural network (ETDNN) DPD. The method complies with the physical characteristics of radio-frequency (RF) power amplifiers (PAs) and uses a ...
Masaaki Tanio et al.
A Verification Method on Post-Pruning Generalization Ability of Neural Network Model
To address the over-fitting problem caused by reducing the dropout rate during pruning of a neural network model, a method for verifying the generalization ability of the pruned model is proposed. By artificially occluding the ...
LIU Chongyang, LIU Qinrang
Symmetric Pruning in Quantum Neural Networks
Many fundamental properties of a quantum system are captured by its Hamiltonian and ground state. Despite the significance of ground-state preparation (GSP), this task is classically intractable for large-scale Hamiltonians. Quantum neural networks (QNNs), which harness the power of modern quantum machines, have emerged as a leading protocol to conquer ...
Xinbiao Wang et al.
In this work, the network complexity is to be reduced with a concomitant reduction in the number of necessary training examples. The focus was therefore on how proper evaluation metrics depend on the number of adjustable parameters of the considered ... (a small parameter-counting sketch follows below).
Th.I. Götz et al.
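To make the notion of "adjustable parameters" concrete, here is a minimal sketch that counts the trainable parameters of a model. The two toy architectures are assumptions made up for illustration, not networks from the paper.

```python
import torch.nn as nn

def count_adjustable_params(model: nn.Module) -> int:
    """Number of trainable (adjustable) parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Hypothetical architectures, purely to show how the count grows with width.
small = nn.Sequential(nn.Linear(10, 16), nn.ReLU(), nn.Linear(16, 2))
large = nn.Sequential(nn.Linear(10, 256), nn.ReLU(), nn.Linear(256, 2))
print(count_adjustable_params(small), count_adjustable_params(large))  # 210 3330
```

Comparing such counts against the size of the available training set is the kind of dependence the abstract refers to.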
Cyclical Pruning for Sparse Neural Networks
Current methods for pruning neural network weights iteratively apply magnitude-based pruning and re-train the resulting model to recover the lost accuracy (a sketch of this baseline follows below). In this work, we show that such strategies do not allow for the recovery of erroneously pruned weights.
Suraj Srinivas et al.
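Below is a minimal sketch of the iterative magnitude-pruning baseline the abstract describes: zero out the smallest-magnitude weights, then fine-tune with the mask fixed. The toy model, the 50% sparsity level, and the omission of the training loop are assumptions for illustration; the paper's cyclical schedule is not shown here.

```python
import torch
import torch.nn as nn

def magnitude_mask(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Boolean mask keeping the largest-magnitude entries of `weight`."""
    k = int(sparsity * weight.numel())              # number of weights to drop
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > threshold

# Toy model and a single prune step of the iterative baseline (hypothetical sizes).
model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
masks = {}
for name, module in model.named_modules():
    if isinstance(module, nn.Linear):
        masks[name] = magnitude_mask(module.weight.data, sparsity=0.5)
        module.weight.data *= masks[name]           # zero out the pruned weights
# Fine-tuning would follow, with each mask re-applied after every optimizer step
# so that pruned weights stay at zero between rounds.
```

Because the mask is held fixed during re-training, a weight zeroed in an earlier round normally stays zero, which is precisely the inability to recover erroneously pruned weights that the abstract points out.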
Dirichlet Pruning for Neural Network Compression
We introduce Dirichlet pruning, a novel post-processing technique that transforms a large neural network model into a compressed one. Dirichlet pruning is a form of structured pruning that places a Dirichlet distribution over the channels of each convolutional layer (or the neurons of each fully connected layer) and estimates the parameters of the ... (A toy channel-ranking sketch follows below.)
K. Adamczewski, M. Park
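As a toy illustration of ranking channels under a Dirichlet distribution, the sketch below uses the Dirichlet mean (alpha_i / sum(alpha)) as a per-channel importance score and keeps the top-scoring channels. The concentration values and the keep count are made up; the paper estimates these parameters from data rather than fixing them by hand.

```python
import torch

# Hypothetical per-channel concentration parameters for one layer; the paper
# estimates these from data, here they are simply made-up numbers.
alpha = torch.tensor([4.0, 0.3, 2.5, 0.1, 1.2, 0.2, 3.1, 0.4])

# The mean of Dirichlet(alpha) is alpha / alpha.sum(), used here as importance.
importance = alpha / alpha.sum()

# Structured pruning: keep the channels with the highest expected importance.
keep = 4
kept_channels = torch.topk(importance, keep).indices
print(sorted(kept_channels.tolist()))  # [0, 2, 4, 6]
```

Pruning whole channels (rather than individual weights) is what makes this a structured method: the kept indices directly define a smaller convolutional layer.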
Dynamically Optimizing Network Structure Based on Synaptic Pruning in the Brain
For most neural networks, the architecture must be predefined empirically, which may cause over-fitting or under-fitting. Moreover, the large number of parameters in a fully connected network leads to prohibitively expensive computational cost and ...
Feifei Zhao et al.
Dissecting Pruned Neural Networks
Pruning is a standard technique for removing unnecessary structure from a neural network to reduce its storage footprint, computational demands, or energy consumption. Pruning can reduce the parameter counts of many state-of-the-art neural networks by an order of magnitude without compromising accuracy, meaning these networks contain a vast amount of ... (A quick sparsity check illustrating the order-of-magnitude reduction is sketched below.)
Jonathan Frankle, David Bau
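The sketch below shows what an order-of-magnitude parameter-count reduction looks like on a toy layer, by counting nonzero weights before and after masking. The layer size and the 10% keep rate are arbitrary assumptions, not figures from the paper.

```python
import torch
import torch.nn as nn

def sparsity_report(model: nn.Module):
    """Return total params, nonzero params, and the compression factor."""
    total = sum(p.numel() for p in model.parameters())
    nonzero = sum(int(p.count_nonzero()) for p in model.parameters())
    return total, nonzero, total / max(nonzero, 1)

# Toy check of the order-of-magnitude claim on a randomly masked layer.
layer = nn.Linear(100, 100)
with torch.no_grad():
    layer.weight *= (torch.rand_like(layer.weight) < 0.1)  # keep ~10% of weights
total, nonzero, factor = sparsity_report(layer)
print(total, nonzero, round(factor, 1))  # roughly a 9-10x reduction in nonzeros
```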

