Results 31 to 40 of about 79,498
Neural network models for hyperspectral image classification are complex and therefore difficult to deploy directly onto mobile platforms. Neural network model compression methods can effectively optimize the storage space and inference time of the ...
Yu Lei +5 more
doaj +1 more source
Unsupervised Adaptive Weight Pruning for Energy-Efficient Neuromorphic Systems
To tackle real-world challenges, deep and complex neural networks are generally used with a massive number of parameters, which require large memory size, extensive computational operations, and high energy consumption in neuromorphic hardware systems ...
Wenzhe Guo +7 more
doaj +1 more source
Neuroinspired unsupervised learning and pruning with subquantum CBRAM arrays. [PDF]
Resistive RAM crossbar arrays offer an attractive solution to minimize off-chip data transfer and parallelize on-chip computations for neural networks.
John R. Jameson +6 more
core +3 more sources
Automatic Pruning for Quantized Neural Networks
Neural network quantization and pruning are two techniques commonly used to reduce the computational complexity and memory footprint of these models for deployment. However, most existing pruning strategies operate on full-precision networks and cannot be directly applied to the discrete parameter distributions that result from quantization.
Luis Guerra +3 more
openaire +2 more sources
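The abstract above notes that magnitude-style pruning breaks down once weights are discrete. A minimal sketch of the idea, assuming uniform symmetric quantization and filter ranking by the L1 norm of the quantized values (illustrative only, not the paper's actual method):

```python
import numpy as np

def quantize(w, n_bits=4):
    # Uniform symmetric quantization to signed n_bits levels (illustrative).
    levels = 2 ** (n_bits - 1) - 1
    scale = np.abs(w).max() / levels
    return np.round(w / scale).astype(int), scale

def prune_quantized_filters(w, n_bits=4, keep=2):
    # Rank filters (rows) by the L1 norm of their *quantized* values,
    # so ranking happens in the same discrete domain used at inference.
    q, scale = quantize(w, n_bits)
    scores = np.abs(q).sum(axis=1)
    keep_idx = np.argsort(scores)[-keep:]
    mask = np.zeros(w.shape[0], dtype=bool)
    mask[keep_idx] = True
    return q * mask[:, None], scale
```

Ranking in the quantized domain avoids the mismatch the abstract describes: scores computed on full-precision weights can disagree with what the deployed integer network actually contains.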
Neural Network-Based Fixed-Complexity Precoder Selection for Multiple Antenna Systems
In this paper, we propose a neural network-based precoder selection method for multiple antenna systems that are equipped with maximum likelihood detectors.
Jaekwon Kim, Hyo-Sang Lim
doaj +1 more source
Importance Estimation for Neural Network Pruning [PDF]
Structural pruning of neural network parameters reduces computation, energy, and memory transfer costs during inference. We propose a novel method that estimates the contribution of a neuron (filter) to the final loss and iteratively removes those with smaller scores.
Pavlo Molchanov +4 more
openaire +2 more sources
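The abstract above describes scoring each filter's contribution to the final loss and iteratively removing the lowest-scoring ones. A minimal sketch, assuming a squared first-order Taylor score (gradient times weight, squared, summed per filter row); the shapes and helper names are hypothetical:

```python
import numpy as np

def taylor_importance(weights, grads):
    # Squared first-order Taylor term (g * w), summed over each filter row:
    # an estimate of the loss change if that filter were removed.
    return ((weights * grads) ** 2).sum(axis=1)

def prune_lowest(weights, grads, n):
    # Zero out the n filters with the smallest importance scores.
    scores = taylor_importance(weights, grads)
    idx = np.argsort(scores)[:n]
    pruned = weights.copy()
    pruned[idx] = 0.0
    return pruned
```

In practice such scores are accumulated over many minibatches and pruning is interleaved with fine-tuning, rather than applied in one shot.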
Neural Network Pruning by Gradient Descent
21 pages, 5 ...
Zhang Zhang, Ruyi Tao, Jiang Zhang
openaire +2 more sources
Dissecting the Biological Motherboard (Systems Biology and Beyond) [PDF]
Genome-scale molecular networks, including gene pathways, gene regulatory networks and protein interactions, are central to the investigation of the nascent disciplines of systems biology and bio-complexity.
Abhay Krishna, Ajit Narayanan
core +2 more sources
Renormalized Sparse Neural Network Pruning
Large neural networks are heavily over-parameterized because over-parameterization improves training toward optimality. However, once the network is trained, many parameters can be zeroed, or pruned, leaving an equivalent sparse neural network. We propose renormalizing sparse neural networks in order to improve accuracy.
openaire +2 more sources
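The abstract above proposes renormalizing a network after pruning. One plausible reading, sketched below, is to magnitude-prune a fraction of the weights and then rescale the survivors so the layer's L1 norm matches its pre-pruning value; the paper's exact renormalization scheme may differ:

```python
import numpy as np

def prune_and_renormalize(w, sparsity):
    # Magnitude-prune roughly `sparsity` fraction of weights, then rescale
    # the survivors so the layer's L1 norm is preserved (one plausible
    # renormalization; illustrative only).
    flat = np.abs(w).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(flat, k - 1)[k - 1]  # k-th smallest magnitude
    pruned = w * (np.abs(w) > thresh)
    norm_before = np.abs(w).sum()
    norm_after = np.abs(pruned).sum()
    if norm_after > 0:
        pruned *= norm_before / norm_after
    return pruned
```

The rescaling step compensates for the total magnitude removed by pruning, which is one way to recover some accuracy without any retraining.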
Neural Networks at a Fraction with Pruned Quaternions
Contemporary state-of-the-art neural networks have increasingly large numbers of parameters, which prevents their deployment on devices with limited computational power. Pruning is one technique to remove unnecessary weights and reduce resource requirements for training and inference. In addition, for ML tasks where the input data is multi-dimensional, ...
Sahel Mohammad Iqbal, Subhankar Mishra
openaire +2 more sources