Results 31 to 40 of about 1,314,055

Pre-Computing Batch Normalisation Parameters for Edge Devices on a Binarized Neural Network

open access: yes (Sensors, 2023)
The Binarized Neural Network (BNN) is a quantized Convolutional Neural Network (CNN) that reduces the precision of network parameters to achieve a much smaller model size. In BNNs, the Batch Normalisation (BN) layer is essential.
Nicholas Phipps   +3 more
doaj   +1 more source
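The snippet above notes that the BN layer can be pre-computed for a BNN. A minimal sketch of the standard folding trick (not necessarily the paper's exact method, and with hypothetical function names): since BN is followed by a sign activation, `sign(BN(x))` reduces to a single comparison against a pre-computed threshold.

```python
import math

def fold_bn_to_threshold(gamma, beta, mean, var, eps=1e-5):
    """Fold batch-norm parameters into one comparison threshold.

    BN(x) = gamma * (x - mean) / sqrt(var + eps) + beta
    For gamma > 0:  BN(x) >= 0  <=>  x >= mean - beta * sqrt(var + eps) / gamma
    For gamma < 0 the comparison direction flips.
    """
    std = math.sqrt(var + eps)
    threshold = mean - beta * std / gamma
    flip = gamma < 0  # True when the comparison must be reversed
    return threshold, flip

def binarize(x, threshold, flip):
    """Return the +1/-1 activation equivalent to sign(BN(x))."""
    above = x >= threshold
    return 1 if (above != flip) else -1
```

At inference time this replaces a multiply, divide, and add per activation with a single comparison, which is what makes the pre-computation attractive on edge hardware.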

B-VGG16: Binary Quantized Convolutional Neural Network for image classification

open access: yes (Revista Elektrón, 2022)
In this work, a binary quantized convolutional neural network for image classification is trained and evaluated. Binarized neural networks reduce memory requirements and can be implemented with less hardware than networks that use real-valued ...
Nicolás Urbano Pintos   +2 more
doaj   +1 more source

ReBNet: Residual Binarized Neural Network [PDF]

open access: yes (2018 IEEE 26th Annual International Symposium on Field-Programmable Custom Computing Machines (FCCM), 2018)
This paper proposes ReBNet, an end-to-end framework for training reconfigurable binary neural networks in software and developing efficient accelerators for execution on FPGAs. Binary neural networks offer an intriguing opportunity for deploying large-scale deep learning models on resource-constrained devices.
Ghasemzadeh, Mohammad   +2 more
openaire   +2 more sources
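ReBNet's "residual" binarization represents an activation as a sum of scaled signs, one per level: each level binarizes what remains after subtracting the previous levels. A minimal sketch under that description (scalar input, `gammas` standing in for the learned per-level scales, which the actual method trains):

```python
def residual_binarize(x, gammas):
    """Multi-level (residual) binarization sketch.

    Each level takes the sign of the current residual and adds it,
    scaled by that level's gamma, to the running approximation.
    """
    approx = 0.0
    residual = x
    levels = []
    for g in gammas:
        s = 1.0 if residual >= 0 else -1.0  # binarize the residual
        levels.append(s)
        approx += g * s
        residual = x - approx  # what the next level must account for
    return approx, levels
```

With more levels the approximation error shrinks, which is the reconfigurability knob the framework exposes: accuracy can be traded against hardware cost without retraining from scratch.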

Larq: An Open-Source Library for Training Binarized Neural Networks

open access: yes (Journal of Open Source Software, 2020)
Modern deep learning methods have been successfully applied to many different tasks and have the potential to revolutionize everyday lives. However, existing neural networks that use 32 bits to encode each weight and activation often have an energy ...
Lukas Geiger, Plumerai Team
semanticscholar   +1 more source

Implementation of Binarized Neural Networks in All-Programmable System-on-Chip Platforms

open access: yes (Electronics, 2022)
The Binarized Neural Network (BNN) is a Convolutional Neural Network (CNN) with binary weights and activations rather than real-valued ones. The resulting smaller models allow effective inference on mobile or embedded devices with limited ...
Maoyang Xiang, T. Teo
semanticscholar   +1 more source
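The reason binary weights and activations map so well onto programmable logic is the standard XNOR-popcount trick: a dot product of two {-1, +1} vectors packed into bit words needs no multipliers at all. A minimal sketch (function name is illustrative):

```python
def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors packed as n-bit integers
    (bit = 1 encodes +1, bit = 0 encodes -1), via XNOR + popcount.

    matches = popcount(XNOR(a, b)) counts positions where signs agree;
    each agreement contributes +1 and each disagreement -1, so
    dot = matches - (n - matches) = 2 * matches - n.
    """
    mask = (1 << n) - 1
    xnor = ~(a_bits ^ b_bits) & mask   # 1 where the two signs agree
    matches = bin(xnor).count("1")     # population count
    return 2 * matches - n
```

On an FPGA or SoC fabric the XNOR and popcount become a handful of LUTs, which is what makes the "smaller models on limited devices" claim concrete.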

Stochastic Computing for Hardware Implementation of Binarized Neural Networks

open access: yes (IEEE Access, 2019)
Binarized neural networks, a recently proposed class of neural networks with minimal memory requirements and no reliance on multiplication, offer a promising opportunity for realizing compact and energy-efficient inference hardware.
Tifenn Hirtzlin   +5 more
doaj   +1 more source
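The core idea of stochastic computing is to encode a value in [0, 1] as the probability that a bit in a random stream is 1; a single AND gate then multiplies two such values. A minimal software sketch of that encoding (illustrative names, fixed seed for reproducibility):

```python
import random

def stochastic_multiply(p, q, length=10000, seed=0):
    """Approximate p * q by ANDing two independent Bernoulli bitstreams.

    Each stream has bits that are 1 with probability p (resp. q);
    the fraction of 1s in the ANDed stream estimates the product.
    """
    rng = random.Random(seed)
    ones = 0
    for _ in range(length):
        a = rng.random() < p   # one bit of the first stream
        b = rng.random() < q   # one bit of the second stream
        ones += a and b        # AND gate
    return ones / length
```

The estimate's accuracy grows with stream length, which is the usual stochastic-computing trade-off between latency and precision.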

Convergence of a Relaxed Variable Splitting Coarse Gradient Descent Method for Learning Sparse Weight Binarized Activation Neural Network

open access: yes (Frontiers in Applied Mathematics and Statistics, 2020)
Sparsification of neural networks is an effective complexity-reduction method for improving efficiency and generalizability. Binarized activations offer additional computational savings at inference.
Thu Dinh, Jack Xin
doaj   +1 more source

An SMT-Based Approach for Verifying Binarized Neural Networks [PDF]

open access: yes (International Conference on Tools and Algorithms for Construction and Analysis of Systems, 2020)
Deep learning has emerged as an effective approach for creating modern software systems, with neural networks often surpassing hand-crafted systems. Unfortunately, neural networks are known to suffer from various safety and security issues.
Guy Amir   +3 more
semanticscholar   +1 more source

Banners: Binarized Neural Networks with Replicated Secret Sharing

open access: yes (IACR Cryptology ePrint Archive, 2021)
Binarized Neural Networks (BNNs) provide efficient implementations of Convolutional Neural Networks (CNNs), making them particularly suitable for fast, memory-light inference on resource-constrained devices ...
Alberto Ibarrondo   +2 more
semanticscholar   +1 more source

On the role of synaptic stochasticity in training low-precision neural networks [PDF]

open access: yes (2018)
Stochasticity and limited precision of synaptic weights in neural network models are key aspects of both biological and hardware modeling of learning processes.
Baldassi, Carlo   +6 more
core   +4 more sources
