Results 51 to 60 of about 22,800 (222)

Embedded Binarized Neural Networks

open access: yes, 2017
We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB).
McDanel, Bradley   +2 more
openaire   +2 more sources
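The memory claim in this abstract can be made concrete with a small sketch (not from the eBNN paper itself; a generic illustration, assuming NumPy): binarizing float32 weights to 1 bit each and packing them 8 per byte shrinks storage by 32x, which is what makes tens-of-kilobytes devices feasible.

```python
import numpy as np

def pack_binary_weights(weights):
    """Binarize by sign (w >= 0 -> bit 1) and pack 8 weights per byte."""
    bits = (weights >= 0).astype(np.uint8).ravel()
    return np.packbits(bits)

rng = np.random.default_rng(0)
w = rng.standard_normal((128, 128)).astype(np.float32)  # 128*128*4 = 65536 bytes
packed = pack_binary_weights(w)                         # 128*128/8 =  2048 bytes
print(w.nbytes, packed.nbytes)  # 65536 2048  (a 32x reduction)
```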

Training Hardware for Binarized Convolutional Neural Network Based on CMOS Invertible Logic

open access: yes, IEEE Access, 2020
In this article, we implement fast and power-efficient training hardware for convolutional neural networks (CNNs) based on CMOS invertible logic. The backpropagation algorithm is generally hard to implement in hardware because it requires high-precision ...
Duckgyu Shin   +3 more
doaj   +1 more source

Binarized Neural Network With Parameterized Weight Clipping and Quantization Gap Minimization for Online Knowledge Distillation

open access: yes, IEEE Access, 2023
As applications for artificial intelligence grow rapidly, numerous network compression algorithms have been developed for devices with restricted computing resources, such as smartphones, edge, and IoT devices. Knowledge distillation (KD) leverages soft labels ...
Ju Yeon Kang, Chang Ho Ryu, Tae Hee Han
doaj   +1 more source

Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration

open access: yes, 2017
State-of-the-art convolutional neural networks are enormously costly in both compute and memory, demanding massively parallel GPUs for execution. Such networks strain the computational capabilities and energy available to embedded and mobile processing ...
Gupta, Rajesh K.   +6 more
core   +1 more source

Attacking Binarized Neural Networks

open access: yes, 2017
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision ...
Galloway, Angus   +2 more
openaire   +2 more sources
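The "faster forward pass ... with efficient bitwise operations" mentioned in this abstract refers to the standard XNOR–popcount trick for BNNs (a generic sketch, not this paper's code): with {-1, +1} vectors stored as packed bits (bit 1 for +1), a dot product reduces to counting matching bits, since dot = matches - mismatches = n - 2*mismatches.

```python
import numpy as np

def binary_dot(a_bits, b_bits, n):
    """Dot product of two {-1,+1} vectors of length n stored as packed bits.
    XOR marks mismatching signs; dot = n - 2 * (number of mismatches)."""
    xnor_mismatch = np.bitwise_xor(a_bits, b_bits)
    mismatches = int(np.unpackbits(xnor_mismatch)[:n].sum())  # popcount
    return n - 2 * mismatches

rng = np.random.default_rng(1)
a = rng.choice([-1, 1], size=64).astype(np.int8)
b = rng.choice([-1, 1], size=64).astype(np.int8)
a_bits = np.packbits(a > 0)
b_bits = np.packbits(b > 0)
assert binary_dot(a_bits, b_bits, 64) == int(a.astype(int) @ b.astype(int))
```

On real hardware the popcount runs over whole machine words, so each XNOR + popcount processes 32 or 64 weights at once, which is the source of the speedup.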

Efficient Super Resolution Using Binarized Neural Network [PDF]

open access: yes, 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2019
Deep convolutional neural networks (DCNNs) have recently demonstrated high-quality results in single-image super-resolution (SR). DCNNs often suffer from over-parametrization and large amounts of redundancy, which results in inefficient inference and high memory usage, preventing massive applications on mobile devices.
Ma, Yinglan   +3 more
openaire   +2 more sources

Digital Biologically Plausible Implementation of Binarized Neural Networks With Differential Hafnium Oxide Resistive Memory Arrays

open access: yes, Frontiers in Neuroscience, 2020
The brain performs intelligent tasks with extremely low energy consumption. This work takes its inspiration from two strategies used by the brain to achieve this energy efficiency: the absence of separation between computing and memory functions and ...
Tifenn Hirtzlin   +7 more
doaj   +1 more source

“Ghost” and Attention in Binary Neural Network

open access: yes, IEEE Access, 2022
Where memory footprint and computational scale are concerned, lightweight Binary Neural Networks (BNNs) have great advantages on resource-limited platforms, such as AIoT (Artificial Intelligence in Internet of Things) edge terminals ...
Ruimin Sun, Wanbing Zou, Yi Zhan
doaj   +1 more source

Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL

open access: yes, 2019
Recent technological advances have proliferated the available computing power, memory, and speed of modern Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Field Programmable Gate Arrays (FPGAs).
Azghadi, Mostafa Rahimi   +2 more
core   +1 more source

Binarized Simplicial Convolutional Neural Networks

open access: yes, Neural Networks
Graph Neural Networks are limited to processing features on graph nodes, neglecting data on higher-dimensional structures such as edges and triangles. Simplicial Convolutional Neural Networks (SCNNs) break this limitation by representing higher-order structures with simplicial complexes, albeit still lacking time efficiency.
Yi Yan, Ercan Engin Kuruoglu
openaire   +3 more sources
