Results 51 to 60 of about 22,800
Embedded Binarized Neural Networks
We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB).
McDanel, Bradley +2 more
openaire +2 more sources
Training Hardware for Binarized Convolutional Neural Network Based on CMOS Invertible Logic
In this article, we implement fast and power-efficient training hardware for convolutional neural networks (CNNs) based on CMOS invertible logic. The backpropagation algorithm is generally hard to implement in hardware because it requires high-precision ...
Duckgyu Shin +3 more
doaj +1 more source
As applications of artificial intelligence grow rapidly, numerous network compression algorithms have been developed for devices with limited computing resources, such as smartphones, edge, and IoT devices. Knowledge distillation (KD) leverages soft labels ...
Ju Yeon Kang, Chang Ho Ryu, Tae Hee Han
doaj +1 more source
Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration
State-of-the-art convolutional neural networks are enormously costly in both compute and memory, demanding massively parallel GPUs for execution. Such networks strain the computational capabilities and energy available to embedded and mobile processing ...
Gupta, Rajesh K. +6 more
core +1 more source
Attacking Binarized Neural Networks
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision
Galloway, Angus +2 more
openaire +2 more sources
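The abstract above mentions that very low-precision networks allow a faster forward pass via efficient bitwise operations. A minimal sketch of that standard trick (not code from the cited paper): a dot product over {-1, +1} vectors computed with XNOR and popcount instead of multiply-accumulates. The function and encoding names here are illustrative assumptions.

```python
def pack_bits(values):
    """Pack a list of +/-1 values into an integer bitmask (bit = 1 for +1)."""
    mask = 0
    for i, v in enumerate(values):
        if v == 1:
            mask |= 1 << i
    return mask

def binarized_dot(a_bits, b_bits, n):
    """Dot product of two {-1, +1} vectors of length n, given as bitmasks.

    XNOR marks positions where the signs agree; each agreement contributes
    +1 and each disagreement -1, so dot = 2 * popcount(agreements) - n.
    """
    agree = ~(a_bits ^ b_bits) & ((1 << n) - 1)  # XNOR, masked to n bits
    return 2 * bin(agree).count("1") - n

a = [1, -1, 1, 1]
b = [1, 1, -1, 1]
print(binarized_dot(pack_bits(a), pack_bits(b), len(a)))  # -> 0
```

On hardware, `agree` would be computed across whole machine words with a single XNOR and a popcount instruction, which is where the speed and energy advantages over full-precision multiplies come from.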
Efficient Super Resolution Using Binarized Neural Network [PDF]
Deep convolutional neural networks (DCNNs) have recently demonstrated high-quality results in single-image super-resolution (SR). DCNNs often suffer from over-parametrization and large amounts of redundancy, which results in inefficient inference and high memory usage, preventing massive applications on mobile devices.
Ma, Yinglan +3 more
openaire +2 more sources
The brain performs intelligent tasks with extremely low energy consumption. This work takes its inspiration from two strategies used by the brain to achieve this energy efficiency: the absence of separation between computing and memory functions and ...
Tifenn Hirtzlin +7 more
doaj +1 more source
“Ghost” and Attention in Binary Neural Network
As far as memory footprint and computational scale are concerned, lightweight Binary Neural Networks (BNNs) have great advantages on resource-limited platforms, such as AIoT (Artificial Intelligence in Internet of Things) edge terminals ...
Ruimin Sun, Wanbing Zou, Yi Zhan
doaj +1 more source
Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL
Recent technological advances have proliferated the available computing power, memory, and speed of modern Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Field Programmable Gate Arrays (FPGAs).
Azghadi, Mostafa Rahimi +2 more
core +1 more source
Binarized Simplicial Convolutional Neural Networks
Graph Neural Networks are limited to processing features on graph nodes, neglecting data on higher-order structures such as edges and triangles. Simplicial Convolutional Neural Networks (SCNNs) break this limitation by representing higher-order structures with simplicial complexes, albeit still lacking time efficiency.
Yi Yan, Ercan Engin Kuruoglu
openaire +3 more sources

