Results 61 to 70 of about 1,314,055 (250)
FCA-BNN: Flexible and Configurable Accelerator for Binarized Neural Networks on FPGA
SUMMARY A series of Binarized Neural Networks (BNNs) achieve accepted accuracy in image classification tasks and deliver excellent performance on field-programmable gate arrays (FPGAs).
Jiabao Gao +3 more
semanticscholar +1 more source
FP-BNN: Binarized neural network on FPGA [PDF]
Deep neural networks (DNNs) have attracted significant attention for their excellent accuracy especially in areas such as computer vision and artificial intelligence. To enhance their performance, technologies for their hardware acceleration are being studied.
Liang, Shuang +4 more
openaire +3 more sources
Accelerating Deterministic and Stochastic Binarized Neural Networks on FPGAs Using OpenCL
Recent technological advances have proliferated the available computing power, memory, and speed of modern Central Processing Units (CPUs), Graphics Processing Units (GPUs), and Field Programmable Gate Arrays (FPGAs).
Azghadi, Mostafa Rahimi +2 more
core +1 more source
Embedded Binarized Neural Networks
We study embedded Binarized Neural Networks (eBNNs) with the aim of allowing current binarized neural networks (BNNs) in the literature to perform feedforward inference efficiently on small embedded devices. We focus on minimizing the required memory footprint, given that these devices often have memory as small as tens of kilobytes (KB).
McDanel, Bradley +2 more
openaire +2 more sources
End to End Binarized Neural Networks for Text Classification [PDF]
Deep neural networks have demonstrated their superior performance in almost every Natural Language Processing task, however, their increasing complexity raises concerns.
Harshil Jain +3 more
semanticscholar +1 more source
Convolutional neural networks (CNNs) have been widely used in image recognition and processing tasks. Memristor-based CNNs combine the advantages of emerging memristive devices, such as nanometer critical dimensions, low power consumption, and ...
Anna N. Matsukatova +11 more
doaj +1 more source
Binarized Convolutional Neural Networks with Separable Filters for Efficient Hardware Acceleration
State-of-the-art convolutional neural networks are enormously costly in both compute and memory, demanding massively parallel GPUs for execution. Such networks strain the computational capabilities and energy available to embedded and mobile processing ...
Gupta, Rajesh K. +6 more
core +1 more source
CBin-NN: An Inference Engine for Binarized Neural Networks
Binarization is an extreme quantization technique that is attracting research in the Internet of Things (IoT) field, as it radically reduces the memory footprint of deep neural networks without a correspondingly significant accuracy drop.
Fouad Sakr +6 more
semanticscholar +1 more source
Accelerating Binarized Neural Networks via Bit-Tensor-Cores in Turing GPUs [PDF]
Despite foreseeing tremendous speedups over conventional deep neural networks, the performance advantage of binarized neural networks (BNNs) has merely been showcased on general-purpose processors such as CPUs and GPUs.
Ang Li, Simon Su
semanticscholar +1 more source
Attacking Binarized Neural Networks
Neural networks with low-precision weights and activations offer compelling efficiency advantages over their full-precision equivalents. The two most frequently discussed benefits of quantization are reduced memory consumption, and a faster forward pass when implemented with efficient bitwise operations. We propose a third benefit of very low-precision ...
Galloway, Angus +2 more
openaire +2 more sources
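Several snippets above mention the faster forward pass that binarization enables through bitwise operations. A minimal sketch of that trick (not taken from any of the listed papers, names are illustrative): with weights and activations constrained to {-1, +1}, encode -1 as bit 0 and +1 as bit 1, and an n-element dot product collapses to an XNOR followed by a population count instead of n multiply-adds.

```python
def binarize(vec):
    """Pack a list of +/-1 values into an integer bit mask (+1 -> bit 1, -1 -> bit 0)."""
    bits = 0
    for i, v in enumerate(vec):
        if v > 0:
            bits |= 1 << i
    return bits

def xnor_popcount_dot(a_bits, b_bits, n):
    """Dot product of two packed +/-1 vectors of length n via XNOR + popcount."""
    mask = (1 << n) - 1
    # XNOR marks positions where the two vectors agree (+1*+1 or -1*-1, each contributing +1).
    matches = bin(~(a_bits ^ b_bits) & mask).count("1")
    # agreements contribute +1, disagreements -1:  matches - (n - matches)
    return 2 * matches - n

a = [+1, -1, +1, +1]
b = [+1, +1, -1, +1]
ref = sum(x * y for x, y in zip(a, b))  # conventional dot product for comparison
assert xnor_popcount_dot(binarize(a), binarize(b), len(a)) == ref
```

On hardware this is why BNNs map so well to FPGAs and bit-tensor cores: one XNOR gate plus a popcount tree replaces a wide multiply-accumulate datapath.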