
Optimizing binary neural network quantization for fixed pattern noise robustness. [PDF]

open access: yes (Sci Rep)
Andreo-Oliver FJ   +4 more
europepmc   +1 more source

Transformer based HF communication demodulation. [PDF]

open access: yes (Sci Rep)
Lu C   +5 more
europepmc   +1 more source

Binary Neural Networks

2023
Baochang Zhang   +4 more
openaire   +2 more sources

TD-SRAM: Time-Domain-Based In-Memory Computing Macro for Binary Neural Networks

IEEE Transactions on Circuits and Systems I: Regular Papers, 2021
In-Memory Computing (IMC), which performs analog multiply-accumulate (MAC) operations inside the memory array, promises to alleviate the von Neumann bottleneck and improve the energy efficiency of deep neural networks (DNNs). Since the time-domain (TD) ...
Jiahao Song   +8 more
semanticscholar   +1 more source

RB-Net: Training Highly Accurate and Efficient Binary Neural Networks With Reshaped Point-Wise Convolution and Balanced Activation

IEEE Transactions on Circuits and Systems for Video Technology, 2022
In this paper, we find that the conventional convolution operation becomes the bottleneck for extremely efficient binary neural networks (BNNs). To address this issue, we open up a new direction by introducing a reshaped point-wise convolution (RPC) to ...
Chunlei Liu   +7 more
semanticscholar   +1 more source

Highly parallelized memristive binary neural network

Neural Networks, 2021
In current hardware design for deep learning, the memristor, a non-volatile memory device with in-situ computing capability, has become a research hotspot. The weights of a deep neural network are floating-point numbers; writing a floating-point value into a memristor causes a loss of accuracy, and the write process takes more time.
Jiadong Chen   +3 more
openaire   +2 more sources

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks

European Conference on Computer Vision, 2016
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks. In Binary-Weight-Networks, the filters are approximated with binary values, resulting in a 32× memory saving.
Mohammad Rastegari   +3 more
semanticscholar   +1 more source
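
The XNOR-Net abstract above describes Binary-Weight-Networks, where each real-valued filter is replaced by a scaled binary tensor; the 32× figure comes from storing 1-bit weights instead of 32-bit floats. As a minimal, hypothetical NumPy sketch (not code from the paper), the least-squares approximation W ≈ αB uses B = sign(W) and α = mean(|W|):

```python
# Illustrative sketch only (not the authors' implementation): XNOR-Net-style
# binary-weight approximation of a convolution filter, W ≈ alpha * B with
# B = sign(W) and alpha = mean(|W|). Storing B as single bits instead of
# float32 values is the source of the roughly 32x memory saving.
import numpy as np

def binarize_filter(W: np.ndarray):
    """Return (alpha, B) such that alpha * B approximates W in the L2 sense."""
    alpha = np.abs(W).mean()          # per-filter scaling factor
    B = np.where(W >= 0, 1.0, -1.0)   # binary weights in {-1, +1}
    return alpha, B

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    W = rng.standard_normal((3, 3, 64))   # a hypothetical 3x3x64 filter
    alpha, B = binarize_filter(W)
    err = np.linalg.norm(W - alpha * B) / np.linalg.norm(W)
    print(f"alpha = {alpha:.4f}, relative approximation error = {err:.3f}")
```

The convolution with W can then be approximated by convolving with the binary B and rescaling the output by α, which replaces most multiplications with additions and subtractions.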
