Results 221 to 230 of about 1,314,055
Some of the following articles may not be open access.

Low-Power Adiabatic/MTJ LIM-Based XNOR/XOR Synapse and Neuron for Binarized Neural Networks

2023 IEEE 23rd International Conference on Nanotechnology (NANO), 2023
Using a binarized neural network (BNN) as an alternative to the conventional convolutional neural network is a promising way to meet the demand for human brain-inspired computing in applications with limited hardware and power resources, such as ...
Milad Tanavardi Nasab   +1 more
semanticscholar   +1 more source

Design of Max Pooling Operation Circuit for Binarized Neural Networks Using Single-Flux-Quantum Circuit

IEEE Transactions on Applied Superconductivity, 2023
For the binarized neural network (BNN), we design a max pooling operation circuit (MPOC) using single-flux-quantum circuits based on multiple data comparators.
Zeyu Han   +3 more
semanticscholar   +1 more source
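
In software terms, the comparator-based max pooling above has a simple analogue: with activations restricted to −1/+1, the maximum over a pooling window is +1 exactly when any element of the window is +1, i.e., a logical OR. A minimal NumPy sketch under that assumption (function name and data layout are illustrative, not the paper's circuit):

import numpy as np

def binary_max_pool2d(x, k=2):
    # Max pooling for binarized activations in {-1, +1}. Because the values
    # are binary, the max over each k*k window equals a logical OR of the
    # "+1" indicator bits. Assumes both dimensions of x are divisible by k.
    h, w = x.shape
    windows = x.reshape(h // k, k, w // k, k)      # non-overlapping k*k tiles
    pooled_bits = (windows == 1).any(axis=(1, 3))  # OR over each tile
    return np.where(pooled_bits, 1, -1).astype(x.dtype)

# Example: 4x4 binarized feature map -> 2x2 pooled map
x = np.array([[-1, -1,  1, -1],
              [-1, -1, -1, -1],
              [ 1, -1, -1, -1],
              [-1, -1, -1, -1]], dtype=np.int8)
print(binary_max_pool2d(x, 2))
# [[-1  1]
#  [ 1 -1]]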

Toward Accurate Binarized Neural Networks With Sparsity for Mobile Application

IEEE Transactions on Neural Networks and Learning Systems, 2022
While binarized neural networks (BNNs) have attracted great interest, popular approaches proposed so far mainly exploit the symmetric $\mathrm{sign}$ function for feature binarization, i.e., to binarize activations into −1 and +1 with a fixed threshold of 0 ...
Peisong Wang, Xiangyu He, Jian Cheng
semanticscholar   +1 more source
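
For reference, the symmetric binarization described above maps each activation to −1 or +1 by comparing it against a fixed threshold of 0; a one-line NumPy sketch of that conventional scheme (illustrative only, not the paper's sparsity-oriented alternative):

import numpy as np

def sign_binarize(x, threshold=0.0):
    # Symmetric feature binarization: values above the threshold become +1,
    # everything else -1 (tie-breaking at the threshold is a convention).
    return np.where(x > threshold, 1.0, -1.0)

print(sign_binarize(np.array([-0.3, 0.0, 0.7])))   # [-1. -1.  1.]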

Optical processor for a binarized neural network

Optics Letters, 2022
We propose and experimentally demonstrate an optical processor for a binarized neural network (NN). Implementation of a binarized NN involves multiply-accumulate operations, in which both positive and negative weights must be implemented.
Long Huang, Jianping Yao
openaire   +2 more sources
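
Multiply-accumulate with signed weights, as mentioned above, is often realized by splitting the sum into a positive-weight branch and a negative-weight branch and subtracting the two partial results; a hedged NumPy sketch of that decomposition (a common workaround for hardware that cannot represent negative quantities directly, not necessarily the exact optical implementation):

import numpy as np

def signed_mac(x, w):
    # Multiply-accumulate with weights in {-1, +1}, computed as the
    # difference of two non-negative partial sums.
    pos = np.sum(x[w > 0])   # contribution of the +1 weights
    neg = np.sum(x[w < 0])   # contribution of the -1 weights
    return pos - neg

x = np.array([0.2, 0.8, 0.5, 0.1])
w = np.array([1, -1, 1, -1])
print(signed_mac(x, w), np.dot(x, w))   # both approximately -0.2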

JBNN: A Hardware Design for Binarized Neural Networks Using Single-Flux-Quantum Circuits

IEEE Transactions on Computers, 2022
As a high-performance application of low-temperature superconductivity, superconducting single-flux-quantum (SFQ) circuits offer high-speed and low-power-consumption characteristics and have recently received extensive attention, especially in the ...
Rongliang Fu   +5 more
semanticscholar   +1 more source

An Efficient Channel-Aware Sparse Binarized Neural Networks Inference Accelerator

IEEE Transactions on Circuits and Systems II: Express Briefs, 2022
Binarized neural network (BNN) inference accelerators show great promise in cost- and power-restricted domains. However, the performance of these accelerators is still severely limited by significant redundancy in BNN inference.
Qingliang Liu, Jinmei Lai, Jiabao Gao
semanticscholar   +1 more source

SBNN: Slimming binarized neural network

Neurocomputing, 2020
With the rapid development of deep neural network applications, approaches for accelerating computationally intensive convolutional neural networks, such as network quantization, pruning, and knowledge distillation, have attracted ever-increasing attention.
Qing Wu   +5 more
openaire   +1 more source

Binarized Attributed Network Embedding via Neural Networks

2020 International Joint Conference on Neural Networks (IJCNN), 2020
Traditional attributed network embedding methods are designed to map the structural and attribute information of networks jointly into a continuous Euclidean space, while a novel branch, binarized attributed network embedding, has recently emerged to learn binary codes in Hamming space, aiming to save time and memory costs and to naturally fit ...
Hangyu Xia   +4 more
openaire   +1 more source
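
The practical benefit of codes in Hamming space, as described above, is that node similarity reduces to an XOR plus a bit count over packed codes; a small sketch assuming 64-bit binary embeddings (names and code length are illustrative):

import numpy as np

def hamming_distance(code_a, code_b):
    # Hamming distance between binary embedding codes packed into uint8
    # arrays: XOR the packed bytes, then count the set bits.
    xor = np.bitwise_xor(code_a, code_b)
    return int(np.unpackbits(xor).sum())

# Two 64-bit binary node embeddings, packed 8 bits per byte.
rng = np.random.default_rng(0)
a = np.packbits(rng.integers(0, 2, 64, dtype=np.uint8))
b = np.packbits(rng.integers(0, 2, 64, dtype=np.uint8))
print(hamming_distance(a, b))   # number of differing bits, 0..64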

Design of Binary Convolution Operation Circuit for Binarized Neural Networks Using Single-Flux-Quantum Circuit

IEEE Transactions on Applied Superconductivity, 2022
We design a binary convolution operation circuit (BCOC) using a single-flux-quantum circuit for high-speed and energy-efficient neural networks. The proposed circuit performs binary convolution operations with a convolution kernel size of 3 $\times$ ...
Zongyuan Li, Y. Yamanashi, N. Yoshikawa
semanticscholar   +1 more source
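
In software terms, binary convolution replaces multiply-adds over −1/+1 values with XNOR and popcount, using the identity sum(x * k) = 2 * popcount(XNOR(x, k)) − n for an n-element window; a minimal NumPy sketch (kernel size and names are illustrative, not taken from the paper's hardware design):

import numpy as np

def binary_conv2d(x, k):
    # 2-D 'valid' correlation for an input and kernel in {-1, +1}.
    # Each elementwise product of +/-1 values equals XNOR on the sign bits.
    kh, kw = k.shape
    h, w = x.shape
    n = kh * kw
    k_bits = (k > 0)
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.int32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            x_bits = (x[i:i + kh, j:j + kw] > 0)
            popcount = np.sum(~(x_bits ^ k_bits))   # XNOR, then count matches
            out[i, j] = 2 * popcount - n
    return out

rng = np.random.default_rng(1)
x = np.where(rng.integers(0, 2, (5, 5)) > 0, 1, -1)
k = np.where(rng.integers(0, 2, (3, 3)) > 0, 1, -1)
print(binary_conv2d(x, k))   # matches the full-precision correlation of x with k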

Weight Compression-Friendly Binarized Neural Network

2020 IEEE Global Conference on Artificial Intelligence and Internet of Things (GCAIoT), 2020
The resources of edge devices in AIoT systems are usually constrained in size and power. The computational complexity of neural network models on these edge devices has become a major concern. The most compact form of deep neural network is the binarized neural network (BNN), which adopts binary weights and exclusive-NOR (XNOR) operations as binary ...
Yuzhong Jiao   +4 more
openaire   +1 more source
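
The storage saving behind such designs comes from the fact that each binary weight needs only one bit; a hedged sketch of packing −1/+1 weights into a byte stream (the packing layout here is an assumption for illustration, not the paper's compression scheme):

import numpy as np

def pack_binary_weights(w):
    # Pack weights in {-1, +1} into a uint8 bitstream, 1 bit per weight
    # (+1 -> bit 1), for compact storage on edge devices.
    return np.packbits((w > 0).astype(np.uint8))

def unpack_binary_weights(packed, n):
    # Recover the {-1, +1} weights from the packed bitstream.
    bits = np.unpackbits(packed)[:n]
    return np.where(bits == 1, 1, -1).astype(np.int8)

w = np.where(np.random.default_rng(5).integers(0, 2, 1000) > 0, 1, -1).astype(np.int8)
packed = pack_binary_weights(w)
print(packed.nbytes, w.astype(np.float32).nbytes)   # 125 vs 4000 bytes (~32x smaller)
assert np.array_equal(unpack_binary_weights(packed, w.size), w)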
