Results 301 to 310 of about 5,721,777
Design Automation Conference, 2020
Conventional hardware-friendly quantization methods, such as fixed-point or integer, tend to perform poorly at very low precision as their shrunken dynamic ranges cannot adequately capture the wide data distributions commonly seen in sequence ...
Thierry Tambe +7 more
semanticscholar +1 more source
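As a concrete illustration of the dynamic-range problem described in the entry above (a minimal sketch, not the paper's method): with symmetric int8 quantization, a single scale factor must span the whole tensor, so a few large outliers force coarse steps that crush the many small values toward zero.

```python
import numpy as np

# Minimal sketch (illustrative only): symmetric int8 quantization of a
# heavy-tailed tensor.  One scale factor has to cover the outliers, so the
# quantization step becomes coarse and most small values collapse to zero.
rng = np.random.default_rng(0)
x = rng.laplace(scale=0.05, size=10_000).astype(np.float32)
x[:10] *= 50.0                       # a handful of large-magnitude outliers

scale = np.abs(x).max() / 127.0      # single scale covering the full range
q = np.clip(np.round(x / scale), -128, 127).astype(np.int8)
x_hat = q.astype(np.float32) * scale

err = np.abs(x - x_hat)
print(f"scale = {scale:.5f}")
print(f"mean |error| = {err.mean():.5f}, max |error| = {err.max():.5f}")
print(f"fraction of values quantized to 0: {(q == 0).mean():.2%}")
```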
Intel Nervana Neural Network Processor-T (NNP-T) Fused Floating Point Many-Term Dot Product
IEEE Symposium on Computer Arithmetic, 2020
Intel’s Nervana Neural Network Processor for Training (NNP-T) contains at its core an advanced floating point dot product design to accelerate the matrix multiplication operations found in many AI applications.
Brian J. Hickmann +5 more
semanticscholar +1 more source
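The core idea of a fused many-term dot product can be sketched as follows (an illustration of the general technique, not Intel's design): accumulate the products in a wider format and round to the target precision only once, instead of rounding after every multiply-add.

```python
import numpy as np

# Minimal sketch (illustrative only): emulate a "fused" dot product by
# accumulating in float64 and rounding to float32 once at the end, versus
# rounding after every float32 step.
rng = np.random.default_rng(1)
a = rng.standard_normal(4096).astype(np.float32)
b = rng.standard_normal(4096).astype(np.float32)

def dot_stepwise(a, b):
    acc = np.float32(0.0)
    for x, y in zip(a, b):
        acc = np.float32(acc + np.float32(x * y))   # round at every term
    return acc

def dot_fused(a, b):
    acc = np.dot(a.astype(np.float64), b.astype(np.float64))  # wide accumulation
    return np.float32(acc)                                     # single final rounding

ref = np.dot(a.astype(np.float64), b.astype(np.float64))
print("stepwise error:", abs(float(dot_stepwise(a, b)) - ref))
print("fused error:   ", abs(float(dot_fused(a, b)) - ref))
```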
IEEE Journal of Solid-State Circuits
With the rapid advancement of artificial intelligence (AI), computing-in-memory (CIM) structures have been proposed to improve energy efficiency (EF). However, previous CIMs often rely on INT8 data types, which pose challenges when addressing more complex ...
An Guo +11 more
semanticscholar +1 more source
Scalable yet Rigorous Floating-Point Error Analysis
International Conference for High Performance Computing, Networking, Storage and Analysis, 2020
Automated techniques for rigorous floating-point round-off error analysis are a prerequisite to placing important activities in HPC such as precision allocation, verification, and code optimization on a formal footing.
Arnab Das +4 more
semanticscholar +1 more source
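The kind of bound such analyses make rigorous can be illustrated with the standard model fl(x op y) = (x op y)(1 + d), |d| <= u: for left-to-right summation the first-order round-off bound is roughly (n-1) * u * sum(|x_i|). A minimal sketch (not the paper's tooling) comparing that bound with the observed error:

```python
import numpy as np

# Minimal sketch (standard-model bound, not the paper's analysis): compare the
# classical first-order bound for float32 summation against the error actually
# observed for a left-to-right float32 sum.
rng = np.random.default_rng(2)
x = rng.uniform(-1.0, 1.0, size=50_000).astype(np.float32)

u = 2.0 ** -24                                   # unit roundoff for float32
bound = (len(x) - 1) * u * np.abs(x).astype(np.float64).sum()

computed = np.float32(0.0)
for v in x:                                      # left-to-right float32 summation
    computed = np.float32(computed + v)
exact = x.astype(np.float64).sum()

print(f"observed error: {abs(float(computed) - exact):.3e}")
print(f"a priori bound: {bound:.3e}")
```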
Reverse-Engineering Deep Neural Networks Using Floating-Point Timing Side-Channels
Design Automation Conference, 2020
Trained Deep Neural Network (DNN) models have become valuable intellectual property. A new attack surface has emerged for DNNs: model reverse engineering. Several recent attempts have utilized various common side channels.
Gongye Cheng, Yunsi Fei, T. Wahl
semanticscholar +1 more source
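One signal such attacks can exploit is operand-dependent latency: on many CPUs, arithmetic on subnormal inputs or outputs takes a slower assisted path unless flush-to-zero is enabled. A minimal sketch of that timing difference (an illustration of the side channel, not the paper's attack):

```python
import time
import numpy as np

# Minimal sketch (illustrative only): time an element-wise multiply on normal
# versus subnormal operands.  On many CPUs the subnormal case is measurably
# slower; the gap may vanish if the runtime enables flush-to-zero.
n = 1_000_000
normal    = np.full(n, 1e-3,   dtype=np.float64)
subnormal = np.full(n, 1e-310, dtype=np.float64)   # below the normal float64 range

def best_time(a, reps=20):
    best = float("inf")
    for _ in range(reps):
        t0 = time.perf_counter()
        _ = a * 1.5                                 # result stays subnormal for subnormal input
        best = min(best, time.perf_counter() - t0)
    return best

print(f"normal operands:    {best_time(normal):.4f} s")
print(f"subnormal operands: {best_time(subnormal):.4f} s")
```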
Tapered Floating Point: A New Floating-Point Representation
IEEE Transactions on Computers, 1971
It is well known that there is a possible tradeoff in the binary representation of floating-point numbers in which one bit of accuracy can be gained at the cost of halving the exponent range, and vice versa. A way in which the exponent range can be greatly increased while preserving full accuracy for most computations is suggested.
openaire +2 more sources
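The bit-budget tradeoff this paper starts from is easy to tabulate (a minimal sketch, not the proposed tapered encoding): for a fixed word size, each bit moved from fraction to exponent doubles the number of representable exponents while giving up one bit of precision.

```python
# Minimal sketch (illustrative only): for a fixed 16-bit word with one sign
# bit, trading a fraction bit for an exponent bit halves the relative
# precision but doubles the count of representable exponents.
WORD = 16
for exp_bits in range(3, 9):
    frac_bits = WORD - 1 - exp_bits
    exponents = 2 ** exp_bits                  # distinct exponent values
    ulp = 2.0 ** -frac_bits                    # relative spacing near 1.0
    print(f"{exp_bits} exponent bits: {exponents:4d} exponents, "
          f"{frac_bits} fraction bits (ulp ~ {ulp:.2e})")
```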
Unnormalized Floating Point Arithmetic
Journal of the ACM, 1959
Algorithms for floating point computer arithmetic are described, in which fractional parts are not subject to the usual normalization convention. These algorithms give results in a form which furnishes some indication of their degree of precision. An analysis of one-stage error propagation is developed for each operation; a suggested statistical model ...
Ashenhurst, R. L., Metropolis, N.
openaire +2 more sources
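The flavor of unnormalized (significance) arithmetic can be sketched in a small decimal toy (illustrative only, not the paper's 1959 algorithms): the sum is left at the larger operand's exponent rather than renormalized, so leading zeros in the fraction indicate how much precision actually survived.

```python
# Minimal sketch (decimal toy, illustrative only): unnormalized addition keeps
# the result at the larger exponent instead of renormalizing it.
def unnormalized_add(fa, ea, fb, eb):
    """Operands are (6-digit decimal fraction, exponent); result is not renormalized."""
    if ea < eb:
        fa, ea, fb, eb = fb, eb, fa, ea
    fb_aligned = fb // (10 ** (ea - eb))   # digits shifted past the fraction are lost
    return fa + fb_aligned, ea             # keep the larger exponent as-is

# 0.123456e3 + (-0.123400e3) -> 0.000056e3: the leading zeros are retained,
# signalling that only two significant digits survived the cancellation.
# (The example keeps both exponents equal to sidestep rounding of negative
# fractions during alignment.)
f, e = unnormalized_add(123456, 3, -123400, 3)
print(f"fraction = {f:06d}, exponent = {e}")
```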
Accurate floating-point operation using controlled floating-point precision
Proceedings of 2011 IEEE Pacific Rim Conference on Communications, Computers and Signal Processing, 2011
Rounding and the accumulation of errors when using floating point numbers are important factors in computer arithmetic, and many applications suffer from them. The underlying machine architecture and the representation of floating point numbers largely determine the magnitude of errors in these calculations.
Ahmad M. Zaki +3 more
openaire +1 more source
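One standard way to control the accumulation error this abstract refers to is compensated (Kahan) summation, which carries a correction term for the low-order bits lost at each step; the sketch below is illustrative and not necessarily the paper's controlled-precision scheme.

```python
import numpy as np

# Minimal sketch (Kahan compensated summation as one standard remedy):
# accumulation error of a plain float32 sum versus a compensated sum.
rng = np.random.default_rng(3)
x = rng.uniform(0.0, 1.0, size=200_000).astype(np.float32)

def naive_sum(a):
    s = np.float32(0.0)
    for v in a:
        s = np.float32(s + v)
    return s

def kahan_sum(a):
    s = np.float32(0.0)
    c = np.float32(0.0)                  # running compensation for lost low-order bits
    for v in a:
        y = np.float32(v - c)
        t = np.float32(s + y)
        c = np.float32((t - s) - y)
        s = t
    return s

exact = x.astype(np.float64).sum()
print("naive error:", abs(float(naive_sum(x)) - exact))
print("kahan error:", abs(float(kahan_sum(x)) - exact))
```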
High-Performance FPGA-Based CNN Accelerator With Block-Floating-Point Arithmetic
IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2019
Convolutional neural networks (CNNs) are widely used and have achieved great success in computer vision and speech processing applications. However, deploying large-scale CNN models in embedded systems is subject to the constraints of computation ...
Xiaocong Lian +5 more
semanticscholar +1 more source
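Block floating-point itself is simple to sketch (a generic illustration, not the accelerator's exact format): each block of values shares one exponent derived from its largest magnitude, and every value stores only a short integer mantissa, so arithmetic inside a block reduces to integer operations.

```python
import numpy as np

# Minimal sketch (generic block floating-point, illustrative only): quantize
# each block to a shared power-of-two exponent plus short integer mantissas.
def bfp_quantize(x, block=16, mant_bits=8):
    out = np.empty_like(x, dtype=np.float32)
    for i in range(0, len(x), block):
        blk = x[i:i + block]
        max_mag = np.max(np.abs(blk))
        exp = int(np.ceil(np.log2(max_mag))) if max_mag > 0 else 0  # shared exponent
        scale = 2.0 ** (exp - (mant_bits - 1))
        mant = np.clip(np.round(blk / scale),
                       -(2 ** (mant_bits - 1)), 2 ** (mant_bits - 1) - 1)
        out[i:i + block] = mant * scale
    return out

rng = np.random.default_rng(4)
x = rng.standard_normal(64).astype(np.float32)
x_hat = bfp_quantize(x)
print("max abs error:", float(np.max(np.abs(x - x_hat))))
```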
Journal of the ACM, 1960
Three types of floating-point arithmetics with error control are discussed and compared with conventional floating-point arithmetic. General multiplication and division shift criteria are derived (for any base) for Metropolis-type arithmetics. The limitations and most suitable range of application for each arithmetic are discussed.
openaire +1 more source

