Low Latency Floating-Point Division and Square Root Unit
IEEE Transactions on Computers, 2020
Digit-recurrence algorithms are widely used in current microprocessors to compute floating-point division and square root. These iterative algorithms present a good trade-off in terms of performance, area, and power.
J. Bruguera
Mathematics and Computers in Simulation, 1996
zbMATH Open Web Interface contents unavailable due to conflicting licenses.
Integer vs. Floating-Point Processing on Modern FPGA Technology
Computing and Communication Workshop and Conference, 2020
Historically, FPGA designers have used integer processing whenever possible because floating-point processing was prohibitively costly due to higher logic requirements and reduced speed. Therefore, fixed-point processing was the norm.
D. L. N. Hettiarachchi et al.
Roundings in floating point arithmetic
1972 IEEE 2nd Symposium on Computer Arithmetic (ARITH), 1972
In this paper we discuss directed roundings and indicate how hardware might be designed to produce proper upward-directed, downward-directed, and certain commonly used symmetric roundings. Algorithms for the four binary arithmetic operations and for rounding are presented, together with proofs of their correctness; appropriate formulas for a priori ...
Efficient Multiple-Precision Floating-Point Fused Multiply-Add with Mixed-Precision Support
IEEE Transactions on Computers, 2019
In this paper, an efficient multiple-precision floating-point fused multiply-add (FMA) unit is proposed. The proposed FMA supports not only single-precision, double-precision, and quadruple-precision operations, as some previous works do, but also half ...
Hao Zhang, Dongdong Chen, S. Ko
DLFloat: A 16-b Floating Point Format Designed for Deep Learning Training and Inference
IEEE Symposium on Computer Arithmetic, 2019
The resilience of Deep Learning (DL) training and inference workloads to low-precision computations, coupled with the demand for power- and area-efficient hardware accelerators for these workloads, has led to the emergence of 16-bit floating point formats.
A. Agrawal et al.
Approximate Integer and Floating-Point Dividers with Near-Zero Error Bias
Design Automation Conference, 2019
We propose approximate dividers with near-zero error bias for both integer and floating-point numbers. The integer divider, INZeD, is designed using a novel, analytically deduced error-correction method in an approximate log-based divider.
Hassaan Saadat et al.
Just fuzz it: solving floating-point constraints using coverage-guided fuzzing
ESEC/SIGSOFT FSE, 2019
We investigate the use of coverage-guided fuzzing as a means of proving satisfiability of SMT formulas over finite variable domains, with specific application to floating-point constraints.
Daniel Liew et al.
2021
Standard IEEE floating point, which defines the representation and calculations of real numbers using a binary representation similar to scientific notation, does not define an exact floating-point result. In contrast, here we use a patented bounded floating-point (BFP) device and method for calculating and retaining the precision of the floating-point
Alan A. Jorgensen, Andrew C. Masters
Fixed point versus floating point
SIMULATION, 1968
This paper discusses the advantages of using fixed-point arithmetic in a hybrid environment. The particular example used is that of a hybrid simulation system. The paper discusses the performance penalties paid for using floating-point arithmetic as opposed to fixed-point arithmetic.

