Results 21 to 30 of about 194,546
Floating-Point LLL Revisited [PDF]
Everybody knows the Lenstra-Lenstra-Lovász lattice basis reduction algorithm (LLL), which has proved invaluable in public-key cryptanalysis and in many other fields. Given an integer $d$-dimensional lattice basis whose vectors have norms smaller than $B$, LLL outputs a so-called LLL-reduced basis in time $O(d^6 \log^3 B)$, using arithmetic operations on ...
Nguyen, Phong Q., Stehlé, Damien
openaire +3 more sources
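As context for the entry above, here is a minimal, textbook LLL sketch in Python (with the usual parameter delta = 3/4), using exact rationals for the Gram-Schmidt data. This is illustrative only, not the authors' algorithm: the paper's contribution is precisely to replace these exact rationals with guarded floating-point approximations for speed.

```python
from fractions import Fraction

def gram_schmidt(b):
    """Exact Gram-Schmidt: coefficients mu[i][j] and squared norms of b*_i."""
    n, dim = len(b), len(b[0])
    mu = [[Fraction(0)] * n for _ in range(n)]
    bstar = [[Fraction(x) for x in v] for v in b]
    norms = [Fraction(0)] * n
    for i in range(n):
        for j in range(i):
            dot = sum(Fraction(b[i][k]) * bstar[j][k] for k in range(dim))
            mu[i][j] = dot / norms[j]
            for k in range(dim):
                bstar[i][k] -= mu[i][j] * bstar[j][k]
        norms[i] = sum(x * x for x in bstar[i])
    return mu, norms

def lll(b, delta=Fraction(3, 4)):
    """Return a delta-LLL-reduced basis for the full-rank integer basis b."""
    b = [list(v) for v in b]
    k = 1
    while k < len(b):
        mu, norms = gram_schmidt(b)          # recomputed from scratch: slow but exact
        for j in range(k - 1, -1, -1):       # size-reduce b[k] so |mu[k][j]| <= 1/2
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
                mu, norms = gram_schmidt(b)
        if norms[k] >= (delta - mu[k][k - 1] ** 2) * norms[k - 1]:
            k += 1                           # Lovasz condition holds, advance
        else:
            b[k], b[k - 1] = b[k - 1], b[k]  # swap and backtrack
            k = max(k - 1, 1)
    return b

print(lll([[1, 1, 1], [-1, 0, 2], [3, 5, 6]]))  # short, nearly orthogonal vectors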
Radix Conversion for IEEE754-2008 Mixed Radix Floating-Point Arithmetic [PDF]
Conversion between binary and decimal floating-point representations is ubiquitous. Floating-point radix conversion means converting both the exponent and the mantissa.
Kupriianova, O. +2 more
core +4 more sources
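To make the entry above concrete: converting both the exponent and the mantissa means rescaling a binary value m * 2^e into a decimal one d * 10^f. The big-integer sketch below (function name bin_to_dec is hypothetical, and m > 0 is assumed) illustrates the idea; the paper targets correctly rounded IEEE754-2008 conversions with hardware-friendly arithmetic, not this shortcut.

```python
import math

def bin_to_dec(m, e, p=17):
    """Return (d, f) with m * 2**e ~= d * 10**f and d having p decimal digits."""
    # Estimate the decimal exponent; this log10 estimate can be off by one
    # near powers of ten, which a production converter must detect and fix.
    f = math.floor(math.log10(m) + e * math.log10(2)) - p + 1
    num = m * 2**e if e >= 0 else m          # build an exact fraction num/den
    den = 1 if e >= 0 else 2**-e
    if f >= 0:
        den *= 10**f
    else:
        num *= 10**-f
    d = (2 * num + den) // (2 * den)         # round to nearest (ties up)
    return d, f

print(bin_to_dec(3, 99))                     # 3 * 2**99 to 17 decimal digits
```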
Templatized Fused Vector Floating-Point Dot Product for High-Level Synthesis
Machine-learning accelerators rely on floating-point matrix and vector multiplication kernels. To reduce their cost, customized many-term fused architectures are preferred, which improve the latency, power, and area of the designs.
Dionysios Filippas +2 more
doaj +1 more source
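A behavioral illustration of the fused idea in the entry above: a fused dot product sums the unrounded products and rounds once at the end, while a conventional multiply-add chain rounds after every step. The sketch below emulates the single final rounding with exact rationals; a real HLS design would instead align and add unrounded products in a wide fixed-point accumulator.

```python
from fractions import Fraction

def fused_dot(xs, ys):
    exact = sum(Fraction(x) * Fraction(y) for x, y in zip(xs, ys))
    return float(exact)          # one correctly-rounded result at the end

def chained_dot(xs, ys):
    acc = 0.0
    for x, y in zip(xs, ys):
        acc = acc + x * y        # one rounding per multiply and per add
    return acc

xs = [1e16, 1.0, -1e16]
ys = [1.0, 1.0, 1.0]
print(fused_dot(xs, ys), chained_dot(xs, ys))   # 1.0 vs 0.0
```

The catastrophic example shows why fused many-term units improve accuracy as well as latency: the chained version absorbs the 1.0 into 1e16 and returns 0.0.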
Floating Point Verification [PDF]
JUCS - Journal of Universal Computer Science.
openaire +1 more source
On Floating-Point Summation [PDF]
The author starts with a general algorithm for summing up \(n + 1\) numbers \(x_i\), \(i = 0,\dots,n\). He relates the total error \(\widehat{s}_n - s_n\) in the floating-point summation to the computed intermediate machine sums \(\widehat{s}_i\) and to the initial errors \(e_{\widehat{x}_i} = \widehat{x}_i - x_i\), where \(\widehat{x}_i\) is the ...
openaire +1 more source
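The decomposition described in this snippet can be checked numerically: for exact inputs, the total error of recursive summation is exactly the sum of the per-step rounding errors. The sketch below recovers each step error exactly with Fraction; Knuth's TwoSum would do the same in pure floating point.

```python
from fractions import Fraction
import random

xs = [random.uniform(-1, 1) for _ in range(1000)]   # exact inputs, so e_i = 0

s_hat, step_errors = 0.0, []
for x in xs:
    t = s_hat + x                                   # rounded machine sum
    step_errors.append(Fraction(t) - (Fraction(s_hat) + Fraction(x)))
    s_hat = t

total_error = Fraction(s_hat) - sum(Fraction(x) for x in xs)
assert total_error == sum(step_errors)              # the decomposition is exact
print(float(total_error))
```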
Verification of Floating-Point Adders [PDF]
The floating-point (FP) division bug in Intel's Pentium processor and the overflow flag erratum of the FIST instruction in Intel's Pentium Pro and Pentium II processors have demonstrated the importance and the difficulty of verifying FP arithmetic circuits.
Yirng-An Chen, Bryant, Randal E.
openaire +2 more sources
AxCEM: Designing Approximate Comparator-Enabled Multipliers
Floating-point multipliers have been a key component of nearly all forms of modern computing systems. Most data-intensive applications, such as deep neural networks (DNNs), expend the majority of their resources and energy budget on floating-point ...
Samar Ghabraei +3 more
doaj +1 more source
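For a feel of the trade-off this entry alludes to, here is one generic approximation strategy from this literature: truncating low-order mantissa bits before multiplying. This is an illustration only, not the AxCEM comparator-enabled design, and kept_bits is an arbitrary illustrative parameter.

```python
import math, struct

def truncate_mantissa(x, kept_bits):
    """Zero out all but `kept_bits` of the 52-bit binary64 mantissa."""
    (bits,) = struct.unpack("<Q", struct.pack("<d", x))
    mask = ~((1 << (52 - kept_bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    (y,) = struct.unpack("<d", struct.pack("<Q", bits & mask))
    return y

def approx_mul(a, b, kept_bits=8):
    return truncate_mantissa(a, kept_bits) * truncate_mantissa(b, kept_bits)

a, b = math.pi, math.e
exact, approx = a * b, approx_mul(a, b)
print(exact, approx, abs(exact - approx) / exact)   # relative error ~2**-8
```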
Design and Implementation of Numerical Linear Algebra Algorithms on Fixed Point DSPs
Numerical linear algebra algorithms exploit the inherent elegance of matrix formulations and are usually implemented in C/C++ using floating-point representations.
Nguyen Ha Thai +2 more
doaj +2 more sources
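The floating-to-fixed translation this entry describes boils down to representing reals as scaled integers. A minimal Q15 sketch follows (the Q-format and helper names are illustrative, not the paper's): each multiply produces 2Q fractional bits and needs a rescaling shift, and saturation and block-scaling, which real DSP ports require, are omitted.

```python
Q = 15                                   # fractional bits in the Q15 format

def to_q15(x):
    return max(-32768, min(32767, round(x * (1 << Q))))

def from_q15(i):
    return i / (1 << Q)

def q15_dot(xs, ys):
    acc = 0                              # wide accumulator, as on a DSP MAC unit
    for a, b in zip(xs, ys):
        acc += a * b                     # products carry 2Q fractional bits
    return acc >> Q                      # arithmetic shift back to Q15

xs = [to_q15(v) for v in (0.5, -0.25, 0.125)]
ys = [to_q15(v) for v in (0.5, 0.5, 0.5)]
print(from_q15(q15_dot(xs, ys)))         # 0.1875
```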
Mixed-precision weights network for field-programmable gate array.
In this study, we introduce a mixed-precision weights network (MPWN), a quantized neural network that jointly utilizes three different weight spaces: binary {-1,1}, ternary {-1,0,1}, and 32-bit floating-point.
Ninnart Fuengfusin, Hakaru Tamukoh
doaj +1 more source
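The three weight spaces listed in this entry can be sketched in a few lines. The thresholding and scaling choices below follow common binarization/ternarization conventions and are not necessarily the paper's; the threshold t is an illustrative assumption.

```python
import numpy as np

def binarize(w):
    alpha = np.mean(np.abs(w))                        # per-layer scaling factor
    return alpha * np.sign(np.where(w == 0, 1, w))    # weights in {-alpha, +alpha}

def ternarize(w, t=0.05):
    mask = np.abs(w) > t                              # |w| <= t snaps to 0
    alpha = np.mean(np.abs(w[mask])) if mask.any() else 0.0
    return alpha * np.sign(w) * mask                  # weights in {-alpha, 0, +alpha}

rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=8).astype(np.float32)
print(w)             # 32-bit floating-point space (kept as-is)
print(binarize(w))   # binary space
print(ternarize(w))  # ternary space
```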
Efficient Floating Point Arithmetic for Quantum Computers
One of the major promises of quantum computing is the realization of SIMD (single instruction, multiple data) operations using the phenomenon of superposition.
Raphael Seidel +4 more
doaj +1 more source

