Results 231 to 240 of about 194,546 (266)
Some of the following articles may not be open access.

Floating-Point Arithmetics

Journal of the ACM, 1960
Three types of floating-point arithmetics with error control are discussed and compared with conventional floating-point arithmetic. General multiplication and division shift criteria are derived (for any base) for Metropolis-type arithmetics. The limitations and most suitable range of application for each arithmetic are discussed.

Floating point Gröbner bases

Mathematics and Computers in Simulation, 1996

Floating-Point Arithmetic

2020
Working with big integers can be seen as an abstract art, and if the cryptosystems are not implemented properly, the entire cryptographic algorithm or scheme can lead to a real disaster. This chapter focuses on floating-point arithmetic and its importance for cryptography.
Marius Iulian Mihailescu et al.

Roundings in floating point arithmetic

1972 IEEE 2nd Symposium on Computer Arithmetic (ARITH), 1972
In this paper we discuss directed roundings and indicate how hardware might be designed to produce proper upward-directed, downward-directed, and certain commonly used symmetric roundings. Algorithms for the four binary arithmetic operations and for rounding are presented, together with proofs of their correctness; appropriate formulas for a priori ...
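The upward- and downward-directed roundings described in this abstract can be sketched in software with Python's `decimal` module; this is only an illustration of the rounding modes, not the hardware algorithms the paper presents.

```python
from decimal import Decimal, getcontext, ROUND_CEILING, ROUND_FLOOR

getcontext().prec = 6  # keep six significant digits

def div_directed(a, b):
    """Return (downward-rounded, upward-rounded) quotients of a/b."""
    ctx = getcontext()
    ctx.rounding = ROUND_FLOOR      # round toward -infinity
    lo = Decimal(a) / Decimal(b)
    ctx.rounding = ROUND_CEILING    # round toward +infinity
    hi = Decimal(a) / Decimal(b)
    return lo, hi

lo, hi = div_directed(1, 3)
# The true quotient 1/3 is guaranteed to lie in the interval [lo, hi].
print(lo, hi)
```

Enclosing each result between a downward- and an upward-rounded value is the basic building block of interval arithmetic, which is why proper directed roundings matter.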

Exact Floating Point

2021
Standard IEEE floating point, which defines the representation and calculations of real numbers using a binary representation similar to scientific notation, does not define an exact floating-point result. In contrast, here we use a patented bounded floating-point (BFP) device and method for calculating and retaining the precision of the floating-point
Alan A. Jorgensen, Andrew C. Masters

Floating-Point Numbers

2021
So far, the only numbers we’ve dealt with are integers—numbers with no decimal point. Computers have a general problem with numbers with decimal points, because computers can only store fixed-size, finite values. Decimal numbers can be any length, including infinite length (think of a repeating decimal, like the result of 1/3).
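The storage problem sketched above, that a repeating decimal such as 1/3 cannot be held exactly in a finite representation, is easy to demonstrate in Python (the specific values shown are standard IEEE double-precision behavior, not anything from this chapter):

```python
third = 1 / 3
print(third)                      # prints 0.3333333333333333: already rounded
exact_back = third * 3 == 1.0     # happens to be True, purely by rounding luck
mismatch = 0.1 + 0.2 == 0.3       # False: each term carries its own rounding error
print(exact_back, mismatch)
```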

Arbitrary-Precision Floating Point

2017
Floating-point numbers using the IEEE standard have a fixed number of bits, which limits the precision with which they can represent real numbers. In this chapter we examine arbitrary-precision (also called multiple-precision) floating-point numbers: how they are represented in memory and how basic arithmetic on them works.
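A minimal taste of arbitrary precision using Python's standard-library `decimal` module (libraries such as mpmath or GMP are common alternatives; this is an illustration, not the chapter's own implementation):

```python
from decimal import Decimal, getcontext

getcontext().prec = 50            # 50 significant digits vs. a double's ~16
root2 = Decimal(2).sqrt()
print(root2)                      # sqrt(2) to 50 significant digits

error = Decimal(2) - root2 * root2
print(error)                      # tiny residual from rounding at the 50th digit
```

Raising `prec` trades speed and memory for precision, which is exactly the degree of freedom fixed-width IEEE formats lack.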

Floating Point Arithmetic

2012
There are many data-processing applications (e.g. image and voice processing) that use a large range of values and need relatively high precision. In such cases, instead of encoding the information as integers or fixed-point numbers, an alternative is a floating-point representation.
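The representation alluded to above splits a number into sign, exponent, and fraction fields; decoding a 32-bit IEEE float makes this concrete (a sketch assuming the standard single-precision layout, not code from this book):

```python
import struct

def decode_float32(x):
    """Split a value, stored as a 32-bit IEEE float, into its bit fields."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    sign = bits >> 31               # 1 bit
    exponent = (bits >> 23) & 0xFF  # 8 bits, biased by 127
    fraction = bits & 0x7FFFFF      # 23 bits of the significand
    return sign, exponent, fraction

print(decode_float32(1.0))    # (0, 127, 0): +1.0 x 2^(127-127)
print(decode_float32(-2.5))   # (1, 128, 0x200000): -1.25 x 2^1
```

The exponent field is what buys the large dynamic range, while the fraction width fixes the relative precision.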
Jean-Pierre Deschamps et al.

Floating-Point Numbers

1995
Floating-point storage of data takes care of many of the nasty problems that occur when you are using integers and fixed-point numbers. But floating-point storage has its own problems.

Decimal Floating Point

2015
Decimal floating point is an emerging standard that uses base 10 instead of base 2 to represent floating-point numbers. In this chapter we will look at how decimal floating-point numbers are stored, using the IEEE 754-2008 standard as our reference.
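The practical payoff of base 10 is that decimal fractions like 0.1 are exact; Python's `decimal` module, which follows the General Decimal Arithmetic specification that IEEE 754-2008's decimal formats draw on, shows the contrast with binary doubles:

```python
from decimal import Decimal

binary_ok = (0.1 + 0.2 == 0.3)                                    # False for binary doubles
decimal_ok = (Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True in base 10
print(binary_ok, decimal_ok)
```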
