Results 211 to 220 of about 16,570
Some of the following articles may not be open access.

Parameterisable floating-point operations on FPGA

Conference Record of the Thirty-Sixth Asilomar Conference on Signals, Systems and Computers (2002), 2003
The paper presents a group of IEEE 754-style floating-point units targeted at Xilinx VirtexII FPGA. Special features of the technology are taken advantage of to produce optimised components. Pipelined designs are given that show the latency of ~100 MHz single-precision components.
B. Lee, N. Burgess

VHDL Floating Point Operations

1995
In this paper, we present a set of portable floating point VHDL functions. These functions provide the VHDL programmer with absolute portability and very precise control over floating point operations. A single VHDL type is used to represent single, double, and extended precision floating point numbers.
George S. Powley, Joanne E. DeGroat

Semantics for exact floating point operations

[1991] Proceedings 10th IEEE Symposium on Computer Arithmetic, 2002
Semantics are given for the four elementary arithmetic operations and the square root, to characterize what are termed exact floating point operations. The operands of the arithmetic operations and the argument of the square root are all floating point numbers in one format.
Bohlender, Gerd   +3 more
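As an illustration of the property this abstract characterizes, one can test whether a single floating-point operation is exact by comparing its rounded result against exact rational arithmetic. This Python sketch is illustrative only, not the paper's formal semantics:

```python
from fractions import Fraction

def sum_is_exact(a: float, b: float) -> bool:
    """True when the rounded float sum equals the exact real sum."""
    return Fraction(a) + Fraction(b) == Fraction(a + b)

print(sum_is_exact(0.5, 0.25))  # True: 0.75 is exactly representable
print(sum_is_exact(0.1, 0.2))   # False: the exact sum needs more mantissa bits
```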

Floating-Point Operations

2019
The Raspberry Pi is based on a system on a chip. This chip contains the quad-core ARM CPU that we have been studying along with a couple of coprocessors. In this chapter, we’ll be looking at what the floating-point unit (FPU) does. Some ARM documentation refers to this as the Vector Floating Point (VFP) to promote the fact that it can do some limited ...

Floating Point Operation

1980
The range of numbers available in a digital computer word as discussed so far is strictly limited. A 32-bit number has a range of about 2^32 or 10^10 numbers. If the numbers are regarded as integers, then it is necessary to scale many problems in order to represent fractions.
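The range claim and the scaling step can be checked directly; the fixed-point scale factor below is an illustrative choice, not taken from the text:

```python
# A 32-bit word distinguishes 2**32 values, on the order of 10**10.
print(2**32)                      # 4294967296

# Interpreting words as integers means fractions must be pre-scaled
# (fixed-point): store x * SCALE as an integer, divide on the way out.
SCALE = 10**6                     # illustrative scale factor
stored = round(3.14159 * SCALE)   # the integer 3141590 fits in 32 bits
print(stored, stored / SCALE)
```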

Analysis of floating point operations in microcontrollers

2011 Proceedings of IEEE Southeastcon, 2011
The purpose of this paper is to identify the advantages of including floating point hardware (a mathematical co-processor) in microcontrollers used for critical floating point operations. Three different microcontrollers are considered: Renesas M16C/62P (CISC without FPU), ATMEGA1280 (RISC without FPU) and Renesas RX62N (CISC with FPU).
Aswin Ramakrishnan, James M. Conrad

Formalization and implementation of floating-point matrix operations

Computing, 1976
The paper shows that floating-point matrix operations can be implemented in a way which leads to reasonable mathematical structures as well as to sensible compatibility properties between these structures and the structure of the real matrices. It turns out, for instance, that all the rules of the minus-operator for real matrices can be saved and that ...
Kulisch, U., Bohlender, G.
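One of the "saved" rules is easy to see concretely: IEEE negation is exact (it only flips the sign bit), so minus-operator identities such as -(-A) = A and A - B = A + (-B) survive rounding. A small sketch with plain Python lists, not the paper's formalism:

```python
# Elementwise matrix operators over nested lists of floats.
def neg(M):
    return [[-x for x in row] for row in M]

def add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

A = [[1.1, -2.2], [3.3, 0.0]]
B = [[0.4, 0.5], [-0.6, 0.7]]
print(neg(neg(A)) == A)             # True: negation is exact
print(sub(A, B) == add(A, neg(B)))  # True: subtraction is addition of the negation
```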

Micro-optimization of floating-point operations

Proceedings of the third international conference on Architectural support for programming languages and operating systems, 1989
This paper describes micro-optimization, a technique for reducing the operation count and time required to perform floating-point calculations. Micro-optimization involves breaking floating-point operations into their constituent micro-operations and optimizing the resulting code. Exposing micro-operations to the compiler creates many opportunities for ...
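The decomposition this abstract describes can be pictured with Python's frexp/ldexp, which expose the unpack and repack micro-steps of a float. This is an illustrative decomposition, not the paper's compiler representation:

```python
import math

def unpack(x: float):
    m, e = math.frexp(x)   # x == m * 2**e with 0.5 <= |m| < 1
    return m, e

def repack(m: float, e: int) -> float:
    return math.ldexp(m, e)

# A compiler that sees these micro-operations separately can, for example,
# delete a redundant repack/unpack pair between adjacent operations.
m, e = unpack(6.0)
print(m, e)            # 0.75 3
print(repack(m, e))    # 6.0
```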

A dyadic floating-point mutation operator of EC

International Conference on Neural Networks and Signal Processing, 2003. Proceedings of the 2003, 2003
The performance of evolutionary computation (EC) is determined by many parameters, among which the mutation operator plays an important role, especially for floating-point EC. However, the traditional mutation operation can't effectively keep EC from being trapped in a local extremum. In order to improve the efficiency of EC, a novel dyadic mutation operator is ...
Xu Xiangyong   +2 more
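For context, a traditional real-coded mutation looks like the sketch below. The paper's dyadic operator is not specified in this excerpt, so the Gaussian form and the parameters here are illustrative only, not the proposed method:

```python
import random

def gaussian_mutate(genome, sigma=0.1, rate=0.2, rng=None):
    """Perturb each gene with probability `rate` by N(0, sigma) noise."""
    rng = rng or random.Random()
    return [g + rng.gauss(0.0, sigma) if rng.random() < rate else g
            for g in genome]

child = gaussian_mutate([0.5, -1.0, 2.0], rng=random.Random(42))
print(child)
```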

Acceleration of accurate floating point operations using SIMD

2014 9th International Conference on Computer Engineering & Systems (ICCES), 2014
Several computing systems that use decimal number calculations suffer from the accumulation and propagation of errors. Decimal numbers are represented using specific-length floating point formats, and hence there will always be truncation of extra fraction bits, causing errors. Several solutions have been proposed for this problem.
DiaaEldin M. Abdalla   +2 more
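The abstract does not name the proposed solution, but a classic building block for accurate floating-point summation is Knuth's TwoSum error-free transformation, which SIMD implementations apply lane-wise. A scalar Python sketch:

```python
from fractions import Fraction

def two_sum(a: float, b: float):
    """Knuth's TwoSum: s is the rounded sum, err the exact rounding error,
    so a + b == s + err holds exactly (barring overflow)."""
    s = a + b
    bv = s - a                        # the part of b that made it into s
    err = (a - (s - bv)) + (b - bv)   # what rounding discarded
    return s, err

s, err = two_sum(0.1, 0.2)
print(s, err)
# Verify with exact rational arithmetic that no information was lost:
print(Fraction(s) + Fraction(err) == Fraction(0.1) + Fraction(0.2))  # True
```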
