Results 1 to 10 of about 94,745 (265)
Scaling up and down of 3-D floating-point data in quantum computation. [PDF]
In the past few decades, quantum computation has become increasingly attractive due to its remarkable performance. Quantum image scaling is considered a common geometric transformation in quantum image processing; however, the quantum floating-point data ...
Xu M, Lu D, Sun X.
europepmc +2 more sources
NULL convention floating point multiplier. [PDF]
Floating-point multiplication is a critical part of high-dynamic-range and computationally intensive digital signal processing applications that require high precision and low power.
Albert AJ, Ramachandran S.
europepmc +2 more sources
Hybrid Precision Floating-Point (HPFP) Selection to Optimize Hardware-Constrained Accelerator for CNN Training. [PDF]
The rapid advancement in AI requires efficient accelerators for training on edge devices, which often face challenges related to the high hardware costs of floating-point arithmetic operations.
Junaid M +5 more
europepmc +2 more sources
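The snippet above concerns hybrid-precision training on hardware-constrained accelerators. As a hypothetical illustration (not taken from the paper; the variable names and the training-loop setup are my own), the sketch below shows the core numerical issue: small gradient updates that survive in wide accumulation vanish entirely when weights are stored in IEEE-754 half precision.

```python
# Hypothetical sketch, not from the paper: narrow-format weight storage can
# silently drop small updates that a wider accumulator preserves.
import struct

def to_fp16(x):
    # Round-trip through IEEE-754 binary16 ("e" format) to simulate
    # half-precision storage.
    return struct.unpack("<e", struct.pack("<e", x))[0]

w_wide = 1.0            # wide (float64) accumulation
w_half = to_fp16(1.0)   # half-precision storage
grad = 1e-4             # update smaller than half the fp16 ulp near 1.0

for _ in range(100):
    w_wide += grad
    w_half = to_fp16(w_half + grad)  # 1.0 + 1e-4 rounds back to 1.0

print(w_wide)  # ~1.01: the updates accumulate
print(w_half)  # 1.0: every update vanished below fp16 resolution
```

This is why mixed-precision schemes typically keep a wide master copy of the weights even when arithmetic runs in a narrow format.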
High-Performance Computing in Meteorology under a Context of an Era of Graphical Processing Units
This short review shows how innovative processing units—including graphical processing units (GPUs)—are used in high-performance computing (HPC) in meteorology, introduces current scientific studies relevant to HPC, and discusses the latest topics in ...
Tosiyuki Nakaegawa
doaj +1 more source
Detecting Floating-Point Expression Errors Based Improved PSO Algorithm
The use of floating-point numbers inevitably leads to inaccurate results and, in certain cases, significant program failures. Detecting floating-point errors is critical to ensuring that floating-point program outputs are correct.
Hongru Yang +4 more
doaj +1 more source
Floating Point Optimization Using VHDL [PDF]
Due to inherent limitations of the fixed-point representation, it is sometimes desirable to perform arithmetic operations in the floating-point format.
Manal Hammadi Jassim
doaj +1 more source
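As a hypothetical illustration of the fixed-point limitation mentioned in the snippet (not from the paper; the Q8.8 format choice and helper name are my own): a fixed-point format has constant absolute resolution, so values below that resolution are lost entirely, while floating point keeps roughly constant relative precision at any magnitude.

```python
# Hypothetical sketch, not from the paper: quantizing to 16-bit Q8.8
# fixed point (8 integer bits, 8 fractional bits, resolution 2**-8).
def to_q8_8(x):
    return round(x * 256) / 256

print(to_q8_8(0.001))  # 0.0: the value is below the Q8.8 resolution
print(to_q8_8(1.5))    # 1.5: exactly representable
print(0.001)           # a float preserves ~7+ significant digits here
```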
Hadamard product-based in-memory computing design for floating point neural network training
Deep neural networks (DNNs) are one of the key fields of machine learning, and they require considerable computational resources for cognitive tasks. As a novel technology to perform computing inside/near memory units, in-memory computing (IMC) significantly ...
Anjunyi Fan +9 more
doaj +1 more source
Analysis of Posit and Bfloat Arithmetic of Real Numbers for Machine Learning
Modern computational tasks are often required not only to guarantee a predefined accuracy but also to deliver results quickly. Optimizing calculations using floating-point numbers, as opposed to integers, is a non-trivial task.
Aleksandr Yu. Romanov +8 more
doaj +1 more source
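The trade-off between bfloat16 and standard floats can be sketched in a few lines (a hypothetical illustration, not from the paper; the helper name is my own): bfloat16 keeps float32's 8-bit exponent, and hence its range, but only 7 explicit mantissa bits, so it can be simulated by truncating a float32 bit pattern to its top 16 bits.

```python
# Hypothetical sketch, not from the paper: simulate bfloat16 by keeping
# only the high 16 bits of an IEEE-754 float32 (truncation, no rounding).
import struct

def to_bfloat16(x):
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    return struct.unpack(">f", struct.pack(">I", bits & 0xFFFF0000))[0]

print(to_bfloat16(3.14159))  # 3.140625: only ~3 significant digits survive
print(to_bfloat16(1e30))     # still finite: float32's full exponent range
```

The same range with far less precision is exactly why bfloat16 suits machine-learning workloads, which tolerate noise, better than general numerics.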
We develop a bit manipulation technique for single precision floating point numbers which leads to new algorithms for fast computation of the cube root and inverse cube root.
Leonid Moroz +3 more
doaj +1 more source
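A hypothetical sketch in the spirit of the bit-manipulation technique described above (the paper's actual constants and iteration are not reproduced here; the function name and the magic constant below, derived from the standard exponent-shift formula, are my own): a Quake-style integer initial guess for the inverse cube root, refined by one Newton step.

```python
# Hypothetical sketch, not the paper's algorithm: fast 1/cbrt(x) for
# positive float32 inputs via bit manipulation plus one Newton step.
import struct

def inv_cbrt(x):
    # Reinterpret the float32 bit pattern as an unsigned integer.
    i = struct.unpack(">I", struct.pack(">f", x))[0]
    # Initial guess: dividing the integer by 3 roughly divides the
    # exponent by -1/3; the constant recenters the biased exponent.
    magic = int((4.0 / 3.0) * (127 - 0.0450466) * 2**23)
    i = magic - i // 3
    y = struct.unpack(">f", struct.pack(">I", i))[0]
    # One Newton-Raphson step for f(y) = y**-3 - x.
    return y * (4.0 - x * y * y * y) / 3.0

print(inv_cbrt(8.0))   # ~0.5
print(inv_cbrt(27.0))  # ~0.333
```

The initial bit-level guess is accurate to a few percent; each Newton step then roughly squares the relative error, which is what makes such methods fast: no division or library call is needed for the starting value.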
This article presents a design methodology for designing an artificial neural network as an equalizer for a binary signal. Firstly, the system is modelled in floating point format using Matlab.
Santiago T. Pérez Suárez +2 more
doaj +1 more source