Results 21 to 30 of about 5,721,777 (374)
Analysis of Posit and Bfloat Arithmetic of Real Numbers for Machine Learning
Modern computational tasks are often required not only to guarantee a predefined accuracy but also to deliver results quickly. Optimizing calculations that use floating point numbers, as opposed to integers, is a non-trivial task.
Aleksandr Yu. Romanov +8 more
doaj +1 more source
Succinct Zero Knowledge for Floating Point Computations
We study the problem of constructing succinct zero knowledge proof systems for floating point computations. The standard approach to handling floating point computations requires converting them to binary circuits, following the IEEE-754 floating point standard.
Sanjam Garg +3 more
semanticscholar +1 more source
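For context on what "conversion to binary circuits" entails, here is a minimal C sketch, not the paper's construction, of the bit-level decomposition of an IEEE-754 binary32 value that such a boolean-circuit encoding would work over; the input value and printout are illustrative only.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Minimal sketch, not the paper's construction: the bit-level decomposition of
 * an IEEE-754 binary32 value that a boolean-circuit encoding would work over. */
int main(void) {
    float x = -6.25f;                           /* illustrative input */
    uint32_t bits;
    memcpy(&bits, &x, sizeof bits);             /* reinterpret the bits without UB */

    unsigned sign     = bits >> 31;             /* 1 sign bit */
    unsigned exponent = (bits >> 23) & 0xFFu;   /* 8 exponent bits, bias 127 */
    unsigned mantissa = bits & 0x7FFFFFu;       /* 23 fraction bits, implicit leading 1 */

    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%06X\n",
           sign, exponent, (int)exponent - 127, mantissa);
    return 0;
}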
We develop a bit manipulation technique for single precision floating point numbers which leads to new algorithms for fast computation of the cube root and inverse cube root.
Leonid Moroz +3 more
doaj +1 more source
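As a hedged illustration of the general bit-manipulation approach rather than the paper's exact algorithm, the C sketch below forms an initial guess for x^(-1/3) by integer arithmetic on the float's bit pattern and refines it with one Newton-Raphson step. The magic constant 0x54A21229 is an assumed illustrative value, not necessarily the one derived in the cited work.

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Rough illustration of the bit-manipulation idea for x^(-1/3); the magic
 * constant is an assumed textbook-style value, not necessarily the paper's. */
static float inv_cbrt(float x) {
    uint32_t i;
    memcpy(&i, &x, sizeof i);                   /* reinterpret the float's bits */
    i = 0x54A21229u - i / 3u;                   /* initial guess via integer arithmetic */
    float y;
    memcpy(&y, &i, sizeof y);
    y = y * (4.0f - x * y * y * y) / 3.0f;      /* one Newton-Raphson refinement */
    return y;
}

int main(void) {
    float x = 27.0f;
    float y = inv_cbrt(x);
    printf("inv_cbrt(%g) ~= %f, so cbrt(%g) ~= %f\n", x, y, x, 1.0f / y);
    return 0;
}

The division of the integer bit pattern by three is what distinguishes this from the better-known inverse-square-root hack, where the biased exponent is halved by a right shift instead.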
Manticore: A 4096-Core RISC-V Chiplet Architecture for Ultraefficient Floating-Point Computing [PDF]
Data-parallel problems demand ever-growing floating-point (FP) operations per second under tight area- and energy-efficiency constraints. In this work, we present Manticore, a general-purpose, ultraefficient chiplet-based architecture for data-parallel ...
Florian Zaruba +2 more
semanticscholar +1 more source
Exploiting Verified Neural Networks via Floating Point Numerical Error [PDF]
We show how to construct adversarial examples for neural networks with exactly verified robustness against $\ell_{\infty}$-bounded input perturbations by exploiting floating point error. We argue that any exact verification of real-valued neural networks ...
Kai Jia, M. Rinard
semanticscholar +1 more source
Optimistic Parallelization of Floating-Point Accumulation [PDF]
Floating-point arithmetic is notoriously non-associative due to its limited-precision representation, which demands that intermediate values be rounded to fit in the available precision.
André DeHon, Nachiket Kapre
core +4 more sources
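A small, self-contained C demonstration, not taken from the paper, of the non-associativity being described: rounding each intermediate sum to single precision makes the final result depend on accumulation order.

#include <stdio.h>

/* Rounding each intermediate sum to single precision makes accumulation
 * order-dependent (non-associative). */
int main(void) {
    float a = 1.0e8f, b = -1.0e8f, c = 1.0f;
    float left  = (a + b) + c;   /* (0) + 1 = 1 */
    float right = a + (b + c);   /* b + c rounds back to -1.0e8f, so the sum is 0 */
    printf("(a+b)+c = %g   a+(b+c) = %g\n", left, right);
    return 0;
}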
This article presents a methodology for designing an artificial neural network used as an equalizer for a binary signal. First, the system is modelled in floating point format using MATLAB.
Santiago T. Pérez Suárez +2 more
doaj +1 more source
DESIGN AND PERFORMANCE ANALYSIS OF TERNARY LOGIC BASED ALU USING DOUBLE PRECISION FLOATING POINT [PDF]
In digital circuits, particularly space signal applications, the detection/estimation of phase (angle) down to milli-degree resolution is challenging and involves many complex operations.
Nagarathna R, A R Aswatha
doaj +1 more source
Instruction Fetch Policy for SMT Processors with Different Allocations of Floating-point and Integer Resources [PDF]
In Simultaneous Multithreading (SMT) processors, different threads have different demands for floating-point and integer resources. How shared resources are allocated among threads is key to improving the overall performance of SMT processors. Aiming at ...
JIANG Shengjian, HU Xiangdong, YANG Jianxin
doaj +1 more source
Unbiased Rounding for HUB Floating-point Addition [PDF]
Half-Unit-Biased (HUB) is an emerging format based on shifting the represented numbers by half a unit in the last place (ULP). (doi: 10.1109/TC.2018.2807429)
Gonzalez-Navarro, Sonia +2 more
core +1 more source
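A minimal numeric sketch of the stated idea, under the assumption that a HUB significand carries an implicit '1' appended after its stored fraction bits, which offsets each represented value by half a ULP relative to the conventional reading.

#include <stdio.h>

/* Tiny numeric illustration, assuming the HUB reading given in the entry: an
 * implicit '1' is appended after the stored fraction bits, so the represented
 * value sits half a ULP above the conventional one. */
int main(void) {
    unsigned stored_bits = 0xA;                       /* 4 stored fraction bits: 1010 */
    double conventional  = 1.0 + stored_bits / 16.0;  /* 1.1010 binary = 1.625000 */
    double ulp           = 1.0 / 16.0;                /* weight of the last stored bit */
    double hub_value     = conventional + ulp / 2.0;  /* implicit trailing 1 adds half a ULP */
    printf("conventional = %.6f, HUB-represented = %.6f\n", conventional, hub_value);
    return 0;
}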

