PTQD: Accurate Post-Training Quantization for Diffusion Models [PDF]
Diffusion models have recently dominated image synthesis tasks. However, the iterative denoising process is computationally expensive at inference time, making diffusion models less practical for low-latency and scalable real-world applications.
Yefei He+5 more
semanticscholar +1 more source
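As a rough illustration of what post-training quantization of a weight tensor involves in general (not PTQD's specific correction of quantization noise in the denoising steps), here is a minimal symmetric int8 quantize/dequantize sketch in NumPy:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor post-training quantization to int8.

    Returns the quantized weights and the scale needed to dequantize.
    Illustrative only; PTQD additionally models the quantization noise
    injected into the diffusion denoising process.
    """
    scale = np.abs(w).max() / 127.0          # map the largest magnitude to 127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(0.0, 0.05, size=(256, 256)).astype(np.float32)
    q, s = quantize_int8(w)
    err = np.abs(w - dequantize(q, s)).mean()
    print(f"scale={s:.3e}, mean |error|={err:.3e}")
```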
The paper presents a technique for estimating the information parameters of the quantization noise generated by the analog-to-digital conversion of the measuring signal. An experiment and algorithm descriptions are presented to confirm the correctness of ...
V. K. Zheleznyak+2 more
doaj +1 more source
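The abstract is truncated, but the textbook model behind such analyses treats the error of an ideal uniform ADC as uniformly distributed over one quantization step Δ, giving a noise variance of Δ²/12. The sketch below (my own illustration, not the paper's estimation technique) checks that prediction empirically:

```python
import numpy as np

# Quantization noise of an ideal uniform ADC: the error is modeled as
# uniform on [-Delta/2, Delta/2], so its variance is Delta**2 / 12.
rng = np.random.default_rng(1)
n_bits = 10
full_scale = 2.0                       # input spans [-1, 1)
delta = full_scale / 2**n_bits         # quantization step

x = rng.uniform(-1.0, 1.0, size=200_000)          # "measuring signal"
x_q = np.round(x / delta) * delta                 # mid-tread quantizer
noise = x_q - x

print(f"empirical noise variance : {noise.var():.3e}")
print(f"Delta^2 / 12 prediction  : {delta**2 / 12:.3e}")
```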
The digital representation of received radar signals has provided a wide range of opportunities for their processing. However, the hardware and software used impose limits on the number of bits and sampling rate of the signal at all ...
S. R. Heister, V. V. Kirichenko
doaj +1 more source
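The bit-depth limit mentioned here is usually summarized by the rule of thumb SQNR ≈ 6.02·N + 1.76 dB for a full-scale sinusoid quantized to N bits; a short numerical check (illustrative, not taken from the paper):

```python
import numpy as np

def sqnr_db(n_bits: int, n_samples: int = 200_000) -> float:
    """Empirical SQNR (dB) of a full-scale sine quantized to n_bits."""
    t = np.arange(n_samples)
    x = np.sin(2 * np.pi * 0.01234567 * t)        # non-coherent tone
    delta = 2.0 / 2**n_bits                       # step over [-1, 1)
    xq = np.round(x / delta) * delta              # uniform mid-tread quantizer
    noise = xq - x
    return 10 * np.log10(x.var() / noise.var())

for n in (8, 12, 16):
    print(f"{n:2d} bits: measured {sqnr_db(n):5.1f} dB, "
          f"rule of thumb {6.02 * n + 1.76:5.1f} dB")
```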
Computational Complexity Evaluation of Neural Network Applications in Signal Processing
In this paper, we provide a systematic approach for assessing and comparing the computational complexity of neural network layers in digital signal processing.
Pedro J. Freire+4 more
semanticscholar +1 more source
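One common ingredient of such an assessment is counting multiply-accumulate (MAC) operations per layer. The sketch below is a deliberately simplified version of that idea for dense and 1-D convolutional layers, not the authors' framework:

```python
def dense_macs(in_features: int, out_features: int) -> int:
    """Multiply-accumulates for a fully connected layer (bias ignored)."""
    return in_features * out_features

def conv1d_macs(in_ch: int, out_ch: int, kernel: int, out_len: int) -> int:
    """Multiply-accumulates for a 1-D convolution over one sequence."""
    return in_ch * out_ch * kernel * out_len

if __name__ == "__main__":
    # e.g. a small equalizer-style network operating on 1024-sample frames
    total = (conv1d_macs(in_ch=2, out_ch=32, kernel=11, out_len=1024)
             + dense_macs(32 * 1024, 128)
             + dense_macs(128, 2))
    print(f"~{total / 1e6:.2f} M MACs per frame")
```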
Speed up integer-arithmetic-only inference via bit-shifting. [PDF]
Quantization is a widely adopted technique in model deployment as it offers a favorable trade-off between computational overhead and performance loss.
Song M+5 more
europepmc +2 more sources
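The core trick behind integer-arithmetic-only inference of this kind is replacing a floating-point rescale factor with an integer multiplier followed by a bit shift. Below is a hedged sketch of that fixed-point approximation, not the paper's exact scheme:

```python
def to_multiplier_shift(scale: float, bits: int = 15):
    """Approximate a real scale in (0, 1) by (multiplier, shift) so that
    scale ~= multiplier / 2**shift and rescaling needs only integer ops."""
    assert 0.0 < scale < 1.0
    shift = bits
    multiplier = round(scale * (1 << shift))
    return multiplier, shift

def rescale(acc: int, multiplier: int, shift: int) -> int:
    """Integer-only rescale of an accumulator: multiply, add half for
    round-to-nearest, then shift right."""
    return (acc * multiplier + (1 << (shift - 1))) >> shift

if __name__ == "__main__":
    scale = 0.000731            # e.g. s_x * s_w / s_y in a quantized layer
    m, s = to_multiplier_shift(scale)
    acc = 123_456               # int32 accumulator from an int8 matmul
    print("float rescale :", round(acc * scale))
    print("shift rescale :", rescale(acc, m, s))
```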
Signal Processing Methods to Enhance the Energy Efficiency of In-Memory Computing Architectures
This paper presents signal processing methods to enhance the energy vs. accuracy trade-off of in-memory computing (IMC) architectures. First, an optimal clipping criterion (OCC) for signal quantization is proposed in order to minimize the precision of ...
Charbel Sakr, Naresh R Shanbhag
semanticscholar +1 more source
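The intuition behind a clipping criterion is that clipping the input before uniform quantization trades a small amount of clipping distortion for a finer step size over the retained range. The sketch below finds the MSE-minimizing clip level for a Gaussian input by brute force, rather than by the paper's closed-form OCC:

```python
import numpy as np

def clipped_quant_mse(x: np.ndarray, clip: float, n_bits: int) -> float:
    """MSE of clipping x to [-clip, clip] and quantizing uniformly."""
    delta = 2 * clip / (2**n_bits - 1)
    xq = np.round(np.clip(x, -clip, clip) / delta) * delta
    return float(np.mean((x - xq) ** 2))

rng = np.random.default_rng(2)
x = rng.normal(0.0, 1.0, size=100_000)      # unit-variance Gaussian signal
n_bits = 4

clips = np.linspace(0.5, 4.0, 36)
mses = [clipped_quant_mse(x, c, n_bits) for c in clips]
best = clips[int(np.argmin(mses))]
print(f"{n_bits}-bit quantizer: best clip level ~ {best:.2f} sigma")
```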
Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [PDF]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning, especially in bandwidth-limited settings and high-dimensional models.
Divyansh Jhunjhunwala+3 more
semanticscholar +1 more source
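A typical building block in this line of work is unbiased stochastic uniform quantization of the update vector, with the number of levels adapted across training rounds. The following is a generic quantizer sketch, not the paper's specific adaptation rule:

```python
import numpy as np

def stochastic_quantize(v: np.ndarray, levels: int, rng) -> np.ndarray:
    """Unbiased stochastic uniform quantization of an update vector.

    Each coordinate is rounded up or down to one of `levels` evenly spaced
    points in [min(v), max(v)], with probabilities chosen so the quantized
    vector equals v in expectation.
    """
    lo, hi = v.min(), v.max()
    if hi == lo:
        return v.copy()
    step = (hi - lo) / (levels - 1)
    pos = (v - lo) / step                      # continuous level index
    floor = np.floor(pos)
    prob_up = pos - floor                      # P(round up) = fractional part
    q_idx = floor + (rng.random(v.shape) < prob_up)
    return lo + q_idx * step

rng = np.random.default_rng(3)
update = rng.normal(0, 0.01, size=10_000)      # a client's model update
for levels in (4, 16, 256):                    # coarser early, finer later
    q = stochastic_quantize(update, levels, rng)
    print(f"{levels:3d} levels: MSE = {np.mean((q - update)**2):.2e}")
```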
UNO: Unlimited Sampling Meets One-Bit Quantization [PDF]
Recent results in one-bit sampling provide a framework for relatively low-cost, low-power sampling at high rates by employing time-varying sampling threshold sequences.
Arian Eamaz+3 more
semanticscholar +1 more source
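The mechanism can be pictured as comparing the signal against a known, time-varying threshold sequence and keeping only the sign of the comparison. The toy sketch below (illustrative only, not UNO's unlimited-sampling reconstruction) recovers a constant amplitude from such one-bit measurements taken against uniformly distributed thresholds:

```python
import numpy as np

rng = np.random.default_rng(4)

a = 0.37                                   # unknown amplitude to recover
T = 1.0                                    # threshold range, must exceed |a|
n = 50_000                                 # number of one-bit measurements

thresholds = rng.uniform(-T, T, size=n)    # known time-varying thresholds
bits = np.sign(a - thresholds)             # one-bit samples: +1 / -1

# With uniform thresholds, P(bit = +1) = (a + T) / (2T); invert that relation.
a_hat = 2 * T * np.mean(bits > 0) - T
print(f"true a = {a:.3f}, estimate from {n} one-bit samples = {a_hat:.3f}")
```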
Quantization error for weak RF simultaneous signal estimation
In a congested signal environment, it is difficult to obtain estimates of weak RF signal parameters. Determining signal parameter estimates in real time is a challenge for electronic warfare receivers that aim to receive multiple simultaneous signals ...
Mary Y. Lanzerotti+2 more
doaj +1 more source
UVeQFed: Universal Vector Quantization for Federated Learning [PDF]
Traditional deep learning models are trained at a centralized server using data samples collected from users. Such data samples often include private information, which the users may not be willing to share.
Nir Shlezinger+4 more
semanticscholar +1 more source
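Vector quantization, the primitive named in the title, maps each block of the model update to the nearest codeword of a shared codebook so that only indices need to be transmitted. The sketch below is a plain nearest-codeword quantizer for illustration; UVeQFed itself builds on universal quantization techniques rather than this simple scheme:

```python
import numpy as np

def vq_encode(blocks: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Map each block (row) of `blocks` to the index of its nearest codeword."""
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)

def vq_decode(indices: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    return codebook[indices]

rng = np.random.default_rng(5)
blocks = rng.normal(0, 0.01, size=(4096, 2))      # model update split into 2-D blocks
codebook = rng.normal(0, 0.01, size=(16, 2))      # 16 codewords -> 4 bits per block

idx = vq_encode(blocks, codebook)
recon = vq_decode(idx, codebook)
print(f"bits per coordinate: {np.log2(len(codebook)) / blocks.shape[1]:.1f}")
print(f"reconstruction MSE : {np.mean((blocks - recon) ** 2):.2e}")
```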