Results 41 to 50 of about 249,885
Data-Free Quantization Through Weight Equalization and Bias Correction [PDF]
We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks.
Markus Nagel +3 more
semanticscholar +1 more source
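The core trick behind this paper is cross-layer equalization: since ReLU(s·z) = s·ReLU(z) for s > 0, per-channel scales can be moved between adjacent layers without changing the network function, while making the weight ranges friendlier to quantization. A minimal numpy sketch for a pair of fully connected layers, with illustrative names (the paper also handles convolutions and a bias-correction step, omitted here):

```python
import numpy as np

def equalize_pair(W1, b1, W2):
    """Equalize per-channel weight ranges of two layers joined by a ReLU.

    Uses positive scaling equivariance, ReLU(s*z) = s*ReLU(z) for s > 0:
    W2 @ relu(W1 @ x + b1) == (W2 * s) @ relu((W1 / s) @ x + b1 / s),
    so the function is unchanged while the two ranges are matched.
    Shapes: W1 (out1, in1), b1 (out1,), W2 (out2, out1).
    """
    r1 = np.abs(W1).max(axis=1)   # range of each output channel of layer 1
    r2 = np.abs(W2).max(axis=0)   # range of each input channel of layer 2
    s = np.sqrt(r1 * r2) / r2     # per-channel scale equalizing r1/s and r2*s
    return W1 / s[:, None], b1 / s, W2 * s[None, :]
```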
Differentiable Soft Quantization: Bridging Full-Precision and Low-Bit Neural Networks [PDF]
Hardware-friendly network quantization (e.g., binary/uniform quantization) can efficiently accelerate inference and reduce the memory consumption of deep neural networks, which is crucial for model deployment on resource-limited devices like …
Ruihao Gong +7 more
semanticscholar +1 more source
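The snippet cuts off, but the premise is the standard simulated uniform quantizer, whose rounding step has zero gradient almost everywhere; DSQ's contribution is a differentiable, tanh-shaped surrogate for that step. A hedged PyTorch sketch of both ingredients, a simplification rather than the paper's exact soft function:

```python
import math
import torch

def uniform_quantize(x, bits=4, lo=-1.0, hi=1.0):
    """Hard simulated uniform quantization: clamp, round to one of
    2**bits - 1 steps, dequantize. round() blocks gradients."""
    scale = (hi - lo) / (2 ** bits - 1)
    return torch.round((x.clamp(lo, hi) - lo) / scale) * scale + lo

def soft_round(x, k=10.0):
    """Tanh-based differentiable surrogate for round(): pushes each value
    toward its nearest integer and approaches hard rounding as k grows
    (in the spirit of DSQ's soft quantization function)."""
    base = torch.floor(x)
    frac = x - base
    return base + 0.5 * (torch.tanh(k * (frac - 0.5)) / math.tanh(k / 2) + 1.0)
```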
Q-BERT: Hessian Based Ultra Low Precision Quantization of BERT [PDF]
Transformer-based architectures have become the de facto models for a range of Natural Language Processing tasks. In particular, BERT-based models have achieved significant accuracy gains on GLUE tasks, CoNLL-03 and SQuAD. However, BERT-based models have …
Sheng Shen +7 more
semanticscholar +1 more source
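Behind the truncated snippet, Q-BERT's key signal is second-order sensitivity: layers whose loss Hessian has large top eigenvalues stay at higher precision. Those eigenvalues can be estimated matrix-free from Hessian-vector products; a hedged PyTorch power-iteration sketch (the function name is mine, not the paper's API):

```python
import torch

def top_hessian_eigenvalue(loss, params, iters=20):
    """Largest Hessian eigenvalue of `loss` w.r.t. `params` via power
    iteration on Hessian-vector products (no explicit Hessian is built).
    In a Q-BERT-style scheme, a layer with a larger eigenvalue is more
    sensitive and would be assigned more bits."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]
    eig = 0.0
    for _ in range(iters):
        norm = torch.sqrt(sum((x * x).sum() for x in v))
        v = [x / norm for x in v]
        gv = sum((g * x).sum() for g, x in zip(grads, v))        # grad . v
        hv = torch.autograd.grad(gv, params, retain_graph=True)  # H v
        eig = sum((h * x).sum() for h, x in zip(hv, v)).item()   # Rayleigh quotient
        v = [h.detach() for h in hv]
    return eig
```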
Composite Quantization
This paper studies the compact coding approach to approximate nearest neighbor search. We introduce a composite quantization framework that uses the composition of several ($M$) elements, each selected from a different dictionary, to accurately approximate a $D$-dimensional vector, thus yielding accurate search, and represents the data ...
Jingdong Wang, Ting Zhang
openaire +3 more sources
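In symbols, composite quantization approximates x ≈ c_{1,k_1} + … + c_{M,k_M}, one codeword from each of the M dictionaries, so a vector is stored as M indices. A toy numpy sketch with random (untrained) dictionaries and greedy coordinate-descent encoding; the paper additionally learns the dictionaries and constrains the inter-dictionary cross terms so distances can be computed from lookup tables, which this omits:

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 16, 4, 256                    # vector dim, #dictionaries, codewords each
C = rng.normal(size=(M, K, D))          # M dictionaries of K codewords

def encode(x, C, sweeps=3):
    """Pick one codeword per dictionary so their SUM approximates x,
    cycling through the dictionaries and re-optimizing each index."""
    M, K, D = C.shape
    idx = np.zeros(M, dtype=int)
    for _ in range(sweeps):
        for m in range(M):
            rest = sum(C[j, idx[j]] for j in range(M) if j != m)
            idx[m] = np.argmin(((C[m] - (x - rest)) ** 2).sum(axis=1))
    return idx

x = rng.normal(size=D)
x_hat = sum(C[m, i] for m, i in enumerate(encode(x, C)))  # composite approximation
```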
Quantization of Midisuperspace Models [PDF]
To appear in Living Reviews in ...
Barbero González, Jesús Fernando +1 more
openaire +8 more sources
To quantize or not to quantize gravity?
It is shown here that the Standard Model (SM) of particle physics supports the view that gravity need not be quantized, and that the SM gives a consistent description of the origin of the universe. It is suggested that the universe came into existence when the SM symmetry was broken spontaneously. This brings out a complete and consistent model …
openaire +2 more sources
Graph Quantization
Vector quantization (VQ) is a lossy data compression technique from signal processing which is restricted to feature vectors and therefore inapplicable to combinatorial structures. This contribution presents a theoretical foundation of graph quantization (GQ) that extends VQ to the domain of attributed graphs.
Klaus Obermayer, Brijnesh J. Jain
openaire +3 more sources
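For reference, the vector-quantization step that GQ lifts to graphs is just nearest-codeword assignment under Euclidean distance; GQ replaces the vectors with attributed graphs and the distance with a graph metric. A minimal numpy sketch of the classic VQ side (the codebook is assumed given, e.g. from k-means):

```python
import numpy as np

def vq_assign(X, codebook):
    """Map each row of X (n, d) to the index of its nearest codeword in
    `codebook` (k, d); the compressed representation is these indices."""
    d2 = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d2.argmin(axis=1)
```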
From Quantized DNNs to Quantizable DNNs
This paper proposes Quantizable DNNs, a special type of DNN that can flexibly quantize its bit-width (denoted as 'bit modes' hereafter) during execution without further re-training. To simultaneously optimize for all bit modes, a combinational loss over all bit modes is proposed, which enforces consistent predictions ranging from low-bit mode to 32-bit …
Kunyuan Du, Ya Zhang, Haibing Guan
openaire +2 more sources
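A plausible reading of that combinational loss, as a hedged PyTorch sketch: sum the task loss over every bit mode and add a consistency term pulling each low-bit output toward the full-precision one. The `model(x, bits=...)` interface is hypothetical, standing in for a forward pass at the given bit width, and the paper's exact consistency term may differ:

```python
import torch.nn.functional as F

def combinational_loss(model, x, y, bit_modes=(2, 4, 8, 32)):
    """Joint loss over all bit modes of a Quantizable DNN.
    `model(x, bits=b)` (hypothetical) runs the network with weights and
    activations quantized to b bits; bits=32 is full precision."""
    logits_fp = model(x, bits=32)
    loss = F.cross_entropy(logits_fp, y)
    for b in bit_modes[:-1]:
        logits = model(x, bits=b)
        loss = loss + F.cross_entropy(logits, y)
        # consistency: low-bit predictions should track the 32-bit ones
        loss = loss + F.kl_div(F.log_softmax(logits, dim=-1),
                               F.softmax(logits_fp.detach(), dim=-1),
                               reduction="batchmean")
    return loss
```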
In this paper we will analyse the creation of the multiverse. We will first calculate the wave function for the multiverse using third quantization. Then we will fourth quantize this theory. We will show that there is no single vacuum state for this theory. Thus, we can end up with a multiverse, even after starting from a vacuum state.
openaire +2 more sources
We study the connection between the Eynard-Orantin topological recursion and quantum curves for the family of genus one spectral curves given by the Weierstrass equation. We construct quantizations of the spectral curve that annihilate the perturbative and non-perturbative wave-functions. In particular, for the non-perturbative wave-function, we prove …
Vincent Bouchard +2 more
openaire +3 more sources
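For orientation, the genus one family in question is presumably the Weierstrass cubic, and a quantum curve is an ℏ-deformation of that polynomial relation, realized as a differential operator annihilating the recursion's wave-function. A schematic LaTeX sketch; the O(ℏ) corrections, where the paper's actual content lies, are left unspecified:

```latex
% Weierstrass family of genus one spectral curves:
\[ y^{2} = 4x^{3} - g_{2}\,x - g_{3} . \]
% Schematic quantum curve: substitute y -> \hbar\,\mathrm{d}/\mathrm{d}x
% (x acts by multiplication) to get a differential operator that
% annihilates the wave-function \psi built from the topological recursion:
\[
  \Bigl( \hbar^{2}\frac{\mathrm{d}^{2}}{\mathrm{d}x^{2}}
         - 4x^{3} + g_{2}\,x + g_{3} + O(\hbar) \Bigr)\,\psi(x;\hbar) = 0 .
\]
```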