Results 31 to 40 of about 249,885 (208)
The Quantization of Gravity: Quantization of the Hamilton Equations [PDF]
We quantize the Hamilton equations instead of the Hamilton condition. The resulting equation has the simple form −Δu=0 in a fiber bundle, where the Laplacian is the Laplacian of the Wheeler–DeWitt metric provided n≠4. Then, using separation of variables, the solutions u can be expressed as products of temporal and spatial eigenfunctions, where the ...
openaire +5 more sources
A dislocation, just like a phonon, is a type of atomic lattice displacement but subject to an extra topological constraint. However, unlike the phonon which has been quantized for decades, the dislocation has long remained classical. This article is a comprehensive review of the recent progress on quantized dislocations, aka the "dislon" theory.
openaire +4 more sources
Quantization Dimension via Quantization Numbers [PDF]
Kesseböhmer, Marc, Zhu, Sanguo
openaire +2 more sources
We propose a method for quantization of Lagrangians for which the Hamiltonian, as a function of momentum, is a branched function with cusps. Appropriate boundary conditions, which we identify, insure unitary time evolution. In special cases a dual (canonical) transformation maps the problem into a problem of quantum mechanics on singular spatial ...
Shapere, Alfred, Wilczek, Frank
openaire +5 more sources
There is no “first” quantization [PDF]
The introduction of spinor and other massive fields by "quantizing" particles (corpuscles) is conceptually misleading. Only spatial fields must be postulated to form the fundamental objects to be quantized (that is, to define a formal basis for all quantum states), while apparent "particles" are a mere consequence of decoherence. This conclusion is ...
openaire +3 more sources
HAQ: Hardware-Aware Automated Quantization With Mixed Precision [PDF]
Model quantization is a widely used technique to compress and accelerate deep neural network (DNN) inference. Emerging DNN hardware accelerators have begun to support mixed precision (1-8 bits) to further improve the computation efficiency, which raises a ...
Kuan Wang+4 more
semanticscholar +1 more source
HAWQ: Hessian AWare Quantization of Neural Networks With Mixed-Precision [PDF]
Model size and inference speed/power have become a major challenge in the deployment of neural networks for many applications. A promising approach to address these problems is quantization.
Zhen Dong+4 more
semanticscholar +1 more source
Quantized Feature Distillation for Network Quantization
Neural network quantization aims to accelerate and trim full-precision neural network models by using low bit approximations. Methods adopting the quantization aware training (QAT) paradigm have recently seen a rapid growth, but are often conceptually complicated.
Zhu, Ke, He, Yin-Yin, Wu, Jianxin
openaire +2 more sources