Results 291 to 300 of about 1,866,673
Some of the following articles may not be open access.

Vector quantization: a review

Frontiers of Information Technology & Electronic Engineering, 2019
Vector quantization (VQ) is a very effective way to save bandwidth and storage for speech coding and image coding. Traditional vector quantization methods can be divided into seven main types (tree-structured VQ, direct sum VQ, Cartesian product VQ, lattice VQ, classified VQ, feedback VQ, and fuzzy VQ), according to their codebook generation ...
Ze-bin Wu, Jun-qing Yu
openaire   +3 more sources
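
The abstract above describes codebook-based quantization in general terms; a minimal full-search VQ sketch in NumPy, with a random codebook standing in for one trained with Lloyd/LBG, makes the bandwidth and storage saving concrete. All sizes are illustrative assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(0)
dim, codebook_size = 8, 64                 # illustrative sizes

codebook = rng.standard_normal((codebook_size, dim))  # K codewords of dimension d
signal = rng.standard_normal((1000, dim))             # vectors to quantize

# Encode: index of the nearest codeword under squared Euclidean distance.
dists = ((signal[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K)
indices = dists.argmin(axis=1)

# Decode: look the codewords back up. Only the indices need to be stored:
# log2(64) = 6 bits per 8-dimensional vector instead of eight floats.
reconstruction = codebook[indices]
mse = ((signal - reconstruction) ** 2).mean()
print(f"bits/vector: {np.log2(codebook_size):.0f}, MSE: {mse:.3f}")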

Compact3D: Compressing Gaussian Splat Radiance Field Models with Vector Quantization

arXiv.org, 2023
3D Gaussian Splatting is a new method for modeling and rendering 3D radiance fields that achieves much faster learning and rendering time compared to SOTA NeRF methods. However, it comes with the drawback of a much larger storage demand compared to NeRF.
K. Navaneet   +3 more
semanticscholar   +1 more source
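
A hedged sketch of the core idea in the abstract above: cluster per-Gaussian parameter vectors with k-means and store one shared codebook plus a small index per Gaussian. The attribute dimensionality, codebook size, and the choice to quantize all attributes jointly are assumptions for illustration, not the paper's exact pipeline.

import numpy as np
from scipy.cluster.vq import kmeans2

rng = np.random.default_rng(0)
n_gaussians, attr_dim, k = 20_000, 48, 256     # hypothetical sizes

attrs = rng.standard_normal((n_gaussians, attr_dim)).astype(np.float32)

# k-means learns the shared codebook; `labels` holds one index per Gaussian.
codebook, labels = kmeans2(attrs, k, minit="points")

raw_bytes = attrs.nbytes
compressed_bytes = codebook.nbytes + labels.astype(np.uint16).nbytes
print(f"compression ratio: {raw_bytes / compressed_bytes:.1f}x")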

CompGS: Smaller and Faster Gaussian Splatting with Vector Quantization

European Conference on Computer Vision, 2023
3D Gaussian Splatting (3DGS) is a new method for modeling and rendering 3D radiance fields that achieves much faster learning and rendering time compared to SOTA NeRF methods.
K. Navaneet   +3 more
semanticscholar   +1 more source

Two-stage vector quantization-lattice vector quantization

IEEE Transactions on Information Theory, 1995
A two-stage vector quantizer is introduced that uses an unstructured first-stage codebook and a second-stage lattice codebook. Joint optimum two-stage encoding is accomplished by exhaustive search of the parent codebook of the two-stage product code.
Thomas R. Fischer, Jianping Pan
openaire   +2 more sources
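
A minimal sketch of the two-stage structure: a small unstructured first-stage codebook captures the coarse shape of the input, and the residual is quantized on a lattice, here the scaled integer lattice Z^n (the simplest choice, where nearest-point search is coordinate-wise rounding). Note the paper's jointly optimal encoder searches the full two-stage product code; this sketch encodes the stages greedily.

import numpy as np

rng = np.random.default_rng(1)
dim, k1, step = 4, 32, 0.25                   # illustrative parameters

stage1 = rng.standard_normal((k1, dim))       # unstructured first-stage codebook
x = rng.standard_normal(dim)

# Stage 1: full search over the unstructured codebook.
i = (((x - stage1) ** 2).sum(-1)).argmin()

# Stage 2: nearest point of the scaled Z^n lattice to the residual,
# i.e. coordinate-wise rounding.
residual = x - stage1[i]
lattice_point = np.round(residual / step) * step

x_hat = stage1[i] + lattice_point
print("max error:", np.abs(x - x_hat).max())  # at most step/2 per coordinate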

One-Shot Voice Conversion by Vector Quantization

IEEE International Conference on Acoustics, Speech, and Signal Processing, 2020
In this paper, we propose a vector quantization (VQ) based one-shot voice conversion (VC) approach without any supervision on speaker labels. We model the content embedding as a series of discrete codes and take the difference between quantize-before and ...
Da-Yi Wu, Hung-yi Lee
semanticscholar   +1 more source
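
A hedged sketch of the decomposition the abstract describes: quantize encoder outputs against a codebook, treat the discrete codes as speaker-independent content, and treat the difference between the vectors before and after quantization as speaker information. The encoder and decoder networks are omitted, and all shapes are placeholders.

import numpy as np

rng = np.random.default_rng(2)
T, d, K = 100, 16, 32                      # frames, feature dim, codebook size

codebook = rng.standard_normal((K, d))
features = rng.standard_normal((T, d))     # stand-in for content encoder outputs

codes = (((features[:, None, :] - codebook[None]) ** 2).sum(-1)).argmin(1)
content = codebook[codes]                  # discrete content representation

# Speaker embedding: the quantize-before minus quantize-after residual,
# averaged over time to give one vector per utterance.
speaker_embedding = (features - content).mean(axis=0)
print(speaker_embedding.shape)             # (d,)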

Soft Learning Vector Quantization

Neural Computation, 2003
Learning vector quantization (LVQ) is a popular class of adaptive nearest prototype classifiers for multiclass classification, but learning algorithms from this family have so far been proposed on heuristic grounds. Here, we take a more principled approach and derive two variants of LVQ using a Gaussian mixture ansatz. We propose an objective function ...
Klaus Obermayer, Sambu Seo
openaire   +3 more sources
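
A hedged sketch of the Gaussian mixture view: each class is modeled as a mixture of isotropic Gaussians centered on its prototypes, assignments become soft responsibilities instead of a hard nearest-prototype rule, and prototypes move along the gradient of a likelihood-ratio objective. This simplified single-sample update is an illustration, not either of the paper's exact variants.

import numpy as np

def soft_lvq_step(protos, proto_labels, x, y, sigma=1.0, lr=0.1):
    """One soft LVQ update for sample x with class label y."""
    d2 = ((protos - x) ** 2).sum(-1)
    g = np.exp(-d2 / (2 * sigma ** 2))              # unnormalized Gaussian weights
    same, diff = proto_labels == y, proto_labels != y
    r_same = g * same / max(g[same].sum(), 1e-12)   # responsibilities, own class
    r_diff = g * diff / max(g[diff].sum(), 1e-12)   # responsibilities, other classes
    # Attract same-class prototypes, repel other-class prototypes.
    protos += lr * (r_same - r_diff)[:, None] * (x - protos)
    return protos

protos = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
labels = np.array([0, 0, 1])
protos = soft_lvq_step(protos, labels, x=np.array([1.5, 0.5]), y=1)
print(protos)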

Autoregressive Image Generation without Vector Quantization

Neural Information Processing Systems
Conventional wisdom holds that autoregressive models for image generation are typically accompanied by vector-quantized tokens. We observe that while a discrete-valued space can facilitate representing a categorical distribution, it is not a necessity ...
Tianhong Li   +4 more
semanticscholar   +1 more source
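
The abstract's structural point is that the categorical (codebook) output layer is not essential to autoregressive image generation; what matters is modeling each token's conditional distribution, which can be continuous. In the sketch below, a linear next-token predictor trained with plain MSE, i.e. a fixed-variance Gaussian head, stands in for the paper's per-token diffusion loss; every size and the architecture are placeholders.

import numpy as np

rng = np.random.default_rng(3)
T, d = 16, 8                                 # sequence length, token dimension

tokens = rng.standard_normal((T, d))         # continuous image tokens, no VQ
W = rng.standard_normal((d, d)) * 0.1        # toy next-token predictor

# Teacher-forced training loss: continuous regression replaces the
# categorical cross-entropy over a discrete codebook.
pred = tokens[:-1] @ W
loss = ((pred - tokens[1:]) ** 2).mean()
print("continuous AR training loss (MSE):", round(float(loss), 3))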

Vector quantization-lattice vector quantization of speech LPC coefficients

Proceedings of ICASSP '94, IEEE International Conference on Acoustics, Speech and Signal Processing, 1994
Two-stage vector quantization-lattice vector quantization (VQ-LVQ) is used to encode the speech line spectrum pair (LSP) parameters. VQ-LVQ has the advantages of lower implementational complexity and less required memory than split vector quantization (SVQ) and multi-stage vector quantization (MSVQ) with unstructured codebooks.
Thomas R. Fischer, Jianping Pan
openaire   +2 more sources

Autoregressive Speech Synthesis without Vector Quantization

Annual Meeting of the Association for Computational Linguistics
We present MELLE, a novel continuous-valued token based language modeling approach for text-to-speech synthesis (TTS). MELLE autoregressively generates continuous mel-spectrogram frames directly from text condition, bypassing the need for vector ...
Lingwei Meng   +11 more
semanticscholar   +1 more source

On the training distortion of vector quantizers [PDF]

IEEE Transactions on Information Theory, 2000 (open access: possible)
The in-training-set performance of a vector quantizer as a function of its training set size is investigated. For squared error distortion and independent training data, worst case type upper bounds are derived on the minimum training distortion achieved by an empirically optimal quantizer.
openaire   +1 more source
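
A small numeric illustration of the quantity studied above: the in-training-set distortion of an empirically optimal quantizer as a function of training set size, with k-means standing in for the empirically optimal quantizer. Small training sets are fit more tightly, so training distortion climbs toward the true optimum as n grows, which is what the worst-case bounds control.

import numpy as np
from scipy.cluster.vq import kmeans2, vq

rng = np.random.default_rng(4)
k, dim = 16, 2

for n in (64, 256, 1024, 4096):
    train = rng.standard_normal((n, dim))
    codebook, _ = kmeans2(train, k, iter=50, minit="points")
    _, dists = vq(train, codebook)          # distance to the nearest codeword
    print(f"n={n:5d}  training distortion={np.mean(dists ** 2):.4f}")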
