Results 261 to 270 of about 57,535 (295)
Some of the following articles may not be open access.

Order statistics learning vector quantizer

IEEE Transactions on Image Processing, 1996
We propose a novel class of learning vector quantizers (LVQs) based on multivariate data ordering principles. A special case of the novel LVQ class is the median LVQ, which uses either the marginal median or the vector median as a multivariate estimator of location.
Pitas, I. et al.
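The abstract above replaces the usual mean-based prototype estimate with a multivariate median. As a minimal sketch of the marginal-median idea (the paper's iterative, windowed update is not shown in the snippet, so the one-shot per-class placement below is an illustrative assumption):

```python
import numpy as np

def marginal_median(points):
    """Component-wise (marginal) median of a set of vectors."""
    return np.median(points, axis=0)

def median_lvq_prototypes(X, y, n_classes):
    """Place one prototype per class at the marginal median of its samples.

    Illustrative sketch only: median LVQ proper updates prototypes
    iteratively; here we just show the robust multivariate location
    estimator that replaces the usual running mean.
    """
    return np.array([marginal_median(X[y == c]) for c in range(n_classes)])
```

Because the marginal median ignores outlying coordinates, a single corrupted sample barely moves the prototype, which is the robustness argument behind the median LVQ.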

Improving Dynamic Learning Vector Quantization

18th International Conference on Pattern Recognition (ICPR'06), 2006
We introduce some improvements to the Dynamic Learning Vector Quantization algorithm we proposed previously, tackling the two major problems of these networks: neuron over-splitting and their distribution in the feature space. We suggest explicitly estimating the potential improvement in recognition rate achievable by splitting neurons in those ...
De Stefano, Claudio et al.

A Median Variant of Generalized Learning Vector Quantization [PDF]

open access: possible, 2013
We introduce a median variant of the Generalized Learning Vector Quantization (GLVQ) algorithm. Thus, GLVQ can be used for classification problem learning in which only dissimilarity information between the objects to be classified is available. For this purpose, the cost function of GLVQ is reformulated as a probabilistic model such that a generalized
Nebel, David et al.

Convergence of the Vectors in Kohonen’s Learning Vector Quantization

1990
Kohonen’s Learning Vector Quantization is a nonparametric classification scheme which classifies observations by comparing them to k templates called Voronoi vectors. The locations of these vectors are determined from past labeled data through a learning algorithm.
Anthony LaVigna, John S. Baras
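The LaVigna–Baras abstract describes the scheme under study: classify by the nearest of k Voronoi vectors, with vector locations learned from labeled data. A minimal sketch of Kohonen's LVQ1 rule as commonly stated (the fixed learning rate below is an assumption; schedules vary):

```python
import numpy as np

def lvq1_step(prototypes, labels, x, y, lr=0.1):
    """One LVQ1 update: find the nearest Voronoi vector and move it
    toward x if its label matches y, away from x otherwise.
    The constant learning rate is illustrative; in practice lr decays."""
    d = np.linalg.norm(prototypes - x, axis=1)
    i = int(np.argmin(d))
    sign = 1.0 if labels[i] == y else -1.0
    prototypes[i] += sign * lr * (x - prototypes[i])
    return i

def lvq_classify(prototypes, labels, x):
    """Classify x with the label of its nearest Voronoi vector."""
    return labels[int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))]
```

The convergence question the paper addresses is precisely whether repeated applications of `lvq1_step` drive the prototypes to stable locations.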

Federated Learning Vector Quantization

ESANN 2021 proceedings, 2021
Brinkrolf, Johannes et al.

Learning algorithms with boosting for vector quantization

2008 3rd International Symposium on Communications, Control and Signal Processing, 2008
Many learning algorithms for VQ based on the steepest-descent method have been proposed. However, no algorithm regarded as superior always works well. This paper proposes a new learning algorithm with boosting. Boosting is a general method that attempts to boost the accuracy of any given learning algorithm.
Hiromi Miyajima et al.

A fuzzy-soft learning vector quantization

Neurocomputing, 2003
This paper presents a batch competitive learning method called fuzzy-soft learning vector quantization (FSLVQ). The proposed FSLVQ is a batch-type clustering learning network that fuses batch learning, soft competition, and fuzzy membership functions. Comparisons between the well-known fuzzy LVQ and the proposed FSLVQ are made. In a
Kuo-Lung Wu, Miin-Shen Yang

Noise Fuzzy Learning Vector Quantization

Key Engineering Materials, 2010
Fuzzy learning vector quantization (FLVQ) benefits from using the membership values from fuzzy c-means (FCM) as learning rates, and it overcomes several problems of learning vector quantization (LVQ). However, FLVQ is sensitive to noise because it is an FCM-based algorithm (FCM itself is sensitive to noise).
Jie Wen Zhao, Bin Wu, Xiao Hong Wu

A fuzzy algorithm for learning vector quantization

Proceedings of IEEE International Conference on Systems, Man and Cybernetics, 2002
This paper proposes a fuzzy algorithm for learning vector quantization that can train feature maps to function as pattern classifiers through an unsupervised learning process. The development of the proposed algorithm is based on the minimization of a fuzzy objective function, formed as the weighted sum of the squared Euclidean distances between an ...
Pin-I Pai, Nicolaos B. Karayiannis
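The objective the Pai–Karayiannis abstract names, a weighted sum of squared Euclidean distances, can be sketched as follows. The snippet does not specify the membership form, so the standard fuzzy c-means choice is used as an assumption:

```python
import numpy as np

def fuzzy_memberships(X, V, m=2.0, eps=1e-12):
    """FCM-style memberships: u[k, i] is proportional to
    1 / d(x_k, v_i)^(2/(m-1)), with each row summing to 1.
    (Standard fuzzy-c-means form, assumed here; the paper's exact
    membership function is not given in the snippet.)"""
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + eps
    w = d2 ** (-1.0 / (m - 1.0))
    return w / w.sum(axis=1, keepdims=True)

def fuzzy_objective(X, V, m=2.0):
    """J = sum_k sum_i u_ki^m * ||x_k - v_i||^2, the weighted sum of
    squared Euclidean distances described in the abstract."""
    u = fuzzy_memberships(X, V, m)
    d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return float((u ** m * d2).sum())
```

Alternating between recomputing memberships and moving prototypes to membership-weighted means decreases this objective, which is the usual route to such unsupervised fuzzy LVQ updates.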

Learning vector quantization for the probabilistic neural network

IEEE Transactions on Neural Networks, 1991
A modified version of the PNN (probabilistic neural network) learning phase which allows a considerable simplification of network structure by including a vector quantization of learning data is proposed. It can be useful if large training sets are available. The procedure has been successfully tested in two synthetic data experiments.
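The abstract above describes quantizing the training data so the PNN's pattern layer holds a small codebook instead of every training sample. A rough sketch of that pipeline, using plain k-means as the quantizer and a Gaussian Parzen decision rule (both are illustrative assumptions; the paper's exact quantizer and kernel are not in the snippet):

```python
import numpy as np

def vq_codebook(X, k, iters=20, seed=0):
    """Quantize one class's training data with simple k-means.
    The resulting codebook stands in for the full training set in the
    PNN's pattern layer (sketch of the idea, not the paper's method)."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        a = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(a == j):
                C[j] = X[a == j].mean(axis=0)
    return C

def pnn_classify(codebooks, x, sigma=1.0):
    """Parzen-window PNN decision over per-class codebooks: pick the
    class whose summed Gaussian kernel response at x is largest."""
    scores = [np.exp(-((C - x) ** 2).sum(-1) / (2 * sigma ** 2)).sum()
              for C in codebooks]
    return int(np.argmax(scores))
```

With large training sets this trades a little kernel-density accuracy for a pattern layer whose size is fixed by `k` rather than by the number of training samples, which is the simplification the abstract points to.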
