
Learning vector quantization

Learning vector quantization (LVQ) is a family of supervised prototype-based algorithms designed for statistical classification, where codebook vectors (prototypes) are adjusted to represent class-conditional densities and define decision boundaries through a Voronoi partitioning of the input space. Introduced by Teuvo Kohonen in the late 1980s as an extension of unsupervised vector quantization techniques, LVQ employs a competitive learning rule to adapt prototypes by updating the "winner" prototype, the one closest to an input vector in the feature space, based on whether that input is correctly classified, thereby approximating optimal Bayesian decision borders for classification tasks. The foundational variant, LVQ1, uses a simple Hebbian-style update rule in which correctly classifying winners are pulled toward the input and misclassifying winners are pushed away, while subsequent algorithms such as LVQ2 and LVQ3 introduce pairwise updates between the nearest correct and incorrect prototypes to refine boundaries more precisely. Over the decades, LVQ has evolved into numerous advanced variants, including generalized LVQ (GLVQ) for margin maximization, robust soft LVQ (RSLVQ) for probabilistic modeling, and kernel-based extensions such as KGLVQ for non-linear separability, enhancing its applicability to complex datasets. These algorithms are noted for their interpretability, as prototypes directly represent learned class centers, and have demonstrated competitive performance against methods such as support vector machines in multi-class problems, particularly in domains such as image processing, biomedical data analysis, and fault detection in industrial systems.

Introduction

Definition and overview

Learning vector quantization (LVQ) is a family of prototype-based supervised algorithms designed to learn a set of reference vectors, or prototypes, that represent class regions in the input space for classification tasks. These prototypes serve as reference points to which input data vectors are assigned via nearest-neighbor matching, enabling classification by determining the class label of the closest prototype. Unlike purely unsupervised methods, LVQ leverages labeled training data to refine the prototypes, ensuring they approximate decision boundaries that minimize classification errors. At its core, LVQ treats prototypes as centroids for their respective classes, iteratively adjusting their positions to better capture the distribution of labeled samples. When a training vector is correctly classified by a prototype of the same class, the prototype is pulled slightly closer to the vector; conversely, if misclassified, the incorrect prototype is pushed away while the correct one may be attracted. This supervised adaptation enhances the separation of class-conditional densities, leading to more robust classifiers. LVQ builds on the principles of vector quantization (VQ), an unsupervised technique for data compression and clustering that maps input vectors to a finite set of representative code vectors in order to minimize reconstruction error. By incorporating class labels, LVQ transforms this unsupervised mapping into a discriminative process, improving the delineation between classes without requiring complex architectures such as multilayer neural networks. For illustration, consider a two-dimensional feature space with two classes separated by a nonlinear boundary. Initial prototypes are placed roughly in the class regions; during training, prototypes near correctly classified points of the same class move toward them, while prototypes on the "wrong" side of the boundary are repelled from misclassified points, gradually sharpening the decision surface and reducing class overlap.

Historical development

Learning vector quantization (LVQ) emerged in the late 1980s as a supervised extension of self-organizing maps (SOMs), developed by Finnish researcher Teuvo Kohonen to enable prototype-based classification in neural network architectures. Kohonen, a pioneer in associative memory and topographic mapping, introduced LVQ to address the limitations of unsupervised SOMs by incorporating class labels for discriminative tasks, marking a shift toward hybrid unsupervised-supervised paradigms in early neural computing. The method builds upon SOMs, which Kohonen had proposed in the early 1980s for topological feature mapping, but adapts them for the classification challenges prevalent in pattern recognition and speech analysis at the time. The foundational LVQ1 algorithm was first described in a 1986 technical report and detailed in Kohonen's 1988 publication in the journal Neural Networks, stemming from presentations at early neural network conferences. This work laid the groundwork for competitive learning with labeled prototypes, gaining traction through subsequent discussions in neural network research forums. Refinements followed swiftly, with LVQ2 and LVQ3 introduced in Kohonen's 1990 paper at the International Joint Conference on Neural Networks (IJCNN), enhancing boundary adjustment and stability for more robust performance. These developments, spanning 1986–1990, positioned LVQ as a key contribution amid the explosive growth of connectionist models during the neural network renaissance of that era. By the mid-1990s, LVQ had evolved from theoretical proposals to practical implementations, exemplified by the release of the LVQ-PAK software package in 1996, co-authored by Kohonen and collaborators at Helsinki University of Technology, which standardized application of the algorithms for research and engineering. Integration into mainstream toolboxes accelerated in the 2000s, with LVQ incorporated into MATLAB's Neural Network Toolbox around the early 2000s, facilitating widespread adoption in academic and industrial settings for tasks such as phoneme recognition. During this period, LVQ played a notable role in advancing prototype-based classifiers, offering interpretable alternatives to emerging methods such as support vector machines (introduced in 1995) and multilayer perceptrons, particularly in domains requiring few labeled examples and simple, piecewise-linear decision boundaries. Post-2010, LVQ has seen minor adaptations to interface with modern machine learning frameworks, such as recurrent variants for sequential data processing, yet its core principles of prototype learning and nearest-neighbor classification remain largely unchanged, underscoring its enduring relevance in an era dominated by end-to-end neural architectures. These updates, including generalized matrix LVQ extensions around 2015, reflect LVQ's niche influence in interpretable machine learning rather than widespread reinvention.

Mathematical foundations

Vector quantization principles

Vector quantization (VQ) is a classical technique in data compression and signal processing that maps high-dimensional input vectors from a continuous space to a finite set of discrete representative vectors, known as code vectors or prototypes, thereby approximating the distribution of the input data. This mapping enables efficient representation of large datasets by partitioning the input space into Voronoi regions, each associated with a prototype, which serves as the centroid of the data points assigned to it. The core process of VQ involves nearest-prototype matching, where each input is assigned to the closest prototype based on a distance measure, most commonly the squared Euclidean distance. This assignment is followed by iterative minimization of the quantization error to refine the prototypes and achieve a better approximation of the data distribution. The quantization error, which quantifies the overall distortion introduced by the mapping, is formally defined as E = \sum_{i=1}^N \| \mathbf{x}_i - \mathbf{w}_{c(i)} \|^2, where \mathbf{x}_i denotes the i-th input vector, \mathbf{w}_{c(i)} is the prototype closest to \mathbf{x}_i, c(i) is the index of that prototype, and N is the total number of input vectors. Training of vector quantizers is typically performed using algorithms such as the generalized Lloyd algorithm, which underlies methods like k-means clustering and emphasizes competitive learning dynamics. In this framework, prototypes compete to represent subsets of the data: during each iteration, a winning prototype is selected for an input vector based on proximity, and it is then updated by moving closer to that vector, gradually forming a codebook that minimizes the average distortion. This iterative, unsupervised process converges to a local minimum of the quantization error, providing a foundational partitioning of the data space without requiring labeled information. Despite its effectiveness in data approximation and compression, pure VQ operates in an unsupervised manner and lacks inherent awareness of class structure in the data, which can lead to suboptimal performance in classification tasks where decision boundaries need to align with categorical distinctions. This limitation motivates the extension to supervised variants like learning vector quantization, which incorporate class labels to refine prototype placements.
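
To make the codebook training concrete, the following is a minimal NumPy sketch of a generalized-Lloyd-style (k-means) vector quantizer, assuming squared Euclidean distortion; the function names and the random initialization from data points are illustrative choices rather than part of any reference VQ implementation.

```python
import numpy as np

def quantization_error(X, W):
    """Sum of squared distances from each input vector to its nearest prototype."""
    dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)  # pairwise squared distances
    return dists.min(axis=1).sum()

def lloyd_vq(X, n_prototypes=8, n_iter=50, seed=0):
    """Generalized-Lloyd / k-means style codebook training (unsupervised VQ)."""
    rng = np.random.default_rng(seed)
    # Initialize the codebook with randomly chosen data points.
    W = X[rng.choice(len(X), size=n_prototypes, replace=False)].astype(float)
    for _ in range(n_iter):
        dists = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
        winners = dists.argmin(axis=1)            # nearest-prototype assignment
        for j in range(n_prototypes):
            assigned = X[winners == j]
            if len(assigned) > 0:                 # move prototype to its region's centroid
                W[j] = assigned.mean(axis=0)
    return W
```

A codebook trained this way locally minimizes the average distortion E and is a natural starting point for the supervised, label-driven refinements described in the following sections.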

Supervised classification framework

In the supervised classification framework of learning vector quantization (LVQ), class labels from the training data are integrated into the quantization process to enable discriminative learning. Each prototype, or codebook vector, is explicitly assigned to one of the predefined classes, transforming the unsupervised clustering of the data into a labeled partitioning of the input space. An unseen input is then classified by assigning it to the class of its nearest prototype, determined via a distance metric such as the Euclidean distance, which effectively delineates class-conditional regions through a Voronoi tessellation. This labeled assignment ensures that prototypes represent not just data clusters but decision boundaries tailored to classification tasks. The core of the supervised adjustment mechanism involves iteratively refining prototype positions using labeled inputs to enhance the separation between classes. When an input is presented, the closest prototype (the winner) is updated conditionally on label agreement: if the input and the winner share the same class, the prototype is pulled toward the input to better capture intra-class variation; conversely, if the labels differ, the prototype is pushed away from the input to widen inter-class margins and sharpen boundaries. These attract-and-repel updates, driven by a diminishing learning rate, progressively adapt the prototypes to minimize overlap between class regions while preserving compactness within each class. As a result, the framework forms nonlinear decision boundaries in the feature space, composed of piecewise-linear hyperplanes midway between adjacent prototypes of opposing classes, allowing LVQ to model complex, non-convex distributions without assuming linear separability. The general update rule for the winning prototype \mathbf{w}_j at time step t, given input \mathbf{x}(t) and learning rate \alpha(t), incorporates this conditional direction via a sign factor s: \mathbf{w}_j(t+1) = \mathbf{w}_j(t) + s \cdot \alpha(t) \cdot \big( \mathbf{x}(t) - \mathbf{w}_j(t) \big) where s = +1 for same-class attraction and s = -1 for different-class repulsion. This rule contrasts directly with unsupervised vector quantization's sole focus on distortion minimization, as LVQ prioritizes classification accuracy over reconstruction error. Evaluation in this framework centers on the misclassification rate, which measures the proportion of inputs incorrectly assigned to a prototype's class and serves as the primary objective, unlike the mean squared quantization error in unsupervised settings. Prototypes are typically optimized until the error rate stabilizes, often assessed through hold-out validation or cross-validation to ensure generalization.
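
The nearest-prototype decision rule and the signed update above can be sketched in a few lines of NumPy. The helper names below (classify, signed_update) are illustrative, and the sketch assumes Euclidean distance, a floating-point prototype matrix W, and an array proto_labels aligned with its rows.

```python
import numpy as np

def classify(x, W, proto_labels):
    """Assign x the class label of its nearest prototype (Euclidean distance)."""
    j = np.argmin(((W - x) ** 2).sum(axis=1))
    return proto_labels[j]

def signed_update(x, x_label, W, proto_labels, alpha):
    """One step of the general LVQ rule above: attract the winning prototype
    if its label matches the input's label (s = +1), repel it otherwise (s = -1)."""
    j = np.argmin(((W - x) ** 2).sum(axis=1))
    s = 1.0 if proto_labels[j] == x_label else -1.0
    W[j] = W[j] + s * alpha * (x - W[j])
    return W
```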

Core algorithms

LVQ1 algorithm

The LVQ1 algorithm, the original supervised variant of learning vector quantization introduced by Teuvo Kohonen, employs an iterative training procedure based on winner-take-all selection of the nearest prototype to refine class prototypes using labeled training data. This simplest form adjusts a single winning prototype per input, attracting it toward same-class samples and repelling it from different-class samples, thereby sharpening decision boundaries without explicit optimization of a cost function. The algorithm begins with initialization of prototypes, typically one or more per class, placed in the input space using unsupervised methods such as k-means clustering to ensure reasonable starting positions that approximate the data distribution. Training proceeds sequentially through the labeled dataset over multiple epochs. For each input vector \mathbf{x}(t) with associated class label c(\mathbf{x}(t)), the winning prototype \mathbf{w}_j(t) is determined as the one minimizing the Euclidean distance \|\mathbf{x}(t) - \mathbf{w}_j(t)\|. If the winner's class c(\mathbf{w}_j(t)) matches c(\mathbf{x}(t)), the prototype is updated to move closer to the input via attraction: \mathbf{w}_j(t+1) = \mathbf{w}_j(t) + \alpha(t) \left( \mathbf{x}(t) - \mathbf{w}_j(t) \right) Conversely, if the classes mismatch, the prototype is repelled away from the input: \mathbf{w}_j(t+1) = \mathbf{w}_j(t) - \alpha(t) \left( \mathbf{x}(t) - \mathbf{w}_j(t) \right) All other prototypes remain unchanged in this step. Key parameters include the time-varying learning rate \alpha(t), which starts at an initial value (often around 0.1) and decreases monotonically, typically linearly or exponentially, to a small final value (e.g., 0.001) to promote stability as training progresses. Convergence is assessed empirically, often after 10–100 epochs, by monitoring the stability of classification error rates on held-out data, though the process can be sensitive to initialization and may require preprocessing for optimal results. This procedure directly incorporates supervision by leveraging class labels in the update decisions, extending the core vector quantization principles to classification tasks.
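
The following is a minimal sketch of the LVQ1 training loop, assuming random per-class initialization and a linearly decaying learning rate (both common but not mandated choices); the function name train_lvq1 and its defaults are illustrative.

```python
import numpy as np

def train_lvq1(X, y, n_per_class=1, alpha0=0.1, alpha_final=0.001,
               n_epochs=30, seed=0):
    """Minimal LVQ1: winner-take-all updates with a linearly decaying
    learning rate. Prototypes are initialized from random class samples."""
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    W, proto_labels = [], []
    for c in classes:  # initialize prototypes from samples of each class
        idx = rng.choice(np.where(y == c)[0], size=n_per_class, replace=False)
        W.append(X[idx])
        proto_labels += [c] * n_per_class
    W = np.vstack(W).astype(float)
    proto_labels = np.array(proto_labels)

    n_steps = n_epochs * len(X)
    t = 0
    for _ in range(n_epochs):
        for i in rng.permutation(len(X)):   # present samples in random order
            alpha = alpha0 + (alpha_final - alpha0) * t / n_steps  # linear decay
            j = np.argmin(((W - X[i]) ** 2).sum(axis=1))           # winner
            if proto_labels[j] == y[i]:
                W[j] += alpha * (X[i] - W[j])   # attract same-class winner
            else:
                W[j] -= alpha * (X[i] - W[j])   # repel different-class winner
            t += 1
    return W, proto_labels
```

Classification then reduces to the nearest-prototype rule shown earlier, applied to the returned prototype matrix and its labels.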

LVQ2 and LVQ3 variants

The LVQ2 algorithm (often specified as LVQ2.1) enhances the basic LVQ1 by incorporating paired updates involving the two prototypes closest to the input vector, enabling more precise boundary adjustments between classes. Updates are applied only when the two closest prototypes \mathbf{w}_j and \mathbf{w}_k (with \mathbf{w}_j the nearest) belong to different classes, one matching the input's class and the other not, and the input falls within a relative window around the midplane between them. This condition ensures that modifications target regions near decision boundaries. The window is defined by a parameter w (typically 0.2–0.3), such that the update occurs if \min\left(\frac{d_j}{d_k}, \frac{d_k}{d_j}\right) > \frac{1-w}{1+w}, where d_j and d_k are the distances from the input \mathbf{x} to the prototypes. In such cases, the prototype of the wrong class is repelled from \mathbf{x}, while the one of the correct class is attracted toward it: \mathbf{w}_\text{wrong} \leftarrow \mathbf{w}_\text{wrong} - \alpha (\mathbf{x} - \mathbf{w}_\text{wrong}), \quad \mathbf{w}_\text{correct} \leftarrow \mathbf{w}_\text{correct} + \alpha (\mathbf{x} - \mathbf{w}_\text{correct}) where \alpha is the learning rate. This mechanism refines class boundaries more effectively than the single-prototype updates of LVQ1, particularly in regions where prototypes of different classes are proximate. The LVQ3 variant builds on LVQ2 to improve stability and prevent excessive boundary overshooting during prolonged training. It applies the same paired updates for mixed-class prototype pairs as in LVQ2. Additionally, when both closest prototypes match the input's class, they are mildly attracted toward the input using a reduced learning rate \epsilon \alpha (with \epsilon typically 0.1–0.5) to consolidate same-class regions without major shifts: \mathbf{w}_i \leftarrow \mathbf{w}_i + \epsilon \alpha (\mathbf{x} - \mathbf{w}_i), \quad i \in \{j, k\} The window parameter w is used similarly for the paired updates. Unlike LVQ2, which can become unstable under extended training, LVQ3 is designed for fine-tuning phases following LVQ1 or LVQ2 pretraining, maintaining prototype stability without distorting established decision surfaces.
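
The LVQ2.1 window test and paired update can be sketched as follows; the implementation assumes Euclidean distances, an illustrative default window of 0.25, and a hypothetical function name lvq21_step.

```python
import numpy as np

def lvq21_step(x, x_label, W, proto_labels, alpha, window=0.25):
    """One LVQ2.1 step: update the two nearest prototypes only when exactly one
    of them carries the input's class and the input lies inside the relative
    window around the midplane between them."""
    d = np.sqrt(((W - x) ** 2).sum(axis=1))
    j, k = np.argsort(d)[:2]                        # indices of the two nearest prototypes
    lab_j, lab_k = proto_labels[j], proto_labels[k]
    if (lab_j == x_label) == (lab_k == x_label):
        return W                                    # both correct or both wrong: no update
    ratio = min(d[j], d[k]) / max(d[j], d[k])
    if ratio <= (1 - window) / (1 + window):
        return W                                    # outside the window: no update
    for idx in (j, k):
        s = 1.0 if proto_labels[idx] == x_label else -1.0
        W[idx] = W[idx] + s * alpha * (x - W[idx])  # attract the correct, repel the wrong
    return W
```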

Extensions and variants

Optimized LVQ methods

Optimized Learning Vector Quantization (OLVQ) represents an advancement over earlier LVQ variants by replacing heuristic update rules with optimization of the learning rates to better approximate gradient descent and minimize misclassification errors, enhancing convergence and classification accuracy. Introduced by Kohonen, OLVQ adjusts prototype vectors through updates similar to LVQ1, but with a dynamically optimized learning rate for each codebook vector, determined locally to maximize the probability of correct classification for the current input. This approach mitigates issues in traditional LVQ, such as inconsistent updates, by tailoring the step sizes to balance attraction of correctly classifying winners and repulsion of erroneous ones. Fuzzy Learning Vector Quantization (FLVQ) extends the LVQ framework by incorporating fuzzy set theory to handle class overlaps and ambiguous assignments, assigning partial memberships to multiple prototypes rather than making hard winner-take-all decisions. In FLVQ, membership degrees are computed using a softmax-like function over distance-based similarities, allowing prototypes to share responsibility for a sample in proportion to their proximity and class relevance; for sample \mathbf{x}_k, the membership u_{jk} of prototype j is u_{jk} = \frac{\exp(-\|\mathbf{x}_k - \mathbf{w}_j\|^2 / \sigma)}{\sum_m \exp(-\|\mathbf{x}_k - \mathbf{w}_m\|^2 / \sigma)}, where \sigma controls the fuzziness. Prototypes are then updated with weights given by these memberships, promoting smoother boundaries and robustness to noisy data. This method, developed by Tsao et al., improves performance on datasets with inherent uncertainties by reducing overcommitment to single prototypes. Further optimizations include variants with adaptive learning rates and kernel integrations, such as Generalized Learning Vector Quantization (GLVQ), which optimizes a margin-based cost function to enhance generalization. In GLVQ, the objective is to minimize S = \sum_i f(\mu(\mathbf{x}_i)), where \mu(\mathbf{x}_i) = \frac{d_+ - d_-}{d_+ + d_-} measures the relative distance to the nearest correct prototype (d_+) versus the nearest incorrect one (d_-), and f is a monotonically increasing function (commonly a sigmoid) chosen for robustness; updates follow steepest descent, \Delta \mathbf{w}_+ = \eta \, f'(\mu) \, \frac{d_-}{(d_+ + d_-)^2} (\mathbf{x} - \mathbf{w}_+) up to constant factors, with an analogous adjustment for \mathbf{w}_-. Proposed by Sato and Yamada, GLVQ also supports non-linear mappings via kernelized extensions, making it suitable for complex decision boundaries. These optimized methods collectively reduce sensitivity to initial prototype placement and perform well in high-dimensional spaces; a minimal sketch of the GLVQ update appears below. Recent developments as of 2025 include recurrent variants of LVQ for handling sequential data, such as RecLVQ, which integrates recurrent structures into GLVQ for improved performance on time-series classification. LVQ has also been adapted to self-supervised settings to learn representations without labels, extending its utility to representation-learning tasks.
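
As referenced above, here is a minimal sketch of a single stochastic GLVQ step, assuming a sigmoid choice for f, squared Euclidean distances, a NumPy array proto_labels aligned with the rows of W, and constant factors folded into the learning rate; the names glvq_step and sigmoid are illustrative rather than taken from the original formulation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def glvq_step(x, x_label, W, proto_labels, eta=0.05):
    """One stochastic GLVQ step: move the nearest correct prototype toward x
    and the nearest incorrect one away, weighted by the derivative of the
    relative-distance cost mu = (d+ - d-) / (d+ + d-)."""
    d = ((W - x) ** 2).sum(axis=1)                 # squared Euclidean distances
    same = proto_labels == x_label
    p = np.where(same)[0][np.argmin(d[same])]      # nearest correct prototype
    q = np.where(~same)[0][np.argmin(d[~same])]    # nearest incorrect prototype
    d_plus, d_minus = d[p], d[q]
    mu = (d_plus - d_minus) / (d_plus + d_minus)
    fprime = sigmoid(mu) * (1.0 - sigmoid(mu))     # derivative of the sigmoid cost
    denom = (d_plus + d_minus) ** 2
    W[p] += eta * fprime * (d_minus / denom) * (x - W[p])   # attract correct prototype
    W[q] -= eta * fprime * (d_plus / denom) * (x - W[q])    # repel incorrect prototype
    return W
```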

Comparison with related methods

Learning vector quantization (LVQ) shares foundational principles with other prototype-based techniques, which represent data classes through a set of learned prototypes in feature space and classify inputs by proximity to the nearest prototype. One closely related method is the self-organizing map (SOM), an unsupervised clustering algorithm that organizes high-dimensional data onto a low-dimensional grid while preserving topological relationships among inputs. Unlike SOM, which lacks class labels and focuses on exploratory data visualization and clustering, LVQ incorporates supervision by assigning labels to prototypes and adjusting them based on labeled training examples to optimize decision boundaries for classification tasks. SOM can serve as a preprocessing step for LVQ, where its unsupervised prototypes are fine-tuned via supervised updates to enhance classification accuracy. Nearest prototype classifiers (NPCs) provide a broader framework encompassing LVQ, in which classification relies on assigning an input to the class of its closest prototype under a defined distance metric, often the Euclidean distance. LVQ variants represent adaptive training procedures within this NPC framework, learning prototype positions iteratively to improve accuracy, in contrast to static NPCs that use fixed centroids such as class means. This distinguishes NPCs, including LVQ, from instance-based methods such as k-nearest neighbors (k-NN), which compare inputs to all training samples rather than to a compact set of prototypes, leading to higher computational demands and less interpretability during inference. NPCs like LVQ emphasize margin maximization between classes, promoting robustness to input perturbations compared to the density-based decisions of k-NN. Generalized LVQ (GLVQ) extends the standard LVQ framework by employing a margin-based cost function sensitive to class boundaries, updating prototypes via gradient descent on the relative distance between the nearest correct and incorrect prototypes. This approach addresses limitations of the original LVQ algorithms, such as potential divergence, by ensuring convergence and better separability, particularly for overlapping classes, as demonstrated on character recognition tasks where GLVQ achieved near-perfect accuracy. GLVQ maintains the prototype-based core of LVQ but shifts the focus from heuristic winner-take-all updates to a principled optimization that is sensitive to borderline samples, making it a direct supervised enhancement over unsupervised clustering prototypes. A key distinction across these techniques lies in LVQ's emphasis on supervised adaptation of prototypes for discriminative tasks, contrasting with the unsupervised, topology-preserving nature of SOM or purely clustering-based prototypes. This supervised focus enables LVQ to outperform unsupervised methods on labeled data while inheriting efficiency from prototype reduction. Post-2015, LVQ principles have influenced methods that integrate prototype learning into deep embeddings, such as deep learning vector quantization (DLVQ), which combines neural feature extraction with GLVQ-style optimization to mitigate adversarial vulnerabilities in networks, achieving lower error rates on benchmarks like MNIST compared to softmax classifiers. These evolutions extend LVQ's prototype-based learning to scalable, end-to-end trainable systems in modern machine learning.

Applications and performance

Use in pattern recognition

Learning vector quantization (LVQ) serves as a primary tool in pattern recognition tasks, where it maps input feature vectors to class labels by associating them with the nearest prototype during classification. In handwritten digit recognition, LVQ quantizes feature vectors extracted from images, enabling effective mapping to digit classes on datasets akin to MNIST, which consist of grayscale images of handwritten numerals. This approach has been applied to classify digits by training prototypes on pixel-based or transformed features, demonstrating its utility in unsupervised-to-supervised transitions for digit identification. In speech recognition, LVQ facilitates phoneme classification by learning labeled prototypes from acoustic feature vectors, such as mel-frequency cepstral coefficients, to distinguish speech units. Early applications achieved recognition accuracies of 80–90% in continuous speech contexts, with early benchmarks reporting over 90% accuracy for isolated phonemes using LVQ variants integrated with hidden Markov models. These prototypes enable robust classification of phonetic segments, supporting larger vocabulary systems. For image analysis, LVQ quantizes image feature vectors to perform texture classification and segmentation, grouping similar image regions based on statistical or transform-based descriptors such as wavelet coefficients. In texture discrimination tasks, such as classifying material surfaces or corrosion patterns in images, LVQ prototypes capture discriminative texture features, outperforming traditional clustering in labeled scenarios by adapting to class boundaries. This quantization reduces dimensionality while preserving textural variance for accurate classification. In biomedical data analysis, LVQ has been applied to classify genomic sequences for distinguishing virus types, using prototypes to represent sequence features and achieve interpretable classification. It has also supported identification of parasitic larvae in veterinary diagnostics and the handling of imbalanced datasets in classification tasks. For fault detection in industrial systems, LVQ enables diagnosis of anomalies in equipment such as proton-exchange membrane fuel cells (PEMFCs) by learning prototypes from operational measurements such as voltage and current, facilitating early detection of faults. Applications include fault isolation in electric machines and diagnosis of transformer windings using multi-level LVQ for precise disk-level localization. Practical implementations of LVQ in pattern recognition often involve 10–100 prototypes per class to balance representational power and computational efficiency, depending on data complexity and class separability. Preprocessing steps, including normalization of feature vectors to unit length and dimensionality reduction via principal component analysis (PCA), enhance convergence and classification performance by mitigating scale variations and noise. Hybrids combining LVQ with PCA first project high-dimensional inputs into lower-dimensional spaces before prototype learning, improving scalability for large datasets; a sketch of such preprocessing appears below. A seminal case study is Teuvo Kohonen's development of the neural phonetic typewriter in the 1980s, which employed phonotopic maps refined by LVQ for real-time speech-to-text transcription. The system processed acoustic signals into phonemic sequences using LVQ-labeled prototypes, achieving 75–90% raw phoneme accuracy and 96–98% word recognition on a 1,000-word Finnish vocabulary after speaker adaptation with minimal samples. This hardware implementation on IBM PC/AT platforms demonstrated LVQ's feasibility for practical speech recognition systems in the pre-deep-learning era.
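
The preprocessing pipeline mentioned above (unit-length normalization combined with PCA projection before prototype training) can be sketched as follows; the function name preprocess_for_lvq and the choice of 20 components are illustrative assumptions, and the transformed data would then be passed to an LVQ trainer such as the LVQ1 sketch given earlier.

```python
import numpy as np

def preprocess_for_lvq(X, n_components=20):
    """Typical LVQ preprocessing sketch: center the data, project onto the
    leading principal components, then normalize each vector to unit length."""
    mean = X.mean(axis=0)
    Xc = X - mean                                    # center the features
    # PCA via SVD: the rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    Z = Xc @ components.T                            # project to a lower dimension
    norms = np.linalg.norm(Z, axis=1, keepdims=True)
    Z = Z / np.maximum(norms, 1e-12)                 # unit-length normalization
    return Z, mean, components
```

At test time, new inputs are centered with the stored mean, projected with the stored components, and normalized in the same way before the nearest-prototype decision is applied.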

Comparative advantages and limitations

Learning vector quantization (LVQ) offers several advantages over other classifiers, particularly in terms of interpretability and computational efficiency. The prototype-based nature of LVQ allows for straightforward visualization and inspection of decision boundaries, enabling domain experts to understand and validate classifications more easily than with black-box models such as support vector machines or multilayer perceptrons. For medium-sized datasets, LVQ demonstrates faster training and prediction times than support vector machines, as its Hebbian-style updates depend on a fixed number of prototypes rather than on the full training sample size. Additionally, LVQ effectively captures nonlinear decision boundaries through adaptive prototype positioning, providing a balance of flexibility and simplicity without requiring kernel extensions. In comparisons with instance-based methods such as k-nearest neighbors (k-NN), LVQ provides significant efficiency gains at test time by classifying inputs based on the distance to a small set of prototypes (often one per class) rather than computing distances to all training examples, achieving speed-ups of one to three orders of magnitude while maintaining comparable accuracy. Relative to neural networks, LVQ is simpler to implement and interpret, avoiding the scalability challenges of deep architectures in terms of parameter tuning and resource demands, though it may underperform on very large or complex datasets. On standard benchmarks such as the UCI datasets, LVQ variants typically achieve accuracies of 85–95%, with error rates 10–20% lower than unsupervised vector quantization alone due to supervised prototype refinement. Despite these strengths, LVQ has notable limitations. It is sensitive to the initial prototype placement and to outliers, which can lead to suboptimal solutions, and it requires careful initialization strategies for reliable convergence. In very high-dimensional spaces, such as hyperspectral data, LVQ often proves suboptimal without dimensionality reduction or relevance-weighting extensions, showing reduced robustness compared to SVMs or deep networks. Furthermore, without safeguards such as cross-validation, LVQ risks overfitting, especially in adaptive variants that adjust distance metrics dynamically. LVQ is best suited for labeled datasets of moderate size where explainability is prioritized over maximum accuracy on intricate nonlinear problems, such as in sensor applications or tasks requiring human oversight.
