
Multidimensional scaling

Multidimensional scaling (MDS) is a family of statistical techniques designed to analyze and visualize the structure inherent in a set of proximity measures—such as similarities or dissimilarities between pairs of objects—by representing those objects as points in a low-dimensional space, where the inter-point distances correspond as closely as possible to the observed proximities. This approach allows for the discovery of underlying dimensions that explain the data's relational patterns, often reducing high-dimensional information to two or three dimensions for intuitive interpretation. The foundational theory of MDS was established by Warren S. Torgerson in 1952, who introduced a metric method assuming interval- or ratio-level proximity data and using classical scaling to derive coordinates via eigenvalue decomposition of a scalar-product matrix obtained from the distances. Building on this, Roger N. Shepard extended the framework in 1962 with nonmetric MDS, which accommodates ordinal data by applying a monotonic transformation to preserve only the rank order of proximities rather than their exact values, making it suitable for subjective judgments. Joseph B. Kruskal further advanced the field in 1964 by developing numerical optimization algorithms, including the use of stress measures like Kruskal's Stress-1 to quantify fit and iterative procedures such as steepest descent for minimizing discrepancies between observed and represented distances. Key variants of MDS include metric MDS, which fits models assuming interval-level distances and linear relationships (e.g., classical MDS or Torgerson scaling), and nonmetric MDS, which relaxes these assumptions for more flexible analysis of rank-order data using criteria like monotone regression. Extensions such as individual differences scaling (INDSCAL), introduced by Carroll and Chang in 1970, account for subject-specific variations by incorporating a common space weighted by individual factors. Goodness-of-fit is typically assessed via measures like stress or squared correlation (RSQ), with lower dimensions preferred to avoid overfitting while capturing essential structure. MDS has broad applications across disciplines, originating in psychology for modeling perceptual similarities (e.g., color or sound spaces) but extending to marketing for brand positioning and consumer preference mapping, sociology for analyzing social distances in networks, and bioinformatics for visualizing genetic or molecular similarities. In modern contexts, it supports data visualization in machine learning and data mining, such as dimensionality reduction for large datasets or embedding high-dimensional points into 2D plots for exploratory analysis.

Introduction

Definition and Purpose

Multidimensional scaling (MDS) is a family of statistical techniques used to visualize and interpret the level of similarity or dissimilarity among a set of objects by representing them as points in a low-dimensional space, typically two or three dimensions, such that the distances between points approximate the original pairwise dissimilarities provided as input. The input to MDS is usually a dissimilarity matrix derived from similarity or dissimilarity measures, and the output is a configuration of points that preserves these relationships as closely as possible. This approach allows for the reduction of high-dimensional data into a more interpretable form without assuming a specific underlying model. The primary purpose of MDS is exploratory data analysis, particularly for uncovering hidden patterns in complex data, such as clusters of similar objects or gradients in perceptual or relational structure. It is especially valuable in fields like psychology, marketing, and bioinformatics, where direct interpretation of high-dimensional similarities is challenging. A key concept in MDS is the stress function, which quantifies the goodness-of-fit between the input dissimilarities and the distances in the output configuration; lower values indicate better preservation of the original relationships. MDS distinguishes between similarity (higher values indicate closer relationships) and dissimilarity (higher values indicate greater separation), often transforming one into the other for analysis. For instance, in perceptual experiments, MDS can map subjective similarity ratings of color patches—such as how alike two hues appear on a 1–9 scale—into a configuration resembling a color circle, where closer points reflect perceived similarities. This technique shares conceptual similarities with principal component analysis (PCA) as a dimensionality reduction method, though MDS operates directly on dissimilarity measures rather than raw features.

History and Development

Multidimensional scaling (MDS) originated in the field of psychology during the late 1930s, primarily as a method for analyzing perceptual data and similarities among stimuli. Early contributions built on Louis L. Thurstone's work in the 1930s, which emphasized multidimensional approaches to psychophysical scaling and factor analysis to represent perceptual structures. The technique was formally introduced by M. W. Richardson in 1938 with his paper on multidimensional psychophysics, proposing a model to determine the dimensionality of psychological spaces from pairwise similarity judgments. This was closely followed by G. A. Young and A. S. Householder's 1938 derivation of psychophysical scales using multidimensional representations, marking the inception of MDS as a distinct analytical tool in perceptual research. Louis Guttman contributed foundational ideas to scaling in the 1940s and later advanced non-metric aspects in 1968 with his rank image principle, which enabled the recovery of coordinate spaces from ordinal data without assuming metric properties. The method was formalized and expanded in the 1950s and 1960s through key publications that established both metric and non-metric variants. Warren S. Torgerson's 1952 paper, "Multidimensional Scaling: I. Theory and Method," provided a rigorous theoretical framework for metric MDS, including procedures for converting similarity data into distances via scalar product analysis. This work was complemented by Roger N. Shepard's 1962 contributions, which introduced non-metric MDS to handle unknown monotonic transformations of distances in psychological data. A pivotal advancement came in 1964 with Joseph B. Kruskal's two seminal papers: one optimizing goodness-of-fit to non-metric hypotheses using a stress measure, and the other detailing a numerical algorithm based on monotone regression to minimize discrepancies between observed and embedded distances. Kruskal's monotone regression approach revolutionized stress minimization, making non-metric MDS computationally feasible and widely applicable beyond strict metric assumptions. In the 1970s, MDS evolved to accommodate individual differences, with J. Douglas Carroll and Jih-Jie Chang introducing the INDSCAL model in 1970, an N-way generalization that analyzes multiple proximity matrices simultaneously to reveal subject-specific weightings on common dimensions. This expansion facilitated applications in the social and behavioral sciences. By the late 1970s, the shift from manual to computational implementations accelerated, exemplified by Jan de Leeuw's 1977 majorization algorithm (SMACOF), which offered efficient iterative solutions for MDS fitting and became a cornerstone for software implementations. The integration of MDS with machine learning in the 2000s further broadened its scope, adapting classical techniques for nonlinear dimensionality reduction in high-dimensional data. A landmark example is the Isomap algorithm by Joshua B. Tenenbaum, Vin de Silva, and John C. Langford in 2000, which extends MDS by estimating geodesic distances on manifolds to preserve global nonlinear structures, influencing manifold learning methods in computational fields.

Mathematical Foundations

Distance Measures and Matrices

In multidimensional scaling (MDS), the primary input is a dissimilarity matrix, denoted as \Delta = (\delta_{ij})_{i,j=1}^n, which is a square n \times n symmetric matrix where each entry \delta_{ij} (for i \neq j) quantifies the observed dissimilarity or distance between a pair of n objects, and the diagonal elements are set to zero (\delta_{ii} = 0) to reflect zero distance from an object to itself. This matrix serves as the foundational data structure, capturing pairwise relationships that MDS aims to represent geometrically in a lower-dimensional space. The symmetry property ensures \delta_{ij} = \delta_{ji}, mirroring the bidirectional nature of dissimilarity measures. Various types of distance measures can populate the dissimilarity matrix, depending on the data's nature and application. In metric MDS, Euclidean distance is commonly used, defined as \delta_{ij} = \sqrt{\sum_{k=1}^p (x_{ik} - x_{jk})^2} for objects with coordinate vectors \mathbf{x}_i, \mathbf{x}_j \in \mathbb{R}^p, assuming embeddability in Euclidean space. Non-Euclidean measures, such as the city-block (Manhattan) distance \delta_{ij} = \sum_{k=1}^p |x_{ik} - x_{jk}| or cosine dissimilarity \delta_{ij} = 1 - \frac{\mathbf{x}_i \cdot \mathbf{x}_j}{\|\mathbf{x}_i\| \|\mathbf{x}_j\|} for directional data like text vectors, are applied in contexts like urban planning or information retrieval where straight-line assumptions do not hold. When raw data consist of similarities (e.g., correlation coefficients or preference ratings), these are typically converted to dissimilarities via transformations like \delta_{ij} = c - s_{ij} (where c is a constant larger than the maximum similarity) or \delta_{ij} = 1 - s_{ij} for normalized scales, preserving the relative ordering for non-metric MDS. Key prerequisites for the dissimilarity matrix include non-negativity (\delta_{ij} \geq 0), symmetry, and a zero diagonal, which are essential for meaningful geometric interpretation. For classical MDS, the associated inner product matrix (derived from \Delta) must be positive semi-definite to ensure real-valued coordinates without negative eigenvalues, though non-metric variants relax this for ordinal or qualitative data. Handling missing or incomplete pairwise observations is feasible in non-metric MDS by excluding those pairs from the optimization objective, such as the stress function, thereby avoiding imputation biases; for instance, software implementations may impute via row/column means only if necessary for classical approaches. Ordinal inputs, like ranked preferences, are accommodated by treating \Delta as a rank-order matrix rather than interval-scaled values. A practical example arises in survey research, such as pairwise similarity ratings of product brands by consumers on a 1–9 scale (1 = very dissimilar, 9 = very similar). To construct the dissimilarity matrix, ratings are inverted (e.g., \delta_{ij} = 10 - s_{ij}) to yield higher values for greater dissimilarity, resulting in a matrix like:
            Brand A   Brand B   Brand C
Brand A     0         3.5       7.2
Brand B     3.5       0         2.1
Brand C     7.2       2.1       0
Here, the entry for Brands A and B (average inverted rating of 3.5) indicates moderate dissimilarity, derived from aggregated respondent scores.
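As a minimal sketch of this conversion, the Python snippet below assumes averaged similarity ratings on the 1–9 scale chosen to reproduce the table above (s_AB = 6.5, s_AC = 2.8, s_BC = 7.9; the brand labels and values are illustrative) and applies \delta_{ij} = 10 - s_{ij} with a zero diagonal:
python
import numpy as np

# Hypothetical averaged similarity ratings on a 1-9 scale (9 = very similar),
# chosen so that the resulting dissimilarities match the table above.
brands = ["Brand A", "Brand B", "Brand C"]
S = np.array([
    [9.0, 6.5, 2.8],
    [6.5, 9.0, 7.9],
    [2.8, 7.9, 9.0],
])

# Convert similarities to dissimilarities: delta_ij = 10 - s_ij, zero diagonal.
Delta = 10.0 - S
np.fill_diagonal(Delta, 0.0)

# Check the required properties: symmetry, non-negativity, zero diagonal.
assert np.allclose(Delta, Delta.T) and (Delta >= 0).all()
print(Delta)  # [[0. 3.5 7.2] [3.5 0. 2.1] [7.2 2.1 0. ]]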

Eigenvalue Decomposition in MDS

In classical multidimensional scaling, the core algebraic operation begins with transforming the squared distance matrix D^2 into an inner product matrix B, which represents the inner products among the mean-centered objects when embedded in Euclidean space. The centering matrix H = I - \frac{1}{n} \mathbf{1}\mathbf{1}^T, where I is the identity matrix, \mathbf{1} is the column vector of ones, and n is the number of objects, ensures the coordinates are mean-centered. The inner product matrix is derived as B = -\frac{1}{2} H D^2 H, a double-centering operation that adjusts for the origin and yields a symmetric, positive semi-definite matrix whose entries b_{ij} approximate the products \mathbf{x}_i \cdot \mathbf{x}_j of the coordinate vectors \mathbf{x}_i and \mathbf{x}_j. The eigenvalue decomposition of B then provides the coordinates: B = V \Lambda V^T, where V is the matrix of eigenvectors and \Lambda is the diagonal matrix of eigenvalues \lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n. Only the positive eigenvalues contribute to the dimensionality of the space, as negative values indicate non-Euclidean distances in the input matrix. The coordinate matrix X is obtained by retaining the k largest positive eigenvalues and corresponding eigenvectors, forming X = V_k \Lambda_k^{1/2}, where V_k and \Lambda_k are the truncated matrices; this scales the eigenvectors by the square roots of the eigenvalues to recover the original variances. To determine the appropriate dimensionality k, the largest eigenvalues are examined, typically retaining those that account for a substantial portion of the total variance (sum of eigenvalues). A scree plot, showing the eigenvalues in descending order, aids in identifying an "elbow" where additional dimensions contribute negligibly, analogous to the scree test used in principal component analysis. The resulting coordinates allow reconstruction of the distances via the Euclidean norm: \delta_{ij} \approx \|\mathbf{x}_i - \mathbf{x}_j\|, where equality holds exactly if the input distances are Euclidean and no dimensionality reduction is applied. For illustration, consider a simple distance matrix for three objects forming an equilateral triangle with side length 1: D = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}, \quad D^2 = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}. The row (and column) means of D^2 are all 2/3, and the grand mean is 2/3. The centering matrix is H = I - \frac{1}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} = \begin{pmatrix} 2/3 & -1/3 & -1/3 \\ -1/3 & 2/3 & -1/3 \\ -1/3 & -1/3 & 2/3 \end{pmatrix}. Compute B = -\frac{1}{2} H D^2 H: first, H D^2 = \begin{pmatrix} -2/3 & 1/3 & 1/3 \\ 1/3 & -2/3 & 1/3 \\ 1/3 & 1/3 & -2/3 \end{pmatrix}, then B = -\frac{1}{2} (H D^2) H = \begin{pmatrix} 1/3 & -1/6 & -1/6 \\ -1/6 & 1/3 & -1/6 \\ -1/6 & -1/6 & 1/3 \end{pmatrix}. The eigenvalue decomposition yields eigenvalues \lambda_1 = \lambda_2 = 1/2 (positive, spanning two dimensions) and \lambda_3 = 0, with corresponding eigenvectors forming an orthogonal basis for the plane (e.g., one possible V_k after normalization and rotation for convenience has columns [1, -1/2, -1/2]^T / \sqrt{6/4} and [0, \sqrt{3}/2, -\sqrt{3}/2]^T / \sqrt{6/4}). The coordinates are X = V_k \begin{pmatrix} \sqrt{1/2} & 0 \\ 0 & \sqrt{1/2} \end{pmatrix}, yielding points at \left( \sqrt{\frac{1}{3}}, 0 \right), \left( -\frac{\sqrt{1/3}}{2}, \frac{1}{2} \right), and \left( -\frac{\sqrt{1/3}}{2}, -\frac{1}{2} \right) up to rotation, preserving the input distances exactly.
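The same derivation can be checked numerically; the following minimal NumPy sketch reproduces the double centering, the eigendecomposition, and the recovered coordinates for the equilateral-triangle example (variable names are illustrative):
python
import numpy as np

# Distance matrix for three points forming an equilateral triangle with side 1.
D = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])
n = D.shape[0]

# Double centering: B = -1/2 * H D^2 H with H = I - (1/n) 11^T.
H = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * H @ (D ** 2) @ H

# Eigendecomposition; sort eigenvalues in descending order.
eigvals, eigvecs = np.linalg.eigh(B)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
print(np.round(eigvals, 3))  # [0.5 0.5 0.] as derived above

# Keep the k=2 positive eigenvalues; scale eigenvectors by their square roots.
k = 2
X = eigvecs[:, :k] * np.sqrt(eigvals[:k])

# Reconstructed pairwise distances match the input up to rotation/reflection.
recon = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
print(np.round(recon, 6))  # equals D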

Types of Multidimensional Scaling

Classical Multidimensional Scaling

Classical multidimensional scaling (MDS), also known as Torgerson scaling or principal coordinates analysis, provides an exact method for representing a set of objects as points in a low-dimensional Euclidean space when the input dissimilarities are precise Euclidean distances. Introduced by Warren S. Torgerson in the context of psychometrics to model perceptual spaces from similarity judgments, it assumes that the data can be embedded perfectly in Euclidean space without approximation errors. This approach transforms a matrix of pairwise distances into coordinates via algebraic operations, yielding a global optimum rather than an iterative approximation. The procedure begins with a matrix of squared distances D^2, where the off-diagonal elements represent squared distances between objects and the diagonal is zero. To obtain the inner product matrix, the squared distance matrix undergoes double centering: B = -\frac{1}{2} H D^2 H, where H = I - \frac{1}{n} \mathbf{1}\mathbf{1}^T is the centering matrix that subtracts row and column means, with n being the number of objects and \mathbf{1} a vector of ones. An eigendecomposition of B follows: B = V \Lambda V^T, where \Lambda is the diagonal matrix of eigenvalues and V the matrix of eigenvectors; the coordinates are then extracted as X = V_k \Lambda_k^{1/2}, using the k largest positive eigenvalues and corresponding eigenvectors to form a k-dimensional configuration. This process, formalized by John C. Gower, requires no iterative optimization and directly recovers the original point configuration up to rotation, reflection, and translation if the distances are truly Euclidean. A key advantage of classical MDS is its computational efficiency and linearity, as the eigendecomposition scales cubically with the number of objects but avoids the convergence issues of iterative methods; moreover, when applied to Euclidean distances derived from centered feature vectors, it is mathematically equivalent to principal component analysis (PCA) on those vectors. However, it is limited to metric data and assumes perfect Euclidean structure; for non-Euclidean or noisy distances, the inner product matrix may yield negative eigenvalues, indicating that no exact low-dimensional embedding exists and potentially leading to distorted configurations if negative values are ignored. An illustrative example is mapping geographic locations using Cartesian coordinates on a flat approximation, where inter-city distances serve as input; classical MDS recovers the precise point layout from the squared distance matrix without error, demonstrating its utility for known spatial data.
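To illustrate the stated PCA equivalence, here is a hedged sketch: classical MDS is applied to Euclidean distances computed from a small random feature matrix and compared with the PCA scores of the centered features (the data, seed, and dimensions are arbitrary choices); the two configurations should agree up to the sign of each column.
python
import numpy as np

rng = np.random.default_rng(0)
Y = rng.normal(size=(6, 4))            # 6 objects described by 4 features
Yc = Y - Y.mean(axis=0)                # centered feature vectors

# Euclidean distance matrix between the objects.
D = np.linalg.norm(Yc[:, None, :] - Yc[None, :, :], axis=-1)

# Classical MDS: double centering + eigendecomposition.
n = D.shape[0]
H = np.eye(n) - np.ones((n, n)) / n
B = -0.5 * H @ (D ** 2) @ H
vals, vecs = np.linalg.eigh(B)
order = np.argsort(vals)[::-1]
X_mds = vecs[:, order[:2]] * np.sqrt(vals[order[:2]])

# PCA scores: project the centered data onto the top-2 right singular vectors.
U, s, Vt = np.linalg.svd(Yc, full_matrices=False)
X_pca = U[:, :2] * s[:2]

# The two embeddings coincide up to the sign of each column.
print(np.allclose(np.abs(X_mds), np.abs(X_pca), atol=1e-6))  # expect True for generic data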

Metric Multidimensional Scaling

Metric multidimensional scaling (MDS) extends the classical approach by employing iterative optimization to minimize the squared discrepancies between observed interval-level distances and the corresponding distances in a low-dimensional configuration, accommodating cases where the input data do not precisely satisfy Euclidean assumptions. This method is particularly suited for data where exact algebraic solutions fail due to noise or approximations, using least-squares criteria to achieve a flexible embedding. As a special case, when the input distances form an exact Euclidean distance matrix, metric MDS recovers the classical solution via eigenvalue decomposition. The fit of the configuration is evaluated using the raw stress metric, defined as S = \sqrt{ \frac{ \sum_{i < j} (\delta_{ij} - d_{ij})^2 }{ \sum_{i < j} d_{ij}^2 } }, where \delta_{ij} denotes the observed input distances and d_{ij} the distances between points in the embedded space; lower values of S indicate better preservation of the original distances. To compare or align multiple configurations, such as across iterations or subsets, Procrustes analysis applies orthogonal rotation and translation to minimize the summed squared differences between them, ensuring rotation-invariant assessments. Variants of metric MDS include weighted forms that incorporate unequal error variances by assigning weights w_{ij} to each term in the stress function, such as S_w = \sqrt{ \sum w_{ij} (\delta_{ij} - d_{ij})^2 / \sum w_{ij} \delta_{ij}^2 }, which is useful when certain distance estimates are more reliable than others. For handling non-Euclidean input metrics, transformations like scaling or additive adjustments can be applied to the distances prior to optimization, improving convergence to a viable embedding. A representative application involves fitting perceptual distances from psychological experiments, such as similarity judgments of colors or sounds, where experimental noise distorts the ideal Euclidean structure, allowing metric MDS to reveal underlying psychological dimensions through approximate recovery.
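The two stress formulas above translate directly into small helper functions; the sketch below (raw_stress and weighted_stress are illustrative names, not a library API) evaluates them for a candidate configuration.
python
import numpy as np

def raw_stress(delta, d):
    """Raw stress S = sqrt( sum_{i<j} (delta_ij - d_ij)^2 / sum_{i<j} d_ij^2 )."""
    i, j = np.triu_indices(delta.shape[0], k=1)   # use each pair once
    num = np.sum((delta[i, j] - d[i, j]) ** 2)
    den = np.sum(d[i, j] ** 2)
    return np.sqrt(num / den)

def weighted_stress(delta, d, w):
    """Weighted variant S_w with per-pair weights w_ij, as defined above."""
    i, j = np.triu_indices(delta.shape[0], k=1)
    num = np.sum(w[i, j] * (delta[i, j] - d[i, j]) ** 2)
    den = np.sum(w[i, j] * delta[i, j] ** 2)
    return np.sqrt(num / den)

# Tiny usage example: a 3x3 dissimilarity matrix and a candidate 2D configuration.
delta = np.array([[0., 1., 2.], [1., 0., 1.], [2., 1., 0.]])
X = np.array([[0., 0.], [1.1, 0.], [2.05, 0.]])
d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(round(raw_stress(delta, d), 4))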

Non-metric Multidimensional Scaling

Non-metric multidimensional scaling (NMDS), introduced by Joseph B. Kruskal in 1964, extends classical MDS to handle ordinal dissimilarity data by preserving the rank order of observed dissimilarities rather than their precise magnitudes. In NMDS, the goal is to find a configuration of points in low-dimensional space such that the interpoint distances d_{ij} approximate a monotonically increasing function of the input dissimilarities \delta_{ij}, denoted as f(\delta_{ij}) \approx d_{ij}, where f is an unknown non-decreasing transformation. This approach assumes only that higher dissimilarities correspond to larger distances, making it suitable for subjective or ranked data where interval assumptions do not hold. The procedure for NMDS is iterative and begins with an initial configuration of points, often derived from random starts or a classical MDS solution. The primary approach to fit (PATFIT) then applies monotone regression to the distances, adjusting them to match the rank order of the \delta_{ij} while minimizing violations of monotonicity; this involves partitioning the data into blocks and iteratively refining until the configuration is "up-satisfied" and "down-satisfied." Refinement follows with the secondary approach to fit (SECFIT), which addresses ties in the dissimilarities by enforcing equal distances for tied \delta_{ij} values, starting from tie-block partitions to improve convergence. To evaluate monotonicity, the Shepard diagram plots the observed \delta_{ij} against the corresponding d_{ij}, overlaying the fitted monotone curve to visualize fit quality and detect any systematic departures from ordinal preservation. Goodness-of-fit in NMDS is quantified using stress measures that penalize deviations from the monotonic relationship. Stress-1 assesses discrepancies between the distances d_{ij} and their monotone regression on the \delta_{ij}, given by S_1 = \sqrt{ \frac{ \sum_{i < j} (d_{ij} - \hat{\delta}_{ij})^2 }{ \sum_{i < j} d_{ij}^2 } }, where \hat{\delta}_{ij} represents the fitted values from the monotone regression, providing a normalized metric sensitive to distance preservation. Stress-2 modifies this by normalizing against the sum of squared deviations of the distances from their mean, \sum (d_{ij} - \bar{d})^2, making it more sensitive to degenerate configurations. Raw stress offers a hybrid criterion for cases blending metric and non-metric elements, defined as the square root of the sum of squared differences between \delta_{ij} and d_{ij}, which reduces to the metric criterion when no transformation is needed. Values of S_1 < 0.05 indicate excellent fit, while values in the range 0.05 \leq S_1 < 0.10 are considered good, with higher values suggesting poorer rank preservation. NMDS offers advantages in robustness to ordinal noise and violations of interval scaling, as it relies solely on rank information without assuming equal spacing between categories, which is ideal for handling subjective judgments. This makes it particularly valuable in preference scaling tasks, where data consist of rankings rather than measured intervals, allowing flexible recovery of underlying perceptual structures. A representative example is the application to consumer preferences in marketing, where ranks of perceived similarities between products—such as automobiles or beverages—are input to NMDS to generate a 2D perceptual map. This positions products based on ordinal similarity judgments from surveys, revealing clusters (e.g., luxury vs. economy cars) without presupposing quantitative distance equivalences, thereby aiding market segmentation and positioning strategies.
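A minimal sketch of the Stress-1 computation follows, assuming scikit-learn's IsotonicRegression as the monotone-regression (pooling) step; the dissimilarity matrix and configuration are illustrative, and a Shepard-style diagram would plot the \delta_{ij} against the d_{ij} together with the fitted monotone curve.
python
import numpy as np
from sklearn.isotonic import IsotonicRegression

def stress_1(delta, d):
    """Kruskal's Stress-1: isotonic regression of distances d on dissimilarities delta."""
    i, j = np.triu_indices(delta.shape[0], k=1)
    delta_ij, d_ij = delta[i, j], d[i, j]
    # Disparities: best monotone (non-decreasing) fit of d as a function of delta.
    disparities = IsotonicRegression(increasing=True).fit_transform(delta_ij, d_ij)
    return np.sqrt(np.sum((d_ij - disparities) ** 2) / np.sum(d_ij ** 2))

# Usage on a small illustrative configuration of four objects.
delta = np.array([[0., 2., 5., 9.],
                  [2., 0., 4., 8.],
                  [5., 4., 0., 3.],
                  [9., 8., 3., 0.]])
X = np.array([[0., 0.], [0.8, 0.3], [2.0, 1.0], [3.1, 0.2]])
d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
print(round(stress_1(delta, d), 3))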

Advanced Variants

Advanced variants of multidimensional scaling (MDS) extend the core methods to address specific challenges such as individual differences, large-scale data, non-Euclidean geometries, and non-linear structures. These developments build on the foundational principles of metric and non-metric MDS to enable more flexible and scalable applications in diverse fields like psychometrics and computational geometry. Generalized multidimensional scaling (GMDS) allows for embeddings in non-Euclidean target spaces by incorporating flexible distance functions that preserve isometry-invariant properties, making it suitable for partial surface matching and complex manifolds. Introduced as a framework for handling arbitrary smooth surfaces rather than flat Euclidean spaces, GMDS optimizes configurations to minimize distortions in generalized distances, often using spectral methods or iterative approximations. This variant has been particularly influential in computer vision and shape analysis since its formalization in the mid-2000s. For handling very large datasets, scalable variants like landmark MDS (LMDS), sometimes referred to in the context of super MDS approximations, reduce computational complexity from O(N^2) to O(N log N) or better by selecting a subset of landmark points to approximate the full distance matrix. LMDS first computes exact coordinates for the landmarks using classical MDS and then projects remaining points based on their distances to these landmarks, enabling efficient dimensionality reduction for datasets with thousands of objects. This approach maintains much of the global structure while avoiding exhaustive pairwise computations, as demonstrated in early large-scale embedding tasks. Individual differences scaling (INDSCAL) addresses multi-subject data by modeling separate subject weights for shared dimensions in a common perceptual space, allowing analysis of how individuals differentially emphasize axes in similarity judgments. Developed in the early 1970s, INDSCAL decomposes a three-way dissimilarity tensor into a group stimulus space and individual weight matrices via a weighted Euclidean model, facilitating the study of perceptual variations without assuming identical cognitive structures across raters. For example, INDSCAL has been applied to multi-rater similarity judgments in cross-cultural studies of nation perceptions, revealing dimensions like political alignment and economic development that vary by national background. Preference mapping (PREFMAP) extends MDS to incorporate external preference or attribute data by projecting ideal points or vectors onto an existing spatial configuration, enabling the analysis of how subjective preferences align with objective similarities. Originating in the late 1960s and refined in the early 1970s, PREFMAP uses least-squares optimization in metric or non-metric forms to fit preference functions, such as vector or ideal-point models, to MDS-derived coordinates. This variant is widely used in market research to map consumer preferences onto product spaces derived from sensory attributes. Post-2000 developments include kernel MDS, which leverages kernel functions to perform non-linear embeddings in high-dimensional feature spaces, akin to kernel PCA but framed through MDS stress minimization. By replacing Euclidean distances with kernel-induced metrics, kernel MDS captures complex manifolds without explicit feature mapping, improving upon linear assumptions in traditional variants.
Additionally, integrations with graph embeddings, such as in Isomap, approximate geodesic distances on manifolds via shortest-path computations in neighborhood graphs, providing a non-linear extension suitable for curved data structures like Swiss roll datasets.
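A rough sketch of the landmark idea described above (following the two-stage description, not the exact routine of the lmds package or any other library): classical MDS is run on a small set of landmarks, and every point is then placed by distance-based triangulation against those landmarks; data, landmark count, and function names are illustrative.
python
import numpy as np

def classical_mds(D, k):
    """Classical MDS: double centering + eigendecomposition (as in earlier sections)."""
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (D ** 2) @ H
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:k]
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0)), vals[order]

def landmark_mds(data, n_landmarks, k, rng):
    """Sketch of landmark MDS: embed landmarks exactly, triangulate the rest."""
    n = data.shape[0]
    idx = rng.choice(n, size=n_landmarks, replace=False)
    land = data[idx]
    # Exact classical MDS on the landmark-to-landmark distances.
    D_ll = np.linalg.norm(land[:, None] - land[None, :], axis=-1)
    L, vals = classical_mds(D_ll, k)
    # Projection operator: eigenvectors divided by the square roots of eigenvalues.
    pinv = (L / vals).T                       # shape (k, n_landmarks)
    mean_sq = (D_ll ** 2).mean(axis=0)        # mean squared distance to each landmark
    # Squared distances from every point to the landmarks, then triangulate.
    D_pl = np.linalg.norm(data[:, None] - land[None, :], axis=-1) ** 2
    return -0.5 * (D_pl - mean_sq) @ pinv.T

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 10))
X = landmark_mds(data, n_landmarks=30, k=2, rng=rng)
print(X.shape)  # (500, 2)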

Algorithms and Procedures

General Procedure for MDS

Multidimensional scaling (MDS) follows a structured workflow that applies to most implementations, whether metric or non-metric, iterative or classical variants. The process begins with preparing the input data and culminates in interpreting the resulting configuration of points in a low-dimensional space. This high-level procedure ensures that the embedded points preserve the structure of the original dissimilarities as closely as possible. The first step involves collecting or constructing a dissimilarity matrix \Delta = (\delta_{ij}), where \delta_{ij} represents the observed dissimilarity between objects i and j. Proximities, such as similarity ratings or correlations, are often converted to dissimilarities (e.g., \delta_{ij} = 1 - s_{ij} for similarities s_{ij}) and scaled according to the data's measurement level—ordinal for ranks, interval for ratios—to ensure comparability. Handling ties (equal dissimilarities) can use a primary approach that allows slight untying during optimization or a secondary approach that strictly preserves them; missing values are managed by assigning zero weights to affected pairs or imputing estimates, accommodating significant amounts of missing data with minimal error in robust implementations. Next, select the MDS type (e.g., metric for quantitative distances or non-metric for ordinal data) and the target dimensionality k, typically 1 to 5, often determined via scree plots of stress values across dimensions or theoretical considerations, with 2D common for visualization. Initialization follows by assigning starting coordinates to the n objects in k-dimensional space, often randomly or via a preliminary classical MDS solution for better convergence. The core iterative phase then minimizes a stress function, such as Kruskal's Stress-1 \sqrt{\sum (d'_{ij} - d_{ij})^2 / \sum d_{ij}^2}, where d_{ij} are the configuration distances and d'_{ij} are transformed dissimilarities, using algorithms like steepest descent or majorization (SMACOF). Iterations adjust coordinates until convergence, defined by a small change in stress (e.g., \Delta \sigma < 10^{-6}) or a maximum of 100 iterations. While classical MDS employs a non-iterative eigenvalue decomposition for exact metric fits, the iterative approach handles flexible models. To assess fit, evaluate the final stress value—excellent if below 0.025, good if around 0.05—and supplementary metrics like the correlation between observed and fitted distances (ideally >0.95) or Shepard diagrams to detect outliers. Interpretation involves plotting the points, labeling axes based on the variance explained by each dimension (e.g., the first axis capturing 50% of the variance), and relating clusters or positions to substantive factors, such as perceptual attributes in sensory data. For illustration, consider applying MDS to a small dataset of 10 cities with pairwise travel times as dissimilarities: (1) Construct the dissimilarity matrix from the travel times, scaling to z-scores; (2) Choose the non-metric type and k=2; (3) Initialize random points; (4) Iterate SMACOF until stress <0.05 (e.g., 30 iterations); (5) Plot the map, interpreting the x-axis as east-west separation (60% variance) and y-axis as north-south (30%), revealing geographic clusters. This flowchart—data preparation → model selection → initialization → optimization → evaluation → visualization—guides practical application across datasets.
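One way to operationalize the dimensionality-selection step is to fit the model at several values of k and inspect the resulting stress values for an elbow; the sketch below does this with scikit-learn's MDS on a precomputed dissimilarity matrix (the random data are purely illustrative, and the exact normalization of the stress_ attribute depends on the scikit-learn version and settings).
python
import numpy as np
from sklearn.manifold import MDS

# Illustrative data: 20 objects described by 6 noisy features.
rng = np.random.default_rng(0)
data = rng.normal(size=(20, 6))
delta = np.linalg.norm(data[:, None] - data[None, :], axis=-1)

# Fit non-metric MDS for k = 1..5 and record the final stress of each solution.
stress_by_k = {}
for k in range(1, 6):
    mds = MDS(n_components=k, metric=False, dissimilarity="precomputed",
              random_state=42, n_init=4)
    mds.fit(delta)
    stress_by_k[k] = mds.stress_
print(stress_by_k)  # look for the "elbow" where extra dimensions stop helping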

Optimization and Iteration Methods

In metric multidimensional scaling (MDS), iterative optimization often employs gradient-based methods to minimize the stress function, which measures the discrepancy between observed distances \delta_{ij} and configuration distances d_{ij}(X). The raw stress is defined as \sigma(X) = \sqrt{\sum_{i < j} (d_{ij}(X) - \delta_{ij})^2}, and optimization proceeds by computing partial derivatives of the squared stress with respect to the coordinates x_{ik} of point i in dimension k: \frac{\partial \sigma^2}{\partial x_{ik}} = 2 \sum_{j \neq i} \frac{(d_{ij}(X) - \delta_{ij})(x_{ik} - x_{jk})}{d_{ij}(X)}. This gradient guides adjustments in the steepest descent method, where the configuration updates as X^{(t+1)} = X^{(t)} - \alpha \nabla \sigma^2(X^{(t)}), with step size \alpha selected via line search to reduce stress. Conjugate gradient methods enhance efficiency by accumulating search directions orthogonal with respect to the Hessian approximation, accelerating convergence compared to plain steepest descent while maintaining the same gradient computation. For non-metric MDS, optimization incorporates ordinal constraints via alternating least squares, where the process iterates between updating the configuration to minimize a metric stress (using gradient descent as above) with a fixed monotone transformation f applied to the input dissimilarities, and updating f to ensure monotonicity. The transformation update employs isotonic regression, which solves \min \sum (f(\delta_{ij}) - d_{ij}(X))^2 subject to f being non-decreasing, typically via a pooling algorithm that merges tied values to enforce order while minimizing squared error. This alternation, pioneered by Kruskal in 1964, preserves rank-order information without assuming interval-scale data, converging to a local minimum of the non-metric stress \sqrt{\sum (d_{ij}(X) - f(\delta_{ij}))^2}. Advanced optimizers like the SMACOF (Scaling by MAjorizing a COmplicated Function) algorithm address convergence speed by majorizing the non-convex stress with a simpler quadratic surrogate at each iteration, ensuring monotonic decrease. SMACOF is particularly effective for minimizing stress via the Guttman transform X^{(t+1)} = V^{+} B(X^{(t)}) X^{(t)}, where V is derived from the weights (a generalized Laplacian, which is singular and therefore inverted via the Moore-Penrose pseudo-inverse), and B(X^{(t)}) is constructed from the ratios b_{ij} = \delta_{ij} / d_{ij}(X^{(t)}) for d_{ij} > 0, with compensating diagonal entries. Initial configurations often derive from classical MDS via eigenvalue decomposition of the double-centered matrix of squared dissimilarities, providing a strong starting point in the low-dimensional subspace. SMACOF extends to non-metric stress by incorporating monotone regression in the majorization step, achieving faster convergence than gradient methods due to guaranteed descent. For the related S-stress, which minimizes discrepancies in squared distances \sqrt{\sum (d_{ij}^2(X) - \delta_{ij}^2)^2 / \sum \delta_{ij}^4}, the exactly Euclidean case is solved non-iteratively using classical MDS: perform an eigenvalue decomposition of the double-centered matrix of squared dissimilarities -\frac{1}{2} J \Delta^2 J, retaining the top k eigenvectors scaled by the square roots of the positive eigenvalues. Iterative methods like adapted SMACOF may be used for constrained or non-Euclidean variants of S-stress. Convergence in these iterative methods can trap solutions in local minima owing to the non-convexity of stress landscapes, mitigated by multiple random initializations or seeded starts from classical MDS solutions to explore diverse basins.
Each iteration incurs computational complexity O(n^2 k), dominated by pairwise distance calculations and matrix operations, where n is the number of objects and k the embedding dimension; for large n, this scales quadratically, prompting parallel implementations. As an illustrative example, consider SMACOF applied to minimize stress for a dissimilarity matrix of 10 objects in 2 dimensions. Begin with an initial configuration X^{(0)} from classical MDS (e.g., retaining the top two eigenvectors scaled by the square roots of their eigenvalues). In each iteration, compute the distances D(X^{(t)}); form B(X^{(t)}) from the ratios b_{ij} = \delta_{ij} / d_{ij}(X^{(t)}); construct V from the weights; then update X^{(t+1)} = V^{+} B(X^{(t)}) X^{(t)} using the pseudo-inverse. Repeat, monitoring the stress until its change falls below a tolerance of 10^{-5}; for this small scale, convergence typically occurs within 5-10 iterations, yielding coordinates that reproduce the input distances with low distortion.
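A compact, unweighted sketch of the SMACOF iteration described above follows (all w_{ij} = 1, in which case the update V^{+} B(X)X reduces to (1/n) B(X) X); the random initialization, tolerance, and function names are illustrative choices rather than a library implementation.
python
import numpy as np

def smacof(delta, k=2, n_iter=300, tol=1e-6, seed=0):
    """Minimal unweighted SMACOF: majorization via the Guttman transform."""
    n = delta.shape[0]
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, k))                      # random initial configuration
    prev_stress = np.inf
    for _ in range(n_iter):
        d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        # B(X): off-diagonal entries -delta_ij/d_ij (0 where d_ij = 0), zero row sums.
        with np.errstate(divide="ignore", invalid="ignore"):
            ratio = np.where(d > 0, delta / d, 0.0)
        B = -ratio
        np.fill_diagonal(B, 0.0)
        np.fill_diagonal(B, -B.sum(axis=1))
        X = B @ X / n                                # Guttman transform with unit weights
        d = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
        stress = np.sum(np.triu(delta - d, 1) ** 2)  # raw stress summed over i < j
        if prev_stress - stress < tol:
            break
        prev_stress = stress
    return X, stress

# Example: recover a 2D configuration from the distances of 10 random points.
rng = np.random.default_rng(1)
pts = rng.normal(size=(10, 2))
delta = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
X, stress = smacof(delta)
print(round(stress, 6))  # near zero, since the input distances are exactly Euclidean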

Applications

In Psychometrics and Social Sciences

In psychometrics, multidimensional scaling (MDS) has been instrumental in perceptual scaling, particularly for analyzing similarities in sensory experiences such as odors, tastes, and tactile sensations. For instance, researchers have applied MDS to sorting tasks where participants group stimuli based on perceived similarity, revealing underlying perceptual dimensions that traditional methods like factor analysis might overlook due to their assumption of linearity. This approach originated in efforts to quantify subjective judgments, allowing for the representation of how individuals perceive sensory arrays in low-dimensional spaces. MDS also facilitates attitude scaling in surveys, where it models respondents' preferences or evaluations of stimuli, such as emotional states or intensities, by representing both subjects and items as points in a shared space. In these applications, unfolding models—extensions of MDS tailored to preference data—position individuals' ideal points relative to stimuli, enabling the recovery of latent preference structures from rank-order or paired-comparison responses. Developed by Coombs in the 1950s, these models treat preferences as distances from an ideal point, proving useful for the ordinal data common in psychological surveys, where non-metric MDS preserves rank orders without assuming interval-level measurements. In the social sciences, MDS has been employed to analyze kinship structures and linguistic distances, mapping relational proximities to uncover cultural or social patterns. For example, it visualizes linguistic variations across dialects or languages by treating phonetic or syntactic differences as distances, revealing clusters that align with geographic or historical influences. A seminal case from the 1960s involves Shepard's application of non-metric MDS to color-perception experiments, where participants judged similarities among color chips; the resulting configurations highlighted psychological dimensions such as hue, distinct from physical spectra, and demonstrated individual differences in perceptual structures. The interpretability of MDS outputs in these fields lies in their ability to reveal latent traits, such as a "warmth" dimension in semantic spaces derived from emotional or attitudinal data. For instance, mappings of semantic differentials—adjective pairs like "warm-cold"—yield spaces where proximity reflects psychological associations, aiding in the identification of underlying constructs like approachability in social perceptions. Historically, MDS advanced scaling theory following Thurstone's 1927 law of comparative judgment, which focused on unidimensional attitudes; extensions by Torgerson in 1952 and Kruskal in 1964 introduced multidimensional representations, transforming scaling from linear models to geometric ones that better capture complex human judgments.

In Data Visualization and Machine Learning

In data visualization, multidimensional scaling (MDS) serves as a powerful technique for projecting high-dimensional embeddings, such as word vectors from language models, into 2D or 3D spaces to reveal semantic relationships and clusters. For instance, MDS can map thousands of word embeddings derived from such models, allowing analysts to interpret similarities based on preserved distances in the lower-dimensional plot, where semantically related terms appear closer together. This approach contrasts with local-structure-focused methods like t-SNE, as MDS minimizes stress across all pairwise distances, ensuring a more faithful representation of the overall data geometry. Within machine learning pipelines, MDS functions as a preprocessing step for clustering algorithms by reducing dimensionality while retaining distance-based similarities, which enhances subsequent tasks like k-means or hierarchical clustering on high-dimensional datasets. In manifold learning applications, particularly in bioinformatics, MDS embeds protein structures into low-dimensional spaces to facilitate analysis of conformational similarities; for example, classical MDS has been applied to classify protein folds by projecting structural distance matrices, enabling automated grouping of homologous proteins. Advancements such as Isomap, introduced in 2000—an MDS variant that approximates geodesic distances on nonlinear manifolds—have extended these uses to capture intrinsic data geometries in complex datasets, including graph-based representations in recommendation systems where user-item interactions are embedded to compute similarity matrices for personalized suggestions. A notable application involves visualizing expression profiles in single-cell RNA-sequencing (scRNA-seq) data, where metric MDS scales to datasets with millions of cells by solving the underlying optimization problem via approximations, thereby uncovering cell-type clusters and trajectories while preserving biological metrics such as correlation-based dissimilarities. This application highlights MDS's advantage in maintaining global structure—such as inter-cluster separations—in contrast to local methods like t-SNE, which may distort large-scale relationships despite superior local preservation, making MDS preferable for exploratory analyses requiring holistic overviews.

Implementations

Software Libraries and Tools

Several prominent software libraries and tools facilitate the implementation of multidimensional scaling (MDS) across various programming languages and statistical environments. In R, the vegan package provides robust support for non-metric MDS (NMDS), particularly tailored for ecological applications where it ordinates community data based on dissimilarity matrices using functions like metaMDS and monoMDS. The smacof package offers a comprehensive suite for MDS, implementing the majorization (SMACOF) algorithm for stress minimization in both metric and non-metric variants, including extensions for individual differences scaling and bootstrap diagnostics. For handling larger datasets, landmark MDS approximations are available through specialized extensions like the lmds package, which reduces computational cost by selecting representative subsets of points. In Python, the scikit-learn library includes the MDS class within its manifold module, enabling metric and non-metric MDS computations on dissimilarity matrices, with options for normalized stress and integration into broader pipelines. Commercial and proprietary tools also provide accessible MDS implementations. MATLAB's Statistics and Machine Learning Toolbox features the mdscale function for metric and non-metric (nonclassical) MDS and cmdscale for classical MDS, supporting visualization of point configurations in low-dimensional spaces. In SPSS, the PROXSCAL procedure performs proximity scaling for individual differences MDS, accommodating various data structures and constraint models. SAS offers PROC MDS in its statistical procedures, estimating coordinates from dissimilarity data with options for metric and non-metric analyses. These tools emphasize open-source accessibility, with R and Python packages dominating since the early 2000s due to their flexibility and community-driven updates; integrations, such as vegan's interoperability with the wider R ecosystem for ecological ordinations, enhance usability.

Programming Examples

Multidimensional scaling (MDS) implementations are available in various programming languages, enabling researchers to apply the technique to distance matrices for visualization and exploratory analysis. This section presents executable examples in R and Python, focusing on classical and non-metric variants, along with interpretation steps and best practices. These snippets use fixed random seeds for reproducibility and assume basic familiarity with the respective environments. In R, the cmdscale() function from the base stats package performs classical MDS on a symmetric distance matrix, recovering coordinates via eigenvalue decomposition. Consider a sample distance matrix derived from three points in 2D space: (0,0), (1,0), and (0,1), with distances forming the matrix D. The code below computes the MDS embedding and plots the recovered coordinates:
r
# Sample distance matrix (Euclidean distances between (0,0), (1,0), (0,1))
D <- as.dist(matrix(c(0, 1, 1, 1, 0, sqrt(2), 1, sqrt(2), 0), nrow=3))
# Perform classical MDS with k=2 dimensions
fit <- cmdscale(D, k=2, eig=TRUE)
# Extract coordinates
coords <- fit$points
# Plot the recovered configuration
plot(coords, xlab="Dimension 1", ylab="Dimension 2")
text(coords, labels=c("P1", "P2", "P3"), pos=3)
# Eigenvalues for dimensionality assessment
fit$eig[1:2]  # First two eigenvalues: approximately 1.0 and 0.33
The resulting coordinates approximate the original points, with eigenvalues indicating the variance explained by each dimension (here, roughly 75% and 25% of the total). For Python, the sklearn.manifold.MDS class supports both metric and non-metric MDS via the metric parameter, optimizing embeddings to minimize stress. On synthetic data—such as distances from five points on a circle—the fit_transform method yields the configuration. To perform non-metric MDS, set metric=False and pass a precomputed dissimilarity matrix; post-fit stress can be computed manually using the raw stress formula, after rescaling the embedded distances because a non-metric solution is only determined up to an overall scale. The example below generates circular data, fits the model, and visualizes the results:
python
import numpy as np
from sklearn.manifold import MDS
import matplotlib.pyplot as plt

# Synthetic data: 5 points on a unit circle
theta = np.linspace(0, 2*np.pi, 5, endpoint=False)
true_points = np.column_stack((np.cos(theta), np.sin(theta)))
# Compute Euclidean distance matrix
D = np.zeros((5, 5))
for i in range(5):
    for j in range(5):
        D[i, j] = np.linalg.norm(true_points[i] - true_points[j])

# Fit non-metric MDS (use metric=True for the metric variant)
mds = MDS(n_components=2, metric=False, random_state=42, dissimilarity='precomputed')
coords = mds.fit_transform(D)

# Distances in the embedding; rescale them, since a non-metric solution
# is only determined up to an overall scale factor
fitted_dist = np.array([[np.linalg.norm(coords[i] - coords[j]) for j in range(5)] for i in range(5)])
scale = np.sum(D * fitted_dist) / np.sum(fitted_dist**2)
fitted_dist *= scale

# Raw stress: sqrt(sum((distances - fitted_distances)^2) / sum(distances^2))
stress = np.sqrt(np.sum((D - fitted_dist)**2) / np.sum(D**2))
print(f"Stress: {stress:.3f}")  # small values indicate good distance preservation

# Visualize
plt.scatter(coords[:, 0], coords[:, 1])
plt.title("MDS Embedding")
plt.show()
The embedding coords provides 2D coordinates close to the original circle points, with stress values below 0.1 indicating a good fit; lower stress reflects better preservation of distances. Interpreting MDS outputs involves extracting the coordinate matrix (e.g., fit$points in R or mds.embedding_ in Python), which represents points in the reduced space. Stress computation quantifies fit—values under 0.05 suggest excellent recovery, 0.05-0.10 good, and above 0.20 poor—while visualization via scatter plots or biplots reveals clusters or distortions. For dimensionality selection, inspect eigenvalues from the fit (e.g., fit$eig in R or via singular value decomposition of the double-centered matrix in Python); retain dimensions where cumulative eigenvalues exceed 70-80% of total variance. Best practices include setting random seeds (e.g., random_state=42 in scikit-learn or set.seed(42) in R) for reproducible initializations, especially in iterative non-metric MDS, and validating inputs as symmetric matrices with non-negative entries and zero diagonals to avoid numerical issues. Preprocessing distances (e.g., via monotonic regression for non-metric analyses) enhances robustness. For large datasets (n > 1000), landmark MDS reduces complexity by embedding a subset of "landmark" points and extrapolating the others, achieving roughly O(n k^2) time versus O(n^3) for full MDS. Efficient implementations using approximations such as the Nyström extension are available in the research literature and can be adapted in these environments.

Limitations and Extensions

Common Challenges and Limitations

One major challenge in applying multidimensional scaling (MDS) arises from the optimization process, where iterative algorithms for stress minimization often converge to local minima rather than the global optimum, potentially yielding degenerate solutions such as collapsed points or uninformative configurations. These local minima can be exacerbated by poor initial configurations, leading to multiple suboptimal solutions that vary in interpretability. Additionally, MDS configurations suffer from rotational and reflectional ambiguity, as the axes lack a fixed orientation relative to the data, necessitating post-hoc alignment techniques like Procrustes analysis to facilitate comparison across runs or with external references. A key limitation of classical MDS is its computational scalability, stemming from the O(n²) storage and processing requirements for the dissimilarity matrix, which restricts practical application to datasets with several thousand to tens of thousands of objects, depending on available computational resources, without specialized approximations. Furthermore, MDS exhibits high sensitivity to outliers, where even a few anomalous dissimilarities can disproportionately distort the global configuration and inflate stress values, as seen in cases where erroneous pairwise distances lead to grossly inaccurate embeddings. Interpretability poses another hurdle, as the recovered dimensions often have arbitrary scales and orientations that do not correspond to meaningful psychological or physical constructs without domain-specific knowledge, risking overinterpretation of spurious patterns. For validation, researchers employ cross-validation to assess reliability and determine optimal dimensionality, often comparing MDS fits (e.g., stress below 0.10 for good solutions, with values up to 0.20 considered acceptable) against baselines like principal component analysis (PCA), which may reveal MDS's advantages on nonlinear data but poorer performance in high-noise scenarios where stress exceeds 0.2, indicating failure modes in perceptual datasets with measurement errors. Weighted multidimensional scaling (WMDS) extends classical MDS by incorporating weights to handle heterogeneous data, where dissimilarities vary in reliability or importance across pairs, allowing for more robust embeddings in applications like sensor networks and localization. This approach derives weighting matrices that emphasize accurate measurements, improving performance over unweighted methods in noisy or uneven datasets. Post-2015 developments have integrated neural networks into MDS frameworks, termed deep MDS, to capture non-linear structures in high-dimensional data beyond the linear assumptions of traditional MDS. These methods employ deep architectures, such as convolutional or fully connected networks, to optimize embeddings iteratively, enabling applications in shape recovery and large-scale dimensionality reduction. Isomap, introduced as a geodesic extension of MDS, preserves manifold structures by approximating intrinsic distances via shortest paths on neighborhood graphs, addressing limitations of Euclidean assumptions in nonlinear data. MDS also connects to spectral clustering and Laplacian eigenmaps, where graph Laplacian eigenvectors provide low-dimensional representations that align with MDS stress minimization, unifying dimensionality reduction and clustering tasks. In the 2020s, scalable MDS variants leverage stochastic gradient descent (SGD) to optimize large-scale embeddings efficiently, reducing computational cost from O(n^2) per iteration to near-linear time for datasets exceeding millions of points.
Recent applications extend MDS to virtual reality (VR) for immersive data visualization, embedding multidimensional datasets in stereoscopic 3D spaces to enable intuitive exploration and interaction. To address gaps in traditional MDS, time-varying MDS adapts embeddings for dynamic data streams, using incremental updates or moving windows to track evolving dissimilarities over time. Integration with autoencoders in deep learning combines MDS's distance preservation with neural representation learning, enhancing non-linear dimensionality reduction for multimodal data. Landmark MDS exemplifies scalability by selecting a subset of representative points (landmarks) to approximate full distances, enabling efficient computation for million-point genomic datasets, such as single-cell sequencing, where it reduces runtime while maintaining fidelity.
