
Distance matrix

In mathematics, computer science, and related fields, a distance matrix is a square matrix that encodes the pairwise distances between a set of objects, points, or vertices, where the entry in the i-th row and j-th column represents the distance from the i-th to the j-th element. These matrices are typically symmetric if the underlying distance measure is symmetric, nonnegative, and have zeros along the main diagonal to indicate zero distance from an object to itself. Distance matrices play a central role in graph theory, where they capture the shortest-path distances between vertices of a graph, facilitating the analysis of connectivity, centrality, and spectral properties such as eigenvalues that inform graph characterization and cospectrality. In data mining and machine learning, they underpin algorithms for hierarchical clustering—by enabling agglomerative merging based on proximity—and multidimensional scaling, which embeds high-dimensional data into lower-dimensional spaces while preserving distances. Euclidean distance matrices, a specialized variant, extend to optimization problems in sensor network localization and molecular conformation determination, often leveraging semidefinite programming for completion or denoising of incomplete data. In broader data analysis applications, they support tasks like k-means and other clustering methods, as well as uses in image recognition, recommendation systems, and bioinformatics pipelines for sequence alignment and phylogenetic tree construction. Efficient computation of distance matrices remains a focus of research, with parallel algorithms and linear algebra techniques addressing scalability for large datasets in areas such as manifold learning.

Fundamentals

Definition and Notation

A distance matrix is a fundamental tool in data analysis for capturing pairwise dissimilarities between a set of objects within a space, where distances quantify how far apart the objects are according to a defined measure. Formally, for a set of n objects, a distance matrix is an n \times n square matrix D = (d_{ij}), where each entry d_{ij} represents the distance between the i-th and j-th object, satisfying d_{ii} = 0 for all i (zero diagonal) and d_{ij} \geq 0 for all i, j (non-negative entries). The matrix is typically denoted by an uppercase letter such as D, with individual elements referred to in lowercase as d_{ij}. This notation emphasizes the matrix's role as a compact representation of all pairwise distances, enabling efficient storage and computation in various analytical contexts. To illustrate, consider a simple set of three points in the two-dimensional Euclidean plane: P_1 = (0,0), P_2 = (1,0), and P_3 = (0,1). The Euclidean distance between any two points (x_1, y_1) and (x_2, y_2) is computed as \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2}. Applying this formula yields: d_{12} = \sqrt{(1-0)^2 + (0-0)^2} = 1, d_{13} = \sqrt{(0-0)^2 + (1-0)^2} = 1, and d_{23} = \sqrt{(0-1)^2 + (1-0)^2} = \sqrt{2}. The resulting distance matrix is thus D = \begin{pmatrix} 0 & 1 & 1 \\ 1 & 0 & \sqrt{2} \\ 1 & \sqrt{2} & 0 \end{pmatrix}, which is symmetric and features zeros on the diagonal.
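For illustration, the following short Python sketch (using NumPy as an assumed tooling choice, since the text does not prescribe an implementation) reproduces this three-point distance matrix:

import numpy as np

points = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])   # P1, P2, P3
# D[i, j] = ||points[i] - points[j]||_2, computed by broadcasting.
diff = points[:, None, :] - points[None, :, :]
D = np.sqrt((diff ** 2).sum(axis=-1))
print(D.round(3))   # [[0. 1. 1.], [1. 0. 1.414], [1. 1.414 0.]]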

Basic Properties

A distance matrix D is symmetric, meaning D_{ij} = D_{ji} for all i, j, because the distance between any two objects is undirected and thus reciprocal by definition. This property holds in general for symmetric dissimilarity measures, where the pairwise distance function satisfies d(i, j) = d(j, i). For a simple example with two objects, the distance matrix takes the form \begin{pmatrix} 0 & d \\ d & 0 \end{pmatrix}, where d > 0 is the distance between them, illustrating the off-diagonal equality. The main diagonal of a distance matrix consists entirely of zeros, so D_{ii} = 0 for all i, as the distance from an object to itself is defined to be zero. This follows directly from the definition of a dissimilarity measure applied to identical objects, where d(i, i) = 0. Consequently, the trace of the matrix, \operatorname{tr}(D) = \sum_i D_{ii}, is zero, which reflects the absence of self-distance contributions and simplifies certain matrix analyses. All entries in a distance matrix are non-negative, so D_{ij} \geq 0 for all i, j, with equality holding when i = j or when the objects coincide in the underlying space. This non-negativity is a foundational requirement for dissimilarity matrices, ensuring distances represent valid measures of separation. In cases where distinct objects overlap exactly, such as two points at the same location in Euclidean space, D_{ij} = 0 for i \neq j, serving as an exception to strict positivity off the diagonal but consistent with the behavior of a pseudometric. These properties—symmetry, zero diagonal, and non-negativity—collectively ensure that distance matrices are well-suited for algorithms assuming symmetric pairwise relations, such as those in hierarchical clustering, where the matrix's structure facilitates efficient computation of linkages without directional biases.

Types and Classifications

Metric Distance Matrices

A metric distance matrix arises from a set of objects in a metric space, where the matrix entries represent pairwise distances that satisfy the defining axioms of a metric. Specifically, for a set X = \{x_1, \dots, x_n\} and distance function d: X \times X \to [0, \infty), the matrix D = [d_{ij}] with d_{ij} = d(x_i, x_j) must fulfill: (1) positivity, ensuring d_{ij} > 0 if i \neq j and d_{ii} = 0 for all i (identity of indiscernibles); (2) symmetry, d_{ij} = d_{ji} for all i, j; and (3) the triangle inequality, d_{ik} \leq d_{ij} + d_{jk} for all i, j, k. These properties guarantee that the distances behave consistently as measures in a metric space, preventing anomalies like negative or asymmetric distances that could distort geometric interpretations. The triangle inequality, in particular, imposes a global constraint on the matrix entries, ensuring no direct path is longer than any indirect path via an intermediary point. This axiom is crucial for applications requiring embeddability into metric spaces, as violations would imply inconsistencies in the underlying structure. For instance, in a metric distance matrix, the entries must form a valid "distance geometry" where points can be realized without contradictions. The concept of such matrices originates in the study of Minkowski spaces from the early 20th century, where distances were formalized in higher-dimensional frameworks. A prominent example is the Euclidean distance matrix for points in \mathbb{R}^n, defined by d_{ij} = \|\mathbf{x}_i - \mathbf{x}_j\|_2 = \sqrt{\sum_{k=1}^n (x_{i k} - x_{j k})^2}. This satisfies all metric axioms: positivity and zero diagonal follow from vector norms, symmetry from the fact that \|\mathbf{x}_i - \mathbf{x}_j\| = \|\mathbf{x}_j - \mathbf{x}_i\|, and the triangle inequality from the norm's subadditivity (\|\mathbf{u} + \mathbf{v}\|_2 \leq \|\mathbf{u}\|_2 + \|\mathbf{v}\|_2). Consider three points in \mathbb{R}^2: A = (0,0), B = (3,0), C = (0,4). The distances are d_{AB} = 3, d_{AC} = 4, d_{BC} = 5, forming the matrix D = \begin{pmatrix} 0 & 3 & 4 \\ 3 & 0 & 5 \\ 4 & 5 & 0 \end{pmatrix}. Verification shows the triangle inequality holds: d_{BC} = 5 \leq 3 + 4 = d_{AB} + d_{AC}, d_{AC} = 4 \leq 3 + 5 = d_{AB} + d_{BC}, and d_{AB} = 3 \leq 4 + 5 = d_{AC} + d_{BC}. To verify if a given matrix is metric, first confirm non-negativity, zero diagonal, and symmetry, then check the triangle inequality for all triples (i,j,k), which requires O(n^3) time in the worst case. An efficient alternative uses the Floyd-Warshall algorithm to compute all-pairs shortest paths on the matrix treated as a complete graph's edge weights; if any computed shortest path distance is strictly less than the original entry, the triangle inequality is violated, as the matrix would not already represent the minimal paths. This method runs in O(n^3) time and directly identifies inconsistencies.
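As a hedged illustration of the verification procedure just described, the following Python sketch (NumPy assumed) checks symmetry, the zero diagonal, non-negativity, and the triangle inequality by comparing the matrix against its all-pairs shortest paths:

import numpy as np

def is_metric(D, tol=1e-9):
    # Check symmetry, zero diagonal, non-negativity, then the triangle
    # inequality by comparing D against its all-pairs shortest paths.
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    if not (np.allclose(D, D.T, atol=tol)
            and np.allclose(np.diag(D), 0.0, atol=tol)
            and (D >= -tol).all()):
        return False
    S = D.copy()
    for k in range(n):
        # Floyd-Warshall update: if any entry shrinks, some triple violates d_ik <= d_ij + d_jk.
        S = np.minimum(S, S[:, [k]] + S[[k], :])
    return np.allclose(S, D, atol=tol)

D = np.array([[0, 3, 4], [3, 0, 5], [4, 5, 0]], dtype=float)   # the 3-4-5 example
print(is_metric(D))   # True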

Non-Metric Distance Matrices

Non-metric distance matrices are constructed from dissimilarity measures between objects that fail to satisfy one or more axioms required for a metric, including symmetry (d(x, y) = d(y, x)), non-negativity (d(x, y) \geq 0), the identity of indiscernibles (d(x, y) = 0 if and only if x = y), or the triangle inequality (d(x, z) \leq d(x, y) + d(y, z) for all x, y, z). These matrices are particularly valuable in fields like machine learning and psychology, where strict metric properties may not align with the underlying data, allowing for more expressive representations of similarity. A common violation occurs with the triangle inequality, as seen in the squared Euclidean distance, which computes d(x, y) = \|x - y\|^2. This measure is non-negative and symmetric but fails the triangle inequality; for points x = 0, y = 0.5, z = 1 in one dimension, d(x, z) = 1 > d(x, y) + d(y, z) = 0.25 + 0.25 = 0.5. Another example is dynamic time warping (DTW), used for aligning sequences of varying lengths, which also violates the triangle inequality even when based on a local metric. For sequences X = (\alpha, \beta, \gamma), Y = (\alpha, \beta, \beta, \gamma), and Z = (\alpha, \gamma, \gamma) with unit mismatch cost, DTW(X, Y) = 0, DTW(X, Z) = 1, but DTW(Y, Z) = 2 > 1. Asymmetry provides another key violation, exemplified by the Kullback-Leibler (KL) divergence, d(p, q) = \sum p(x) \log \frac{p(x)}{q(x)} for probability distributions p and q, which measures information loss but satisfies d(p, q) \neq d(q, p) in general, alongside failing the triangle inequality. In psychological similarity models, such as Tversky's feature-based approach, dissimilarities can be asymmetric or fail positivity, reflecting diagnostic and common feature emphases in human judgments. Median distances, which take the median absolute feature difference, further violate the triangle inequality by focusing on central similarities without global constraints. To construct a non-metric distance matrix, pairwise dissimilarities are computed using such functions and arranged in an n \times n array for n objects. Consider four points on the real line at positions 0, 0.5, 1, and 2, labeled A, B, C, D, with squared distances forming the matrix below, which violates the triangle inequality (e.g., d(A, D) = 4 > d(A, C) + d(C, D) = 1 + 1 = 2):
      A      B      C      D
A     0      0.25   1      4
B     0.25   0      0.25   2.25
C     1      0.25   0      1
D     4      2.25   1      0
This example illustrates how non-metric matrices capture nonlinear relationships unsuitable for standard metrics. The primary advantages of non-metric distance matrices lie in their flexibility for real-world data exhibiting asymmetries, such as directed information flows in KL divergence, or violations of geometric axioms, as in time series with irregular alignments under DTW. They enable robust modeling in non-Euclidean settings, like time series analysis or cognitive modeling, where enforcing metric properties could distort natural dissimilarities.
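The squared-Euclidean example above can be reproduced in a few lines; this NumPy sketch builds the non-metric matrix for the points 0, 0.5, 1, and 2 and exhibits the violated triple:

import numpy as np

x = np.array([0.0, 0.5, 1.0, 2.0])          # points A, B, C, D on the real line
D = (x[:, None] - x[None, :]) ** 2          # squared Euclidean "distances" (non-metric)
print(D)                                    # reproduces the 4x4 table above
A, C, Dd = 0, 2, 3
print(D[A, Dd], "<=", D[A, C] + D[C, Dd])   # 4.0 <= 2.0 -> triangle inequality violated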

Additive and Ultrametric Distance Matrices

An additive distance matrix arises from an unrooted edge-weighted tree in which the distance between any two leaves corresponds to the sum of the positive edge weights along the unique path connecting them. Such matrices satisfy the four-point condition: for any four distinct taxa a, b, c, d, the three possible sums of pairwise distances d(a,b) + d(c,d), d(a,c) + d(b,d), and d(a,d) + d(b,c) include two that are equal and at least as large as the third. Buneman's theorem establishes that a distance matrix is additive if and only if this condition holds for every quartet of taxa, ensuring a unique tree with edge lengths that exactly realizes the matrix distances. Ultrametric distance matrices represent a stricter subclass of additive matrices, satisfying the ultrametric inequality: for all taxa i, j, k, d(i,j) \leq \max(d(i,k), d(j,k)). This condition implies a rooted tree metric where all leaves are equidistant from the root, often modeled as a dendrogram in hierarchical clustering, with distances corresponding to the height of the lowest common ancestor of the pair. In such structures, the inequality enforces a nested hierarchy in which subclusters form at increasing levels, making ultrametrics particularly suitable for representing evolutionary clocks or uniform divergence rates. Tree reconstruction from an additive distance matrix can be achieved using algorithms like neighbor-joining, which iteratively identifies and joins the most closely related taxa based on a rate-corrected distance measure. The process begins with the full matrix, computes an adjusted matrix Q(i,j) = (n-2)d(i,j) - r_i - r_j, where n is the number of taxa and r_i is the sum of distances from taxon i, selects the pair minimizing Q, introduces a new internal node, and updates the matrix for remaining taxa using branch length formulas derived from the joins, continuing until the tree is complete. This method exactly recovers the true tree for additive matrices and approximates well for near-additive cases. Consider a four-taxon unrooted tree with topology ((A,B),(C,D)) and edge weights A-1.5-x-1.5-B, x-2-y-0.5-C, y-1.5-D; the resulting additive distance matrix is:
      A    B    C    D
A     0    3    4    5
B     3    0    4    5
C     4    4    0    2
D     5    5    2    0
The four-point sums for A,B,C,D are d(A,B)+d(C,D)=5, d(A,C)+d(B,D)=9, d(A,D)+d(B,C)=9, confirming additivity with two equal maxima. This matrix is not ultrametric, as d(A,D)=5 > \max(d(A,C),d(D,C))= \max(4,2)=4. In contrast, adjusting the weights to A-1-x-1-B, x-2-y-1-C, y-1-D yields distances d(A,B)=2, d(C,D)=2, d(A,C)=d(A,D)=d(B,C)=d(B,D)=4, satisfying the ultrametric inequality (e.g., d(A,D)=4 \leq \max(d(A,B),d(D,B))= \max(2,4)=4) and corresponding to a balanced rooted tree in which every root-to-leaf path has length 2. Thus, ultrametrics form a proper subset of additive matrices, with the former imposing equal root-to-leaf path lengths absent in the general additive case.
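The four-point and ultrametric conditions lend themselves to direct verification; the following Python sketch (an illustrative implementation, with function names chosen here for clarity) checks both conditions for the worked example:

from itertools import combinations

def is_additive(D, tol=1e-9):
    # Four-point condition: for every quartet, the two largest of the three
    # pairwise sums must be equal (within tolerance).
    for a, b, c, d in combinations(range(len(D)), 4):
        sums = sorted([D[a][b] + D[c][d], D[a][c] + D[b][d], D[a][d] + D[b][c]])
        if abs(sums[2] - sums[1]) > tol:
            return False
    return True

def is_ultrametric(D, tol=1e-9):
    # Ultrametric inequality: in every triple, the two largest distances are equal.
    for i, j, k in combinations(range(len(D)), 3):
        m = sorted([D[i][j], D[i][k], D[j][k]])
        if m[2] - m[1] > tol:
            return False
    return True

D = [[0, 3, 4, 5], [3, 0, 4, 5], [4, 4, 0, 2], [5, 5, 2, 0]]   # tree ((A,B),(C,D))
print(is_additive(D), is_ultrametric(D))   # True False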

Mathematical Properties and Analysis

Symmetry and Triangle Inequality

A distance matrix D = (d_{ij}) is symmetric, meaning d_{ij} = d_{ji} for all i, j, as distances are undirected in standard definitions. This symmetry implies that D is a real symmetric matrix, and thus all its eigenvalues are real numbers. In the Euclidean case, where distances arise from points \mathbf{x}_i \in \mathbb{R}^p via d_{ij} = \|\mathbf{x}_i - \mathbf{x}_j\|_2, the matrix D exhibits further structure. Specifically, the double-centered matrix B = -\frac{1}{2} H D^{(2)} H, where D^{(2)} = (d_{ij}^2) is the matrix of squared distances and H = I - \frac{1}{n} \mathbf{1}\mathbf{1}^T is the centering matrix, is positive semi-definite (PSD), with rank at most p. To see this, note that B equals the Gram matrix X X^T for the n \times p matrix X of centered coordinates (one point per row), and Gram matrices are PSD by construction since \mathbf{z}^T B \mathbf{z} = \|X^T \mathbf{z}\|^2 \geq 0 for all \mathbf{z} \in \mathbb{R}^n. Conversely, D corresponds to Euclidean distances if and only if B is PSD. The triangle inequality, a defining property of metric distance matrices, states that for all indices i, j, k, d_{ik} \leq d_{ij} + d_{jk}. For Euclidean distances, this follows from the triangle inequality in the underlying normed space. Let \mathbf{x}_i, \mathbf{x}_j, \mathbf{x}_k \in \mathbb{R}^p; then d_{ik} = \|\mathbf{x}_i - \mathbf{x}_k\|_2 = \|(\mathbf{x}_i - \mathbf{x}_j) + (\mathbf{x}_j - \mathbf{x}_k)\|_2 \leq \|\mathbf{x}_i - \mathbf{x}_j\|_2 + \|\mathbf{x}_j - \mathbf{x}_k\|_2 = d_{ij} + d_{jk}, by the subadditivity of the Euclidean norm. In matrix form, the inequality requires D_{ik} \leq D_{ij} + D_{jk} entrywise for every triple (i, j, k). Non-metric distance matrices may violate the triangle inequality. To correct this and obtain a metric, one can interpret D as the edge-weight matrix of a complete graph and compute all-pairs shortest paths, which enforces the inequality via path minimization. The Floyd-Warshall algorithm achieves this in O(n^3) time. Its pseudocode is as follows:
for k = 1 to n:
    for i = 1 to n:
        for j = 1 to n:
            if D[i][j] > D[i][k] + D[k][j]:
                D[i][j] = D[i][k] + D[k][j]
This updates distances iteratively, considering each vertex k as an intermediate point. The resulting matrix satisfies the triangle inequality, since no shortest path can exceed the combination of a direct and an indirect route. Verifying the triangle inequality in a given n \times n distance matrix requires checking all n^3 triples (i, j, k), yielding O(n^3) time for dense matrices.
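The Euclidean embeddability criterion discussed above (double-centering the squared distances and testing positive semi-definiteness) can likewise be checked numerically; the following NumPy sketch is one possible formulation:

import numpy as np

def euclidean_embeddable(D, tol=1e-8):
    # Schoenberg criterion: D holds Euclidean distances iff
    # B = -1/2 * H * D^(2) * H is positive semi-definite.
    D = np.asarray(D, dtype=float)
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (D ** 2) @ H
    return np.linalg.eigvalsh(B).min() >= -tol

D = np.array([[0, 3, 4], [3, 0, 5], [4, 5, 0]], dtype=float)   # 3-4-5 triangle
print(euclidean_embeddable(D))   # True: realizable in the plane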

Distance Polynomials and Spectra

The distance polynomial of a graph G with distance matrix D is defined as the characteristic polynomial P_D(x) = \det(xI - D), where I is the identity matrix. This encodes algebraic information about the distances in the graph, with its coefficients related to the principal minors of D. In particular, the roots of P_D(x) provide insights into path enumeration and structural properties, as explored in early work on distance matrix determinants. Applications include deriving formulas for the number of paths of certain lengths weighted by their distances, leveraging the polynomial's expansion to count walks influenced by edge traversals. The distance spectrum of a graph consists of the eigenvalues of its distance matrix D, denoted \{\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n\}. For a connected graph on n vertices, D is symmetric and nonnegative, hence by the Perron-Frobenius theorem it has a simple largest eigenvalue \lambda_1 > 0 (the distance spectral radius) with a positive eigenvector; since the trace of D is zero, the remaining eigenvalues sum to -\lambda_1, so at least one is negative. The Wiener index W(G), defined as half the sum of all pairwise distances \frac{1}{2} \sum_{i,j=1}^n d_{ij} = \frac{1}{2} \mathbf{1}^T D \mathbf{1} (where \mathbf{1} is the all-ones vector), relates to the distance matrix via this quadratic form, providing a global measure of compactness that bounds the spectral radius. For trees, the inertia of D is precisely (1, n-1, 0), meaning exactly one positive eigenvalue, n-1 negative eigenvalues, and no zero eigenvalue, ensuring D is invertible with \det(D) = (-1)^{n-1} (n-1) 2^{n-2}. In general connected graphs, the multiplicity of the zero eigenvalue is typically zero, reflecting the full rank of D. Explicit spectra are known for specific families; for the path graph P_n, the distance eigenvalues are given by a closed-form expression involving trigonometric functions, and P_n uniquely maximizes the distance spectral radius among all connected n-vertex graphs. In chemical graph theory, the distance energy E_D(G) = \sum_{i=1}^n |\lambda_i| sums the absolute eigenvalues and serves as a topological descriptor for molecular stability and reactivity, correlating with physicochemical properties in quantitative structure-property relationships. Seminal studies established bounds and characterizations, such as E_D(G) \geq 2(n-1) for trees, highlighting its role in distinguishing non-isomorphic structures. Computing the distance spectrum involves spectral decomposition of D, typically via the QR algorithm or Lanczos method for large sparse approximations, with standard dense implementations requiring O(n^3) time complexity for an n-vertex graph. These eigenvalues yield graph invariants like the distance spectral radius, useful for comparative analysis in optimization problems.
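As a concrete illustration, the following Python sketch (NumPy assumed) computes the distance spectrum, Wiener index, and distance energy for the path graph P_6, whose distance matrix has entries |i - j|:

import numpy as np

n = 6
idx = np.arange(n)
D = np.abs(idx[:, None] - idx[None, :]).astype(float)   # distance matrix of P_6
eigvals = np.linalg.eigvalsh(D)      # real spectrum; inertia (1, n-1, 0) for trees
W = 0.5 * D.sum()                    # Wiener index: (1/2) 1^T D 1 = 35 for P_6
E_D = np.abs(eigvals).sum()          # distance energy
print(np.round(eigvals, 3), W, round(E_D, 3))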

Graph-Theoretical Distance Matrices

In graph theory, a distance matrix arises from a graph G = (V, E) with vertex set V = \{v_1, \dots, v_n\} by defining the entry D_{ij} as the length of the shortest path between vertices v_i and v_j, or infinity if no such path exists; for connected graphs, all entries are finite, with zeros on the diagonal. This construction captures the combinatorial structure of pairwise vertex separations via edge traversals, distinguishing it from Euclidean or other metric-based distances. For unweighted graphs, where each edge has unit length, the distance matrix can be computed using breadth-first search (BFS) initiated from each vertex, yielding an overall time complexity of O(n(n + m)) using an adjacency list representation, or O(n³) for dense graphs with Θ(n²) edges represented by an adjacency matrix. In weighted graphs with non-negative edge weights, Dijkstra's algorithm applied from each source computes the necessary shortest paths, achieving O(n(m + n \log n)) time with a Fibonacci heap implementation, where m is the number of edges. Consider the complete graph K_n, where every pair of distinct vertices is adjacent; its distance matrix has D_{ij} = 1 for all i \neq j and D_{ii} = 0, reflecting immediate connectivity. For the path graph P_n with vertices ordered linearly as v_1 - v_2 - \dots - v_n, the matrix entries simplify to D_{ij} = |i - j|, illustrating how linear structure yields a Toeplitz form with increasing distances along the off-diagonals. Key combinatorial properties of the distance matrix include the graph's diameter, defined as the maximum entry \max_{i,j} D_{ij}, which quantifies the longest shortest path and bounds the graph's extent. The radius is the minimum eccentricity, where the eccentricity of a vertex v_i is \max_j D_{ij}, providing a measure of centrality; the center consists of vertices achieving this minimum. These invariants relate the matrix to global graph topology, with the diameter exceeding the radius by at most a factor of 2 in connected graphs. The distance matrix connects to the adjacency matrix A of an unweighted graph through powers: the entry (A^k)_{ij} counts walks of length k from v_i to v_j, and D_{ij} equals the smallest k such that (A^k)_{ij} > 0. Computing successive powers up to A^n thus reveals all distances, though more efficient methods like BFS are preferred for large n. In network science, particularly scale-free graphs with power-law degree distributions, distance matrices have been used post-2020 to analyze average path lengths, revealing small-world properties where diameters grow logarithmically with n, as seen in models of social and biological systems. For instance, studies on generalized scale-free networks compute these matrices to derive closed-form expressions for mean distances, aiding in understanding information propagation efficiency. Additionally, distance-degree distributions derived from such matrices offer refined characterizations of scale-free behavior beyond traditional degree sequences.
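For unweighted graphs, the BFS-based construction can be sketched in plain Python (dictionary adjacency lists are an assumption of this example):

from collections import deque

def distance_matrix(adj):
    # All-pairs shortest path lengths of an unweighted graph via BFS from every vertex;
    # adj maps each vertex to a list of neighbours.
    nodes = sorted(adj)
    dist = {u: {v: float("inf") for v in nodes} for u in nodes}
    for s in nodes:
        dist[s][s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if dist[s][v] == float("inf"):
                    dist[s][v] = dist[s][u] + 1
                    queue.append(v)
    return [[dist[u][v] for v in nodes] for u in nodes]

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # path graph P_4
D = distance_matrix(adj)
ecc = [max(row) for row in D]
print(D, "diameter:", max(ecc), "radius:", min(ecc))   # diameter 3, radius 2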

Applications in Bioinformatics and Biology

Sequence Alignment

In pairwise sequence alignment, distance matrices play a central role in quantifying the similarity between two biological sequences, such as DNA, RNA, or proteins, by computing the minimum number of operations (insertions, deletions, or substitutions) required to transform one into the other, often using edit distance metrics like the Levenshtein distance. This approach fills a dynamic programming matrix where each entry D(i,j) represents the optimal distance between the first i characters of the first sequence and the first j characters of the second, initialized with gap penalties and updated via recurrence relations that minimize the cumulative cost: D(i,j) = \min\{D(i-1,j) + \text{gap}, D(i,j-1) + \text{gap}, D(i-1,j-1) + \text{substitution cost}\}. For protein sequences, substitution costs are derived from empirical matrices like BLOSUM, which score amino acid replacements based on observed frequencies in conserved blocks, enabling more biologically informed distances than simple edit metrics. Global alignment methods, such as the Needleman-Wunsch algorithm, construct a full distance matrix across entire sequences to find the optimal end-to-end alignment, penalizing mismatches and gaps uniformly to reflect overall similarity. In contrast, local alignment via the Smith-Waterman algorithm focuses on high-scoring subsequences by allowing the distance matrix to reset to zero for suboptimal paths, identifying conserved regions without forcing alignment of divergent ends. For example, aligning two DNA sequences—say, "AGCT" and "AGCCT"—using a match score of +1 and a mismatch/gap penalty of -1 yields a Needleman-Wunsch matrix whose optimal global path requires a single gap (one insertion), corresponding to an edit distance of 1, while Smith-Waterman highlights the local match "AGC" with distance 0, ignoring the trailing "T". Multiple sequence alignment extends pairwise distances by first computing a matrix of pairwise scores for all sequences, then using progressive methods to align them hierarchically based on similarity. Tools like ClustalW generate this distance matrix from initial pairwise alignments, apply the unweighted pair group method with arithmetic mean (UPGMA) to build a guide tree ordering sequences by closeness, and progressively align clusters starting from the most similar pairs, incorporating gap penalties to maintain consistency. This tree-guided approach ensures that closely related sequences influence the final alignment, though it depends on the accuracy of the guide tree and can propagate early errors. Recent advancements integrate AI-predicted structures to refine distance matrices in alignments; for instance, AlphaFold2 generates predicted inter-residue distance distributions from multiple sequence alignments, which can constrain or improve alignment accuracy by incorporating structural plausibility, achieving near-experimental precision in conserved regions.
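A minimal sketch of the global-alignment recurrence, using unit gap and substitution costs rather than a substitution matrix such as BLOSUM, is shown below in Python:

def edit_distance(a, b, sub_cost=1, gap_cost=1):
    # Global alignment distance via the dynamic programming recurrence described above.
    m, n = len(a), len(b)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = i * gap_cost                 # gaps against an empty prefix of b
    for j in range(1, n + 1):
        D[0][j] = j * gap_cost                 # gaps against an empty prefix of a
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            match = 0 if a[i - 1] == b[j - 1] else sub_cost
            D[i][j] = min(D[i - 1][j] + gap_cost,      # deletion
                          D[i][j - 1] + gap_cost,      # insertion
                          D[i - 1][j - 1] + match)     # (mis)match
    return D[m][n]

print(edit_distance("AGCT", "AGCCT"))   # 1: one insertion, as in the example above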

Phylogenetic Analysis

In phylogenetic analysis, distance matrices serve as the foundation for reconstructing evolutionary trees by quantifying pairwise divergences among taxa, such as species or molecular sequences, under the assumption of an additive tree model in which distances represent path lengths along branches. These methods are particularly efficient for large datasets, as they transform sequence data into a matrix of evolutionary distances before inference, bypassing direct character-state optimization. Seminal distance-based approaches, like neighbor-joining, approximate the true tree topology by iteratively clustering taxa based on corrected distances, providing a balance between computational speed and topological accuracy when the molecular clock assumption holds approximately. The neighbor-joining (NJ) algorithm exemplifies distance methods for additive matrices, constructing unrooted binary trees through an agglomerative process that accounts for varying evolutionary rates across lineages. For a set of n taxa with distance matrix D, the algorithm first computes the net divergence r_i for each taxon i as r_i = \frac{1}{n-2} \sum_{k \neq i} D_{ik}, representing an estimate of the total branch length from i to the central node. It then forms the rate-corrected distances M_{ij} = D_{ij} - (r_i + r_j), or equivalently the Q-matrix entries Q_{ij} = (n-2) D_{ij} - \sum_k D_{ik} - \sum_k D_{jk}, and selects the pair i, j minimizing Q_{ij} to join as sister taxa. Branch lengths to the new node are calculated as u_i = \frac{1}{2} (D_{ij} + r_i - r_j) and u_j = D_{ij} - u_i, after which the matrix is updated by removing i and j, adding a new node u, and setting D_{uk} = \frac{1}{2} (D_{ik} + D_{jk} - D_{ij}) for remaining taxa k; this process repeats until the tree is complete. For the four-taxon case, the net divergence formula simplifies to d_{ij} = D_{ij} - D_{ik} - D_{jl} + D_{kl} (adjusted for the specific pairing), aiding initial topology selection. The method's efficiency stems from its O(n^3) time complexity, making it suitable for datasets with hundreds of taxa, though it may underperform on highly uneven trees without corrections. To address underestimation of distances due to multiple substitutions—where unobserved changes at the same site saturate the signal—corrections based on probabilistic models are applied to raw pairwise distances derived from sequence alignments. The Jukes-Cantor model, a one-parameter substitution model assuming equal substitution rates among the four nucleotides and no compositional bias, corrects the observed proportion of differences p (the number of differing sites divided by sequence length) to the expected number of substitutions per site d via the formula: d = -\frac{3}{4} \ln \left(1 - \frac{4}{3} p \right) This accounts for hidden multiple hits by extrapolating beyond the observable p < 0.75 limit. For instance, with two 300-base-pair sequences differing at 42 sites (p = 0.14), the corrected d \approx 0.155, indicating about 15.5% substitutions per site after adjustment, which better reflects true evolutionary divergence under the model's assumptions. More complex models like Kimura's two-parameter extend this for transition-transversion biases, but Jukes-Cantor remains foundational for nucleotide data due to its simplicity and robustness in preliminary analyses. Tree reliability is evaluated using bootstrap resampling, which generates B (typically 100–1000) pseudoreplicates by sampling alignment sites with replacement, recomputing distance matrices, and rebuilding trees to compute clade support percentages; values above 70–95% indicate robust branches.
Introduced by Felsenstein in 1985, this nonparametric approach quantifies uncertainty from finite data, particularly useful for distance methods where matrix noise propagates to topology errors. The PHYLIP software suite, developed by Felsenstein since 1980 and continuously updated, implements NJ, distance corrections, and tree construction (via programs such as PROTDIST for protein distances and NEIGHBOR for tree building), enabling seamless workflows for additive matrix analysis on Unix, Windows, and Mac platforms. Recent advances address limitations of pure distance methods by hybridizing them with maximum likelihood (ML) frameworks, leveraging distances for initial topologies refined via sequence-based likelihood optimization. For example, post-2015 methods like multistrap integrate distance matrices from structural alignments with ML searches to enhance inference accuracy in protein phylogenies, achieving up to 20% topological improvement over standalone NJ on simulated datasets with incomplete data. These hybrids combine the scalability of distances with ML's model-based precision, as seen in tools like IQ-TREE's distance-ML pipelines.
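For illustration, the Jukes-Cantor correction and the neighbor-joining Q-matrix described above can be sketched in a few lines of Python (function names are chosen here for clarity and are not part of any particular package):

import math

def jukes_cantor(p):
    # Expected substitutions per site from the observed proportion p (valid for p < 0.75).
    return -0.75 * math.log(1 - (4.0 / 3.0) * p)

def q_matrix(D):
    # Neighbor-joining criterion: Q_ij = (n - 2) D_ij - r_i - r_j, with r_i the row sum of D.
    n = len(D)
    r = [sum(row) for row in D]
    return [[(n - 2) * D[i][j] - r[i] - r[j] if i != j else 0.0
             for j in range(n)] for i in range(n)]

print(round(jukes_cantor(42 / 300), 3))   # ~0.155, as in the worked example above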

Applications in Data Mining and Machine Learning

Hierarchical Clustering

Hierarchical clustering leverages distance matrices to construct a nested hierarchy of clusters, either through bottom-up agglomeration or top-down division. In the agglomerative approach, the process begins with each data point treated as its own cluster, resulting in n initial clusters for n points. The distance matrix D guides the iterative merging of the two closest clusters, where closeness is determined by a linkage criterion that updates inter-cluster distances after each merge. This continues until all points form a single cluster, producing a hierarchical structure that reveals relationships at multiple scales. Common linkage criteria include single, complete, and average linkage, each defining the distance between clusters C_i and C_j differently to balance sensitivity to outliers and cluster compactness. Single linkage uses the minimum distance between any pair of points across the clusters: d(C_i, C_j) = \min_{x \in C_i, y \in C_j} d(x, y) This method tends to form elongated, chain-like clusters and is prone to the chaining effect. Complete linkage employs the maximum distance: d(C_i, C_j) = \max_{x \in C_i, y \in C_j} d(x, y) It favors compact, spherical clusters by considering the farthest points. Average linkage computes the mean distance over all pairs: d(C_i, C_j) = \frac{1}{|C_i| |C_j|} \sum_{x \in C_i} \sum_{y \in C_j} d(x, y) This provides a balanced compromise, reducing sensitivity to outliers compared to single linkage while avoiding the conservatism of complete linkage. These criteria fit within the general Lance-Williams framework for efficient updates during merging. The output of agglomerative clustering is typically visualized as a dendrogram, a tree-like diagram where the height of each merge corresponds to the linkage distance at which clusters are joined. Leaves represent individual points, and internal nodes denote merges, allowing users to select a cut height for a desired number of flat clusters. For illustration, consider a 5-point distance matrix using single linkage:
      1    2    3    4    5
1     0    8    3   11   11
2     8    0    7    4    2
3     3    7    0    6    9
4    11    4    6    0    5
5    11    2    9    5    0
  • Step 1: Merge points 2 and 5 (distance 2), the smallest entry in the matrix, forming {2,5}.
  • Updated single-linkage distances from {2,5}: min(8,11) = 8 to point 1; min(7,9) = 7 to point 3; min(4,5) = 4 to point 4.
  • Step 2: Merge points 1 and 3 (distance 3), forming {1,3}.
  • Continue iteratively: next merge {2,5} and 4 (distance 4); finally {1,3} and {2,4,5} (distance 6). The dendrogram heights reflect these merge distances: 2, 3, 4, 6.
Divisive hierarchical clustering reverses the process, starting with all points in one cluster and recursively splitting into subclusters until singletons remain. Splits are often guided by identifying medoids—central points that minimize average distance to others in the cluster—and partitioning around them to maximize intra-cluster dissimilarity or minimize inter-cluster similarity. The algorithm starts with all points in one cluster and at each step divides the cluster with the largest diameter (maximum pairwise distance) by splitting it between the two most dissimilar objects, using the distance matrix to identify divisions. The naive implementation requires O(n^2) storage for the distance matrix and O(n^3) time for agglomerative methods due to repeated distance computations and minimum searches. Divisive approaches face similar costs but are less common due to their complexity in split selection. Optimizations like the SLINK algorithm achieve O(n^2) time for single linkage by maintaining a pointer representation of the hierarchy and applying incremental updates, avoiding full matrix rescans. Recent scalable variants address these challenges through approximations, such as sampling-based merging or parallel tree grafting, enabling hierarchical clustering on billions of points while preserving quality. For instance, one method combines nearest-neighbor approximations with agglomeration to scale without computing full pairwise distances. The resulting hierarchy from either approach approximates an ultrametric distance structure, where cluster distances satisfy the strong triangle inequality.
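The worked single-linkage example above can be reproduced with SciPy's hierarchical clustering routines, which accept a condensed form of the distance matrix; the following sketch assumes SciPy and NumPy are available:

import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

D = np.array([
    [0,  8,  3, 11, 11],
    [8,  0,  7,  4,  2],
    [3,  7,  0,  6,  9],
    [11, 4,  6,  0,  5],
    [11, 2,  9,  5,  0],
], dtype=float)

Z = linkage(squareform(D), method="single")    # condensed (upper-triangular) input
print(Z[:, 2])                                 # merge heights: [2. 3. 4. 6.]
print(fcluster(Z, t=2, criterion="maxclust"))  # two flat clusters: {1,3} vs {2,4,5}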

k-Nearest Neighbors Methods

The k-nearest neighbors (k-NN) algorithm utilizes distance matrices as a core component for prediction in supervised tasks such as classification and regression. In this approach, a full distance matrix is typically not precomputed for large datasets due to computational expense; instead, for each query point, distances are calculated on-the-fly to all points in the training set, effectively forming a partial row or column of a hypothetical distance matrix. The k points with the smallest distances are then selected as neighbors, and the prediction is derived from their associated labels or values. This method's simplicity and adaptability make it effective for problems where local structure in the feature space, captured via distances, correlates with the target outcomes. The algorithm's steps are straightforward: given a query point \mathbf{x} and training set \{(\mathbf{x}_i, y_i)\}_{i=1}^n, compute the distances d(\mathbf{x}, \mathbf{x}_i) for all i, sort them in ascending order, and identify the k smallest. For classification, the predicted class is the majority vote among the k neighbors' labels; ties can be resolved by random selection or distance weighting. For regression, the prediction is the mean (or weighted mean) of the k neighbors' target values. These operations rely on the distance measure to define "nearness," ensuring the selected submatrix of distances reflects relevant similarities. Euclidean distance is the conventional choice for k-NN, defined as d(\mathbf{x}, \mathbf{y}) = \sqrt{\sum_{j=1}^d (x_j - y_j)^2} in d-dimensional space, as it aligns with the geometry of many real-world datasets. However, k-NN is flexible and can employ alternative or non-metric distances, such as cosine or Mahalanobis distances, provided they are non-negative and symmetric, allowing adaptation to domain-specific notions of similarity without violating the core neighbor selection logic. A representative example is the Iris dataset, comprising 150 samples across three species with four features (sepal and petal lengths/widths). For a 3-nearest neighbor classification of a test point like (5.1, 3.5, 1.4, 0.2), distances are computed to the training samples; the three closest might all belong to the setosa class (e.g., distances approximately 0.1, 0.2, 0.3 units), yielding a setosa prediction via majority vote, demonstrating how partial distance computations enable accurate species identification. Variants of k-NN address limitations in uniform neighbor treatment; notably, weighted k-NN assigns weights inversely proportional to distances, such as w_i = 1 / d(\mathbf{x}, \mathbf{x}_i), so closer neighbors contribute more to the vote or average, often improving robustness to noise. Additionally, the curse of dimensionality affects high-dimensional distance matrices by increasing sparsity—all points become nearly equidistant, diluting the meaningfulness of the k nearest selections and requiring dimensionality reduction or alternative metrics for mitigation. Evaluation of k-NN often employs leave-one-out cross-validation, where for each training point, distances are computed to the remaining n-1 points to predict its label or value, yielding an unbiased estimate that leverages the full distance structure without a separate test set. Post-2010 kernelized extensions enhance k-NN for non-Euclidean spaces by replacing explicit distances with kernel evaluations, k(\mathbf{x}, \mathbf{x}_i), which implicitly map data to higher-dimensional reproducing kernel Hilbert spaces, enabling neighbor selection on curved manifolds while preserving computational efficiency through approximations.
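A compact illustration of the neighbor-selection logic, including the distance-weighted variant, is sketched below in plain Python with NumPy (the toy data are hypothetical):

import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3, weighted=False):
    # Classify query x by majority (optionally distance-weighted) vote
    # among its k nearest training points under the Euclidean metric.
    d = np.sqrt(((X_train - x) ** 2).sum(axis=1))   # one row of the distance matrix
    nearest = np.argsort(d)[:k]
    if not weighted:
        return Counter(y_train[nearest]).most_common(1)[0][0]
    votes = {}
    for i in nearest:
        votes[y_train[i]] = votes.get(y_train[i], 0) + 1.0 / (d[i] + 1e-12)
    return max(votes, key=votes.get)

X = np.array([[0, 0], [0.1, 0.2], [0.2, 0.1], [3, 3], [3.1, 2.9]])
y = np.array(["a", "a", "a", "b", "b"])
print(knn_predict(X, y, np.array([0.05, 0.1]), k=3))   # "a"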

Dimensionality Reduction Techniques

Distance matrices play a central role in manifold learning techniques for dimensionality reduction, where high-dimensional data lying on a low-dimensional manifold is embedded into a lower-dimensional Euclidean space while preserving intrinsic geometric structures. These methods leverage the distance matrix to approximate geodesic distances on the manifold, enabling the unfolding of nonlinear structures that linear techniques like principal component analysis cannot capture. Seminal approaches such as Isomap and the Neighbor Retrieval Visualizer (NeRV) explicitly construct or approximate distance matrices to achieve faithful low-dimensional representations. Isomap, introduced in 2000, addresses nonlinear structure by estimating geodesic distances from the original data via shortest paths on a neighborhood graph. The algorithm proceeds in three key steps: first, construct a k-nearest neighbors (k-NN) graph to define local neighborhoods, approximating the manifold's topology; second, compute the shortest path distances between all pairs of points using the Floyd-Warshall algorithm on this graph, yielding a distance matrix that approximates the global geodesic distances; third, apply classical multidimensional scaling (MDS) to this distance matrix to obtain the low-dimensional embedding. This process minimizes the stress function, which measures the discrepancy between the original distances D and the Euclidean distances \hat{D} in the embedded space: \text{stress} = \| D - \hat{D} \|_F where \| \cdot \|_F denotes the Frobenius norm. A classic demonstration is the Swiss roll dataset, a 3D manifold twisted into a rolled sheet; Isomap successfully unfolds it into a 2D plane, preserving the intrinsic 2D geometry that Euclidean MDS distorts. However, the Floyd-Warshall step imposes a computational bottleneck of O(n^3) time complexity for n data points, limiting scalability to moderate-sized datasets. NeRV, proposed in 2010 as an extension of stochastic neighbor embedding, uses probabilistic interpretations of distance matrices for visualization and retrieval tasks. It models pairwise similarities as probability distributions derived from the distance matrix, balancing local and global structure through a tradeoff parameter, and optimizes the low-dimensional embedding by minimizing the Kullback-Leibler (KL) divergence between high-dimensional and low-dimensional similarity distributions approximated from the matrices. This probabilistic approach allows NeRV to handle non-Euclidean distances effectively, outperforming earlier methods like SNE in preserving both neighborhood retrieval accuracy and visualization quality on high-dimensional benchmark datasets. Later developments include t-distributed stochastic neighbor embedding (t-SNE), introduced in 2008, and Uniform Manifold Approximation and Projection (UMAP), introduced in 2018, which build on these foundations by implicitly approximating distance matrices through probabilistic or graph-based similarities, enabling faster and more scalable nonlinear embeddings for large-scale data visualization. t-SNE refines SNE by using t-distributions to mitigate the crowding problem in low dimensions, while UMAP employs fuzzy simplicial sets and Riemannian optimization for broader applicability beyond visualization. These methods indirectly rely on distance matrix approximations to capture manifold structure, extending the utility of explicit distance-based techniques like Isomap and NeRV.
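The three Isomap steps can be sketched compactly using scikit-learn's neighborhood graph and SciPy's shortest-path routine (both assumed to be installed); this is an illustrative outline rather than a reference implementation:

import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import shortest_path

def isomap(X, n_neighbors=10, n_components=2):
    # 1. k-NN graph with Euclidean edge weights approximates the manifold topology.
    G = kneighbors_graph(X, n_neighbors, mode="distance")
    # 2. Geodesic distances estimated as shortest paths on the (symmetrized) graph.
    D = shortest_path(G, directed=False)
    # 3. Classical MDS: double-center the squared distances, keep top eigenvectors.
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * H @ (D ** 2) @ H
    w, V = np.linalg.eigh(B)
    order = np.argsort(w)[::-1][:n_components]
    return V[:, order] * np.sqrt(np.maximum(w[order], 0.0))

t = np.linspace(0, 3 * np.pi, 300)
X = np.c_[t * np.cos(t), t * np.sin(t), 0.05 * np.random.randn(300)]   # a noisy spiral
print(isomap(X, n_neighbors=8).shape)   # (300, 2)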

Applications in Information Retrieval and Text Analysis

Document Similarity and Clustering

In document similarity and clustering, feature extraction begins by representing texts as vectors using the term frequency-inverse document frequency (TF-IDF) method, which weights terms based on their frequency in a document relative to their rarity across the corpus, thereby constructing a term-document matrix. This captures the vector space model (VSM) for texts, where each document is a high-dimensional vector of TF-IDF scores for the vocabulary terms. Pairwise distances, such as cosine distances between these vectors, are then computed to form a distance matrix that quantifies dissimilarity between documents, enabling subsequent analysis in retrieval and clustering tasks. The resulting distance matrix serves as input for clustering algorithms to group documents by topics. Hierarchical clustering builds a dendrogram from the matrix to reveal nested structures, while k-means partitions documents into k clusters by iteratively minimizing intra-cluster distances derived from the matrix, facilitating topic discovery in large text collections. For instance, consider four short documents: Doc1 on "machine learning algorithms," Doc2 on "neural networks training," Doc3 on "data privacy laws," and Doc4 on "regulatory compliance standards." After TF-IDF vectorization, cosine similarities are converted to distances (e.g., via 1 minus similarity), yielding a matrix where Docs 1 and 2 cluster closely due to shared AI-related terms, while Docs 3 and 4 form another group on legal topics, demonstrating effective topic separation. Cluster quality is evaluated using metrics like the silhouette score, which measures how well each document fits its cluster relative to others based on the distance matrix, with scores near 1 indicating strong separation. High-dimensionality in term-document matrices, often exceeding thousands of dimensions, is managed through sparse matrix representations that store only non-zero TF-IDF entries, reducing computational overhead in clustering. Recent advances integrate transformer-based embeddings, such as those from BERT, into distance matrices for document clustering. These contextual embeddings, derived from pre-trained models, capture semantic nuances beyond bag-of-words approaches like TF-IDF, leading to improved clustering performance across 28 of 36 evaluation metrics in comparative studies.
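A small end-to-end sketch with scikit-learn and SciPy (using slightly longer hypothetical documents so that their vocabularies overlap) illustrates the TF-IDF-to-distance-matrix-to-clustering pipeline:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_distances
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage, fcluster

docs = [
    "machine learning algorithms train neural networks",
    "training neural networks with learning algorithms",
    "data privacy laws govern regulatory compliance",
    "regulatory compliance standards for data privacy",
]
X = TfidfVectorizer().fit_transform(docs)      # sparse term-document matrix
D = cosine_distances(X)                        # pairwise cosine distance matrix
Z = linkage(squareform(D, checks=False), method="average")
print(fcluster(Z, t=2, criterion="maxclust"))  # e.g. [1 1 2 2]: AI docs vs. legal docs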

Cosine Similarity Evaluation

Cosine similarity serves as a foundational metric for constructing distance matrices in high-dimensional text representations, particularly within the vector space model of information retrieval. It quantifies the cosine of the angle \theta between two vectors \mathbf{A} and \mathbf{B}, defined as \cos(\theta) = \frac{\mathbf{A} \cdot \mathbf{B}}{\|\mathbf{A}\| \|\mathbf{B}\|} where \mathbf{A} \cdot \mathbf{B} denotes the dot product and \|\mathbf{A}\|, \|\mathbf{B}\| are the Euclidean norms of the vectors. This measure, ranging from -1 (opposite directions) to 1 (identical directions), was introduced in the vector space model for automatic indexing of text documents. To convert cosine similarity into a distance metric suitable for distance matrices, common transformations include the linear form d = 1 - \cos(\theta), which maps similarity to a dissimilarity score between 0 and 2, or the angular form d = \arccos(\cos(\theta)), yielding distances from 0 to \pi. These conversions preserve the angular relationship while enabling the use of distance-based algorithms, though the linear variant does not satisfy the triangle inequality and thus is not a true metric. In sparse high-dimensional spaces typical of text data, such as those arising from term-frequency vectors, cosine similarity exhibits key properties that make it advantageous for distance matrix construction. It is invariant to vector scaling, focusing solely on directional alignment rather than magnitude, which mitigates biases from varying document lengths or term frequencies. This scale invariance arises because normalization by the norms \|\mathbf{A}\| and \|\mathbf{B}\| renders the measure independent of absolute vector lengths. For instance, consider bag-of-words vectors for two short documents: \mathbf{A} = [1, 0, 1, 1] (representing "the cat sat") and \mathbf{B} = [1, 1, 0, 2] (representing "the dog sat sat"), where the vocabulary is {"the", "dog", "cat", "sat"}. The cosine similarity is \cos(\theta) \approx 0.707, unchanged if \mathbf{B} is scaled to [0.5, 0.5, 0, 1]; this demonstrates how it treats proportionally similar content as equivalent despite frequency differences. In contrast to Euclidean distance, which would penalize the magnitude difference in \mathbf{B} (yielding d_E \approx 1.732), cosine emphasizes angular proximity, making it robust in sparse spaces where most dimensions are zero. Comparisons in information retrieval highlight cosine's strengths and limitations relative to alternatives like Euclidean and Jaccard similarities. Euclidean distance, d_E = \sqrt{\sum (A_i - B_i)^2}, incorporates both angular and magnitude differences, performing well in low-dimensional dense data but degrading in high-dimensional sparse text due to the curse of dimensionality, where distances become nearly equal. Cosine outperforms Euclidean in IR tasks involving variable-length documents, as normalization avoids length bias. Jaccard similarity, defined for sets as the intersection over union of non-zero elements, ignores term frequencies and treats data binarily, suiting exact set overlaps but underperforming when frequencies convey relevance, as in weighted text vectors; for example, Jaccard yields 0.5 for the above bag-of-words pair, lower than cosine's 0.707. Cosine may fail in scenarios with equal-length documents where magnitude signals importance (e.g., denser topics) or when binary presence suffices over frequency weighting, prompting shifts to Jaccard for such cases. 
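The bag-of-words comparison above can be verified directly; this NumPy sketch computes the cosine, Euclidean, and Jaccard values for the two example vectors, along with the two cosine-to-distance conversions:

import numpy as np

A = np.array([1, 0, 1, 1], dtype=float)   # "the cat sat" over {"the", "dog", "cat", "sat"}
B = np.array([1, 1, 0, 2], dtype=float)   # "the dog sat sat"

cos = A @ B / (np.linalg.norm(A) * np.linalg.norm(B))
euclid = np.linalg.norm(A - B)
jaccard = np.sum((A > 0) & (B > 0)) / np.sum((A > 0) | (B > 0))

print(round(cos, 3), round(euclid, 3), round(jaccard, 3))   # 0.707 1.732 0.5
print(round(1 - cos, 3), round(np.arccos(cos), 3))          # linear and angular distances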
Recent benchmarks from the 2020s underscore cosine's enduring role in evaluating embedding-based distance matrices for large language models (LLMs) in retrieval. On the Massive Text Embedding Benchmark (MTEB), LLM-derived embedders like E5-mistral-7B and GritLM-7B achieve average scores of 66.63 and 66.76, respectively, across retrieval and semantic similarity tasks using cosine similarity, outperforming earlier BERT-based models by capturing nuanced semantics in high dimensions. In multilingual evaluations, models such as multilingual-e5-large yield nDCG@10 scores of 0.91 on English datasets, demonstrating cosine's effectiveness for cross-lingual text alignment, though performance drops to 0.34 on domain-specific corpora like NFCorpus due to domain gaps. These results affirm cosine's scalability in modern retrieval pipelines while highlighting the need for domain-adapted metrics in specialized settings.

Neighborhood-Based Retrieval Methods

In information retrieval (IR), neighborhood-based methods leverage distance matrices to model probabilistic relationships between documents or queries, enabling more nuanced similarity searches beyond simple vector comparisons. One prominent approach is the Gaussian mixture distance, which represents documents as finite mixtures of Gaussians to capture local data distributions effectively. In this framework, each document's feature distribution is modeled as p(x) = \sum_{j=1}^k \alpha_j G(x; m_j, \Sigma_{X_j}), where \alpha_j are mixing coefficients, G denotes a Gaussian density with mean m_j and covariance \Sigma_{X_j}, and k is the number of components determined via the Bayesian Ying-Yang learning algorithm and expectation-maximization. The distance between a query q and a document is then defined using the Kullback-Leibler (KL) divergence between their conditional distributions, KL(p(x|q) || p(x)) = \int p(x|q) \log \left( \frac{p(x|q)}{p(x)} \right) dx, which quantifies the information loss when approximating the query's posterior with the document's model. This KL divergence is decomposed component-wise for computational efficiency, yielding KL(p(x,j|q) || p(x,j)) = KL(p(j|q) || p(j)) + \sum_{j=1}^k P(j|q) KL(p(x|q,j) || p(x|j)), where posteriors P(j|q) = \frac{\alpha_j G(q; m_j, \Sigma_{X_j})}{\sum_{i=1}^k \alpha_i G(q; m_i, \Sigma_{X_i})} weight the contributions of individual Gaussian components. The optimal query covariance is approximated as \hat{\Sigma}_C = \left( \sum_{j=1}^k P(j|q) \Sigma_{X_j}^{-1} \right)^{-1}, allowing nearest-neighbor searches that outperform Euclidean or Mahalanobis distances in precision, particularly for clustered document corpora such as text or image databases. Experimental evaluations on benchmark datasets demonstrate that this method achieves higher retrieval accuracy by better preserving local neighborhood structures in the distance matrix. Another key technique is the Neighbor Retrieval Visualizer (NeRV), which frames visualization as an IR task to embed high-dimensional data while preserving neighborhood retrieval properties from the input distance matrix. NeRV constructs probabilistic neighborhoods from pairwise distances, defining input probabilities p_{j|i} = \frac{\exp(-d(x_i, x_j)^2 / \sigma_i^2)}{\sum_{k \neq i} \exp(-d(x_i, x_k)^2 / \sigma_i^2)} and output probabilities q_{j|i} = \frac{\exp(-\|y_i - y_j\|^2 / \sigma_i^2)}{\sum_{k \neq i} \exp(-\|y_i - y_k\|^2 / \sigma_i^2)} in the low-dimensional embedding space. The embedding optimizes a cost function that balances precision and recall: E_{\text{NeRV}} = \lambda \sum_i \sum_{j \neq i} p_{j|i} \log \frac{p_{j|i}}{q_{j|i}} + (1 - \lambda) \sum_i \sum_{j \neq i} q_{j|i} \log \frac{q_{j|i}}{p_{j|i}}, where \lambda \in [0,1] controls the tradeoff between the two terms (\lambda = 1 recovers stochastic neighbor embedding as a special case). This cost is minimized using conjugate gradients, with time complexity O(d n^2) for n points and output dimension d, enabling visualization of query neighborhoods that maintain distance-based similarities for IR applications like exploratory search in large text corpora. In supervised variants, NeRV incorporates class-discriminative metrics to enhance retrieval of relevant documents. These methods extend to practical applications in nearest-neighbor search over large-scale corpora, where exact distance matrices become computationally prohibitive, prompting the use of approximations.
For instance, the FAISS library implements inverted file indices and hierarchical navigable small-world graphs to perform approximate nearest-neighbor searches on vector representations derived from distance matrices, supporting metrics such as L2 (Euclidean) distance and inner product with sublinear query times on billion-scale datasets. FAISS achieves up to 8.5x speedups over prior state-of-the-art while maintaining high recall, making it suitable for real-time retrieval in production systems. Similarly, modern vector databases like Pinecone, introduced in the early 2020s, facilitate scalable distance-based retrieval by indexing embeddings in managed cloud environments, enabling hybrid queries that combine vector similarity with metadata filters for applications such as semantic search in vast text archives. Pinecone's serverless architecture supports pod-based or serverless indices for high-throughput ANN retrieval, with reported latencies under 50 ms for queries over million-vector indexes, bridging the gap between theoretical distance preservation and production-scale retrieval.
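As a brief illustration of this style of large-scale search, the following sketch uses FAISS's flat (exact) L2 index on synthetic embeddings; it assumes the faiss package is installed, and the data and dimensions are arbitrary:

import numpy as np
import faiss

d = 64
rng = np.random.default_rng(0)
xb = rng.random((10000, d), dtype=np.float32)   # corpus embeddings (synthetic)
xq = rng.random((5, d), dtype=np.float32)       # query embeddings (synthetic)

index = faiss.IndexFlatL2(d)        # exact L2 search; IVF or HNSW indexes trade recall for speed
index.add(xb)
dists, ids = index.search(xq, 10)   # each row is a partial row of the query-corpus distance matrix
print(ids.shape, dists.shape)       # (5, 10) (5, 10)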

Applications in Chemistry and Molecular Sciences

Interconversion Mechanisms in Isomers

In stereochemically nonrigid molecules, permutational isomers arise from different arrangements of identical ligands or atoms, which can be represented as elements of the symmetric group S_n, where n is the number of permutable positions. The distance between two such isomers is defined as the minimum number of elementary rearrangements, such as transpositions of adjacent positions, required to transform one permutation into the other; this metric is encoded in the distance matrix derived from the Cayley graph of S_n generated by the set of allowed transpositions. Such matrices facilitate the quantification of rearrangement complexity, with entries reflecting the Cayley graph distance, which for adjacent transpositions equals the number of inversions separating the two permutations (the Kendall tau distance). Interconversion mechanisms between permutational isomers are modeled using graphs where vertices represent distinct configurations and edges denote single-step rearrangements, with the Johnson graph J(n, k) providing a framework for distances in systems involving k-subsets of positions (e.g., ligand placements in coordination polyhedra). In this setup, the distance between two vertices (isomers) is half the size of their symmetric difference (or k minus the size of their intersection), capturing the minimal changes needed for transition; the associated distance matrix then encodes these pairwise metrics for path analysis. Analysis of these matrices reveals shortest paths that approximate reaction barriers, as the graph distance correlates with the number of elementary steps, aiding prediction of feasible interconversions under thermal conditions. The application of graph theory to chemistry emerged during the 20th century through early work on isomer enumeration and topological indices, with distance matrices specifically adapted for nonrigid molecules by Clark and Kettle in 1975, who constructed them to distinguish rearrangement sequences in permutational systems. Post-2015 density functional theory (DFT) simulations have validated these approaches by computing energy profiles along graph-identified paths; for example, graph-driven discovery methods combined with DFT optimizations (e.g., at the B3LYP/6-31G* level) confirmed barriers for reaction networks, aligning discrete graph distances with continuous potential energy surfaces and refining predicted mechanisms for interstellar or synthetic conditions.

Structure-Property Modeling

In structure-property modeling, distance matrices derived from molecular graphs serve as foundational tools for quantitative structure-activity relationship (QSAR) and quantitative structure-property relationship (QSPR) analyses, enabling predictions of physicochemical properties such as boiling points and reactivity. These matrices encode pairwise topological distances between atoms, capturing molecular size and branching that influence bulk properties. Seminal work established linear correlations between distance-based invariants and observable traits, laying the groundwork for regression models that link structural features to empirical data. A key invariant is the Wiener index, defined as W = \sum_{i < j} D_{ij}, where D_{ij} represents the shortest path distance between atoms i and j in the molecular graph. This index quantifies overall molecular size and branching, with lower values indicating more compact, branched structures that typically exhibit lower boiling points due to reduced surface area for intermolecular forces. In QSAR applications, models using the Wiener index have predicted boiling points of alkanes with high accuracy; for instance, a multiple regression incorporating W alongside Randić connectivity and volume descriptors yielded R^2 = 0.992 for 50 alkane derivatives, outperforming single-descriptor models like T_{bp} = 1.15W + 296.47 (R^2 = 0.84). Such equations highlight how path distances in linear alkane chains correlate with increased polarizability and van der Waals interactions, elevating boiling points as chain length grows. For more complex datasets, multivariate techniques like principal component analysis (PCA) applied directly to flattened distance matrices reduce dimensionality while preserving structural variance relevant to properties. In this approach, the distance matrix is vectorized into a feature vector of length N(N-1)/2 (for N atoms), and PCA identifies principal components that capture collective motions or conformational changes linked to reactivity. For example, PCA on interatomic distances from reaction coordinates has projected molecular trajectories into 2D spaces explaining over 93% of the variance, aiding QSPR predictions of energy barriers without Cartesian coordinate biases. Validation in these models often employs leave-one-out cross-validation, achieving mean absolute errors below 5% for property forecasts in benchmark sets. Software tools facilitate descriptor extraction from distance matrices for QSAR workflows. The DRAGON package computes over 1,600 molecular descriptors, including the Wiener index and other distance-based metrics from its topological descriptor blocks, supporting input formats like SMILES for automated QSPR modeling. Recent advancements integrate machine learning, particularly graph neural networks (GNNs) enhanced with distance encoding, to predict molecular properties directly from raw distance features. Distance encoding augments GNN node representations with shortest-path or random-walk distances, boosting expressive power beyond traditional 1-Weisfeiler-Lehman limits and improving accuracy by up to 15% on molecular benchmarks. In chemistry-specific applications, such GNNs incorporate interatomic distances as edge attributes to forecast properties with chemical accuracy on datasets like QM9.
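As a small worked illustration of the Wiener index, the following Python sketch computes W from the topological distance matrices of the hydrogen-suppressed graphs of n-pentane and its branched isomer 2-methylbutane (adjacency lists are hypothetical inputs), showing that branching lowers W:

from collections import deque

def wiener_index(adj):
    # Sum of all-pairs shortest path (bond-count) distances over unordered atom pairs.
    total = 0
    for s in adj:
        dist = {s: 0}
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        total += sum(dist.values())
    return total // 2

pentane = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}        # linear C5 chain
methylbutane = {1: [2], 2: [1, 3, 5], 3: [2, 4], 4: [3], 5: [2]}   # branched isomer
print(wiener_index(pentane), wiener_index(methylbutane))           # 20 18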

Geometric and Topological Distance Matrices

In molecular modeling, the geometric distance matrix captures the Euclidean distances between the three-dimensional coordinates of atoms within a molecule. These coordinates are typically extracted from structural files such as those in the Protein Data Bank (PDB) format, which store atomic positions determined experimentally via techniques like X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The matrix, denoted as D where D_{ij} is the straight-line distance between atoms i and j, provides a compact representation of the molecule's spatial arrangement and is essential for quantitative structural comparison. A primary application of the geometric distance matrix is in computing the root-mean-square deviation (RMSD), a metric that quantifies the average atomic displacement between two superimposed structures. RMSD is calculated as the square root of the mean of the squared deviations between corresponding atomic positions after optimal rigid-body alignment, which minimizes rotational and translational variance as described by the Kabsch algorithm. This approach ensures that small conformational changes, such as those in protein dynamics, can be precisely measured, with RMSD values often reported in angstroms to assess structural fidelity. For example, an RMSD below 2 Å typically indicates high similarity between related protein conformations. The topological distance matrix, in contrast, abstracts molecular structure into a graph-theoretic framework, treating atoms as vertices and covalent bonds as edges. Entries in this matrix represent the shortest path distances, measured as the minimum number of bonds connecting atom pairs along the molecular graph. This formulation ignores three-dimensional geometry, focusing instead on connectivity to encode branching and cyclicity. The sum of all entries (excluding the diagonal), a topological descriptor equal to twice the Wiener index, serves as a global measure of molecular size and shape; for instance, it distinguishes isomers by their path lengths, with higher values indicating more extended structures. Hybrid distance matrices integrate geometric and topological aspects to model realistic intermolecular interactions, particularly in densely packed systems like proteins. One such method computes all-pairs shortest paths over the molecular graph, adjusted by van der Waals radii to enforce steric exclusion and avoid unphysical overlaps during simulations. In distance geometry contexts, these matrices define upper and lower bounds on interatomic distances: lower bounds incorporate the sum of van der Waals radii (typically around 1.8 Å for non-hydrogen atoms) to represent minimal contact distances, while upper bounds derive from topological constraints like bond path lengths. This hybrid approach facilitates efficient exploration of conformational space, as seen in distance geometry algorithms that reconstruct folded states from sparse restraints. Advancements in cryo-electron microscopy (cryo-EM) during the 2020s have increasingly utilized distance matrices for high-precision structure alignment, addressing challenges in resolving flexible or heterogeneous protein assemblies. Methods like the align distance matrix with scale (ADAMS) pipeline combine distance matrix representations with scale-invariant feature transforms to systematically compare cryo-EM-derived models, achieving robust alignments even for partial or low-resolution maps. This enables near-atomic refinement of protein structures, outperforming traditional rigid-body fittings by accounting for local distortions and improving overall accuracy in dynamic systems.
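A minimal NumPy sketch of the Kabsch-based RMSD computation described above is given below; it assumes the two coordinate arrays list the same atoms in the same order:

import numpy as np

def kabsch_rmsd(P, Q):
    # RMSD between two conformations (n x 3 arrays with matched atom order)
    # after optimal rigid-body superposition (Kabsch algorithm).
    P = P - P.mean(axis=0)              # remove translation
    Q = Q - Q.mean(axis=0)
    H = P.T @ Q                         # covariance matrix
    U, S, Vt = np.linalg.svd(H)
    d_sign = 1.0 if np.linalg.det(Vt.T @ U.T) >= 0 else -1.0
    R = Vt.T @ np.diag([1.0, 1.0, d_sign]) @ U.T   # optimal rotation (reflection-corrected)
    return np.sqrt(((P @ R.T - Q) ** 2).sum() / len(P))

rng = np.random.default_rng(1)
P = rng.random((10, 3))
theta = 0.3
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
Q = P @ R.T + np.array([1.0, 2.0, 3.0])    # rotated and translated copy of P
print(round(kabsch_rmsd(P, Q), 6))          # ~0.0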

Other Specialized Applications

Time Series Analysis

In time series analysis, distance matrices play a crucial role in measuring similarities between sequential data, enabling tasks such as clustering and anomaly detection. For aligned series, where observations occur at synchronized time points, the Euclidean distance is commonly applied, accumulating squared differences across corresponding elements to form a straightforward pairwise distance matrix. However, real-world series often exhibit temporal distortions, such as shifts or varying speeds, that make such lock-step measures inadequate; dynamic time warping (DTW) addresses this by constructing a cumulative distance matrix that finds the optimal non-linear alignment between sequences.

DTW operates by building a cost matrix D in which each entry D(i,j) is the local distance (typically Euclidean) between element x_i of series X and element y_j of series Y. The warping path is then derived via dynamic programming to minimize the cumulative cost, subject to boundary, monotonicity, and continuity (step-size) constraints. The DTW distance is the value at the bottom-right corner of the cumulative matrix \gamma, computed as: \gamma(i,j) = D(i,j) + \min \begin{cases} \gamma(i-1,j) \\ \gamma(i,j-1) \\ \gamma(i-1,j-1) \end{cases} with \gamma(1,1) = D(1,1) and boundary conditions ensuring the path starts and ends at the series endpoints. This yields a distance that captures shape similarity despite timing variations, though at quadratic O(nm) cost for series of lengths n and m.

For clustering time series, hierarchical methods leverage DTW-derived distance matrices to build dendrograms, grouping series with similar underlying patterns. In agglomerative hierarchical clustering, pairwise DTW distances serve as the linkage metric (e.g., single, complete, or average linkage), allowing non-Euclidean similarities to guide merges until the desired granularity is reached. A representative application involves stock price series, where DTW aligns price trajectories to group companies exhibiting correlated movements, such as stocks showing synchronized rises during market events; for instance, analyses of Russell 3000 constituents have revealed clusters of co-moving stocks that outperform random groupings in return forecasting by 5-10% in backtests. To enhance scalability for large datasets, Symbolic Aggregate Approximation (SAX) reduces series dimensionality by segmenting series into piecewise aggregates and mapping them to symbols, providing a lower-bounding distance that accelerates DTW-based computations while preserving distance ordering with minimal accuracy loss (e.g., under 5% error in nearest-neighbor retrieval).

Distance matrices also facilitate anomaly detection by identifying outliers, that is, series whose DTW distances to the majority exceed a threshold, indicating deviations from typical patterns. In predictive maintenance, for example, DTW distance matrices over sensor data flag equipment failures as isolated points, with detection rates superior to baseline methods in benchmarks. Recent advancements incorporate deep learning, such as LSTM-based embeddings, in which recurrent networks learn latent representations of series (trained, for example, with contrastive losses on augmented pairs), so that simple distances in the embedding space support clustering and retrieval; these approaches, emerging in the 2020s, have demonstrated strong performance on variable-length series and offer more efficient computation than DTW in high-dimensional settings.
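The recurrence above translates directly into a short dynamic program. The sketch below (plain NumPy, using absolute difference as the local cost, which coincides with the Euclidean distance for scalar series) fills the cumulative matrix \gamma and returns the DTW distance; a full pairwise DTW distance matrix for a set of series can then be assembled by calling it for every pair.

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping between two 1-D series x and y.

    Builds the cumulative cost matrix gamma from the local cost |x_i - y_j|
    using gamma(i, j) = D(i, j) + min(gamma(i-1, j), gamma(i, j-1), gamma(i-1, j-1));
    the DTW distance is the bottom-right entry.
    """
    n, m = len(x), len(y)
    D = np.abs(np.subtract.outer(x, y))        # local cost matrix
    gamma = np.full((n + 1, m + 1), np.inf)    # padded cumulative matrix
    gamma[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            gamma[i, j] = D[i - 1, j - 1] + min(
                gamma[i - 1, j],       # step along x only
                gamma[i, j - 1],       # step along y only
                gamma[i - 1, j - 1],   # diagonal match
            )
    return gamma[n, m]

# Two series with the same shape but shifted in time: DTW stays small
# even though the lock-step Euclidean distance would not.
t = np.linspace(0, 2 * np.pi, 60)
a, b = np.sin(t), np.sin(t - 0.5)
print(dtw_distance(a, b))
```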

Computer Vision and Image Processing

In computer vision, distance matrices play a crucial role in quantifying similarities between image features, enabling tasks such as keypoint matching and shape alignment. For local feature descriptors such as those produced by the Scale-Invariant Feature Transform (SIFT), Euclidean distance matrices are computed between descriptor vectors to identify correspondences between keypoints in different images. This approach, introduced in Lowe's seminal work, extracts 128-dimensional SIFT descriptors at interest points and matches them by nearest neighbors in the Euclidean metric, often refined with a ratio test to discard ambiguous matches. Such matrices support robust feature correspondence under changes in scale, rotation, and illumination, forming the basis for applications like image stitching and 3D reconstruction.

For shape comparison, the Hausdorff distance serves as a measure of the maximum discrepancy between two sets of points, such as contours extracted from images. Defined as the supremum of the minimum distances from points in one set to the other (and vice versa), it captures outliers and partial overlaps effectively, making it suitable for model-based object matching in cluttered scenes. Huttenlocher et al. developed efficient algorithms to compute directed and symmetric Hausdorff distances, enabling real-time comparisons by evaluating all possible translations between model and image point sets. This method has been widely adopted for silhouette-based shape retrieval, where distance matrices approximate the mismatch between boundary point sets.

In object tracking, distance matrices underpin assignment problems solved with the Hungarian algorithm to associate detections across frames. In the Simple Online and Realtime Tracking (SORT) framework and its extensions, a cost matrix is built from intersection-over-union (IoU) overlaps or Mahalanobis distances between predicted track positions (obtained via Kalman filtering) and new detections, sometimes augmented with appearance similarities; the Hungarian algorithm then finds the optimal bipartite matching that minimizes total assignment cost. This matrix-based approach yields efficient data association in multi-object scenarios such as pedestrian tracking in video sequences, where edge maps from Canny detectors can provide positional cues for initializing distances. By handling occlusions and identity switches, it achieves real-time performance on benchmarks like MOT16, with reported multiple object tracking precision exceeding 60%.
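A minimal sketch of the assignment step follows, using a hypothetical cost matrix of Euclidean distances between predicted track centers and detection centers (production trackers such as SORT typically use IoU- or Mahalanobis-based costs with gating); scipy.optimize.linear_sum_assignment solves the underlying linear assignment problem optimally.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

# Predicted track centers (e.g., from a Kalman filter) and new detection centers, in pixels.
tracks     = np.array([[100.0,  50.0], [220.0,  80.0], [ 40.0, 300.0]])
detections = np.array([[225.0,  78.0], [ 98.0,  53.0], [400.0, 400.0]])

cost = cdist(tracks, detections)           # pairwise Euclidean cost matrix
rows, cols = linear_sum_assignment(cost)   # optimal track-to-detection assignment

for t, d in zip(rows, cols):
    if cost[t, d] < 50.0:                  # illustrative gating threshold
        print(f"track {t} -> detection {d} (cost {cost[t, d]:.1f})")
    else:
        print(f"track {t} left unmatched")
```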
Image segmentation leverages distance matrices in graph-based formulations, where pixels are nodes and edge weights reflect pairwise distances to enforce boundary adherence. In graph-cut methods, such as those proposed by Boykov and Jolly, neighborhood edges are weighted inversely to intensity or color differences (a form of distance in feature space), while unary terms incorporate user scribbles; min-cut/max-flow algorithms then partition the graph to minimize an energy function that respects segment boundaries. This yields globally optimal segmentations for foreground-background separation, with edge weights derived from spatial and photometric distances encouraging smooth regions. For superpixel generation, the Simple Linear Iterative Clustering (SLIC) algorithm clusters pixels using a combined distance in a five-dimensional space of CIELAB color values and xy-coordinates, producing compact, boundary-respecting superpixels with near-linear time complexity. Extensions like Geodesic-SLIC replace this combined distance with geodesic distances along the image manifold to better preserve structures in textured regions, computing paths that avoid high-gradient boundaries for more content-sensitive partitioning.

Recent advancements in vision transformers (ViTs) approximate large distance-like similarity matrices through self-attention mechanisms, addressing quadratic computational costs in high-resolution images. In the ViT architecture, attention matrices are derived from scaled dot-product similarities between patch embeddings, effectively computing pairwise affinities that play a role analogous to kernel distances; efficient variants, such as those using low-rank approximations, reduce matrix operations while maintaining representational power for tasks like semantic segmentation. This has led to state-of-the-art results on datasets like ADE20K, where attention approximations capture long-range dependencies akin to geodesic paths on image graphs.
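The sketch below illustrates the scaled dot-product affinity matrix at the heart of self-attention; the patch embeddings, projection weights, and sizes are illustrative placeholders rather than an actual ViT implementation, but the row-stochastic matrix it produces is the pairwise similarity structure referred to above.

```python
import numpy as np

def attention_matrix(patches, Wq, Wk):
    """Row-stochastic pairwise affinity matrix from scaled dot-product attention."""
    Q = patches @ Wq                               # queries, shape (n_patches, d_head)
    K = patches @ Wk                               # keys,    shape (n_patches, d_head)
    scores = Q @ K.T / np.sqrt(Q.shape[-1])        # pairwise similarity matrix
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability before softmax
    weights = np.exp(scores)
    return weights / weights.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
n_patches, d_model, d_head = 16, 32, 8
patches = rng.normal(size=(n_patches, d_model))    # stand-in for patch embeddings
Wq = rng.normal(size=(d_model, d_head))            # illustrative projection weights
Wk = rng.normal(size=(d_model, d_head))

A = attention_matrix(patches, Wq, Wk)
print(A.shape, A.sum(axis=-1))                     # (16, 16); each row sums to 1
```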

Network Analysis and Multidimensional Scaling

In network analysis, distance matrices derived from shortest paths play a crucial role in quantifying structural properties of social and other complex networks. The shortest-path distance between two vertices is the minimum number of edges connecting them, and collecting these values forms a distance matrix that captures the graph's topology. This matrix is foundational for computing centrality measures, such as closeness centrality, which evaluates a vertex's average shortest-path distance to all others, indicating its reachability and influence in information diffusion within social graphs. For instance, Freeman's conceptualization highlights how vertices with low average distances act as central hubs in communication networks, facilitating the rapid spread of ideas or resources. Betweenness centrality, another key metric, counts the proportion of shortest paths passing through a vertex, emphasizing its role as a bridge in social structures. These measures, computed from the all-pairs shortest-path matrix (e.g., via the Floyd-Warshall algorithm), enable identification of influential actors in applications such as information-diffusion modeling and organizational studies.

Distance matrices also support community detection in networks through adaptations of modularity optimization. Traditional modularity assesses the density of intra-community edges relative to a null model, but variants extend this to distance-based formulations, such as the distance modularity matrix, which penalizes long-range connections within communities. By constructing a matrix whose entries reflect geodesic distances adjusted for the distances expected at random, spectral methods can decompose it to reveal community partitions that maximize modularity scores. This approach is particularly useful in sparse social networks, where adjacency alone may overlook subtle structural divisions, as demonstrated in analyses of collaboration graphs and online interaction data.

Multidimensional scaling (MDS) leverages distance matrices to embed high-dimensional network or perceptual data into low-dimensional spaces for visualization and analysis. In classical MDS, given a distance matrix D, the procedure first forms the matrix of squared distances D^{(2)} (squaring each entry of D) and applies double centering to obtain the inner-product matrix B = -\frac{1}{2} H D^{(2)} H, where H = I - \frac{1}{n} \mathbf{1}\mathbf{1}^T is the centering matrix. The eigendecomposition B = V \Lambda V^T then yields coordinates X = V \Lambda^{1/2} (retaining the leading nonnegative eigenvalues), preserving distances up to the rank of B. This method, rooted in principal coordinates analysis, reproduces Euclidean distances exactly and is well suited to visualizing shortest-path distances in networks.

For non-Euclidean or ordinal data, non-metric MDS, as developed by Kruskal, minimizes a stress function to find an embedding that monotonically preserves the rank order of the distances. The stress measure is defined as \text{Stress} = \sqrt{ \frac{\sum_{i<j} (d_{ij} - \hat{d}_{ij})^2}{\sum_{i<j} d_{ij}^2} }, where d_{ij} are the observed distances and \hat{d}_{ij} the fitted distances in the embedding, optimized with iterative algorithms such as steepest descent. A practical example is embedding shortest-path distances from a friendship network into two dimensions, where nodes cluster by social proximity, revealing subgroups such as cliques and isolates in datasets from adolescent peer relations. In psychometrics, Kruskal's approach has been applied to perceptual similarity data, such as color or shape judgments, to uncover underlying psychological dimensions.
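A compact NumPy sketch of the classical MDS recipe just described follows; the input D could be, for example, the shortest-path distance matrix of a small network, and here it is the distance matrix of an equilateral triangle, which the embedding recovers exactly.

```python
import numpy as np

def classical_mds(D, k=2):
    """Classical (metric) MDS: embed an n x n distance matrix D into k dimensions.

    Double-centers the elementwise-squared distances, B = -1/2 * H D2 H, then
    takes coordinates from the k largest nonnegative eigenvalues of B.
    """
    n = D.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n           # centering matrix
    B = -0.5 * H @ (D ** 2) @ H                   # inner-product (Gram) matrix
    eigvals, eigvecs = np.linalg.eigh(B)          # eigenvalues in ascending order
    order = np.argsort(eigvals)[::-1][:k]         # indices of the k largest
    L = np.sqrt(np.clip(eigvals[order], 0, None))
    return eigvecs[:, order] * L                  # n x k coordinate matrix

# Distances between the three corners of an equilateral triangle with side 1.
D = np.array([[0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0],
              [1.0, 1.0, 0.0]])
X = classical_mds(D, k=2)
# Recovered pairwise distances match D up to rotation/reflection.
print(np.linalg.norm(X[:, None] - X[None, :], axis=-1).round(3))
```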
Recent advances integrate distance matrices with graph neural networks (GNNs) for learning dynamic distances in evolving networks, addressing limitations of static shortest paths. GNNs with distance encoding augment node features with shortest-path or resistance distances, enabling provably more expressive models for tasks such as link prediction in temporal social graphs. For instance, distance-aware GNNs learn adaptive embeddings that evolve with network changes, outperforming traditional MDS in scalability for large-scale dynamic data from 2020 onward. In psychological applications, MDS on fMRI-derived distance matrices, based on functional-connectivity correlations, maps brain regions into low-dimensional spaces reflecting cognitive processes such as large-scale functional networks. Studies using three-way MDS on fMRI data have revealed integrated neurocognitive subsystems, with embeddings relating perceptual tasks to activation patterns in specific cortical regions.
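As an illustrative convention (an assumption for this sketch, not a statement about any particular study), functional-connectivity correlations r can be converted into a distance matrix via d = \sqrt{2(1 - r)}, which equals the Euclidean distance between the mean-centered, unit-norm regional signals; the result can then be passed to the classical MDS sketch above.

```python
import numpy as np

def correlation_to_distance(timeseries):
    """Map regional time courses (regions x timepoints) to a distance matrix.

    Uses d_ij = sqrt(2 * (1 - r_ij)), the Euclidean distance between the
    mean-centered signals rescaled to unit norm, so higher correlation
    corresponds to smaller distance.
    """
    R = np.corrcoef(timeseries)                       # region-by-region correlations
    D = np.sqrt(np.clip(2.0 * (1.0 - R), 0.0, None))  # clip guards against rounding
    np.fill_diagonal(D, 0.0)                          # enforce exact zeros on the diagonal
    return D

rng = np.random.default_rng(1)
signals = rng.normal(size=(6, 200))                   # 6 synthetic "regions", 200 timepoints
D = correlation_to_distance(signals)
print(D.shape, D.diagonal())                          # (6, 6), zero diagonal
```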
