
Distance transform

In image processing and computer vision, a distance transform is a fundamental operation applied to a binary image that assigns to each pixel a value representing the shortest distance to the nearest feature pixel, typically the boundary between foreground and background regions. This computation generalizes the concept of distance to discrete grids, where the distance metric can vary, including the Euclidean distance (measuring straight-line separation), the city-block distance (also known as Manhattan or L1 distance, summing horizontal and vertical steps), or the chessboard distance (L∞ distance, considering diagonal moves). Introduced by Rosenfeld and Pfaltz in 1968 as distance functions on digital pictures, the distance transform enables efficient analysis of spatial relationships in discrete arrays. The technique has evolved from the early sequential scan algorithms for approximate distances proposed by Rosenfeld and Pfaltz, through Danielsson's 1980 algorithm, which advanced exact computation by separating forward and backward passes over the image, to exact methods achieving linear O(n) time for n pixels, as developed by Meijster et al. in 2000. Beyond binary images, the concept extends to sampled functions, where the transform computes, at each point, the minimum over all sample points of a distance term plus the function value, supporting applications in higher dimensions and non-binary data. Distance transforms underpin numerous tasks in image analysis and computer vision, including segmentation (delineating object boundaries), skeletonization (extracting medial axes for shape analysis), Voronoi diagram generation (partitioning space based on proximity), and shape matching via chamfer distances for recognition under partial occlusion or deformation. They also facilitate separating overlapping objects, such as cells in microscopy images, and serve as building blocks in robotics for path planning and collision avoidance, as well as in computer vision for feature detection. Modern implementations, like linear-time algorithms for squared distances using parabolic envelopes, ensure scalability for large-scale processing.

Fundamentals

Definition

A distance transform (DT) of a binary image or grid is a derived representation in which each non-feature pixel, such as those in the background, is assigned the minimum distance to the nearest feature pixel, typically object boundaries or obstacles marked as features. This process converts qualitative representations into quantitative distance fields, enabling precise measurements for subsequent analysis tasks such as shape description. The basic computation of a distance transform operates on a binary image where the feature set F consists of pixels valued at 0, and all other pixels are initialized to \infty. For each pixel p, the transform assigns the value D(p) = \min_{q \in F} \operatorname{dist}(p, q), where \operatorname{dist} denotes a selected distance function. As a simple illustrative example, consider a one-dimensional binary array [0, \infty, \infty, 0], with features at the first and last positions. Applying the transform with unit steps (corresponding to a basic adjacency metric) yields [0, 1, 1, 0], where each intermediate position holds the shortest path distance to its nearest feature. The unsigned distance transform specifically computes these external distances for background pixels only, without assigning values inside the feature set or incorporating directional signs.
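As a concrete illustration of this definition, the following minimal Python sketch (assuming NumPy; the function name is illustrative) evaluates D(p) = \min_{q \in F} \operatorname{dist}(p, q) by brute force for a one-dimensional array:
import numpy as np

def brute_force_dt(features):
    # Direct evaluation of D(p) = min over feature positions q of |p - q|.
    F = np.flatnonzero(features)                  # indices of feature elements
    return np.array([np.abs(p - F).min() for p in range(len(features))])

# Features at the first and last positions, as in the example above.
print(brute_force_dt([1, 0, 0, 1]))               # -> [0 1 1 0]
This quadratic-time formulation is useful only for illustration and for verifying faster algorithms; practical implementations use the scan-based or separable methods described later.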

Distance Metrics

In discrete spaces such as pixel grids in images, distance metrics for distance transforms are adaptations of continuous-space measures to account for the grid structure, typically considering neighborhood connectivity such as 4-connected (adjacent pixels horizontally or vertically) or 8-connected (including diagonals). These adaptations ensure distances propagate along paths that respect the grid topology, enabling efficient computation while approximating or exactly representing geometric separations. The Manhattan distance, also known as the L_1 or city-block distance, is defined as d((x_1,y_1),(x_2,y_2)) = |x_1 - x_2| + |y_1 - y_2|. It models propagation in a 4-connected neighborhood, producing diamond-shaped distance fields, and supports simple integer arithmetic without floating-point operations, making it computationally lightweight for large images. The Chebyshev distance, or L_\infty distance, is given by d((x_1,y_1),(x_2,y_2)) = \max(|x_1 - x_2|, |y_1 - y_2|). Corresponding to an 8-connected grid, it generates square-shaped propagation fronts and assumes uniform cost for all eight directions, which is suitable for applications requiring isotropic-like behavior in discrete paths without diagonal penalties. The Euclidean distance, or L_2 distance, provides the true geometric measure as d((x_1,y_1),(x_2,y_2)) = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2}. It is rotation-invariant and independent of grid connectivity, offering the most accurate representation of straight-line distances, though its square-root and squared-term computations increase overhead on discrete grids. Grid adaptations distinguish metrics like city-block (Manhattan, limited to orthogonal moves) from chessboard (Chebyshev, allowing diagonals), influencing how distances accumulate in transforms to avoid over- or under-estimation in non-Euclidean topologies.
Metric | Formula | Key Properties | Pros | Cons
Manhattan (L_1) | d = \|x_1 - x_2\| + \|y_1 - y_2\| | Diamond-shaped fronts; 4-connected | Fast integer computation; separable | Approximates Euclidean; anisotropic
Chebyshev (L_\infty) | d = \max(\|x_1 - x_2\|, \|y_1 - y_2\|) | Square-shaped fronts; 8-connected | Simple max operation; uniform diagonal cost | Overestimates in some directions
Euclidean (L_2) | d = \sqrt{(x_1 - x_2)^2 + (y_1 - y_2)^2} | Rotation-invariant; grid-independent | Exact geometric distance | Computationally intensive; non-separable
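To make these differences concrete, the following small sketch (assuming SciPy's scipy.ndimage module) computes all three transforms for a 5×5 image with a single central feature pixel; note that SciPy's convention measures the distance of nonzero elements to the nearest zero-valued element:
import numpy as np
from scipy.ndimage import distance_transform_cdt, distance_transform_edt

# 5x5 image with one feature pixel at the centre (zeros mark features in SciPy).
img = np.ones((5, 5), dtype=bool)
img[2, 2] = False

print(distance_transform_cdt(img, metric="taxicab"))     # Manhattan: diamond-shaped front
print(distance_transform_cdt(img, metric="chessboard"))  # Chebyshev: square-shaped front
print(np.round(distance_transform_edt(img), 2))          # Euclidean: circular front
The printed maps show the diamond-, square-, and circle-shaped propagation fronts described in the table above.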

History

Origins

The distance transform originated in the mid-1960s amid burgeoning research in digital picture processing and early pattern recognition, where discrete rasterized representations of images necessitated tools for quantifying spatial relationships without assuming continuous geometry. This need arose from efforts to process binary digital pictures—rectangular arrays of picture elements (pixels)—for tasks involving object boundaries and interior points, extending foundational work on digital topology. In their pioneering 1968 paper "Distance Functions on Digital Pictures," Azriel Rosenfeld and John L. Pfaltz formally introduced the distance transform, defining it as a function that assigns to each pixel in a picture the minimum distance to the nearest pixel in a specified reference subset, such as an object boundary. Published in Pattern Recognition (Volume 1, Issue 1), this work proposed efficient algorithms for computing such transforms using sequential operations, enabling the analysis of connectivity, simple-connectedness, and topological properties in binary images. The authors emphasized its utility in generating descriptors for shapes by propagating distance values across the grid in two passes: a forward scan to approximate local minima and a backward scan to refine global distances. Early applications of the distance transform focused on fundamental shape description, such as identifying skeletons or medial axes through local maxima of distance values, and performing nearest-neighbor searches to connected components in early image analysis systems. These capabilities supported initial efforts in automated image analysis, including boundary detection and region labeling, which were critical for emerging applications of automated picture processing. Rosenfeld and Pfaltz highlighted key limitations from the outset, noting that grid-based discretizations inherently distort true distances, prompting the adoption of approximate metrics like the city-block (Manhattan) or chessboard distances to balance computational efficiency with accuracy. The year 1968 stands as the foundational milestone for distance transforms, building on contemporaneous developments in shape representation, such as Harry Blum's 1967 introduction of the medial axis transform—a continuous analog that encodes object shape via loci of maximal inscribed circles, later adapted to discrete settings using distance fields. Pre-1970 works, including Blum's contributions in the symposium Models for the Perception of Speech and Visual Form, underscored the transform's role in biological and perceptual shape analysis, laying theoretical groundwork for digital implementations.

Key Developments

In the 1980s, key advancements in distance transform algorithms emphasized both exactness and efficiency for digital images. Per-Erik Danielsson's 1980 algorithm marked the first method for computing the exact Euclidean transform on raster grids, employing a two-pass scheme—forward and backward scans—that approximates straight-line paths and incorporates quadratic root extractions to ensure accuracy. This approach achieved precise distances but at a computational cost of O(n²) for an n × n image, making it suitable for moderate-sized inputs. Concurrently, approximate methods gained traction for faster processing. Gunilla Borgefors introduced chamfer distance transforms in her 1984 paper on distance transformations in arbitrary dimensions and expanded the approach in 1986 for digital images, using local propagation with weighted masks (such as 3-4 or 5-7-11 metrics) to approximate Euclidean distances with a maximal relative error of approximately 2%. These integer-based operations enabled linear-time O(n) computation, prioritizing speed over exactness in applications requiring quick approximations. The 1990s and early 2000s saw optimizations for exact transforms, addressing the quadratic bottleneck. In 2000, Arnold Meijster, Jos B.T.M. Roerdink, and Wim H. Hesselink developed a separable algorithm that decomposes the problem into independent 1D row and column scans, followed by lower-envelope computations, achieving exact Euclidean distances in linear time for n pixels. This breakthrough significantly scaled computations for large images, influencing subsequent exact methods. The mid-2000s introduced parallelization to leverage emerging hardware. Guodong Rong and Tiow-Seng Tan's 2006 Jump Flooding Algorithm adapted distance propagation for graphics processing units (GPUs), using iterative jumps in a logarithmic number of passes to approximate Voronoi diagrams and distance fields in parallel, enabling real-time performance on high-resolution grids. In the 2010s and 2020s, theoretical extensions incorporated uncertainty and learning paradigms. Johan Öfverstedt, Joakim Lindblad, and Nataša Sladoje presented the stochastic distance transform in 2019, modeling images as discrete random sets to compute probabilistic distance statistics, offering adjustable robustness through sampling or dynamic programming. In parallel, machine learning integrations emerged, with neural networks learning implicit distance fields from data; for instance, a 2021 framework used signed-distance functionals to optimize manipulation planning in cluttered environments, blending geometric priors with trainable representations. Recent advancements as of 2024 include GPU-accelerated implementations for exact transforms, enhancing scalability for large-scale processing, and applications of distance transforms in deep learning pipelines, such as distance-transform-guided mixup for medical image analysis in Alzheimer's detection.
Year | Authors | Contribution | Time Complexity
1980 | Danielsson | First exact DT via two-pass propagation with root extractions | O(n²)
1984–1986 | Borgefors | Chamfer DT for fast approximations using local weighted propagations | O(n)
2000 | Meijster et al. | Separable exact EDT reducing complexity through 1D transforms | O(n)
2006 | Rong & Tan | GPU-parallel Jump Flooding for approximate DT and Voronoi computation | O(n) with log passes
2019 | Öfverstedt et al. | Stochastic DT for probabilistic handling of noisy random sets | O(n) with sampling
2021 | Driess et al. | Learned signed-distance fields integrated with neural networks for planning tasks | Model-dependent

Algorithms

Approximate Distance Transforms

Approximate distance transforms compute distances in binary images using simplified metrics that prioritize computational efficiency over exact measurements. These methods approximate the true L2 distance by employing local distance metrics, such as chamfer distances, which rely on weighted masks applied over small neighborhoods like 3×3 or 5×5 pixels. The core principle involves propagating distance values through forward and backward passes across the image, where each non-feature pixel's distance is updated to the minimum of its current estimate and the distances of already-processed neighbors plus the corresponding mask weights. This approach offers a good trade-off between accuracy and speed, making it suitable for large images where exact methods would be prohibitive. Chamfer algorithms use predefined local distance masks to approximate metrics like L1 (city-block) or L∞ (chessboard), which in turn serve as proxies for the Euclidean distance. For a basic L∞ approximation in a 3×3 neighborhood, a forward mask might assign weights of 0 to the current pixel, 1 to orthogonal neighbors (horizontal/vertical), and 1 to diagonal neighbors, represented as:
  1  1  1
  1  0  ∞
  ∞  ∞  ∞
This mask propagates distances from the left and above during the forward scan. For a better L2 approximation, weighted chamfer masks are used, such as the 3×3 mask with axial weight a=3 (orthogonal) and diagonal weight b=4; these are the real-valued weights 1 and √2 scaled by a factor of 3 and rounded, so arithmetic stays integer-valued and the final distances are divided by 3:
  4  3  4
  3  0  ∞
  ∞  ∞  ∞
(shown here for the forward pass). A 5×5 mask extends this with an additional knight-move weight c=11 (alongside a=5, b=7), reducing approximation error at the cost of more comparisons. These weights are optimized to minimize deviation from the true Euclidean distance while avoiding floating-point operations. Sequential scan methods implement the propagation via two passes over a 2D image I, producing a distance map D where D(i,j) = 0 if I(i,j) is a feature pixel, and otherwise the approximate distance to the nearest feature. In the forward pass (top-to-bottom, left-to-right), each pixel updates its distance using the forward mask:
# D is initialized to ∞ at non-feature pixels; out-of-range neighbors are treated as ∞
for i from 0 to height-1:
    for j from 0 to width-1:
        if I(i,j) == feature: D(i,j) = 0
        else: D(i,j) = min(D(i-1,j) + a, D(i,j-1) + a, D(i-1,j-1) + b, D(i-1,j+1) + b)
The backward pass (bottom-to-top, right-to-left) then refines each estimate using the symmetric mask, drawing from the neighbors below and to the right:
for i from height-1 downto 0:
    for j from width-1 downto 0:
        D(i,j) = min(D(i,j), D(i+1,j) + a, D(i,j+1) + a, D(i+1,j-1) + b, D(i+1,j+1) + b)
This two-pass scheme ensures all distances propagate from the nearest feature, achieving O(n) time for an n-pixel image. Parallel variants adapt these scans for concurrent processing, particularly for city-block (L1) distances, using four independent scans: downward along columns, upward along columns, rightward along rows, and leftward along rows. Each scan updates distances in parallel across non-dependent pixels, enabling efficient implementation on parallel architectures without synchronization conflicts. This decomposes the propagation into orthogonal directions, approximating the diamond-shaped L1 front while maintaining the same asymptotic time as sequential methods on multiprocessor systems. Error analysis for chamfer approximations quantifies deviation from the exact Euclidean distance, typically measured as the maximum relative error. The 3×3 mask with weights 3-4 yields a maximum error of approximately 8% (0.0809 in relative terms), while the 5×5 mask with weights 5-7-11 reduces this to about 2% (0.0202). These bounds arise from the discrete grid's inability to perfectly replicate continuous circular propagation, but optimization of the weights reduces the error further, to around 4% for 3×3 and 1.4% for 5×5 masks. The primary advantages of approximate distance transforms include linear O(n) complexity, reliance on simple integer arithmetic, and suitability for real-time applications in resource-constrained environments. Unlike exact methods, they avoid square-root computations or global searches, enabling fast processing on sequential hardware while providing sufficient accuracy for many practical uses.
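The two-pass chamfer scheme above can be written out as a short runnable sketch; the following Python version (assuming NumPy; the function name, the 3-4 weights, and the final division by 3 follow the description above but are otherwise illustrative) is a direct, unoptimized transcription:
import numpy as np

def chamfer34(feature_mask):
    # Two-pass 3-4 chamfer distance transform; dividing by 3 approximates Euclidean units.
    a, b = 3, 4                                   # orthogonal and diagonal weights
    h, w = feature_mask.shape
    INF = 10**9
    D = np.where(feature_mask, 0, INF).astype(np.int64)

    def relax(i, j, offsets):
        best = D[i, j]
        for di, dj, wgt in offsets:
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w:
                best = min(best, D[ni, nj] + wgt)
        return best

    fwd = [(-1, 0, a), (0, -1, a), (-1, -1, b), (-1, 1, b)]
    bwd = [(1, 0, a), (0, 1, a), (1, -1, b), (1, 1, b)]
    for i in range(h):                            # forward pass: top-to-bottom, left-to-right
        for j in range(w):
            D[i, j] = relax(i, j, fwd)
    for i in range(h - 1, -1, -1):                # backward pass: bottom-to-top, right-to-left
        for j in range(w - 1, -1, -1):
            D[i, j] = relax(i, j, bwd)
    return D / 3.0

# Example: a single feature pixel in a 5x5 grid.
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
print(np.round(chamfer34(mask), 2))
The pure-Python loops are slow for large images; production code typically vectorizes the scans or calls a library routine, but the structure mirrors the pseudocode above.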

Exact Euclidean Distance Transforms

Computing the exact Euclidean distance transform (EDT) on discrete grids presents significant challenges due to the non-separable nature of the Euclidean metric in two and three dimensions, which prevents the simple row-by-column propagation possible for city-block or chessboard distances. On discrete grids, sampling artifacts can lead to inaccuracies if not handled precisely, requiring algorithms to account for the true L2 norm between grid points while avoiding approximations that introduce errors. These issues necessitate specialized techniques to ensure the distance at each point is the minimum to the nearest feature point, maintaining exactness on the grid. One of the earliest methods for exact EDT is Danielsson's algorithm from 1980, which employs a two-phase approach to achieve efficiency. In the first phase, an initial approximation based on the Manhattan metric is computed via raster scanning, assigning to each point a two-component vector descriptor encoding the offset to the nearest feature. The second phase refines this approximation through quadratic intersection calculations during forward and backward scans along each line, correcting the distances to match the true Euclidean metric with negligible error. The method incorporates propagation cones, geometric regions that fan out from feature points to guide the spread of distance vectors, ensuring accurate nearest-neighbor assignment without exhaustive searches; the cones visualize how distance vectors propagate diagonally across the grid to handle the 2D non-separability. The algorithm runs in O(N^2) time for an N x N image, making it suitable for moderate-sized 2D grids. A more efficient linear-time algorithm was introduced by Meijster et al. in 2000, achieving exact EDT in O(mn) time for an m x n image through a separable two-phase process. Phase 1 computes a 1D EDT along each column (scanning top-to-bottom and bottom-to-top) to produce an intermediate map G(x,y), which captures the minimum distance to the nearest feature within the same column. In Phase 2, the algorithm processes rows (left-to-right and right-to-left) by constructing the lower envelope of the parabolas derived from the G values, using intersection (separator) points to identify the global minimum contributor for each position. This technique efficiently resolves the 2D minimization by evaluating intersections between parabolas centered at feature points, ensuring exactness without approximations. The separable nature allows parallelization across rows or columns. The following pseudocode outlines the core separable computation for the 2D case (adapted from the original description):
# Phase 1: column-wise 1D EDT (for each column x)
for y = 0 to m-1:
    if image[x][y] is a feature: G[x][y] = 0
    else: G[x][y] = distance (in pixels) to the nearest feature in column x

# Phase 2: forward row scan (left-to-right, for each row y)
for x = 0 to n-1:
    build the lower envelope of the parabolas f_i(u) = (u - i)^2 + G[i][y]^2
    DT[x][y] = value of the lower envelope at u = x   # squared distance

# Phase 2: backward row scan (right-to-left) finishes evaluating the envelope
for x = n-1 downto 0:
    DT[x][y] = min(DT[x][y], value of the refined envelope at u = x)

# DT holds squared distances; take square roots to obtain Euclidean values
This structure guarantees the exact Euclidean distance by minimizing over all possible feature sites via the envelope geometry. For extensions, Maurer et al. proposed in 2003 an optimized linear-time algorithm that generalizes to arbitrary dimensions using dimensional reduction and precomputed lookup tables. The method recursively computes the EDT by reducing the k-dimensional problem to (k-1)-dimensional slices, constructing partial Voronoi diagrams along each dimension and intersecting them with image rows (or planes in 3D) to find exact minima. Precomputed lookup tables store intersection offsets and distances for common configurations, significantly speeding up the process while maintaining O(N) time for N voxels, with a small constant factor verified experimentally on medical images. This approach handles grid artifacts effectively by propagating exact distances through layered reductions, supporting anisotropic voxels via weighted norms. Modern GPU implementations leverage parallel reductions to accelerate exact EDT computations, particularly for large 2D and 3D grids. Algorithms like the Parallel Banding Algorithm decompose the problem into separable passes across scanlines, using GPU shaders for concurrent 1D distance computations and parallel reductions to resolve envelope minima, achieving real-time performance on high-resolution volumes. These methods map the brute-force minimization to pointwise operations followed by reductions over bands of the image, ensuring exactness while exploiting massive parallelism for up to 100x speedups over CPU versions. To verify the exactness of an EDT implementation, one approach is to compare the computed distances against a brute-force computation, where the distance at each grid point is the minimum Euclidean norm over all feature points. This validation checks for zero error on discrete grids, as deviations would indicate approximation artifacts; empirical tests on benchmark images confirm that algorithms like Meijster's and Maurer's match brute-force results exactly.
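The separable lower-envelope computation is compact enough to implement directly; the following Python sketch (assuming NumPy, with illustrative function names) follows the closely related Felzenszwalb-Huttenlocher formulation of the 1D squared-distance transform and applies it along columns and then rows, which yields the same exact result as the two-phase scheme described above:
import numpy as np

def dt_1d(f):
    # Exact 1D squared-distance transform of a sampled function f
    # via the lower envelope of the parabolas q -> (q - v)^2 + f(v).
    n = len(f)
    d = np.zeros(n)
    v = np.zeros(n, dtype=int)      # positions of parabolas in the lower envelope
    z = np.zeros(n + 1)             # boundaries between envelope segments
    k = 0
    z[0], z[1] = -np.inf, np.inf
    for q in range(1, n):
        s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        while s <= z[k]:
            k -= 1
            s = ((f[q] + q * q) - (f[v[k]] + v[k] * v[k])) / (2 * q - 2 * v[k])
        k += 1
        v[k] = q
        z[k], z[k + 1] = s, np.inf
    k = 0
    for q in range(n):
        while z[k + 1] < q:
            k += 1
        d[q] = (q - v[k]) ** 2 + f[v[k]]
    return d

def edt_2d(binary):
    # Exact 2D EDT: the column pass gives squared within-column distances,
    # the row pass combines them through lower envelopes.
    INF = 1e20
    f = np.where(binary, 0.0, INF)            # 0 at features, large value elsewhere
    g = np.apply_along_axis(dt_1d, 0, f)      # squared distances along columns
    return np.sqrt(np.apply_along_axis(dt_1d, 1, g))

# Example: two feature pixels in a 5x9 image.
img = np.zeros((5, 9), dtype=bool)
img[2, 2] = img[2, 6] = True
print(np.round(edt_2d(img), 2))
A brute-force comparison over all feature points, as described above, can be used to confirm that this produces the same values up to floating-point rounding.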

Applications

Image Processing

In image processing, distance transforms serve as a foundational tool for analyzing spatial relationships within binary or grayscale images, enabling precise manipulation of shapes and features based on proximity metrics. By converting a binary image into a distance map, where each pixel's value represents its distance to the nearest boundary or feature pixel, distance transforms facilitate tasks ranging from boundary refinement to skeletonization. This capability is particularly valuable in static image analysis, where understanding geometric properties aids in isolating and enhancing structural elements without altering the underlying pixel intensities excessively. One prominent application is in watershed segmentation, where distance transforms enable Voronoi-based partitioning to delineate object boundaries. In watershed algorithms, the distance transform of a marker image creates a topographic surface that guides flooding simulations, producing catchment basins that correspond to Voronoi diagrams of the markers. This approach effectively segments touching or overlapping objects by imposing minima at predefined marker locations, reducing the over-segmentation common in gradient-based watersheds. For instance, in object boundary detection, the distance transform applied to edge-detected regions allows watershed segmentation to accurately separate clustered cells in microscopic images, achieving precise delineation with minimal leakage across boundaries. Distance transforms also underpin skeletonization and thinning processes by identifying the medial axis of shapes. The medial axis, defined as the set of points equidistant from at least two boundary points, emerges as the ridge lines in the distance transform map of a foreground region. To extract the skeleton, one computes the distance transform and then prunes branches by subtracting a scaled version of the original shape from the distance map, retaining only the core topology. This method, rooted in the medial axis transform, preserves the shape's connectivity and length while reducing it to a one-pixel-wide representation, useful for feature matching. In morphological operations, distance transforms support accurate erosion and dilation for boundary offsetting under Euclidean metrics. Traditional structuring-element approximations can introduce distortions, but exact transforms allow for precise reconstruction: dilation expands a shape by offsetting the distance field outward (adding to the distance values), while erosion shrinks it by subtracting, enabling uniform boundary offsets without directional bias. This is particularly effective for multi-scale analysis, where iterative application simulates grayscale morphology on binary inputs. Signed distance transforms extend these capabilities to blurring and anti-aliasing, especially in text rendering for fonts. By storing positive distances inside shapes and negative outside, signed fields enable smooth transitions across edges via shader-based interpolation, mitigating artifacts at arbitrary scales. In GPU-accelerated systems, this facilitates high-quality text rendering: the distance value at a fragment determines opacity through a threshold or smoothstep function, supporting subpixel accuracy without multisampling overhead. For example, vector textures like glyphs are rasterized into signed distance maps at low resolution, then magnified crisply on modern displays. For feature detection, distance transforms enhance ridges and valleys by quantifying proximity to edges, suppressing noise near boundaries while amplifying internal structures. The distance map of an edge-detected image highlights ridges as local maxima far from edges, where enhancement filters can boost responses based on distance thresholds. This aids in detecting curvilinear features like vessels or lanes, with valley enhancement achieved by inverting the transform to emphasize minima.
Such distance-weighted operations improve robustness in low-contrast scenarios. A typical workflow involves computing the distance transform on a binary edge map derived from a grayscale image via thresholding or Canny detection, then modulating the original intensities by the distance values for effects like soft shadows or selective blurring. For instance, pixels within a distance threshold retain full intensity, while farther ones are attenuated, creating a natural falloff that simulates depth-based compositing without explicit 3D modeling. Recent advancements integrate distance transforms with , such as distance transform-based loss functions for enhancing boundary quality in semantic segmentation tasks, improving accuracy in medical image analysis as of 2024.
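A minimal sketch of that workflow, assuming OpenCV (cv2); the file names, Canny thresholds, and falloff radius are illustrative:
import cv2
import numpy as np

gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)

# Binary edge map, then the distance of every pixel to the nearest edge.
edges = cv2.Canny(gray, 100, 200)
# distanceTransform measures distance to the nearest zero pixel,
# so invert the edge map: edge pixels become the zero-valued features.
dist = cv2.distanceTransform(255 - edges, cv2.DIST_L2, 5)

# Distance-based attenuation: full intensity near edges, smooth falloff beyond `radius`.
radius = 10.0
falloff = np.clip(1.0 - (dist - radius) / radius, 0.0, 1.0)
result = (gray.astype(np.float32) * falloff).astype(np.uint8)
cv2.imwrite("attenuated.png", result)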

Robotics and Other Fields

In robotics, distance transforms are employed in motion planning to facilitate obstacle avoidance by representing the configuration space of a robot, where obstacles are mapped as forbidden regions and free space is quantified via distance fields. For instance, in trajectory optimization for robot arms, the distance transform computes the shortest collision-free paths in high-dimensional spaces, enabling efficient gradient-based optimization for manipulation tasks. This approach transforms workspace obstacles into configuration-space distance maps, allowing real-time repulsion forces to guide trajectories away from singularities and collisions. In medical imaging, distance transforms support radiotherapy planning by generating precise distance maps from tumors or organs, which inform dose distribution and beam positioning to minimize damage to healthy tissue. For example, transforms on segmented CT or MRI volumes quantify voxel distances to target boundaries, optimizing intensity-modulated radiotherapy parameters for conformal dose delivery. Additionally, in prenatal analysis, 3D distance transforms process ultrasound volumes to delineate fetal structures, enabling automated measurement and morphological assessment. These applications leverage fast exact algorithms to handle volumetric data, providing quantitative metrics for clinical use without manual intervention. GPU-accelerated implementations have enabled real-time processing for large-scale volumetric data as of 2025. Distance transforms find extensive use in computer graphics for collision detection in interactive simulations and games, where precomputed distance fields accelerate queries for object proximity and intersection testing. By storing Euclidean distances to surfaces in voxel grids, these fields enable efficient broad-phase and narrow-phase collision detection, supporting deformable models and real-time rendering. In solid modeling, distance transforms facilitate constructive solid geometry (CSG) operations by converting primitive shapes into implicit representations, allowing Boolean unions or intersections via distance thresholding for smooth blending and offsetting. GPU-accelerated signed distance transforms have become standard for these tasks, as demonstrated in hardware implementations that achieve sub-millisecond updates for complex scenes. In computer vision, distance transforms enable robust shape matching through invariant descriptors that capture boundary and region information, insensitive to translation, rotation, or scale. Chamfer matching, a classic technique, uses approximate distance transforms to align template and query shapes by minimizing directional distance sums, widely applied in object recognition and retrieval systems. More advanced methods, like distance transform networks, integrate these maps into deep learning pipelines for feature extraction, improving accuracy in silhouette-based classification tasks. This invariance arises from normalizing distances along contours, making the approach suitable for diverse datasets, such as those in industrial inspection. Beyond these domains, distance transforms aid geographic information systems (GIS) in terrain analysis by computing cost-effective paths over digital elevation models, incorporating slope and other terrain costs as weighted distances for applications like pipeline routing or hydrological flow simulation. In bioinformatics, they generate molecular distance maps for protein surface analysis, where transforms on atomic grids produce triangulated solvent-excluded surfaces essential for simulations and cavity detection. These maps quantify interatomic proximities, supporting structure prediction workflows. A notable application in autonomous vehicles involves navigation with LiDAR data, where distance transforms create free-space maps from point clouds to support path planning and localization.
By applying transforms to occupancy grids derived from LiDAR scans, vehicles can identify navigable regions and predict collision risks in dynamic environments, as seen in navigation systems that use distance fields for obstacle avoidance without relying on high-definition prior maps. This integration enhances robustness in urban settings, with real-time updates enabling safe navigation.
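As a rough sketch of that idea (assuming SciPy; the grid contents, cell size, and robot radius are made-up values), the distance transform of the free-space mask directly yields a clearance map that can be thresholded into navigable cells:
import numpy as np
from scipy.ndimage import distance_transform_edt

# Occupancy grid from a hypothetical LiDAR scan: True = occupied cell.
occupancy = np.zeros((200, 200), dtype=bool)
occupancy[80:120, 95:105] = True                 # an obstacle block

cell_size = 0.1                                  # metres per cell (assumed)
clearance = distance_transform_edt(~occupancy) * cell_size

robot_radius = 0.5                               # metres (assumed)
navigable = clearance > robot_radius             # free-space map for planning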

Variants

Signed Distance Transforms

Signed distance transforms extend the standard transform by incorporating directional information relative to the boundary of an object in a binary image, assigning negative values to points inside the object and positive values to points outside, with zero on the boundary itself. Formally, for a point p in the image domain, the signed distance D(p) is given by D(p) = (-1)^{I(p)} \min_{q \in Bd} \Delta(p, q), where I(p) is the binary indicator function (1 inside, 0 outside), Bd denotes the object boundary, and \Delta is a distance metric such as the Euclidean distance \Delta(p, q) = \sqrt{\sum_i (p_i - q_i)^2}. This signing convention facilitates distinguishing interior and exterior regions, enabling applications that require spatial awareness beyond unsigned distances. Computation of signed distance transforms typically involves calculating separate unsigned distance transforms for the interior and exterior regions before combining them with appropriate signs. One approach inverts the binary image to compute the external distance transform (distance from background pixels to the nearest boundary), while the original image yields the internal transform (distance from object pixels to the boundary); the signed field is then formed by negating the internal distances and taking the minimum where regions meet near boundaries to resolve narrow-band inconsistencies. This method handles the topological challenges at boundaries by ensuring continuity across the zero level set. Algorithms adapt exact Euclidean distance transform techniques, such as linear-time separable methods, by propagating signs alongside distances during forward and backward passes. For instance, in 2D, boundary pixels are initialized with zero distance and neutral sign, then propagated using a 3x3 neighborhood mask that updates both distance and sign based on the minimum from adjacent pixels, as in the dead reckoning algorithm, which closely approximates exact Euclidean results in two passes. Signed distance transforms exhibit key properties that enhance their utility in advanced processing. The gradient of the signed distance field has unit magnitude almost everywhere, providing a natural normal vector field to the boundary, which is essential for level-set evolution equations where the speed term relies on accurate surface normals. Additionally, the field is Lipschitz continuous with constant 1, ensuring smooth interpolation and bounding the variation in distances by the Euclidean separation of points, which supports stable numerical schemes. These properties enable significant advantages, particularly in reversible operations. The zero-level set of the signed distance transform exactly reconstructs the original boundary, allowing lossless recovery of the binary object from the field, which is invaluable for iterative segmentation or morphological reconstructions. Furthermore, the signed representation mitigates issues in narrow-band methods by providing consistent initialization for front propagation, improving efficiency and accuracy in dynamic simulations.
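That interior/exterior construction can be sketched directly with SciPy's scipy.ndimage (a minimal illustration that ignores the half-pixel ambiguity of where the zero level set falls between sample points):
import numpy as np
from scipy.ndimage import distance_transform_edt

def signed_distance(obj):
    # Signed EDT with the convention used above: negative inside the object,
    # positive outside; `obj` is a boolean array, True for object pixels.
    outside = distance_transform_edt(~obj)    # distance from background to the object
    inside = distance_transform_edt(obj)      # distance from object to the background
    return outside - inside

# Example: a filled disk; thresholding the field at zero recovers the object.
yy, xx = np.mgrid[:64, :64]
disk = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
sdf = signed_distance(disk)
recovered = sdf < 0                           # identical to `disk`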

Generalizations

Distance transforms have been extended beyond binary images in two dimensions to higher-dimensional spaces, such as 3D volumes for medical imaging or 4D spatiotemporal data. Meijster's algorithm, originally for 2D Euclidean distance transforms, generalizes to arbitrary dimensions by decomposing the computation into sequential one-dimensional passes along each axis, enabling efficient processing of n-dimensional arrays. However, higher-dimensional transforms face significant challenges, including exponential growth of memory usage with dimension—for instance, a 1024³ volume requires approximately 8.6 GB of storage for 64-bit floats, though computational overhead can push total usage higher, potentially straining systems with 32 GB of RAM—and computational time scaling with O(n^d) in the worst case without optimizations. Weighted distance transforms adapt the standard formulation by incorporating pixel- or voxel-specific attributes, such as varying costs or speeds, to compute paths that reflect heterogeneous media, like speed maps in path planning. These are often realized through adaptations of Dijkstra's algorithm on grid graphs, where edge weights represent local attributes, yielding distances that minimize accumulated costs rather than pure geometric lengths. For example, in anisotropic media, weights can model local propagation speeds, with fast marching methods providing efficient Dijkstra-like implementations for large grids. Probabilistic or stochastic distance transforms address uncertainty in feature boundaries, such as noisy or fuzzy object edges, by modeling inputs as random sets and outputting distance distributions or expectations rather than deterministic values. A key 2019 method defines the stochastic distance transform (SDT) as the expected minimum distance under element-wise probabilities, enhancing robustness to noise while preserving morphological properties. This approach, presented at the DGCI 2019 conference, computes the SDT via sampling or analytical approximations, with applications in uncertain imaging data where traditional transforms fail. Vector-valued distance transforms extend the scalar output to vectors, encoding additional information like the direction or location of the nearest feature, which is useful for multi-object scenarios involving distances to multiple feature types. The vector-city vector distance transform (VCVDT) computes a vector field pointing to the closest boundary point, allowing accurate reconstruction of Euclidean distances and their directions in 2D and higher dimensions. This generalization supports multi-feature analysis by storing per-object distances or gradients, as implemented in libraries like DIPlib for vector-based transforms. In non-Euclidean spaces, distance transforms operate on irregular structures like graphs or Riemannian manifolds, where distances follow geodesic paths rather than straight lines. On graphs, representing pixels as nodes with weighted edges, Dijkstra's algorithm computes exact transforms for arbitrary metrics, suitable for sparse or connectivity-based data. For manifolds, such as curved surfaces in 3D, generalizations use discrete approximations of geodesic distances, often via embeddings or fast marching on triangulated meshes, to handle non-flat geometries without embedding into Euclidean space. Implementations of these generalizations are available in libraries like SciPy and OpenCV—SciPy's distance_transform_edt handles nD arrays of arbitrary shape, while OpenCV's distanceTransform supports 2D images with mask-based metrics. For weighted and vector variants, specialized extensions like the edt package in Python provide nD support, though full probabilistic or manifold features often require custom graph-based implementations using NetworkX or similar tools.
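To illustrate the weighted, graph-based generalization, the following minimal sketch computes a grey-weighted distance transform with Dijkstra's algorithm on a 4-connected grid graph (assuming NumPy and the standard-library heapq; averaging the two pixel costs per edge is one common convention, not the only one):
import heapq
import numpy as np

def weighted_dt(cost, features):
    # Grey-weighted distance transform: shortest accumulated cost from any feature
    # pixel, with per-pixel traversal costs, via Dijkstra on the 4-connected grid.
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in zip(*np.nonzero(features)):
        dist[y, x] = 0.0
        heap.append((0.0, y, x))
    heapq.heapify(heap)
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                          # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + 0.5 * (cost[y, x] + cost[ny, nx])   # assumed edge-weight convention
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, ny, nx))
    return dist

# Example: uniform cost everywhere except a high-cost band that paths must cross.
cost = np.ones((50, 50))
cost[:, 25] = 10.0
seeds = np.zeros((50, 50), dtype=bool)
seeds[25, 0] = True
print(weighted_dt(cost, seeds)[25, -1])       # accumulated cost to the far side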

References

  1. [PDF] The Distance Transform and its Computation - arXiv, Feb 24, 2023.
  2. [PDF] Distance Transforms of Sampled Functions - Brown CS, Sep 2, 2012.
  3. [PDF] Distance Functions on Digital Pictures.
  4. [PDF] A General Algorithm for Computing Distance ...
  5. Euclidean Distance Mapping - ScienceDirect, Computer Graphics and Image Processing, Volume 14, Issue 3, November 1980, pages 227-248.
  6. [PDF] Computer Vision - lecture notes, Sep 8, 2015.
  7. [PDF] Distance Transformations in Digital Images.
  8. [PDF] Sequential Operations in Digital Picture Processing.
  9. Distance Functions on Digital Pictures - ScienceDirect.
  10. [PDF] Transformation for Extracting Descriptors of Shape - Harry Blum, Data Sciences Laboratory, Air Force Cambridge Research ...
  11. Stochastic Distance Transform - arXiv:1810.08097, Oct 18, 2018.
  12. Learning Models as Functionals of Signed-Distance Fields ... - arXiv, Oct 2, 2021.
  13. [PDF] 2D Euclidean Distance Transform Algorithms: A Comparative Survey.
  14. Euclidean Distance Mapping - ScienceDirect.
  15. [PDF] A General Algorithm for Computing Distance Transforms in Linear Time.
  16. A Linear Time Algorithm for Computing Exact Euclidean Distance Transforms of Binary Images in Arbitrary Dimensions - Calvin R. Maurer, Jr.
  17. [PDF] Parallel Banding Algorithm to Compute Exact Distance Transform ...
  18. [PDF] Space-efficient Pointwise Computation of the Distance Transform on ...
  19. [PDF] Watersheds in Digital Spaces: An Efficient Algorithm Based on ...
  20. [PDF] Deep Watershed Transform for Instance Segmentation.
  21. Generating Skeletons and Centerlines from the Distance Transform - C. Wayne Niblack.
  22. [PDF] Skeletonization Using SSM on the Distance Transform.
  23. [PDF] Recursive Erosion, Dilation, Opening, and Closing Transforms.
  24. [PDF] Improved Alpha-Tested Magnification for Vector Textures and ...
  25. [PDF] The Relationship Between Curvilinear Structure Enhancement and ...
  26. [PDF] Watershed Transform - MathWorks.
  27. [PDF] Linear Time Algorithms for Exact Distance Transform.
  28. [PDF] Signed Distance Transform Using Graphics Hardware - CGL @ ETHZ.
  29. Image Transforms - Introduction to Bioimage Analysis.
  31. Distance_transform_edt Need Too Much Memory - Image.sc Forum, Feb 10, 2019.
  32. An Attribute Weighted Distance Transform - ScienceDirect.
  33. Face Recognition Using Weighted Distance Transform - IEEE Xplore.
  34. Stochastic Distance Transform: Theory, Algorithms and Applications - Jun 19, 2020.
  35. Stochastic Distance Transform - 21st IAPR International Conference, DGCI 2019 ...
  36. [PDF] Vector-City Vector Distance Transform - Mark W. Jones.
  37. Distance Transforms Module - DIPlib, a library for quantitative image ...
  38. Efficient Distance Transformation for Path-Based Metrics - ScienceDirect.
  39. [PDF] Non-Euclidean Motion Planning with Graphs of Geodesically ... - May 11, 2023.
  40. distance_transform_edt - SciPy v1.16.2 Manual.
  41. edt - PyPI.