Cartogram
A cartogram is a thematic map that distorts the geographic size, shape, or position of regions to represent the distribution of a non-geographic variable, such as population, wealth, or electoral votes, rather than actual land area.[1][2] This transformation aims to visually equalize the representation of the chosen variable, making disparities more apparent than on conventional equal-area maps.[3] Unlike traditional projections that prioritize geographic accuracy, cartograms prioritize statistical proportionality, often resulting in counterintuitive distortions where densely populated or high-value areas expand while sparse ones shrink.[4]
Cartograms have been employed since the 19th century, though the term was initially applied inconsistently to various diagrammatic maps, evolving into a distinct cartographic technique for thematic visualization.[5] Modern computational methods, developed over the past decades, enable automated construction through algorithms like density-equalizing transformations or self-organizing maps, improving precision and scalability beyond manual drafting.[6][4] Common types include area cartograms, which resize regions in proportion to the variable; linear cartograms, which alter distances; and topological variants that preserve adjacency while deforming shapes.
These maps excel at highlighting global inequalities, for example by enlarging densely populated countries like India or China in population cartograms, countering the visual dominance that vast but sparsely inhabited nations like Russia or Canada hold on conventional land-area maps.[7] Applications span election analysis, where vote shares resize districts; economic comparisons via GDP scaling; and hazard forecasting by amplifying exposure risks.[8][9] Despite challenges to readability and the preservation of recognizable shapes, cartograms provide a geometrically rigorous alternative to choropleth maps, making variable-driven spatial patterns legible without reliance on color gradients alone.[10][3]
Definition and Principles
Core Concept and Purpose
A cartogram constitutes a thematic map wherein the size or shape of geographic regions is deliberately altered to correspond proportionally with a selected statistical variable, such as population density, economic output, or electoral votes, rather than their actual land area.[11] This transformation substitutes the variable's magnitude for conventional geographic scale, enabling regions with disproportionate influence relative to their physical extent to be visually emphasized.[12] For instance, densely populated nations like India or China expand dramatically in population-based cartograms, reflecting their demographic weight more accurately than equal-area projections.[13]
The core purpose of cartograms lies in enhancing the interpretability of spatial data distributions that correlate poorly with geographic area, thereby mitigating perceptual biases inherent in choropleth or standard proportional symbol maps.[14] Traditional maps often underrepresent compact areas with elevated variable values, such as urban centers or resource-rich enclaves, which can obscure causal patterns in phenomena like resource allocation or voting power.[15] By enforcing area proportionality to the data, cartograms promote a data-centric visualization that aligns spatial extent with substantive importance, facilitating quantitative comparisons and revealing disparities that geographic fidelity might conceal.[16]
This approach rests on the recognition that cartographic representation involves trade-offs between locational accuracy and thematic emphasis; when the analytical objective prioritizes the latter, distortion serves as a tool for causal insight into variable-driven dynamics, unencumbered by irrelevant topographic constraints.[17] Empirical studies of map cognition indicate that such rescaling improves user comprehension of relative magnitudes, though it necessitates familiarity with the method to avoid misinterpretation of altered topologies.[18]
Distortion Mechanisms and First-Principles Rationale
Cartograms distort geographic regions by resizing their areas to correspond proportionally to a chosen statistical variable, such as population density or economic production, rather than land area. This transformation prioritizes thematic accuracy over spatial fidelity, employing algorithms that adjust boundaries while often preserving adjacency and topology in contiguous variants. Distortion arises from the need to expand regions with high variable values and contract those with low values, which can lead to shape elongation, fragmentation risks, or unrecognizability if not constrained.[19]
Key mechanisms include diffusion-based methods, exemplified by the Gastner-Newman algorithm developed in 2004, which models the map as a density field and applies a continuous diffusion process to equalize the variable's density across the plane. In this approach, excess "mass" representing the variable flows from denser to sparser regions akin to heat diffusion, governed by the diffusion equation \partial \rho / \partial t = \nabla \cdot (D \nabla \rho), where \rho is density and D is diffusivity; the equation is solved by numerical integration, iteratively warping polygons until areas match target values with minimal boundary crossings. This yields smooth, contiguous distortions suitable for global or national maps.[19][20]
Alternative mechanisms use discrete optimization or force-directed simulations, such as iterative rubber-sheet projections that incrementally scale polygons via mass-point adjustments or constrained triangulation to limit angular distortions. These methods optimize cartographic error, defined as the deviation between rendered and target areas, often incorporating penalties for excessive shape changes to retain recognizability. For instance, triangulation-based algorithms restrict edge bearing shifts to under 90 degrees during resizing, preventing topological inversions.[21]
The first-principles rationale for such distortions stems from the mismatch between geographic extent and substantive significance: standard maps amplify sparse, vast territories at the expense of compact, data-rich ones, biasing perceptual judgments of totals since humans overestimate small areas and underestimate large ones when assessing densities via color alone. By substituting area as the primary visual variable, cartograms enable direct, scale-free comparisons of magnitudes, exploiting innate abilities to judge areas accurately—far superior to linear or angular encodings per empirical perceptual studies—thus revealing true proportionalities and causal distributions obscured by uniform geography. This trade-off favors data fidelity over locational precision when the goal is quantitative insight, as geographic distortions are tolerable for thematic emphasis, mirroring how projections inherently compromise for utility.[22][19]
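The diffusion mechanism can be summarized in a few equations. The following is a condensed restatement of the density-equalizing formulation discussed above (with the diffusivity absorbed into the time scale), where \rho is the variable's density, \mathbf{v} the induced velocity field, and V_i the variable total of region i:

```latex
% Condensed density-equalizing diffusion (notation as in the surrounding text).
\frac{\partial \rho}{\partial t} = \nabla^{2}\rho
    \qquad\text{(density diffuses toward uniformity)}
% Fick's law gives the flux; writing it as \rho\mathbf{v} defines the velocity
% that carries every map point along with the flow:
\mathbf{J} = -\nabla\rho = \rho\,\mathbf{v}
    \quad\Longrightarrow\quad
    \mathbf{v}(\mathbf{r},t) = -\frac{\nabla\rho}{\rho}
% Boundary points are advected until the density is uniform:
\mathbf{r}(\infty) = \mathbf{r}(0) + \int_{0}^{\infty}\mathbf{v}\bigl(\mathbf{r}(t),t\bigr)\,dt
% In that limit each region's displayed area is proportional to its variable total:
A_i^{\text{final}} \propto V_i
```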
Historical Development
Origins in Statistical Mapping
![Émile Levasseur's 1876 cartogram of Europe][float-right]
The origins of cartograms trace back to 19th-century advancements in statistical mapping, where cartographers began experimenting with proportional representations to visualize quantitative data such as population, economic output, or land utilization over geographic regions, departing from conventional equal-area projections. These early techniques emerged amid the rise of thematic cartography in Europe, particularly in France, as statisticians sought methods to convey statistical magnitudes intuitively without relying solely on auxiliary symbols like bars or circles. Precursors included William C. Woodbridge's 1837 comparative charts in his "Modern Atlas," which juxtaposed continental sizes and populations, laying groundwork for value-driven spatial distortions.[5]
Pierre Émile Levasseur, a French economist and geographer, produced the earliest recognized cartograms around 1870, with a notable series published in 1876 depicting European countries resized proportionally to variables like land area, population, or wealth, often using rectangular or block-like forms while approximating geographic outlines. Levasseur's work, featured in economic geography texts such as his 1875 "La France, avec ses Colonies," marked the first documented use of the term "cartogramme" in this context, emphasizing statistical proportionality over topographic fidelity. These maps, though diagrammatic rather than fully contiguous, demonstrated the principle of area distortion to highlight disparities, influencing subsequent statistical visualizations in journals and atlases.[23][24]
By the turn of the 20th century, similar approaches appeared in election mapping, such as Hermann Haack and Hans Wiechel's 1903 rectangular cartograms of German Reichstag results scaled by population, bridging statistical mapping toward more integrated geographic distortions. This evolution reflected a drive in statistical cartography to prioritize data density and perceptual accuracy, as geographic scale often misrepresented human-centric metrics like population distribution. However, early cartograms remained labor-intensive manual constructions, limited by available data precision and graphical tools.[24]
Key Figures and Mid-20th Century Advances
Erwin Raisz provided one of the earliest formal definitions of rectangular statistical cartograms in 1934, emphasizing their utility for visualizing economic data by arranging regions into grid-like forms proportional to variables such as production output, which laid foundational principles influencing mid-century applications.[5] This approach gained traction in periodicals, with examples like a 1937 Business Week cartogram depicting U.S. manufacturing by state, demonstrating practical deployment for industrial analysis.[5]
In 1953, Arthur Philbrick advanced theoretical discussions on cartographic abstraction, including density-adjusted representations akin to cartograms, in the context of geographical content analysis, promoting their role in emphasizing functional rather than geometric fidelity.[5] By the 1960s, epidemiological applications emerged, as seen in Levinson and Haddon's 1965 area-adjusted maps for public health data, which scaled regions by incidence rates to highlight disease distributions more effectively than equal-area projections.[5]
Further methodological progress occurred in 1968 when Hunter and Young introduced a technique employing physical accretion models—simulating region growth via layered materials like clay—to generate quantitative cartograms, offering a precursor to digital methods by balancing distortion through iterative physical approximation.[5] That same year, Härö produced an area cartogram of U.S. Standard Metropolitan Statistical Areas scaled by population, illustrating urban density variations and advancing contiguous distortion practices for socioeconomic mapping.[5]
Waldo Tobler contributed foundational mathematical frameworks, deriving partial differential equations for area cartograms during this period, which enabled systematic transformation of geographic spaces while preserving topological relations, marking a shift toward analytical cartography.[25] These developments reflected growing academic interest in cartograms as tools for causal inference in spatial data, prioritizing empirical variable emphasis over territorial accuracy.
Algorithmic and Computational Era
The advent of digital computing in the mid-20th century enabled the first automated cartogram generation, marking the transition from manual drafting to algorithmic approaches. Waldo Tobler pioneered this era with iterative algorithms developed in the early 1960s, utilizing partial differential equations and the Jacobian determinant to warp map regions while preserving topological connectivity and achieving proportional area distortion based on variables like population density. Implemented on hardware such as the IBM 709, these methods required approximately 25 seconds per iteration across 20–30 cycles to converge, producing early contiguous cartograms for regions including the United States and global projections.[6] Tobler's techniques, detailed in publications from 1961 to 1963, emphasized grid-based transformations for latitude-longitude data and later extended to irregular polygons in the 1970s, laying foundational principles for computational spatial adjustment despite limitations in handling complex boundaries and computational speed.[6]
Advancements in the 1980s addressed efficiency bottlenecks, with Dougenik, Chrisman, and Niemeyer introducing a polygon-specific displacement algorithm in 1985 that applied iterative forces directly to region vertices, reducing processing time compared to grid-based predecessors and improving scalability for finer resolutions. This method incorporated topological checks to prevent overlaps, facilitating broader adoption in geographic information systems (GIS) prototypes. By the late 20th century, non-contiguous variants like Danny Dorling's 1996 circle-based cartograms further diversified computational strategies, though contiguous forms remained challenging due to distortion artifacts.[6]
A breakthrough in contiguous cartogram quality arrived in 2004 with the diffusion-based algorithm by Michael T. Gastner and M. E. J. Newman, published on May 10 in the Proceedings of the National Academy of Sciences. Drawing from physical diffusion principles, the method solves a linear diffusion equation to redistribute an initial density field (e.g., population) across a map, deriving a velocity field for smooth displacement via numerical integration and fast Fourier transforms for efficiency, often completing in seconds to minutes on standard hardware. Unlike prior iterative approaches prone to irregular warping or discontinuities, this technique yields aesthetically coherent, overlap-free results by mimicking mass flow equalization, as demonstrated in applications like the 2000 U.S. presidential election cartogram and New York lung cancer incidence mapping from 1993–1997.[19]
Post-2004 refinements built on this foundation, with Benjamin Hennig adapting the Gastner-Newman model around 2013 to enable intra-area density variations, allowing heterogeneous sub-regions to warp independently while maintaining overall continuity. These developments coincided with GIS software integrations, such as Esri's Cartogram tool implementing the diffusion method by 2010, enhancing accessibility for analysts. Computational cartograms thus evolved from rudimentary prototypes to robust tools for empirical data visualization, prioritizing density relationships in the data over geographic fidelity.[14][26]
Classification of Cartograms
Area-Based Cartograms
Area-based cartograms resize the geographic area of map units—such as countries, states, or provinces—to reflect the magnitude of a chosen variable, like population density or GDP, rather than true land area. This transformation encodes statistical data directly into spatial extent, enabling visual comparisons of variable values across regions while challenging traditional equal-area projections that prioritize geographic fidelity over thematic emphasis. The approach traces to 19th- and early 20th-century statistical mapping efforts but gained prominence with computational methods in the late 20th century, as manual resizing proved labor-intensive for complex datasets.[27][5]
These cartograms balance data representation with map readability, though distortions can obscure relative positions and adjacencies, potentially misleading untrained viewers about geographic relationships. Empirical studies indicate they outperform choropleth maps in tasks requiring magnitude estimation, as area perception aligns intuitively with quantity judgment, but require careful variable selection to avoid overemphasizing outliers like densely populated urban states. Construction typically involves iterative algorithms that scale polygons while minimizing topological disruptions, with density-equalizing flows preserving continuity in advanced models.[28][29]
Key variants include contiguous forms, which warp shapes while maintaining shared borders, and non-contiguous or diagrammatic alternatives that prioritize shape integrity over connectivity. Contiguous implementations, such as diffusion-based methods, redistribute "mass" analogous to physical flows, yielding fluid distortions suitable for national-scale maps; for instance, a 2004 algorithm by Gastner and Newman simulates continuous density equalization to produce globally connected representations. Non-contiguous types resize units independently—often as scaled outlines or uniform symbols like circles—facilitating simpler computation and reduced overlap, though at the cost of lost neighborhood cues; Dorling cartograms, using force-directed circle placements from 1996, exemplify this by approximating topology via proximity. Diagrammatic extensions, including gridded or rectangular arrays, further abstract geography into bar-like or tessellated forms for multivariate data, enhancing comparability in dense datasets but diverging furthest from mappability.[30][5][2] (A minimal target-area computation follows the table below.)
| Variant | Topology Preservation | Shape Distortion | Example Application |
|---|---|---|---|
| Contiguous | High (adjacencies maintained) | Significant warping | Population distribution in national maps[5] |
| Non-Contiguous | Low (regions separated) | Minimal (outlines scaled) | Economic output comparisons across disconnected territories[31] |
| Diagrammatic (e.g., Dorling) | Approximate via placement | Uniform symbols (e.g., circles) | Thematic overlays with multiple variables[32] |
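The shared principle behind every variant in the table is the same proportionality rule. The sketch below, with invented region names and figures, computes target areas from a population variable along with the linear scale factor a non-contiguous variant would apply:

```python
# Target areas for an area-based cartogram: every variant in the table,
# contiguous or not, starts from the rule that a region's displayed area
# should be proportional to its variable value. Figures are invented.

regions = {            # name: (land area in km^2, population)
    "A": (1_000_000, 5_000_000),
    "B": (100_000, 20_000_000),
    "C": (500_000, 10_000_000),
}

total_land = sum(area for area, _ in regions.values())
total_pop = sum(pop for _, pop in regions.values())

for name, (area, pop) in regions.items():
    # Keep the total map area fixed and redistribute it by population share.
    target_area = total_land * pop / total_pop
    # Linear scale factor a non-contiguous variant would apply to the outline.
    scale = (target_area / area) ** 0.5
    print(f"{name}: land {area:>9,} km^2 -> target {target_area:>12,.0f} km^2 "
          f"(linear scale {scale:.2f})")
```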
Contiguous Shape-Warping Variants
Contiguous shape-warping cartograms distort the geometry of geographic regions continuously to make their areas proportional to a specified variable, such as population or economic output, while maintaining shared boundaries and topological adjacency between regions.[33] This approach ensures the map remains a single connected piece without fragmentation, unlike non-contiguous variants, but often results in significant shape deformations for regions with uneven variable densities.[19] The warping is achieved through iterative algorithms that redistribute "mass" analogous to density equalization, preserving relative positions and neighborhood relations as much as possible.[34]
A foundational method for these cartograms is the diffusion-based algorithm developed by Michael T. Gastner and M. E. J. Newman in 2004, which models distortion as a continuous flow of density across boundaries to achieve uniform target density.[19] In this process, initial geographic areas are treated as sources or sinks of flow proportional to the difference between their actual and desired areas; diffusion equations propagate adjustments iteratively until equilibrium, yielding smooth, contiguous transformations suitable for thematic mapping.[35] The algorithm's computational efficiency scales well for national or global datasets, producing readable maps with minimal overlap or inversion of adjacencies.[36]
Subsequent refinements include flow-based extensions that accelerate computation while retaining density-equalizing properties; a 2018 algorithm by Gastner et al. benchmarks at seconds for world-scale cartograms on standard hardware, outperforming prior diffusion methods in speed without sacrificing contiguity or shape coherence.[34] Tools like ArcGIS Pro implement similar contiguous generation via numerical optimization, allowing users to specify fields for distortion while enforcing boundary preservation.[33] Alternative approaches, such as CartoDraw (2004), incorporate Fourier-based shape similarity metrics to minimize curvature distortions during warping, prioritizing recognizability for complex polygons.[37]
These variants excel in applications requiring topological integrity, such as visualizing population distributions where preserving regional connectivity aids interpretation of spatial relationships.[26] However, extreme density contrasts can amplify shape distortions, potentially compromising legibility for elongated or irregular regions, as noted in algorithmic evaluations balancing distortion metrics against contiguity.[38] Empirical studies confirm their utility in highlighting disparities, such as in global health or economic data, where uniform density reveals patterns obscured by land-area biases.[39]
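The source-and-sink flow described above can be sketched numerically on a coarse raster. The toy example below uses explicit time stepping and nearest-cell sampling purely for illustration; production implementations such as Gastner-Newman rely on Fourier-based solvers, careful boundary handling, and polygon vertex tracking, and the grid size, densities, and tracer layout here are arbitrary choices:

```python
# Toy numerical sketch of density-equalizing flow on a coarse raster.
import numpy as np

N = 64
rho = np.ones((N, N))        # background ("sea") density
rho[16:48, 8:24] = 4.0       # a dense region acts as a source of outward flow
rho[16:48, 40:56] = 0.5      # a sparse region acts as a sink

# Tracer points stand in for polygon vertices; they are advected by the flow.
pts = np.stack(np.meshgrid(np.linspace(4, 60, 15), np.linspace(4, 60, 15)),
               axis=-1).reshape(-1, 2)

dt = 0.1
for _ in range(400):
    # Explicit diffusion step with zero-flux (edge-replicating) boundaries.
    padded = np.pad(rho, 1, mode="edge")
    lap = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
           padded[1:-1, :-2] + padded[1:-1, 2:] - 4 * rho)
    # Velocity field v = -grad(rho) / rho, sampled at the nearest cell.
    gy, gx = np.gradient(rho)
    i = np.clip(pts[:, 1].astype(int), 0, N - 1)
    j = np.clip(pts[:, 0].astype(int), 0, N - 1)
    pts[:, 0] += -gx[i, j] / rho[i, j] * dt
    pts[:, 1] += -gy[i, j] / rho[i, j] * dt
    rho += lap * dt

# Density approaches uniformity; tracer points have spread out of the dense
# block and crowded into the sparse one, mimicking the cartogram warp.
print("density range before -> after:", 4.0 - 0.5, "->",
      round(float(rho.max() - rho.min()), 3))
```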
Non-Contiguous and Diagrammatic Forms
Non-contiguous cartograms resize individual geographic regions proportionally to a chosen variable, such as population or economic output, without preserving adjacency between neighboring areas.[40] This method allows each region to maintain its original shape while scaling its area—typically by applying the square root of the ratio between the variable value and the baseline geographic area as a linear scale factor—to avoid the topological distortions common in contiguous cartograms.[41] Positions are adjusted to approximate original locations and prevent overlaps, facilitating easier shape recognition and more accurate area estimation by viewers compared to shape-warping alternatives.[42] The technique was formalized in scholarly work by Judy M. Olson in 1976, who described algorithms for independent scaling and placement of regions on a base map framework.[43] Practical implementations often involve computational placement to optimize visibility, as seen in tools like ArcGIS modules that generate such maps for thematic data visualization.[44] These cartograms prove effective for datasets where preserving recognizable outlines outweighs the need for spatial continuity, such as U.S. state maps scaled by population density.[45]
Diagrammatic forms extend non-contiguous principles by substituting original shapes with standardized geometric primitives, including circles, squares, rectangles, or hexagons, to emphasize proportional area over geographic fidelity.[46] Dorling cartograms, for instance, employ packed circles whose radii are proportional to the square root of the variable, positioned via force-directed algorithms to mimic relative locations without enforced connectivity.[47] Similarly, Demers cartograms utilize rectangular or hexagonal tiles scaled by area, enabling compact arrangements that highlight statistical comparisons across regions.[48] Hexagonal diagrammatic cartograms, such as those depicting German federal states by population, assign uniform hexagons to each unit and resize them accordingly, often arranging them in grids or scattered layouts for clarity.[49] This abstraction minimizes shape bias in perception and supports multivariate overlays, though it sacrifices outline familiarity for diagrammatic simplicity.[5] Graphical variants may incorporate bar-like or mosaic elements, classified alongside non-contiguous types for their emphasis on value encoding through form rather than terrain.[14] These methods, while less tied to geography, aid data interpretation by giving the mapped variable visual prominence rather than letting incidental land area dominate the display.[50]
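A minimal sketch of that independent scaling, using Shapely for the geometry; the outlines and values are invented, and anchoring on the densest unit (so that no region outgrows its footprint) is one common convention rather than a requirement:

```python
# Minimal non-contiguous scaling sketch: each polygon keeps its shape and
# anchor point but is rescaled so that displayed area tracks the variable.
# A real workflow would start from a shapefile or GeoDataFrame rather than
# hand-built polygons.
from shapely.geometry import Polygon
from shapely import affinity

units = {
    "A": (Polygon([(0, 0), (4, 0), (4, 3), (0, 3)]), 120),   # (outline, variable)
    "B": (Polygon([(5, 0), (7, 0), (7, 2), (5, 2)]), 300),
}

# Anchor the scaling on the densest unit so that nothing grows past its
# geographic footprint (one common convention for this variant).
anchor_density = max(value / geom.area for geom, value in units.values())

for name, (geom, value) in units.items():
    target_area = value / anchor_density          # displayed area ~ variable
    factor = (target_area / geom.area) ** 0.5     # linear factor = sqrt of area ratio
    scaled = affinity.scale(geom, xfact=factor, yfact=factor, origin="centroid")
    print(f"{name}: area {geom.area:.1f} -> {scaled.area:.1f} (value {value})")
```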
Linear and Rectangular Cartograms
Linear cartograms modify the lengths or directions of linear features, such as roads or transit lines, to represent variables like travel time or traffic density instead of true geographic distances.[51] This distortion preserves connectivity but alters spacing to emphasize functional relationships, as seen in schematic subway maps where station intervals reflect average journey durations rather than Euclidean distances.[52] For instance, non-connective linear cartograms decouple line segments to independently scale lengths based on data like congestion levels, avoiding topological constraints for clearer visualization of one-dimensional metrics.[51]
Applications include traffic condition mapping, where road segment lengths are proportionally adjusted to indicate average speeds; a study on urban networks demonstrated that such cartograms improve comprehension of variability in travel efficiency by normalizing distances to experiential scales.[51] Construction often employs algorithms that solve for edge length transformations while maintaining vertex order or fixed positions, enabling real-time updates for dynamic data like live transit feeds.[52] Unlike area cartograms, linear variants prioritize route-based phenomena, reducing perceptual bias from geographic familiarity.[53]
Rectangular cartograms depict regions as rectangles with areas scaled to thematic variables, such as population or GDP, simplifying irregular polygons into uniform shapes while aiming to preserve adjacency where possible.[54] This form emerged as a diagrammatic alternative to contiguous warping, trading spatial fidelity for readability; for example, algorithms optimize rectangle dimensions and placements using linear programming to minimize overlaps and match target areas within a bounded frame.[55] A 2006 analysis showed that rectangular layouts can represent up to 50 regions with low distortion if aspect ratios are constrained, though complex topologies often require non-contiguous arrangements.[56]
Empirical evaluations indicate rectangular cartograms enhance quantitative perception for relative magnitudes, outperforming choropleth maps in tasks estimating totals from areas, but they may obscure directional relationships unless augmented with labels or grids.[57] Tools like R's recmap package automate generation by partitioning values into hierarchical rectangles, supporting uses in electoral analysis where party vote shares dictate sizes.[58] By equalizing visual prominence with per-unit magnitudes, these cartograms aid comparison across regions, though algorithmic choices affect validity; evolution strategies have been proposed to iteratively refine packings for minimal wasted space.[59]
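A one-dimensional sketch of the distance-to-time rescaling used in linear cartograms; the stations, distances, and travel times below are invented for illustration:

```python
# One-dimensional sketch of a linear cartogram: stations on a schematic line
# are spaced by cumulative travel time instead of track distance.

segments = [            # (from_station, to_station, distance_km, travel_time_min)
    ("Central", "Museum", 1.2, 2.0),
    ("Museum", "Harbor", 3.5, 4.0),
    ("Harbor", "Airport", 14.0, 9.0),   # fast express stretch
]

def cumulative_positions(segments, weight):
    """Place stations along a line at cumulative values of the chosen weight."""
    idx = 2 if weight == "km" else 3    # column 2 = distance, column 3 = minutes
    positions, total = {segments[0][0]: 0.0}, 0.0
    for seg in segments:
        total += seg[idx]
        positions[seg[1]] = total
    return positions

geographic = cumulative_positions(segments, "km")    # conventional spacing
time_scaled = cumulative_positions(segments, "min")  # linear-cartogram spacing
for station in geographic:
    print(f"{station:<8} distance-based {geographic[station]:5.1f}   "
          f"time-based {time_scaled[station]:5.1f}")
```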
Multivariate and Specialized Variants
Multivariate cartograms integrate multiple data dimensions by combining area distortion for one variable with additional symbology for others, enabling richer analysis of geospatial relationships. In bivariate variants, region sizes are adjusted in proportion to a primary variable, such as population, while colors or patterns encode a secondary variable like gross domestic product per capita.[60] This approach facilitates direct visual comparison of variables that univariate cartograms cannot achieve alone.[61]
A formal technique for bivariate cartograms was detailed in a 2018 IEEE Transactions on Visualization and Computer Graphics paper, which constructs distortions iteratively to balance representation of both variables while minimizing topological disruptions. The method preserves recognizable shapes better than independent univariate mappings, allowing users to identify correlations, such as regions with high population density and low economic output. Empirical evaluations in the study confirmed improved accuracy in tasks like ranking and outlier detection compared to juxtaposed maps.
Specialized variants extend cartogram principles to niche applications or constraints. Graphical cartograms, including Dorling and Demers methods, replace geographic shapes with abstract forms like circles or rectangles sized by the variable, prioritizing data fidelity over spatial continuity.[62] Dorling cartograms, introduced in 1996, use force-directed algorithms to position non-overlapping circles, reducing distortion artifacts in dense areas.[31] Demers cartograms employ rectangular tiles, suitable for grid-based data representations. Gridded cartograms subdivide regions into uniform cells resized individually, enhancing resolution for fine-scale variables.[62] These forms are implemented in tools like ArcGIS Pro as of 2023, supporting rapid prototyping for thematic analysis.[62]
Multivariate labeling techniques further specialize cartograms by varying typographic attributes—font size for one variable, weight for another—on equal-area distortions, as demonstrated in a 2016 study combining symmetric shapes with data-encoded text for compact, multi-variable displays.[63] Such innovations address limitations in traditional symbology, though they require careful design to avoid perceptual overload.[64]
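As a minimal illustration of the size-plus-color encoding described above, the sketch below draws Dorling-style circles whose area tracks population while color tracks GDP per capita; positions and values are invented, and no overlap removal or shape preservation is attempted:

```python
# Sketch of a bivariate, Dorling-style encoding: marker area carries one
# variable (population) while color carries a second (GDP per capita).
import matplotlib.pyplot as plt

names = ["A", "B", "C", "D"]
x = [0.0, 1.5, 2.5, 1.0]            # rough "geographic" positions
y = [0.0, 0.5, 2.0, 2.5]
population = [5e6, 20e6, 8e6, 1e6]
gdp_per_capita = [42_000, 9_000, 27_000, 65_000]

# The scatter `s` argument is marker area in points^2, so making it
# proportional to population gives area-proportional symbols.
sizes = [p / 2e4 for p in population]
sc = plt.scatter(x, y, s=sizes, c=gdp_per_capita, cmap="viridis", alpha=0.8)
for name, xi, yi in zip(names, x, y):
    plt.annotate(name, (xi, yi), ha="center", va="center")
plt.colorbar(sc, label="GDP per capita (color)")
plt.title("Area ~ population, color ~ GDP per capita")
plt.axis("equal")
plt.show()
```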
Construction Techniques
Fundamental Algorithms
The construction of contiguous area cartograms relies on algorithms that iteratively distort polygonal regions to match target densities while preserving shared boundaries and overall topology. A foundational method is the density-equalizing diffusion algorithm developed by Gastner and Newman in 2004, which treats density variations as imbalances resolvable through simulated physical flow.[19] This approach begins by defining an initial density function \rho(\mathbf{r}) based on the geographic variable, such as population per unit area. A linear diffusion equation \partial \rho / \partial t = \nabla^2 \rho is then solved to propagate changes, and the associated flux \mathbf{J} = -\nabla \rho yields a velocity field \mathbf{v} = -\nabla \rho / \rho that directs displacement from high-density to low-density zones.[19] Displacements are integrated via \mathbf{r}(t) = \mathbf{r}(0) + \int_0^t \mathbf{v}(\mathbf{r}(t'), t') \, dt', iterating until \rho uniformizes, at which point each region's area has been rescaled in proportion to its initial density relative to the map-wide mean, so that final areas reflect the variable's totals.[19]
For computational efficiency, the process leverages fast Fourier transforms in a cosine basis, enabling maps with thousands of regions to compute in seconds to minutes on standard hardware.[19] This diffusion model ensures smooth, contiguous deformations by analogy to mass redistribution in physical systems, avoiding abrupt overlaps or tears common in earlier manual techniques.[19] Subsequent refinements, such as flow-based variants introduced in 2018, accelerate convergence by solving nonlinear flow equations directly, reducing runtime for large-scale cartograms to under a second while retaining density equalization and boundary integrity.[34] These methods prioritize empirical fidelity to input data over geographic fidelity, with distortion controlled via coarse-graining parameters that balance readability and precision.[19]
For non-contiguous diagrammatic cartograms, Dorling's 1996 force-directed algorithm represents regions as variable-sized circles, positioning them via repulsion forces to eliminate overlaps and attraction terms to approximate original adjacencies.[65] Iterations minimize energy in a layout analogous to graph drawing, yielding compact arrangements that sacrifice continuity for reduced shape distortion.[65] Complementary optimization techniques, such as nonlinear least-squares formulations linearized for iterative vertex relocation, enforce exact area targets under topology constraints, often using scanline sweeps or medial-axis skeletons to guide deformations.[37] These algorithms underpin software tools by providing scalable, verifiable transformations grounded in mathematical optimization rather than heuristic approximation.[37]
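To make the circle-layout idea concrete, the following simplified sketch iterates pairwise repulsion between overlapping circles (sized so that area tracks the variable) with a weak pull back toward each region's original centroid; it is a sketch under simple assumptions, not Dorling's published algorithm, and all names, coordinates, and values are invented:

```python
# Simplified force-directed layout in the spirit of a Dorling cartogram.
import math

regions = {                 # name: (centroid_x, centroid_y, variable value)
    "A": (0.0, 0.0, 400),
    "B": (1.0, 0.2, 900),
    "C": (0.4, 1.0, 100),
}

radius = {k: math.sqrt(v) / 40 for k, (_, _, v) in regions.items()}  # area ~ value
pos = {k: [x, y] for k, (x, y, _) in regions.items()}                # mutable copies
home = {k: (x, y) for k, (x, y, _) in regions.items()}               # original spots

names = list(regions)
for _ in range(500):
    # Pairwise repulsion: push overlapping circles apart along the line of centers.
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            overlap = radius[a] + radius[b] - d
            if overlap > 0:
                push = 0.5 * overlap / d
                pos[a][0] -= dx * push
                pos[a][1] -= dy * push
                pos[b][0] += dx * push
                pos[b][1] += dy * push
    # Weak attraction toward the original location preserves rough geography.
    for k in names:
        pos[k][0] += 0.01 * (home[k][0] - pos[k][0])
        pos[k][1] += 0.01 * (home[k][1] - pos[k][1])

for k in names:
    print(f"{k}: radius {radius[k]:.2f}, position ({pos[k][0]:.2f}, {pos[k][1]:.2f})")
```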
Software Implementation and Tools
Several software tools implement cartogram construction algorithms, ranging from standalone applications to integrations within geographic information systems (GIS) and programming libraries. These tools typically support density-equalizing methods, such as the diffusion-based Gastner-Newman algorithm introduced in 2004, which preserves topology while resizing regions proportional to a variable like population.[19]
Standalone applications like ScapeToad, developed by the Chôros Laboratory, provide user-friendly interfaces for generating continuous cartograms using the Gastner-Newman method on shapefiles, with features for grid-based adaptation and topological preservation; it is cross-platform, open-source, and written in Java.[66] Similarly, Cartogram Studio offers a free Windows-based tool for manual contiguous cartogram creation, emphasizing simplicity for non-programmers.[67]
GIS platforms incorporate cartogram functionality through plugins or native toolsets. ArcGIS Pro includes a Cartogram toolset that distorts input geometries based on data fields, supporting variants like Gastner-Newman for alternate spatial representations.[68] QGIS enables area cartograms via processing toolbox extensions, such as density-equalizing transformations, suitable for thematic mapping workflows.[69] These integrations leverage vector data handling but may require preprocessing for optimal results, as noted in ESRI's older Avenue scripts for ArcView, which improved efficiency over manual methods.[70]
Programming libraries facilitate reproducible and customizable cartograms in statistical environments. The R package cartogram (version 0.3.0, released May 2023) implements continuous area cartograms via the rubber-sheet distortion algorithm from Dougenik et al. (1985) and non-contiguous variants, operating on geospatial objects like those from sp or sf classes.[71] In Python, the cartogram package (version updated July 2024) computes cartograms from GeoPandas GeoDataFrames using the same Dougenik algorithm, while python-cartogram focuses on anamorphic distortions for continuous maps.[72] For high-performance needs, go_cart provides a C-based implementation of the flow-based Gastner-Seguy-More algorithm (2018), enabling rapid generation of density-equalizing maps in seconds.[34][73] These libraries prioritize algorithmic fidelity over graphical interfaces, requiring user expertise in geospatial data manipulation.[74]