TOPSIS
TOPSIS, an acronym for Technique for Order of Preference by Similarity to Ideal Solution, is a multi-criteria decision analysis method originally developed by Ching-Lai Hwang and Kwangsun Yoon in 1981.[1] It evaluates and ranks a set of alternatives based on their relative proximity to an ideal solution—representing the best possible outcome across all criteria—and their distance from a negative ideal solution, which embodies the worst outcomes.[2] The method assumes that the optimal choice minimizes the Euclidean distance to the positive ideal while maximizing it to the negative ideal, providing a systematic approach to complex decision problems involving multiple conflicting criteria.[1]
At its core, TOPSIS operates through a structured six-step process that transforms raw decision data into a comparable ranking. First, a decision matrix is constructed to capture the performance of alternatives against selected criteria; this matrix is then normalized to ensure uniformity across different units of measurement.[2] Criteria weights—often derived from methods like analytic hierarchy process (AHP)—are applied next to reflect their relative importance, followed by the identification of positive and negative ideal solutions as composite benchmarks.[1] Separation measures, calculated as Euclidean distances from each alternative to these ideals, lead to a relative closeness coefficient that determines the final ranking, with higher values indicating preferable options.[2] This geometric foundation allows TOPSIS to handle both quantitative and qualitative data, making it versatile for real-world applications.[1]
Since its introduction, TOPSIS has seen extensive adoption and evolution across diverse domains, including supply chain management for supplier selection, energy planning for renewable resource allocation, healthcare for service quality assessment, and environmental management for sustainability evaluations.[3] To address uncertainties in decision environments, numerous extensions have emerged, such as fuzzy TOPSIS, intuitionistic fuzzy TOPSIS, and hybrid models combining it with other techniques like AHP or VIKOR, enhancing its robustness in ambiguous scenarios.[3] Research on TOPSIS has proliferated, with a 2024 review analyzing 240 studies, underscoring its enduring influence and adaptability in multi-attribute decision-making challenges.[4]
Overview
Definition and Purpose
TOPSIS, or Technique for Order of Preference by Similarity to Ideal Solution, is a multi-attribute decision making (MADM) method developed by Ching-Lai Hwang and Kwangsun Yoon in 1981.[1] It functions as a compensatory aggregation technique that compares alternatives by normalizing criterion scores and calculating geometric distances between options and reference points.[5]
The core purpose of TOPSIS is to rank and select the best alternative from a finite set by measuring its proximity to the positive ideal solution (PIS)—comprising the most desirable attribute values—and its remoteness from the negative ideal solution (NIS)—defined by the least desirable values.[6] This geometric rationale allows decision-makers to quantify trade-offs across conflicting criteria, yielding a closeness coefficient that orders alternatives from most to least preferable.[6]
TOPSIS operates on a decision matrix that evaluates m alternatives across n criteria, representing performance measures in an n-dimensional measurement space.[6] Developed to tackle structured decision problems involving both quantitative and qualitative criteria, it offers a versatile tool for applications in fields requiring balanced multi-faceted assessments.[1]
Key Assumptions and Prerequisites
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) relies on several foundational assumptions to ensure its validity in multi-attribute decision making. A primary assumption is that each criterion exhibits monotonic preference, meaning the utility for alternatives increases or decreases consistently with the criterion's value, without non-monotonic behaviors such as thresholds or plateaus.[1] Another key assumption is the independence of criteria, where interactions or dependencies between attributes are not considered, allowing weights to reflect isolated importance without adjustment for correlations.[7] TOPSIS further assumes linear compensation among criteria, permitting trade-offs where strong performance in one attribute can offset weaknesses in another, characteristic of compensatory aggregation methods.[8] Decision makers' preferences must be adequately captured through assigned weights, and alternatives are typically treated as mutually exclusive, suitable for ranking or selection scenarios rather than simultaneous adoption.[9]
Prior to applying TOPSIS, users require a basic understanding of decision matrices, which organize alternatives and criteria in a structured tabular format to facilitate evaluation.[1] Knowledge of weighting schemes is essential, including simple equal weights for uniform importance or expert-assigned values derived from methods like analytic hierarchy process to prioritize criteria.[7] Data inputs must be numerical or readily convertible to scores; qualitative attributes, such as descriptive ratings, necessitate preprocessing through scales (e.g., Likert-type conversions) to enable quantitative analysis.[10]
TOPSIS assumes complete and precise information across all alternatives and criteria, without provisions for missing data or uncertainty in the standard formulation; extensions like fuzzy TOPSIS address such limitations but fall outside the core method.[7]
History
Origins and Development
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) was originally developed in 1981 by Ching-Lai Hwang and Kwangsun Yoon as part of their comprehensive framework for multiple attribute decision making (MADM). In their seminal book, Multiple Attribute Decision Making: Methods and Applications, they introduced TOPSIS as a compensatory method that ranks alternatives based on their relative proximity to an ideal solution in a geometric space, leveraging normalized Euclidean distances to balance multiple criteria effectively.[1]
This approach built upon earlier distance-based techniques in decision theory, such as those employing Euclidean metrics, while addressing limitations in non-compensatory outranking methods like ELECTRE, which had been proposed in the 1960s. Unlike ELECTRE's focus on pairwise comparisons and thresholds for concordance and discordance, TOPSIS offered a simpler, more intuitive geometry-based procedure that allows trade-offs among attributes, making it particularly suitable for structured problems in operations research. Hwang and Yoon positioned TOPSIS within the broader MADM landscape to provide a practical tool for scenarios where alternatives could be objectively measured against positive and negative ideal points.[7]
Following its introduction, TOPSIS saw early adoption in the 1980s within operations research, where it was applied to practical problems such as supplier selection and project ranking. These initial uses demonstrated its utility in evaluating alternatives under multiple conflicting criteria, with examples appearing in academic literature shortly after publication, including assessments of resource allocation and performance evaluation in industrial settings. By the mid-1980s, the method had gained traction for its computational simplicity and ability to handle both quantitative and qualitative data in decision support systems.[7]
The basic TOPSIS formulation evolved during the late 1980s and 1990s to accommodate real-world complexities, particularly uncertainty in data. Extensions emerged to incorporate fuzzy sets for handling imprecise or linguistic information, with early fuzzy TOPSIS variants, such as the one proposed by Chen and Hwang in 1992, enabling the method to address vagueness in group decision-making environments.[11] These developments, including interval-based adaptations for bounded uncertainty, expanded TOPSIS's applicability to dynamic and ambiguous scenarios while preserving its core geometric principles.[7]
Key Publications and Contributors
The Technique for Order of Preference by Similarity to the Ideal Solution (TOPSIS) was formally introduced in the seminal book Multiple Attribute Decision Making: Methods and Applications by Ching-Lai Hwang and Kwangsun Yoon, published in 1981 by Springer-Verlag. This work serves as the cornerstone of TOPSIS, with chapters 3 and 4 providing the detailed methodology for multi-attribute decision making, including the core principles of ideal solution approximation and distance-based ranking.[1]
Subsequent extensions addressed limitations in robustness and applicability to complex scenarios. David L. Olson contributed significantly through his 1996 book Decision Aids for Selection Problems, which explores TOPSIS in selection contexts and emphasizes robustness against variations in weighting and normalization, enhancing its reliability for practical decision support. For group decision-making variants, Hsu-Shih Shih, Huan-Jyh Shyur, and E. Stanley Lee proposed an extension in their 2007 paper, integrating aggregation of individual preferences within the TOPSIS framework to handle non-homogeneous alternatives in collaborative settings.[12]
Influential review papers have further solidified TOPSIS's impact. Majid Behzadian and colleagues conducted a comprehensive state-of-the-art survey in 2012, classifying over 200 applications of TOPSIS across domains like engineering and management, while highlighting methodological advancements and common implementation patterns.[7] Similarly, Edmundas K. Zavadskas and Zenonas Turskis provided an overview in 2011 of multiple criteria decision-making methods in economics, focusing on hybrid TOPSIS models that combine it with other techniques such as fuzzy logic or analytic hierarchy process for improved handling of economic uncertainties.[13]
Recent developments up to 2025 have integrated TOPSIS with artificial intelligence, enhancing its adaptability to data-driven environments. For instance, AI-driven predictive maintenance frameworks using enhanced TOPSIS have been proposed to rank alternatives based on criteria such as reliability and efficiency, bridging multi-criteria decision making with AI workflows.[14]
Mathematical Foundations
Decision Matrix Construction
In the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), the decision matrix serves as the foundational data structure, capturing the performance evaluations of alternatives across multiple criteria. It is formally defined as an m \times n matrix A = [a_{ij}]_{m \times n}, where m denotes the number of decision alternatives (typically labeled A_1, A_2, \dots, A_m), n represents the number of evaluation criteria (labeled C_1, C_2, \dots, C_n), and each entry a_{ij} quantifies the performance rating of alternative A_i with respect to criterion C_j. These ratings a_{ij} are derived from raw data x_{ij}, often through a simple transformation a_{ij} = f(x_{ij}), where f may be the identity function for direct measurements or a basic scaling for consistency, ensuring the matrix reflects the decision problem's context. This structure assumes a complete dataset initially, as outlined in the original formulation.[15]
The construction of the decision matrix begins with systematic data collection tailored to the problem domain, such as soliciting judgments from domain experts, gathering empirical measurements from tests or sensors, or aggregating inputs from stakeholders via surveys or pairwise comparisons. For qualitative criteria, ratings may be converted to numerical scales (e.g., Likert scores), while quantitative criteria use direct values like costs or efficiencies. If missing values arise due to incomplete data collection, common approaches include imputing them with the arithmetic mean of available ratings for that criterion across alternatives or employing more sophisticated methods like expectation-maximization to estimate plausible values while preserving the matrix's integrity. This process ensures the matrix is robust and representative before proceeding to further analysis.[15][16]
Criteria in the decision matrix are categorized into two types based on their desirability: benefit criteria, where higher values indicate better performance (e.g., fuel efficiency or reliability scores), and cost criteria, where lower values are preferable (e.g., price or maintenance costs). This distinction guides subsequent interpretations but does not alter the raw matrix construction. For illustration, consider a selection problem among three car models evaluated on four criteria: price (cost), fuel efficiency in miles per gallon (benefit), comfort rating on a 1-10 scale (benefit), and safety score on a 1-10 scale (benefit). The resulting decision matrix might appear as follows:
| Alternative | Price ($) | Fuel Efficiency (mpg) | Comfort (1-10) | Safety (1-10) |
|---|---|---|---|---|
| Car A | 20,000 | 30 | 8 | 9 |
| Car B | 25,000 | 25 | 7 | 8 |
| Car C | 22,000 | 28 | 9 | 7 |
Such a matrix encapsulates the raw evaluations, with criteria weights assigned externally to reflect relative importance.[15][17]
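As a minimal sketch, this decision matrix can be held as a NumPy array together with its benefit/cost designations; the variable names below are illustrative rather than part of the method.

```python
import numpy as np

# Rows: Car A, Car B, Car C; columns: price, fuel efficiency, comfort, safety
X = np.array([[20000, 30, 8, 9],
              [25000, 25, 7, 8],
              [22000, 28, 9, 7]], dtype=float)

# Criterion types: price is a cost criterion, the remaining three are benefit criteria
is_benefit = [False, True, True, True]
```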
Normalization Techniques
In multi-criteria decision making, the normalization of the decision matrix is essential to render diverse criteria commensurable, thereby eliminating discrepancies arising from differing units and scales of measurement. This process transforms the original decision matrix A = [a_{ij}]_{m \times n}, where m alternatives are evaluated across n criteria, into a normalized matrix R = [r_{ij}]_{m \times n} with values typically scaled between 0 and 1, facilitating equitable comparisons and subsequent computations in methods like TOPSIS.[18]
The classical approach in TOPSIS employs vector normalization, which scales each element by the Euclidean norm of its respective criterion column to preserve relative magnitudes while achieving unit length. Specifically, the normalized value is computed as r_{ij} = \frac{a_{ij}}{\sqrt{\sum_{i=1}^m a_{ij}^2}} for each alternative i and criterion j, ensuring that the normalization is independent of the data's range and robust to outliers. This method, introduced in the foundational work on TOPSIS, maintains the geometric interpretation of alternatives as points in a hyperspace, where distances reflect true dissimilarities without distortion from varying scales.[18]
Alternative normalization techniques have been proposed and analyzed to address limitations of vector normalization, such as sensitivity to data distribution or the need for bounded outputs in specific applications. Linear scaling, often via the min-max method, rescales values relative to the minimum and maximum in each criterion column: for benefit criteria (where higher values are preferable), r_{ij} = \frac{a_{ij} - \min_i a_{ij}}{\max_i a_{ij} - \min_i a_{ij}}; this bounds the normalized values strictly between 0 and 1. For cost criteria (where lower values are preferable), inversion is applied during normalization to align them with a benefit-oriented scale, using r_{ij} = \frac{\max_i a_{ij} - a_{ij}}{\max_i a_{ij} - \min_i a_{ij}}, thereby ensuring consistency in subsequent ideal solution calculations. These linear methods, while simpler and more intuitive than vector normalization, can amplify the impact of extreme values and have been evaluated as viable alternatives in comparative studies, particularly when data ranges are well-defined.[18][19]
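A brief sketch of both techniques applied to the car-selection matrix from the previous section; the guard against constant columns in the min-max case is an added safeguard, not part of the classical formulas.

```python
import numpy as np

X = np.array([[20000, 30, 8, 9],
              [25000, 25, 7, 8],
              [22000, 28, 9, 7]], dtype=float)

# Vector normalization: divide each column by its Euclidean norm
R_vector = X / np.sqrt(np.sum(X**2, axis=0))

# Min-max (linear) normalization
col_min, col_max = X.min(axis=0), X.max(axis=0)
span = np.where(col_max > col_min, col_max - col_min, 1.0)  # guard against constant columns
R_benefit = (X - col_min) / span   # benefit criteria: higher raw values map toward 1
R_cost = (col_max - X) / span      # cost criteria: lower raw values map toward 1
```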
TOPSIS Procedure
Weighted Normalized Decision Matrix
In the TOPSIS method, criterion weights w_j for j = 1, \dots, n are positive values that sum to 1, i.e., w_j > 0 and \sum_{j=1}^n w_j = 1, to represent the relative importance of each evaluation criterion.[1] These weights are typically provided by the decision maker and incorporate subjective priorities into the multi-criteria analysis.[1]
Weights can be assigned through subjective expert judgment, where decision makers directly specify relative importance based on experience, or via structured techniques such as the Analytic Hierarchy Process (AHP), which derives weights from pairwise comparisons to ensure consistency in judgments.[3] Alternatively, objective methods like the entropy technique calculate weights based on the inherent variability and information content in the decision data, reducing reliance on personal bias.[3]
The weighted normalized decision matrix \mathbf{V} = [v_{ij}]_{m \times n}, with m alternatives, is formed by scaling the normalized decision matrix elements r_{ij} (obtained from prior normalization) with the criterion weights:
v_{ij} = w_j \cdot r_{ij}, \quad i=1,\dots,m; \quad j=1,\dots,n.
This computation adjusts the scale of each criterion according to its assigned importance.[1]
The resulting matrix \mathbf{V} emphasizes higher-priority criteria in the evaluation, setting the stage for ideal solution computations by aligning the data with decision maker preferences and ensuring proportional influence across attributes.[1]
For instance, if the weight for a cost criterion is w_1 = 0.4, then the weighted values for all alternatives under that criterion are v_{i1} = 0.4 \times r_{i1}, thereby increasing the criterion's role in distinguishing alternatives.[1]
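A short sketch of this weighting step, using the vector-normalized car-selection matrix from earlier and hypothetical weights summing to 1:

```python
import numpy as np

X = np.array([[20000, 30, 8, 9],
              [25000, 25, 7, 8],
              [22000, 28, 9, 7]], dtype=float)
R = X / np.sqrt(np.sum(X**2, axis=0))   # vector-normalized matrix

w = np.array([0.4, 0.3, 0.2, 0.1])      # illustrative weights, sum to 1
V = R * w                               # weighted normalized matrix, v_ij = w_j * r_ij
```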
Determination of Ideal Solutions
In the TOPSIS procedure, once the weighted normalized decision matrix V = (v_{ij})_{m \times n} is obtained, the positive ideal solution (PIS), denoted as A^+, and the negative ideal solution (NIS), denoted as A^-, are determined as benchmark points representing the best and worst possible performances across all criteria, respectively.[1]
The PIS A^+ is constructed by selecting, for each benefit criterion j (where higher values are preferable), the maximum value among all alternatives i, i.e., v_j^+ = \max_i v_{ij}; for each cost criterion j (where lower values are preferable), the minimum value is selected, i.e., v_j^+ = \min_i v_{ij}. Similarly, the NIS A^- selects the minimum value for benefit criteria, v_j^- = \min_i v_{ij}, and the maximum value for cost criteria, v_j^- = \max_i v_{ij}. These selections ensure that A^+ embodies the optimal hypothetical alternative that maximizes benefits while minimizing costs, while A^- represents the undesirable hypothetical alternative with the opposite extremes.[1][20]
Formally, the PIS and NIS are expressed as vectors in the criteria space:
A^+ = \left\{ v_1^+, v_2^+, \dots, v_n^+ \right\}
A^- = \left\{ v_1^-, v_2^-, \dots, v_n^- \right\}
where n is the number of criteria, and each component follows the maximization/minimization rules based on criterion type as defined above.[1][20]
Geometrically, in the multi-dimensional criteria space, the PIS A^+ corresponds to the "utopia" point, an unattainable ideal located at the farthest positive direction along benefit axes and negative along cost axes, whereas the NIS A^- is the "nadir" point, positioned at the opposite extremes to represent the worst-case scenario. This interpretation underscores TOPSIS's reliance on relative positioning to alternatives in Euclidean space for preference ordering.[1][21]
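A minimal sketch of the PIS/NIS selection on an illustrative weighted normalized matrix (the numbers are made up for demonstration, with the first criterion treated as a cost criterion):

```python
import numpy as np

# Illustrative weighted normalized matrix: 3 alternatives x 3 criteria
V = np.array([[0.20, 0.15, 0.05],
              [0.25, 0.10, 0.08],
              [0.22, 0.12, 0.06]])
is_benefit = np.array([False, True, True])   # cost, benefit, benefit

pis = np.where(is_benefit, V.max(axis=0), V.min(axis=0))  # best value per criterion
nis = np.where(is_benefit, V.min(axis=0), V.max(axis=0))  # worst value per criterion
# pis = [0.20, 0.15, 0.08], nis = [0.25, 0.10, 0.05]
```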
Calculation of Separation Measures
In the TOPSIS methodology, the separation measures quantify the geometric distance between each alternative and the positive ideal solution (PIS) as well as the negative ideal solution (NIS), using the weighted normalized decision matrix V = [v_{ij}]_{m \times n}, where m is the number of alternatives and n is the number of criteria. The separation from the PIS, denoted S_i^+, for the i-th alternative (i = 1, 2, \dots, m) is calculated as the Euclidean distance:
S_i^+ = \sqrt{\sum_{j=1}^n (v_{ij} - v_j^+)^2}
where v_j^+ represents the coordinate of the PIS for the j-th criterion. Similarly, the separation from the NIS, denoted S_i^-, is given by:
S_i^- = \sqrt{\sum_{j=1}^n (v_{ij} - v_j^-)^2}
with v_j^- as the coordinate of the NIS for the j-th criterion. These measures are derived from the principle that the preferred alternative minimizes its distance to the PIS while maximizing its distance to the NIS. A smaller value of S_i^+ indicates that the alternative is closer to the ideal solution and thus more desirable, reflecting the method's reliance on Euclidean geometry to capture overall deviation across criteria.
The Euclidean distance is employed in the standard TOPSIS formulation due to its simplicity and interpretability as the straight-line distance in the n-dimensional criteria space. While the original method specifies the Euclidean metric, extensions have explored alternatives such as the Manhattan distance (L1 norm), defined as S_i^+ = \sum_{j=1}^n |v_{ij} - v_j^+|, which may be more robust in high-dimensional or noisy data but is less commonly adopted, as it measures axis-aligned rather than straight-line distance in the criteria space.
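A short sketch of both separation measures, reusing the illustrative matrix and ideal points from the previous subsection; the Manhattan variant is shown only as the extension mentioned above.

```python
import numpy as np

V = np.array([[0.20, 0.15, 0.05],
              [0.25, 0.10, 0.08],
              [0.22, 0.12, 0.06]])
pis = np.array([0.20, 0.15, 0.08])   # positive ideal solution
nis = np.array([0.25, 0.10, 0.05])   # negative ideal solution

# Euclidean separations used in standard TOPSIS
S_plus = np.sqrt(np.sum((V - pis)**2, axis=1))
S_minus = np.sqrt(np.sum((V - nis)**2, axis=1))

# Manhattan (L1) variant discussed as an extension
S_plus_L1 = np.sum(np.abs(V - pis), axis=1)
```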
Ranking of Alternatives
The final step in the TOPSIS procedure involves computing the closeness coefficient for each alternative, which serves as a single index to synthesize the separation measures from the positive ideal solution (PIS) and negative ideal solution (NIS). The closeness coefficient C_i for alternative A_i is defined as
C_i = \frac{S_i^-}{S_i^+ + S_i^-},
where S_i^+ is the separation from the PIS, S_i^- is the separation from the NIS, and 0 \leq C_i \leq 1. A higher value of C_i indicates that the alternative is closer to the PIS and farther from the NIS, reflecting greater overall preference. This formulation, introduced by Hwang and Yoon, provides a relative measure of how well each alternative approximates the ideal solution relative to the worst-case scenario.
Alternatives are then ranked in descending order of their C_i values, with the highest C_i designating the most optimal choice. This ranking rule ensures that the preference order directly corresponds to the degree of similarity to the ideal solution, facilitating straightforward decision-making in multi-criteria contexts. For instance, in supplier selection problems, the alternative with the maximum C_i is selected as it best balances all criteria.[22]
In cases where two or more alternatives have identical C_i values, a tie-breaking mechanism is applied by comparing their S_i^+ values; the alternative with the smaller S_i^+ (closer to the PIS) is ranked higher. This secondary criterion resolves ambiguities without altering the core closeness-based ordering.
The ranking outcomes in TOPSIS are sensitive to the assigned criterion weights, as variations in weights can lead to shifts in the relative positions of alternatives by altering the separations S_i^+ and S_i^-. Sensitivity analysis is thus recommended to assess ranking stability under different weight scenarios, though it is not part of the standard procedure.[23]
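A compact sketch of the closeness coefficient, the descending-order ranking, and the tie-break on S_i^+ described above, using made-up separation values in which two alternatives tie on C_i:

```python
import numpy as np

# Illustrative separation measures for three alternatives
S_plus = np.array([0.250, 0.500, 0.125])
S_minus = np.array([0.500, 0.250, 0.250])

C = S_minus / (S_plus + S_minus)   # closeness coefficients in [0, 1]

# Rank by descending C; break ties by the smaller separation from the PIS
order = sorted(range(len(C)), key=lambda i: (-C[i], S_plus[i]))
ranks = {i: r + 1 for r, i in enumerate(order)}
# Alternatives 0 and 2 share C = 2/3; alternative 2 ranks higher because S_plus[2] < S_plus[0]
```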
Applications
Use in Engineering and Management
In engineering, TOPSIS is frequently applied to supplier selection processes, where multiple vendors are ranked based on conflicting criteria such as cost, quality, delivery reliability, and technical capabilities. For instance, in manufacturing supply chains, fuzzy TOPSIS extensions account for linguistic or imprecise data from decision-makers, enabling robust evaluations under uncertainty. Similarly, material selection in product design utilizes TOPSIS to balance attributes like strength, weight, durability, and expense, often integrated with normalization techniques to handle diverse units of measurement.
In management contexts, TOPSIS supports project prioritization by ranking investment options according to metrics including return on investment, risk levels, and implementation feasibility, aiding resource-constrained organizations in strategic planning. For human resource allocation, the method evaluates and assigns personnel to roles by considering skills, experience, and team fit, particularly in dynamic environments like project-based firms.
A notable early application in manufacturing involved facility location selection in the early 2000s, where TOPSIS was used under group decision-making to assess sites based on criteria such as proximity to markets, operational costs, and infrastructure availability, demonstrating its utility in optimizing industrial layouts.
The method's value in these fields lies in its ability to manage conflicting objectives, such as minimizing costs while maximizing performance, by quantifying distances to ideal and negative-ideal solutions, thus providing transparent and defensible rankings.[7]
Examples in Environmental and Healthcare Decisions
In environmental decision-making, TOPSIS has been applied to select optimal sites for waste disposal facilities, balancing multiple conflicting criteria such as pollution impact, operational costs, and accessibility. For instance, a study on municipal solid waste management in Istanbul utilized fuzzy TOPSIS to evaluate potential landfill sites, incorporating criteria like adjacent land use, climate conditions, road access (as a proxy for accessibility), and economic costs, while also assessing pollution and emission levels for disposal methods. The analysis ranked Çatalca as the most suitable site due to its favorable proximity to infrastructure and lower environmental risks, demonstrating TOPSIS's ability to integrate qualitative and quantitative data for sustainable waste management.[24]
TOPSIS also supports water resource management by ranking alternatives for dam construction or conservation strategies, particularly in addressing scarcity and climate variability during the 2010s. Research evaluating sustainable water strategies, such as rainwater harvesting, recycling, and desalination, employed TOPSIS with criteria including water efficiency, cost-effectiveness, environmental impact, social equity, and technological feasibility. In one application, rainwater harvesting emerged as the top-ranked option with a closeness coefficient of 0.640, outperforming desalination (0.578), highlighting TOPSIS's role in prioritizing low-impact interventions informed by studies on irrigation efficiency and demand reduction from the late 2010s.[25]
In healthcare, TOPSIS facilitates the evaluation and ranking of hospitals based on operational efficiency and patient satisfaction metrics. A hybrid DEA-TOPSIS model assessed performance across Chinese provinces, using inputs like the number of doctors, nurses, and beds alongside outputs such as inpatient and outpatient volumes (indicators of satisfaction and accessibility), revealing stable but regionally varied efficiency levels, with synergy from best practices improving overall rankings. This approach underscores TOPSIS's utility in quantifying trade-offs between resource utilization and service quality in public health systems.[26]
For treatment option selection under multiple conflicting outcomes, TOPSIS integrates network meta-analysis data to rank interventions by their proximity to ideal efficacy and safety profiles. In clinical decision-making, the method applies weights—derived objectively via entropy or subjectively via analytic hierarchy process—to outcomes like survival rates, side effects, and quality of life, enabling personalized rankings that balance trade-offs in scenarios such as oncology or chronic disease management. Studies confirm TOPSIS's effectiveness for compromise solutions in multi-outcome evaluations, providing transparent visualizations for clinicians.
A representative case study illustrates TOPSIS's application in ranking sustainable energy sources, such as solar and wind, for environmental policy decisions. Using fuzzy TOPSIS, experts evaluated alternatives based on criteria including environmental impact (e.g., emissions), cost, reliability, energy efficiency, and scalability, with wind ranking highest (closeness coefficient 0.72) followed by solar (0.65), due to their low emissions and high reliability in smart grid contexts. This ranking supports transitions to renewables by quantifying sustainability trade-offs without exhaustive numerical benchmarks.
To address uncertainty in environmental data, such as variable pollution measurements or imprecise cost estimates, adaptations incorporate fuzzy sets into TOPSIS, transforming crisp values into fuzzy numbers for robust rankings. In water management applications, fuzzy TOPSIS handles imprecision in criteria like environmental impact by using linguistic variables and membership functions, yielding more reliable strategy rankings (e.g., prioritizing recycling over desalination under data variability). Similarly, fuzzy extensions in waste site selection mitigate subjectivity in accessibility assessments, enhancing decision credibility in uncertain ecological contexts.[25][24]
Advantages and Limitations
Strengths of the Method
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is renowned for its simplicity, as it relies on a straightforward geometric rationale that ranks alternatives based on their relative proximity to an ideal solution and distance from a negative-ideal solution in a multi-dimensional Euclidean space. This approach is easy to understand and implement, requiring only basic matrix operations and distance calculations, which makes it accessible to decision-makers without advanced mathematical expertise. Its logical structure, rooted in the concept of compromise solutions, facilitates clear interpretation of results, enhancing its adoption in practical settings.[27]
TOPSIS demonstrates significant flexibility by accommodating both quantitative and qualitative criteria through normalization techniques that standardize diverse data types into a comparable format. This versatility allows the method to handle mixed-attribute problems effectively, and it scales well to larger datasets due to its linear computational structure.[28] As a result, TOPSIS has been widely applied across domains such as engineering, management, and environmental decision-making, where criteria vary in nature and volume.
A key strength of TOPSIS lies in its compensatory nature, which permits trade-offs among criteria, enabling a poor performance in one attribute to be offset by superior performance in others via aggregated distance measures.[20] Unlike non-compensatory methods that enforce strict thresholds, this feature promotes realistic decision-making by reflecting human judgment in balancing attributes. Furthermore, its computational efficiency, with a time complexity of O(mn) where m is the number of alternatives and n is the number of criteria, supports its use in real-time or resource-constrained applications without excessive demands on processing power.[28]
Criticisms and Constraints
One significant criticism of the TOPSIS method is its high sensitivity to the assignment of criterion weights, which are often determined subjectively by decision makers. Small variations in these weights can lead to substantial changes in the final ranking of alternatives, potentially undermining the reliability of the outcomes. For instance, empirical studies have demonstrated that altering weights by as little as ±20% to ±50% can reverse the order of alternatives in assessments such as water quality evaluation. This dependency highlights the method's vulnerability to subjective biases in weight elicitation, necessitating robust sensitivity analyses to validate results.[29]
Normalization in TOPSIS, typically performed using either vector or min-max techniques, introduces further constraints by potentially distorting the original data variances and relative importance of criteria. The vector normalization method, which divides each entry by the Euclidean norm of its column, can amplify distortions in datasets with varying scales or distributions, leading to inconsistent rankings across different problem sizes. Conversely, the min-max normalization approach, which scales values to a [0,1] range based on the maximum and minimum observations per criterion, is particularly sensitive to outliers, as extreme values disproportionately stretch the scale and skew normalized scores for all alternatives. These issues can compromise the method's ability to preserve the inherent structure of the decision matrix, prompting recommendations for alternative normalization strategies in complex scenarios.[30][31][32]
Another notable limitation is the potential for rank reversal in TOPSIS, where the relative ranking of alternatives can change unexpectedly upon the addition or removal of other alternatives, even if the new alternatives are dominated or inferior. This phenomenon arises due to alterations in the ideal and negative-ideal solutions and has been widely discussed in the literature as a drawback affecting the method's stability.[33]
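Rank reversal can be probed empirically by re-running the method with and without an extra alternative and comparing the relative order of the original ones. The following self-contained sketch uses made-up data and a compact version of the standard pipeline; it only shows how such a check can be set up and does not assert that reversal occurs for any particular data set.

```python
import numpy as np

def closeness(X, w, is_benefit):
    """Compact standard TOPSIS pipeline returning closeness coefficients."""
    R = X / np.sqrt(np.sum(X**2, axis=0))              # vector normalization
    V = R * w                                          # weighting
    pis = np.where(is_benefit, V.max(axis=0), V.min(axis=0))
    nis = np.where(is_benefit, V.min(axis=0), V.max(axis=0))
    S_plus = np.sqrt(np.sum((V - pis)**2, axis=1))
    S_minus = np.sqrt(np.sum((V - nis)**2, axis=1))
    return S_minus / (S_plus + S_minus)

X = np.array([[7.0, 9.0],
              [8.0, 5.0],
              [9.0, 2.0]])                 # made-up benefit-only data
w = np.array([0.5, 0.5])
benefit = np.array([True, True])

order_before = np.argsort(-closeness(X, w, benefit))            # original problem
X_ext = np.vstack([X, [6.0, 1.0]])                              # add a dominated alternative
order_after = np.argsort(-closeness(X_ext, w, benefit)[:3])     # same three alternatives

print(order_before, order_after)  # a change in relative order would indicate rank reversal
```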
TOPSIS operates under the assumption of independence among criteria, aggregating distances using Euclidean metrics without accounting for potential correlations, which limits its applicability in real-world decisions where criteria often interact. When criteria exhibit strong interdependencies—such as economic factors influencing environmental ones—this assumption can result in misleading separations from ideal solutions and inaccurate rankings. Research has shown that ignoring such correlations violates the method's foundational axioms, leading to outcomes that fail to reflect true trade-offs and requiring hybrid approaches for interdependent settings.[34][3]
The basic TOPSIS framework lacks inherent mechanisms for handling uncertainty or vagueness in input data, assuming all values are crisp and precise, which is rarely the case in practical multi-criteria decisions involving human judgment or incomplete information. This constraint can propagate errors in linguistic or imprecise assessments, such as qualitative ratings like "good" or "fair," resulting in overly deterministic rankings that overlook probabilistic or fuzzy elements. Extensions like fuzzy TOPSIS have been developed to address this, but the classical method's rigidity often necessitates supplementary techniques for robust uncertainty management.[35][3]
Comparisons with Other Methods
Similarities and Differences with AHP
Both the Technique for Order of Preference by Similarity to the Ideal Solution (TOPSIS) and the Analytic Hierarchy Process (AHP) are multi-criteria decision-making (MCDM) methods designed to evaluate and rank alternatives based on multiple conflicting criteria.[36] They share a compensatory nature, allowing trade-offs among criteria where strengths in one can offset weaknesses in another, and both rely on a decision matrix comprising alternatives and criteria scores to aggregate performance.[36] Additionally, AHP is frequently employed to derive criteria weights for TOPSIS, enhancing the latter's objectivity in weighting schemes.[37]
In terms of differences, TOPSIS operates on a distance-based geometric approach, ranking alternatives by their relative closeness to an ideal solution and distance from a negative-ideal solution, which makes it computationally straightforward and suitable for larger datasets without requiring extensive user input beyond initial data provision.[38] Conversely, AHP utilizes pairwise comparisons among criteria and alternatives within a hierarchical structure, employing eigenvalue methods and Saaty's scale to establish priorities, which introduces more subjectivity through expert judgments but excels in decomposing complex problems.[36] While TOPSIS demands quantitative input and handles numerous criteria efficiently, AHP accommodates qualitative assessments but is limited to about nine elements per level to maintain consistency ratios.[38]
TOPSIS is preferable when abundant numerical data is available and a simple, transparent ranking is needed, whereas AHP suits scenarios with intricate, qualitative hierarchies requiring structured expert elicitation.[36] Hybrid approaches combining AHP for weight determination with TOPSIS for final ranking are common, as they leverage AHP's robust prioritization alongside TOPSIS's geometric efficiency, as demonstrated in various applications like supplier selection and environmental assessments.[39]
Contrasts with VIKOR and PROMETHEE
TOPSIS and VIKOR are both compromise-oriented multi-criteria decision-making (MCDM) methods that rely on ideal solutions for ranking alternatives, but they differ fundamentally in their aggregation and normalization approaches. TOPSIS measures the geometric distance of alternatives to the positive ideal solution (PIS) and negative ideal solution (NIS) using Euclidean metrics exclusively, emphasizing relative closeness as the ranking criterion. In contrast, VIKOR employs the L1-metric (Manhattan distance) to compute group utility and the L∞-metric (Chebyshev distance) for maximum individual regret, generating a compromise ranking that balances the highest group utility against the lowest individual regret through a strategy coefficient.[40] This allows VIKOR to prioritize consensus solutions in scenarios with conflicting criteria, whereas TOPSIS assumes a more compensatory aggregation without explicit regret minimization.[41]
Compared to PROMETHEE, an outranking method, TOPSIS performs global aggregation across all criteria via distance measures from ideal points, assuming full comparability among alternatives. PROMETHEE, however, relies on pairwise comparisons between alternatives using preference functions (such as linear or threshold-based) tailored to each criterion, computing positive, negative, and net outranking flows to establish dominance relations. This pairwise focus enables PROMETHEE to better handle incomparability—where alternatives are neither clearly dominant nor dominated—through partial preorders in PROMETHEE I or complete rankings in PROMETHEE II, unlike TOPSIS's total ordering based on distances.[42]
All three methods serve MCDM ranking in complex problems, incorporating criteria weights and supporting both quantitative and qualitative data, but they diverge in philosophical underpinnings: TOPSIS and VIKOR are distance-based ideal-point techniques, while PROMETHEE is relational and preference-driven. PROMETHEE excels in managing incomparability and subjective preferences, offering stronger visualization via GAIA planes, whereas TOPSIS and VIKOR provide straightforward computational paths for ideal-referenced evaluations.[43] For selection guidance, TOPSIS suits problems with simple Euclidean preferences and clear ideal benchmarks, VIKOR is preferable for achieving balanced compromise solutions amid trade-offs, and PROMETHEE is ideal when pairwise outranking and handling non-comparable options are critical.[40][42]
Implementation Resources
Algorithmic Steps for Computation
The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) involves a systematic computational procedure to rank alternatives based on their proximity to an ideal solution and distance from a negative ideal solution. The algorithm assumes an initial decision matrix X = (x_{ij})_{m \times n}, where m is the number of alternatives and n is the number of criteria, along with a weight vector w = (w_j)_{1 \times n} where \sum_{j=1}^n w_j = 1, and designations of benefit or cost criteria. The core process proceeds in six key steps, ensuring normalized, weighted evaluations and Euclidean distance calculations for relative closeness.[44]
1. Construct the normalized decision matrix: Normalize the decision matrix to eliminate scale differences among criteria using the vector normalization method. For each element, compute r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^m x_{ij}^2}} for i = 1 to m and j = 1 to n, forming matrix R = (r_{ij})_{m \times n}. To handle potential division by zero (e.g., if all values in a criterion column are zero), add a small epsilon (\epsilon > 0) to the denominator or flag the criterion as invalid for further processing.[44]
2. Construct the weighted normalized decision matrix: Apply criterion weights to the normalized matrix by computing v_{ij} = w_j \cdot r_{ij} for each i and j, yielding matrix V = (v_{ij})_{m \times n}. This step emphasizes the relative importance of criteria as determined by decision-makers.[44]
3. Determine the positive ideal solution (PIS) and negative ideal solution (NIS): Identify the PIS A^* = \{v_1^*, \dots, v_n^*\} and NIS A^- = \{v_1^-, \dots, v_n^-\}. For benefit criteria, set v_j^* = \max_i v_{ij} and v_j^- = \min_i v_{ij}; for cost criteria, reverse to v_j^* = \min_i v_{ij} and v_j^- = \max_i v_{ij}. These represent the best and worst possible values across alternatives for each criterion.[44]
4. Compute the separation distances to PIS and NIS: For each alternative i, calculate the Euclidean distance to the PIS as S_i^* = \sqrt{\sum_{j=1}^n (v_{ij} - v_j^*)^2} and to the NIS as S_i^- = \sqrt{\sum_{j=1}^n (v_{ij} - v_j^-)^2}, producing vectors S^* = (S_1^*, \dots, S_m^*) and S^- = (S_1^-, \dots, S_m^-). The square root operation ensures distances are in the original scale.[44]
5. Calculate the relative closeness coefficients: For each alternative, compute the closeness coefficient C_i^* = \frac{S_i^-}{S_i^* + S_i^-} (with 0 \leq C_i^* \leq 1), where higher values indicate greater similarity to the PIS. If S_i^* + S_i^- = 0 (rare, implying an alternative is both ideal and anti-ideal), set C_i^* = 0.5 or exclude the alternative.[44]
6. Rank the alternatives: Order the alternatives in descending order of C_i^*; the alternative with the highest C_i^* is preferred. Ties can be resolved by secondary criteria or additional methods if needed.[44]
Sensitivity analysis, though not core to the computation, can be performed by recomputing rankings after varying weights or thresholds to assess robustness; this is implementation-specific.[44]
The following Python implementation, written with NumPy for the matrix operations and including guards against zero denominators in normalization, illustrates the basic procedure:
import numpy as np

def topsis(X, weights, is_benefit):
    """Return closeness coefficients and 1-based ranks for m alternatives over n criteria."""
    X = np.asarray(X, dtype=float)
    weights = np.asarray(weights, dtype=float)
    m, n = X.shape  # m alternatives, n criteria

    # Step 1: vector normalization (guard against all-zero criterion columns)
    col_norms = np.sqrt(np.sum(X**2, axis=0))
    col_norms[col_norms == 0] = 1e-10  # epsilon for zero denominators
    R = X / col_norms

    # Step 2: apply criterion weights
    V = R * weights

    # Step 3: positive and negative ideal solutions (benefit vs. cost criteria)
    pis = np.array([V[:, j].max() if is_benefit[j] else V[:, j].min() for j in range(n)])
    nis = np.array([V[:, j].min() if is_benefit[j] else V[:, j].max() for j in range(n)])

    # Step 4: Euclidean separations from the PIS and NIS
    S_star = np.sqrt(np.sum((V - pis)**2, axis=1))
    S_minus = np.sqrt(np.sum((V - nis)**2, axis=1))

    # Step 5: relative closeness (0.5 when an alternative is both ideal and anti-ideal)
    denom = S_star + S_minus
    C = np.full(m, 0.5)
    C[denom > 0] = S_minus[denom > 0] / denom[denom > 0]

    # Step 6: rank each alternative (rank 1 = highest closeness)
    order = np.argsort(-C)            # alternative indices sorted best first
    ranks = np.empty(m, dtype=int)
    ranks[order] = np.arange(1, m + 1)
    return C, ranks
This implementation relies on vectorized NumPy operations for efficiency; a version without NumPy would use explicit nested loops, for example accumulating the sum of squares over all alternatives for each criterion in the normalization step, and similarly for the distance computations.[44]
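As a brief usage sketch, the topsis function above can be applied to the car-selection matrix from the Decision Matrix Construction section, with hypothetical weights:

```python
import numpy as np

# Car-selection example: price (cost), fuel efficiency, comfort, safety (benefits)
X = np.array([[20000, 30, 8, 9],
              [25000, 25, 7, 8],
              [22000, 28, 9, 7]], dtype=float)
weights = np.array([0.4, 0.3, 0.2, 0.1])   # illustrative weights summing to 1
is_benefit = [False, True, True, True]

C, ranks = topsis(X, weights, is_benefit)
print(C)      # closeness coefficients for Car A, Car B, Car C
print(ranks)  # 1-based ranks; rank 1 marks the preferred alternative
```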
The time complexity of TOPSIS is O(mn), dominated by matrix operations like normalization (summing over m for each of n criteria), PIS/NIS determination (max/min over m for each n), and distance calculations (summing over n for each m). Space complexity is also O(mn) to store the decision, normalized, and weighted matrices. These linear complexities make TOPSIS suitable for moderate-sized problems.[45]
Several open-source libraries support the implementation of TOPSIS for multi-criteria decision making. In Python, the scikit-criteria library integrates TOPSIS within a broader suite of MCDA methods, offering extensions such as configurable distance metrics (e.g., Euclidean or Manhattan) and support for weighted objectives, making it suitable for both standard and customized applications. The pymcdm library provides a flexible Python 3 implementation of TOPSIS, emphasizing ease of use for solving various MCDM problems with options for different normalization and aggregation strategies.[46] In R, the topsis package enables evaluation of alternatives using the core TOPSIS algorithm, accepting decision matrices and weights as inputs to compute rankings based on similarity to ideal solutions.[47] Additionally, the MCDM package in R includes a dedicated TOPSIS function for straightforward computation in statistical workflows.[48]
Commercial tools extend TOPSIS to specialized environments. MATLAB features user-contributed implementations for fuzzy TOPSIS, such as the Stopsis toolbox, which handles fuzzy similarity measures in multi-criteria scenarios and is available via the MATLAB File Exchange for integration into engineering simulations.[49] For spreadsheet-based analysis, the TOPSIS Software add-in for Excel, developed by the Statistical Design Institute, allows users to build and manage decision matrices with up to 200 criteria and options, performing calculations including normalization, ideal solution determination, and ranking directly within Excel worksheets.[50]
Online resources offer accessible, no-installation options for TOPSIS. Web-based calculators like the TOPSIS tool on OnlineOutput.com permit users to define criteria and alternatives, input decision matrices, and generate comprehensive reports with rankings and sensitivity analysis.[51] Similarly, Decision Radar provides a platform for applying TOPSIS in multi-criteria analysis, focusing on geometric distance calculations to ideal and negative-ideal solutions.[52] Updated Google Colab notebooks from around 2023, such as those implementing TOPSIS with Python libraries, enable cloud-based execution and customization, often shared on repositories for educational and research purposes.[53]
Since 2020, hybrid tools incorporating TOPSIS with machine learning have gained traction, particularly for optimizing criterion weights through predictive models. In Python ecosystems, scikit-criteria can be combined with scikit-learn for such integrations, as demonstrated in frameworks like hybrid AHP-TOPSIS enhanced by machine learning algorithms for employability prediction and supplier selection.[54] These advancements allow automated weight derivation from data, improving TOPSIS applicability in dynamic decision environments.[55]