
Cyclomatic complexity

Cyclomatic complexity is a quantitative software metric developed by Thomas J. McCabe in 1976 to assess the structural complexity of a program by counting the number of linearly independent paths through its control flow graph. Represented as V(G) for a graph G, it is computed using the formula V(G) = E - N + 2P, where E is the number of edges, N is the number of nodes, and P is the number of connected components; for a typical single-component program module, this simplifies to V(G) = E - N + 2. This graph-theoretic approach models program execution as a directed graph, with nodes denoting sequential code blocks and edges indicating possible control transfers such as decisions or loops. The resulting value directly corresponds to the minimum number of test paths needed for basis path testing, enabling structured coverage of decision logic while guarding against untested paths. Values of V(G) range from 1 for simple sequential code to higher numbers reflecting increased branching; empirical studies correlate elevated complexity (typically above 10) with greater defect proneness, reduced maintainability, and heightened testing challenges. In practice, cyclomatic complexity guides quality assurance by flagging modules for refactoring when they exceed recommended thresholds, such as McCabe's limit of 10, to mitigate risks in large-scale systems. It integrates into tools for static analysis and has influenced standards such as NIST guidance on basis path testing methodologies. While effective for procedural code, adaptations extend its use to object-oriented and modern paradigms, though critics note limitations in capturing data flow or the cognitive aspects of complexity.

Fundamentals

Definition

Cyclomatic complexity is a software metric proposed by Thomas J. McCabe in 1976 to assess program complexity in structured programming. This graph-theoretic measure was introduced in McCabe's seminal paper, where it is described as a tool for managing and controlling the complexity inherent in software modules. At its core, cyclomatic complexity quantifies the number of linearly independent paths through a program's control flow graph, providing a numerical indicator of the control flow's intricacy. By focusing on decision structures such as branches and loops, it captures the essential elements that determine how many distinct execution trajectories a program can take. The primary purpose of this metric is to evaluate the testability and maintainability of code by highlighting areas with excessive decision points that could lead to errors or difficulties in modification. It also serves as a basis for determining the minimum number of test cases required to achieve adequate path coverage during testing. Historically, cyclomatic complexity emerged as a response to the limitations of earlier metrics such as lines of code, which often failed to account for the true impact of control structures on program reliability and comprehension. McCabe's approach shifted emphasis toward the logical architecture, enabling developers to identify overly complex modules before they become problematic in later maintenance phases.

Mathematical Formulation

Cyclomatic complexity, denoted as V(G), is formally defined for a control flow graph G as V(G) = E - N + 2P, where E represents the number of edges, N the number of nodes, and P the number of connected components in the graph. For a typical single-procedure program represented by a single connected control flow graph (where P = 1), this simplifies to V(G) = E - N + 2. In this context, nodes correspond to blocks of sequential code that cannot be decomposed further, while edges depict transfers of control between these blocks, such as jumps, branches, or sequential flows. Alternative formulations provide equivalent ways to compute V(G) without explicitly constructing the full graph. One such expression is V(G) = \pi + 1, where \pi is the number of predicate nodes, that is, nodes containing conditional statements such as if tests or loop conditions that introduce branching. Another equivalent form is V(G) = 1 + the number of binary decision points in the code, aligning with the count of linearly independent paths through the program. The measure originates from graph theory, specifically deriving from Euler's formula for planar graphs, which states that for a connected planar graph, N - E + F = 2, where F is the number of faces (regions bounded by edges, including the infinite outer face). Rearranging yields F = E - N + 2, and for control flow graphs embedded planarly, V(G) equals the number of faces F (including the outer face), representing the number of linearly independent paths through the graph and thus the minimum number of test paths needed for basis path coverage.
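As a worked illustration (a hypothetical module, not taken from McCabe's paper), the equivalent formulations can be checked against each other for a small connected control flow graph:

```latex
% Hypothetical single-component module: E = 9 edges, N = 7 nodes, P = 1.
V(G) = E - N + 2P = 9 - 7 + 2 \cdot 1 = 4
% Equivalently, with \pi = 3 predicate (branching) nodes:
V(G) = \pi + 1 = 3 + 1 = 4
```

The two results agree because each binary predicate node adds exactly one edge beyond the N - 1 edges of a purely sequential chain, so E = (N - 1) + \pi.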

Computation

Control Flow Graphs

Control flow graphs provide a graphical representation of a program's structure, modeling the sequence of executable statements and the decisions that alter execution paths. Introduced by Thomas McCabe as the foundational tool for his complexity metric, these graphs are directed and consist of nodes and edges that capture the logical flow without regard to data dependencies. The construction of a control flow graph begins by partitioning the program's source code into basic blocks, which are maximal sequences of consecutive statements where control enters at the beginning and exits at the end, with no internal branches or jumps. Each basic block becomes a node in the graph, and directed edges connect these nodes to indicate possible control transfers, such as sequential progression from one block to the next, conditional branches from if-else constructs, or iterative paths in loops like while or do-while statements. This process ensures the graph accurately reflects all feasible execution sequences. Key elements of the graph include the entry node, which represents the initial basic block where execution starts, and the exit node, denoting the final block leading to program termination. Decision nodes, typically those ending in branching statements like conditional tests or case switches, feature multiple outgoing edges, each corresponding to a distinct control flow outcome; for instance, the true and false branches of an if condition. Loops introduce edges that return control to earlier nodes, forming cycles in the graph. Specific rules govern graph construction for varying coding styles. For structured code, which adheres to sequential, selective, and iterative constructs without unrestricted jumps, basic blocks are straightforwardly identified at control structure boundaries, ensuring a clean, hierarchical flow. Unstructured code, such as that employing goto statements or multiple entry points, requires inserting an edge for each jump to connect non-adjacent blocks, which can complicate the graph by creating additional edges or cycles.
In all cases, compound statements like nested ifs are reduced to their constituent basic blocks before node assignment, and sequential code without branches forms single nodes with a single incoming and a single outgoing edge. McCabe emphasized that this reduction to basic blocks simplifies analysis while preserving the program's logical paths. As an illustrative example, consider a basic if-statement in pseudocode:
if (condition) {
    statement1;
} else {
    statement2;
}
statement3;
The corresponding control flow graph features:
  • Node 1 (entry): the block containing the condition check (decision node).
  • Node 2: the block with statement1, reached from Node 1 via the true branch.
  • Node 3: the block with statement2, reached from Node 1 via the false branch.
  • Node 4 (exit): the block with statement3, reached by edges from both Node 2 and Node 3.
This structure highlights the branching and merging of flows, with Node 1 having two outgoing edges and Node 4 having two incoming edges. The cyclomatic complexity is derived directly from such graphs to quantify independent paths.
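The graph above can be modeled directly as sets of nodes and edges and its complexity computed from the formula. The following is a minimal sketch (not from McCabe's paper; the node numbering follows the list above):

```python
# The if/else example as an explicit node/edge model.
nodes = {1, 2, 3, 4}          # 1: condition, 2: statement1, 3: statement2, 4: statement3
edges = {(1, 2),              # true branch
         (1, 3),              # false branch
         (2, 4),              # merge after statement1
         (3, 4)}              # merge after statement2

def cyclomatic_complexity(nodes, edges, components=1):
    """V(G) = E - N + 2P for a control flow graph."""
    return len(edges) - len(nodes) + 2 * components

print(cyclomatic_complexity(nodes, edges))  # a single if/else yields V(G) = 2
```

The result, 2, matches the intuition that one binary decision creates exactly two independent paths.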

Calculation Procedures

To compute cyclomatic complexity, the process begins with constructing a control flow graph (CFG) from the source code, where nodes represent basic blocks of sequential statements and edges represent possible control transfers between them. Once the CFG is built, identify the total number of nodes N (including entry and exit points) and the total number of edges E (including those from decision points). The complexity V(G) is then calculated using the formula V(G) = E - N + 2 for a connected graph with a single entry and exit point. For programs consisting of multiple modules or functions, cyclomatic complexity is typically computed separately for each module using the above formula, as each represents an independent control flow graph. The overall program complexity can then be obtained by summing the individual V(G) values across all modules, providing a measure of total path independence. Static analysis tools automate this computation by parsing code to generate CFGs internally and applying the formula without manual intervention. For instance, PMD, an open-source code analyzer, includes rules that evaluate cyclomatic complexity per method and report violations against configurable thresholds during builds or continuous integration. Similarly, SonarQube computes complexity at the function level and aggregates it for broader codebases, integrating with CI/CD pipelines to flag high-complexity areas in real-time scans. Consider the following Java code snippet as an illustrative example:
public int countEvens(int limit) {
    int count = 0;
    for (int i = 0; i < limit; i++) {
        if (i % 2 == 0) {
            count++;
        }
    }
    return count;
}
The corresponding CFG has 6 nodes and 7 edges:
  • Node 1: Initialization (count = 0; i = 0)
  • Node 2: Loop condition (i < limit)
  • Node 3: Inner if condition (i % 2 == 0)
  • Node 4: count++ (true branch)
  • Node 5: i++ (after true or false branch)
  • Node 6: return count (exit)
Edges: 1 → 2, 2 → 6 (false), 2 → 3 (true), 3 → 5 (false), 3 → 4 (true), 4 → 5, 5 → 2 (loop back). Applying the formula yields V(G) = 7 - 6 + 2 = 3, indicating three linearly independent paths due to the loop decision and the conditional branch.
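The same result can be cross-checked two ways: from E - N + 2, and from the predicate-counting form V(G) = π + 1, where predicate nodes are those with more than one outgoing edge. A short sketch (edge list transcribed from the CFG described above):

```python
from collections import Counter

# Edges of the countEvens CFG, including the loop-back edge 5 -> 2.
edges = [(1, 2), (2, 6), (2, 3), (3, 5), (3, 4), (4, 5), (5, 2)]
nodes = {n for e in edges for n in e}

v_formula = len(edges) - len(nodes) + 2          # V(G) = E - N + 2

# Predicate nodes have out-degree > 1: here, the loop test (2) and the if (3).
out_degree = Counter(src for src, _ in edges)
predicates = sum(1 for d in out_degree.values() if d > 1)
v_predicates = predicates + 1                    # V(G) = pi + 1

print(v_formula, v_predicates)  # both give 3
```

Both computations agree with the value derived in the text, confirming the equivalence of the two formulations on this example.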

Interpretation

Value Meanings

Cyclomatic complexity values provide insight into the structural simplicity or intricacy of a program, directly reflecting the number of linearly independent paths through its control flow. A value of V(G) = 1 denotes a straightforward, sequential execution without any branching or decision points, characteristic of the simplest possible program structure. Values ranging from 2 to 10 indicate moderate complexity, where a limited set of decision elements introduces manageable branching, facilitating comprehension and testing without overwhelming structural demands. In contrast, values exceeding 10 signal elevated complexity, often associated with intricate control flows that challenge long-term maintainability and heighten the potential for structural vulnerabilities. Elevated cyclomatic complexity values arise from a proliferation of execution paths, which impose greater cognitive demands on developers during comprehension, modification, and debugging activities. This multiplicity of paths can increase error proneness, as each independent route represents an opportunity for inconsistencies or oversights to manifest. Several programming constructs contribute to higher cyclomatic complexity scores, including deeply nested decision statements such as if-else chains, compound conditions combining multiple logical operators (e.g., && or ||), and exception handling blocks that introduce additional control branches. These elements each increment the decision count, thereby expanding the overall path diversity. Empirical analyses underscore the practical implications of these values for software reliability, revealing that modules with V(G) < 10 tend to demonstrate superior stability. In one study of production code, modules below this threshold averaged 4.6 errors per 100 source statements, while those at or above 10 averaged 21.2 errors per 100 source statements, highlighting a marked increase in defect density with rising complexity.

Threshold Guidelines

Thomas J. McCabe originally recommended that the cyclomatic complexity V(G) of a software module should not exceed 10, as higher values correlate with increased error rates and reduced maintainability; modules surpassing this threshold should be refactored to improve testability and reliability. Some standards adopt higher limits for specific contexts. For example, NASA Procedural Requirements (NPR) 7150.2D (effective March 8, 2022) requires that safety-critical software components have a cyclomatic complexity value of 15 or lower, with any exceedances reviewed and waived with rationale to ensure testability and safety. This threshold, while higher than McCabe's general recommendation, is accompanied by rigorous testing requirements such as 100% Modified Condition/Decision Coverage (MC/DC). When V(G) exceeds recommended thresholds, refactoring strategies focus on decomposing complex functions into smaller, independent units to distribute decision points and lower overall complexity. For instance, extracting conditional logic into helper functions reduces nesting and independent paths, enhancing modularity without altering program behavior. To enforce these guidelines, cyclomatic complexity monitoring is integrated into code reviews, where reviewers flag high-V(G) modules for refactoring, and into CI/CD pipelines via static analysis tools that automatically compute and report metrics during builds. This continuous integration approach prevents complexity creep and aligns development with established limits across project lifecycles.
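A threshold gate of this kind reduces to a simple comparison once per-function V(G) values are available. The sketch below uses hypothetical module names and complexity values (in a real pipeline these would come from a tool such as PMD or SonarQube):

```python
THRESHOLD = 10  # McCabe's recommended limit

# Hypothetical measurements for illustration only.
measured = {
    "parse_header": 4,
    "dispatch_request": 12,   # exceeds the limit: flag for refactoring
    "render_page": 7,
    "legacy_import": 23,      # exceeds the limit: flag for refactoring
}

def flag_violations(complexities, limit=THRESHOLD):
    """Return module names whose cyclomatic complexity exceeds the limit."""
    return sorted(name for name, v in complexities.items() if v > limit)

print(flag_violations(measured))  # ['dispatch_request', 'legacy_import']
```

A CI job would typically fail the build (or require a documented waiver, as in the NASA process above) whenever this list is non-empty.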

Applications

Design and Development

Cyclomatic complexity serves as a foundational metric in software design, directing developers toward modular and structured programming practices that minimize the value of V(G). Introduced by Thomas J. McCabe, this measure quantifies the linear independence of paths in a program's control flow graph, prompting the breakdown of intricate logic into discrete, low-complexity modules to prevent excessive nesting and branching. Such design strategies align with principles of modular design, fostering code that is inherently more comprehensible and adaptable during the creation phase. In long-term software projects, maintaining low cyclomatic complexity yields tangible benefits for upkeep, as evidenced by empirical studies linking reduced V(G) to streamlined updates and diminished bug introduction risks. Research on maintenance productivity reveals that higher complexity densities inversely correlate with efficiency, with teams expending less effort on modifications in simpler modules compared to their more convoluted counterparts. This correlation underscores the metric's value in sustaining project viability over extended lifecycles. Within development workflows, cyclomatic complexity integrates as a proactive indicator in design reviews, enabling early detection and mitigation of potential complexity spikes. Practitioners employ it to enforce guidelines, such as capping module V(G) at 10, ensuring collaborative scrutiny aligns code evolution with maintainability goals from inception. For example, in guided review processes, complexity thresholds flag revisions needed to preserve structural simplicity.

Testing Strategies

Cyclomatic complexity serves as the basis for estimating the minimum number of test cases required to achieve full path coverage in a program's control flow graph, where the value V(G) directly equals the number of linearly independent paths that must be tested. This metric, introduced by McCabe, ensures that testing efforts target the program's logical structure to verify all decision outcomes without redundancy. A primary strategy leveraging cyclomatic complexity is basis path testing, which involves identifying and executing a set of V(G) independent paths that span the program's flow graph. These paths are selected such that any other path can be derived from their linear combinations, providing efficient yet comprehensive coverage of control structures like branches and loops. This method aligns closely with white-box testing paradigms, emphasizing the exercise of code internals to confirm logical correctness. In practice, higher V(G) values signal increased testing effort, as each additional independent path necessitates a distinct test case to cover potential execution scenarios. For instance, modules with V(G) exceeding 10 may require significantly more resources for path exploration compared to those with V(G) under 5, guiding testers in prioritizing complex components during verification planning. Tools such as McCabe IQ automate V(G) computation and highlight high-complexity areas to inform effort allocation. Automated tools further aid in generating test paths by analyzing the control flow graph and suggesting basis paths for test case design. Examples include PMD, which integrates cyclomatic complexity checks into build processes to flag and mitigate overly complex code before testing, and Visual Studio's code metrics features, which report V(G) to support path-based test generation. These tools streamline the process of deriving independent paths, reducing manual effort in creating executable test suites.
To illustrate, consider a control flow graph for a simple decision procedure with V(G) = 4, featuring entry node A; decision nodes B, C, and E; node D on B's false branch; node F on E's true branch; node G, reached from D, directly from C, or from E's false branch; and exit node H. The four basis paths are:
  • Path 1: A → B (true) → C → G → H
  • Path 2: A → B (false) → D → G → H
  • Path 3: A → B (true) → C → E (true) → F → H
  • Path 4: A → B (true) → C → E (false) → G → H
Each path requires a dedicated test case with inputs that force the specified decisions, ensuring all edges and nodes are covered when combined. This example demonstrates how V(G) guides the selection of minimal yet sufficient tests for structural validation.
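The coverage claim can be verified mechanically. The sketch below encodes the example graph as an edge set (transcribed from the paths listed above) and checks that the four basis paths together traverse every edge, and that their count matches V(G) = E - N + 2:

```python
# Edge set of the example graph (10 edges over 8 nodes).
edges = {("A", "B"), ("B", "C"), ("B", "D"), ("C", "E"), ("C", "G"),
         ("D", "G"), ("E", "F"), ("E", "G"), ("F", "H"), ("G", "H")}
nodes = {n for e in edges for n in e}

paths = [
    ["A", "B", "C", "G", "H"],        # Path 1
    ["A", "B", "D", "G", "H"],        # Path 2
    ["A", "B", "C", "E", "F", "H"],   # Path 3
    ["A", "B", "C", "E", "G", "H"],   # Path 4
]

# Every consecutive pair in a path is one traversed edge.
covered = {(p[i], p[i + 1]) for p in paths for i in range(len(p) - 1)}

v = len(edges) - len(nodes) + 2
print(v, len(paths), covered == edges)  # 4 4 True: 4 basis paths cover every edge
```

This mirrors basis path testing in practice: one concrete test input is then chosen per path to force the corresponding decision outcomes.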

Defect Analysis

Early empirical studies in the late 1970s and 1980s provided initial evidence linking cyclomatic complexity to software defects. Thomas McCabe's seminal 1976 paper introduced the metric as a predictor of program reliability, noting that higher complexity increases the likelihood of errors due to more independent paths requiring verification. Subsequent research, such as Basili and Perricone's 1984 analysis of a NASA software system, found that modules with elevated cyclomatic complexity exhibited higher error densities, with complexity serving as a moderate indicator of fault proneness. These studies often observed that modules exceeding a cyclomatic complexity threshold of V(G) > 10 were associated with significantly more defects. Modern validations have extended these findings to open-source repositories, confirming the metric's relevance in diverse contexts. A 2023 study of open-source codebases analyzed complexity metrics and found correlations between cyclomatic complexity and defects. Cyclomatic complexity is frequently integrated with other metrics, such as Halstead's volume or lines of code, to build more robust defect prediction models. Systematic reviews indicate that combining these measures can improve fault-proneness classification. For instance, hybrid approaches incorporating cyclomatic complexity with size-based metrics have shown stronger predictive power in identifying risky modules. High cyclomatic complexity acts as a key risk indicator in assessing fault proneness, highlighting fault-prone modules that demand additional scrutiny during maintenance. The literature consistently positions it as an indicator of potential defect hotspots, where complex control flows amplify error introduction and propagation risks. Recent findings as of 2025 affirm a moderate correlation between cyclomatic complexity and defects, but emphasize that it reflects association rather than direct causation.
These reviews underscore the metric's utility in risk assessment while noting that contextual factors such as programming language and team practices influence outcomes.

Theoretical Foundations

Cyclomatic complexity draws directly from the concept of the cyclomatic number in graph theory, which quantifies the cyclic structure of a graph as the size of a minimum feedback edge set, a minimal collection of edges whose removal renders the graph acyclic. This measure captures the fundamental extent of cyclicity, as such a set intersects every cycle in the graph, ensuring no loops remain after its removal. The cyclomatic number serves as a key indicator of a graph's structure beyond mere acyclicity, specifically gauging the number of independent cycles that distinguish it from a tree. It exhibits invariance under graph isomorphisms, preserving its value when the graph undergoes relabeling of vertices or edges without altering adjacency. Additionally, the measure remains stable under certain topological transformations, such as edge subdivisions, which do not introduce new cycles but refine existing paths. A foundational theorem linking the cyclomatic number to spanning trees states that, for a connected graph, this number equals the total number of edges minus the number of edges in any spanning tree of the graph. Since a spanning tree connects all vertices with exactly one fewer edge than the number of vertices and contains no cycles, the difference highlights the "extra" edges responsible for cyclicity. This relation underscores how the cyclomatic number extends the tree-like simplicity of acyclic graphs, providing a precise count of cyclic dependencies. In the context of software, Thomas McCabe adapted the cyclomatic number to control flow graphs, modeling program execution as directed graphs where nodes represent code blocks and edges denote control transfers. Here, the cyclomatic complexity V(G) denotes the number of linearly independent cycles within this directed structure, reflecting the graph's inherent path diversity and the minimum number of paths required for comprehensive testing. This adaptation preserves the graph-theoretic essence while accounting for directional flow, treating the graph as strongly connected (by adding an edge from exit back to entry) where necessary to apply the core principles uniformly.
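The spanning-tree theorem above suggests a direct computation: build a spanning tree edge by edge and count the edges left over, each of which closes a cycle. A minimal sketch using union-find (assuming a connected undirected graph with vertices numbered 0..n-1):

```python
def circuit_rank(n_vertices, edges):
    """Cyclomatic number of a connected undirected graph: E - (spanning tree edges)."""
    parent = list(range(n_vertices))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    extra = 0
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            extra += 1          # edge closes a cycle: not a spanning tree edge
        else:
            parent[ru] = rv     # edge joins two components: spanning tree edge
    return extra

# A square with one diagonal: 4 vertices, 5 edges, so rank = 5 - 4 + 1 = 2.
square = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(circuit_rank(4, square))  # 2
```

The count of leftover edges equals E - (N - 1) = E - N + 1 for a connected graph, which is exactly the number of independent cycles described above.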

Topological Interpretations

In algebraic topology, the cyclomatic complexity V(G) of a control flow graph G is identified with the first Betti number \beta_1(G), defined as the rank of the first homology group H_1(G; \mathbb{Z}) when the graph is regarded as a 1-dimensional CW-complex or simplicial complex. This equivalence arises because both quantify the number of linearly independent 1-cycles in the graph, providing a measure of its fundamental cyclic structure in the context of singular or simplicial homology. Seminal work in graph homology formalizes this link, showing that V(G) computes the topological complexity of the graph's 1-skeleton. As a topological invariant, \beta_1(G) measures the presence of "holes" or non-contractible loops in the graph's geometric realization, corresponding to the number of cycles that cannot be contracted to a point without altering connectivity. For instance, in a connected graph, this equals the number of edges minus the number of vertices plus one, reflecting the minimal generators of the cycle space; for the control flow graph augmented with an edge from exit back to entry, this yields E - N + 2, matching V(G). This interpretation extends beyond planar embeddings to abstract graphs, where \beta_1(G) remains unchanged under homeomorphisms, emphasizing its role in distinguishing topologically distinct cycle configurations. From an algebraic perspective, V(G) represents the dimension of the cycle space within the graph's chain complex, specifically the kernel of the boundary map from 1-chains to 0-chains over the integers or a field. This dimension captures the nullity of the graph's incidence matrix, aligning with matroid-theoretic views of circuit rank. These topological concepts find broader implications in network analysis and electrical theory, where the cyclomatic number, studied as early as Kirchhoff's work on circuits, determines the number of independent loops in loop-based equations governed by Kirchhoff's laws. In electrical circuit graphs, \beta_1(G) specifies the number of independent voltage or current constraints, facilitating the solution of linear systems for circuit behavior.

Limitations

Identified Shortcomings

Cyclomatic complexity, by design, measures only the control flow within a program's structure, such as the number of decision points and loops, while entirely overlooking the intricacies of data handling and algorithmic operations. This limitation means that programs with identical control flows but vastly different data dependencies, such as simple assignments versus intricate data structure manipulations or computationally intensive algorithms, receive the same complexity score. As a result, the metric fails to capture essential aspects of software that arise from data flow interactions, leading to an incomplete assessment of overall program difficulty. The metric exhibits significant sensitivity to superficial changes in coding style and refactoring practices, which can alter the V(G) value without modifying the program's logical behavior or risk profile. For example, extracting conditional statements into separate functions reduces the cyclomatic complexity of the parent module, even though the total number of decision paths remains unchanged across the system. This arbitrariness, rooted in the metric's reliance on graph-based representations of control structure, undermines its consistency and makes it unreliable for comparative analysis across different implementations of equivalent logic. In larger-scale software, particularly object-oriented and concurrent systems, cyclomatic complexity proves inadequate due to its inability to address features such as inheritance, polymorphism, and thread synchronization. Traditional V(G) calculations, derived from procedural control flow graphs, do not incorporate the additional complexity introduced by class hierarchies or distributed execution paths in multi-threaded environments. Consequently, the metric underestimates risks in modern paradigms where control logic is decentralized across objects or processes.
Empirical evaluations reveal weak correlations between cyclomatic complexity scores and actual software quality outcomes in extensive systems, highlighting the metric's overreliance on enumerating potential execution paths rather than their practical likelihood or impact. Studies on large codebases have found only marginal associations with defect density, as high V(G) values often do not align with observed fault patterns influenced by usage frequency and environmental factors. This disconnect suggests that while the metric identifies structural branching, it neglects real-world dynamics, limiting its predictive power for maintenance and reliability in complex projects.

Alternative Metrics

Halstead metrics, introduced by Maurice H. Halstead in his 1977 book Elements of Software Science, provide a set of measures derived from the lexical analysis of source code, focusing on operators and operands rather than control flow. These include program length N, which is the total number of operators and operands; vocabulary n, the count of unique operators and operands; volume V = N \log_2 n, estimating the size in bits needed to represent the program; and difficulty D = \frac{n_1}{2} \times \frac{N_2}{n_2}, where n_1 and n_2 are the numbers of unique operators and operands, respectively, and N_2 is the total number of operands, reflecting the effort required to understand and write the code. Halstead's approach treats software as a language, aiming to quantify overall implementation complexity beyond structural paths. Cognitive complexity, developed by SonarSource and first detailed in a 2017 whitepaper, addresses limitations in traditional metrics by measuring the mental effort required to comprehend code, emphasizing nested structures and breaks in linear flow over mere decision counts. Unlike cyclomatic complexity, which increments uniformly for each branching statement regardless of nesting, cognitive complexity adds points that grow with nesting level (e.g., +1 for a top-level if, +2 for a nested one) and penalizes breaks in linear flow such as gotos or labeled breaks. This metric starts at 0 for a straight-line method and accumulates based on how much nested context a developer must track mentally, making it particularly useful for assessing readability in object-oriented languages. Essential complexity, an extension of cyclomatic complexity proposed by Thomas J. McCabe, quantifies the degree of unstructured programming by computing the cyclomatic number on a reduced control flow graph in which structured constructs (such as if-else, while loops, and sequences) are collapsed into single nodes.
In this metric, denoted ev(G), only irreducible elements such as multiple exits or goto-like jumps contribute to the count, with a value of 1 indicating fully structured code and higher values signaling refactoring needs. It builds directly on McCabe's original graph-theoretic framework but isolates inherent design flaws. Comparisons among these metrics highlight their complementary roles: cyclomatic complexity excels at identifying control flow risks and test path coverage, while Halstead metrics better capture lexical and algorithmic intricacy, such as in data-intensive modules where operator diversity dominates. Cognitive complexity is preferred for readability assessments in nested or sequential code, showing stronger correlations with developer comprehension time than cyclomatic measures in empirical studies. Essential complexity supplements cyclomatic complexity by focusing on structuredness, with its use recommended when evaluating legacy code for restructuring, whereas Halstead's volume and difficulty suit broader quality predictions across program sizes. Overall, combining them, e.g., cyclomatic for testing and cognitive for reviews, provides a more holistic view than any single metric.
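The Halstead formulas above reduce to a few lines of arithmetic once the operator and operand counts are known. A minimal sketch (the counts here are hypothetical, standing in for a lexical scan of real source code):

```python
import math

def halstead(n1, n2, N1, N2):
    """n1/n2: unique operators/operands; N1/N2: total operators/operands."""
    length = N1 + N2                          # program length N
    vocabulary = n1 + n2                      # vocabulary n
    volume = length * math.log2(vocabulary)   # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)         # D = (n1/2) * (N2/n2)
    return length, vocabulary, volume, difficulty

N, n, V, D = halstead(n1=4, n2=3, N1=7, N2=5)
print(N, n, round(V, 2), round(D, 2))  # 12 7 33.69 3.33
```

Note that these values depend only on token counts, so two functions with identical control flow graphs (and hence equal V(G)) can receive very different Halstead volumes, which is exactly the complementarity described above.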

References

  1. [1]
    A Complexity Measure | IEEE Journals & Magazine
    This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program complexity. The paper first explains.
  2. [2]
    Software Design Complexity
    To calculate Cyclomatic complexity of a program module, we use the formula - V(G) = e – n + 2 Where e is total number of edges n is total number of nodes. The ...Missing: definition | Show results with:definition
  3. [3]
    McCabe IQ Glossary of Terms
    A high cyclomatic complexity indicates that the code may be of low quality and difficult to test and maintain. In addition, empirical studies have established a ...
  4. [4]
    [PDF] a testing methodology using the cyclomatic complexity metric
    The idea is to start with a baseline path, thenvary exactly one decision outcome to generate each successive path until all decision outcomes have been ...
  5. [5]
    A Complexity Measure
    Insufficient relevant content. The provided URL (https://ieeexplore.ieee.org/document/1702388) points to a page requiring access, and no full text or abstract is available without login. Key points on McCabe's paper regarding cyclomatic complexity in program design and modularization cannot be extracted.
  6. [6]
    Software Metrics Glossary - McCabe IQ
    Cyclomatic Complexity (v(G)) is a measure of the complexity of a module's decision structure. It is the number of linearly independent paths and therefore, the ...
  7. [7]
    [PDF] A Testing Methodology Using the Cyclomatic Complexity Metric
    Cyclomatic complexity is defined for each module to be e - n + 2, where e and n are the num- ber of edges and nodes in the control flow graph, respectively.
  8. [8]
    [PDF] II. A COMPLEXITY MEASURE In this sl~ction a mathematical ...
    Abstract- This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program com- plexity .
  9. [9]
    [PDF] A Testing Methodology Using the Cyclomatic Complexity Metric
    The purpose of this document is to describe the structured testing methodology for software testing, also known as basis path testing. Based on the cyclomatic ...
  10. [10]
    A Critique of Cyclomatic Complexity as a Software Metric
    Aug 7, 2025 · McCabe's cyclomatic complexity metric is widely cited as a useful predictor of various software attributes such as reliability and development effort.
  11. [11]
  12. [12]
    [PDF] Cyclomatic Complexity and Basis Path Testing Study
    Nov 16, 2020 · A function that runs start to finish with no options has a cyclomatic complexity of one, regardless of the number of lines. Figure 6-1. ...Missing: formula | Show results with:formula
  13. [13]
  14. [14]
    cyclomatic complexity: developer's guide - Sonar
    The formula to calculate cyclomatic complexity is relatively straightforward. Start by observing the program's control-flow graph—a graphical representation of ...Missing: Euler | Show results with:Euler<|control11|><|separator|>
  15. [15]
    What is Code Quality? - Amazon AWS
    Use tools that examine test-to-code coverage; Implement cyclomatic complexity tools like Halstead complexity measures to assess code complexity. Developers ...
  16. [16]
    The utility of complexity metrics during code reviews for CSE ...
    Nov 1, 2024 · We have developed a technique to guide the code review process that considers cyclomatic complexity levels and changes during code reviews.
  17. [17]
    Refactoring code to increase readability and maintainability: a case ...
    Oct 1, 2014 · We describe a case study in refactoring: improving ... reduction in average cyclomatic complexity, and 10.4% reduction in average class coupling.
  18. [18]
    Cyclomatic Complexity Definition, Calculation & Examples - Jellyfish
    May 27, 2025 · First developed by Thomas J. McCabe Sr. back in 1976, cyclomatic complexity is based on graph theory and the evaluation of control flow graphs.
  19. [19]
    Code metrics - Cyclomatic complexity - Visual Studio (Windows)
    Dec 10, 2024 · You can use cyclomatic complexity to get a sense of how hard any given code might be to test, maintain, or troubleshoot as well as an indication ...
  20. [20]
    [PDF] Software Errors and Complexity: An Empirical Investigation
    3.7 Module Complexity. Cyclomatic complexity [8] (number of decisions + 1) was correlated with module size. This was done in order to determine whether or ...
  21. [21]
    [PDF] An Empirical Investigation of Correlation between Code Complexity ...
    Abstract: There have been many studies conducted on predicting bugs. These studies show that code complexity, such as cyclomatic complexity, correlates with ...
  22. [22]
    The Relationship between Code Complexity and Software Quality
    May 14, 2023 · Software quality is determined using correlation data analysis, while code complexity metrics are examined using Python packages such as Radon.
  23. [23]
    [PDF] Software fault prediction metrics: A systematic literature review
    Feb 21, 2013 · McCabe's cyclomatic complexity was a good predictor of software fault proneness in [127,156,112,111,121,92,75], but not in [64,87,141,148].
  24. [24]
    Cyclomatic Complexity and Lines of Code: Empirical Evidence of a ...
    This research presents evidence that LOC and CC have a stable, practically perfect linear relationship that holds across programmers, languages, and code paradigms.
  25. [25]
    An empirical study on bug severity estimation using source code ...
    In this paper, we provide a quantitative and qualitative study on two popular datasets (Defects4J and Bugs.jar), using 10 common source code metrics, and two ...
  26. [26]
    Software defect prediction using hybrid techniques: a systematic ...
    Jan 17, 2023 · In this systematic review, we have investigated 72 papers published from January 2000 to December 2021 that ascertain the use of hybrid techniques and their ...
  27. [27]
  28. [28]
    [PDF] Approximation Algorithms
    ... cyclomatic number of G, denoted cyc(G), is the dimension of this space ... feedback edge set) Given a connected, undirected graph G = (V,E) with an ...
  29. [29]
    [PDF] On Structural Parameterizations of Hitting Set: Hitting Paths in ...
    One way to measure how close a graph is to a tree is to consider its cyclomatic number. This is the size of a minimum feedback edge set of the graph, i.e., of a ...
  30. [30]
    Betti Number -- from Wolfram MathWorld
    The first Betti number of a graph is commonly known as its circuit rank (or cyclomatic number). The following table gives the Betti number of some common ...
  31. [31]
    [PDF] Generalizing cyclomatic complexity via path homology - arXiv
    Mar 2, 2020 · Path homology offers several improvements on cyclomatic complexity. With case statements, the Betti numbers can take on arbitrary values for ...
  32. [32]
    [PDF] Graph homology and cohomology - Alistair Savage
    The first Betti number is also called the cyclomatic number. Definition 6.7 (Euler characteristic). The Euler characteristic of a graph Γ is the alternating ...
  33. [33]
    [PDF] Cycle Bases in Graphs Characterization, Algorithms, Complexity ...
    Aug 25, 2009 · The length of a path is the number of its edges. An undirected graph is connected if there exists a path from any vertex to every other vertex.
  34. [34]
    Circuit rank - EPFL Graph Search
    The circuit rank, cyclomatic number, cycle rank, or nullity of an undirected graph is the minimum number of edges that must be removed from the graph to break ...
  35. [35]
    [PDF] Software Measurement and Complexity
    McCabe cyclomatic complexity. • maximum of all functions. • average over ... Header files showed poor correlation between cyclomatic complexity and the rest of ...
  36. [36]
    On the accuracy of code complexity metrics: A neuroscience-based ...
    Feb 7, 2023 · This article investigates the problem of measuring code complexity and discusses the results of a controlled experiment to compare different views and methods.
  37. [37]
    A critique of cyclomatic complexity as a software metric
    McCabe's cyclomatic complexity metric is widely cited as a useful predictor of various software attributes such as reliability and development effort.
  38. [38]
    A critique of three metrics - ScienceDirect.com
    This article examines the metrics of the software science model, cyclomatic complexity, and an information flow metric of Henry and Kafura.
  39. [39]
    Efficient use of code coverage in large-scale software development
    We also found that cyclomatic complexity and lines of code have a very strong correlation. Figure 5 shows almost identical results to the strong correlation.
  40. [40]
    The Correlation among Software Complexity Metrics with Case Study
    Aug 8, 2025 · ... correlation between cyclomatic complexity and Halstead volume with the number of errors is very weak according to this dataset, because ...
  41. [41]
    [PDF] 'Software Science' revisited: rationalizing Halstead's system using ...
    May 8, 2018 · The set of software metrics introduced by Maurice H. Halstead in the 1970s has seen much scrutiny and not infrequent criticism. This article ...
  42. [42]
    [PDF] {Cognitive Complexity} a new way of measuring understandability
    Feb 6, 2017 · Cyclomatic Complexity was initially formulated as a measurement of the “testability and maintainability” of the control flow of a module.
  43. [43]
    Cognitive Complexity, Because Testability != Understandability - Sonar
    Dec 7, 2016 · Thomas J. McCabe introduced Cyclomatic Complexity in 1976 as a way to guide programmers in writing methods that "are both testable and ...
  44. [44]
    [PDF] An Empirical Validation of Cognitive Complexity as a Measure of ...
    Jul 24, 2020 · Unfortunately, a lack of empirical evaluation seems to be the reality for most software metrics employed in today's static analysis tools.
  45. [45]
  46. [46]
    An empirical evaluation of the “Cognitive Complexity” measure as a ...
    The goals of this paper are to assess whether (1) “Cognitive Complexity” is better correlated with code understandability than traditional measures, and (2) the ...
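The formula e - n + 2 quoted in sources [7] and [14] above can be sketched in a few lines of Python. This is a minimal illustration of V(G) = E - N + 2P for a hand-built control-flow graph; the function name and graph representation are illustrative, not taken from any cited tool:

```python
# Sketch of the cyclomatic complexity formula V(G) = E - N + 2P,
# where E = number of edges, N = number of nodes, P = connected components.
# Graph representation here is hypothetical: a node list plus an edge list.

def cyclomatic_complexity(edges, nodes, components=1):
    """Return V(G) = E - N + 2P for a control-flow graph."""
    return len(edges) - len(nodes) + 2 * components

# Example: a single if/else — nodes are entry, decision, then, else, exit.
nodes = ["entry", "decision", "then", "else", "exit"]
edges = [("entry", "decision"), ("decision", "then"), ("decision", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2
```

The result of 2 matches the intuition in source [12]: straight-line code scores 1, and each binary decision adds one more independent path.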