
Lehman's laws of software evolution

Lehman's laws of software evolution are a set of eight principles developed by Meir M. Lehman to describe how software systems, especially those embedded in real-world environments (termed E-type systems), inevitably change, grow, and degrade over their lifetimes unless actively maintained. These laws, first introduced in 1974 and expanded through subsequent research, with the full set of eight laws formulated by 1996, emphasize that software is not a static artifact but a living entity subject to ongoing adaptation to meet evolving user needs, environmental demands, and operational requirements. Grounded in empirical observations from large-scale systems like OS/360 at IBM, the laws highlight key challenges in software engineering, such as increasing complexity and the need for continuous evolution to preserve utility and quality.

The laws emerged from Lehman's pioneering studies in the late 1960s and 1970s, initially comprising three principles based on empirical analysis of software metrics like size, structure, and change rates, and were later refined and extended through decades of research into software evolution processes. Lehman classified programs into three categories—S-type (formal, specification-derived systems with no direct coupling to the external environment), P-type (purposeful but non-embedded systems), and E-type (those interacting with complex, changing real-world domains)—with the laws primarily applying to E-type software, which constitutes most practical systems. This classification underscores the feedback-driven nature of software evolution, where changes propagate through multi-level loops involving developers, users, and the environment.

The eight laws describe patterns such as continuing change, increasing complexity, self-regulation, conservation of organizational stability and familiarity, continuing growth, declining quality, and the feedback-system nature of evolution. These principles have profoundly influenced software engineering practices, informing strategies for maintenance, refactoring, and process improvement to counteract decay in long-lived systems. Lehman's work, spanning over 30 years, established software evolution as a foundational research area, demonstrating through empirical study that unmanaged software tends toward degradation, while proactive interventions can sustain its utility.

Background

Meir M. Lehman and His Research

Meir M. Lehman was born on January 24, 1925, in Germany, and moved to England as a child. He pursued studies in mathematics at Imperial College London, completing a degree in 1953 and earning his PhD in 1957, with research focused on early computer systems. Lehman's early career began in electronics and computing hardware. After initial roles at Murphy Radio from 1941 to 1949, where he gained practical experience in service and testing, he joined Ferranti's London laboratory in 1955. There, he contributed to a feasibility study on the Mercury computer for missile control systems. From 1957 to 1964, he worked in the Scientific Department of the Israel Ministry of Defence, advancing his expertise in computational systems.

In 1964, Lehman moved to IBM's research laboratory in Yorktown Heights, New York, USA, marking the start of his deeper involvement in software-related research. During his time at IBM from 1964 to 1972, he shifted focus to software engineering in the late 1960s and early 1970s, leading an empirical study on the evolution of the OS/360 operating system. This project involved analyzing metrics from multiple releases to identify patterns in system growth and maintenance. In 1972, he joined Imperial College London as a professor in the Department of Computing, where he served as head of the department from 1979 to 1984 and continued his research until his retirement from Imperial College in 2002. He then moved to the School of Computing Science at Middlesex University. Lehman died on 29 December 2010 in Jerusalem, Israel.

Lehman's seminal contributions emerged from this empirical foundation. In his 1974 inaugural lecture at Imperial College, titled "Programs, Cities, Students—Limits to Growth?", he introduced initial concepts on software evolution derived from the OS/360 study. This was followed by his 1980 paper, "Programs, Life Cycles, and Laws of Software Evolution," which formalized the first five laws based on quantitative data from industrial systems like OS/360. His approach emphasized collecting and interpreting metrics—such as module counts and change rates—from real-world software releases to uncover generalizable patterns, influencing the fields of software measurement and empirical software engineering.

Concept of E-type Systems

E-type systems, also known as evolution-type systems, refer to software programs that are designed to mechanize human or societal activities and are deeply embedded within a dynamic external environment. These systems interact continuously with the real world, where they both influence and are influenced by environmental changes, creating intrinsic feedback loops that necessitate ongoing adaptation and modification. Unlike more static forms of software, E-type systems must evolve to maintain their utility, as the problems they address are not fixed but subject to continual transformation due to shifts in user needs, operational contexts, or external conditions.

In contrast, S-type systems, or specification-type systems, are formally defined by a precise mathematical specification from which the program's behavior can be rigorously derived and proven correct. These systems have no direct causal relationship with the external environment; any changes typically redefine the specification entirely, treating it as a new problem rather than an evolution of the existing one. P-type systems, or program-type systems, address real-world problems through practical approximations where exact solutions are infeasible, such as in optimization tasks, and they too require evolution based on performance feedback, but they are generally less embedded than E-type systems and more akin to ad hoc solutions. The distinction lies in the degree of environmental coupling: E-type systems are open and feedback-driven, S-type systems are closed and static, and P-type systems occupy an intermediate space focused on practical problem-solving.

Key characteristics of E-type systems include their status as open systems, susceptible to environmental pressure that drives inevitable change, often leading to increased complexity over time if not managed. They exhibit self-regulating behaviors in their evolution processes but remain highly prone to degradation without proactive maintenance. Representative examples include large-scale operating systems like OS/360, which must adapt to hardware advancements, user demands, and security threats, or enterprise applications such as financial software that respond to market fluctuations and regulatory updates. These systems form the core of industrial software, where longevity and adaptability are paramount.

The significance of E-type systems stems from their prevalence in long-lived, mission-critical applications that dominate software engineering practice. Most industrial software falls into this category, underscoring the need for evolutionary models to ensure sustained utility and quality in a changing world. This classification, originating from Meir M. Lehman's foundational research, highlights why traditional static development paradigms are insufficient for such systems.

Formulation of the Laws

Historical Development

The formulation of Lehman's laws of software evolution originated in 1974, when Meir M. Lehman proposed the initial three laws based on empirical analysis of the OS/360 operating system conducted at IBM. This work drew from detailed metrics on module counts, size growth, and change patterns across early releases of OS/360, revealing patterns of ongoing adaptation in large-scale systems. These observations highlighted the need for continuous maintenance to sustain system utility, laying the groundwork for understanding software as an evolving entity rather than a static artifact.

By 1980, Lehman expanded and formalized the framework into five laws in his seminal paper "Programs, Life Cycles, and Laws of Software Evolution," published in the Proceedings of the IEEE. This publication synthesized data from multiple OS/360 releases and introduced concepts like life cycle processes and evolutionary pressures, emphasizing how software must adapt to environmental demands to avoid degradation. The paper's influence stemmed from its rigorous empirical foundation, including quantitative tracking of system size and complexity over time, which demonstrated predictable growth trends in real-world applications.

The laws continued to evolve through the 1980s and early 1990s, culminating in their expansion to eight by 1996 with the publication "Metrics and Laws of Software Evolution—The Nineties View." This update incorporated insights from additional studies, refining earlier propositions and adding emphasis on feedback mechanisms. A pivotal development was Lehman's leadership of the FEAST (Feedback, Evolution And Software Technology) project in the mid-1990s, which focused on multi-level, multi-loop feedback in evolution processes across diverse systems.

Throughout this development, the laws were grounded in empirical data from over 20 releases of long-lived systems, including OS/360 and software developed for the Meteorological Office, where metrics on release cycles, fault rates, and structural changes validated the observed evolutionary dynamics. These sources provided representative examples of E-type systems, where external pressures drive perpetual change.

Initial Observations and Data

Meir M. Lehman conducted a longitudinal empirical study of software evolution by analyzing historical data from successive releases of IBM's OS/360 operating system, spanning from 1964 to 1976. This analysis examined approximately 19 releases, focusing on quantitative metrics to uncover patterns in system development and maintenance. The methodology involved collecting and interpreting release sequence data through time-series analysis and regression techniques, treating the software process as a feedback system to model growth and change dynamics.

Key metrics included system size measured in modules and lines of code, with OS/360 reaching about 4,800 modules and over 2 million statements by release 19. Changes were tracked via modules added, removed, or modified per release, alongside effort metrics such as man-days or machine hours invested and handling rates (modules processed per day). For instance, in release 19, 410 modules were added and 2,650 (about 55%) were changed over 275 days, an average rate of roughly 9.6 modules changed per day. Representative data from early to later releases showed the percentage of modules changed rising from 14.6% in releases 2–6 to 31.9% in releases 12–16, indicating escalating modification scope.

These observations revealed that software systems like OS/360 did not stabilize after initial development, instead exhibiting indefinite ongoing changes driven by fault repairs, enhancements, and adaptations. Growth patterns demonstrated statistically smooth increases in size, with an average net addition of about 200 modules per release, superimposed on cyclic ripples from feedback mechanisms. Complexity accrued progressively without deliberate intervention, as evidenced by the increasing fraction of the system affected by changes and a rising trend in complexity metrics (e.g., C_p = 0.14 + 0.0012 R^2, where C_p is a complexity measure and R is the release number). Maintenance activity remained high and self-regulating, with work rates around 10–11 modules per day, underscoring the need for continuous effort to sustain utility.

Lehman's approach pioneered the use of program metrics—such as module counts and change frequencies—as precursors to modern software measurement practices, integrated with feedback loop analysis to interpret evolutionary processes. This data collection and analytical framework directly informed the initial formulation of the laws of software evolution in 1974.
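The arithmetic behind these figures can be made concrete with a short script. The following is an illustrative sketch, not Lehman's original tooling: the Release record and function names are invented here, the release-19 values are the ones quoted above, and the complexity expression is the trend formula given in the text.

    from dataclasses import dataclass

    @dataclass
    class Release:
        number: int          # release sequence number R
        modules_total: int   # system size in modules at this release
        modules_added: int
        modules_changed: int
        duration_days: int   # days spent producing the release

    def change_rate(r: Release) -> float:
        """Modules changed per day over the release interval."""
        return r.modules_changed / r.duration_days

    def fraction_changed(r: Release) -> float:
        """Fraction of the system touched in this release."""
        return r.modules_changed / r.modules_total

    def complexity_trend(release_number: int) -> float:
        """Complexity proxy C_p = 0.14 + 0.0012 * R^2, as quoted above."""
        return 0.14 + 0.0012 * release_number ** 2

    # Release 19 figures from the text: 410 added, 2,650 changed, 275 days.
    r19 = Release(number=19, modules_total=4800, modules_added=410,
                  modules_changed=2650, duration_days=275)
    print(f"change rate: {change_rate(r19):.1f} modules/day")  # ~9.6
    print(f"fraction changed: {fraction_changed(r19):.0%}")    # ~55%
    print(f"C_p at R = 19: {complexity_trend(19):.2f}")        # ~0.57

Run against the quoted release-19 data, the script reproduces the roughly 9.6 modules changed per day and the 55% change fraction reported above.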

The Eight Laws

Law I: Continuing Change

Law I states that an E-type system must be continually adapted, or it becomes progressively less satisfactory in use. This law applies specifically to E-type programs, which are developed to solve real-world problems and thus reflect external realities such as user requirements and operational environments. Without ongoing adaptation, these systems degrade in utility due to evolving external factors, including shifts in user needs, technological advancements like new hardware platforms, and regulatory changes. Stagnation leads to obsolescence, as the software fails to align with its changing context, rendering it increasingly ineffective or costly to operate.

Empirical evidence for this law originated from analyses of the OS/360 operating system, where Lehman and Belady observed continuous modifications across over twenty user-oriented releases from the late 1960s to the mid-1970s. Data on module additions, deletions, and changes showed no period where change requests dropped to zero; instead, the system underwent perpetual evolution to address faults, performance issues, and new functionality driven by external pressures. In real-world scenarios, this is exemplified by legacy systems, such as outdated mainframe applications in banking, which fail without updates due to incompatibility with modern platforms and unaddressed security vulnerabilities, often necessitating full replacement.

The implications for software practice emphasize treating development as an ongoing process rather than a finite project, with budgets that allocate resources for indefinite evolution. Lifetime cost assessments must incorporate continuous adaptation to ensure economic viability, as unadapted systems incur escalating expenses that can exceed initial development costs. This shifts focus toward processes that enable low-cost changes and proactive planning to sustain system utility over extended lifecycles.

Law II: Increasing Complexity

As an E-type system evolves, its complexity increases unless work is done to maintain or reduce it. This law, formulated by Meir M. Lehman in 1974 and elaborated in subsequent works, posits that ongoing adaptations to a changing environment inherently amplify structural intricacies, akin to rising entropy in physical systems. Driven by the continuing change described in Law I, each modification—whether adding functionality, fixing defects, or enhancing performance—introduces new interdependencies, tangles, and architectural drift, eroding the system's original simplicity without deliberate countermeasures.

Empirical evidence from Lehman's studies of large-scale systems, such as IBM's OS/360 operating system, demonstrates this progression through measurable indicators of complexity. For instance, analysis of successive releases showed the fraction of modules requiring changes rising from approximately 33% in Release 15 to 56% in Release 19 of a related system, reflecting growing coupling and interconnectedness across components. A representative example is the evolution of monolithic codebases, where feature additions in legacy systems like early OS/360 variants led to denser inter-module dependencies, complicating future maintenance and increasing the burden on developers.

To counteract this buildup, software engineers must allocate resources to anti-regressive activities, such as refactoring to simplify code structures, modularization to encapsulate functionality behind clear interfaces, abstraction layers to hide implementation details, or periodic rewrites of aging components. These techniques, emphasized in Lehman's work, help restore organizational stability by proactively managing complexity, ensuring the system remains adaptable without exponential maintenance costs.

Law III: Self-Regulation

The Law of Self-Regulation, the third in Meir M. Lehman's formulation of the software evolution laws, states that the evolution process of E-type software systems is self-regulating, characterized by distributions of product and process measures that are close to normal. This regulation emerges from feedback mechanisms within the development environment, where positive and negative controls—arising from testing, usage patterns, and error detection—dynamically adjust the pace of change to maintain balance against environmental demands. As a result, the system's change velocity stabilizes, preventing uncontrolled acceleration or stagnation in its evolution.

Evidence for this law draws from empirical observations of industrial E-type software systems, where release cycles demonstrate notable stability despite fluctuating inputs such as requirement changes or resource availability. For instance, analyses of systems like OS/360 and FW reveal ripple effects in growth trends, where initial modifications, such as bug fixes, propagate to induce broader refactoring, thereby reinforcing self-stabilization through cyclic adjustments around an average growth rate. These patterns indicate that the process inherently corrects deviations, leading to normally distributed metrics for attributes like module size and update frequency. A minimal simulation of this stabilizing dynamic is sketched below.

Key characteristics of self-regulation include its basis in emergent behavior from numerous pseudo-independent decisions by developers and users, akin to a feedback-driven control system that sustains long-term viability. This dynamic contributes to organizational stability as a related phenomenon, ensuring consistent activity levels across development phases.
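The stabilizing effect of negative feedback can be illustrated with a toy simulation. This is a hedged sketch under assumed constants (a sustainable increment of 200 modules per release and a feedback gain of 1.5, both invented for illustration), not a model fitted to any real system: each release's growth increment is pulled back toward the sustainable average, producing damped oscillations like the "ripples" described above.

    def simulate_growth(releases: int = 12, target_increment: float = 200.0,
                        gain: float = 1.5) -> list[float]:
        """Toy negative-feedback growth model: system size per release."""
        size = 1000.0
        sizes = [size]
        increment = 360.0  # an initially over-ambitious growth increment
        for _ in range(releases):
            size += increment
            sizes.append(size)
            # Feedback: pressure from testing, fault repair, and integration
            # pushes the next increment back toward the sustainable average.
            increment += gain * (target_increment - increment)
        return sizes

    sizes = simulate_growth()
    increments = [round(b - a) for a, b in zip(sizes, sizes[1:])]
    print(increments)  # [360, 120, 240, 180, 210, 195, ...] damping toward 200

Because the gain satisfies |1 - gain| < 1, the oscillation shrinks by half each release and the per-release increment converges on the long-run average, the self-regulated behavior the law describes.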

Law IV: Conservation of Organizational Stability

The fourth law of software evolution, known as the conservation of organizational stability, states that the average effective global activity rate on an evolving E-type system remains invariant over the system's lifetime. This invariance reflects the organizational tendency to maintain a steady level of development effort, where the total work applied—such as person-months per release—does not grow exponentially despite the system's increasing size and complexity. Instead, resources are reallocated internally to balance demands, preventing dramatic fluctuations in productivity rates.

Empirical evidence supporting this law comes from analyses of long-lived systems, which demonstrate consistent annual volumes of change activity. For instance, data from the OS/360 operating system, evolved over more than 20 years, showed stable effort levels across its lifecycle, as illustrated in growth curves that maintained a steady rate of modifications despite external pressures. Similarly, the FW system, spanning 23 releases, exhibited invariant global activity, with maintenance teams sustaining constant output without proportional increases in personnel. In earlier observations of OS/360, the average work rate held at approximately 10.4 modules per day over multiple releases, underscoring the law's applicability to mature projects. This self-stabilizing behavior is enabled by feedback mechanisms akin to those in Law III, ensuring organizational equilibrium.

The implications for software management include the ability to predict and budget for evolutionary costs reliably, as the invariant rate avoids the pitfalls of escalating resource demands and supports long-term planning without expecting unbounded growth in development expenditure.

Law V: Conservation of Familiarity

Law V, the Law of Conservation of Familiarity, posits that during the active life of an E-type program, the content of successive releases—encompassing changes, additions, and deletions—remains statistically invariant to preserve mastery among developers, users, and other stakeholders. This invariance arises from the nonlinear relationship between the magnitude of system modifications and the intellectual effort required to absorb them, ensuring that excessive changes do not overwhelm comprehension and hinder ongoing evolution.

As software undergoes repeated modifications, its internal structure tends to deteriorate, eroding developers' mental models unless proactive measures are implemented to enhance understanding. Without interventions such as thorough documentation or structured assessments, declining familiarity prolongs the time needed for subsequent work, as maintainers must reconstruct their cognitive grasp of the system. This effect contributes to the broader challenge of increasing complexity in evolving systems, where accumulated changes amplify the cognitive load on the development team. Empirical observations underscore this dynamic; for instance, program comprehension often accounts for over half of the total effort in maintaining legacy systems, frequently necessitating reverse engineering of tangled "spaghetti code" structures that have grown organically over time.

Supporting evidence from long-lived projects illustrates the law's implications. An analysis of 810 Linux kernel versions spanning 14 years found partial adherence, with stable growth rates in minor releases aligning with familiarity constraints, though major releases exhibited discontinuous jumps that temporarily disrupted developer understanding. Similarly, a study of commercial UNIX variants confirmed invariant incremental growth post-commercialization, contrasting with more erratic patterns in academic versions, highlighting how organizational needs enforce release stability to sustain team proficiency.

To counteract familiarity loss, several strategies have been proposed within the framework of Lehman's work. Maintaining changes at an average incremental rate, rather than concentrating them in single releases, allows developers to adapt gradually without overwhelming cognitive demands. For larger updates, distributing growth across multiple releases or incorporating preparatory cleanup phases preserves structural clarity and eases comprehension. Additionally, automated tools for collecting and modeling metrics—such as lines of code added or altered—enable proactive monitoring of evolution trends, facilitating informed decisions to uphold developer grasp; a simple monitoring sketch follows below. Modularization practices further support this by isolating changes, reducing the scope of reconstruction required during maintenance.
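As a concrete version of the monitoring idea just mentioned, the sketch below flags releases whose churn jumps well above the recent average, the kind of discontinuity Law V predicts will disrupt familiarity. The threshold, window, and churn figures are illustrative assumptions, not values from Lehman's work.

    def flag_risky_releases(churn_per_release: list[int],
                            window: int = 3, factor: float = 2.0) -> list[int]:
        """Return indices of releases whose churn (e.g., lines of code added
        or altered) exceeds `factor` times the trailing `window` average."""
        risky = []
        for i in range(window, len(churn_per_release)):
            trailing_avg = sum(churn_per_release[i - window:i]) / window
            if churn_per_release[i] > factor * trailing_avg:
                risky.append(i)
        return risky

    # Hypothetical churn series; release index 4 concentrates a large update.
    churn = [4000, 4500, 4200, 4300, 12500, 4400, 4600]
    print(flag_risky_releases(churn))  # [4]

A flagged release would, under the strategies above, be split across several releases or preceded by a cleanup phase.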

Law VI: Continuing Growth

Law VI, also known as the law of continuing growth, states that the functional content of E-type systems must be continually increased to maintain user satisfaction over their lifetime. This law, formulated by Meir M. Lehman in 1980, emphasizes that software systems cannot remain static; instead, they require ongoing enhancements to their capabilities to meet evolving user expectations and environmental demands.

The core explanation of this law revolves around the dynamic nature of user needs and the external environment. Over time, users demand additional features to address previously omitted functionality, remove performance bottlenecks, or incorporate new requirements arising from real-world changes. Mere corrective maintenance, such as bug fixes, is insufficient; without proactive growth in functional capability, the system risks becoming obsolete or less useful, leading to declining satisfaction. This growth counters "environmental drift," where the system's context evolves independently, necessitating adaptations to preserve relevance.

Empirical evidence supporting this law has been drawn from long-term studies of software evolution. For instance, analysis of IBM's OS/360 operating system releases demonstrated sustained increases in functional content and size, with growth rates reflecting the addition of new features to sustain user satisfaction. More recent validations in open-source projects, spanning 705 releases across nine systems over 108 cumulative years, confirm continuing growth through statistical testing, showing consistent expansion in functionality rather than stagnation. A representative example is the evolution of web applications, where systems such as e-commerce platforms regularly introduce new APIs to enable expanded integrations and features, outpacing mere fixes to align with user-driven demands for enhanced interactivity and scalability.

This law builds upon Law I (continuing change) by specifying that much of the required adaptation must manifest as directed functional growth, rather than general modifications alone.

Law VII: Declining Quality

The seventh law of software evolution, the Law of Declining Quality, posits that E-type systems—those operating in real-world environments—will be perceived as declining in quality, particularly in attributes like reliability and performance, unless they are rigorously maintained and adapted to changes in their operational environment. This perception arises from the progressive invalidation of embedded assumptions as external conditions evolve, leading to mismatches between the system's behavior and user expectations. Accumulated changes during evolution introduce defects and inefficiencies, as modifications to address new requirements often overlook long-term impacts on non-functional properties. Quality does not preserve itself passively; instead, without deliberate effort, systems accumulate technical debt, resulting in brittle behavior where minor updates trigger widespread issues. This erosion is compounded by factors such as rising user expectations and the emergence of competitive alternatives, further amplifying the perceived decline.

Empirical evidence from long-lived open-source projects illustrates this decline through metrics like accumulated defect density (ADD), defined as the ratio of confirmed defect reports to system size (e.g., lines of code). In Apache Tomcat's 5.5 branch across 27 releases from 2004 to 2010, ADD steadily increased, signaling rising failure rates and quality loss in the absence of major restructuring. Similarly, Tomcat's 6.0 branch over 18 releases showed comparable trends, with unmaintained evolution leading to higher defect densities and system brittleness. These patterns confirm that aging systems exhibit escalating defects without intervention, as observed in bug-tracking data from tools like Bugzilla.

To mitigate declining quality, evolution processes require rigorous maintenance strategies, including systematic testing to identify and resolve defects introduced by changes, refactoring to reduce accumulated inefficiencies, and quality gates—such as automated validation and peer reviews—to enforce standards at each development stage. For example, restructuring in Apache Ant's trunk releases from 2000 to 2010 reversed ADD trends, stabilizing quality after initial declines. Such proactive adaptations keep systems aligned with their environment, preventing the passive erosion of reliability and performance.
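The ADD metric just defined is straightforward to compute from bug-tracker and size data. The sketch below is illustrative, using made-up release figures rather than the actual Tomcat data:

    from itertools import accumulate

    def accumulated_defect_density(defects_per_release: list[int],
                                   kloc_per_release: list[float]) -> list[float]:
        """ADD at release i: confirmed defect reports accumulated up to and
        including release i, divided by system size (KLOC) at release i."""
        cumulative = accumulate(defects_per_release)
        return [d / kloc for d, kloc in zip(cumulative, kloc_per_release)]

    defects = [120, 95, 110, 130, 140]                # confirmed reports per release
    sizes_kloc = [180.0, 185.0, 192.0, 197.0, 205.0]  # size in thousands of LOC
    print([round(x, 2) for x in accumulated_defect_density(defects, sizes_kloc)])
    # [0.67, 1.16, 1.69, 2.31, 2.9] -- a monotonically rising ADD, the pattern
    # the Tomcat branches exhibited in the absence of restructuring.

A flattening or falling ADD series after a restructuring release, as in the Ant example above, is the signal that the intervention worked.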

Law VIII: Feedback System

Law VIII posits that E-type evolution processes constitute multi-level, multi-loop, multi-agent feedback systems and must be treated as such to achieve significant improvement over any reasonable base. This law underscores the intricate dynamics of software evolution, where changes introduced by developers interact with inputs from users and the operational environment, forming nested feedback loops that continuously influence subsequent modifications. For instance, user feedback on performance can trigger design adjustments, which in turn alter the system's behavior in the field, creating a cycle of influence that shapes future evolution paths.

Evidence for this feedback mechanism emerges from the FEAST (Feedback, Evolution And Software Technology) project, which models software evolution through iterative cycles observed in long-lived systems. The FEAST/1 initiative analyzed historical data from systems like OS/360 (with 26 releases over decades) and FW (21 releases), revealing cyclic patterns of growth and stabilization driven by multi-agent interactions, such as ripple effects from usage data feeding back into redesign efforts. These models employ black-box analysis, white-box modeling, and multi-agent simulations to demonstrate how feedback loops maintain system viability amid environmental pressures.

A practical illustration of this law appears in modern methodologies like agile development, where short iterative cycles incorporate continuous feedback from stakeholders, mirroring the multi-loop structure. Practices such as sprint reviews and regular retrospectives enable rapid adaptation, aligning with the feedback-driven dynamics Lehman described.

This law provides a holistic framework, integrating prior observations—such as self-regulation as a foundational form of feedback—into a unified view of software as a living entity shaped by ongoing interactions. By framing evolution as a controllable feedback system, it emphasizes the need for proactive process management to sustain long-term system quality and relevance.

Implications and Applications

In Software Maintenance Practices

Lehman's laws provide a foundational framework for relating software maintenance types to the ongoing evolution of E-type systems, which are designed for real-world problem domains and thus require continuous adaptation. Adaptive maintenance, which modifies software to accommodate changes in its operational environment, aligns directly with Law I (Continuing Change), as systems must evolve to remain useful or risk obsolescence. Corrective maintenance, focused on fixing faults and errors, relates to Law VII (Declining Quality), where unchecked evolution leads to degradation unless proactive interventions restore reliability. Perfective maintenance, aimed at improving performance or functionality, corresponds to Laws II (Increasing Complexity) and VI (Continuing Growth), necessitating enhancements that manage growing complexity and expand capabilities while preserving stability.

Practical strategies informed by these laws enhance maintenance efficiency. Predictive modeling of change volume, drawing from Law IV (Conservation of Organizational Stability), allows teams to forecast change rates and resource needs, ensuring consistent output despite varying demands; for instance, models can estimate modifications per release to plan staffing, as in the sketch below. Refactoring schedules, guided by Law V (Conservation of Familiarity), involve periodic restructuring to limit the impact of staff turnover and maintain developer productivity, such as allocating time for code clean-ups to counteract complexity accumulation. For feedback mechanisms under Law VIII (Feedback System), incorporating multi-level inputs from users and environments into development cycles supports timely adjustments, though this predates modern tooling and emphasizes structured processes.

Case studies illustrate these applications in system rejuvenation. In the analysis of OS/360, a large batch operating system with over 4,800 modules across 18 releases by 1980, predictive models based on the laws identified saturation risks, leading to recommendations for a dedicated "clean-up" release that reduced the proportion of modified modules from its 55% peak to stabilize growth and complexity—demonstrating how the laws guide interventions to extend system life. Similarly, in COBOL-based migrations, such as rehosting financial applications to modern platforms, the laws inform strategies to reduce complexity through modularization and wrapper techniques, preserving core logic while adapting to new environments and averting obsolescence without full rewrites.

The benefits of applying Lehman's laws in maintenance practice are significant for resource allocation and long-term viability. By anticipating change volumes and complexity trends, organizations can allocate resources more effectively, potentially reducing maintenance costs that historically consumed up to 70% of software budgets in the 1970s. This approach prevents unplanned obsolescence by promoting proactive rejuvenation, ensuring systems remain aligned with evolving requirements and minimizing the risks associated with legacy degradation.
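As a sketch of the predictive modeling mentioned above, the following fits an ordinary least-squares line to modules handled per release and extrapolates one release ahead. The release numbers and module counts are hypothetical, chosen only to echo the scale of the OS/360 data:

    def fit_line(xs: list[float], ys: list[float]) -> tuple[float, float]:
        """Ordinary least squares for y = a + b*x; returns (a, b)."""
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        b = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
        a = mean_y - b * mean_x
        return a, b

    releases = [15.0, 16.0, 17.0, 18.0, 19.0]
    modules_handled = [2100.0, 2250.0, 2500.0, 2700.0, 3060.0]  # hypothetical
    a, b = fit_line(releases, modules_handled)
    print(f"forecast for release 20: {a + b * 20:.0f} modules")  # ~3233

Under Law IV's invariant activity rate, a forecast that exceeds the team's demonstrated handling capacity is a planning signal to defer scope or schedule a clean-up release, rather than to assume staffing can simply grow.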

Relevance to Modern Software Engineering

Lehman's laws remain highly relevant to agile methodologies, where iterative sprints facilitate ongoing adaptation to changing requirements, directly embodying the principle of continuing change (Law I). Empirical studies of agile projects have found most of the laws to hold, demonstrating how refactoring practices mitigate increasing complexity (Law II) by restructuring code to maintain quality and adaptability across versions. Furthermore, agile's emphasis on continuous feedback loops, such as through the Goal-Question-Metric (GQM) paradigm, aligns with the feedback-system view (Law VIII), enabling teams to regulate evolution based on user and process metrics.

In microservice architectures and cloud environments, modular designs help preserve developer familiarity (Law V) by isolating components, allowing teams to update services independently without disrupting the overall system. However, maintaining backward-compatible APIs to support continuing growth (Law VI) often leads to accumulating technical debt, as multiple versions increase complexity (Law II) and complicate maintenance efforts. Practitioner surveys confirm that breaking changes are accepted quarterly to semi-annually to address evolving needs, underscoring the laws' prediction that unchecked evolution degrades design quality over time.

DevOps practices, particularly continuous integration and continuous delivery (CI/CD) pipelines, operationalize self-regulation (Law III) by automating testing and deployment to stabilize system metrics and their distributions. These pipelines also enable proactive monitoring of quality decline (Law VII), with dependency and quality metrics tracked across releases to prevent architectural degradation in dynamic environments. By integrating feedback from runtime data and automated alerts, DevOps counters the natural degradation described in the laws, supporting sustained organizational stability (Law IV).

Modern platforms like Kubernetes exemplify the laws' predictive power in container orchestration, where rapid scaling and modular deployments address continuing growth (Law VI) but introduce maintenance challenges from escalating complexity (Law II) in interconnected services. Evolving clusters require constant refactoring and monitoring to avoid familiarity erosion (Law V), as external dependencies and updates—common in cloud-native setups—can lead to quality degradation without rigorous feedback mechanisms (Law VIII). These insights from Lehman's laws guide practitioners in anticipating and mitigating evolution pitfalls in highly distributed systems. A recent empirical study further validated six of the eight laws in evolving agile systems, confirming their ongoing applicability.

Criticisms and Extensions

Key Critiques

Lehman's laws of software evolution, formulated primarily from observations of large-scale, proprietary systems running on 1970s mainframe computers such as IBM's OS/360, have faced significant empirical scrutiny regarding their generalizability to contemporary software ecosystems. Early validations relied on limited datasets from industrial environments, leading critics to question their applicability beyond such contexts. For instance, studies of open-source software (OSS) projects, including the Linux kernel, have yielded mixed results, with some demonstrating accelerating or superlinear growth that contradicts expectations of stabilizing or decelerating evolution under the laws. Godfrey and Tu's analysis of the Linux kernel revealed continued rapid expansion without the predicted slowdown in growth rates, challenging laws like VI (Continuing Growth) and suggesting that distributed, collaborative development in OSS may evade traditional invariance patterns. Subsequent work on other large OSS systems has further highlighted inconsistencies, where systems exhibit non-linear trajectories not anticipated by the original formulations. These findings underscore the laws' origins in closed, E-type environments, raising doubts about their relevance to mobile applications or agile OSS repositories that undergo rapid, iterative change without evident complexity buildup.

Conceptually, the laws have been critiqued as largely descriptive rather than predictive, offering observations of past behavior without robust mechanisms for forecasting. This descriptive nature stems from ambiguous definitions and the absence of standardized, quantifiable metrics for core concepts like "complexity" or "quality," which has complicated empirical testing and led to varying interpretations across studies. For example, Law II (Increasing Complexity) often relies on proxies such as source lines of code (SLOC), but critics argue this metric fails to capture architectural or modular improvements that can offset complexity growth. Similarly, Law VII (Declining Quality) lacks precise indicators, making it difficult to measure objectively. Such vagueness renders some laws, including IV (Conservation of Organizational Stability) and V (Conservation of Familiarity), more like corollaries or interdependent ideas than independent principles, potentially overlapping without clear boundaries. The reliance on qualitative assessment has also invited accusations of tautology, where the laws appear self-evident or circular in explaining evolution as inevitable change driven by change itself.

A key conceptual limitation is the overemphasis on E-type software—systems that evolve in response to an ever-changing external environment—while largely sidelining successes in S-type (formal, specified) or P-type (arbitrary, problem-specific) programs, which may not exhibit the same degenerative tendencies. Lehman himself classified the laws as primarily applicable to E-type contexts, but this focus has been seen as restricting their universality, ignoring scenarios where deliberate design or throwaway prototyping avoids the predicted pitfalls. Law V, in particular, posits that successive releases maintain statistically invariant content to preserve developer familiarity, yet modern integrated development environments (IDEs), automated refactoring tools, and continuous integration practices have been argued to decouple familiarity from strict invariance, allowing larger changes without productivity loss.
This claimed obsolescence is evident in agile methodologies, where team familiarity is sustained through documentation and collaboration tools rather than rigid release constraints. Notable critiques from the 1980s onward emphasized empirical and methodological shortcomings, including insufficient statistical validation and overreliance on a single, atypical system. Lawrence's 1982 analysis, echoed in later discussions, highlighted weak evidence due to small sample sizes and qualitative methods. Additionally, some works argued that the laws undervalue economic and organizational factors, such as funding cycles or market pressures, which drive evolution independently of technical invariance. Lehman addressed these by analogizing the laws to economic principles, which are probabilistic rather than deterministic, but critics maintained that ignoring socioeconomic drivers limits practical applicability.

Subsequent Research and Developments

In the 1990s, Meir M. Lehman extended his original laws through the FEAST (Feedback, Evolution And Software Technology) hypothesis, which posits that E-type software processes are complex feedback systems exhibiting strong dynamics and global stability, thereby constraining efforts to improve them without proactive modeling of their feedback loops. This hypothesis built directly on laws such as Continuing Change (Law I) and Feedback System (Law VIII) by emphasizing multi-level feedback mechanisms involving human decision-making and environmental interactions, as evidenced in analyses of systems like OS/360, where ripple effects in growth trends indicated regulatory feedback. The FEAST/1 project (1996–1998) provided empirical support through metric-based studies, demonstrating how positive feedback drives functional growth while negative feedback stabilizes complexity, thus enabling predictive process models for proactive management.

During the 2000s, empirical validations confirmed the laws' applicability to open-source software, particularly Law VI (Continuing Growth). Michael W. Godfrey and Qiang Tu's analysis of the Linux kernel across 96 releases from 1994 to 2000 revealed superlinear growth in code size, from approximately 1 million to over 3 million lines of code, aligning with the law's prediction of sustained expansion to maintain utility despite increasing complexity. Similar patterns emerged in web-based systems, where growth trends showed self-regulating behavior (Law III) through developer interventions. A broader 2011 study by Iulian Neamtiu, Guowei Xie, and Jianbo Chen examined nine popular open-source projects (e.g., Apache, MySQL) over 108 cumulative years and 705 releases, confirming Laws I (Continuing Change) and VI universally, while noting partial support for others depending on metrics like lines of code and change frequency, reinforcing the laws' relevance in distributed, collaborative environments.

In the 2010s, researchers integrated Lehman's laws with software entropy models to quantify complexity growth under Law II (Increasing Complexity). A 2012 study framed this law thermodynamically, interpreting unmaintained change as rising entropy—measured as uncertainty over microstates (e.g., execution paths)—which degrades structure unless counteracted by refactoring, akin to entropy reduction in physical systems. This integration, drawing on Normalized Systems theory, highlighted design principles such as separation of concerns to bound combinatorial effects and preserve evolvability, providing a formal basis for mitigating entropy accumulation in long-lived systems. Michael W. Godfrey's 2014 retrospective further examined the laws by analyzing their empirical trajectory, noting how entropy-like degradation persists in modern contexts unless addressed through systematic maintenance.

Recent 2020s research has revisited the laws' invariances in dynamic environments, with a 2024 exploratory study validating them in agile software systems, where iterative practices uphold the predicted growth and change patterns. Applications to machine learning systems draw parallels between model drift—degradation in predictive accuracy due to evolving data distributions—and Law I's continuing change, necessitating ongoing retraining to sustain performance, though direct empirical links remain emerging. In cloud-native settings, such as Kubernetes-orchestrated applications, the laws underscore the need for modular architectures to counter complexity increases from frequent deployments, as initial studies indicate sustained growth in configuration sizes mirroring Law VI. As of 2025, the laws remain influential, with no major new extensions or critiques having emerged.

References

  1. Software evolution—Background, theory, practice. ScienceDirect.
  2. Programs, Life Cycles, and Laws of Software Evolution (PDF).
  3. Meir M LEHMAN personal appointments. Companies House.
  4. Oral-History: Meir Lehman.
  5. In memory of Manny Lehman, 'Father of Software Evolution'. Canfora, 21 March 2011.
  6. On the Evolution of Lehman's Laws (PDF). PLG.
  7. Programs, life cycles, and laws of software evolution. IEEE Xplore, 30 September 1980.
  8. The evolution of the laws of software evolution. A discussion ... (PDF).
  9. Program Evolution - Processes of Software Change (PDF). Gwern.
  10. Metrics and Laws of Software Evolution - The Nineties View (PDF).
  11. A model of large program development (PDF). UCSD CSE.
  12. Legacy System Modernization Without Breaking Your Business. 21 July 2025.
  13. Laws of software evolution revisited (PDF). Gwern.
  14. Rules and Tools for Software Evolution Planning and Management (PDF). 9 December 2002.
  15. Towards a better understanding of software evolution: an empirical study. 1 September 2011.
  16. Ancient lore: Lehman's laws of software evolution. Microservices.io, 6 August 2023.
  17. An Empirical Study of Lehman's Law on Software Quality Evolution (PDF).
  18. Summary of Lehman's Laws and Software Maintenance. https://www.utc.edu/document/72156
  19. Software Evolution of Legacy Systems (PDF). SciTePress.
  20. An Exploratory Study on the Validation of Lehman's Laws. 23 May 2024.
  21. Microservice API Evolution in Practice: A Study on Strategies and ...
  22. Metrics and laws of software evolution - the nineties view. IEEE Xplore.
  23. Exploring Entropy in Software Systems: Towards a Precise ... (PDF). UPV.
  24. On the evolution of Lehman's Laws. Godfrey. Wiley Online Library, 15 November 2013.