Software quality
Software quality refers to the degree to which a software product or system meets specified requirements, satisfies stated and implied needs of its stakeholders, and provides value through desirable attributes such as functionality, reliability, and usability.[1][2] It encompasses both the inherent characteristics of the software itself—evaluating how well it conforms to design and non-functional demands like performance and security—and the processes used to develop and maintain it, ensuring consistency, defect reduction, and alignment with user expectations.[3][4]
A foundational framework for assessing software quality is provided by international standards, notably the ISO/IEC 25000 (SQuaRE) series: ISO/IEC 25010:2023 defines the product quality model, while the 2023 revision moved the quality-in-use model into ISO/IEC 25019. Together these organize quality into two primary categories: product quality and quality in use. Product quality includes nine key characteristics: functional suitability (degree to which the software provides functions that meet stated and implied needs), performance efficiency (behavior under stated conditions), compatibility (ability to exchange information with other products), interaction capability (formerly usability; ease of understanding, learning, and use), reliability (ability to perform under specified conditions), security (protection of information and data), maintainability (ease of modification), flexibility (formerly portability; ability to adapt and transfer to different environments), and safety (degree to which a product or system mitigates the potential for harm to its users or other stakeholders).[5] Quality in use, on the other hand, focuses on the software's effectiveness in a specific context, covering effectiveness (accuracy and completeness of tasks), efficiency (resource use relative to results), satisfaction (user comfort and acceptability), freedom from risk (minimizing potential harm), and context coverage (use across different environments).[1]
Software quality is integral to software engineering practices, influencing project success, cost control, and user satisfaction; poor quality can lead to failures in critical
systems, while high quality supports scalability and long-term viability.[2] Measurement and assurance are achieved through methodologies like those in IEEE Std 730-2014, which outlines processes for planning, reviewing, and auditing to ensure compliance with quality requirements, often involving metrics for defect density, code coverage, and maintainability indices.[6] Emerging standards, such as ISO/IEC 5055:2021 from the Consortium for IT Software Quality (CISQ), supplement these by providing automated measures for structural attributes like reliability and security, adapting to modern challenges in cloud, mobile, and AI-driven software.[7]
Motivation
Business and Economic Drivers
Poor software quality imposes significant direct and indirect costs on organizations, including substantial rework efforts that can consume 40-50% of development budgets, as outlined in Boehm and Basili's analysis of software defect reduction strategies.[8] These costs escalate when defects are discovered late in the development cycle, amplifying expenses through repeated testing and fixes. Liability from software failures further compounds financial risks; for instance, in 2012, Knight Capital Group suffered a $440 million loss in less than an hour due to a software glitch in its automated trading system, nearly leading to the firm's collapse and subsequent acquisition.[9] Opportunity costs also arise from lost market share, as unreliable software erodes customer trust and allows competitors to capture revenue.[10]
More recent incidents underscore these risks. On July 19, 2024, a faulty update to CrowdStrike's Falcon sensor software caused a global IT outage affecting approximately 8.5 million Windows devices, grounding flights, disrupting hospitals and banks, and resulting in direct losses of $5.4 billion for Fortune 500 companies, with broader economic impacts estimated at over $10 billion.[11][12] This event highlighted vulnerabilities in update deployment processes and the cascading effects of untested changes in widely used security software.
Economic models provide frameworks for quantifying these impacts and justifying investments in quality. The Cost of Quality (CoQ) framework, introduced by Philip B.
Crosby in 1979, categorizes costs into prevention (planning and training to avoid defects), appraisal (inspections and testing), and failure (internal rework and external liabilities), emphasizing that proactive measures reduce overall expenses by minimizing nonconformance.[13] This model highlights how poor quality can account for 20-40% of sales revenue in manufacturing and software contexts, underscoring the need for organizations to track and optimize these categories for profitability.[14]
Investing in quality practices yields a strong return on investment by lowering long-term expenses and accelerating business value delivery. Maintenance activities, which often comprise 60-80% of a software system's total lifecycle costs, can be significantly reduced through early defect prevention and robust testing, freeing resources for innovation.[15] Integrating quality assurance into agile methodologies further enhances ROI by enabling faster time-to-market—up to 50% quicker delivery cycles—while maintaining reliability, as teams iteratively incorporate testing to minimize defects and support rapid releases.[16] Reliability, as a core quality attribute, directly influences these economic outcomes by mitigating risks of costly downtime and regulatory penalties.
Real-world examples illustrate the severe economic consequences of quality lapses. The Therac-25 radiation therapy machine, produced by Atomic Energy of Canada Limited, experienced software-controlled overdoses between 1985 and 1987, resulting in patient injuries and deaths that prompted machine recalls, extensive redesigns, and multimillion-dollar settlements, including at least $1 million per affected institution to replace faulty units.[17] These incidents not only incurred direct legal and remediation costs but also damaged the manufacturer's reputation, leading to lost contracts and heightened scrutiny in the medical device industry.[18]
User and Societal Impacts
Poor software quality often manifests in usability defects that cause user frustration and errors, directly impacting daily interactions with technology. For instance, violations of established usability principles, such as inconsistent interface design or lack of user control and freedom, can lead to repeated mistakes and heightened stress during task completion.[19] Studies have shown that users experience frustration in approximately 11% of their interactions with digital systems, primarily due to implementation flaws like bugs and poor error handling, which exacerbate cognitive load and hinder effective use.[20] These issues not only diminish user satisfaction but also result in productivity losses through wasted time on error recovery and workarounds, particularly in high-stakes environments like professional workflows.[21]
The 2024 CrowdStrike outage exemplified these user impacts, stranding travelers at airports worldwide due to grounded flights, delaying medical procedures in healthcare facilities, and causing widespread disruptions in retail and transportation, leading to hours of downtime and heightened user stress from unreliable digital services.[11]
In safety-critical domains, software defects can have catastrophic human consequences, underscoring the life-threatening stakes of quality assurance. The 1996 Ariane 5 rocket launch failure, caused by an integer overflow in the inertial reference system software—a remnant of unadapted code from the Ariane 4—triggered the vehicle's self-destruction just 37 seconds after liftoff, resulting in the loss of the payload and endangering ground operations.[22] Similarly, a software timing error in the 1991 U.S. Patriot missile defense system, stemming from a 24-bit fixed-point arithmetic approximation that accumulated a 0.34-second discrepancy after extended operation, failed to intercept an incoming Scud missile during the Gulf War, contributing to the deaths of 28 U.S.
soldiers in a barracks impact.[23] Such incidents highlight how subtle quality lapses in real-time systems can amplify into disasters, eroding confidence in automated safety mechanisms.
Broader societal repercussions arise from quality deficiencies that enable privacy invasions and perpetuate inequities through flawed algorithms. The 2017 Equifax data breach, exploiting an unpatched vulnerability (CVE-2017-5638) in Apache Struts software, exposed sensitive personal information of 147 million individuals, leading to widespread identity theft risks and long-term harm to victims' financial security.[24] In artificial intelligence applications, low-quality training data riddled with biases can amplify discriminatory outcomes, as models trained on incomplete or skewed datasets reinforce societal prejudices, such as racial or gender disparities in decision-making tools for hiring or lending.[25] These ethical pitfalls extend to public trust erosion when software failures in essential services undermine reliability.
Conversely, high software quality fosters societal resilience by building enduring trust in digital ecosystems, particularly in vital sectors like healthcare. Robust electronic health record (EHR) systems, when engineered with strong reliability and security attributes, enable accurate data sharing and timely interventions, enhancing patient outcomes and clinician efficiency without compromising privacy.[26] For example, well-designed EHRs reduce errors in medical decision-making and support seamless care coordination, thereby strengthening overall public faith in health informatics as a cornerstone of modern infrastructure.[27]
Definitions
Core Concepts
Software quality is fundamentally defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that stakeholders assume to be self-evident. This definition emphasizes that high-quality software not only satisfies specified needs but also avoids significant defects, ensuring reliability and utility in real-world applications.
A foundational framework for understanding quality, adaptable to software, comes from David A. Garvin's five perspectives outlined in 1984. The transcendental perspective views quality as an inherent excellence that is recognizable but difficult to define precisely, often described intuitively in software as "elegant" code or intuitive interfaces.[28] The product-based perspective focuses on quantifiable attributes, such as lines of code efficiency or error rates in software systems.[28] The user-based perspective defines quality as fitness for use, prioritizing how well the software meets end-user expectations in practical scenarios.[28] The manufacturing-based perspective stresses conformance to design specifications, ensuring the implemented software matches its planned architecture.[28] Finally, the value-based perspective balances quality against cost, evaluating software based on its benefits relative to development and maintenance expenses.[28] These perspectives highlight the multifaceted nature of software quality, bridging philosophical, technical, and economic viewpoints.
Barry Boehm's 1976 software quality model initially conceptualized quality as a function inversely proportional to the density of defects, where higher quality corresponds to fewer faults per unit of code. However, Boehm's work evolved to recognize quality as a multifaceted attribute, incorporating portability, reliability, efficiency, usability, and other characteristics that extend beyond mere defect absence.
This model laid the groundwork for hierarchical quality evaluation, influencing later assessments by providing a structured way to quantify and balance multiple quality factors in software development.
A key distinction in software quality lies between quality of design and quality of conformance. Quality of design refers to the planned attributes embedded in requirements, specifications, and system architecture, determining the potential excellence of the software product. In contrast, quality of conformance measures how accurately the implemented software matches this design, focusing on implementation fidelity and defect prevention during development and testing. These concepts underscore that superior design sets the foundation, but rigorous conformance ensures the software realizes its intended quality in practice. International standards, such as those from ISO, build on these core ideas by providing formalized frameworks for assessing and improving software quality attributes.
Standards and Organizational Perspectives
Formal standards bodies and professional organizations have developed structured frameworks to define and evaluate software quality, providing benchmarks for consistency across industries. The International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) in their ISO/IEC 25010:2023 standard, titled Systems and software engineering—Systems and software Quality Requirements and Evaluation (SQuaRE)—System and software quality models, define software product quality as the degree to which a software product satisfies the stated and implied needs of its various stakeholders, thereby providing value to them.[5] This model organizes quality into nine primary characteristics—functional suitability, performance efficiency, compatibility, interaction capability (formerly usability), reliability, security, safety, maintainability, and flexibility (formerly portability)—each subdivided into sub-characteristics that address static properties of the software and its dynamic behavior in use.[5] These characteristics emphasize a holistic evaluation, extending from earlier models like ISO/IEC 9126 by incorporating modern concerns such as security and compatibility.[5]
The Institute of Electrical and Electronics Engineers (IEEE) offers a complementary perspective through IEEE Std 1061-1998, IEEE Standard for a Software Quality Metrics Methodology, which views software quality as the degree to which software possesses a set of attributes that bear on its ability to satisfy stated and implied needs, focusing on both process and product aspects.[29] This standard outlines a methodology for establishing quality requirements, identifying relevant metrics, and validating them against organizational goals, prioritizing quantifiable attributes like correctness, reliability, and efficiency to ensure suitability for intended use.[29] Unlike ISO/IEC 25010's characteristic-based model, IEEE 1061 stresses iterative metric selection and analysis to align quality with project-specific needs, enabling organizations to
achieve measurable improvements in software development processes.[29] The American Society for Quality (ASQ) approaches software quality from a customer-centric angle, defining it as the degree to which a set of inherent characteristics of a software product or service fulfills customer requirements, leading to satisfaction when those needs are met consistently.[30] This perspective integrates quality assurance practices, such as systematic evaluation of adherence to standards and processes, to prevent defects and enhance overall product desirability.[30] ASQ's emphasis on desirable attributes, including reliability and usability, aligns with broader quality management principles but highlights the role of ongoing improvement in meeting user expectations.[31]
For federal systems in the United States, the National Institute of Standards and Technology (NIST) emphasizes measurable, verifiable attributes of software quality to ensure reliability, security, and interoperability, particularly in high-stakes environments like government operations.[32] NIST guidelines, such as those in Special Publication 800-53, incorporate quality controls that address attributes like availability, integrity, and compatibility with other systems, mandating documentation and testing to support federal procurement and deployment.[33] This focus on quantifiable metrics facilitates risk assessment and compliance, distinguishing NIST's approach by its regulatory orientation toward public sector interoperability and resilience.[32]
The Project Management Institute (PMI) integrates software quality into its broader project management framework via the PMBOK Guide—Seventh Edition (2021), defining quality management as a planned and systematic approach to ensuring that project deliverables, including software, conform to requirements and stakeholder expectations through defined processes.[34] This edition structures quality within eight performance domains, such as Measurement and Delivery,
emphasizing conformance to standards like ISO 9001 while incorporating principles like stewardship and value delivery to align software quality with organizational outcomes.[34] PMI's view underscores process conformance as a means to achieve predictable, high-value software products, differing from product-centric models by embedding quality in lifecycle management.[34]
These standards collectively highlight varying emphases: ISO/IEC 25010 on comprehensive product characteristics, IEEE on metrics-driven validation, ASQ on customer fulfillment, NIST on federal measurability and interoperability, and PMI on systematic project conformance, providing organizations with tailored lenses for software quality assurance.[5][29][30][32][34]
Historical Evolution and Controversies
The recognition of software quality as a distinct concern emerged in the early 1960s amid the escalating complexity of space programs, particularly NASA's Apollo missions, which exposed critical reliability issues in early software systems and prompted the formalization of software engineering practices to mitigate failures.[35] This push was exemplified by the development of onboard flight software under leaders like Margaret Hamilton, who advocated for error-handling mechanisms to ensure mission safety, marking a shift from ad-hoc coding to structured approaches.[36] By the late 1970s, the first structured model for software quality was introduced with McCall's quality factors framework in 1977, which categorized quality into factors like correctness, reliability, and efficiency, providing a hierarchical basis for evaluation and influencing subsequent models.[37]
The 1980s and 1990s saw a transition from informal practices to standardized frameworks, driven by the adoption of total quality management (TQM) principles in software development, inspired by W. Edwards Deming's emphasis on continuous improvement and process control originally from manufacturing.[38] This era culminated in the publication of ISO/IEC 9126 in 1991, an international standard that defined software quality through six characteristics—functionality, reliability, usability, efficiency, maintainability, and portability—aiming to provide a common vocabulary and metrics for assessment.[39] TQM's integration into software, as explored in studies applying Deming's cycle of plan-do-check-act, promoted defect prevention and customer focus, reducing variability in development processes.[40]
Entering the 2000s, the Agile Manifesto of 2001 disrupted traditional quality paradigms by prioritizing working software and customer collaboration over rigid documentation and contract negotiation, effectively challenging upfront quality gates in favor of iterative testing and feedback.
This shift aligned with emerging DevOps practices, yet sparked controversies over the tension between rapid delivery and thorough quality assurance, with surveys indicating that 63% of organizations deploy code without full testing to meet speed demands, leading to increased production defects.[41] Ongoing debates highlight the subjectivity of user satisfaction as a quality metric, where cultural differences—such as varying preferences for interface density in high-context versus low-context societies—affect usability perceptions and complicate universal standards.[42] Additionally, critics argue that models like ISO/IEC 25010 overemphasize non-functional attributes, potentially sidelining business value in dynamic environments, and prove too rigid for modern microservices architectures that require flexible scalability and interoperability metrics.[43]
This evolution continued into the 2020s with the November 2023 revision of ISO/IEC 25010, which refines the product quality model for contemporary challenges like AI and cloud computing while moving the quality-in-use model into the separate ISO/IEC 25019.[5] The evolution of these concepts drew from broader quality perspectives, such as David Garvin's 1984 framework outlining transcendent, product-based, user-based, manufacturing-based, and value-based views, which informed software-specific adaptations.
Quality Characteristics
Functional Suitability
Functional suitability refers to the degree to which a software product provides functions that meet stated and implied needs when used under specified conditions.[44] According to ISO/IEC 25010:2023, this quality characteristic encompasses the completeness, correctness, and appropriateness of the functions provided by the software.[5] The sub-characteristics of functional suitability are defined as follows:
- Functional completeness: The degree to which the set of functions covers all the specified tasks and user objectives, ensuring no requirements are omitted.[44]
- Functional correctness: The degree to which the software provides the correct results with the needed degree of precision for given inputs.[44]
- Functional appropriateness: The degree to which the functions facilitate the accomplishment of specified tasks and objectives in the intended context.[44]
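These sub-characteristics are typically verified through testing. As a minimal sketch of checking functional correctness — correct results with the needed degree of precision — consider the following Python fragment; the loan-payment function and its expected value are illustrative assumptions, not part of the standard:

```python
import math

def monthly_payment(principal, annual_rate, months):
    # Hypothetical function under test: fixed-rate loan amortization.
    r = annual_rate / 12
    return principal * r / (1 - (1 + r) ** -months)

# Functional correctness: the result must be right to the needed precision
# (here, to the cent) for a known input.
payment = monthly_payment(100_000, 0.06, 360)
assert math.isclose(payment, 599.55, abs_tol=0.01)
```

Functional completeness would correspondingly be assessed by tracing tests to every specified task, and functional appropriateness by confirming that the provided functions actually serve the user's objectives.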
Compatibility
Compatibility refers to the degree to which a product, system, or component can exchange information with other products, systems, or components, and/or perform its required functions, while sharing the same hardware or software environment. According to ISO/IEC 25010:2023, compatibility encompasses co-existence and interoperability.[44][5] The sub-characteristics of compatibility are defined as follows:
- Co-existence: The degree to which a product can perform its required functions while sharing an environment with other products without negative impact on any such product.[44]
- Interoperability: The degree to which two or more systems, products, or components can exchange information and use the information exchanged.[44]
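Interoperability can be sketched as two components that exchange information through a shared, standardized format (JSON here) and actually use what they receive; all names in this fragment are illustrative:

```python
import json

def exporter(readings):
    # Component A: serializes sensor readings into a JSON document.
    return json.dumps({"unit": "celsius", "values": readings})

def importer(document):
    # Component B: parses the document and uses the exchanged information.
    payload = json.loads(document)
    return max(payload["values"]) if payload["unit"] == "celsius" else None

# The two components interoperate: information is exchanged and used.
assert importer(exporter([18.5, 21.0, 19.2])) == 21.0
```

The shared format (and its agreed semantics, such as the unit field) is what makes the exchanged information usable, not merely transferable.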
Reliability
In software engineering, reliability refers to the degree to which a system, product, or component performs specified functions under specified conditions for a specified period of time.[47] According to ISO/IEC 25010:2023, this quality characteristic encompasses four subcharacteristics: faultlessness, which measures the frequency of failures and the system's ability to avoid them; availability, indicating the degree to which the system is operational and accessible when required; fault tolerance, reflecting the system's capacity to withstand specified faults or failures; and recoverability, assessing how quickly and completely the system can recover services after a failure.[5][44] These elements ensure consistent operation, minimizing disruptions in critical applications such as financial systems or medical devices.
A key metric for assessing reliability is Mean Time Between Failures (MTBF), which quantifies the predicted elapsed time between inherent failures during normal operation, providing a basis for evaluating system dependability.[48] Fault tolerance is often achieved through redundancy mechanisms, such as RAID (Redundant Array of Independent Disks) configurations in database systems, where data is duplicated across multiple drives to prevent loss from single disk failures.[49] For instance, in high-load scenarios, a lack of such redundancy can lead to crashes; the Twitter outage on December 15, 2021, stemmed from a bug in an internal tool that overloaded the system, acting as a single point of failure and disrupting service for hours.
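The MTBF metric, together with mean time to repair (MTTR), yields a standard estimate of steady-state availability. A small sketch with assumed figures (not drawn from any cited system):

```python
# Illustrative reliability figures (assumptions for the sketch).
operating_hours = 8760.0   # one year of operation
failures = 4               # failures observed in that period
mttr_hours = 2.0           # mean time to repair per failure

# MTBF = total operating time / number of failures.
mtbf = operating_hours / failures

# Steady-state availability = MTBF / (MTBF + MTTR).
availability = mtbf / (mtbf + mttr_hours)

print(f"MTBF = {mtbf:.0f} h, availability = {availability:.4%}")
```

With these numbers, MTBF is 2190 hours and availability is roughly 99.91%, i.e. about eight hours of downtime per year.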
Reliability is further enhanced by recovery mechanisms, like automated backups in cloud services, which enable rapid restoration of data and functionality following disruptions.[50] However, factors such as error-prone code patterns, including unhandled exceptions that propagate failures without mitigation, can undermine reliability by causing unexpected terminations.[51] Environmental stressors, such as hardware malfunctions or network instability, also influence reliability by introducing external variables that test the system's robustness under real-world conditions.[52]
Interaction Capability
Interaction capability in software quality refers to the degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.[5] This characteristic emphasizes the ease with which users can interact with software, encompassing aspects that make interfaces intuitive and accommodating to diverse user needs.
According to the ISO/IEC 25010:2023 standard, interaction capability is broken down into eight sub-characteristics: appropriateness recognizability, which assesses how easily users can identify if the software suits their needs; learnability, measuring the ease of acquiring proficiency in using the system; operability, evaluating the controllability and ease of operation; user error protection, which limits the impact of user errors and aids recovery; user engagement, focusing on the pleasing and engaging appearance of the interface; inclusivity, ensuring usability by people with varying abilities, such as disabilities; user assistance, providing support for users; and self-descriptiveness, where the system provides clear information about its state.[44] These sub-characteristics guide developers in creating software that minimizes cognitive load and maximizes user independence.
A foundational concept for evaluating interaction capability is Jakob Nielsen's 10 heuristics, introduced in 1994, which provide general principles for interface design and inspection.[53] These include visibility of system status, matching the system to the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, help users recognize and recover from errors, and help and documentation.[53] Widely adopted in heuristic evaluations, these rules help identify interaction issues early in development without extensive user testing.[19]
Inclusivity, a key sub-characteristic, is further supported by standards like the Web Content Accessibility Guidelines (WCAG) 2.2, published in 2023 by the World Wide Web Consortium (W3C).[54] WCAG 2.2 outlines success criteria across four principles—perceivable, operable, understandable, and robust—to ensure web content is accessible to people with disabilities, including provisions for text alternatives, keyboard navigation, and sufficient contrast.[54] This standard promotes inclusive design by addressing barriers for users with visual, auditory, motor, or cognitive impairments.
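The "sufficient contrast" provision can be checked mechanically. A sketch of the WCAG 2.x contrast-ratio computation — relative luminance of sRGB colors, with ratios ranging from 1:1 to 21:1, and level AA requiring at least 4.5:1 for normal-size text:

```python
def relative_luminance(rgb):
    # WCAG 2.x relative luminance for an sRGB color given as 0-255 channels.
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05).
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

# Black text on a white background reaches the maximum 21:1 ratio.
assert round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1) == 21.0
```

Automated checks like this are how accessibility tooling flags low-contrast text against the WCAG 2.2 AA threshold.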
Intuitive interfaces exemplify strong interaction capability; for instance, Apple's Human Interface Guidelines emphasize clarity, deference to content, and depth, resulting in designs that reduce learning curves and training time for users.[55] In contrast, complex enterprise resource planning (ERP) systems often suffer from poor interaction capability, leading to frequent user errors due to convoluted navigation and insufficient error protection.[56] Interaction capability supports functional suitability by enabling effective task completion through seamless interaction.[44]
In specific contexts, interaction capability gains importance; mobile applications prioritize operability through touch-friendly controls, such as gesture-based navigation, to enhance efficiency on small screens.[57] For aging populations, features like larger fonts (at least 16pt) and voice controls address declining vision and dexterity, improving learnability and inclusivity.[58]
Performance Efficiency
Performance efficiency is a key quality characteristic in software systems, defined as the performance of a product or system relative to the amount of resources used under stated conditions. This attribute ensures that software meets performance requirements while optimizing resource consumption, such as CPU, memory, and network bandwidth, to deliver acceptable responsiveness and throughput. In the ISO/IEC 25010:2023 standard for systems and software quality models, performance efficiency encompasses three primary sub-characteristics: time behavior, resource utilization, and capacity.[5][44]
Time behavior addresses the temporal aspects of software operation, including response time and processing speed. For instance, in user interfaces, response times should ideally remain below 1 second to maintain a sense of continuity in interaction, as delays exceeding this threshold can disrupt user flow and perceived performance.[59] Throughput, measured in transactions per second, quantifies how many operations the system can handle within a given timeframe, which is critical for high-volume applications like e-commerce platforms.
Resource utilization focuses on the efficient use of hardware and software resources to minimize waste. Inefficient resource management, such as memory leaks in long-running applications, can lead to gradual memory bloat, where unused objects accumulate and degrade performance over time, potentially causing system crashes or slowdowns.[60] Optimizing algorithms plays a vital role here; for example, implementing a merge sort with O(n log n) time complexity significantly outperforms a bubble sort's O(n^2) for large datasets, reducing CPU cycles and enabling scalability.
Capacity evaluates the maximum limits of the system under load, including its ability to scale with increasing demands like user growth.
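The algorithmic trade-off mentioned above is easy to demonstrate. In the sketch below, a quadratic bubble sort is compared against Python's built-in sort (Timsort, which is also O(n log n), standing in for the merge sort named in the text); the data size and timing harness are arbitrary choices:

```python
import random
import timeit

def bubble_sort(items):
    # O(n^2): repeatedly swap adjacent out-of-order pairs.
    a = list(items)
    for i in range(len(a) - 1, 0, -1):
        for j in range(i):
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return a

data = random.sample(range(100_000), 2_000)
assert bubble_sort(data) == sorted(data)  # identical results...

# ...at very different cost: the O(n log n) sort scales far better as n grows.
slow = timeit.timeit(lambda: bubble_sort(data), number=1)
fast = timeit.timeit(lambda: sorted(data), number=1)
print(f"bubble sort: {slow:.3f}s  built-in sort: {fast:.5f}s")
```

Even at a modest n of 2,000, the quadratic sort is orders of magnitude slower, and the gap widens rapidly with input size.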
Scalable software designs, such as those using load balancing, allow systems to handle higher concurrency without proportional resource increases, ensuring sustained performance as usage expands.[44]
Balancing performance efficiency often involves trade-offs with other quality attributes; for example, incorporating data encryption to enhance security can introduce CPU overhead of 3-30% depending on the algorithm and workload, necessitating careful optimization to avoid compromising responsiveness.[61] Under extreme performance stress, inefficiencies may also exacerbate reliability issues, such as increased failure rates during peak loads.[62]
Security
In software quality, security refers to the capability of a software product to protect information and data so that unauthorized access, use, disclosure, disruption, modification, or destruction is prevented. This attribute is critical for ensuring the trustworthiness of systems handling sensitive data, such as financial transactions or personal information. According to ISO/IEC 25010:2023, security encompasses six subcharacteristics: confidentiality, which ensures that information is accessible only to authorized entities; integrity, which protects data from unauthorized modification or destruction; non-repudiation, which provides proof of actions or events to prevent denial; accountability, which traces actions to specific entities; authenticity, which verifies the identity of entities and data origins; and resistance, which protects against identified threats.[5][44]
Key practices for achieving software security include threat modeling, which systematically identifies potential threats and vulnerabilities during design. The STRIDE model, developed by Microsoft in 1999, categorizes threats into six types: spoofing (impersonation), tampering (unauthorized alteration), repudiation (denial of actions), information disclosure (unauthorized exposure), denial of service (disruption of availability), and elevation of privilege (gaining higher access levels). This model aids developers in proactively addressing risks by mapping threats to system components.
Common vulnerabilities, such as SQL injection, remain prevalent; the OWASP Top 10 for 2025 ranks injection attacks at #5, where untrusted input manipulates database queries, potentially leading to data breaches.[63] Illustrative examples highlight the consequences of security lapses.
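As a concrete illustration of the injection risk, the sketch below contrasts string concatenation with a parameterized query, using Python's sqlite3 module and a hypothetical users table:

```python
import sqlite3

# Hypothetical schema for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable pattern: untrusted input concatenated into the query text
# lets the injected OR clause match every row.
leaked = conn.execute(
    f"SELECT name FROM users WHERE name = '{user_input}'"
).fetchall()
assert len(leaked) == 2  # both rows exposed

# Safe pattern: a parameterized query treats the input as data, not SQL.
safe = conn.execute(
    "SELECT name FROM users WHERE name = ?", (user_input,)
).fetchall()
assert safe == []  # no user is literally named "alice' OR '1'='1"
```

The parameterized form passes the untrusted value through a placeholder, so the database never interprets it as part of the SQL statement.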
The Heartbleed bug, discovered in 2014, was a buffer over-read vulnerability in the OpenSSL cryptography library that allowed attackers to read up to 64 kilobytes of sensitive memory, including private keys and user credentials, affecting millions of websites. Secure design principles, such as the principle of least privilege, mitigate such risks by granting users, processes, or systems only the minimum permissions necessary to perform their functions, thereby limiting potential damage from compromises.[64][65] Evolving threats underscore the need for adaptive security measures. Zero-day exploits target previously unknown vulnerabilities before patches are available, exploiting systems with no prior defenses and often causing widespread damage. Supply chain attacks, like the 2020 SolarWinds incident, involved malware inserted into software updates, compromising thousands of organizations including U.S. government agencies. To counter these, DevSecOps integrates security practices into the DevOps pipeline, automating threat detection and compliance checks throughout development, deployment, and operations for continuous protection.[66][67][68]

Maintainability
Maintainability refers to the ease with which software can be modified to correct faults, improve performance, or adapt to a changed environment, encompassing attributes that facilitate ongoing development and evolution. According to the ISO/IEC 25010:2023 standard, maintainability is a key product quality characteristic subdivided into five subcharacteristics: modularity, reusability, analysability, modifiability, and testability. Modularity involves the degree to which software is composed of discrete, independent components, allowing changes in one part without affecting others; reusability measures how readily components can be used in other systems or contexts; analysability assesses the ease of diagnosing deficiencies or causes of failures; modifiability evaluates the effort required for changes; and testability gauges the ease of verifying modifications.[5][44] A foundational metric for assessing analysability is cyclomatic complexity, introduced by Thomas J. McCabe in 1976 as a graph-theoretic measure of the number of linearly independent paths through program source code, where higher values indicate greater complexity and potential maintenance challenges. For instance, code with cyclomatic complexity exceeding 10 is often considered risky for maintainability due to the difficulty of understanding and modifying its control flow. To enhance reusability, design patterns provide proven solutions to common problems, promoting modular and extensible architectures; the Singleton pattern, as described in the seminal work by Gamma et al., ensures a class has only one instance while providing global access, facilitating reuse in scenarios like resource management without redundant initialization.[69][70] In practice, legacy systems with "spaghetti code", characterized by tangled, unstructured flows of control, severely hinder updates, as modifications risk unintended side effects across interconnected routines, leading to prolonged debugging times.
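McCabe's metric can be approximated by counting decision points in a parse tree. The sketch below uses Python's ast module with a deliberately simplified rule (one plus the number of branching nodes); production analyzers refine the set of counted constructs:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe's metric as 1 + the number of decision points.

    A simplified sketch: real tools also weigh boolean sub-expressions,
    comprehension conditions, and other branch-like constructs differently.
    """
    tree = ast.parse(source)
    decisions = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)
    return 1 + sum(isinstance(node, decisions) for node in ast.walk(tree))

code = """
def classify(n):
    if n < 0:
        return "negative"
    elif n == 0:
        return "zero"
    for d in range(2, n):
        if n % d == 0:
            return "composite"
    return "prime or one"
"""
# Two if tests, one elif (a nested If node), and one loop: 4 decisions + 1.
print(cyclomatic_complexity(code))  # 5
```

A straight-line function with no branches scores the minimum of 1; each added branch or loop raises the count, which is why deeply nested control flow quickly crosses the conventional threshold of 10.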
In contrast, modular microservices architectures decompose applications into independent services, improving maintainability by enabling isolated updates and scaling, as each service can be developed, deployed, and maintained separately without impacting the whole. Technical debt, a metaphor coined by Ward Cunningham in 1992 to describe the implied costs of suboptimal design choices, often accumulates from rushed development, where shortcuts like duplicated code or inadequate refactoring compromise long-term modifiability and increase the effort needed for future enhancements.[71][72][73] Best practices for bolstering maintainability include rigorous code reviews, which systematically examine changes to enforce standards, detect issues early, and promote knowledge sharing among developers, thereby reducing defects that affect analysability and modifiability. Additionally, adhering to documentation standards such as those supported by Doxygen, a tool that generates structured documentation from source code comments, ensures that code intent, interfaces, and dependencies are clearly articulated, aiding reusability and future modifications. These practices, when integrated into development workflows, help mitigate the risks associated with evolving software systems. Maintainability also overlaps with flexibility: code that is easy to modify is easier to adapt across different hardware and software environments.[74][75]

Flexibility
Flexibility refers to the degree to which a software product can be adapted to changes in its requirements, contexts of use, or system environment, including transfer from one hardware or software environment to another with minimal modification, so that it operates effectively across diverse computing platforms. In the ISO/IEC 25010:2023 standard for systems and software quality models, flexibility is defined as one of the core quality characteristics, replacing the portability characteristic of earlier editions and encompassing four key sub-characteristics: adaptability, installability, replaceability, and scalability.[5] Adaptability measures the degree to which a software system can be modified for different or evolving hardware, software, or operational environments without substantial redesign.[44] Installability assesses how easily the software can be installed or uninstalled in a target environment, including handling dependencies and configurations.[44] Replaceability evaluates the extent to which the software can supplant another product in an existing environment while maintaining functionality and integration.[44] Scalability measures the system's ability to handle growing amounts of work or to be enlarged to meet increased demands. A critical aspect of flexibility involves ensuring compatibility with varying hardware and operating systems, often achieved through techniques like cross-compilation, which enables code to be built on one platform (the host) for execution on another (the target).[76] For instance, cross-compilation supports deployment across architectures such as x86 to ARM, reducing the need for separate development environments. Virtualization technologies further enhance flexibility by abstracting dependencies; Docker, introduced in 2013, uses containerization to package applications with their runtime environments, allowing consistent execution regardless of the underlying infrastructure.[77] This approach mitigates issues arising from OS-specific libraries or configurations, promoting seamless transfers between cloud, on-premises, and hybrid setups.
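Among the hardware differences that deployment across architectures such as x86 and ARM must absorb is byte order. Python's struct module makes the distinction explicit; this small sketch shows how the same integer serializes differently and how misreading the order corrupts the value:

```python
import struct

value = 0x01020304
big = struct.pack(">I", value)     # big-endian: most significant byte first
little = struct.pack("<I", value)  # little-endian, as on x86 processors

print(big.hex())     # 01020304
print(little.hex())  # 04030201

# Interpreting little-endian bytes as big-endian silently corrupts the value.
misread = struct.unpack(">I", little)[0]
print(hex(misread))  # 0x4030201
```

Portable code therefore fixes a canonical byte order at its interfaces (network protocols conventionally use big-endian) rather than relying on whatever the host architecture happens to use.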
Illustrative examples highlight flexibility's practical implications. Java exemplifies high flexibility through its "write once, run anywhere" paradigm, enabled by the Java Virtual Machine (JVM), which interprets bytecode on any compatible platform without recompilation.[78] However, porting desktop applications to mobile devices often encounters hurdles, such as UI scaling problems where interfaces designed for larger screens fail to adapt to smaller, touch-based displays, requiring responsive redesigns to maintain interaction capability.[79] Despite these advancements, challenges persist in achieving full flexibility. Reliance on platform-specific APIs, which differ between operating systems like Windows and Linux, can necessitate code rewrites or abstraction layers to avoid vendor lock-in.[80] Additionally, endianness, the byte order of multi-byte data types, creates flexibility issues when transferring software between big-endian (e.g., some network protocols) and little-endian (e.g., x86 processors) systems, potentially leading to data corruption if not explicitly handled. Flexible interfaces also relate to interaction capability, for example through intuitive adaptation to the varied input methods encountered across environments.

Safety
Safety refers to the degree to which a product, system, or component contributes to its safe operation with respect to any risks of harm to people, assets, or the environment. According to ISO/IEC 25010:2023, safety is a product quality characteristic introduced in the 2023 revision, with five sub-characteristics: operational constraint, risk identification, fail safe, hazard warning, and safe integration.[5][44] The sub-characteristics of safety are defined as follows:
- Operational constraint: The degree to which the system imposes constraints on its operation to ensure safe use.
- Risk identification: The degree to which risks to safety are identified and documented.
- Fail safe: The degree to which the system can enter a safe state upon failure.
- Hazard warning: The degree to which the system provides warnings of potential hazards.
- Safe integration: The degree to which the system can be safely integrated with other systems.
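The fail-safe sub-characteristic can be illustrated with a minimal sketch (all names here are hypothetical): a controller that forces its actuator into a de-energized safe state whenever the control loop fails, before propagating the error:

```python
class HeaterController:
    """Illustrative fail-safe sketch: any error in the control loop
    forces the heater off (the safe state) before re-raising."""

    def __init__(self):
        self.heater_on = False

    def enter_safe_state(self):
        self.heater_on = False

    def control_step(self, read_temperature):
        try:
            # Simple bang-bang control around a 20.0 degree setpoint.
            self.heater_on = read_temperature() < 20.0
        except Exception:
            self.enter_safe_state()  # fail safe: de-energize on any failure
            raise

ctrl = HeaterController()
ctrl.control_step(lambda: 15.0)
print(ctrl.heater_on)  # True: below setpoint, heater energized

def broken_sensor():
    raise IOError("sensor offline")

try:
    ctrl.control_step(broken_sensor)
except IOError:
    pass
print(ctrl.heater_on)  # False: the sensor failure forced the safe state
```

The key design choice is that the safe state is reached on the failure path itself, not left to later cleanup: whatever goes wrong in the loop, the actuator ends up de-energized.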