
Software quality

Software quality refers to the degree to which a software product or system meets specified requirements, satisfies stated and implied needs of its stakeholders, and provides value through desirable attributes such as functionality, reliability, and usability. It encompasses both the inherent characteristics of the software itself—evaluating how well it conforms to functional requirements and to non-functional demands such as security and performance—and the processes used to develop and maintain it, ensuring consistency, defect reduction, and alignment with stakeholder expectations. A foundational framework for assessing software quality is provided by international standards, notably ISO/IEC 25010:2023, which organizes quality into two primary categories: product quality and quality in use. Product quality includes nine key characteristics: functional suitability (degree to which the software provides functions that meet stated and implied needs), performance efficiency (performance relative to the resources used under stated conditions), compatibility (ability to exchange information with other products), interaction capability (formerly usability; ease of understanding, learning, and use), reliability (ability to perform under specified conditions), security (protection of information and data), maintainability (ease of modification), flexibility (formerly portability; ability to transfer to and adapt across different environments), and safety (degree to which a product or system mitigates the potential for harm to its users or other stakeholders). Quality in use, on the other hand, focuses on the software's effectiveness in a specific context, covering effectiveness (accuracy and completeness of tasks), efficiency (resource use relative to results), satisfaction (user comfort and acceptability), freedom from risk (minimizing potential harm), and context coverage (use across different environments). Software quality is integral to software engineering practice, influencing project success, cost control, and user satisfaction; poor quality can lead to failures in critical systems, while high quality supports sustained value delivery and long-term viability. Measurement and assurance are achieved through methodologies like those in IEEE Std 730-2014, which outlines processes for planning, reviewing, and auditing to ensure compliance with quality requirements, often involving metrics for defect density, test coverage, and maintainability indices. Emerging standards, such as ISO/IEC 5055:2021 from the Consortium for IT Software Quality (CISQ), supplement these by providing automated measures for structural attributes like reliability and security, adapting to the challenges of modern cloud-based and AI-driven software.

Motivation

Business and Economic Drivers

Poor software quality imposes significant direct and indirect costs on organizations, including substantial rework efforts that can consume 40-50% of development budgets, as outlined in Boehm and Basili's analysis of software defect reduction strategies. These costs escalate when defects are discovered late in the development cycle, amplifying expenses through repeated testing and fixes. Liability from software failures further compounds financial risks; for instance, in 2012, Knight Capital Group suffered a $440 million loss in less than an hour due to a software defect in its automated trading system, nearly leading to the firm's collapse and subsequent acquisition. Opportunity costs also arise from lost business, as unreliable software erodes customer trust and allows competitors to capture market share. More recent incidents underscore these risks. On July 19, 2024, a faulty update to CrowdStrike's Falcon sensor software caused a global IT outage affecting approximately 8.5 million Windows devices, grounding flights, disrupting hospitals and banks, and resulting in estimated direct losses of $5.4 billion for Fortune 500 companies, with broader economic impacts estimated at over $10 billion. This event highlighted vulnerabilities in update deployment processes and the cascading effects of untested changes in widely used security software.

Economic models provide frameworks for quantifying these impacts and justifying investments in quality. The Cost of Quality (CoQ) framework, popularized by Philip Crosby in 1979, categorizes costs into prevention (planning and training to avoid defects), appraisal (inspections and testing), and failure (internal rework and external liabilities), emphasizing that proactive measures reduce overall expenses by minimizing nonconformance. This model highlights how poor quality can account for 20-40% of sales revenue in manufacturing and software contexts, underscoring the need for organizations to track and optimize these cost categories for profitability. Investing in quality practices yields a strong return on investment by lowering long-term expenses and accelerating business value delivery. Maintenance activities, which often comprise 60-80% of a software system's total lifecycle costs, can be significantly reduced through early defect prevention and robust testing, freeing resources for new development. Integrating quality assurance into agile methodologies further enhances ROI by enabling faster time-to-market—up to 50% quicker delivery cycles—while maintaining reliability, as teams iteratively incorporate testing to minimize defects and support rapid releases. Reliability, as a core quality attribute, directly influences these economic outcomes by mitigating the risks of costly outages and regulatory penalties.

Real-world examples illustrate the severe economic consequences of quality lapses. The Therac-25 radiation therapy machine, produced by Atomic Energy of Canada Limited (AECL), delivered software-controlled radiation overdoses between 1985 and 1987, resulting in patient injuries and deaths that prompted machine recalls, extensive redesigns, and multimillion-dollar settlements, including at least $1 million per affected institution to replace faulty units. These incidents not only incurred direct legal and remediation costs but also damaged the manufacturer's reputation, leading to lost contracts and heightened scrutiny in the medical device industry.

User and Societal Impacts

Poor software quality often manifests in usability defects that cause frustration and errors, directly impacting daily interactions with digital technology. For instance, violations of established usability principles, such as inconsistent interfaces or a lack of feedback and error prevention, can lead to repeated mistakes and heightened stress during task completion. Studies have shown that users experience frustration in approximately 11% of their interactions with digital systems, primarily due to implementation flaws like bugs and poor error handling, which exacerbate confusion and hinder effective use. These issues not only diminish user satisfaction but also result in productivity losses through wasted time on recovery and workarounds, particularly in high-stakes environments like professional workflows. The 2024 CrowdStrike outage exemplified these user impacts, stranding travelers at airports worldwide due to grounded flights, delaying medical procedures in healthcare facilities, and causing widespread disruptions in banking and emergency services, leading to hours of downtime and heightened user stress from unreliable digital services.

In safety-critical domains, software defects can have catastrophic human consequences, underscoring the life-threatening stakes of quality failures. The 1996 Ariane 5 rocket launch failure, caused by an integer overflow in the inertial reference system software—a remnant of unadapted code from the Ariane 4—triggered the vehicle's self-destruction shortly after liftoff, resulting in the loss of the payload and endangering ground operations. Similarly, a software timing error in a U.S. Patriot missile defense battery in 1991, stemming from a 24-bit fixed-point approximation of time that accumulated a 0.34-second discrepancy after extended operation, caused the system to fail to intercept an incoming Scud missile during the Gulf War, contributing to the deaths of 28 U.S. soldiers in the barracks strike at Dhahran, Saudi Arabia. Such incidents highlight how subtle quality lapses in safety-critical systems can amplify into disasters, eroding confidence in automated safety mechanisms.

Broader societal repercussions arise from quality deficiencies that enable privacy invasions and perpetuate inequities through flawed algorithms. The 2017 Equifax data breach, exploiting an unpatched vulnerability (CVE-2017-5638) in Apache Struts software, exposed sensitive personal information of 147 million individuals, leading to widespread identity theft risks and long-term harm to victims' financial security. In artificial intelligence applications, low-quality training data riddled with biases can amplify discriminatory outcomes, as models trained on incomplete or skewed datasets reinforce societal prejudices, such as racial or gender disparities in decision-making tools for hiring or lending. These ethical pitfalls extend to the erosion of public trust when software failures in essential services undermine reliability. Conversely, high software quality fosters societal resilience by building enduring trust in digital ecosystems, particularly in vital sectors like healthcare. Robust electronic health record (EHR) systems, when engineered with strong reliability and usability attributes, enable accurate documentation and timely interventions, enhancing patient outcomes and clinician efficiency without compromising patient privacy. For example, well-designed EHRs reduce errors in medical decision-making and support seamless care coordination, thereby strengthening public faith in digital health systems as a cornerstone of modern infrastructure.

Definitions

Core Concepts

Software quality is fundamentally defined as conformance to explicitly stated functional and performance requirements, explicitly documented development standards, and implicit characteristics that stakeholders assume to be self-evident. This definition emphasizes that high-quality software not only satisfies specified needs but also avoids significant defects, ensuring reliability and utility in real-world applications. A foundational framework for understanding quality, adaptable to software, comes from David A. Garvin's five perspectives outlined in 1984. The transcendental perspective views quality as an inherent excellence that is recognizable but difficult to define precisely, often described intuitively in software as "elegant" code or intuitive interfaces. The product-based perspective focuses on quantifiable attributes, such as lines of code efficiency or error rates in software systems. The user-based perspective defines quality as fitness for use, prioritizing how well the software meets end-user expectations in practical scenarios. The manufacturing-based perspective stresses conformance to design specifications, ensuring the implemented software matches its planned architecture. Finally, the value-based perspective balances quality against cost, evaluating software based on its benefits relative to development and maintenance expenses. These perspectives highlight the multifaceted nature of software quality, bridging philosophical, technical, and economic viewpoints. Barry Boehm's 1976 software quality model initially conceptualized quality as a function inversely proportional to the density of defects, where higher quality corresponds to fewer faults per unit of code. However, Boehm's work evolved to recognize quality as a multifaceted attribute, incorporating portability, reliability, efficiency, usability, and other characteristics that extend beyond mere defect absence. This model laid the groundwork for hierarchical quality evaluation, influencing later assessments by providing a structured way to quantify and balance multiple quality factors in software engineering. A key distinction in software quality lies between quality of design and quality of conformance. Quality of design refers to the planned attributes embedded in requirements, specifications, and system architecture, determining the potential excellence of the software product. In contrast, quality of conformance measures how accurately the implemented software matches this design, focusing on implementation fidelity and defect prevention during development and testing. These concepts underscore that superior design sets the foundation, but rigorous conformance ensures the software realizes its intended quality in practice. International standards, such as those from ISO/IEC, build on these core ideas by providing formalized frameworks for assessing and improving software quality attributes.

Standards and Organizational Perspectives

Formal standards bodies and professional organizations have developed structured frameworks to define and evaluate software quality, providing benchmarks for consistency across industries. The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC), in their ISO/IEC 25010:2023 standard, titled Systems and software engineering—Systems and software Quality Requirements and Evaluation (SQuaRE)—Product quality model, define software product quality as the degree to which a software product satisfies the stated and implied needs of its various stakeholders, thereby providing value to them. This model organizes quality into nine primary characteristics—functional suitability, performance efficiency, compatibility, interaction capability, reliability, security, maintainability, flexibility, and safety—each subdivided into sub-characteristics that address static properties of the software and its dynamic behavior in use. These characteristics emphasize a holistic view of product quality, extending earlier models like ISO/IEC 9126 by incorporating modern concerns such as security and compatibility. The Institute of Electrical and Electronics Engineers (IEEE) offers a complementary perspective through IEEE Std 1061-1998, IEEE Standard for a Software Quality Metrics Methodology, which views software quality as the degree to which software possesses a desired set of attributes that bear on its ability to satisfy stated and implied needs, addressing both process and product aspects. This standard outlines a methodology for establishing quality requirements, identifying relevant metrics, and validating them against organizational goals, prioritizing quantifiable attributes like correctness, reliability, and efficiency to ensure suitability for intended use. Unlike ISO/IEC 25010's characteristic-based model, IEEE 1061 stresses iterative metric selection and analysis to align quality with project-specific needs, enabling organizations to achieve measurable improvements in software development processes. The American Society for Quality (ASQ) approaches software quality from a customer-centric angle, defining it as the degree to which a set of inherent characteristics of a software product or service fulfills customer requirements, leading to satisfaction when those needs are met consistently. This perspective integrates quality assurance practices, such as systematic evaluation of adherence to standards and processes, to prevent defects and enhance overall product desirability. ASQ's emphasis on desirable attributes, including reliability and usability, aligns with broader quality management principles but highlights the role of ongoing improvement in meeting user expectations. For federal systems in the United States, the National Institute of Standards and Technology (NIST) emphasizes measurable, verifiable attributes of software quality to ensure reliability, security, and interoperability, particularly in high-stakes environments like government operations. NIST guidelines, such as those in Special Publication 800-53, incorporate quality controls that address attributes like correctness, testability, and compatibility with other systems, mandating documentation and testing to support federal procurement and deployment. This focus on quantifiable metrics facilitates risk assessment and compliance, distinguishing NIST's approach by its regulatory orientation toward security and privacy. The Project Management Institute (PMI) integrates software quality into its broader project management framework via the PMBOK Guide—Seventh Edition (2021), defining quality management as a planned and systematic approach to ensuring that project deliverables, including software, conform to requirements and stakeholder expectations through defined processes.
This edition structures quality within eight performance domains, such as delivery and measurement, emphasizing conformance to standards like ISO 9001 while incorporating principles such as continuous improvement and value delivery to align software quality with organizational outcomes. PMI's view underscores process conformance as a means to achieve predictable, high-value software products, differing from product-centric models by embedding quality in lifecycle management. These standards collectively highlight varying emphases: ISO/IEC 25010 on comprehensive product characteristics, IEEE on metrics-driven validation, ASQ on customer fulfillment, NIST on federal measurability and compliance, and PMI on systematic project conformance, providing organizations with tailored lenses for quality management.

Historical Evolution and Controversies

The recognition of software quality as a distinct concern emerged in the 1960s amid the escalating complexity of space programs, particularly NASA's Apollo missions, which exposed critical reliability issues in early software systems and prompted the formalization of quality assurance practices to mitigate failures. This push was exemplified by the development of the Apollo onboard flight software under leaders such as Margaret Hamilton, who advocated for rigorous error-handling mechanisms to ensure mission safety, marking a shift from ad-hoc coding to structured approaches. By the late 1970s, the first structured model for software quality was introduced with McCall's quality factors framework in 1977, which categorized quality into factors like correctness, reliability, and efficiency, providing a hierarchical basis for evaluation and influencing subsequent models.

The 1980s and 1990s saw a transition from informal practices to standardized frameworks, driven by the adoption of Total Quality Management (TQM) principles in software engineering, inspired by W. Edwards Deming's emphasis on continuous improvement and process control originally developed for manufacturing. This era culminated in the publication of ISO/IEC 9126 in 1991, an international standard that defined software quality through six characteristics—functionality, reliability, usability, efficiency, maintainability, and portability—aiming to provide a common vocabulary and metrics for assessment. TQM's integration into software, as explored in studies applying Deming's plan-do-check-act cycle, promoted defect prevention and customer focus, reducing variability in development processes.

Entering the 2000s, the Agile Manifesto of 2001 disrupted traditional quality paradigms by prioritizing working software and customer collaboration over rigid documentation and contract negotiation, effectively challenging upfront quality gates in favor of iterative testing and feedback. This shift aligned with emerging continuous-delivery practices, yet sparked controversies over the tension between rapid delivery and thorough quality assurance, with surveys indicating that 63% of organizations deploy code without full testing to meet speed demands, leading to increased production defects. Ongoing debates highlight the subjectivity of user satisfaction as a quality indicator, where cultural differences—such as varying preferences for interface density in high-context versus low-context societies—affect perceptions and complicate universal standards. Additionally, critics argue that models like ISO/IEC 25010 overemphasize non-functional attributes, potentially sidelining rapid functional delivery in dynamic environments, and prove too rigid for modern architectures that require flexible, continuously evolving designs. This evolution continued into the 2020s with the November 2023 revision of ISO/IEC 25010, which refines the product quality model for contemporary challenges such as AI-based systems while moving usage aspects to ISO/IEC 25002. The evolution of these concepts drew from broader quality perspectives, such as David Garvin's 1984 framework outlining transcendent, product-based, user-based, manufacturing-based, and value-based views, which informed software-specific adaptations.

Quality Characteristics

Functional Suitability

Functional suitability refers to the degree to which a software product provides functions that meet stated and implied needs when used under specified conditions. According to ISO/IEC 25010:2023, this quality characteristic encompasses the completeness, correctness, and appropriateness of the functions provided by the software. The sub-characteristics of functional suitability are defined as follows:
  • Functional completeness: The degree to which the set of functions covers all the specified tasks and user objectives, ensuring no requirements are omitted.
  • Functional correctness: The degree to which the software provides the correct results with the needed degree of precision for given inputs.
  • Functional appropriateness: The degree to which the functions facilitate the accomplishment of specified tasks and objectives in the intended context.
In practice, functional suitability manifests in scenarios such as e-commerce software, where completeness requires implementation of all specified payment methods without omissions, while correctness ensures accurate transaction processing for various inputs. Defects like missing edge cases in input validation logic exemplify failures in correctness, leading to incorrect outputs such as unhandled invalid data. Measurement indicators for functional suitability include the percentage coverage in a requirements traceability matrix, which assesses completeness by linking requirements to implemented functions, and the functional test pass rate, which evaluates correctness through the proportion of tests yielding accurate results. Benchmarks often target 95% or higher for these indicators to ensure high suitability, as demonstrated in evaluations of web applications. Functional suitability relates to interaction capability by providing the core functions that users must effectively interact with to achieve their goals.
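The indicators above can be computed directly from project artifacts. The following sketch, using hypothetical requirement IDs, traceability links, and test results, shows how traceability coverage and the functional test pass rate might be calculated and compared against a 95% benchmark.

```python
# Minimal sketch: functional-suitability indicators from hypothetical project data.
# The requirement IDs, function names, and thresholds are illustrative, not from any standard.

requirements = {"REQ-1", "REQ-2", "REQ-3", "REQ-4", "REQ-5"}

# Traceability matrix: requirement -> implemented functions that cover it
traceability = {"REQ-1": ["pay_by_card"], "REQ-2": ["pay_by_wallet"],
                "REQ-3": ["refund"], "REQ-4": []}          # REQ-5 is not traced at all

test_results = {"test_card_payment": True, "test_wallet_payment": True,
                "test_refund_rounding": False, "test_invalid_input": True}

covered = {req for req, funcs in traceability.items() if funcs}
completeness = len(covered) / len(requirements) * 100            # functional completeness proxy
pass_rate = sum(test_results.values()) / len(test_results) * 100  # functional correctness proxy

print(f"Traceability coverage: {completeness:.0f}%")    # 60%
print(f"Functional test pass rate: {pass_rate:.0f}%")   # 75%
print("Meets 95% benchmark:", completeness >= 95 and pass_rate >= 95)
```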

Compatibility

Compatibility refers to the degree to which a product, system, or component can exchange information with other products, systems, or components, and/or perform its required functions, while sharing the same hardware or software environment. According to ISO/IEC 25010:2023, compatibility encompasses co-existence and interoperability. The sub-characteristics of compatibility are defined as follows:
  • Co-existence: The degree to which a product can perform its required functions while sharing an environment with other products without negative impact on any such product.
  • Interoperability: The degree to which two or more systems, products, or components can exchange information and use the information exchanged.
In practice, interoperability is essential for integrated systems, such as enabling seamless data exchange between applications that share common data formats and interfaces. A lack of compatibility can lead to integration failures, as seen in legacy system migrations that require custom adapters.

Reliability

In software engineering, reliability refers to the degree to which a system, product, or component performs specified functions under specified conditions for a specified period of time. According to ISO/IEC 25010:2023, this quality characteristic encompasses four subcharacteristics: faultlessness, which measures the frequency of failures and the system's ability to avoid them; availability, indicating the degree to which the system is operational and accessible when required; fault tolerance, reflecting the system's capacity to operate as intended despite specified faults or failures; and recoverability, assessing how quickly and completely the system can recover data and restore services after a failure. These elements ensure consistent operation, minimizing disruptions in critical applications such as financial systems or medical devices. A key metric for assessing reliability is mean time between failures (MTBF), which quantifies the predicted elapsed time between inherent failures during normal operation, providing a basis for evaluating system dependability. Fault tolerance is often achieved through redundancy mechanisms, such as RAID (Redundant Array of Independent Disks) configurations in database systems, where data is duplicated across multiple drives to prevent loss from single disk failures. For instance, in high-load scenarios, a lack of such redundancy can lead to crashes; a major outage on December 15, 2021, stemmed from a bug in an internal tool that overloaded the system, acting as a single point of failure and disrupting service for hours. Reliability is further enhanced by recovery mechanisms, like automated backups in cloud services, which enable rapid restoration of data and functionality following disruptions. However, factors such as error-prone code patterns, including unhandled exceptions that propagate failures without mitigation, can undermine reliability by causing unexpected terminations. Environmental stressors, such as hardware malfunctions or network instability, also influence reliability by introducing external variables that test the system's robustness under real-world conditions.
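As a rough illustration of how MTBF and related reliability figures are derived from operational data, the sketch below uses hypothetical counts of operating hours, failures, and repair time; the availability formula MTBF / (MTBF + MTTR) is the standard steady-state approximation.

```python
# Minimal sketch: basic reliability indicators from hypothetical operations data.
# All figures are illustrative assumptions.

operating_hours = 4_380.0      # roughly half a year of continuous operation
failures = 3                   # observed failures in that window
total_repair_hours = 6.0       # cumulative downtime spent restoring service

mtbf = operating_hours / failures            # mean time between failures (hours)
mttr = total_repair_hours / failures         # mean time to repair (hours)
availability = mtbf / (mtbf + mttr)          # steady-state availability

print(f"MTBF: {mtbf:.0f} h")                 # 1460 h
print(f"MTTR: {mttr:.1f} h")                 # 2.0 h
print(f"Availability: {availability:.4%}")   # about 99.86%
```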

Interaction Capability

Interaction capability in software quality refers to the degree to which a product or system can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use. This characteristic emphasizes the ease with which users can interact with software, encompassing aspects that make interfaces intuitive and accommodating to diverse user needs. According to the ISO/IEC 25010:2023 standard, interaction capability is broken down into eight sub-characteristics: appropriateness recognizability, which assesses how easily users can identify whether the software suits their needs; learnability, measuring the ease of acquiring proficiency in using the system; operability, evaluating the ease of operation and control; user error protection, which limits the impact of user errors and aids recovery; user engagement, focusing on a pleasing and engaging presentation of the user interface; inclusivity, ensuring usability by people with varying abilities, such as disabilities; user assistance, providing support for users as they work toward their goals; and self-descriptiveness, where the system provides clear information about its state and appropriate use. These sub-characteristics guide developers in creating software that minimizes user frustration and maximizes user independence. A foundational concept for evaluating interaction capability is Jakob Nielsen's set of 10 usability heuristics, introduced in 1994, which provide general principles for interface design and inspection. These include visibility of system status, match between the system and the real world, user control and freedom, consistency and standards, error prevention, recognition rather than recall, flexibility and efficiency of use, aesthetic and minimalist design, helping users recognize, diagnose, and recover from errors, and help and documentation. Widely adopted in heuristic evaluations, these rules help identify interaction issues early in development without extensive testing. Inclusivity, a key sub-characteristic, is further supported by standards like the Web Content Accessibility Guidelines (WCAG) 2.2, published in 2023 by the World Wide Web Consortium (W3C). WCAG 2.2 outlines success criteria across four principles—perceivable, operable, understandable, and robust—to ensure web content is accessible to people with disabilities, including provisions for text alternatives, keyboard navigation, and sufficient contrast. This standard promotes inclusive design by addressing barriers for users with visual, auditory, motor, or cognitive impairments. Intuitive interfaces exemplify strong interaction capability; for instance, Apple's Human Interface Guidelines emphasize clarity, deference to content, and depth, resulting in designs that reduce learning curves and training time for users. In contrast, complex enterprise resource planning (ERP) systems often suffer from poor interaction capability, leading to frequent user errors due to convoluted navigation and insufficient error protection. Interaction capability supports functional suitability by enabling effective task completion through seamless interaction. In specific contexts, interaction capability gains importance; mobile applications prioritize operability through touch-friendly controls, such as gesture-based navigation, to enhance usability on small screens. For aging populations, features like larger fonts (at least 16pt) and voice controls address declining vision and dexterity, improving learnability and inclusivity.
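One measurable aspect of inclusivity is color contrast. The sketch below implements the WCAG relative-luminance and contrast-ratio formulas to check a foreground/background pair against the 4.5:1 threshold for normal text; the specific colors are arbitrary examples.

```python
# Minimal sketch: WCAG contrast-ratio check for a foreground/background color pair.
# Uses the WCAG relative-luminance formula; the sample colors are illustrative.

def linearize(channel: int) -> float:
    c = channel / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    r, g, b = (linearize(v) for v in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    lighter, darker = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

ratio = contrast_ratio((119, 119, 119), (255, 255, 255))  # grey #777777 text on white
print(f"Contrast ratio: {ratio:.2f}:1")                   # about 4.48:1
print("Passes WCAG AA for normal text (>= 4.5:1):", ratio >= 4.5)
```

Running the check shows that this particular grey narrowly fails the AA threshold, illustrating how a quantitative criterion catches issues that visual inspection might miss.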

Performance Efficiency

Performance efficiency is a key quality characteristic in software systems, defined as the performance of a product or system relative to the amount of resources used under stated conditions. This attribute ensures that software meets performance requirements while optimizing resource consumption, such as CPU, memory, and network bandwidth, to deliver acceptable responsiveness and throughput. In the ISO/IEC 25010:2023 standard for systems and software quality models, performance efficiency encompasses three primary sub-characteristics: time behavior, resource utilization, and capacity. Time behavior addresses the temporal aspects of software operation, including response time and processing speed. For instance, in user interfaces, response times should ideally remain below 1 second to maintain a sense of continuity in interaction, as delays exceeding this threshold can disrupt user flow and perceived responsiveness. Throughput, measured in transactions or requests per second, quantifies how many operations the system can handle within a given timeframe, which is critical for high-volume applications like e-commerce platforms. Resource utilization focuses on the efficient use of hardware and software resources to minimize waste. Inefficient resource management, such as memory leaks in long-running applications, can lead to gradual memory bloat, where unused objects accumulate and degrade performance over time, potentially causing system crashes or slowdowns. Optimizing algorithms plays a vital role here; for example, a merge sort with O(n log n) time complexity significantly outperforms a bubble sort's O(n^2) for large datasets, reducing CPU cycles and enabling scalability. Capacity evaluates the maximum limits of the system under load, including its ability to scale with increasing demands like user growth. Scalable software designs, such as those using load balancing, allow systems to handle higher concurrency without proportional resource increases, ensuring sustained performance as usage expands. Balancing efficiency often involves trade-offs with other quality attributes; for example, incorporating data encryption to enhance security can introduce CPU overhead of 3-30% depending on the algorithm and workload, necessitating careful optimization to avoid compromising responsiveness. Under extreme performance stress, inefficiencies may also exacerbate reliability issues, such as increased failure rates during peak loads.
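The algorithmic trade-off mentioned above can be observed empirically. The following sketch times a deliberately inefficient O(n^2) bubble sort against Python's built-in O(n log n) sort on the same data; the input size and run counts are arbitrary choices for illustration.

```python
# Minimal sketch: measuring time behavior of two sorting approaches on the same input.
import random
import timeit

def bubble_sort(values):
    """O(n^2) comparison sort, shown only as an inefficient baseline."""
    data = list(values)
    for i in range(len(data)):
        for j in range(len(data) - i - 1):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
    return data

data = [random.random() for _ in range(2_000)]

t_bubble = timeit.timeit(lambda: bubble_sort(data), number=3)
t_builtin = timeit.timeit(lambda: sorted(data), number=3)   # Timsort, O(n log n)

print(f"bubble_sort : {t_bubble:.3f} s for 3 runs")
print(f"sorted()    : {t_builtin:.3f} s for 3 runs")
```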

Security

In software quality, security refers to the capability of a software product to protect information and data so that unauthorized access, use, disclosure, disruption, modification, or destruction is prevented. This attribute is critical for ensuring the trustworthiness of systems handling sensitive data, such as financial transactions or personal information. According to ISO/IEC 25010:2023, security encompasses six subcharacteristics: confidentiality, which ensures that information is accessible only to authorized entities; integrity, which protects data from unauthorized modification or destruction; non-repudiation, which provides proof of actions or events to prevent denial; accountability, which traces actions to specific entities; authenticity, which verifies the identity of entities and data origins; and resistance, which enables the product to sustain operation while under attack. Key practices for achieving software security include threat modeling, which systematically identifies potential threats and vulnerabilities during design. The STRIDE model, developed at Microsoft in 1999, categorizes threats into six types: spoofing (impersonation), tampering (unauthorized alteration), repudiation (denial of actions), information disclosure (unauthorized exposure), denial of service (disruption of availability), and elevation of privilege (gaining higher access levels). This model aids developers in proactively addressing risks by mapping threats to system components. Common vulnerabilities, such as SQL injection, remain prevalent; the OWASP Top 10 for 2025 ranks injection attacks at #5, where untrusted input manipulates database queries, potentially leading to data breaches. Illustrative examples highlight the consequences of security lapses. The Heartbleed bug, discovered in 2014, was a buffer over-read vulnerability in the OpenSSL cryptography library that allowed attackers to read up to 64 kilobytes of sensitive memory, including private keys and user credentials, affecting millions of websites. Secure design principles, such as the principle of least privilege, mitigate such risks by granting users, processes, or systems only the minimum permissions necessary to perform their functions, thereby limiting potential damage from compromises. Evolving threats underscore the need for adaptive security measures. Zero-day exploits target previously unknown vulnerabilities before patches are available, exploiting systems with no prior defenses and often causing widespread damage. Supply chain attacks, like the 2020 SolarWinds incident, involved malicious code inserted into software updates, compromising thousands of organizations including U.S. government agencies. To counter these, DevSecOps integrates security practices into the software delivery pipeline, automating threat detection and compliance checks throughout development, deployment, and operations for continuous protection.
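Injection risks illustrate how a small coding choice affects security. The sketch below, using an in-memory SQLite database with an illustrative schema, contrasts string concatenation (injectable) with a parameterized query that treats the same malicious input purely as data.

```python
# Minimal sketch: contrasting an injectable query with a parameterized one using sqlite3.
# The table, rows, and malicious input are illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin'), ('bob', 'user')")

malicious = "x' OR '1'='1"

# Vulnerable: untrusted input concatenated directly into the SQL string.
unsafe_sql = f"SELECT name, role FROM users WHERE name = '{malicious}'"
print("unsafe:", conn.execute(unsafe_sql).fetchall())   # returns every row

# Safer: the placeholder keeps the input as data, never as SQL syntax.
safe_sql = "SELECT name, role FROM users WHERE name = ?"
print("safe:  ", conn.execute(safe_sql, (malicious,)).fetchall())  # returns no rows
```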

Maintainability

Maintainability refers to the ease with which software can be modified to correct faults, improve performance, or adapt to a changed environment, encompassing attributes that facilitate ongoing development and evolution. According to the ISO/IEC 25010:2023 standard, maintainability is a key product quality characteristic subdivided into five subcharacteristics: modularity, reusability, analysability, modifiability, and testability. Modularity involves the degree to which software is composed of discrete, independent components, allowing changes in one part without affecting others; reusability measures how readily components can be used in other systems or contexts; analysability assesses the ease of diagnosing deficiencies or causes of failures; modifiability evaluates the effort required for changes; and testability gauges the ease of verifying modifications. A foundational concept for assessing complexity is cyclomatic complexity, introduced by Thomas J. McCabe in 1976 as a graph-theoretic measure of the number of linearly independent paths through a program's control flow, where higher values indicate greater complexity and potential maintenance challenges. For instance, code with cyclomatic complexity exceeding 10 is often considered risky for maintainability due to the difficulty of understanding and modifying its control flows. To enhance reusability, design patterns provide proven solutions to common problems, promoting modular and extensible architectures; the Singleton pattern, as described in the seminal work by Gamma et al., ensures a class has only one instance while providing global access, facilitating reuse in scenarios like resource management without redundant initialization. In practice, legacy systems with "spaghetti code"—characterized by tangled, unstructured flows of control—severely hinder updates, as modifications risk unintended side effects across interconnected routines, leading to prolonged maintenance times. In contrast, modular architectures such as microservices decompose applications into independent services, improving maintainability by enabling isolated updates and testing, as each service can be developed, deployed, and maintained separately without impacting the whole. Technical debt, a metaphor coined by Ward Cunningham in 1992 to describe the implied costs of suboptimal design choices, often accumulates from rushed development, where shortcuts like duplicated code or inadequate refactoring compromise long-term modifiability and increase the effort needed for future enhancements. Best practices for bolstering maintainability include rigorous code reviews, which systematically examine changes to enforce standards, detect issues early, and promote knowledge sharing among developers, thereby reducing defects that affect analysability and modifiability. Additionally, adhering to documentation standards supported by tools such as Javadoc or Doxygen, which generate structured reference documentation from source-code comments, ensures that code intent, interfaces, and dependencies are clearly articulated, aiding reusability and future modifications. These practices, when integrated into development workflows, help mitigate the risks associated with evolving software systems. For cross-platform maintenance, maintainability intersects with flexibility by influencing the adaptability of code across different environments.
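As a concrete illustration of the Singleton pattern mentioned above, the sketch below shows one common Python idiom (caching the instance in __new__); the ConfigRegistry class and its settings are purely illustrative.

```python
# Minimal sketch: a Singleton-style shared resource, implemented by caching the
# single instance on the class. The ConfigRegistry name and fields are hypothetical.

class ConfigRegistry:
    """Application-wide configuration store intended to exist exactly once."""
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.settings = {}   # initialized only on first creation
        return cls._instance

    def set(self, key, value):
        self.settings[key] = value

a = ConfigRegistry()
b = ConfigRegistry()
a.set("log_level", "INFO")

print(a is b)         # True: both names refer to the same instance
print(b.settings)     # {'log_level': 'INFO'}
```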

Flexibility

Flexibility refers to the ability of a software product to be transferred from one hardware or software environment to another, and adapted to changing demands, with minimal modifications, ensuring it operates effectively across diverse platforms. In the ISO/IEC 25010:2023 standard for systems and software quality models, flexibility is defined as one of the core quality characteristics, encompassing four key sub-characteristics: adaptability, installability, replaceability, and scalability. Adaptability measures the degree to which software can be modified for different or evolving hardware, software, or operational environments without substantial redesign. Installability assesses how easily the software can be installed or uninstalled in a target environment, including handling dependencies and configurations. Replaceability evaluates the extent to which the software can supplant another product in an existing environment while maintaining functionality and integration. Scalability measures the system's ability to handle growing amounts of work or to be enlarged to meet increased demands. A critical aspect of flexibility involves ensuring compatibility with varying hardware and operating systems, often achieved through techniques like cross-compilation, which enables code to be built on one platform (the host) for execution on another (the target). For instance, cross-compilation supports deployment across architectures such as x86 to ARM, reducing the need for separate development environments. Virtualization technologies further enhance flexibility by abstracting dependencies; Docker, introduced in 2013, uses containerization to package applications with their runtime environments, allowing consistent execution regardless of the underlying infrastructure. This approach mitigates issues arising from OS-specific libraries or configurations, promoting seamless transfers between cloud, on-premises, and hybrid setups. Illustrative examples highlight flexibility's practical implications. Java exemplifies high flexibility through its "write once, run anywhere" paradigm, enabled by the Java Virtual Machine (JVM), which executes compiled bytecode on any compatible platform without recompilation. However, porting desktop applications to mobile devices often encounters hurdles, such as UI scaling problems where interfaces designed for larger screens fail to adapt to smaller, touch-based displays, requiring responsive redesigns to maintain interaction capability. Despite these advancements, challenges persist in achieving full flexibility. Reliance on platform-specific APIs, which differ between operating systems like Windows and Linux, can necessitate code rewrites or abstraction layers to avoid lock-in. Additionally, endianness—the byte order of multi-byte data types—creates flexibility issues when transferring software between big-endian (e.g., some network protocols) and little-endian (e.g., x86 processors) systems, potentially leading to data corruption if not explicitly handled. Flexible interfaces also intersect with interaction capability, requiring intuitive adaptations for varied input methods across environments.
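Endianness problems are avoided by declaring byte order explicitly at serialization boundaries. The sketch below uses Python's struct module to show how the same 32-bit value is laid out in big- and little-endian order, and how decoding with the wrong assumed order silently corrupts it; the value itself is arbitrary.

```python
# Minimal sketch: making byte order explicit when data crosses platform boundaries.
import struct

value = 0x12345678

big = struct.pack(">I", value)      # network/big-endian layout
little = struct.pack("<I", value)   # x86-style little-endian layout

print(big.hex())      # 12345678
print(little.hex())   # 78563412

# Decoding with the wrong assumed order silently corrupts the value:
print(hex(struct.unpack("<I", big)[0]))   # 0x78563412, not 0x12345678
# Declaring the order explicitly removes the ambiguity:
print(hex(struct.unpack(">I", big)[0]))   # 0x12345678
```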

Safety

Safety refers to the degree to which a product, system, or component contributes to safe operation with respect to risks of harm to people, assets, or the environment. According to ISO/IEC 25010:2023, safety is a newly added product quality characteristic with five sub-characteristics: operational constraint, risk identification, fail safe, hazard warning, and safe integration. The sub-characteristics of safety are defined as follows:
  • Operational constraint: The degree to which the system imposes constraints on its operation to ensure safe use.
  • Risk identification: The degree to which risks to safety are identified and documented.
  • Fail safe: The degree to which the system can enter a safe state upon failure.
  • Hazard warning: The degree to which the system provides warnings of potential hazards.
  • Safe integration: The degree to which the system can be safely integrated with other systems.
Safety is crucial in domains like autonomous vehicles and medical devices, where software failures can cause physical harm. For example, fail-safe mechanisms in control systems automatically revert to a safe state if a component or sensor fails, as sketched in the example below.
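The following sketch illustrates a fail-safe pattern in the spirit described above: when a (simulated) sensor fault occurs, the controller emits a hazard warning and reverts to a predefined safe state. The heater/valve example and thresholds are hypothetical.

```python
# Minimal sketch of a fail-safe pattern: on any fault, the controller drops to a
# predefined safe state rather than continuing with unknown inputs.

SAFE_STATE = {"heater": "off", "valve": "closed"}

def read_temperature() -> float:
    raise IOError("sensor not responding")   # simulated sensor fault

def control_step(state: dict) -> dict:
    try:
        temp = read_temperature()
        state["heater"] = "on" if temp < 60.0 else "off"
        return state
    except Exception as fault:
        # Hazard warning plus fail safe: report the fault and enter the safe state.
        print(f"hazard warning: {fault} -> entering safe state")
        return dict(SAFE_STATE)

print(control_step({"heater": "on", "valve": "open"}))
# hazard warning: sensor not responding -> entering safe state
# {'heater': 'off', 'valve': 'closed'}
```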

Measurement and Assessment

Static Code Analysis

Static code analysis is a technique for evaluating software quality by inspecting source code without executing the program, targeting internal attributes such as complexity, maintainability, and potential defects. This approach allows developers to uncover issues like code smells, security vulnerabilities, and maintainability risks early in the development lifecycle, often integrated into CI/CD pipelines for automated feedback. Unlike dynamic methods, it examines all possible code paths theoretically, providing comprehensive coverage of static artifacts. Common tools for static code analysis include linters and specialized analyzers. ESLint, a pluggable JavaScript linter, identifies problematic patterns such as unused variables or inconsistent styling by enforcing configurable rules during development. SonarQube, an open-source platform, performs broader static analysis across multiple languages to detect code smells, including overly complex methods or excessive duplication, helping teams maintain clean architectures. For vulnerability detection, tools such as Coverity scan for pre-runtime issues, such as buffer overflows in C/C++ code, by modeling data flows and potential memory corruptions without program execution. Key metrics derived from static code analysis quantify internal quality indicators. McCabe's cyclomatic complexity measures control flow complexity using the formula V(G) = E - N + 2P, where E represents the number of edges, N the number of nodes, and P the number of connected components in the program's control flow graph; values exceeding 10 often signal high risk for defects and reduced maintainability. Halstead metrics provide effort-based insights, with program volume calculated as V = N log2 n, where N is the total number of operators and operands (program length) and n is the number of unique operators and operands (vocabulary); higher volumes correlate with increased cognitive load for comprehension and modification. Duplication percentage tracks the proportion of repeated code blocks, typically aiming for under 5-10% to avoid maintenance overhead from scattered identical logic. Code churn, measured as the ratio of added, modified, or deleted lines over time via version control integration, indicates codebase stability; excessive churn (e.g., over 20% monthly) suggests rework and potential quality erosion. Static code analysis offers significant advantages, including early identification of defects that could propagate to production, without requiring a runtime environment or test data, thereby reducing overall development costs by an estimated 17% in some studies. It also supports predictive assessments through metrics like complexity and churn, enabling refactoring before integration. However, limitations include a high rate of false positives—up to 76% of warnings in vulnerability scans—necessitating developer triage and potentially increasing review overhead. In the context of security characteristics, static analysis aids by flagging exploitable patterns like buffer overflows prior to deployment.
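Cyclomatic complexity can be approximated without building a full control-flow graph by counting decision points, since for structured code V(G) equals the number of predicates plus one. The sketch below uses Python's standard ast module for such an approximation (it also counts boolean operators, as the extended variant does); production analyzers apply more complete rules.

```python
# Minimal sketch: approximating McCabe cyclomatic complexity for Python source
# by counting decision points with the standard-library ast module.
import ast

DECISION_NODES = (ast.If, ast.For, ast.While, ast.BoolOp,
                  ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES) for node in ast.walk(tree))
    return decisions + 1     # V(G) ~ decision points + 1 for a single entry/exit

sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0 or x is None:
        return "zero-ish"
    for _ in range(3):
        if x > 100:
            return "large"
    return "positive"
"""

print(cyclomatic_complexity(sample))   # 6 for this sample (3 ifs, 1 for, 1 bool-op, +1)
```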

Dynamic Testing Metrics

Dynamic testing metrics evaluate software quality by executing the system and observing its behavior, external outputs, and performance under various conditions, providing insights into functional correctness, reliability, and usability that static analysis cannot capture. These metrics are essential for verifying how the software performs in real-world scenarios, such as handling unexpected inputs or heavy loads, and help identify defects that only manifest during operation. Unlike static metrics, which examine code without execution, dynamic approaches focus on measurable outcomes like error rates and response times to assess overall system robustness. Key techniques in dynamic testing include unit and integration testing, which measure code coverage to ensure comprehensive execution paths. For instance, branch coverage tracks the percentage of decision points (e.g., if-else statements) exercised by tests, with industry benchmarks often targeting over 80% to indicate adequate testing depth. Load-testing tools like Apache JMeter simulate concurrent users to quantify throughput, defined as the number of requests processed per second, revealing capacity limits and bottlenecks in performance efficiency. Prominent metrics derived from dynamic testing encompass defect density, which calculates the number of confirmed defects per thousand lines of code (KLOC), serving as an indicator of software maturity and quality post-execution. For reliability, the failure rate λ from the exponential distribution models constant failure probability over time, where mean time to failure (MTTF) is computed as MTTF = 1/λ, helping predict system uptime based on observed failures during testing. Usability metrics, such as task completion time, measure the duration required for users to achieve specific goals, highlighting efficiency in human-computer interaction. Examples of dynamic testing applications include stress testing, which pushes systems beyond normal limits to expose reliability issues like crashes under peak loads, as seen in scenarios where applications fail to recover from resource exhaustion. A/B testing compares interface variants to evaluate usability, often using the System Usability Scale (SUS) score—a 10-item questionnaire yielding scores from 0 to 100—to quantify subjective satisfaction, with averages above 68 indicating above-average usability. Standards like the ISTQB Foundation Level Syllabus v4.0 (2023) outline test levels such as component, integration, system, and acceptance testing for dynamic approaches, emphasizing structured execution to cover functional and non-functional requirements. Automation in continuous integration/continuous delivery (CI/CD) pipelines enhances these metrics by enabling frequent, repeatable test runs, reducing manual effort and accelerating feedback on quality issues. Cross-environment testing also touches on portability by executing the same tests across platforms to verify consistent behavior.
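The sketch below computes three of the metrics discussed (defect density, MTTF under the constant-failure-rate model, and a SUS score from ten questionnaire responses) using hypothetical figures; the SUS scoring rule (odd items contribute score minus 1, even items contribute 5 minus score, total multiplied by 2.5) is the standard one.

```python
# Minimal sketch: three dynamic-testing indicators computed from hypothetical data.

# Defect density: confirmed defects per thousand lines of code (KLOC)
defects, kloc = 42, 120.0
defect_density = defects / kloc                      # 0.35 defects/KLOC

# Reliability: constant failure rate model, MTTF = 1 / lambda
failures, test_hours = 4, 2_000.0
failure_rate = failures / test_hours                 # lambda, failures per hour
mttf = 1 / failure_rate                              # 500 hours

# Usability: System Usability Scale from ten 1-5 responses
answers = [4, 2, 5, 1, 4, 2, 5, 1, 4, 2]
sus = sum((a - 1) if i % 2 == 0 else (5 - a)         # even index = odd-numbered item
          for i, a in enumerate(answers)) * 2.5

print(f"Defect density: {defect_density:.2f} defects/KLOC")
print(f"MTTF: {mttf:.0f} h (lambda = {failure_rate:.4f}/h)")
print(f"SUS score: {sus}")                           # 85.0, above the 68 average
```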

Integrated Quality Models

Integrated quality models in software engineering synthesize diverse metrics and attributes into cohesive frameworks for evaluating and improving overall software quality. These models go beyond isolated assessments by integrating factors such as functionality, reliability, and maintainability into a unified evaluation structure, enabling organizations to benchmark and prioritize quality efforts systematically. The ISO/IEC 25000 series, known as Systems and software Quality Requirements and Evaluation (SQuaRE), provides a foundational framework for product quality evaluation through standardized models and processes. It includes ISO/IEC 25010:2023, which defines a quality model with nine characteristics—functional suitability, performance efficiency, compatibility, interaction capability, reliability, security, maintainability, flexibility, and safety—allowing for holistic assessments via metrics aligned to these attributes. The Capability Maturity Model Integration (CMMI) version 3.0, released in 2023, extends this integration by incorporating quality processes across five maturity levels (1: Initial, 2: Managed, 3: Defined, 4: Quantitatively Managed, 5: Optimizing), where higher levels emphasize predictive analytics and continuous improvement of integrated quality practices. The Goal-Question-Metric (GQM) approach, introduced by Victor R. Basili and colleagues in 1994, structures quality evaluation by linking organizational goals to specific questions and corresponding metrics, ensuring measurements are purposeful and aligned. For instance, a goal to enhance reliability might involve questions about failure rates, leading to metrics like mean time between failures (MTBF) for tracking progress. This top-down method facilitates the integration of attributes such as reliability into broader quality goals without focusing on isolated calculations. Weighted scoring methods aggregate normalized metrics into composite indices, often using formulas like the quality index Q = Σ(wᵢ · mᵢ), where wᵢ represents weights assigned to each attribute based on project priorities and mᵢ denotes normalized metric values. This aggregation supports defect prioritization and benchmarking by producing a single score for software artifacts. In practice, the SQALE (Software Quality Assessment based on Lifecycle Expectations) method employs such scoring for code quality rating and technical debt estimation, calculating remediation costs across code characteristics to guide maintenance efforts. The 2023 revision of ISO 25010 adds safety as a new characteristic and renames usability to interaction capability and portability to flexibility, enhancing applicability to modern development practices.
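A weighted quality index of the form Q = Σ(wᵢ · mᵢ) can be computed in a few lines once attributes are normalized. The sketch below uses illustrative weights and measurements; in practice both would come from project priorities and collected metrics.

```python
# Minimal sketch: aggregating normalized attribute scores into a single quality index.
# Weights and measurements are illustrative project choices.

weights = {"reliability": 0.30, "security": 0.25, "maintainability": 0.20,
           "performance": 0.15, "usability": 0.10}          # should sum to 1.0

measurements = {"reliability": 0.92, "security": 0.85, "maintainability": 0.70,
                "performance": 0.88, "usability": 0.95}     # each normalized to [0, 1]

assert abs(sum(weights.values()) - 1.0) < 1e-9

quality_index = sum(weights[k] * measurements[k] for k in weights)
print(f"Q = {quality_index:.3f}")     # roughly 0.86 for these figures
```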

Management and Assurance

Quality Assurance Processes

Software quality assurance (QA) encompasses systematic activities designed to provide confidence that software products and processes meet specified quality requirements throughout the development lifecycle. QA is fundamentally preventive and process-oriented, focusing on establishing and refining procedures to avoid defects before they occur, in contrast to quality control (QC), which is detective and product-oriented, involving inspections and tests to identify and correct defects in the final output. This distinction ensures that QA addresses root causes in workflows, while QC verifies compliance at delivery stages. Integration of QA into the software development lifecycle (SDLC) begins with requirements review and extends through design inspections and peer reviews to embed quality early. Requirements reviews evaluate completeness and clarity to prevent downstream issues, while design inspections, as formalized by Michael Fagan in 1976, involve structured team examinations of artifacts to detect errors systematically. Peer reviews, building on Fagan's method, have been shown to reduce defects by approximately 60% by catching issues before implementation. These practices align with models like the V-model, where verification activities (such as reviews) parallel development phases, ensuring traceability from requirements to testing and enhancing overall defect prevention across the SDLC. In recent years, DevSecOps has emerged to integrate security practices into QA from the outset, enhancing compliance in cloud-native developments. Established frameworks guide the implementation of QA processes. IEEE Std 730-2014 outlines requirements for initiating, planning, controlling, and executing software QA processes, including purpose, scope, resources, and verification, applicable to both development and maintenance projects. It emphasizes organizational responsibility for QA roles and integration into SDLC phases to monitor adherence to standards. In modern agile environments, shift-left testing moves verification activities earlier in the lifecycle, incorporating automated checks and collaboration during sprints to accelerate feedback and reduce late-stage rework. Additionally, auditing against standards like ISO 9001:2015, guided by ISO/IEC/IEEE 90003:2018 for software contexts, involves periodic reviews of processes to ensure continual improvement and regulatory alignment. Metrics from static analysis and testing, such as defect density, support QA by quantifying process effectiveness.

Improvement Frameworks and Tools

Improvement frameworks for software quality emphasize structured methodologies to minimize defects and enhance processes iteratively. Six Sigma, originally developed for manufacturing at Motorola, applies data-driven techniques to software process improvement, targeting a defect rate of less than 3.4 defects per million opportunities (DPMO) through the DMAIC cycle—Define, Measure, Analyze, Improve, and Control—which systematically identifies root causes of quality issues and implements controls to sustain gains. In software contexts, this framework has been adapted to reduce variability in development processes, such as by integrating function point analysis to prioritize high-risk modules. Complementing Six Sigma, lean software development focuses on eliminating waste—such as unnecessary features, delays, or rework—to streamline value delivery, drawing from principles like just-in-time production to foster faster cycles and higher efficiency. These frameworks promote a culture of continuous refinement, where waste reduction directly correlates with improved code reliability and reduced maintenance costs. Tools play a pivotal role in operationalizing these frameworks by automating quality checks and integrating them into development workflows. Automated testing suites like Selenium for web applications and JUnit for unit testing in Java environments enable repeatable validation of functionality, catching regressions early and ensuring consistency across builds. Continuous integration/continuous deployment (CI/CD) pipelines, exemplified by Jenkins, automate build, test, and deployment stages, enforcing quality gates that perform static analysis and performance benchmarks to prevent defective code from advancing. AI-driven tools, such as GitHub Copilot, introduced in 2021, provide real-time code suggestions and reviews, helping developers adhere to best practices and reduce errors by analyzing context and proposing optimizations. These tools collectively lower the barrier to consistent quality enforcement, allowing teams to focus on innovation rather than manual oversight. Metrics-driven improvement leverages cycles like Plan-Do-Check-Act (PDCA), an iterative improvement method associated with W. Edwards Deming, to apply iterative learning in software contexts—planning enhancements based on quality metrics, executing changes, verifying outcomes through testing, and acting on insights to refine processes. This approach has proven effective in reducing process variability, as seen in adaptations of Toyota's lean manufacturing principles to software development, where standardization of workflows and waste elimination led to more predictable delivery timelines and fewer defects in automotive embedded systems. By tying improvements to quantifiable indicators like defect density or cycle time, organizations achieve sustained enhancements without overhauling entire systems. As of 2025, emerging trends integrate advanced technologies for proactive quality management. Machine learning models, such as ensemble methods using random forests, enable predictive defect analysis by training on historical code metrics to forecast fault-prone modules and prioritize testing efforts. Blockchain technology further supports traceability in quality audits by creating immutable logs of development artifacts, ensuring verifiable compliance and reducing disputes in collaborative environments through decentralized record-keeping. These innovations extend traditional frameworks, promising even greater precision in defect prevention and process accountability.
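As a small illustration of the Six Sigma arithmetic referenced above, the sketch below computes DPMO from hypothetical defect, unit, and opportunity counts and compares it with the 3.4-DPMO target.

```python
# Minimal sketch: computing defects per million opportunities (DPMO), the figure the
# Six Sigma 3.4-DPMO target refers to. All counts are illustrative.

defects = 18                      # defects found across all inspected units
units = 500                       # e.g. user stories or modules shipped
opportunities_per_unit = 12       # distinct ways each unit could be defective

dpmo = defects / (units * opportunities_per_unit) * 1_000_000
print(f"DPMO: {dpmo:.0f}")        # 3000 defects per million opportunities
print("Meets Six Sigma 3.4 DPMO target:", dpmo <= 3.4)
```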

  33. [33]
    PMBOK Guide | Project Management Institute - PMI
    30-day returnsThe go-to source for Project Managers, the PMBOK Guide (7th Edition) is structured around eight Project Management domains and identifies twelve key principles ...Overview · Product Specifications · What You'll Learn
  34. [34]
    Her Code Got Humans on the Moon—And Invented Software Itself
    Oct 13, 2015 · By mid-1968, more than 400 people were working on Apollo's software, because software was how the US was going to win the race to the moon. As ...Missing: push | Show results with:push
  35. [35]
    Margaret Hamilton Led the NASA Software Team That Landed ...
    Mar 14, 2019 · In a bid to make software more reliable, Hamilton sought to design Apollo's software to be capable of dealing with unknown problems and flexible ...
  36. [36]
    Evolution of software quality models: Green and reliability issues
    In particular, well known software quality models beginning from on the first McCall's model (1977) to models described in standards ISO/IEC9126 (2001) and ISO ...Missing: historical | Show results with:historical
  37. [37]
    The History of Quality Management System - Juran Institute
    Mar 4, 2020 · While Deming's management principles weren't widely adopted in the United States over the next couple of decades, by the early 1980s it was ...
  38. [38]
    [PDF] ISO/IEC 9126:1991 - iTeh Standards
    Dec 15, 1991 · History of the work ... This International Standard is applicable to defining software quality requirements and evaluating.
  39. [39]
    (PDF) Total quality management in software development process
    Dec 23, 2015 · This paper discusses the essences of total quality management (TQM) concept and identifies the principles of successful TQM implementation.<|separator|>
  40. [40]
    Survey: Software Quality Continues to Be Sacrificed in the Name of ...
    May 13, 2025 · The irony is that addressing quality issues in the latter stages of a DevOps workflow only serves to slow organizations down further when work ...Missing: controversies | Show results with:controversies
  41. [41]
    The impact of culture and product on the subjective importance of ...
    Sep 10, 2019 · The researchers of the current study examined whether the cultural background of an individual has an influence on the subjective importance of user experience ...Missing: satisfaction | Show results with:satisfaction
  42. [42]
    Shortcomings of ISO 25010 - arc42 Quality Model
    The ISO 25010 standard on software product quality lacks pragmatism and practical applicability. Terms like scalability, deployability, energy efficiency, ...
  43. [43]
    ISO/IEC 25010:2023(en), Systems and software engineering
    The International Standards that form this division include a quality measurement framework, mathematical definitions of quality measures, and practical ...
  44. [44]
    Functional Suitability e-Commerce Website ISO/IEC 25010
    Functional suitability has three calculated attributes, namely completeness, correctness, and appropriateness. From the measurements that have been made, the ...
  45. [45]
    Assessing and evaluating functional suitability of software
    ISO 25010. http://iso25000. com/index.php/en/iso-25000-standards/iso-25010 ... Functional suitability assessment of smart contracts: A survey and first proposal.
  46. [46]
    Reliability - ISO/IEC 25000
    Reliability. Degree to which a system, product or component performs specified functions under specified conditions for a specified period of time.
  47. [47]
    What Is Mean Time between Failure (MTBF)? - IBM
    Mean time between failure (MTBF) is a measure of the reliability of a system or component. It's a crucial element of maintenance management.What is MTBF? · How is mean time between...
  48. [48]
    Fault Tolerance in System Design - GeeksforGeeks
    Aug 7, 2025 · Fault tolerance refers to a system's capacity to keep working even in the face of hardware or software issues. Redundancy, error detection ...1. Full Replication · 2. Partial Replication · Fault Tolerance Vs. High...
  49. [49]
    Disaster recovery options in the cloud - AWS Documentation
    Disaster recovery strategies available to you within AWS can be broadly categorized into four approaches, ranging from the low cost and low complexity of ...
  50. [50]
    Applying Pattern-Driven Maintenance: A Method to Prevent Latent ...
    Aug 16, 2018 · Background: Unhandled exceptions affect the reliability of web applications. Several studies have measured the reliability of web applications ...
  51. [51]
    [PDF] Assessment of environmental factors affecting software reliability
    Feb 21, 2020 · These environmental factors were grouped into five parts: general, analysis & design, coding, testing, and hardware systems. A survey was ...<|control11|><|separator|>
  52. [52]
    10 Usability Heuristics for User Interface Design - NN/G
    Apr 24, 1994 · Jakob Nielsen's 10 general principles for interaction design. They are called "heuristics" because they are broad rules of thumb and not specific usability ...
  53. [53]
    Web Content Accessibility Guidelines (WCAG) 2.1 - W3C
    May 6, 2025 · Web Content Accessibility Guidelines (WCAG) 2.1 defines how to make web content more accessible to people with disabilities. Accessibility ...Understanding WCAG · Translations of W3C standards · User Agent Accessibility
  54. [54]
    Human Interface Guidelines | Apple Developer Documentation
    The HIG contains guidance and best practices that can help you design a great experience for any Apple platform.ComponentsDesigning for iOSAccessibilityTypographyLayout
  55. [55]
    Identifying Usability Issues with an ERP Implementation.
    The purpose of this study is to begin addressing this gap by categorizing and describing the usability issues encountered by one division of a Fortune 500 ...
  56. [56]
    Mobile Usability 2nd Research Study - Nielsen Norman Group
    Sep 25, 2011 · Summary: The user experience of mobile websites and apps has improved since our last research, but we still have far to go.Missing: operability | Show results with:operability
  57. [57]
    Optimizing mobile app design for older adults: systematic review of ...
    Aug 14, 2025 · Key recommendations include simplified navigation, larger fonts, voice-activated features, and error-tolerant interfaces [5].
  58. [58]
    Response Time Limits: Article by Jakob Nielsen - NN/G
    Jan 1, 1993 · There are 3 main time limits (which are determined by human perceptual abilities) to keep in mind when optimizing web and application performance.
  59. [59]
    Memory leak - OWASP Foundation
    A memory leak is an unintentional form of memory consumption whereby the developer fails to free an allocated block of memory when no longer needed.
  60. [60]
    Database Encryption at Rest: Performance vs Security Trade-offs
    Jun 26, 2025 · Performance Comparison: Software AES-128: ~15–25% CPU overhead; Software AES-256: ~20–35% CPU overhead; Hardware-accelerated AES: ~2–8% CPU ...
  61. [61]
    Security tradeoffs - Microsoft Azure Well-Architected Framework
    Oct 10, 2024 · Security tradeoffs with Performance Efficiency. Tradeoff: Increased latency and overhead. A performant workload reduces latency and overhead.
  62. [62]
    A03 Injection - OWASP Top 10:2025 RC1
    Some of the more common injections are SQL, NoSQL, OS command, Object Relational Mapping (ORM), LDAP, and Expression Language (EL) or Object Graph Navigation ...
  63. [63]
    Heartbleed Bug
    The Heartbleed Bug is a serious vulnerability in the popular OpenSSL cryptographic software library. This weakness allows stealing the information protected ...<|control11|><|separator|>
  64. [64]
    least privilege - Glossary - NIST Computer Security Resource Center
    A security principle that a system should restrict the access privileges of users (or processes acting on behalf of users) to the minimum necessary to ...<|control11|><|separator|>
  65. [65]
    zero day attack - Glossary | CSRC
    An attack that exploits a previously unknown hardware, firmware, or software vulnerability. Sources: CNSSI 4009-2015 · NISTIR 8011 Vol. 3 under Zero-Day Attack ...
  66. [66]
    Active Exploitation of SolarWinds Software - CISA
    Dec 14, 2020 · CISA is aware of active exploitation of SolarWinds Orion Platform software versions 2019.4 HF 5 through 2020.2.1 HF 1, released between March 2020 and June ...
  67. [67]
    Secure Software Development, Security, and Operations ... - NCCoE
    DevSecOps brings together secure software development and operations to shorten development cycles, allow organizations to be agile, and maintain the pace ...
  68. [68]
  69. [69]
    Design Patterns: Elements of Reusable Object-Oriented Software
    These 23 patterns allow designers to create more flexible, elegant, and ultimately reusable designs without having to rediscover the design solutions ...
  70. [70]
    Spaghetti Code - GeeksforGeeks
    Jul 29, 2024 · Maintainability Issues: Maintaining spaghetti code is very delicate activity. Problems are solved through changes and these changes bring in ...
  71. [71]
    Microservices vs. monolithic architecture - Atlassian
    A monolithic application is built as a single unified unit while a microservices architecture is a collection of smaller, independently deployable services.
  72. [72]
    Technical Debt - Martin Fowler
    May 21, 2019 · Technical Debt is a metaphor, coined by Ward Cunningham, that frames how to think about dealing with this cruft, thinking of it like a financial debt.
  73. [73]
    The Standard of Code Review | eng-practices - Google
    The primary purpose of code review is to make sure that the overall code health of Google's code base is improving over time.
  74. [74]
    Doxygen homepage
    Doxygen is a widely-used documentation generator tool in software development. It automates the generation of documentation from source code comments.Special Commands · Download Doxygen · Doxygen Manual · Docs
  75. [75]
    Cross-compilation - .NET | Microsoft Learn
    May 27, 2025 · Cross-compilation is a process of creating executable code for a platform other than the one on which the compiler is running.
  76. [76]
    What is a Container? - Docker
    Docker container technology was launched in 2013 as an open source Docker Engine. It leveraged existing computing concepts around containers and ...
  77. [77]
    Solving modern application development challenges with Java
    Aug 30, 2021 · The original goal for Java was “write once, run anywhere”. This ensures that Java applications are portable and can run on a variety of ...
  78. [78]
    Mobile First Is NOT Mobile Only - NN/G
    Jul 24, 2016 · In this article we examine the consequences of porting mobile-first designs to the desktop, with a focus on navigation.Missing: challenges | Show results with:challenges
  79. [79]
    Cloud Application Portability: Issues and Developments - IntechOpen
    There are four areas of concern in application portability, namely programming language and framework, platform-specific services, data store, and platform- ...
  80. [80]
    Explaining Static Analysis - A Perspective - IEEE Xplore
    Static code analysis is widely used to support the development of high-quality software. It helps developers detect potential bugs and security ...
  81. [81]
    About - ESLint - Pluggable JavaScript Linter
    Code linting is a type of static analysis that is frequently used to find problematic patterns or code that doesn't adhere to certain style guidelines.
  82. [82]
    What is a Code Smell? Definition Guide, Examples & Meaning - Sonar
    Code smells are warning signs in your code that hint at deeper issues. These aren't errors & the code will still work, but they can make future development ...
  83. [83]
    Coverity SAST | Static Application Security Testing by Black Duck
    Built-in static analysis reports provide insight into issue types and severity to help prioritize remediation efforts and track progress toward each standard ...
  84. [84]
    [PDF] II. A COMPLEXITY MEASURE In this sl~ction a mathematical ...
    Abstract- This paper describes a graph-theoretic complexity measure and illustrates how it can be used to manage and control program com- plexity .
  85. [85]
    A software study using Halstead metrics - ACM Digital Library
    This paper describes an application of Maurice Halstead's software theory to a real time switching system. The Halstead metrics and the software tool developed ...
  86. [86]
    Code Quality Basics - What Is Code Duplication? - in28minutes
    Nov 14, 2019 · A general measure of controlled duplication is a limit of 5%. A project having less than 5% of code duplication is considered very good.
  87. [87]
    What is Code Churn? | Jellyfish
    Code churn refers to the frequency at which code changes occur. It encompasses adding, altering, or deleting code within a specific time frame.What Is Code Churn? · The 5 Elements Of Software... · Dive Deeper With Jellyfish...Missing: static analysis<|control11|><|separator|>
  88. [88]
    Evaluating the cost reduction of static code analysis for software ...
    Static code analysis is an emerging technique for secure software development that analyzes large software code bases without execution to reveal potential ...
  89. [89]
    An Empirical Study of Static Analysis Tools for Secure Code Review
    A single SAST can find 52% of vulnerable functions, but 76% of warnings are irrelevant and 22% of vulnerabilities are missed. Prioritization improves accuracy.
  90. [90]
  91. [91]
  92. [92]
    [PDF] Release: V3.0, 6 April 2023 - CMMI Institute
    Apr 6, 2023 · Global Changes. • Minor updates for grammar, formatting, plain language, translatability, clarity, and consistency with the CMMI Style Guide ...
  93. [93]
    [PDF] THE GOAL QUESTION METRIC APPROACH
    This article will present the Goal Question Metric approach and provide an example of its application. 2. THE GOAL QUESTION METRIC APPROACH. The Goal Question ...
  94. [94]
    Weighted software metrics aggregation and its application to defect ...
    Jun 23, 2021 · So-called quality models define the aggregation of (weighted) metrics values for an artifact to a single score value for this artifact. Suitable ...
  95. [95]
    [PDF] The SQALE Quality and Analysis Models for Assessing the ... - Adalog
    This paper introduces the analysis model of the SQALE (Software Quality Assessment Based on Lifecycle Expectations) method to assess the quality of software and ...Missing: prioritization | Show results with:prioritization
  96. [96]
    Update on ISO 25010, version 2023 - arc42 Quality Model
    In November 2023 an updated version of the ISO 25010 standard on software product quality was released. This article describes the major changes and some ( ...<|control11|><|separator|>
  97. [97]
    DevOps and software quality: A systematic mapping - ScienceDirect
    This study presents systematic mapping of the impact of DevOps on software quality. The results of this study provide a better understanding of DevOps on ...
  98. [98]
  99. [99]
    Quality assurance: A critical ingredient for organizational success - ISO
    Quality assurance (QA) is a framework embracing all operations, aiming to reduce defects and address faults early, ensuring compliance and contributing to ...What is quality assurance? · Best practices in quality... · Quality assurance methods
  100. [100]
    [PDF] Seven Truths About Peer Reviews - Process Impact
    A single testing stage is unlikely to remove more than 35 percent of the defects in the tested work product, whereas design and code inspections typically find ...<|separator|>
  101. [101]
    SDLC V-Model - Software Engineering - GeeksforGeeks
    Aug 11, 2025 · Improved Quality Assurance. Overall quality assurance is enhanced by the V-Model, which incorporates testing operations at every level.
  102. [102]
    IEEE 730-2014 - IEEE SA
    Requirements for initiating, planning, controlling, and executing the Software Quality Assurance processes of a software development or maintenance project
  103. [103]
    Shift Left Testing: Approach, Strategy & Benefits - BrowserStack
    Shift-left testing enhances software quality by identifying defects early in development. It reduces costs, speeds up time to market, and improves collaboration ...Benefits of Shift Left Testing · Types of Shift-Left Testing
  104. [104]
    ISO/IEC/IEEE 90003:2018 - Software engineering
    In stock 2–5 day deliveryThis document provides guidance for organizations in the application of ISO 9001:2015 to the acquisition, supply, development, operation and maintenance of ...
  105. [105]
  106. [106]
  107. [107]
    [PDF] Implementing Lean Software Development - Pearsoncmg.com
    Eliminate Waste. The three biggest wastes in software development are: Extra Features. We need a process that allows us to develop just those 20 percent of ...
  108. [108]
  109. [109]
    Comparative Study of Machine Learning Based Defect Prediction ...
    Abstract: Software Defect Prediction improves the software's stability and ensures the testing process is streamlined by pointing out issues in code.