
Code smell

A code smell, also known as a bad smell, is a surface indication in source code that typically signals a deeper underlying problem in the software's design or implementation, such as poor structure or maintainability issues, without necessarily being a bug. The term was coined by Kent Beck during his collaboration on Martin Fowler's 1999 book Refactoring: Improving the Design of Existing Code, drawing on earlier software-quality critiques from the 1960s and 1970s. Code smells are not errors that prevent execution but rather symptoms of design choices that can accumulate as technical debt, increasing code complexity, fault-proneness, and the effort required for future modifications or extensions. Research indicates that their presence correlates with reduced software maintainability, higher change-proneness, and elevated bug introduction risks during development. They are often detected through manual code reviews, static analysis tools, or automated techniques such as metrics-based rules, search algorithms, and machine learning models, with Java being the most studied language (over 80% of cases). Common categories of code smells include bloaters (e.g., long methods or large classes that grow excessively), object-oriented abuses (e.g., refused bequest or alternative classes with similar functionality), change preventers (e.g., divergent change or shotgun surgery requiring widespread modifications), dispensables (e.g., duplicated code or comments that could be eliminated), couplers (e.g., feature envy, where a method accesses data from another class excessively), and more specialized types such as architectural or test smells. Refactoring techniques, such as extracting methods or introducing polymorphism, are commonly recommended to address them, promoting cleaner, more adaptable codebases. Systematic reviews of over 400 studies since 2000 highlight an exponential rise in research interest, particularly in automated detection, including recent applications of large language models as of 2025, though challenges remain in standardizing benchmarks and identifying beneficial "good smells."

Overview

Definition

A code smell is a surface indication that usually corresponds to a deeper problem in the system. These indications manifest in source code as symptoms of underlying design or implementation issues that can impede maintainability, extensibility, or readability, but they do not constitute functional bugs that cause the software to fail or produce incorrect results. The term was popularized in the 1999 book Refactoring: Improving the Design of Existing Code by Martin Fowler, with contributions from Kent Beck, where it describes characteristics hinting at opportunities for improvement without immediate breakage. Key characteristics of code smells include their subjective nature, often relying on the experience and judgment of developers to identify them, as what one team views as a smell may be acceptable in another context. They are typically quick to spot—such as a method exceeding a dozen lines or duplicated logic across multiple places—but require further investigation to confirm whether they signal a genuine issue. Unlike outright errors, code smells do not break functionality; instead, they represent potential technical debt that, if addressed through refactoring, can enhance code quality and long-term sustainability. Code smells differ from anti-patterns, which are broader, recurring poor solutions to common problems that actively promote ineffective designs, whereas smells are more subtle hints of possible degradation without necessarily forming a complete counterproductive pattern. They also stand apart from actual bugs, as smells affect non-functional aspects such as maintainability rather than causing runtime failures or incorrect outputs. General symptoms might include excessive duplication of code blocks or methods that grow overly complex, serving as cues for deeper structural review.
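The smell-versus-bug distinction can be made concrete with a short sketch (hypothetical function names; Python used here purely for illustration): every function below behaves correctly, so there is no bug, yet the duplicated validation is a smell because any future rule change must be repeated in two places.

```python
# Both functions behave correctly -- no bug -- but the duplicated
# validation logic is a code smell: a change to the rule (e.g. a new
# length limit) must now be made twice.

def register_user(name: str) -> str:
    if not name or len(name) > 30:          # duplicated validation
        raise ValueError("invalid name")
    return f"registered {name}"

def rename_user(name: str) -> str:
    if not name or len(name) > 30:          # duplicated validation
        raise ValueError("invalid name")
    return f"renamed to {name}"

# After refactoring, the shared rule lives in exactly one place:
def _validate_name(name: str) -> None:
    if not name or len(name) > 30:
        raise ValueError("invalid name")

def register_user_v2(name: str) -> str:
    _validate_name(name)
    return f"registered {name}"
```

The refactored version is behaviorally identical; only the structure improved, which is the defining property of removing a smell as opposed to fixing a bug.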

Historical Development

The concept of code smell emerged in the late 1990s as part of efforts to improve software maintainability through refactoring. The term was first coined by Kent Beck and elaborated by Martin Fowler, with contributions from John Brant, William Opdyke, and Don Roberts, in the seminal 1999 book Refactoring: Improving the Design of Existing Code. This work built on foundational ideas from object-oriented design, such as those introduced by Opdyke in his 1992 thesis on refactoring object-oriented frameworks, framing code smells as indicators of deeper structural issues that hinder long-term code health. The development of code smells was influenced by contemporaneous advances in collaborative and iterative programming practices. Ward Cunningham's co-founding of Extreme Programming (XP) with Kent Beck in the mid-1990s emphasized continuous refactoring as a core discipline to prevent code degradation, providing a practical context for identifying and addressing smells during frequent, small-scale changes. This alignment with XP's principles helped position code smells within broader agile workflows, where they served as heuristics for teams to collaboratively refine code without overhauling entire systems. Following its introduction, the code smell concept evolved rapidly in both practice and research. Martin Fowler's 1999 catalog identified 22 distinct smells, grouped into categories such as bloaters, object-orientation abusers, change preventers, and dispensables, establishing a foundational taxonomy for practitioners. By 2004, Michael Feathers extended these ideas in Working Effectively with Legacy Code, applying smells to the challenges of modifying large, pre-existing systems and advocating characterization tests to safely refactor them. The integration into agile methodologies accelerated adoption, with tools like SonarQube—first released in 2007 and supported by SonarSource since its founding in 2008—enabling automated detection and expanding accessibility beyond manual reviews.
Academic research in the 2010s further illuminated the concept's real-world impact, particularly through empirical analyses of open-source projects. Studies demonstrated high prevalence: Palomba et al. (2017) analyzed 395 releases of 30 systems and found that common smells like Long Method affected 84% of releases, while God Class appeared in 65%, underscoring their persistence across diverse codebases. These investigations, often using detection tools such as DECOR, quantified smells' diffusion and linked them to maintainability challenges, reinforcing the need for proactive refactoring in software evolution. Research interest has continued to grow exponentially into the 2020s, with over 400 studies since 2000 focusing on automated detection using machine learning and metrics, though challenges in standardizing benchmarks persist. As of 2024, systematic reviews highlight ongoing efforts to distinguish harmful smells from potentially beneficial "good smells."

Types of Code Smells

Application-Level Smells

Application-level code smells, often referred to as architectural smells, manifest as structural design flaws that extend across multiple classes or modules, undermining the application's overall modularity and integration through problems such as inadequate separation of concerns. These smells indicate deeper issues in how components interact, leading to increased coupling and reduced maintainability at the system scale. A key example is the God Object, also known as God Component, where a single central class or component handles an excessive number of responsibilities, centralizing too much logic and violating the single responsibility principle. This centralization often results in a class that grows disproportionately large, measured by metrics like lines of code (LOC) exceeding typical thresholds or encompassing numerous sub-modules. Such structures foster tight coupling, where modifications in the central component propagate changes throughout the system, exacerbating scalability challenges as the application grows. Another significant smell is Scattered Functionality, in which operations pertinent to one feature or domain are dispersed across unrelated components rather than localized. This dispersion reduces cohesion and fragments feature maintenance. In practice, it might appear in an e-commerce system where order processing logic is split between inventory, payment, and notification modules without clear ownership, leading to duplicated effort and error-prone integrations. Data Clumps represent yet another application-level concern, involving groups of related data items that are frequently passed together across modules without proper encapsulation into a cohesive structure. This smell highlights missed opportunities for abstraction, where primitive data bundles (e.g., customer ID, name, and address) traverse the application as ad-hoc parameters, obscuring intent and inviting inconsistencies.
For example, the same set of customer fields might be bundled in calls passed between several components of an application, complicating refactoring and increasing coupling through shared data dependencies. Other common architectural smells include Cyclic Dependency, where components form cycles in their dependency graph, complicating change propagation and testing, and Dense Structure, arising from excessive interconnections among components, often detected when the average degree of the dependency graph exceeds 5. To identify these smells, developers can employ metrics such as high cyclomatic complexity aggregated across interconnected modules, which signals overly complex control flows spanning the application. Additionally, dependency graphs revealing excessive interconnections, measured by coupling or instability ratios, help detect the tight coupling indicative of these issues. Tools analyzing package-level dependencies often quantify these, with thresholds like an average degree above 5 flagging dense structures.
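The Data Clumps remedy described above, encapsulating the bundle into a cohesive structure, can be sketched minimally (hypothetical names; Python for illustration):

```python
from dataclasses import dataclass

# Smell: the same three values always travel together as loose parameters,
# so every signature that touches them must repeat the clump.
def format_label(customer_id: int, name: str, address: str) -> str:
    return f"{customer_id}: {name}, {address}"

# Refactored: the clump becomes a single cohesive, immutable structure.
@dataclass(frozen=True)
class CustomerRef:
    customer_id: int
    name: str
    address: str

def format_label_v2(c: CustomerRef) -> str:
    return f"{c.customer_id}: {c.name}, {c.address}"
```

Making the dataclass frozen is a design choice: an immutable value object can be shared between modules without risk of one caller mutating another's data.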

Class-Level Smells

Class-level smells pertain to structural issues within individual classes that undermine their cohesion and adherence to principles like the single responsibility principle, under which a class should have only one reason to change. These smells manifest as overly complex or poorly organized classes, complicating maintenance, testing, and extension without affecting interactions across the broader system. They often arise during iterative development when new responsibilities are appended to existing classes rather than distributed appropriately, leading to decreased reusability and heightened cognitive load for developers. A prominent example is the Large Class smell, characterized by a class that has expanded to include an excessive number of instance variables, methods, or lines of code, often absorbing unrelated responsibilities over time. This violates object-oriented design by concentrating too much functionality in one place, making the class difficult to understand and modify. For instance, consider a Customer class that not only stores customer data but also handles validation, database persistence, and email notifications; such mixing reduces modularity, as changes to notification logic could inadvertently impact data validation. Identification cues include a high count of instance variables (typically exceeding 10-15) or methods (often more than 20), along with elevated metrics like high Weighted Methods per Class (WMC) or low Tight Class Cohesion (TCC). Consequences encompass code duplication, as developers replicate logic elsewhere to avoid the bloated class, and increased error risk during refactoring. This smell was cataloged in Martin Fowler's seminal work on refactoring, which emphasizes its role in degrading class maintainability. Another key class-level smell is Alternative Classes with Different Interfaces, where multiple classes implement nearly identical functionality but expose inconsistent method names or signatures, fostering subtle duplication and client confusion.
This arises when developers create parallel classes for similar concepts without unifying their APIs, such as one class using processOrder() and another handlePurchase() for the same order-handling logic, forcing clients to learn varied interfaces. Symptoms include classes with overlapping internal structures but divergent public interfaces, often detected through manual review or tools comparing method similarities. The result is a bloated codebase with redundant implementations, elevating maintenance costs as changes must propagate across mismatched classes. Refactoring typically involves renaming methods for consistency or extracting a common superclass to align interfaces, thereby enhancing readability and reducing duplication. Refused Bequest represents a misuse of inheritance, occurring when a subclass inherits methods and properties from its superclass but uses only a fraction of them, frequently overriding unused ones to throw exceptions or perform no action. This indicates an ill-suited hierarchy, as the subclass "refuses" much of the parent's bequest, breaching the Liskov substitution principle by not behaving substitutably. An example is a SpecializedEmployee subclass of Employee that inherits vacation and benefits methods but overrides them to do nothing, because it represents contractors to whom such features do not apply, leading to cluttered and misleading code. Cues involve subclasses employing fewer than half of the inherited methods or extensively overriding them without behavioral extension, verifiable through code inspection. Impacts include disorganized hierarchies that confuse developers about intended polymorphism and complicate future extensions, as the unused inheritance bloats the subclass unnecessarily. As part of Fowler's code smell catalog, addressing this often requires replacing inheritance with delegation or extracting a more precise superclass for the shared elements.
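A compact sketch of Refused Bequest and one standard remedy, replacing inheritance with delegation (hypothetical names; the contractor subclass from the text is rendered here as Contractor):

```python
class Employee:
    def base_pay(self) -> int:
        return 3000
    def vacation_days(self) -> int:
        return 25

# Smell: the subclass refuses most of its inheritance, overriding an
# inherited method only to disable it.
class Contractor(Employee):
    def vacation_days(self) -> int:
        raise NotImplementedError("contractors accrue no vacation")

# Remedy sketch (replace inheritance with delegation): keep only the
# behavior that is actually used, so the class no longer advertises
# an interface it refuses to honor.
class ContractorV2:
    def __init__(self) -> None:
        self._payroll = Employee()   # delegate, do not inherit
    def base_pay(self) -> int:
        return self._payroll.base_pay()
```

The delegating version no longer violates substitutability, because ContractorV2 makes no claim to be an Employee.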

Method-Level Smells

Method-level code smells refer to localized issues confined to individual methods or functions, where poor implementation choices degrade the readability, testability, and maintainability of specific code blocks. These smells typically stem from violations of principles like single responsibility, leading to overly complex control flows or excessive data handling within a single unit of execution. By focusing on intra-method concerns, they differ from broader structural issues at the class or application level, yet they can compound to hinder overall software evolution. A prominent method-level smell is the Long Method, characterized by a function that exceeds typical bounds, often more than 50-100 lines of code, thereby undertaking multiple responsibilities and obscuring its intent. This complexity makes it challenging to comprehend the method's logic at a glance, increases the risk of introducing bugs during modifications, and complicates testing due to intertwined concerns. For example, consider a 200-line method that parses user input, validates it against multiple rules, performs calculations, and formats the output for display; such a method buries potential errors in deeply nested conditionals and loops, violating the single responsibility principle. Another common smell is the Long Parameter List, where a method requires more than 4-5 arguments, signaling that it may be trying to handle too much external data or coordinate disparate functionalities. This not only burdens callers with remembering parameter order and meanings but also raises coupling concerns, as changes to the list propagate widely. An illustrative case is a calculateDiscount method taking parameters for customer ID, purchase amount, product category, loyalty status, coupon code, and expiration date; this setup complicates invocation and hints at underlying design flaws, such as missing encapsulating objects for related data.
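The parse/validate/compute/format shape of a Long Method can be shown in miniature (hypothetical report function; the 20% surcharge is invented for illustration). Extract Method splits the steps into focused helpers without changing behavior:

```python
# Smell (in miniature): one function parses, validates, computes, and formats.
def report(raw: str) -> str:
    parts = raw.split(",")
    if len(parts) != 2:
        raise ValueError("expected 'name,amount'")
    name, amount = parts[0].strip(), float(parts[1])
    if amount < 0:
        raise ValueError("amount must be non-negative")
    total = amount * 1.2          # illustrative 20% surcharge
    return f"{name}: {total:.2f}"

# Refactored with Extract Method: each step becomes a focused helper.
def _parse(raw: str) -> tuple[str, float]:
    parts = raw.split(",")
    if len(parts) != 2:
        raise ValueError("expected 'name,amount'")
    return parts[0].strip(), float(parts[1])

def _with_surcharge(amount: float) -> float:
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return amount * 1.2

def report_v2(raw: str) -> str:
    name, amount = _parse(raw)
    return f"{name}: {_with_surcharge(amount):.2f}"
```

Each helper can now be tested and modified in isolation, which is exactly the testability benefit the Long Method smell takes away.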
Switch Statements represent a smell involving unmanaged conditional branching, typically through lengthy switch or chained if-else constructs that duplicate logic across cases and cannot be extended without violating the open-closed principle. These structures often emerge when handling multiple types or states in a single method, leading to maintenance headaches as each new case requires altering the core logic. For instance, a processPayment method using a switch on payment types (credit, debit, and so on) to execute varying validation and execution steps repeats structure across cases and becomes brittle when a new type is added; it is better addressed through polymorphic dispatch. Detection of method-level smells often relies on indicators such as nesting deeper than three levels, which signals convoluted control flow, or the presence of duplicated logic fragments within the method, suggesting opportunities for extraction. These cues, combined with cyclomatic complexity above 10, highlight areas prone to hidden bugs and reduced test coverage. Tools and manual reviews can flag them by analyzing line counts, parameter arity, and conditional density to prioritize refactoring efforts.
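One lightweight alternative to the switch smell, sketched below with hypothetical payment types, is a dispatch table: adding a type means registering a handler rather than editing the branching logic (subclass-based polymorphic dispatch is another option, discussed under refactoring patterns):

```python
# Smell: type-based branching that must be edited for every new payment type.
def process_payment(kind: str, amount: float) -> str:
    if kind == "credit":
        return f"credit charge of {amount:.2f}"
    elif kind == "debit":
        return f"debit charge of {amount:.2f}"
    else:
        raise ValueError(f"unknown payment type: {kind}")

# Refactored: a dispatch table makes new types additive, not invasive.
HANDLERS = {
    "credit": lambda amount: f"credit charge of {amount:.2f}",
    "debit": lambda amount: f"debit charge of {amount:.2f}",
}

def process_payment_v2(kind: str, amount: float) -> str:
    try:
        return HANDLERS[kind](amount)
    except KeyError:
        raise ValueError(f"unknown payment type: {kind}") from None
```

Registering `HANDLERS["wallet"] = ...` extends the behavior without touching `process_payment_v2`, which is the open-closed property the if-else chain lacks.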

Detection Methods

Manual Identification Techniques

Manual identification techniques for code smells rely on human expertise to scrutinize source code for patterns indicative of deeper design issues. These approaches emphasize collaborative and systematic review processes, such as peer code reviews, in which developers examine each other's code line by line to spot structural anomalies like excessive complexity or tight coupling. Checklists derived from Martin Fowler's catalog of code smells, including bloaters like large classes and dispensables like unused parameters, provide a structured framework for reviewers to systematically probe for common pitfalls during walkthroughs. Walkthroughs often also incorporate readability metrics, such as cyclomatic complexity or method length, to guide discussions on whether code adheres to intuitive and maintainable structures. A typical step-by-step process begins with reading the code top-down to grasp the overall architecture, followed by flagging deviations from established principles such as single responsibility, where a class handling multiple concerns signals a breach. Reviewers then prioritize smells by prevalence; for instance, duplicated code is one of the most widespread issues in legacy systems, often comprising a significant portion of the codebase and complicating maintenance. This prioritization focuses effort on high-impact areas, such as refactoring repeated logic blocks that appear across modules. The effectiveness of manual identification varies with developer experience: junior developers tend to overlook subtle smells due to limited exposure to design patterns, while seniors exhibit higher sensitivity to contextual issues like feature envy, where methods excessively access another class's data. Training through pair programming enhances detection skills, as controlled experiments show pairs identifying more smells and a broader variety than solo reviewers, fostering shared intuition for recognizing inter-class dependencies.
These techniques offer advantages such as deep contextual understanding that surpasses purely metric-based analysis, allowing reviewers to consider business intent and project conventions. However, they suffer from subjectivity, as perceptions of what constitutes a smell differ, and from time consumption, with code reviews accounting for 10-30% of development effort depending on project scale. In practice, smells like duplicated code are frequently flagged in reviews but addressed in only about 86% of cases, highlighting the need for consistent guidelines to mitigate inconsistencies.

Automated Detection Tools

Static analysis tools form the cornerstone of automated code smell detection, enabling programmatic identification of potential design and maintainability issues in source code without executing it. Prominent examples include SonarQube, which supports dozens of programming languages and ships hundreds of rules specifically for detecting code smells such as duplicated code, large classes, and long methods. PMD, primarily focused on Java, offers hundreds of customizable rules targeting issues like god classes, feature envy, and excessive cyclomatic complexity, allowing users to tailor detection thresholds and patterns to project-specific needs. Checkstyle, also Java-centric, uses numerous rules to enforce coding conventions that indirectly reveal smells, such as improper method lengths or naming inconsistencies that signal broader risks. These tools operate by parsing source code into an abstract syntax tree (AST) and applying rule-based algorithms to compute structural metrics, including lines of code, cyclomatic complexity, and coupling between objects, which flag deviations indicative of smells. For instance, PMD's CyclomaticComplexity rule quantifies decision points in methods to identify overly complex logic, while SonarQube combines AST traversal with additional analyses to detect anti-patterns across languages. Empirical evaluations reveal varying effectiveness; one study, for example, reported a precision of 18% for one such tool in detecting certain quality issues on benchmark datasets. Integration into development workflows enhances their utility, with plugins available for continuous integration/continuous delivery (CI/CD) pipelines such as Jenkins, where analyses can be triggered on pull requests and quality gates can block merges if smells exceed thresholds. These tools have evolved continuously; as of 2025, SonarQube incorporates machine learning via AI CodeFix for context-aware smell prediction and remediation suggestions, reducing manual review overhead in large-scale repositories.
Machine learning-based approaches are also gaining traction for automated detection, using models trained on labeled datasets to identify smells with greater contextual awareness, though challenges in accuracy and generalizability persist. Despite their strengths, automated tools have limitations, particularly in detecting contextual smells that require domain knowledge or an understanding of design intent, as static methods alone yield low predictive accuracy for subtle maintainability issues without supplementary metrics. Tool outputs typically include violation reports with severity scores—critical, major, minor, or info—alongside remediation guidance; SonarQube, for example, generates dashboards listing smells by file and line, prioritized by impact.
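The metrics-based mechanism these tools share, parsing source into an AST and checking rule thresholds, can be illustrated with a toy detector (this is not any real tool's implementation, and the thresholds are illustrative only):

```python
import ast

# Illustrative thresholds in the spirit of rule-based tools like PMD.
MAX_LINES, MAX_PARAMS = 50, 5

def find_smells(source: str) -> list[str]:
    """Parse Python source into an AST and flag two method-level smells."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            length = node.end_lineno - node.lineno + 1
            params = len(node.args.args)
            if length > MAX_LINES:
                findings.append(f"{node.name}: Long Method ({length} lines)")
            if params > MAX_PARAMS:
                findings.append(
                    f"{node.name}: Long Parameter List ({params} params)")
    return findings

sample = "def f(a, b, c, d, e, f_, g):\n    return a\n"
print(find_smells(sample))   # flags the Long Parameter List smell
```

Real tools layer many more rules, metrics, and language front-ends on top of this same parse-then-measure loop.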

Refactoring Approaches

General Refactoring Principles

Refactoring code smells involves applying structured techniques to improve internal structure while ensuring no change to the observable external behavior of the system. A core principle, as outlined by Martin Fowler, is to preserve this behavior through a comprehensive test suite, often integrated with test-driven development (TDD) practices, which acts as a safety net verifying that transformations do not introduce defects. Fowler also emphasizes making small, incremental changes rather than large-scale overhauls, allowing developers to apply a series of behavior-preserving transformations that gradually improve code quality. Additionally, code smells are often framed in terms of the technical debt metaphor, originally coined by Ward Cunningham and expanded by Fowler, in which smells represent accumulated design flaws that incur interest in the form of increased maintenance costs if left unaddressed. Key rules guide the refactoring process to maintain safety and effectiveness. Refactorings must strictly avoid functional changes, focusing solely on internal structure, with version control systems used to track modifications and enable easy reversion if issues arise. Refactorings are often composed in sequences, where one transformation prepares the code for another; for instance, applying Extract Method to decompose a lengthy method before performing Inline Class to consolidate related classes. These rules keep refactoring a disciplined activity, separate from feature development and bug fixes. Supporting concepts reinforce consistent application of these principles. The Boy Scout Rule, popularized by Robert C. Martin, encourages developers to leave the code cleaner than they found it, promoting ongoing small improvements during routine work rather than deferred major efforts. Success in refactoring code smells is typically measured by reductions in complexity metrics, such as cyclomatic complexity or the maintainability index, observed before and after the process.
Empirical studies demonstrate tangible benefits: one analysis reported a 4% increase in the maintainability index, and in some cases a 5% rise in related measures, with internal quality metrics often showing positive shifts in cohesion and coupling. Another study found reductions in lack of cohesion in methods (LCOM) of 3-10% following specific refactorings such as Consolidate Conditional Expression. Common pitfalls include over-refactoring, where excessive changes lead to unnecessary code churn—increased additions, modifications, and deletions—potentially offsetting benefits through heightened developer effort and the risk of introducing errors. To mitigate this, refactorings should target smells identified by detection methods and be balanced against project timelines, ensuring improvements align with measurable gains in maintainability.
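The safety-net principle can be seen in miniature (hypothetical pricing function with an invented 10% bulk discount): the same behavioral assertions must stay green before and after applying Extract Method and Replace Temp with Query:

```python
# Before refactoring: a temporary variable and inline conditional.
def price_before(qty: int, unit: float) -> float:
    total = qty * unit
    if total > 100:
        total = total - total * 0.1   # 10% bulk discount
    return total

# After Extract Method + Replace Temp with Query: the structure changes,
# the observable behavior must not.
def _bulk_discount(total: float) -> float:
    return total * 0.1 if total > 100 else 0.0

def price_after(qty: int, unit: float) -> float:
    return qty * unit - _bulk_discount(qty * unit)

# The same assertions act as the regression guard for both versions.
for fn in (price_before, price_after):
    assert fn(5, 10.0) == 50.0        # below the discount threshold
    assert fn(20, 10.0) == 180.0      # 200 minus 10% discount
```

If a refactoring step ever changes one of these outputs, the safety net fails immediately and the step can be reverted via version control.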

Specific Refactoring Patterns for Smells

Refactoring patterns for code smells are tailored to the specific characteristics of each smell, often addressing issues at the method, class, or application level through targeted transformations that preserve behavior while improving structure. At the method level, the Long Method smell, where a single method performs too many tasks, is commonly addressed using Extract Method to break it into smaller, focused methods, and Replace Temp with Query to eliminate temporary variables in favor of dedicated querying methods. For class-level smells like the God Class, which centralizes excessive responsibilities in one class, effective patterns include Extract Class to split responsibilities into new classes and Move Method to relocate methods to more appropriate classes, thereby distributing functionality. Duplicated Code, a pervasive smell that appears at multiple levels, requires systematic elimination to prevent maintenance inconsistencies. The process begins by identifying identical or similar code fragments through tools or manual inspection; Extract Method then isolates the common logic into a shared method within the same class. In inheritance hierarchies, if duplication spans subclasses, follow up with Pull Up Method to move the extracted method to the superclass, ensuring single-point updates. For instance, in a hierarchy with two sibling classes handling similar validation, extracting the shared logic and pulling it up reduces redundancy while promoting reuse. A notable case study involves refactoring Switch Statements, a conditional smell that leads to inflexible type-based branching, by applying Replace Conditional with Polymorphism. This entails creating subclasses for each case in the switch—such as deriving SalariedEmployee and HourlyEmployee from a base Employee class—and moving the conditional logic into overridden methods like calculatePay, eliminating the switch entirely through dynamic dispatch.
This transformation, a classic refactoring pattern described by Martin Fowler, replaces procedural checks with object-oriented polymorphism, improving extensibility: new types can be added without modifying existing code. Integrated development environments (IDEs) facilitate this via automated refactoring menus, for example Eclipse's Refactor > Extract Class or IntelliJ IDEA's Refactor > Extract actions, which preview changes and handle dependencies. More recently, AI-powered tools leveraging large language models (LLMs) have emerged to automate and assist refactoring of code smells, using validated model output to identify complex smells and suggest or apply safe refactorings, reducing manual effort in large-scale systems. Successful application of these patterns yields measurable improvements, such as reduced coupling; empirical studies on open-source systems show that refactoring co-occurring smells like God Class with Long Method can decrease coupling metrics by up to 34%, enhancing maintainability. However, in large systems, challenges arise from ripple effects, where a single refactoring propagates unintended changes across interdependent components, necessitating comprehensive testing and incremental application to mitigate risks.
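A condensed sketch of Replace Conditional with Polymorphism, using the Employee example from the text (pay rates are invented for illustration; Python stands in for the Java-style original):

```python
# Smell: a type switch inside one routine.
def calculate_pay(kind: str, hours: float = 0.0) -> float:
    if kind == "salaried":
        return 5000.0               # illustrative monthly salary
    elif kind == "hourly":
        return 25.0 * hours         # illustrative hourly rate
    raise ValueError(f"unknown employee kind: {kind}")

# Refactored: each case becomes a subclass overriding calculate_pay,
# so dispatch happens dynamically instead of via the conditional.
class Employee:
    def calculate_pay(self) -> float:
        raise NotImplementedError

class SalariedEmployee(Employee):
    def calculate_pay(self) -> float:
        return 5000.0

class HourlyEmployee(Employee):
    def __init__(self, hours: float) -> None:
        self.hours = hours
    def calculate_pay(self) -> float:
        return 25.0 * self.hours
```

Adding a new employee type now means adding a subclass, leaving the existing classes untouched, which is how this pattern restores the open-closed property.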

Implications and Prevention

Effects on Software Maintainability

Code smells degrade software maintainability by introducing technical debt that accumulates over time, complicating modifications and increasing long-term development costs. Empirical analyses of large-scale systems reveal that classes containing code smells exhibit substantially higher change-proneness, with a median of 32 changes compared to 12 in non-smelly classes (p < 0.001, effect size d = 0.68). This effect intensifies with smell density: classes with three or more smells undergo a median of 54 changes, versus 12 for smell-free classes. Such patterns indicate that smells hinder evolvability, as they correlate with elevated fault-proneness: smelly classes average 9 faults compared to 3 in clean ones (p < 0.001, d = 0.41). Specific smells amplify these impacts through targeted quality erosion. For example, duplicated code fosters inconsistent updates during maintenance, raising the risk of defects as changes propagate unevenly across instances. Large classes, by concentrating responsibilities, impair comprehensibility and increase cognitive load for developers navigating the system. A systematic review of 18 studies confirms that 16 identify 24 smell types—such as God Class and Long Method—as significantly associated with bug-proneness, with God Class linked to 20% of bugs in analyzed projects. Inter-smell relations compound this: smell patterns explain 62% of the variance in some analyses, with co-located smells leading to approximately 30% longer maintenance durations in densely affected systems. Over extended periods, these degradations manifest in broader consequences, including heightened refactoring effort measured by affected lines of code and overall system metrics. Roughly 30% of maintenance problems trace to files with code smells, underscoring their role in fault-proneness (correlation coefficients up to r ≈ 0.4-0.6 across studies). Economically, code smells contribute to accumulated technical debt estimated at $1.52 trillion in the United States as of 2022, driven by poor design and reliability failures that inflate operational expenses industry-wide.
In aging codebases, this correlates with roughly 30% more bugs, perpetuating a cycle of reduced maintainability.

Strategies for Avoiding Code Smells

Adhering to established design principles during development is essential for preventing code smells from emerging. The SOLID principles, popularized by Robert C. Martin, provide a foundational framework for creating maintainable object-oriented code. Specifically, the single responsibility principle (SRP) counters the God Class smell by mandating that a class have only one reason to change, thereby limiting excessive responsibilities in a single entity. Empirical studies on software systems have shown that applying SOLID principles improves code understanding and maintainability. Similarly, domain-driven design (DDD) emphasizes modeling software around business domains, which improves cohesion and minimizes smells like Feature Envy or Data Clumps by aligning code structure with domain logic. Incorporating process-oriented strategies into the development workflow further mitigates the introduction of code smells. Code reviews using structured checklists that explicitly target common smells, such as Long Method or Duplicated Code, enable early detection and correction during integration. Test-driven development (TDD) enforces modularity by requiring tests before implementation, which naturally discourages overly complex structures and promotes cleaner, more focused code. Adopting modular architectures, such as microservices, reduces the prevalence of monolithic God Objects by decomposing systems into independent, bounded services with single responsibilities. Team practices play a crucial role in fostering a culture of proactive smell avoidance. Pair programming enables real-time collaboration, allowing developers to identify and refactor potential smells immediately, enhancing overall code quality. Continuous refactoring sprints in agile environments ensure ongoing improvement, preventing the accumulation of smells over time. Integrating linters into integrated development environments (IDEs) provides real-time feedback on stylistic and structural issues, helping developers address smells like Primitive Obsession before committing code.
Evidence from empirical research supports the effectiveness of these strategies. Literature reviews indicate that agile practices such as TDD and code reviews correlate with fewer code smells than traditional approaches, while also accelerating feature delivery. As of 2025, emerging machine learning and large language model techniques are increasingly used for automated code smell detection and refactoring suggestions, further aiding prevention. These preventive measures not only lower maintenance costs but also contribute to long-term software sustainability by addressing the root causes of smells during initial development.

  11. [11]
  12. [12]
    Does Your Architecture Smell? - Speaker Deck
    Aug 10, 2018 · In this talk, I present seven architecture smells including Feature concentration, God component, Cyclic dependency, Scattered Functionality, and Dense ...
  13. [13]
    An empirical investigation on the relationship between design and ...
    Objective The paper aims to study architecture smells characteristics, investigate correlation, collocation, and causation relationships between architecture ...
  14. [14]
    On the relation between architectural smells and source code changes
    Oct 27, 2021 · We detect architectural smells using the Arcan tool, which detects architectural smells by building a dependency graph of the system analyzed ...
  15. [15]
    Data Clump - Martin Fowler
    Jan 5, 2006 · Data clumps are primitive values that nobody thinks to turn into an object. The first step is to replace data clumps with objects and use the objects whenever ...
  16. [16]
    Data Clumps - Refactoring.Guru
    If you want to make sure whether or not some data is a data clump, just delete one of the data values and see whether the other values still make sense. If this ...
  17. [17]
    Automatic detection of Feature Envy and Data Class code smells ...
    Jun 1, 2024 · Code smells represent symptoms of poor design and implementation choices. The presence of code smells has been shown to be positively correlated ...
  18. [18]
    [PDF] Code smell
    What are code smells? • Fowler: “... certain structures in the code that ... • Long parameter list. • Message chain. • Switch statements. • Data class.
  19. [19]
    [PDF] Code smells in software: Review
    A long Parameter List is the fourth code smell. This smell makes parameter ... Divided [3]: • Switch Statements. • Refused Bequest. • temporary field ...
  20. [20]
    [PDF] Code Smells and Code Metrics - Karthik Vaidhyanathan
    Code Smells – Long Parameter List. 12. See the number of parameters that are ... Code Smells – Switch Statements (Conditional Complexity). 18. Too many ...
  21. [21]
    Best Practices for Identifying and Eliminating Code Smells
    Mar 19, 2024 · We discuss techniques that software development teams can use to not just identify code smells, but also eliminate them from their codebase.
  22. [22]
    Code Smell Detection: Complete Guide to Clean Code [2025]
    Sep 5, 2025 · Behavioral Code Smells. Manual Code Smell Detection Techniques. The “Fresh Eyes” Approach; Code Reading Sessions; The Rubber Duck Method ...Manual Code Smell Detection... · Automated Code Smell... · Advanced Code Smell...
  23. [23]
    Understanding Code Smell Detection in Software Development
    Feb 9, 2024 · Manual code smell detection involves visually inspecting the codebase and looking for patterns or characteristics that indicate the presence of ...
  24. [24]
    [PDF] The Prevalence of Code Smells in Machine Learning projects
    Our research set out to discover the most prevalent code smells in ML projects. We gathered a dataset of 74 open-source ML projects, installed their ...Missing: 2010s 20-50%
  25. [25]
    Identifying Code Smells with Collaborative Practices - IEEE Xplore
    These practices offer the opportunity for two or more developers analyzing the source code together and collaboratively reason about potential smells prevailing ...
  26. [26]
    What is Code Smell Detection? [2025 Guide UPDATED] - CodeAnt AI
    Rating 5.0 (7) Jul 22, 2025 · Code smell detection identifies early warning signs of code that feel 'off' or hint at deeper design flaws, before they become technical debt.Common Types Of Code Smells · 1. Long Method · Best Practices For Managing...
  27. [27]
    Time allocated to code reviews
    Nov 8, 2010 · As for big changes, code review can take from 10% to 30% of time, but it's worth that. I can say pair programming, when 2 programmers do edit ...How long should a code review be? [closed]What do you do when code review is just too hard?More results from softwareengineering.stackexchange.com
  28. [28]
    [PDF] Understanding Code Smell Detection via Code Review - arXiv
    Mar 21, 2021 · Our analysis found that 1) code smells were not commonly identified in code reviews,. 2) smells were usually caused by violation of coding ...
  29. [29]
    Refactoring
    Refactoring is a disciplined technique for restructuring an existing body of code, altering its internal structure without changing its external behavior.Missing: principles | Show results with:principles
  30. [30]
    Technical Debt - Martin Fowler
    May 21, 2019 · Technical Debt is a metaphor, coined by Ward Cunningham, that frames how to think about dealing with this cruft, thinking of it like a financial debt.Missing: smells | Show results with:smells
  31. [31]
    Refactoring with Codemods to Automate API Changes - Martin Fowler
    Codemods allow you to automate large-scale code changes with precision and minimal effort, making them especially useful when dealing with breaking API changes.
  32. [32]
    8. The Boy Scout Rule - 97 Things Every Programmer Should Know ...
    The Boy Scouts have a rule: “Always leave the campground cleaner than you found it.” If you find a mess on the ground, you clean it up regardless of who might ...
  33. [33]
    An Empirical Evaluation of Impact of Refactoring on Internal and ...
    Aug 5, 2025 · The objective of this study is to validate/invalidate the claims that refactoring improves software quality. The impact of selected refactoring ...
  34. [34]
    An empirical study to assess the effects of refactoring on software ...
    Sep 21, 2016 · An empirical study to assess the effects of refactoring on software maintainability ... observed which further improves the maintainability.
  35. [35]
    What Lies Beneath Hard Work: Code Churn - Stepsize AI
    Mar 31, 2021 · Code churn may also indicate internal team problems with communication where a high volume output is perceived as highly rewarded, at the ...
  36. [36]
    [PDF] An Empirical Study of Refactoring Challenges and Benefits at ...
    It is widely believed that refactoring improves software quality and developer productivity by making it easier to maintain and understand software systems [1].
  37. [37]
    Code refactoring | IntelliJ IDEA Documentation - JetBrains
    Aug 8, 2025 · Refactoring is a process of improving your source code without creating a new functionality. Refactoring helps you keep your code solid, dry, and easy to ...
  38. [38]
  39. [39]
    The Impact of Code Smells on Software Bugs: A Systematic ... - MDPI
    Based on evidence of 16 studies covered in this SLR, we conclude that 24 code smells are more influential in the occurrence of bugs relative to the remaining ...Missing: prevalence | Show results with:prevalence
  40. [40]
    Exploring the impact of inter-smell relations on software maintainability: An empirical study
    **Summary of Findings on Inter-Smell Relations and Maintenance Problems:**
  41. [41]
    To what extent can maintenance problems be predicted by code ...
    Code smells are indicators of poor coding and design choices that can cause problems during software maintenance and evolution. Objective. This study is aimed ...Missing: limitations | Show results with:limitations
  42. [42]
    Cost of Poor Software Quality in the U.S.: A 2022 Report - CISQ
    Dec 16, 2022 · Our 2022 update report estimates that the cost of poor software quality in the US has grown to at least $2.41 trillion, but not in similar proportions as seen ...
  43. [43]
    studying code representation techniques for ML-based God class ...
    Jul 25, 2025 · The God class code smell occurs when a single class takes on too many responsibilities, violating the Single Responsibility Principle (SRP); ...
  44. [44]
    Investigating the Impact of SOLID Design Principles on Machine ...
    Jun 11, 2024 · Understanding and Detecting Harmful Code​​ Code smells typically indicate poor design implementation and choices that may degrade software ...
  45. [45]
    Domain-Driven Design in software development: A systematic ...
    This Systematic Literature Review (SLR) aims to provide a comprehensive analysis of existing research studies that have used DDD for various purposes in ...
  46. [46]
    On Technical Debt And Code Smells: Surprising insights from ...
    Dec 23, 2021 · You know that your codebase has technical debt when it gets smelly. The notion of “code smells” was popularized by Martin Fowler. There are ...Missing: metaphor | Show results with:metaphor
  47. [47]
    (PDF) Test-Driven Development (TDD) and its Impact on Software ...
    Aug 18, 2025 · This empirical study aims to explore the relationship between TDD and software maintainability by analyzing the effect of TDD on the long-term ...
  48. [48]
    Catalog and detection techniques of microservice anti-patterns and ...
    This work catalogs recurring bad design practices known as anti-patterns and bad smells for microservice architectures, and provides a classification into ...
  49. [49]
    Pair programming: A peek into its benefits and drawbacks - Noibu
    Mar 21, 2024 · During pair programming developers critically review each other's code, thus helping them recognize the smells early and avoid them in future ...
  50. [50]
    What is a linter? | JetBrains Qodana
    A linter is a tool within static code analysis that examines source code to flag mistakes, anomalous code, probable bugs, stylistic errors, and anything that ...
  51. [51]
    (PDF) Research Trends, Detection Methods, Practices, and ...
    Oct 15, 2025 · Context: A code smell indicates a flaw in the design, implementation, or maintenance process that could degrade the software's quality and ...<|separator|>