
Test double

A test double is a generic term for a test-specific equivalent that replaces a real production component—known as a depended-on component (DOC)—when testing a system under test (SUT). This substitution provides the same interface as the original but with simplified or controlled behavior, allowing developers to isolate and verify the SUT's logic without relying on external dependencies such as databases, networks, or third-party services. The concept was formalized by Gerard Meszaros in his 2007 book xUnit Test Patterns: Refactoring Test Code, addressing inconsistencies in terminology across xUnit-family testing frameworks.

Test doubles serve critical purposes in unit and integration testing by enabling faster execution, greater reliability, and precise control over test conditions. They mitigate issues like slow performance from real components, undesirable side effects (e.g., actual data modifications), or unavailability in controlled test environments, thus allowing developers to focus on verifying specific behaviors of the SUT. Common motivations include verifying indirect inputs and outputs, simulating edge cases, and ensuring tests remain deterministic and repeatable, which are essential for maintaining code quality in agile and test-driven development (TDD) practices.

There are five primary types of test doubles, each tailored to different testing needs:
  • Dummy objects: Simple placeholders passed as parameters but never actually used, often to satisfy method signatures without influencing the test outcome.
  • Fake objects: Functional implementations with simplified logic, such as an in-memory database that mimics a real one but operates faster and without persistence.
  • Stubs: Provide predefined (canned) responses to calls from the SUT, controlling indirect inputs but not tracking usage.
  • Spies: Extend stubs by recording information about interactions, such as the number of method calls or arguments passed, to observe indirect outputs.
  • Mocks: Assert expectations on the SUT's interactions with the double, verifying that specific calls occur as anticipated and potentially failing the test if they do not.
These types can overlap in practice, and tools in modern testing frameworks (e.g., Mockito for Java or unittest.mock for Python) often support creating and configuring them programmatically to streamline test development.
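The distinctions above can be made concrete with minimal hand-written sketches in Python; the class and method names (FakeUserStore, SpyMailer, and so on) are hypothetical, chosen only for illustration:

```python
class DummyLogger:
    """Dummy: satisfies a constructor parameter; raises if actually used."""
    def log(self, message):
        raise AssertionError("Dummy was not expected to be called")

class FakeUserStore:
    """Fake: a working in-memory substitute for a real database."""
    def __init__(self):
        self._users = {}
    def save(self, user_id, name):
        self._users[user_id] = name
    def find(self, user_id):
        return self._users.get(user_id)

class StubRateProvider:
    """Stub: returns a canned answer, tracks nothing."""
    def current_rate(self):
        return 1.25

class SpyMailer:
    """Spy: records calls so the test can inspect them afterwards."""
    def __init__(self):
        self.sent = []
    def send(self, message):
        self.sent.append(message)

class MockAuditLog:
    """Mock: verifies an expected interaction, failing the test otherwise."""
    def __init__(self):
        self._calls = 0
    def record(self, entry):
        self._calls += 1
    def verify_called_once(self):
        assert self._calls == 1, "expected exactly one call to record()"

# Quick demonstration of each double in use:
store = FakeUserStore()
store.save("1", "Alice")

spy = SpyMailer()
spy.send("welcome")

mock = MockAuditLog()
mock.record("user created")
mock.verify_called_once()  # passes: exactly one interaction occurred
```

In practice the boundaries blur: the spy above could grow canned responses (becoming stub-like), and most mocking libraries produce objects that serve several of these roles at once.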

Fundamentals

Definition

A test double is a generic term for any object or component that stands in for a real dependency in automated testing, enabling the isolation of the unit under test from external influences. This allows developers to focus on verifying the behavior of the specific code module without interference from complex or unpredictable real-world dependencies, such as databases, networks, or third-party services. The term "test double" was coined by Gerard Meszaros in his 2007 book xUnit Test Patterns: Refactoring Test Code, where it serves as an umbrella concept encompassing various substitutes used in xUnit-style frameworks. Meszaros introduced this terminology to unify the diverse practices in test automation, drawing an analogy to stunt doubles in film, who perform risky actions on behalf of actors. Key characteristics of a test double include mimicking the interface of the real object it replaces while allowing controlled behavior to ensure test predictability and repeatability. Unlike production code, test doubles are explicitly designed for temporary use in testing environments and are not deployed in live systems. This distinguishes the broader category of test doubles from narrower terms like "mock," which refers specifically to a subtype that verifies interactions rather than being synonymous with the entire concept.

Historical Context

The concept of test doubles traces its roots to the 1990s, when unit testing practices began emphasizing dependency isolation to enable modular verification of components. In the Smalltalk community, early experimenters explored substitution techniques for external dependencies during unit tests, laying groundwork for isolating code behavior without full system integration. Similarly, in the emerging Java ecosystem, developers adopted ad-hoc faking methods to simulate interactions, driven by the need for faster feedback in iterative development cycles. These precursors marked a shift from monolithic testing toward more granular, isolation-focused approaches in object-oriented languages.

A pivotal milestone occurred in 2000 with the introduction of mock objects as a formalized technique for behavior verification, presented in the paper "Endo-Testing: Unit Testing with Mock Objects" by Tim Mackinnon, Steve Freeman, and Philip Craig at the XP2000 conference. This work, rooted in extreme programming (XP) principles, highlighted mocks as tools for specifying expected interactions, influencing subsequent testing strategies. Concurrently, Kent Beck's development of JUnit in the late 1990s, as part of the xUnit family, provided a foundational framework that encouraged the use of such substitutions in test-driven development (TDD), promoting a transition from informal faking to systematic patterns for reliable unit isolation. Beck's contributions, including his 2003 book "Test-Driven Development: By Example," further embedded these ideas in agile methodologies.

The term "test double" was formalized in 2007 by Gerard Meszaros in his book "xUnit Test Patterns: Refactoring Test Code," which unified diverse substitution patterns—such as stubs, mocks, and fakes—under a single umbrella to standardize terminology and practices across frameworks. This publication synthesized years of community experimentation, providing a shared vocabulary that clarified roles and reduced confusion in test design.
Post-2007, the concept gained widespread adoption within agile and TDD workflows, as evidenced by the proliferation of supporting tools; for instance, the Mockito framework for Java released its first version in 2008, simplifying mock creation and verification. Similarly, Python integrated unittest.mock into its standard library with version 3.3 in 2012, extending test double capabilities to a broader developer base and reinforcing structured patterns over ad-hoc implementations.

Role in Testing

Purposes and Benefits

Test doubles serve as substitutes for real objects or components during software testing, primarily to isolate the unit under test from external dependencies such as databases, APIs, or file systems. This isolation allows developers to focus exclusively on the logic of the unit without requiring a full system setup or dealing with the complexities and side effects of actual collaborators. By replacing these dependencies with controlled alternatives, test doubles enable testing in a simplified environment, ensuring that the unit's behavior can be verified independently of the broader application's state.

One key benefit of test doubles is the significant improvement in test execution speed. Real components like databases or network services often introduce delays due to I/O operations or resource constraints, whereas test doubles can simulate responses instantaneously. For instance, replacing a persistent database with an in-memory fake object has been shown to accelerate test runs by up to 50 times, facilitating faster feedback loops and enabling more frequent test executions in development workflows. This speed enhancement also supports parallel test execution and seamless integration into continuous integration/continuous deployment (CI/CD) pipelines, reducing overall build times.

Test doubles further enhance test reliability by promoting determinism and the ability to simulate challenging scenarios. By controlling inputs and outputs precisely, they eliminate variability from external factors like network latency or data inconsistencies, ensuring that tests produce consistent results across runs. This is crucial for comprehensive coverage, as it allows edge cases—such as error conditions or rare data states impossible or impractical to replicate with real objects—to be tested reliably. Additionally, test doubles bolster code maintainability by encouraging loose coupling through dependency inversion, making systems easier to refactor and extend.
In the context of test-driven development (TDD), test doubles enable incremental construction by allowing units to be tested and refined before their dependencies are fully implemented, thus supporting agile practices and reducing integration risks later in the process.
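As an illustration of this determinism, a hand-written stub can simulate both a normal response and a network failure instantly, with no real I/O; the WeatherReporter and StubWeatherApi names below are hypothetical:

```python
class StubWeatherApi:
    """Stub standing in for a real (slow, nondeterministic) HTTP client."""
    def __init__(self, temperature=None, error=None):
        self._temperature = temperature
        self._error = error

    def fetch_temperature(self, city):
        if self._error:
            raise self._error
        return self._temperature

class WeatherReporter:
    """Unit under test: formats a report from whatever the API returns."""
    def __init__(self, api):
        self._api = api

    def report(self, city):
        try:
            return f"{city}: {self._api.fetch_temperature(city)}C"
        except ConnectionError:
            return f"{city}: unavailable"

# Happy path and a simulated network failure, both instantaneous and repeatable:
assert WeatherReporter(StubWeatherApi(temperature=21)).report("Oslo") == "Oslo: 21C"
assert WeatherReporter(StubWeatherApi(error=ConnectionError())).report("Oslo") == "Oslo: unavailable"
```

The error path here would be awkward to trigger against a live service, but with a stub it runs identically on every execution.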

Integration with Unit Testing

Test doubles are integrated into the unit testing workflow primarily during the Arrange phase of the Arrange-Act-Assert (AAA) pattern, where dependencies are configured and replaced with doubles to isolate the unit under test before the Act phase executes the method and the Assert phase verifies outcomes. This placement ensures that external dependencies, such as databases or external services, do not influence the test execution, allowing focused validation of the unit's logic.

Isolation techniques for incorporating test doubles rely on dependency injection (DI) patterns, which facilitate runtime substitution of real objects with doubles through mechanisms like constructor injection, where dependencies are passed via the constructor; setter injection, where dependencies are assigned post-instantiation using setter methods; or interface-based injection, where abstractions define contracts that doubles implement. These approaches promote loose coupling by decoupling the unit from concrete implementations, enabling seamless swaps without altering the production code.

While test doubles are chiefly employed in unit tests to achieve complete isolation of individual components, they can also be extended to integration tests for partial isolation, where select dependencies are doubled to focus on subsystem interactions without full end-to-end involvement. This selective use maintains the speed and reliability of unit-level testing while probing limited integrations.

A representative example involves a service class that depends on a database client for data retrieval; in the unit test, the real client is replaced with a test double during the Arrange phase via constructor injection, allowing the test to simulate query responses and validate business logic without establishing actual database connections or data setup. This isolates the service's decision-making process, ensuring tests run efficiently and deterministically.
Successful integration of test doubles in unit tests contributes to high code coverage on isolated units, indicating comprehensive exercise of the logic without external interference, while also fostering low coupling by enforcing explicit dependencies that reduce inter-module entanglement. These outcomes enhance maintainability and support the benefits of test doubles, such as faster feedback loops in development cycles.
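A minimal sketch of this AAA-plus-constructor-injection pattern in Python, using the standard library's unittest.mock for the double; the OrderService class and its method names are hypothetical:

```python
from unittest.mock import Mock

class OrderService:
    """Unit under test: depends on a database client injected via the constructor."""
    def __init__(self, db_client):
        self._db = db_client

    def is_premium(self, customer_id):
        row = self._db.find_customer(customer_id)
        return row is not None and row["tier"] == "premium"

# Arrange: replace the real database client with a configured double.
db = Mock()
db.find_customer.return_value = {"tier": "premium"}
service = OrderService(db)

# Act: exercise the unit under test.
result = service.is_premium(42)

# Assert: verify the outcome and, optionally, the interaction.
assert result is True
db.find_customer.assert_called_once_with(42)
```

Because the double is injected through the constructor, the production code needs no test-specific hooks; the real client and the mock are interchangeable behind the same call surface.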

Classification of Test Doubles

Dummies and Stubs

Dummies represent the simplest form of test doubles, serving as placeholder objects that are passed to methods or functions to satisfy parameter requirements without any expectation of interaction or behavior. These objects are inert and contain no functionality, often implemented as null references, empty instances, or minimal structures that merely compile and pass type checks. According to Martin Fowler, dummy objects are "passed around but never actually used," making them ideal for scenarios where a dependency is required by the system under test (SUT) but plays no role in the test's assertions or logic.

In contrast, stubs provide predefined (canned) responses to invocations, allowing the SUT to proceed through specific execution paths while simulating controlled inputs or outputs. Unlike dummies, stubs are responsive to calls within the scope of the test but do not track or verify interactions; they simply return fixed values, such as a constant result for a computation, or throw an exception to test error handling. As outlined in xUnit Test Patterns, stubs replace real dependencies to "control indirect inputs," enabling isolated verification of the SUT's behavior under predictable conditions without relying on external systems.

The key differences lie in their reactivity and purpose: dummies offer no responses and exist solely to fulfill signatures, remaining completely passive, whereas stubs are programmed to deliver consistent outputs but remain non-verifying, adhering to fixed behavior without adaptability or call logging. Both types promote test isolation by substituting real components, but dummies require minimal effort for unused parameters, while stubs demand configuration for response simulation.

A common way to create a stub in Java involves implementing an interface with hardcoded returns, as shown in this example for a UserService:
public interface UserService {
    User findUser(int id);
}

public class UserServiceStub implements UserService {
    @Override
    public User findUser(int id) {
        return new User(id, "Stub User");
    }
}

// In a test:
UserService userService = new UserServiceStub();
User user = userService.findUser(1);
assertEquals("Stub User", user.getName());
This allows testing of client code that depends on UserService without invoking the actual implementation, focusing on output validation rather than side effects. Dummies and stubs are particularly suited for input-focused tests where the emphasis is on controlling the SUT's indirect inputs to verify direct outputs, rather than monitoring collaborations, thus simplifying test setup and maintenance in early development stages or when real dependencies are unavailable.
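For the dummy case, a short Python counterpart may help; Report and DummyLogger are hypothetical names, and the logger exists only to satisfy the constructor signature:

```python
class DummyLogger:
    """Inert placeholder: passed in but never exercised by this test."""
    pass

class Report:
    """SUT: formats a title; the logger is required by the constructor but
    only used on code paths this test never reaches."""
    def __init__(self, title, logger):
        self.title = title
        self.logger = logger

    def heading(self):
        return self.title.upper()

report = Report("quarterly results", DummyLogger())
assert report.heading() == "QUARTERLY RESULTS"
```

If the dummy were ever called, the test would fail loudly (here with an AttributeError), which is exactly the desired behavior: a dummy that gets used signals a misunderstanding of the SUT's dependencies.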

Mocks and Spies

Mocks and spies represent active forms of test doubles that go beyond merely providing predefined responses, instead focusing on verifying the interactions between the SUT and its dependencies. These doubles enable behavioral verification, ensuring that components adhere to expected contracts by checking not only the outcomes but also the manner in which methods are invoked, such as the sequence, frequency, and parameters of calls. This approach is particularly valuable in isolating units for testing while confirming collaborative behaviors in object-oriented designs.

Mocks are fully fabricated objects pre-programmed with strict expectations about the calls they should receive, including specific sequences, argument matching, and invocation counts; if these expectations are not met, the test fails, often by throwing an exception. They verify both the state resulting from interactions and the behavior itself, making them suitable for defining and enforcing precise interaction protocols. For instance, a mock might expect a save method to be called exactly once with a particular entity object, failing the test if the call is absent or mismatched.

In contrast, spies wrap real objects to observe and record invocations without fundamentally altering their underlying behavior, allowing most calls to delegate to the actual implementation while tracking details like call counts and arguments. This partial mocking capability makes spies ideal for scenarios where the full real object's logic is desired, but specific interactions need verification, such as monitoring method calls on a live instance during integration-style unit tests. For example, a spy on an email service could record the number of messages sent while still processing them normally.
The primary differences lie in their fabrication and enforcement: mocks are entirely simulated with rigid expectations that dictate allowable interactions, whereas spies are observational wrappers that typically delegate to real objects and lack predefined failure conditions for unexpected calls. Mocks promote strict behavioral specification from the outset, while spies offer flexibility for verifying subsets of behavior in otherwise functional systems.

Verification in both mocks and spies commonly employs assertion mechanisms like "verify" methods to inspect recorded interactions, checking aspects such as call counts, order, or parameter values. In the Mockito framework for Java, this is achieved via syntax like verify(mock).methodCall(expectedArgs), which asserts that the specified method was invoked with the given arguments. Similar capabilities exist in JavaScript's Sinon.JS, where spies provide assertions like spy.calledOnce or spy.calledWith(args) to confirm invocation details.

Mocks and spies are particularly employed in contract testing to ensure components interact correctly, such as verifying that a service invokes a repository exactly once under defined conditions, thereby validating adherence to expectations without relying on external systems. This usage supports mockist TDD, where interaction verification isolates units and detects integration issues early.
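In Python, the standard library's unittest.mock covers both roles: a bare Mock acts as a mock whose assertion methods fail on mismatched interactions, while Mock(wraps=...) behaves like a spy by delegating calls to a real object and recording them. The EmailService class below is a hypothetical collaborator:

```python
from unittest.mock import Mock

class EmailService:
    """Real collaborator; a spy wraps it so calls still execute."""
    def send(self, to, body):
        return f"sent to {to}"

# Mock: fully fabricated; the assertion fails the test on any mismatch.
mock_repo = Mock()
mock_repo.save("order-1")
mock_repo.save.assert_called_once_with("order-1")  # raises AssertionError if violated

# Spy: wraps the real object, delegating calls while recording them.
spy = Mock(wraps=EmailService())
result = spy.send("a@example.com", "hi")
assert result == "sent to a@example.com"   # the real logic still ran
assert spy.send.call_count == 1            # and the interaction was recorded
```

The same verification API applies to both, which reflects the point above: the distinction is less about tooling than about whether behavior is fabricated (mock) or observed on a live implementation (spy).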

Fakes

Fakes are simplified, working implementations of production objects used as test doubles, providing functional approximations that mimic real behavior without the full complexity or external dependencies of the actual components. Unlike stubs, which return predefined responses without performing operations, fakes execute logic to deliver realistic outcomes, often operating entirely in memory to avoid side effects like network calls or database writes. For instance, a fake might implement core algorithms but omit nonessential features, error handling for edge cases, or integration with external systems, ensuring self-consistent interactions during tests.

These test doubles are particularly useful when stubs prove too simplistic for validating algorithms that require some form of data persistence, state, or sequencing, yet mocks impose overly rigid expectations on interactions. Fakes bridge this gap by allowing tests to exercise more authentic flows, such as simulating persistence without the overhead of a real database, which can accelerate test execution significantly; for example, an in-memory database fake might speed up tests by up to 50 times compared to a full database. They are ideal for scenarios where the depended-on component is slow, unavailable during development, or too complex to integrate fully in isolation.

Common examples include a fake email sender that logs messages to a console or in-memory list instead of transmitting them via SMTP, enabling tests to verify message content and formatting without actual delivery. Similarly, a fake HTTP client might use hardcoded or file-based responses to simulate interactions, allowing evaluation of request handling logic without network latency. These implementations maintain higher fidelity to production behavior than non-functional doubles, supporting reusable setups across multiple scenarios. While fakes demand more initial setup effort than stubs due to their operational code, they offer greater realism, reducing the risk of tests passing in isolation but failing in production.
However, this added complexity introduces trade-offs, such as potential subtle defects if the fake's shortcuts diverge from production realities, and fakes provide less precise control over outputs compared to mocks. In the spectrum of test doubles, fakes occupy a middle ground: more sophisticated than dummies or stubs, which focus on placeholders or canned data, but simpler and less resource-intensive than full production objects, often promoting reusability to enhance maintainability.
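The fake email sender example can be sketched in Python; FakeEmailSender and WelcomeFlow are hypothetical names, and the validation rule is illustrative:

```python
class FakeEmailSender:
    """Fake: a working implementation that records messages in an in-memory
    outbox instead of transmitting them over SMTP."""
    def __init__(self):
        self.outbox = []

    def send(self, to, subject, body):
        if "@" not in to:
            raise ValueError(f"invalid recipient: {to}")
        self.outbox.append({"to": to, "subject": subject, "body": body})

class WelcomeFlow:
    """SUT: composes and sends a welcome email on registration."""
    def __init__(self, sender):
        self._sender = sender

    def register(self, email, name):
        self._sender.send(email, "Welcome!", f"Hello {name}")

sender = FakeEmailSender()
WelcomeFlow(sender).register("ada@example.com", "Ada")

assert sender.outbox[0]["subject"] == "Welcome!"
assert "Ada" in sender.outbox[0]["body"]
```

Unlike a stub, this fake performs real work (validation, accumulation of state), so the same instance can back many scenarios, including multi-message flows.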

Implementation Strategies

Manual Creation

Manual creation of test doubles involves hand-coding substitute objects that mimic the behavior of real dependencies in unit tests, without relying on external libraries or frameworks. This approach is particularly useful in simple or educational contexts where full control over the double's implementation is desired, allowing developers to understand the underlying mechanics of isolation testing. According to Gerard Meszaros in xUnit Test Patterns, test doubles are created to provide the same interface as the depended-on component (DOC) while enabling controlled interactions during testing.

Basic techniques for manual creation include subclassing an existing class or implementing an interface to override specific methods. For instance, in object-oriented languages, a developer can define a subclass that inherits from the real class and replaces complex operations with fixed responses, such as returning predefined values for queries. This is exemplified in C# by implementing an interface like IShopDataAccess with a class that hardcodes return values for methods like GetProductPrice. Anonymous constructs, such as lambdas or anonymous inner classes in languages like Java, can also be used for quick, one-off doubles, enabling inline creation of simple stubs without defining full classes. These methods ensure the test double adheres to the DOC's contract while simplifying test setup.

The step-by-step process for manual creation begins with identifying the interface or class that the system under test (SUT) depends on. Next, create a substitute class or object that implements this contract, defining canned responses, such as fixed returns for stubs, or basic state tracking for spies. Then, inject the test double into the SUT during setup, replacing the real dependency via constructor parameters or setters. Finally, exercise the SUT and verify outcomes, ensuring the double's behavior supports the test's assertions without external side effects. This process promotes test isolation but requires careful alignment with the DOC's interface to avoid inconsistencies.
Manual creation offers full control over the test double's logic and incurs no additional dependencies, making it ideal for small projects or when learning test isolation techniques. However, it can be verbose and error-prone for complex scenarios, as hand-coding expectations or verifications increases maintenance effort and risks inconsistencies with the real DOC's evolution. For example, updating a stub's responses manually across multiple tests demands more time than automated alternatives. Despite these drawbacks, it excels in environments where framework overhead is undesirable.

A representative example is a stub for a user repository that returns predefined users, as shown in the following Java code:
import java.util.HashMap;
import java.util.Map;

interface UserRepository {
    User findById(String id);
}

class StubUserRepository implements UserRepository {
    private final Map<String, User> cannedUsers = new HashMap<>();

    StubUserRepository() {
        // Predefine canned responses
        cannedUsers.put("123", new User("Alice", "alice@example.com"));
    }

    @Override
    public User findById(String id) {
        return cannedUsers.getOrDefault(id, null);
    }
}

// In test setup
UserRepository stubRepo = new StubUserRepository();
UserService sut = new UserService(stubRepo);
User result = sut.getUserById("123");
// Assert result equals expected User
This stub provides a simple, hardcoded response for testing user retrieval logic in isolation.

Framework-Based Approaches

Framework-based approaches to creating test doubles leverage specialized libraries that automate the generation, configuration, and verification of mocks, stubs, and other substitutes, reducing boilerplate code and enhancing test maintainability across various programming languages. These tools often integrate seamlessly with testing frameworks and dependency injection (DI) systems, allowing developers to focus on test logic rather than manual object manipulation. By providing declarative syntax and runtime interception, they enable dynamic behavior definition without altering production code.

In the Java ecosystem, Mockito stands out as a widely adopted library for creating mocks and spies, utilizing annotations like @Mock to automatically inject mock instances into test classes via frameworks such as JUnit. This annotation-driven approach simplifies setup by leveraging Java's reflection capabilities to wire dependencies without explicit instantiation. Complementing Mockito, JMock emphasizes behavioral verification through expectation-based syntax, where developers define interaction sequences on mocks and assert their fulfillment at test completion, promoting stricter contract testing.

Python's standard library includes unittest.mock, a built-in module that supports patching, temporarily replacing objects in a namespace with Mock instances, to isolate units under test without external dependencies. For enhanced integration with the pytest framework, pytest-mock extends this functionality by providing a mocker fixture that automates patching within test fixtures, allowing concise setup and teardown for spies and stubs in pytest workflows.

In JavaScript and Node.js environments, Sinon.js offers a versatile toolkit for comprehensive test doubles, including stubs for predefined responses, spies for call tracking, and fakes for lightweight implementations of complex objects, all operable across browser and server-side tests.
Jest, a popular all-in-one testing suite, provides jest.fn() for creating inline mock functions that capture invocations and return values on the fly, streamlining asynchronous testing with built-in assertions. For .NET applications, the Moq library facilitates dynamic mock creation using a LINQ-inspired fluent syntax, where matchers like It.IsAny<T>() match any argument of type T during setup, enabling expressive stubbing of interfaces and abstract classes in unit tests. This approach exploits .NET's expression trees for verifiable, type-safe configurations.

Emerging cross-language trends include AI-assisted mocking tools that generate test doubles from code analysis or descriptions, accelerating setup in large codebases. Examples include Keploy, an open-source tool that uses recorded traffic and AI to generate mocks and stubs for unit, integration, and API tests, and Diffblue Cover, which automates unit test creation, including mocks, for Java applications.
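As a small illustration of the patching style these libraries support, Python's built-in unittest.mock can temporarily replace a method with a stubbed response and restore it automatically; the PriceClient class and quote function are hypothetical:

```python
from unittest.mock import patch

class PriceClient:
    """Production dependency: would normally perform a network call."""
    def fetch(self, symbol):
        raise RuntimeError("would hit the network")

def quote(client, symbol):
    """SUT: formats a price fetched from the client."""
    return f"{symbol}: {client.fetch(symbol):.2f}"

# Temporarily replace fetch with a stub that returns a canned value.
with patch.object(PriceClient, "fetch", return_value=101.5) as fake_fetch:
    assert quote(PriceClient(), "ACME") == "ACME: 101.50"
    fake_fetch.assert_called_once_with("ACME")
# Outside the with-block, the original method is restored automatically.
```

The context-manager form guarantees cleanup even if the test fails, which is one reason framework-based patching is less error-prone than manually swapping attributes in and out.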

Challenges and Best Practices

Common Pitfalls

One common pitfall in using test doubles is over-specification, where developers define excessive expectations or behaviors in mocks, such as verifying the exact order or format of arguments passed to a mock. This leads to fragile tests that fail due to minor, unrelated changes in the production code, like reordering parameters in a method call.

Test brittleness arises when test doubles couple tests too closely to implementation details of the system under test, requiring frequent updates to mock setups during refactoring. For instance, altering the sequence of method calls in the code can break multiple tests, increasing maintenance overhead and reducing overall test reliability.

Incomplete isolation occurs when not all external dependencies are replaced with appropriate test doubles, allowing real components like databases or APIs to influence test outcomes. This results in non-deterministic tests that may pass or fail based on external factors, such as network latency or database state, undermining the isolation benefits intended by test doubles.

Performance issues can emerge from excessive use of complex fakes or mocks, which may introduce computational overhead in test setups, slowing down execution. Conversely, underutilizing test doubles in favor of real dependencies can lead to protracted test runs, particularly in integration-heavy scenarios involving I/O operations.
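The over-specification pitfall can be seen in a short Python sketch using unittest.mock; the notify function and its message text are hypothetical:

```python
from unittest.mock import Mock

def notify(mailer, user):
    # Implementation detail: the greeting text may change during refactoring.
    mailer.send(user, "Hello, valued customer!")

mailer = Mock()
notify(mailer, "ada")

# Brittle: pins the exact message text, so any copy change breaks the test
# even though the behavior that matters (notifying the right user) is intact.
mailer.send.assert_called_once_with("ada", "Hello, valued customer!")

# More robust: verify the essential interaction, not incidental details.
(recipient, _message), _kwargs = mailer.send.call_args
assert recipient == "ada"
```

Both assertions pass today, but only the second survives a harmless rewording of the greeting, which is the practical difference between specifying behavior and specifying implementation.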

Guidelines for Effective Use

When selecting test doubles, the choice should align with the specific needs of the test scenario to ensure isolation without introducing unnecessary complexity. Dummies are ideal as simple placeholders in parameters where no behavior or data is required from the dependency, preventing null reference issues while keeping tests focused on the SUT. Stubs suit tests that need predefined responses from external components, such as returning fixed values to simulate database queries without actual I/O. Mocks are appropriate for verifying interactions, like ensuring a method is called with correct arguments during collaboration between objects. Fakes provide lightweight, working implementations for scenarios requiring realistic but simplified behavior, such as an in-memory store mimicking a full database.

A key balance rule is to mock only external dependencies, such as third-party APIs or services, to isolate the unit under test from unpredictable or slow resources; avoid mocking internal methods or components you own, as this can lead to over-testing and brittle suites that break with minor refactoring. This approach maintains test reliability by focusing verification on observable behavior rather than implementation details.

For maintenance, keep test doubles as simple as possible to minimize cognitive overhead and ease updates, documenting their expected behaviors and assumptions in comments or test names to facilitate team collaboration. Refactor tests in tandem with production code changes to preserve alignment and prevent accumulation of outdated doubles that could obscure true defects. Periodically verify the fidelity of test doubles by comparing their outputs against real objects in integration tests or smoke checks, ensuring they accurately represent production behavior without diverging over time due to untracked changes in dependencies.
Practices such as the "humble object" pattern, where complex, hard-to-test components like user interfaces are separated into thin wrappers that delegate logic to pure, testable objects, enhance modularity and simplify the use of test doubles. Integrating with contract testing tools like Pact further supports doubles by generating verifiable pacts from consumer tests, ensuring provider compatibility without full end-to-end runs. In AI-assisted development, additional care is needed to ensure test doubles do not encode biases present in AI-generated test data, and to validate mocked behaviors against real implementations. These strategies expand on traditional caveats by targeting metrics such as test flakiness reduction through consistent double behaviors.
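A compact Python sketch of the humble object pattern; SignupLogic, SignupDialog, and the validation rule are hypothetical:

```python
class SignupLogic:
    """Testable core: pure validation logic with no UI code."""
    def validate(self, email):
        return "@" in email and "." in email.split("@")[-1]

class SignupDialog:
    """Humble wrapper: a thin shell that only delegates to the logic object.
    In production, show_error would be a UI-toolkit call."""
    def __init__(self, logic, show_error):
        self._logic = logic
        self._show_error = show_error

    def submit(self, email):
        if not self._logic.validate(email):
            self._show_error("Invalid email address")
            return False
        return True

# The logic is tested directly; the humble dialog needs only a trivial double
# (here, a plain list's append standing in for the error display).
errors = []
dialog = SignupDialog(SignupLogic(), errors.append)
assert dialog.submit("ada@example.com") is True
assert dialog.submit("not-an-email") is False
assert errors == ["Invalid email address"]
```

Because all branching lives in SignupLogic, the hard-to-test UI layer shrinks to a wrapper so thin it barely needs tests at all.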

References

  1. [1]
    Test Double at XUnitPatterns.com
    ### Summary of Test Double from http://xunitpatterns.com/Test%20Double.html
  2. [2]
    Test Double - Martin Fowler
    Jan 17, 2006 · Test Double is a generic term for any case where you replace a production object for testing purposes. There are various kinds of double ...
  3. [3]
    Test Double Tutorial: Definition, Types, and Best Practices | ZetCode
    Definition of Test Double ... Gerard Meszaros introduced the most widely recognized classification system for test doubles in his book "xUnit Test Patterns.<|control11|><|separator|>
  4. [4]
    The definitive guide to test doubles on Android — Part 1: Theory
    May 23, 2022 · The concept of test doubles was created by Gerard Meszaros in the book XUnit Test Patterns: Refactoring Test Code and refined by many other ...
  5. [5]
    Learn the differences of Test Doubles | Mercari Engineering
    Apr 22, 2022 · What is Test Doubles ; Stubs, ; Mocks, ; Spies, ; Fakes and ; Dummies. They were originally defined by Gerard Meszaros in xUnit Test Patterns. (Test ...
  6. [6]
    Mocks Aren't Stubs - Martin Fowler
    Jan 2, 2007 · In this article I'll explain how mock objects work, how they encourage testing based on behavior verification, and how the community around them uses them.
  7. [7]
    [PDF] Endo-Testing: Unit Testing with Mock Objects
    Endo-Testing: Unit Testing with Mock Objects. Tim Mackinnon, Steve Freeman ... Expectation classes [Mackinnon 2000], which makes it quick to write many types of ...
  8. [8]
    xUnit Test Patterns
    Gerard Meszaros's xUnit Test Patterns distills and codifies the crucial meta-knowledge to take us to the next level.
  9. [9]
    Eradicating Non-Determinism in Tests - Martin Fowler
    Apr 14, 2011 · The primary benefit of having automated tests is that they provide bug detection mechanism by acting as regression tests1. When a regression ...
  10. [10]
    The Arrange, Act, and Assert (AAA) Pattern in Unit Test Automation
    Jan 17, 2025 · The AAA unit testing pattern supports TDD (Test-Driven Development) by imposing an explicit structure for your tests. Each unit test that ...
  11. [11]
  12. [12]
    Best practices for writing unit tests - .NET - Microsoft Learn
    Mar 22, 2025 · This article describes some best practices for designing unit tests to support your .NET Core and .NET Standard projects.Missing: doubles | Show results with:doubles
  13. [13]
    Dependency injection - .NET - Microsoft Learn
    Oct 21, 2025 · Dependency injection in .NET is a built-in part of the framework, along with configuration, logging, and the options pattern.
  14. [14]
    Dependency Injection and Unit Testing - JavaRanch
    Dependency Injection sets relations between instances, injecting dependencies from the outside, and makes unit testing very flexible.
  15. [15]
    Tests doubles and dependency injection
    Test doubles let you write unit tests in isolation from other bits of code · Test doubles require dependency injection to be able to replace real parts of your ...Missing: integration AAA
  16. [16]
    Integration Test - Martin Fowler
    Jan 16, 2018 · Integration tests determine if independently developed units of software work correctly when they are connected to each other.
  17. [17]
  18. [18]
  19. [19]
    Simplifying Testing with Stub Implementations - Java Design Patterns
    Programmatic Example of Service Stub Pattern in Java. We demonstrate the ... xUnit Test Patterns: Refactoring Test Code.
  20. [20]
    Mockito framework site
    Mockito is a tasty mocking framework for Java unit tests, with a clean and simple API, allowing you to write beautiful tests.
  21. [21]
    Sinon.JS - Standalone test fakes, spies, stubs and mocks for ...
    Standalone test spies, stubs and mocks for JavaScript. Works with any unit testing framework.
  22. [22]
  23. [23]
    Fake Object at XUnitPatterns.com
    Summary of Fake Object from XUnitPatterns.com
  24. [24]
    Unit Testing: Exploring The Continuum Of Test Doubles
    A good mock library will create mocks that are easy to configure and that act as test spies at the same time. The line total example that I've shown ...
  25. [25]
    Techniques for Using Test Doubles - Software Engineering at Google
    Test doubles are crucial to engineering velocity because they can help comprehensively test your code and ensure that your tests run fast.
  26. [26]
  27. [27]
    jMock - An Expressive Mock Object Library for Java
    JMock is a library that supports test-driven development of Java code with mock objects. Mock objects help you design and test the interactions between ...
  28. [28]
    unittest.mock — mock object library — Python 3.14.0 documentation
    unittest.mock is a library for testing in Python. It allows you to replace parts of your system under test with mock objects and make assertions about how they ...
  29. [29]
    pytest-mock documentation
    This pytest plugin provides a mocker fixture which is a thin-wrapper around the patching API provided by the mock package.
  30. [30]
    Mock Functions - Jest
    Jun 10, 2025 · Mock Functions · Using a mock function · .mock property · Mock Return Values · Mocking Modules · Mocking Partials · Mock Implementations · Mock ...
  31. [31]
    devlooped/moq: The most popular and friendly mocking ... - GitHub
    Moq (pronounced "Mock-you" or just "Mock") is the only mocking library for .NET developed from scratch to take full advantage of .NET Linq expression trees and ...
  32. [32]
    (PDF) AI Agents in Software Testing and Test Automation
    Mar 15, 2025 · This book explores Agent AI, a subfield of AI that utilizes autonomous, intelligent systems for software testing.
  33. [33]
    Injecting Mockito Mocks in to Spring Beans | Baeldung
    Jul 18, 2024 · In this tutorial, we'll discuss how to use dependency injection to insert Mockito mocks into Spring Beans for unit testing.
  34. [34]
    Testing with Dagger
    This document explores some strategies for testing applications built with Dagger. Replace bindings for functional/integration/end-to-end testing.
  35. [35]
    Isolating Code Under Test with Microsoft Fakes - Visual Studio ...
    Jul 10, 2025 · Learn how Microsoft Fakes helps you isolate the code you are testing by replacing other parts of the application with stubs or shims.
  36. [36]
    Top 5 Testing Challenges Developers Will Face in 2025 (and How ...
    Jan 14, 2025 · Developers need to create unit tests that catch these biases early using tools to mock different scenarios and control test environments.
  37. [37]
    Mocking types that you own - Vladimir Khorikov
    This guideline is about writing your own adapters on top of third-party libraries and mocking those adapters instead of the underlying types.
  38. [38]
    Use test doubles in Android | Test your app on Android
    Feb 10, 2025 · When you need to provide a dependency to a subject under test, a common practice is to create a test double (or test object). Test doubles are ...
  39. [39]
    Humble Object - Martin Fowler
    Apr 29, 2020 · By making untestable objects humble, we reduce the chances that they harbor evil bugs. A common example of this is in the user-interface.
  40. [40]
    Pact Docs: Introduction
    Aug 30, 2022 · Contract testing is immediately applicable anywhere where you have two services that need to communicate - such as an API client and a web front ...
  41. [41]
    Enhancing Test Stability With Test Doubles for Flaky Test Management
    Apr 30, 2024 · Test doubles greatly help reduce flakiness; understanding simple causes of failures is also crucial.