
pytest

Pytest is an open-source testing framework for the Python programming language that enables developers to write simple, readable tests while scaling to handle complex functional testing for applications and libraries. Developed by Holger Krekel starting in 2003, pytest originated as a tool to support testing in Python projects and has since evolved into a mature, community-driven project under the MIT license. It is maintained by a team including Krekel and supported through platforms like Open Collective and Tidelift for enterprise use. Unlike Python's built-in unittest module, which requires subclassing and verbose assertion methods, pytest emphasizes simplicity with plain assert statements and automatic test discovery. Key features of pytest include detailed assertion introspection for clear failure messages, modular fixtures for managing test resources, and a rich plugin ecosystem with over 1,300 external plugins available for extending functionality, such as parametrization, mocking, and parallel execution. It supports running unittest suites alongside native pytest tests and is compatible with Python 3.10+ and PyPy3, making it versatile for modern development workflows. As of 2025, pytest is widely regarded as the most popular Python testing framework, used by major technology companies, due to its flexibility, extensive ecosystem, and ease of integration with continuous integration systems and development tools.

Overview

Purpose and Scope

pytest is an open-source testing framework for Python designed to simplify the process of writing, running, and managing tests by providing a concise syntax and powerful features that reduce boilerplate. It enables developers to create small, readable tests that can scale to handle complex functional testing for applications and libraries, supporting a wide range of testing needs including unit, integration, functional, and end-to-end tests. Compatible with Python 3.10 and later versions, pytest maintains support for all actively maintained Python releases at the time of each version's issuance, ensuring broad applicability across modern environments. At its core, pytest embodies a philosophy of simplicity and extensibility, prioritizing plain assert statements over verbose assertion methods and allowing tests to be written as simple functions without requiring class inheritance or test method naming conventions. This approach minimizes setup overhead while offering extensibility through a robust plugin system, which enables customization and integration with other tools without altering core functionality. Fixtures serve as a mechanism for managing setup and teardown in tests, providing reusable and scoped resources that enhance modularity. While pytest functions as an alternative to the built-in unittest framework, it is not intended as a direct replacement but rather as an enhancer that can run existing unittest-based test suites seamlessly, allowing gradual adoption of its features.

Advantages over Built-in Frameworks

Pytest offers significant advantages over Python's built-in testing frameworks, particularly unittest and doctest, by prioritizing simplicity, expressiveness, and extensibility, which lead to faster test development and reduced maintenance overhead. One key benefit is its simplicity in test structure: unlike unittest, which requires tests to inherit from unittest.TestCase and follow a class-based approach, pytest allows tests to be written as simple, standalone functions without boilerplate inheritance or explicit test classes. This functional style makes tests more concise and easier to read, enabling developers to focus on logic rather than framework conventions. In contrast, doctest primarily extracts and runs tests from docstrings, limiting its scope to informal examples rather than structured unit testing. Pytest's assertion system provides human-readable failure messages through assert rewriting, where standard Python assert statements are enhanced to display detailed introspection of expressions, subexpressions, and values without needing custom assertion methods like unittest's assertEqual or assertTrue. This feature improves debugging by showing exactly where and why an assertion fails, such as differing list elements or string mismatches, reducing the time spent interpreting generic error outputs. Doctest, while useful for documentation validation, offers minimal assertion diagnostics compared to pytest's introspective capabilities. For setup and teardown, pytest's fixture system surpasses unittest's setUp and tearDown methods by supporting scoped, reusable, and dependency-injected resources that can be applied at function, class, module, or session levels, with automatic cleanup and parameterization for varied test contexts. This allows fixtures to be shared across tests without repetitive code, unlike unittest's more rigid, per-class setup that lacks fine-grained scoping or injection. Doctest has no built-in setup mechanism, often requiring manual context management in docstrings. Pytest's plugin ecosystem further enhances productivity, with over 1,600 third-party plugins available for integrations like coverage measurement (e.g., pytest-cov), mocking (e.g., pytest-mock), parallel execution (e.g., pytest-xdist), and behavior-driven development (e.g., pytest-bdd for TDD/BDD styles). This extensibility allows customization without modifying core code, a flexibility absent in unittest's limited built-in extensions and entirely lacking in doctest. Overall, these features reduce overhead by automating test discovery, parameterization, and fixture management, making pytest ideal for large-scale projects. The following table summarizes the main differences between pytest and unittest.
Feature | pytest | unittest
Test Structure | Simple functions, no inheritance required | Class-based, inherits from TestCase
Assertions | Plain assert with introspective rewriting for detailed failures | Specific methods (e.g., assertEqual), basic output
Setup/Teardown | Flexible fixtures with scoping and injection | setUp/tearDown methods, class-scoped only
Test Discovery | Automatic based on naming conventions (e.g., test_*.py) | Automatic via TestLoader.discover() or python -m unittest discover (since Python 3.2), based on patterns like test*.py
Extensibility | >1,600 plugins for coverage, mocking, parallelism, BDD | Limited built-in hooks, few third-party extensions
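As a minimal, illustrative sketch of the structural difference, the same check can be written in both styles:
python
# pytest style: a plain function and a bare assert statement
def test_addition():
    assert 2 + 2 == 4

# unittest style: the equivalent check requires a TestCase subclass and an assertion method
import unittest

class TestAddition(unittest.TestCase):
    def test_addition(self):
        self.assertEqual(2 + 2, 4)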

History

Origins and Early Development

pytest originated from efforts within the PyPy project, where Holger Krekel began developing a testing framework in 2004 to address the challenges of managing a large number of tests in a dynamic environment. Initially named py.test, it served as a successor to earlier test support scripts built on top of Python's built-in unittest module, with roots tracing back to refactors in June 2003. Krekel, motivated by the verbosity and rigidity of unittest—particularly its class-based structure and limited flexibility for rapid iteration in projects like PyPy—aimed to create a tool that emphasized plain assertions, minimal boilerplate, and straightforward invocation. This approach allowed for more rapid test writing without the overhead of subclassing or XML-style output mandates common in other frameworks. The framework's early development occurred primarily within the PyPy community's version control systems, starting in Subversion repositories before migrating to Mercurial and later Git; the first commit preserved in the current repository dates from January 2007. It was initially hosted on Bitbucket for collaborative development, reflecting the distributed version control preferences of the time, and later moved to GitHub to align with broader open-source trends. The first public release came as part of the broader "py" library in 2006 (version 0.8.0-alpha2), with py.test as a core component focused on command-line driven test discovery and execution. Key initial features included automatic collection of tests from Python files without requiring inheritance from base classes, support for options like -x for early stopping and --pdb for debugging, and a collection tree for organizing tests hierarchically. These elements established py.test as a lightweight alternative, prioritizing ease of use over comprehensive reporting, and laid the groundwork for its evolution into a plugin-extensible system.

Major Releases and Evolution

Pytest's evolution has been marked by regular major releases that introduce new features, enhance compatibility, and remove legacy support to align with modern practices. The 2.0 release in 2010 represented a significant milestone, introducing the funcarg mechanism—which laid the groundwork for the current fixture system, refined into named fixtures in the 2.3 release of 2012—along with improved assertion introspection for sequences and strings, and support for ini-file configuration in files like setup.cfg. This release also separated pytest from the broader py library, establishing it as a standalone package while maintaining backward compatibility for existing tests. Subsequent releases built on this foundation, with version 3.0 in 2016 shifting the primary command-line entry point from py.test to pytest for simplicity and to reduce confusion with other tools. Key additions included the approx function for robust floating-point comparisons, support for yield statements in fixtures to handle teardown more elegantly (deprecating the older @yield_fixture decorator), and enhanced exception handling in pytest.raises with regex matching. Version 5.0 in 2019 marked the end of Python 2 support, requiring Python 3.5 or later and focusing on improvements like better JUnit XML reporting and a refined plugin architecture. By version 7.0 in February 2022, pytest dropped support for Python 3.6, introduced the --import-mode=importlib option for more reliable module imports, and added the pytest.Stash mechanism for plugins to store data across the test session. The project's maintenance transitioned to the pytest-dev GitHub organization, formed to foster collaborative development among core contributors, with releases distributed through PyPI. This structure has facilitated widespread adoption, with major scientific computing projects like NumPy relying on pytest for their comprehensive test suites, leveraging its flexible discovery and reporting capabilities. Community growth accelerated, evidenced by over 10 million monthly downloads on PyPI by 2023, reflecting its status as a de facto standard for Python testing. As of 2025, recent developments emphasize modernization and performance. The 8.0 release in January 2024 introduced support for exception groups and improved diff output for better readability, while deprecating the application of marks to fixtures to streamline marker semantics. Pytest 9.0.0, released on November 5, 2025, dropped Python 3.9 support, added native TOML configuration files, integrated subtests for more granular assertions, and enabled strict mode to enforce best practices. Ongoing deprecations target legacy features like old-style hooks and path handling, with enhancements to Windows compatibility through refined temporary directory management and path normalization. Integration with Python 3.12 and later has improved via enhanced type support in fixtures and parametrized tests, aiding static type checkers like mypy in large-scale projects.

Installation and Setup

Installing pytest

Pytest is primarily installed using the Python package manager pip, which is the recommended method for ensuring compatibility and access to the latest version. The command pip install -U pytest installs or upgrades pytest to the most recent release, currently version 9.0.1 as of November 2025. Pytest requires Python 3.10 or later (or PyPy3), along with the minimal dependency pluggy version 1.0 or higher for plugin management. It is advisable to install pytest within a virtual environment to isolate it from the system Python installation and avoid conflicts with other projects; this can be set up using the built-in venv module with commands like python -m venv myenv followed by activation and then pip install -U pytest. To verify the installation, execute pytest --version in a terminal, which outputs the installed version, such as pytest 9.0.1. For optional features, such as generating HTML reports, install plugins separately via pip, for example pip install pytest-html. On Windows, installation via pip typically succeeds, but if the pytest command is not recognized, ensure the Scripts directory is added to the system PATH or run tests using python -m pytest. On macOS, while pip is preferred, pytest can alternatively be installed using Homebrew with brew install pytest for system-wide access. On Linux distributions such as Ubuntu or Debian, pip remains the primary method, though pytest is also available via system package managers, for example sudo apt install python3-pytest for a system-wide installation. To upgrade pytest, run pip install --upgrade pytest, and review the changelog for changes in the new version.

Configuration Options

pytest supports customization through configuration files placed in the root directory of a project, allowing users to define default behaviors without repeating options on every command-line invocation. The supported formats include pytest.toml, .pytest.toml, pytest.ini, .pytest.ini, pyproject.toml, tox.ini, and setup.cfg, with pytest.toml introduced in version 9.0 and pyproject.toml support added in version 6.0. These files follow a specific order of precedence, where pytest loads the first matching file it finds in the order: pytest.toml (highest), .pytest.toml, pytest.ini, .pytest.ini, pyproject.toml, tox.ini, and setup.cfg (lowest). The root directory (rootdir) for configuration is determined by the common ancestor path of specified test files or directories and the presence of these files, ensuring configurations apply to the intended project scope. Key options in these files control aspects like test discovery and execution defaults. For instance, the testpaths option specifies directories to search for tests, such as ["tests", "integration"], which narrows discovery to those paths rather than the current directory. The python_files option defines patterns for identifying test files, defaulting to test_*.py and *_test.py, but can be customized (e.g., ["check_*.py", "test_*.py"]) to match project-specific naming conventions. Another essential option is addopts, which appends static command-line arguments like ["-ra", "-q"] for always-on reporting and quiet output, streamlining repeated invocations.
ini
# Example pytest.ini
[pytest]
testpaths = tests
python_files = test_*.py
addopts = -ra -q
In pyproject.toml, these options are structured under the [tool.pytest.ini_options] table:
toml
[tool.pytest.ini_options]
testpaths = ["tests"]
python_files = ["test_*.py"]
addopts = ["-ra", "-q"]
Command-line flags provide overrides for file-based settings, enabling per-run adjustments. The --verbose (or -v) flag increases output verbosity, showing more details about execution, while --junitxml=path generates JUnit-compatible XML reports for integration with CI tools. These flags take precedence over configuration-file options, allowing flexible customization without editing files. Environment variables further enhance configurability, particularly in automated environments. The PYTEST_ADDOPTS variable prepends additional options to the command line, such as PYTEST_ADDOPTS="-v --junitxml=report.xml", which is useful for CI pipelines to enforce consistent testing behaviors across builds. Best practices for configuration emphasize selecting appropriate file formats to avoid parsing issues; for example, prefer pytest.ini, tox.ini, or pyproject.toml over setup.cfg due to differences in INI parsing. Use addopts judiciously to set project defaults without introducing global conflicts, and employ the --rootdir=path flag to explicitly set the configuration root if the automatic detection yields unexpected results. These approaches ensure configurations remain maintainable and project-specific, influencing test discovery patterns like directory scanning without altering the core collection process.

Core Testing Workflow

Test Discovery and Collection

Pytest employs an automated test discovery mechanism that simplifies the process of identifying and gathering tests without requiring explicit file listings or manual invocation. By default, when invoked without arguments, pytest begins collection from the current directory or the configured testpaths and recursively traverses subdirectories, excluding those matching the norecursedirs pattern (such as build directories). It targets files matching the patterns test_*.py or *_test.py, importing them using a test-specific package name to avoid conflicts with application code. Within these files, pytest collects test items based on standard naming conventions: standalone functions or methods prefixed with test_ are recognized as tests, provided they are defined at the module level or within classes prefixed with Test that lack an __init__ method. Classes themselves are not executed as tests but serve to organize methods, and decorators like @staticmethod or @classmethod on methods are supported to enable collection. This process constructs a hierarchical tree of collection nodes, where each node represents an item such as a module, class, or function, rooted at the project's rootdir (determined by the nearest pytest.ini, pyproject.toml, or similar configuration file). Each node receives a unique nodeid combining its path and name, facilitating precise identification and reporting during execution. Nested structures, such as parameterized tests or fixtures, are incorporated into this tree, ensuring comprehensive session building. Customization of discovery rules is achieved through configuration files like pytest.ini or pyproject.toml, where options such as python_files, python_classes, and python_functions allow overriding defaults—for instance, altering file patterns to include check_*.py or function prefixes to check_*. Markers, applied via the @pytest.mark decorator (e.g., @pytest.mark.slow), enable grouping and selective collection without altering core discovery logic, supporting features like test categorization for focused runs. These configurations ensure flexibility while maintaining pytest's convention-over-configuration philosophy. To inspect the collection outcome without execution, the --collect-only option displays a verbose tree of all discovered nodes, optionally combined with -v for increased detail or -q for concise output, which can be redirected to a file for further analysis. During collection, pytest handles special cases such as skipped tests (via @pytest.mark.skip) or expected failures (via @pytest.mark.xfail), integrating them into the node tree and flagging them appropriately in reports to indicate non-execution without halting the process. This visibility aids in debugging discovery issues and verifying test organization prior to running the suite.
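A brief sketch of a test module that the default discovery rules would pick up, using a hypothetical file name and test names, might look like this:
python
# content of tests/test_inventory.py (hypothetical; the name matches the default test_*.py pattern)
import pytest

def test_module_level_function():       # collected: test_ prefix at module level
    assert 1 + 1 == 2

class TestInventory:                     # collected: Test prefix, no __init__ method
    def test_add_item(self):
        assert [1] + [2] == [1, 2]

@pytest.mark.slow                        # custom marker for selective runs (registered in configuration)
def test_full_rebuild():
    assert sum(range(1000)) == 499500
Running pytest --collect-only against such a file lists the resulting module, class, and function nodes without executing any tests.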

Writing Basic Tests

Pytest encourages writing tests as simple functions within modules, where assertions are used to verify expected outcomes without requiring inheritance from a base class, unlike some other frameworks. These test functions typically begin with the prefix test_ to facilitate automatic discovery, and they rely on the built-in assert statement for checks, allowing for expressive and readable validations. A basic test might verify the behavior of a small function, such as one that adds two numbers. For instance, consider the following code in a file named test_sample.py:
python
# content of test_sample.py
def add(a, b):
    return a + b

def test_add():
    assert add(2, 3) == 5
This test function calls add(2, 3) and asserts that the result equals 5, failing if the condition is not met. To execute tests, invoke the pytest command from the command line in the directory containing the test files; it will automatically discover and run all functions matching the test_ convention. For more detailed output, use the -v (verbose) option, which displays test names and results as they run, such as pytest -v test_sample.py producing lines like test_sample.py::test_add PASSED. The -q option can be used for quieter, summary-only reporting. When a test fails, pytest provides comprehensive reporting, including full tracebacks that pinpoint the exact line of failure and, for assertion mismatches, a detailed breakdown showing intermediate values and expected versus actual results. For example, if an assertion like assert 4 == 5 fails, the report includes the line E assert 4 == 5, and longer comparisons such as strings or lists are shown with a diff of the differing elements, aiding quick debugging without manual print statements. Adhering to pytest conventions enhances test maintainability: name test files starting with test_ or ending with _test.py, ensure test functions or classes (if used) begin with test_ or Test, and design tests to be independent by avoiding shared mutable state or side effects, so that each test runs in isolation without interference. Simple setup, if needed, can involve the fixtures described in the next section, but standalone tests suffice for most basic cases.

Fixtures

Defining and Using Fixtures

Fixtures in pytest are functions decorated with the @pytest.fixture decorator that encapsulate reusable setup and teardown code, providing a consistent context for tests such as initialized objects or environments. This design allows developers to define the "arrange" phase of testing—preparing data or resources—without repeating boilerplate in each test function. By marking a function with @pytest.fixture, pytest treats it as a provider of test dependencies rather than a standalone test, enabling modular and maintainable test suites. To use a fixture, a test function declares it as an argument with the same name as the fixture, triggering pytest to execute the fixture code and inject its return value automatically before running the test. For instance, consider a simple fixture that returns a list of fruits:
python
import pytest

@pytest.fixture
def fruit_bowl():
    return ["apple", "banana", "cherry"]
A test can then consume this fixture:
python
def test_fruit_count(fruit_bowl):
    assert len(fruit_bowl) == 3
This injection mechanism ensures fixtures are only computed when requested, promoting efficiency. Fixtures can return a value directly for simple setup or use yield to support teardown logic, where code after the yield statement executes after the test completes. Synchronous fixtures typically return the setup result, while yield enables cleanup, such as closing resources; for asynchronous fixtures (via extensions like pytest-asyncio), yield similarly handles post-test finalization to ensure proper resource management. An example of a fixture with teardown for a database connection might look like:
python
@pytest.fixture
def db_connection():
    conn = create_db_connection()  # Setup
    yield conn  # Provide to test
    conn.close()  # Teardown
A test using this would receive the connection object:
python
def test_query(db_connection):
    result = db_connection.execute("SELECT * FROM users")
    assert len(result) > 0
This pattern prevents resource leaks by guaranteeing cleanup regardless of test outcome. Pytest supports dependency injection among fixtures, allowing one fixture to request another as an argument, fostering composable setups. For example, a temporary path fixture might depend on a base temporary directory:
python
import os
import pytest

@pytest.fixture
def temp_dir():
    # Fixed base directory for illustration; real suites would normally use
    # pytest's built-in tmp_path fixture instead
    path = "/tmp/test_dir"
    os.makedirs(path, exist_ok=True)
    return path

@pytest.fixture
def temp_file(temp_dir):
    path = os.path.join(temp_dir, "myfile.txt")
    with open(path, "w") as f:
        f.write("test content")
    return path
Here, temp_file builds upon temp_dir, and tests requesting temp_file implicitly trigger both fixtures. A test for file operations could then use:
python
def test_file_read(temp_file):
    with open(temp_file, "r") as f:
        content = f.read()
    assert content == "test content"
This hierarchical injection simplifies complex environment creation, such as stacking database sessions on top of connections.
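A hedged sketch of such stacking, using hypothetical connect_to_test_db and session helpers rather than any particular database API, might look like:
python
import pytest

@pytest.fixture
def db_connection():
    conn = connect_to_test_db()     # hypothetical helper that opens a shared connection
    yield conn
    conn.close()

@pytest.fixture
def db_session(db_connection):
    session = db_connection.begin_session()   # hypothetical per-test session object
    yield session
    session.rollback()              # undo per-test changes while reusing the connection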

Fixture Scopes and Lifetime

Pytest fixtures support multiple scope levels to control their execution frequency and lifetime, allowing developers to optimize test setup and teardown for efficiency. The available scopes are function (the default, executed once per test function), class (once per test class), module (once per test module), package (once per package, including sub-packages), and session (once per entire test session). These scopes determine when a fixture is created and destroyed, enabling reuse of resources across multiple tests without redundant initialization. To specify a fixture's scope, the scope parameter is passed to the @pytest.fixture decorator, such as @pytest.fixture(scope="module") for module-level execution. This configuration influences caching behavior, where pytest maintains a single instance of the fixture within its defined scope and reuses it for subsequent tests, reducing overhead. Additionally, it affects teardown order: fixtures with broader scopes (e.g., session) are torn down after narrower ones (e.g., function), ensuring dependent resources are cleaned up appropriately. Fixture lifetime is managed through setup and teardown mechanisms, including the use of finalizers added via the request.addfinalizer method within the fixture function. For example, a fixture can register a cleanup callback like request.addfinalizer(lambda: delete_user(user_id)) to handle resource deallocation. Finalizers execute in reverse order of registration (last-in, first-out), promoting safe cleanup of nested dependencies. Yield-based fixtures further support this by running teardown code after the yield statement, with teardown executing in reverse order across the fixtures used by a test. Broader scopes enhance performance by minimizing repeated setup costs, particularly for expensive operations like database connections. A session-scoped database fixture, for instance, initializes the connection once and shares it across all tests, avoiding the latency of per-test recreation. Practical examples illustrate scope selection: a module-scoped fixture for loading configuration files executes once per module, caching the data for all tests within it, as shown below.
python
import pytest

@pytest.fixture(scope="module")
def config():
    return load_config("settings.json")
In contrast, function-scoped mocks ensure isolation per test, resetting state each time to prevent interference.
python
from unittest.mock import patch
import pytest

@pytest.fixture
def mock_client():
    with patch("module.Client") as mock:  # "module.Client" is a placeholder patch target
        yield mock
This approach balances efficiency with test independence.
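The finalizer style described earlier can be sketched as follows, where create_user and delete_user are hypothetical helpers standing in for real resource management:
python
import pytest

@pytest.fixture
def user(request):
    user_id = create_user("alice")                      # hypothetical setup helper
    request.addfinalizer(lambda: delete_user(user_id))  # cleanup registered to run after the test
    return user_id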

Key Features

Parameterized Testing

Parameterized testing in pytest allows a single test function to be executed multiple times with different sets of input parameters, promoting efficient coverage of various scenarios without duplicating code. This feature is particularly useful for data-driven testing, where tests are parameterized based on predefined inputs and expected outputs, such as edge cases or diverse datasets. By leveraging the @pytest.mark.parametrize decorator, developers can specify argument names and corresponding values, enabling pytest to generate and run distinct test instances for each parameter combination. The @pytest.mark.parametrize decorator is applied directly to a test function, taking two main arguments: a string of comma-separated argument names (e.g., "input,expected") and an iterable of argument value sets, typically a list of tuples or lists. For instance, the following example tests expression evaluation with two input-output pairs:
python
import pytest

@pytest.mark.parametrize("test_input,expected", [("3+5", 8), ("2+4", 6)])
def test_eval(test_input, expected):
    assert eval(test_input) == expected
This results in two separate test executions: one for ("3+5", 8) and another for ("2+4", 6), each treated as an independent test case by pytest's runner. Parameter sets can contain arbitrary values, and multiple decorators can be stacked to parameterize different argument groups, with the total number of test runs being the product of the sets' sizes. During execution, each set invokes the test function once, passing the values as arguments in the order specified. Pytest automatically generates descriptive test IDs based on the parameter values, such as test_eval[3+5-8], which appear in reports and tracebacks for clarity. Developers can customize these IDs using the optional ids argument, providing a list of strings like ids=["case1", "case2"] to override defaults and improve readability in output. If the list of parameter sets is empty, the test is skipped, a behavior configurable via the empty_parameter_set_mark option in pytest configuration. For more advanced scenarios, indirect parameterization allows parameters to be sourced dynamically from fixtures using the pytest_generate_tests hook, where metafunc.parametrize injects values into fixture-backed arguments; this approach integrates seamlessly with fixture definitions for complex setup. Parameterized tests can also be combined with other marks, such as @pytest.mark.xfail applied to specific parameter sets via the marks argument of pytest.param, enabling conditional expectations like expected failures for certain inputs. Use cases include systematically testing edge cases, such as boundary values in algorithms, and data-driven validation where parameters are loaded from external sources like CSV or JSON files through custom loader functions or plugins. In reporting, each parameterized invocation is evaluated independently, with pass/fail status, errors, and tracebacks isolated to the specific set, facilitating granular debugging. Specific parameters can be skipped using conditional marks like @pytest.mark.skipif, applied selectively to subsets via the parametrize marks argument, ensuring only relevant cases are executed while maintaining comprehensive coverage.
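Building on the earlier example, custom IDs and per-parameter marks might be combined roughly as follows; the failing "6*9" case is purely illustrative:
python
import pytest

@pytest.mark.parametrize(
    "test_input,expected",
    [
        ("3+5", 8),
        ("2+4", 6),
        pytest.param("6*9", 42, marks=pytest.mark.xfail(reason="deliberately wrong expectation")),
    ],
    ids=["addition", "another-addition", "hitchhiker"],
)
def test_eval(test_input, expected):
    assert eval(test_input) == expected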

Assert Rewriting and Introspection

Pytest enhances the standard Python assert statement through a process known as assertion rewriting, which occurs at import time for test modules discovered during test collection. This mechanism intercepts assert statements by modifying the abstract syntax tree (AST) of the module before it is compiled into bytecode, using a PEP 302 import hook. The rewritten assertions evaluate subexpressions—such as function calls, attribute accesses, comparisons, and operators—and include their values in the failure message if the assertion fails, providing detailed introspection without requiring developers to write additional code. In failure reports, pytest displays the differing elements for common data structures; for instance, when comparing lists, it highlights the first index where they diverge, and for dictionaries, it shows the specific keys with mismatched values. For assertions involving exceptions, the framework captures the exception type, value, and a full traceback via the ExceptionInfo object, integrating this into the error output for comprehensive diagnostics. These features extend to complex types by leveraging their __repr__ method for meaningful string representations, though for highly customized comparisons, developers can implement the pytest_assertrepr_compare hook to define tailored difference explanations. Customization options allow fine-grained control over assertion rewriting. Globally, it can be disabled using the --assert=plain command-line option, which reverts to standard assertion behavior without introspection. Per-module disabling is possible by adding a docstring containing PYTEST_DONT_REWRITE to the module. Additionally, for supporting modules not automatically rewritten (e.g., imported helpers), the pytest.register_assert_rewrite function can be called before import to enable it explicitly. To support custom classes effectively, ensuring a descriptive __repr__ is recommended, as the rewriting relies on it for output clarity. The primary benefits of assert rewriting include significantly reduced debugging time compared to raw assert statements, as it eliminates the need for manual print statements or complex conditional expressions to isolate failure points. It handles intricate data structures, such as nested objects or collections, by breaking down the assertion into traceable parts, making it particularly valuable for testing functions with multiple interdependent computations. This introspection fosters more maintainable test code and quicker issue resolution in large test suites. Despite these advantages, assert rewriting introduces some overhead, particularly in large codebases where many modules require AST parsing and bytecode recompilation during test collection. This process caches rewritten modules as .pyc files in __pycache__ for reuse, but caching is skipped if the filesystem is read-only or if custom import machinery interferes, potentially increasing startup time on subsequent runs. For performance-critical environments, opting out via the available disabling mechanisms is advised to avoid this computational cost.
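As a sketch of the pytest_assertrepr_compare hook, the following conftest.py snippet adds a custom failure explanation for a hypothetical Money class; the hook name and signature are pytest's, while the class itself is invented for illustration:
python
# content of conftest.py
class Money:
    def __init__(self, amount, currency):
        self.amount = amount
        self.currency = currency

    def __eq__(self, other):
        return (self.amount, self.currency) == (other.amount, other.currency)

def pytest_assertrepr_compare(config, op, left, right):
    # Return extra report lines when two Money objects compare unequal
    if isinstance(left, Money) and isinstance(right, Money) and op == "==":
        return [
            "Comparing Money instances:",
            f"   {left.amount} {left.currency} != {right.amount} {right.currency}",
        ]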

Advanced Usage

Test Filtering and Skipping

Pytest provides mechanisms for filtering and skipping tests to enable selective execution, which is particularly valuable during debugging sessions or in continuous integration (CI) environments where running the full test suite may be inefficient. These features allow developers to label tests with markers, conditionally skip them based on runtime conditions, or mark expected failures without halting the entire suite. By integrating these tools, pytest supports focused test runs that target specific subsets, reducing execution time and aiding in maintenance of large test bases. Markers in pytest are attributes applied to test functions, classes, or modules using the @pytest.mark decorator, enabling categorization and selective running. Custom markers, such as @pytest.mark.slow, can be defined to label resource-intensive tests, and they must be registered in a configuration file like pytest.ini to avoid warnings: [pytest] markers = slow: marks tests as slow (deselect with '-m "not slow"'). To list all available markers, including built-in and custom ones, the command pytest --markers displays their descriptions and usage. Markers facilitate flexible test organization without altering test logic. Skipping tests can be achieved unconditionally or conditionally to handle cases where tests are inapplicable under certain conditions. The @pytest.mark.skip(reason="explanation") decorator skips a test entirely before execution, while pytest.skip("reason") allows imperative skipping inside a test function, useful when conditions are evaluated at runtime. For conditional skips, @pytest.mark.skipif(condition, reason="explanation") evaluates a boolean condition; for instance, @pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows") excludes tests on Windows platforms. These approaches ensure tests are only run in supported environments, with skipped tests reported in the test summary unless suppressed. The xfail mechanism marks tests expected to fail, such as those covering unimplemented features or known bugs, using @pytest.mark.xfail. Parameters include reason for documentation, raises=ExceptionType to specify expected exceptions, strict=True to treat unexpected passes (XPASS) as failures, and run=False to skip execution altogether. In non-strict mode, an xfail test that fails is reported as XFAIL (a pass for the suite), while an unexpected pass is XPASS (still a pass overall); strict mode enforces failure on XPASS via configuration like xfail_strict = true in pytest.ini. This feature prevents transient bugs from masking real issues in test outcomes. Command-line options provide runtime control over test selection. The -k "expression" flag filters tests by matching substrings in test names using Python-like expressions, such as pytest -k "http and not slow" to run only HTTP-related tests excluding slow ones, enabling precise targeting for debugging. Similarly, -m "marker_expression" selects based on markers, like pytest -m "not slow" to exclude slow tests or pytest -m "slow(phase=1)" for parameterized markers, which is ideal for CI pipelines to run subsets efficiently. Markers can be configured globally in pytest.ini for consistent filtering across invocations. These features are commonly used for platform-specific testing, where skips ensure compatibility across operating systems, or in CI to deselect flaky or long-running tests marked as slow, thereby optimizing build times without compromising coverage. For example, conditional skips based on environment variables allow tests to run only on dedicated CI agents, while xfail markers track progress on unresolved issues until fixes are implemented.
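A short sketch combining these mechanisms; the slow marker is assumed to be registered in pytest.ini as described above:
python
import sys
import pytest

@pytest.mark.slow
def test_large_dataset():
    assert sum(range(1_000_000)) > 0

@pytest.mark.skipif(sys.platform == "win32", reason="does not run on Windows")
def test_unix_only_path():
    assert "/" == "/"

@pytest.mark.xfail(reason="known rounding behavior", strict=False)
def test_known_bug():
    assert round(2.675, 2) == 2.68  # fails because 2.675 cannot be represented exactly in binary
Running pytest -m "not slow" deselects the first test, while the skipif and xfail markers handle the platform-specific and known-failure cases automatically.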

Plugins and Extensibility

Pytest employs a robust plugin architecture that allows for seamless extension of its core functionality through hook functions and modular components. Plugins are discovered and loaded in a specific order: pytest first scans the command line for the -p no:name option to disable plugins, then loads built-in plugins from the _pytest directory, followed by those specified via command-line options like -p name, entry point plugins registered under the pytest11 group in pyproject.toml or setup.py (unless autoloading is disabled by the PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable), plugins listed in the PYTEST_PLUGINS environment variable, and finally local plugins defined in conftest.py files or via the pytest_plugins variable. This discovery mechanism ensures predictable behavior, with options to disable plugins using -p no:name or the PYTEST_DISABLE_PLUGIN_AUTOLOAD environment variable. Central to this architecture are hook functions, which plugins implement to integrate with pytest's test execution lifecycle. These functions follow a pytest_ naming convention, such as pytest_addoption for registering custom command-line options or pytest_configure for setup tasks. For instance, a plugin can implement the pytest_addoption(parser) hook and call parser.addoption("--my-option", action="store", default="value") to add a configurable flag, which is then accessible via config.getoption("--my-option") in other hooks. Local plugins in conftest.py files apply to specific directories, enabling directory-scoped customizations without global impact. While pytest includes built-in plugins for core features like assertion rewriting and terminal reporting, much of its power derives from third-party extensions installed via pip. Notable examples include pytest-xdist, which enables parallel test execution across multiple processes to accelerate runs (e.g., pytest -n auto distributes tests over available CPUs), and pytest-rerunfailures, which automatically retries flaky tests up to a specified number of times (e.g., --reruns 5) to mitigate intermittent failures. Other widely adopted plugins are pytest-cov for integrating coverage measurement with reporting options like --cov-report=html, pytest-mock providing a mocker fixture for simplified patching and spying on objects, and pytest-asyncio for supporting asynchronous test functions marked with @pytest.mark.asyncio to handle coroutine-based code. These plugins are distributed on PyPI and automatically loaded upon installation, enhancing pytest's versatility without altering core behavior. Developers can write custom plugins by defining hook functions in a Python module and exposing it via the pytest11 entry point in their package's metadata, allowing distribution on PyPI as installable packages. For example, a plugin might implement pytest_collection_modifyitems to filter tests dynamically, and once packaged with setuptools, it becomes available for installation like any other plugin. This approach fosters a thriving ecosystem, with over 1,600 external plugins listed in recent compilations. The extensibility of pytest supports modular testing tailored to diverse domains, such as web applications via pytest-django, which provides Django-specific fixtures for database management and request simulation without requiring unittest inheritance, or notebook-based workflows with pytest-notebook, which enables regression testing of Jupyter notebooks by executing cells and diffing outputs against stored results. This plugin-based modularity promotes reusable components, reduces boilerplate, and integrates seamlessly with CI pipelines, making pytest adaptable for large-scale projects.
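A minimal local plugin in conftest.py, closely following the --runslow pattern from the pytest documentation, illustrates both hooks:
python
# content of conftest.py
import pytest

def pytest_addoption(parser):
    # Register a custom command-line flag
    parser.addoption("--runslow", action="store_true", default=False,
                     help="run tests marked as slow")

def pytest_collection_modifyitems(config, items):
    # Dynamically mark slow tests as skipped unless --runslow was given
    if config.getoption("--runslow"):
        return
    skip_slow = pytest.mark.skip(reason="need --runslow option to run")
    for item in items:
        if "slow" in item.keywords:
            item.add_marker(skip_slow)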
