
Richardson Maturity Model

The Richardson Maturity Model (RMM) is a conceptual framework introduced by software developer Leonard Richardson in a 2008 presentation at QCon to evaluate the maturity of web services and APIs based on their adherence to REST (Representational State Transfer) architectural principles. It structures the progression toward RESTful design into four incremental levels, starting from rudimentary remote procedure call (RPC)-style interactions over HTTP and advancing to fully hypermedia-enabled systems that promote loose coupling and discoverability. The model serves as a practical heuristic rather than a strict standard, helping developers iteratively improve designs toward greater scalability and interoperability while aligning with the web's distributed nature.

Introduction

Definition and Purpose

The Richardson Maturity Model (RMM) is a framework proposed by Leonard Richardson to evaluate the maturity of web APIs in terms of their adherence to the Representational State Transfer (REST) architectural style. It consists of a four-level scale, ranging from Level 0 to Level 3, where each level represents progressive incorporation of REST principles, starting from basic remote procedure call (RPC)-style interactions and advancing to fully hypermedia-enabled services. The model classifies APIs based on criteria such as resource identification, use of HTTP methods, and hypermedia controls, with higher levels indicating stronger conformance to REST guidelines for scalable and interoperable services. The primary purpose of the RMM is to assist developers and architects in assessing the current state of their APIs and identifying opportunities for improvement toward more RESTful designs. By highlighting gaps in implementation, such as the absence of resource-oriented endpoints or standardized HTTP verbs, it guides iterative enhancements that promote efficiency, maintainability, and evolvability in API development. Ultimately, the model encourages a structured progression from rudimentary, tunnel-like HTTP usage to sophisticated, self-descriptive services that leverage the web's inherent affordances.

The RMM draws an analogy to the Capability Maturity Model (CMM) from software engineering, adapting its staged approach to provide a roadmap for maturing web services rather than organizational processes. This comparison underscores the model's role in fostering predictable improvements, where APIs at lower levels may function adequately but lack the robustness and flexibility of those at higher maturity stages.

Historical Development

The Richardson Maturity Model (RMM) draws its foundational inspiration from Roy Fielding's 2000 doctoral dissertation, "Architectural Styles and the Design of Network-based Software Architectures," which formalized the Representational State Transfer (REST) architectural style as a set of constraints for scalable distributed hypermedia systems. Fielding's work emphasized principles such as resource identification, statelessness, and hypermedia as the engine of application state (HATEOAS), but it lacked a practical framework for assessing adherence in real-world web services. Leonard Richardson, a software developer and author focused on web technologies, addressed this gap by developing the RMM as a tool to evaluate and critique the maturity of web service implementations, building directly on Fielding's constraints to provide a graduated scale for API design practices.

Richardson first proposed the model during a presentation at the QCon developer conference in San Francisco on November 19, 2008, titled "Justice Will Take Us Millions of Intricate Moves," with Act 3 specifically introducing the "maturity heuristic" to highlight common pitfalls in API designs such as over-reliance on RPC-style endpoints. He published accompanying notes and slides on his personal website shortly thereafter, framing the RMM as a four-level progression from basic HTTP tunneling (Level 0) to full hypermedia-driven services (Level 3), aimed at guiding developers toward more RESTful architectures. This initial exposition critiqued prevalent practices in early web APIs, such as SOAP and plain XML over HTTP, positioning the RMM as a diagnostic tool rather than a rigid standard. Richardson's 2007 book RESTful Web Services, co-authored with Sam Ruby, had already laid broader groundwork for REST adoption by advocating practical implementations, though it predated the formal RMM proposal and focused on core principles without the maturity levels. The model's popularity surged following Martin Fowler's influential 2010 article, "Richardson Maturity Model," which synthesized Richardson's ideas into a widely accessible bliki entry, emphasizing its utility for breaking down the principal elements of REST into incremental steps and sparking broader discussion in developer communities. By 2017, enterprise adoption was evident in resources like Red Hat's developer blog series, which applied the RMM to evaluate API maturity in production environments, underscoring its role in standardizing RESTful design for scalable services.

Richardson has made no major revisions to the model since its inception, preserving its original structure as a timeless heuristic rather than an evolving specification. The RMM's evolution has instead centered on widespread integration into API design ecosystems, with tools and standards such as OpenAPI 3.0 incorporating concepts aligned with its levels, including resource modeling and HTTP semantics, to facilitate mature implementations without direct endorsement of the model itself. As of 2025, it remains relevant in the microservices and API economy, where organizations use it to benchmark service-oriented architectures for scalability and evolvability, as seen in industry analyses of cloud-native deployments.

Foundational Concepts

REST Architectural Style

Representational State Transfer (REST) is an architectural style for designing networked applications, particularly distributed hypermedia systems, that emphasizes scalability, simplicity, and component independence. Introduced by Roy Fielding in his 2000 PhD dissertation at the University of California, Irvine, REST draws from several network-based architectural styles and imposes specific constraints to enable efficient, uniform interactions over the web. It focuses on transferring representations of resources rather than direct access to underlying data, allowing clients to interact with servers in a standardized manner without tight coupling. REST is defined by six core architectural constraints, which, when applied together, promote desirable properties like visibility, reliability, and modifiability. These constraints are:
  • Client-server: Separates the user interface concerns of the client from the data storage concerns of the server, enabling independent evolution of each.
  • Stateless: The server treats each request as an independent transaction, containing all necessary information without relying on prior interactions, which enhances scalability and fault tolerance.
  • Cacheable: Responses must explicitly indicate whether they can be cached, allowing clients or intermediaries to reuse data and reduce latency and server load.
  • Uniform interface: Provides a consistent way to interact with resources through resource identification (via URIs), manipulation of representations, self-descriptive messages, and hypermedia as the engine of application state (HATEOAS).
  • Layered system: The architecture is composed of hierarchical layers, where components cannot see beyond their adjacent layer, supporting scalability and encapsulation.
  • Code on demand (optional): Servers can temporarily extend client functionality by transferring executable code (e.g., JavaScript), though this is not required for REST compliance.
Unlike protocols such as SOAP, which define a rigid messaging framework often built on XML and WS-* standards, or RPC mechanisms that mimic remote function calls, REST is not a protocol but an architectural style that leverages existing web technologies like HTTP methods, URIs, and standard media types for communication and data exchange. This approach avoids the overhead of custom envelopes or bindings, favoring lightweight, human-readable formats like JSON or XML to achieve greater scalability in large-scale distributed systems. In the context of web APIs, REST facilitates resource-oriented design, where applications are modeled around manipulable resources identified by URIs, and supports hypertext-driven navigation through embedded links in responses, allowing clients to discover and traverse functionality dynamically without hardcoded knowledge of endpoints. This style serves as the foundational benchmark for evaluating the adherence and effectiveness of API designs in promoting scalability and evolvability.

Key Principles of REST

The Representational State Transfer (REST) architectural style relies on a core set of six constraints that promote scalability, simplicity, and evolvability in distributed hypermedia systems. These constraints, articulated by Roy Fielding in his 2000 dissertation, form the foundation for designing web-based applications that treat information as resources accessible over the network. By adhering to these constraints, REST enables loose coupling between clients and servers, allowing independent evolution of components while maintaining interoperability.

The constraints include client-server separation, which distinguishes user interface concerns from data storage, enabling the independent evolution of clients and servers, and statelessness, which requires that each client request be independent and self-contained, containing all necessary information for the server to process it without retaining any session state on the server side. As Fielding describes, "communication must be stateless in the sense that... each request from client to server must contain all of the information necessary to understand the request, and cannot take advantage of any stored context on the server." This constraint enhances scalability by allowing servers to handle requests in isolation, reducing complexity in load balancing and fault tolerance, though it places the burden of state management on clients.

The uniform interface is the central feature of REST, achieved through four interrelated sub-constraints that standardize interactions between components. First, resource identification (addressability) ensures resources are named distinctly via URIs, decoupling the identifier from the underlying implementation and treating resources as nouns for direct access and manipulation; as Fielding states, "the key abstraction of information in REST is a resource," where each resource is referenced by a global identifier to facilitate a noun-oriented model aligned with the web's structure. Second, manipulation through representations allows clients to interact with resources via standardized representations (e.g., JSON or XML), where changes to a representation modify the resource without exposing internal details. Third, self-descriptive messages mandate that each request and response include sufficient metadata (e.g., media types, status codes) to be processed independently, eliminating the need for out-of-band knowledge. Fourth, hypermedia as the engine of application state (HATEOAS) drives client navigation by embedding links within responses, enabling dynamic discovery of available actions without hardcoding server logic into the client. Fielding notes that "the central feature that distinguishes the REST architectural style from other network-based styles is its emphasis on a uniform interface between components." This uniformity simplifies the overall system architecture and fosters evolvability by standardizing operations across diverse systems.

Cacheability explicitly designates responses as cacheable or non-cacheable, allowing intermediaries like proxies to store and reuse data to improve performance and reduce network load. Fielding specifies that "cache constraints require that the data within a response to a request be implicitly or explicitly labeled as cacheable or non-cacheable," which enhances efficiency in high-latency environments by enabling transparent optimization without altering the core architecture. The layered system principle structures the architecture into hierarchical layers, where components interact only with adjacent layers, preventing direct access to non-adjacent ones and supporting intermediaries for security, load balancing, or caching. As defined by Fielding, "the layered system style allows an architecture to be composed of hierarchical layers by constraining component behavior such that each component cannot 'see' beyond the immediate layer with which they are interacting." This fosters scalability and encapsulation by isolating concerns and allowing evolution at any layer without impacting others.

The optional code-on-demand constraint allows servers to extend client functionality by transferring executable code, such as JavaScript, though it is not essential for REST compliance. Collectively, these principles interrelate to enable loose coupling, where clients and servers evolve independently; evolvability, through standardized interfaces that accommodate changes without breaking compatibility; and discoverability, via hypermedia-driven navigation that reveals capabilities at runtime. By enforcing these constraints, REST promotes systems that scale horizontally, maintain visibility into interactions, and support intermediary optimizations, as evaluated in Fielding's analysis of web architecture performance. The Richardson Maturity Model builds upon these principles by providing a progressive framework for their adoption in practice.
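
The statelessness, self-descriptive message, and cacheability constraints can be illustrated with a brief sketch in Python using the requests library; the endpoint URI, token placeholder, and header values below are hypothetical and stand in for any REST-style service.

import requests

# Each request is self-contained: credentials and the desired media type travel
# with the message, so the server needs no stored session context (statelessness).
response = requests.get(
    "https://api.example.com/users/123",    # hypothetical resource URI
    headers={
        "Accept": "application/json",        # requested representation
        "Authorization": "Bearer <token>",   # per-request credentials, not a server-side session
    },
)

# Self-descriptive messages: standard metadata in the response is enough to
# interpret it without out-of-band knowledge.
print(response.status_code)                     # e.g. 200
print(response.headers.get("Content-Type"))     # e.g. application/json
print(response.headers.get("Cache-Control"))    # e.g. max-age=3600, marking the response cacheable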

The Maturity Levels

Level 0: The Swamp of POX

Level 0 of the Richardson Maturity Model, known as the Swamp of POX, describes services that employ HTTP solely as a transport mechanism for remote procedure calls (RPC), without leveraging the web's architectural elements. These services typically route all operations through a single endpoint URI using HTTP POST requests, with payloads formatted in Plain Old XML (POX) to specify actions and data. This setup resembles tunneling a non-web protocol over HTTP, often seen in early SOAP or XML-RPC implementations. Key characteristics include the absence of resource identification via unique URIs, reliance on request bodies or query parameters to define operations, and no enforcement of stateless interactions or a uniform interface. Instead of addressing specific entities, clients send self-contained documents that instruct the server on the desired function, such as encapsulating procedure names and arguments in XML. This approach results in tightly coupled client-server interactions, where changes to the service's internal procedures can break compatibility without clear versioning or discoverability.

The term "Swamp of POX" was introduced by Leonard Richardson in his 2008 presentation to highlight the disordered, non-web-native nature of these designs, akin to much-criticized Flash-based websites that offered only a single, opaque "peephole" into an application's functionality. POX refers to the straightforward use of XML for request-response exchanges, often without the overhead of SOAP envelopes but still lacking the web's hypermedia-driven extensibility. A representative example is an appointment booking service with a lone /appointmentService endpoint. To check open slots, a client issues a POST request with an XML payload like <openSlotRequest date="2010-01-04" doctor="mjones"/>, receiving a response such as <openSlotList><openSlot start="10:00" doctor="mjones" ... /></openSlotList>. Similarly, booking might involve another POST with <bookAppointment doctor="mjones" start="10:00"/>, yielding success or failure XML, but all without distinct resource URIs or method-specific semantics. Progressing beyond Level 0 requires introducing addressable resources, which shifts the design toward a more structured, REST-aligned architecture at Level 1. This foundational step exposes the limitations of POX-style services, which fail to capitalize on the web's distributed nature.
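
A minimal sketch of this Level 0 interaction, written in Python with the requests library, might look as follows; the host name is hypothetical, and the XML vocabulary follows the illustrative appointment example above.

import requests

ENDPOINT = "https://hospital.example.com/appointmentService"  # single, hypothetical service URI

# Every operation tunnels through the same URI with POST; the XML body, not the
# HTTP method or the URI, tells the server which procedure to execute.
open_slots = requests.post(
    ENDPOINT,
    data='<openSlotRequest date="2010-01-04" doctor="mjones"/>',
    headers={"Content-Type": "application/xml"},
)

booking = requests.post(
    ENDPOINT,
    data='<bookAppointment doctor="mjones" start="10:00"/>',
    headers={"Content-Type": "application/xml"},
)

# Success or failure is reported inside the returned XML documents rather than
# through HTTP status codes, so both calls may return 200 regardless of outcome.
print(open_slots.text)
print(booking.text)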

Level 1: Resources

Level 1 of the Richardson Maturity Model introduces resource-oriented design, where APIs assign unique URIs to individual resources, marking a shift from action-centric endpoints to a noun-centric approach that treats data entities as addressable objects. This level builds on Level 0's single-endpoint model by dividing complex services into multiple, identifiable resources, allowing clients to interact with specific entities rather than a monolithic interface. For instance, a user might be addressed via /users/123, enabling direct targeting of that entity without embedding actions in the payload. Characteristics of Level 1 include the creation of distinct endpoints for different resource types, such as /orders/456 for a specific order, while operations on these resources (creating, updating, or querying) typically still rely on HTTP POST requests with payloads in formats like XML or JSON that return resource representations. This approach fosters better organization by giving resources an object-like identity. An example interaction might involve posting to /orders/456 with a JSON payload { "action": "ship" } to update the order status, resulting in a response containing the modified order details in JSON. At this maturity level, APIs gain improved addressability and modularity, as clients can more easily locate and compose interactions with discrete resources, serving as a foundational step toward incorporating HTTP verbs and hypermedia controls at higher levels. This reduces overall system complexity, facilitating easier integration and mashups compared to the undifferentiated "swamp" of Level 0. However, limitations persist, including the absence of proper HTTP semantics, which often leads to verb tunneling, such as using POST for read operations or other non-mutating actions, potentially undermining caching efficiency and safety without idempotency guarantees.
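
The order-shipping interaction described above can be sketched in Python with the requests library; the host name and response shape are hypothetical.

import requests

# The URI names a specific resource (order 456) rather than a generic service endpoint,
# but the operation is still carried in the POST body, RPC-style, instead of an HTTP verb.
response = requests.post(
    "https://shop.example.com/orders/456",
    json={"action": "ship"},
)

# The response returns a representation of the modified order.
print(response.json())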

Level 2: HTTP Verbs

Level 2 of the Richardson Maturity Model introduces the proper application of HTTP methods, or verbs, to interact with the resources identified in Level 1, thereby establishing a more uniform interface for operations. This level maps standard HTTP verbs such as GET, POST, PUT, and DELETE to common CRUD (Create, Read, Update, Delete) operations on resources, avoiding the practice of "verb tunneling" where actions are embedded in URIs or request bodies instead of using the protocol's built-in semantics. By leveraging these verbs correctly, services achieve greater interoperability and predictability in how clients perform actions without needing custom conventions. Key characteristics of Level 2 include the semantic use of HTTP verbs aligned with their defined properties in the HTTP specification. GET is employed for retrieval operations, which are safe (do not modify server state) and idempotent (multiple identical requests yield the same result), allowing for caching and prefetching to improve performance. POST handles resource creation, typically non-idempotent and resulting in a new entity, while PUT is used for full resource updates (idempotent, replacing the entire resource) and PATCH for partial updates. DELETE removes resources idempotently. Additionally, responses incorporate appropriate HTTP status codes, such as 200 OK for successful GETs, 201 Created for successful creations, 404 Not Found for missing resources, and 409 Conflict for update failures, enabling precise error communication and handling. Headers like ETag and Last-Modified support conditional requests, further optimizing interactions.

For instance, in a user management API at Level 2, a client might issue GET /users/123 to retrieve user details, receiving a response with the user representation and caching headers. To create a new user, POST /users with a request body containing user data returns 201 Created and a Location header pointing to the new resource, such as /users/124. Updating the user involves PUT /users/123 with the full updated payload for idempotent replacement, or PATCH /users/123 for targeted changes, both potentially yielding 200 OK or 204 No Content on success. Deleting the user uses DELETE /users/123, resulting in 204 No Content if successful. These patterns ensure operations are distinctly addressed through the protocol rather than overloaded URIs.

The benefits of Level 2 lie in its exploitation of HTTP's built-in features, which facilitate standard tooling, enhanced caching through method-specific rules, and better integration with web infrastructure like proxies and search engines that can safely issue GET requests. This level realizes much of REST's uniform interface constraint by standardizing interactions, reducing client-server coupling, and enabling optimizations such as caching for read-heavy operations, which can significantly lower latency and bandwidth usage in distributed systems. It builds directly on Level 1's resource-oriented URIs, replacing the singular POST reliance of earlier levels with a richer set of methods to avoid custom RPC-like tunneling and promote protocol-native behavior.
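
A sketch of the Level 2 user-management exchanges described above, again using Python's requests library against a hypothetical https://api.example.com service:

import requests

BASE = "https://api.example.com"  # hypothetical base URL

# Create: POST returns 201 Created plus a Location header identifying the new resource.
created = requests.post(BASE + "/users", json={"name": "Ada Lovelace"})
print(created.status_code)                   # expected 201
print(created.headers.get("Location"))       # e.g. /users/124

# Read: GET is safe and idempotent, so intermediaries may cache the response.
fetched = requests.get(BASE + "/users/123")
print(fetched.status_code, fetched.json())   # expected 200 and the user representation

# Update: PUT replaces the entire resource idempotently; PATCH would send a partial change.
updated = requests.put(BASE + "/users/123", json={"name": "Ada King"})
print(updated.status_code)                   # expected 200 or 204

# Delete: DELETE removes the resource; 204 No Content signals success.
deleted = requests.delete(BASE + "/users/123")
print(deleted.status_code)                   # expected 204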

Level 3: Hypermedia Controls

Level 3 of the Richardson Maturity Model, known as Hypermedia Controls, represents the pinnacle of RESTful API maturity by incorporating Hypermedia as the Engine of Application State (HATEOAS). HATEOAS, a core constraint of the REST architectural style, requires that server responses include not only resource representations but also embedded navigational links that indicate available actions and related resources, allowing clients to dynamically discover and navigate the API without prior knowledge of specific URIs. This approach transforms the API into a self-descriptive hypermedia system, where the application state is driven entirely by the hypermedia provided in responses, fulfilling Fielding's vision of a uniform interface that enhances scalability and decoupling between client and server. A key characteristic of Level 3 is that clients rely on following these embedded links rather than hardcoding URIs or assuming fixed endpoints, which promotes loose coupling and enables seamless API evolution, such as adding new resources or changing paths, without breaking existing clients. To implement HATEOAS, APIs typically employ standardized hypermedia formats that structure links and actions in a consistent, machine-readable way; common formats include HAL (Hypertext Application Language), which uses simple link relations and embedded resources; JSON-LD, which adds semantic context via linked data principles; and Siren, which emphasizes actions alongside entities for more interactive controls. These formats ensure that responses are interpretable by generic clients, supporting the discoverability that distinguishes Level 3 from lower maturity levels. For instance, a response to a GET request on /orders/456 at Level 3 might include both the order details and hypermedia links, as shown in the following example using a HAL-style format:
{
  "_links": {
    "self": { "href": "/orders/456" },
    "customer": { "href": "/users/123" },
    "pay": { "href": "/orders/456/payments", "method": "POST" }
  },
  "order": {
    "id": 456,
    "total": 99.99
  }
}
Here, the client can follow the "customer" link to access related user data or use the "pay" link to initiate payment, all without predefined knowledge of those URIs. The benefits of achieving Level 3 include maximized decoupling, as clients depend only on media types and link relations rather than server-specific details, and enhanced evolvability, allowing servers to modify implementations over time while maintaining compatibility through stable link semantics. This level fully realizes Fielding's constraints, particularly the uniform interface, leading to more scalable and maintainable distributed systems. However, implementing Hypermedia Controls introduces challenges, such as increased complexity in server-side generation of dynamic links and client-side parsing of varied hypermedia formats, which can complicate development and testing compared to simpler RPC-style interactions. Additionally, not all clients are equipped to process hypermedia, potentially limiting adoption in ecosystems favoring static contracts.
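
A hypermedia-aware client for the response above can be sketched in Python with the requests library; the host name, the payment payload fields, and the "method" hint in each link are assumptions made for illustration.

import requests

BASE = "https://shop.example.com"  # hypothetical host

# Fetch the order and read its hypermedia controls instead of hardcoding URIs.
order_doc = requests.get(BASE + "/orders/456").json()

pay_link = order_doc["_links"]["pay"]
payment = requests.request(
    pay_link.get("method", "GET"),                   # the server advertises which verb to use
    BASE + pay_link["href"],
    json={"amount": order_doc["order"]["total"]},    # assumed payment payload
)
print(payment.status_code)

# If the server later relocates payments, only the advertised link changes;
# a client that navigates by link relation keeps working without modification.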

Implications and Applications

Benefits of Higher Maturity

Advancing to higher levels in the Richardson Maturity Model (RMM) significantly enhances the scalability and performance of APIs by leveraging standardized HTTP features. At Level 2 and above, the proper use of HTTP verbs such as GET for retrieval enables effective caching mechanisms, which reduce server load and allow for horizontal scaling across distributed systems, mirroring the web's proven architecture for handling massive traffic. This stateless design further supports load balancing without session affinity, improving overall system resilience and efficiency in high-demand environments. Additionally, Level 3's hypermedia controls (HATEOAS) minimize client-server coupling, allowing servers to evolve independently while clients discover endpoints dynamically, thereby facilitating seamless scaling without widespread redeployments.

Higher maturity levels promote greater evolvability, enabling APIs to adapt over time with minimal disruption. By introducing resources at Level 1, APIs avoid monolithic endpoints, making it easier to refactor and extend individual components without affecting the entire system. At Level 3, HATEOAS allows servers to modify URI structures or add new actions while providing embedded links in responses, ensuring clients remain functional without requiring updates, as demonstrated in hypermedia-driven services where such changes occur transparently to clients. This decoupling reduces the risk of breaking changes during evolution, supporting long-term maintainability in dynamic ecosystems.

From a developer experience perspective, higher RMM levels standardize interactions, lowering the learning curve and enabling the use of off-the-shelf tools. Level 2's adherence to HTTP verbs and status codes provides clear semantics, such as 201 Created for successful resource creation or 409 Conflict for duplicates, facilitating debugging and integration with existing HTTP libraries and frameworks. The uniform interface at these levels reduces cognitive load, as developers can apply familiar web patterns rather than custom RPC-like protocols, accelerating development and reducing errors in client implementations.

In business contexts, progressing to higher maturity fosters improved interoperability, particularly in microservices architectures where services must integrate across diverse systems. For instance, platforms like AWS API Gateway natively support Level 2+ features, including HTTP methods and caching, which streamline deployment and management, enabling faster time-to-market for cloud-native applications. This maturity enhances ecosystem compatibility, allowing organizations to compose services more reliably and scale operations in distributed environments, ultimately driving efficiency in enterprise integrations. Empirical evidence underscores these advantages, with studies published after 2010 indicating that APIs conforming to higher RMM levels exhibit improved understandability. A controlled experiment involving 105 developers found that adherence to 11 out of 12 RESTful design rules, aligned with Levels 1-3, significantly enhanced API understandability, with effect sizes ranging from small to huge (Cohen's d up to 2.17), suggesting reduced complexity in comprehension and integration tasks. While direct bug reduction metrics vary, such design practices correlate with fewer integration issues, as higher-maturity APIs demonstrate better modifiability in real-world scenarios.
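
The caching benefit mentioned above can be illustrated with a conditional GET, sketched here in Python with the requests library against a hypothetical endpoint; a server that supports ETags can answer 304 Not Modified and avoid resending the representation.

import requests

url = "https://api.example.com/users/123"  # hypothetical resource

first = requests.get(url)
etag = first.headers.get("ETag")

# Revalidate instead of re-downloading: if the resource is unchanged, the server
# can reply 304 with an empty body, saving bandwidth and server work.
headers = {"If-None-Match": etag} if etag else {}
second = requests.get(url, headers=headers)
print(second.status_code)   # 304 if unchanged, otherwise 200 with a fresh representation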

Criticisms and Limitations

One significant criticism of the Richardson Maturity Model (RMM) concerns its practicality, particularly at Level 3, where Hypermedia as the Engine of Application State (HATEOAS) requires resources to include dynamic links and forms that guide client navigation without relying on out-of-band documentation. Implementing HATEOAS introduces substantial complexity for clients, as they must parse and follow server-provided hypermedia controls, which often lack standardized formats, making development, testing, and debugging more challenging than at lower levels. This overhead, combined with potential performance impacts from embedding extensive link metadata in responses, results in Level 3 being rarely achieved in practice, with most APIs plateauing at Level 2 by utilizing resources and HTTP verbs but forgoing hypermedia. Leonard Richardson himself highlighted this difficulty in his original presentation, noting that while Level 3 offers adaptability, many services settle at Level 2 due to the hurdles in hypermedia comprehension and client-side processing.

Critics also argue that the RMM oversimplifies REST evaluation by focusing narrowly on resource identification, HTTP methods, and hypermedia while neglecting other core principles such as cacheability, which enables efficient data reuse, or layered system architecture for scalability. Security considerations, like authentication and authorization mechanisms, are entirely absent from the model's framework, leading some to contend that it does not provide a comprehensive gauge of REST adherence and can mislead developers into prioritizing superficial HTTP compliance over holistic architectural integrity. Roy Fielding, REST's originator, emphasized that true REST demands hypertext-driven interfaces but observed widespread violations due to incomplete understanding, underscoring the model's limited scope in capturing these broader constraints.

The RMM's perspective is further dated by its 2008 origins, predating the rise of alternatives like GraphQL (introduced in 2015) and gRPC (also 2015), which address over-fetching and performance in ways the model does not contemplate, especially for non-HTTP protocols or real-time scenarios involving WebSockets. This temporal gap limits its applicability to modern ecosystems, where hybrid approaches blending REST with these technologies are common, and the model offers no guidance on integrating such paradigms. Alternative frameworks, such as the API Canvas for holistic API strategy or maturity assessments tied to OpenAPI specifications, provide broader lenses by incorporating concerns such as business alignment and security, areas the RMM overlooks. Richardson described Level 3 as an aspirational ideal rather than a strict requirement, acknowledging in his 2008 talk that full hypermedia adoption is often impractical for real-world services. As of 2025, the RMM retains relevance in educational contexts and API audits to benchmark basic REST conformance, but it is increasingly supplemented by automated tools like API linting based on OpenAPI for validating hybrid and evolved designs.
