API Contracts: Design, Versioning, and Contract Testing

Master API contract design for microservices including OpenAPI specs, semantic versioning strategies, and automated contract testing.

Reading time: 11 min

Building microservices means dealing with a problem that does not exist in monoliths: keeping distributed services aligned. Your user service expects a certain payload from the order service. The billing service assumes a particular response structure from the inventory service. Someone changes the order service on a Friday afternoon, deploys, and Monday morning you are debugging a cascade of failures that started somewhere unexpected.

I have seen this play out more times than I care to recount. The fix is not better communication or more meetings. It is treating API contracts as actual artifacts that get tested automatically, not just documentation that everyone promises to read.

What Are API Contracts and Why Do They Matter

An API contract defines the agreement between a service provider and its consumers. It specifies what requests the provider accepts, what responses it returns, and what those responses look like. Done right, it lets the provider and consumer evolve independently, as long as neither violates the agreed-upon interface.

In a monolith, you skip this thinking because everything lives in one codebase. One deploy, one team, easy. Microservices flip this. You have multiple teams shipping on different schedules, different failure domains, different tolerances for risk. Without a contract that is enforceable and tested, you are relying on documentation and hope. Hope does not scale past a handful of services.

The contract becomes a buffer. The order service team can ship on Tuesday without a sync meeting with the user service team, as long as the new version still honors the existing contract. That is the foundation of independent deployability, which is the real promise of microservices.

OpenAPI Specification Basics

The most widely adopted standard for describing REST APIs is the OpenAPI Specification (OAS). Originally known as Swagger, it provides a machine-readable format for defining your API endpoints, request/response schemas, authentication methods, and more.

Here is a simplified example of what an OpenAPI spec looks like for a user endpoint.

openapi: 3.1.0
info:
  title: User Service API
  version: 1.0.0
paths:
  /users/{userId}:
    get:
      summary: Get user by ID
      operationId: getUser
      parameters:
        - name: userId
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: Successful response
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/User"
components:
  schemas:
    User:
      type: object
      properties:
        id:
          type: string
        email:
          type: string
          format: email
        createdAt:
          type: string
          format: date-time

The spec gives you several things for free. You can generate client SDKs, spin up mock servers, validate requests against the schema, and produce interactive documentation. Most teams start with OpenAPI when they want to bring discipline to their API design process.

You can read more about REST API design principles in my post on RESTful API Design.

API Versioning Strategies

At some point, you will need to change your API. Perhaps you need to add new fields, rename something, or change the structure of a response. This is where versioning becomes critical.

There are three main approaches to versioning your APIs.

URL Path Versioning

You embed the version number directly in the URL path. For example, /api/v1/users or /api/v2/orders. This is the most explicit approach and the easiest to route. You can see exactly which version you are calling, and load balancers can route accordingly.

The downside is that it violates the idea that a URL should identify a resource uniquely. /api/v1/users and /api/v2/users are technically the same resource according to REST purists, but they have different URLs. It can also get messy if you have many versions floating around.

Header Versioning

With header versioning, the URL stays the same, but clients include a header like API-Version: 2024-01-01 or Accept: application/vnd.mycompany.v2+json. This keeps URLs clean but makes testing and debugging harder since the version is invisible in the URL.

Query Parameter Versioning

You pass the version as a query parameter: /api/users?version=2. It is simple but easy to forget and harder to route at the infrastructure level.

My recommendation is URL path versioning for most teams. The explicitness pays off in operational simplicity. You always know which version is running, and you can route traffic based on path alone.

For a deeper dive into the tradeoffs between these approaches, see my post on API Versioning Strategies.
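As a rough sketch of how the three schemes differ mechanically, here is how a gateway or middleware layer might resolve the requested version. This is illustrative only; `resolveVersion` and the precedence order (path first, then header, then query parameter) are assumptions for the example, not any framework's API.

```java
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

class VersionResolver {
    // Matches /api/v1/..., /api/v2/..., and so on.
    private static final Pattern PATH_VERSION = Pattern.compile("^/api/v(\\d+)/");
    private static final int DEFAULT_VERSION = 1;

    // Path versioning wins, then the API-Version header, then ?version=.
    static int resolveVersion(String path, Map<String, String> headers,
                              Map<String, String> queryParams) {
        Matcher m = PATH_VERSION.matcher(path);
        if (m.find()) {
            return Integer.parseInt(m.group(1));
        }
        String header = headers.get("API-Version");
        if (header != null) {
            return Integer.parseInt(header);
        }
        String query = queryParams.get("version");
        if (query != null) {
            return Integer.parseInt(query);
        }
        return DEFAULT_VERSION;
    }
}
```

Note that the path variant is the only one visible in access logs and routable by a load balancer on its own, which is why the explicit form tends to win operationally.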

Semantic Versioning for APIs

Semantic versioning (SemVer) gives you a formal system for communicating the nature of changes. The format is MAJOR.MINOR.PATCH.

  • MAJOR version increments when you make breaking changes
  • MINOR version increments when you add functionality in a backward-compatible way
  • PATCH version increments when you make backward-compatible bug fixes

For APIs, the rules are fairly strict. Breaking changes include removing fields, changing field types, renaming endpoints, or changing the semantics of a response. Adding optional request fields or new response fields is backward compatible and warrants a MINOR bump.
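The bump rules are mechanical enough to sketch in a few lines. `ChangeType` and `nextVersion` are illustrative names for this example, not a standard library:

```java
class SemVer {
    enum ChangeType { BREAKING, FEATURE, FIX }

    // Applies the SemVer rules: a breaking change bumps MAJOR and resets the
    // rest, a backward-compatible feature bumps MINOR, a fix bumps PATCH.
    static String nextVersion(String current, ChangeType change) {
        String[] parts = current.split("\\.");
        int major = Integer.parseInt(parts[0]);
        int minor = Integer.parseInt(parts[1]);
        int patch = Integer.parseInt(parts[2]);
        switch (change) {
            case BREAKING: return (major + 1) + ".0.0";
            case FEATURE:  return major + "." + (minor + 1) + ".0";
            default:       return major + "." + minor + "." + (patch + 1);
        }
    }
}
```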

The key insight is that you should never deploy a MAJOR version bump without a migration path. If your user service is on version 3, the old version 2 should continue working until all consumers have migrated. This is where API gateways become invaluable. You can route traffic between versions, giving consumers time to adapt.

Understanding Contract Testing

This is where contract testing comes in. The question it answers: how do you verify that a provider satisfies every consumer's expectations, without standing up your entire microservices stack for every test run?

Traditional integration testing requires all services running together. With fifty microservices, you either run all fifty locally (unworkable) or in a shared environment (slow, flaky, requiring constant coordination).

Contract testing flips this. Instead of testing the whole system, you test the agreement between one consumer and one provider in a controlled, isolated setting.

Two approaches exist.

Provider-Driven Contracts

The provider defines its promises. The provider team writes tests validating their own behavior against the contract, and consumers trust the provider's assertions.

Simple to set up, but it places the burden on the provider team to track what every consumer actually needs.

Consumer-Driven Contracts

Each consumer describes what it requires from the provider. Those requirements get encoded as tests. The provider then verifies it satisfies all consumer contracts before shipping.

This gives every consumer a voice. If the billing service needs the totalAmount field, that expectation lives in a contract test that the provider must pass. The billing team does not have to hope the order service team remembered their requirements. They can verify it automatically.

Consumer-driven contracts shine in multi-team environments. Dependencies become visible and testable, and provider teams gain real confidence that their changes are safe.
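In miniature, the consumer-driven idea looks like the sketch below: each consumer records the fields it depends on, and the provider checks a candidate payload against every recorded expectation before shipping. The names and the in-memory map are illustrative; in a real setup these expectations live in pact files managed by tooling.

```java
import java.util.List;
import java.util.Map;

class ConsumerExpectations {
    // Consumer name -> the fields that consumer reads from /orders responses.
    private static final Map<String, List<String>> EXPECTATIONS = Map.of(
        "billing-service",  List.of("orderId", "totalAmount"),
        "shipping-service", List.of("orderId", "status"));

    // A provider payload is safe only if every consumer still finds the
    // fields it declared; returns the consumers that would break.
    static List<String> brokenConsumers(Map<String, Object> payload) {
        return EXPECTATIONS.entrySet().stream()
            .filter(e -> !payload.keySet().containsAll(e.getValue()))
            .map(Map.Entry::getKey)
            .sorted()
            .toList();
    }
}
```

Dropping totalAmount from the payload would flag billing-service immediately, which is exactly the feedback loop contract testing automates.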

Breaking vs Non-Breaking Changes

Understanding which changes break contracts and which do not is essential for safe API evolution.

Non-Breaking Changes

These changes do not require a MAJOR version bump and should not break existing consumers.

  • Adding new optional fields to request bodies
  • Adding new fields to response bodies
  • Adding new endpoints
  • Making previously required fields optional
  • Adding new optional query parameters

Breaking Changes

These changes require a MAJOR version bump and a migration strategy.

  • Removing fields from requests or responses
  • Changing the type of a field (string to number, for example)
  • Renaming fields or endpoints
  • Making optional fields required
  • Changing the structure of a response (nested objects where there were none)
  • Changing authentication requirements

The safest approach is to think of your API as a public promise. Once you publish version 1.0, you need to support it until all consumers have migrated. This is why adding required fields to requests is dangerous, even though it seems like a minor change. A consumer written before that field existed will not send it, and its previously valid calls start failing.
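The flip side is what a well-behaved consumer can do to stay safe: the tolerant reader pattern, where the consumer picks out only the fields it needs and ignores everything else, so additive response changes never break it. A minimal sketch, with an illustrative `Order` record and `fromFields` helper:

```java
import java.util.Map;

class TolerantReader {
    record Order(String orderId, String status) {}

    // Reads only the fields this consumer actually uses; extra keys in the
    // payload are simply ignored, so a new response field is non-breaking.
    static Order fromFields(Map<String, Object> payload) {
        return new Order(
            (String) payload.get("orderId"),
            (String) payload.get("status"));
    }
}
```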

Contract Testing with Pact

Pact is the most popular consumer-driven contract testing framework. It works by having consumers define their expectations, generating a “pact” file, and then having providers verify against that pact.

Here is how it works in practice with a consumer test.

// Imports assume the Pact JVM JUnit 5 consumer library (pact-jvm 4.x).
import au.com.dius.pact.consumer.MockServer;
import au.com.dius.pact.consumer.dsl.PactDslWithProvider;
import au.com.dius.pact.consumer.junit5.PactConsumerTestExt;
import au.com.dius.pact.consumer.junit5.PactTestFor;
import au.com.dius.pact.core.model.V4Pact;
import au.com.dius.pact.core.model.annotations.Pact;
import org.junit.jupiter.api.Test;
import org.junit.jupiter.api.extension.ExtendWith;

import static au.com.dius.pact.consumer.dsl.LambdaDsl.newJsonBody;
import static org.junit.jupiter.api.Assertions.assertEquals;

@ExtendWith(PactConsumerTestExt.class)
class UserServiceConsumerPactTest {

    @Pact(consumer = "UserService", provider = "OrderService")
    V4Pact getOrdersPact(PactDslWithProvider builder) {
        return builder
            .uponReceiving("a request for user orders")
                .path("/orders")
                .query("userId", "123")
                .method("GET")
            .willRespondWith()
                .status(200)
                .body(newJsonBody(body -> {
                    body.stringValue("orderId", "ord-456");
                    body.stringValue("status", "shipped");
                    body.stringValue("totalAmount", "29.99");
                }).build())
            .toPact(V4Pact.class);
    }

    @Test
    @PactTestFor(pactMethod = "getOrdersPact", port = "8080")
    void testOrders(MockServer mockServer) {
        OrderClient client = new OrderClient(mockServer.getUrl());
        Order order = client.getOrdersForUser("123");

        assertEquals("ord-456", order.getOrderId());
        assertEquals("shipped", order.getStatus());
    }
}

This test defines exactly what the consumer expects from the order service. When this test runs, Pact records the interaction and generates a pact file. The order service team can then download this pact file and verify their implementation satisfies it.

For Spring Boot applications, Spring Cloud Contract is another excellent option. It follows a similar philosophy but integrates more tightly with the Spring ecosystem. You define your contracts in Groovy or YAML, and it generates provider-side verification tests and consumer-side stubs automatically.

Contract Flow Diagram

Understanding the flow of contract testing helps visualize how everything connects. Here is a diagram showing the typical contract testing lifecycle.

graph TD
    A[Consumer writes CDC test] --> B[Pact file generated]
    B --> C[Pact file published to Pact Broker]
    D[Provider pulls latest pact] --> E[Provider runs contract verification]
    E --> F{All contracts satisfied?}
    F -->|Yes| G[Safe to deploy provider]
    F -->|No| H[Provider must fix failures]
    H --> E
    C --> D
    I[New version of Provider deployed] --> J[Consumer notified of changes]
    J --> K{Consumer contract still valid?}
    K -->|No| L[Consumer updates their CDC test]
    L --> A

The Pact Broker acts as a central hub. Providers pull contracts from the broker and verify against them. When a provider deploys successfully, it publishes its verification results. If a provider breaks a contract, the broker can notify affected consumers.

Service Orchestration vs Choreography

When you have multiple services that need to coordinate, there are two broad patterns: orchestration and choreography.

In orchestration, you have a central coordinator that directs the flow. It calls each service in sequence, handles responses, and decides what to do next. This is straightforward to understand and implement, but the coordinator becomes a bottleneck and a single point of failure.

In choreography, services react to events and publish their own events. There is no central coordinator. Each service only knows about its own responsibilities. This is more resilient and allows services to evolve independently, but it can be harder to trace end-to-end flows.

Both approaches benefit from contract testing. In an orchestrated system, the orchestrator is a consumer of all the downstream services, so it should have contract tests for each one. In a choreographed system, every service that publishes events should have contracts that describe those events, and every service that consumes events should have contracts that describe what it expects.

You can read more about these patterns in my posts on Service Orchestration and Service Choreography.
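For the event-driven case, a contract can be as simple as the event type plus the fields a consumer depends on. The sketch below is illustrative; real setups typically use Pact's message pact support or a schema registry rather than a hand-rolled check.

```java
import java.util.List;
import java.util.Map;

class EventContract {
    private final String eventType;
    private final List<String> requiredFields;

    EventContract(String eventType, List<String> requiredFields) {
        this.eventType = eventType;
        this.requiredFields = requiredFields;
    }

    // An event satisfies the contract if it declares the expected type and
    // carries every field a consumer said it depends on.
    boolean isSatisfiedBy(Map<String, Object> event) {
        if (!eventType.equals(event.get("type"))) return false;
        return requiredFields.stream().allMatch(event::containsKey);
    }
}
```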

Practical Recommendations

If you are starting from scratch, here is what I would suggest based on what actually works.

Start with OpenAPI. Define your contracts upfront. Even if you skip contract testing for now, a machine-readable spec forces you to reason about your API design before you write code. Retrofitting contracts onto an existing system is painful.

Pick one service to pilot consumer-driven contracts. Not your whole organization at once. Pick a service with multiple consumers, write the contracts, show the value. Small wins build momentum.

Get a Pact Broker or something similar. The broker is what makes this manageable at scale. Without it, you are tracking pact files by hand, which becomes a mess fast.

Take breaking changes seriously. Major version bumps should be rare and announced with plenty of lead time. Run old versions in production alongside new ones until consumers have migrated. An API gateway helps here.

Automate contract verification in your CI pipeline. It should run on every pull request. If a provider breaks a consumer contract, the build fails before the code merges. That is the safety net.

Conclusion

API contracts are not documentation or an API design exercise. They are a safety mechanism. They let teams move fast without accidentally breaking each other, and they make implicit agreements between services explicit and testable.

Contract testing is one of those practices that feels like pure overhead until you have worked without it. Then you miss it. The confidence of knowing your changes do not break downstream consumers, before production, every time, is worth the investment.

Pick one API. Write the contract. Add a consumer-driven test. Find out what breaks before your users do. That is the point.
