
Responses

There are many ways to define the response for your mocked API request. To define a response, set the response property to either a partial, a model, an HTTPX response, or a function that returns an HTTPX response. You can also skip setting the response entirely, in which case a default response will be used.

Default Response

For all routes, but especially stateful routes, you can skip manually defining the response and a default response will be returned. The default response will include all required fields of the response object but no meaningful values for fields that would have been generated by the LLM.

Tip

For stateful routes that do not involve LLM-generated fields, it is recommended not to define the response manually; doing so may result in an error.

Partial

All routes have an associated partial object. Partials are typed dictionary representations of the OpenAI response object. Any field not defined by the user is given a default value by merging the partial object with the default response object.

Let's look at an example:

openai_mock.chat.completions.create.response = {
    "choices": [
        {
            "index": 0,
            "finish_reason": "stop",
            "message": {"content": "Hello! How can I help?", "role": "assistant"},
        }
    ]
}

In this example, we're explicitly defining what the completion choices field should look like in the response, but we're leaving all of the other fields to their defaults.
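Conceptually, this works like a recursive dictionary merge: user-supplied keys win, and anything missing is filled from the defaults. Here is a simplified sketch of that behavior (not the library's actual implementation, which may handle details like lists differently):

```python
def merge_with_defaults(partial: dict, defaults: dict) -> dict:
    """Recursively overlay user-supplied keys on top of default values.

    Simplified illustration of partial/default merging; the real
    library's merge logic may differ.
    """
    merged = dict(defaults)
    for key, value in partial.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_with_defaults(value, merged[key])
        else:
            merged[key] = value
    return merged


defaults = {"id": "chatcmpl-abc123", "object": "chat.completion", "model": "gpt-3.5-turbo"}
partial = {"model": "gpt-4"}
print(merge_with_defaults(partial, defaults)["model"])  # gpt-4
```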

Thanks to Python's TypedDict type, autocompletion for field names is automatically supported in your text editor or IDE.
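To show why that works, here is roughly what such a partial type could look like. The field set and class names below are hypothetical, shown only for illustration; see the library's own type definitions for the real ones:

```python
from typing import List, Literal, TypedDict


class MessagePartial(TypedDict, total=False):
    role: Literal["assistant"]
    content: str


class ChoicePartial(TypedDict, total=False):
    index: int
    finish_reason: str
    message: MessagePartial


class ChatCompletionPartial(TypedDict, total=False):
    choices: List[ChoicePartial]


# total=False makes every key optional, so any subset of fields
# type-checks, while editors can still autocomplete the key names
response: ChatCompletionPartial = {
    "choices": [{"index": 0, "finish_reason": "stop"}]
}
```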

Model

Along with partial objects, you can also choose to set the response to the full OpenAI object.

One use case for this is to manually set the status field on the run resource object for polling.

# create run
run = client.beta.threads.runs.create(thread.id, assistant_id=assistant.id)

# manually change status and assign updated run as response for retrieve call
run.status = "in_progress"
openai_mock.beta.threads.runs.retrieve.response = run

# retrieve run
run = client.beta.threads.runs.retrieve(run.id, thread_id=thread.id)
assert run.status == "in_progress"

HTTPX Response

You can set the response to a raw HTTPX response object. This is more involved than using a partial or a model, but it lets you test things like server failures and other status codes.

Tip

For convenience, this library provides an easy way to import external objects from HTTPX and RESPX.

import pytest

import openai
from openai import APIStatusError

import openai_responses
from openai_responses import OpenAIMock
from openai_responses.ext.httpx import Response


@openai_responses.mock()
def test_create_chat_completion_failure(openai_mock: OpenAIMock):
    openai_mock.chat.completions.create.response = Response(500)

    client = openai.Client(api_key="sk-fake123", max_retries=0)

    with pytest.raises(APIStatusError):
        client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system", "content": "You are a helpful assistant."},
                {"role": "user", "content": "Hello!"},
            ],
        )

Function

For more complex scenarios, or to take advantage of RESPX side effects, you can also define the response as a function, as long as that function returns an HTTPX response object.

The function's signature must match one of:

(request: httpx.Request) -> httpx.Response

(request: httpx.Request, route: respx.Route) -> httpx.Response

(request: httpx.Request, route: respx.Route, *, state_store: openai_responses.StateStore) -> httpx.Response

(request: httpx.Request, route: respx.Route, *, state_store: openai_responses.StateStore, ...) -> httpx.Response
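A mocking layer can support this family of signatures by inspecting the function at call time and passing only the arguments it declares. The following is a simplified sketch of that dispatch, not the library's actual implementation:

```python
import inspect


def call_response_fn(fn, request, route, state_store, path_params):
    """Invoke a response function with only the arguments it declares.

    Simplified sketch of signature-based dispatch; the real library's
    behavior may differ.
    """
    params = inspect.signature(fn).parameters
    kwargs = {}
    if "route" in params:
        kwargs["route"] = route
    if "state_store" in params:
        kwargs["state_store"] = state_store
    has_var_keyword = any(
        p.kind is inspect.Parameter.VAR_KEYWORD for p in params.values()
    )
    # path parameters (e.g. thread_id) are forwarded when declared
    # by name or when the function accepts **kwargs
    for name, value in path_params.items():
        if name in params or has_var_keyword:
            kwargs[name] = value
    return fn(request, **kwargs)


def handler(request, route, *, state_store):
    return (request, route, state_store)


# thread_id is dropped because handler neither names it nor takes **kwargs
print(call_response_fn(handler, "req", "rt", "st", {"thread_id": "t1"}))
# ('req', 'rt', 'st')
```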

Looking at a real-life example, this test simulates two failed calls before finally succeeding on the third call.

import openai

import openai_responses
from openai_responses import OpenAIMock
from openai_responses.ext.httpx import Request, Response
from openai_responses.ext.respx import Route
from openai_responses.helpers.builders.chat import chat_completion_from_create_request


def completion_with_failures(request: Request, route: Route) -> Response:
    """Simulate 2 failures before sending successful response"""
    if route.call_count < 2:
        return Response(500)

    completion = chat_completion_from_create_request(request)

    return Response(201, json=completion.model_dump())


@openai_responses.mock()
def test_create_chat_completion(openai_mock: OpenAIMock):
    openai_mock.chat.completions.create.response = completion_with_failures

    client = openai.Client(api_key="sk-fake123", max_retries=3)
    client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": "Hello!"},
        ],
    )

    assert openai_mock.chat.completions.create.calls.call_count == 3

Note

This example also makes use of helpers which are convenient utilities for common operations.

State store injection

For functions used with stateful routes, you can add state_store as an argument or keyword-only argument and it will be provided automatically.

import openai_responses
from openai_responses import OpenAIMock
from openai_responses.stores import StateStore
from openai_responses.ext.httpx import Request, Response, post
from openai_responses.ext.respx import Route

def polled_get_run_responses(
    request: Request,
    route: Route,
    *,
    state_store: StateStore,
) -> Response:
    ...

Path parameters

If a route has path parameters then those will also be automatically passed to the response function.

For example, the route for retrieving runs is:

/threads/{thread_id}/runs/{run_id}

For functions, you can access those path parameters like this:

def polled_get_run_responses(
    request: Request,
    route: Route,
    *,
    state_store: StateStore,
    thread_id: str,
    run_id: str,
) -> Response:
    ...

Warning

If a route has path parameters but you do not need them in the function signature, you must add **kwargs to the function. The path parameters are always passed to the function automatically, so if the signature has neither the named parameters nor **kwargs, you will get an error.
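The requirement can be illustrated with plain Python functions (the handler names below are hypothetical):

```python
def handler_with_kwargs(request, route, **kwargs):
    # thread_id and run_id arrive in kwargs and are simply ignored
    return f"handled {request}"


def handler_without_kwargs(request, route):
    return f"handled {request}"


# the framework calls handlers with the path parameters included:
print(handler_with_kwargs("GET /runs", "route", thread_id="t1", run_id="r1"))
# handled GET /runs

# without **kwargs (or named parameters), the same call raises TypeError
try:
    handler_without_kwargs("GET /runs", "route", thread_id="t1", run_id="r1")
except TypeError as err:
    print("error:", err)
```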