Responses
A utility library for mocking out the requests Python library.
Note
Responses requires Python 3.8 or newer, and requests >= 2.30.0
Table of Contents
- Table of Contents
- Installing
- Deprecations and Migration Path
- Basics
- Response Parameters
- Exception as Response body
- Matching Requests
- Response Registry
- Dynamic Responses
- Integration with unit test frameworks
- Assertions on declared responses
- Assert Request Call Count
- Assert Request Calls data
- Multiple Responses
- URL Redirection
- Validate Retry mechanism
- Using a callback to modify the response
- Passing through real requests
- Viewing/Modifying registered responses
- Coroutines and Multithreading
- BETA Features
- Contributing
Installing
pip install responses
Deprecations and Migration Path
Here you will find a list of deprecated functionality and a migration path for each. Please ensure to update your code according to the guidance.
| Deprecated Functionality | Deprecated in Version | Migration Path |
|---|---|---|
| responses.json_params_matcher | 0.14.0 | responses.matchers.json_params_matcher |
| responses.urlencoded_params_matcher | 0.14.0 | responses.matchers.urlencoded_params_matcher |
| stream argument in Response and CallbackResponse | 0.15.0 | Use the stream argument in the request directly. |
| match_querystring argument in Response and CallbackResponse | 0.17.0 | Use responses.matchers.query_param_matcher or responses.matchers.query_string_matcher |
| responses.assert_all_requests_are_fired, responses.passthru_prefixes, responses.target | 0.20.0 | Use responses.mock.assert_all_requests_are_fired, responses.mock.passthru_prefixes, responses.mock.target instead. |
Basics
The core of responses comes from registering mock responses and covering the test function
with the responses.activate decorator. responses provides an interface very similar to that of requests.
Main Interface
- responses.add(Response or Response args) - registers either a Response object or the arguments of a Response object directly. See Response Parameters
```python
import responses
import requests


@responses.activate
def test_simple():
    # Register via 'Response' object
    rsp1 = responses.Response(
        method="PUT",
        url="http://example.com",
    )
    responses.add(rsp1)
    # register via direct arguments
    responses.add(
        responses.GET,
        "http://twitter.com/api/1/foobar",
        json={"error": "not found"},
        status=404,
    )

    resp = requests.get("http://twitter.com/api/1/foobar")
    resp2 = requests.put("http://example.com")

    assert resp.json() == {"error": "not found"}
    assert resp.status_code == 404

    assert resp2.status_code == 200
    assert resp2.request.method == "PUT"
```
If you attempt to fetch a URL for which no match is registered, responses will raise
a ConnectionError:

```python
import pytest
import responses
import requests
from requests.exceptions import ConnectionError


@responses.activate
def test_simple():
    with pytest.raises(ConnectionError):
        requests.get("http://twitter.com/api/1/foobar")
```
Shortcuts
Shortcuts provide a shortened version of responses.add() where the method argument is prefilled:
- responses.delete(Response args) - register a DELETE response
- responses.get(Response args) - register a GET response
- responses.head(Response args) - register a HEAD response
- responses.options(Response args) - register an OPTIONS response
- responses.patch(Response args) - register a PATCH response
- responses.post(Response args) - register a POST response
- responses.put(Response args) - register a PUT response
```python
import responses
import requests


@responses.activate
def test_simple():
    responses.get(
        "http://twitter.com/api/1/foobar",
        json={"type": "get"},
    )

    responses.post(
        "http://twitter.com/api/1/foobar",
        json={"type": "post"},
    )

    responses.patch(
        "http://twitter.com/api/1/foobar",
        json={"type": "patch"},
    )

    resp_get = requests.get("http://twitter.com/api/1/foobar")
    resp_post = requests.post("http://twitter.com/api/1/foobar")
    resp_patch = requests.patch("http://twitter.com/api/1/foobar")

    assert resp_get.json() == {"type": "get"}
    assert resp_post.json() == {"type": "post"}
    assert resp_patch.json() == {"type": "patch"}
```
Responses as a context manager
Instead of wrapping the whole function with the decorator, you can use a context manager.

```python
import responses
import requests


def test_my_api():
    with responses.RequestsMock() as rsps:
        rsps.add(
            responses.GET,
            "http://twitter.com/api/1/foobar",
            body="{}",
            status=200,
            content_type="application/json",
        )
        resp = requests.get("http://twitter.com/api/1/foobar")

        assert resp.status_code == 200

    # outside the context manager requests will hit the remote server
    resp = requests.get("http://twitter.com/api/1/foobar")
    resp.status_code == 404
```
Response Parameters
The following attributes can be passed to a Response mock:
- method (str) - The HTTP method (GET, POST, etc).
- url (str or compiled regular expression) - The full resource URL.
- match_querystring (bool) - DEPRECATED: use responses.matchers.query_param_matcher or responses.matchers.query_string_matcher. Include the query string when matching requests. Enabled by default if the response URL contains a query string, disabled if it doesn't or if the URL is a regular expression.
- body (str or BufferedReader or Exception) - The response body. Read more: Exception as Response body
- json - A Python object representing the JSON response body. Automatically configures the appropriate Content-Type.
- status (int) - The HTTP status code.
- content_type (str) - Defaults to text/plain.
- headers (dict) - Response headers.
- stream (bool) - DEPRECATED: use the stream argument in the request directly.
- auto_calculate_content_length (bool) - Disabled by default. Automatically calculates the length of a supplied string or JSON body.
- match (tuple) - An iterable (tuple is recommended) of callbacks to match requests based on request attributes. The module provides multiple matchers that you can use to match:
  - body contents in JSON format
  - body contents in URL-encoded data format
  - request query parameters
  - request query string (similar to query parameters but takes a string as input)
  - kwargs provided to the request, e.g. stream, verify
  - 'multipart/form-data' content and headers in the request
  - request headers
  - request fragment identifier

Alternatively, you can create a custom matcher. Read more: Matching Requests
Exception as Response body
You can pass an Exception as the body to trigger an error on the request:
```python
import pytest
import responses
import requests


@responses.activate
def test_simple():
    responses.get("http://twitter.com/api/1/foobar", body=Exception("..."))
    with pytest.raises(Exception):
        requests.get("http://twitter.com/api/1/foobar")
```
Matching Requests
Matching Request Body Contents
When adding responses for endpoints that receive request data, you can add
matchers to ensure your code is sending the right parameters and provide
different responses based on the request body contents. responses provides
matchers for JSON and URL-encoded request bodies.
URL-encoded data
```python
import responses
import requests
from responses import matchers


@responses.activate
def test_calc_api():
    responses.post(
        url="http://calc.com/sum",
        body="4",
        match=[matchers.urlencoded_params_matcher({"left": "1", "right": "3"})],
    )
    requests.post("http://calc.com/sum", data={"left": 1, "right": 3})
```
JSON encoded data
Matching JSON encoded data can be done with matchers.json_params_matcher().
```python
import responses
import requests
from responses import matchers


@responses.activate
def test_calc_api():
    responses.post(
        url="http://example.com/",
        body="one",
        match=[
            matchers.json_params_matcher({"page": {"name": "first", "type": "json"}})
        ],
    )
    resp = requests.request(
        "POST",
        "http://example.com/",
        headers={"Content-Type": "application/json"},
        json={"page": {"name": "first", "type": "json"}},
    )
```
Query Parameters Matcher
Query Parameters as a Dictionary
You can use the matchers.query_param_matcher function to match
against the params request parameter. Just use the same dictionary as you
would use in the params argument of the request.

Note, do not include query parameters as part of the URL, and avoid using the
deprecated match_querystring argument.
```python
import responses
import requests
from responses import matchers


@responses.activate
def test_calc_api():
    url = "http://example.com/test"
    params = {"hello": "world", "I am": "a big test"}
    responses.get(
        url=url,
        body="test",
        match=[matchers.query_param_matcher(params)],
    )

    resp = requests.get(url, params=params)

    constructed_url = r"http://example.com/test?I+am=a+big+test&hello=world"
    assert resp.url == constructed_url
    assert resp.request.url == constructed_url
    assert resp.request.params == params
```
By default, the matcher validates that the request parameters match the supplied
dictionary exactly. To validate only the parameters specified in the matcher,
and ignore any others present in the request, use strict_match=False.
Query Parameters as a String
Alternatively, you can pass a query string value to matchers.query_string_matcher to match
query parameters in your request:
```python
import requests
import responses
from responses import matchers


@responses.activate
def my_func():
    responses.get(
        "https://httpbin.org/get",
        match=[matchers.query_string_matcher("didi=pro&test=1")],
    )
    resp = requests.get("https://httpbin.org/get", params={"test": 1, "didi": "pro"})


my_func()
```
Request Keyword Arguments Matcher
To validate request arguments, use the matchers.request_kwargs_matcher function to match
against the request kwargs.

Only the following arguments are supported: timeout, verify, proxies, stream, cert.

Note, only the arguments provided to matchers.request_kwargs_matcher will be validated.
```python
import responses
import requests
from responses import matchers

with responses.RequestsMock(assert_all_requests_are_fired=False) as rsps:
    req_kwargs = {
        "stream": True,
        "verify": False,
    }
    rsps.add(
        "GET",
        "http://111.com",
        match=[matchers.request_kwargs_matcher(req_kwargs)],
    )

    requests.get("http://111.com", stream=True)

# >>> Arguments don't match: {stream: True, verify: True} doesn't match {stream: True, verify: False}
```
Request multipart/form-data Data Validation
To validate the request body and headers for multipart/form-data you can use
matchers.multipart_matcher. The files and data parameters provided will be compared
to the request:
```python
import requests
import responses
from responses.matchers import multipart_matcher


@responses.activate
def my_func():
    req_data = {"some": "other", "data": "fields"}
    req_files = {"file_name": b"Old World!"}
    responses.post(
        url="http://httpbin.org/post",
        match=[multipart_matcher(req_files, data=req_data)],
    )
    resp = requests.post("http://httpbin.org/post", files={"file_name": b"New World!"})


my_func()
# >>> raises ConnectionError: multipart/form-data doesn't match. Request body differs.
```
Request Fragment Identifier Validation
To validate a request's URL fragment identifier you can use matchers.fragment_identifier_matcher.
The matcher takes the fragment string (everything after the # sign) as input for comparison:
```python
import requests
import responses
from responses.matchers import fragment_identifier_matcher


@responses.activate
def run():
    url = "http://example.com?ab=xy&zed=qwe#test=1&foo=bar"
    responses.get(
        url,
        match=[fragment_identifier_matcher("test=1&foo=bar")],
        body=b"test",
    )

    # two requests to check reversed order of fragment identifier
    resp = requests.get("http://example.com?ab=xy&zed=qwe#test=1&foo=bar")
    resp = requests.get("http://example.com?zed=qwe&ab=xy#foo=bar&test=1")


run()
```
Request Headers Validation
When adding responses you can specify matchers to ensure that your code is sending the right headers and provide different responses based on the request headers.
```python
import responses
import requests
from responses import matchers


@responses.activate
def test_content_type():
    responses.get(
        url="http://example.com/",
        body="hello world",
        match=[matchers.header_matcher({"Accept": "text/plain"})],
    )

    responses.get(
        url="http://example.com/",
        json={"content": "hello world"},
        match=[matchers.header_matcher({"Accept": "application/json"})],
    )

    # request in reverse order to how they were added!
    resp = requests.get("http://example.com/", headers={"Accept": "application/json"})
    assert resp.json() == {"content": "hello world"}

    resp = requests.get("http://example.com/", headers={"Accept": "text/plain"})
    assert resp.text == "hello world"
```
Because requests will send several standard headers in addition to what was
specified by your code, request headers that are additional to the ones
passed to the matcher are ignored by default. You can change this behaviour by
passing strict_match=True to the matcher to ensure that only the headers
that you're expecting are sent and no others. Note that you will probably have
to use a PreparedRequest in your code to ensure that requests doesn't
include any additional headers.
```python
import pytest
import responses
import requests
from requests.exceptions import ConnectionError
from responses import matchers


@responses.activate
def test_content_type():
    responses.get(
        url="http://example.com/",
        body="hello world",
        match=[matchers.header_matcher({"Accept": "text/plain"}, strict_match=True)],
    )

    # this will fail because requests adds its own headers
    with pytest.raises(ConnectionError):
        requests.get("http://example.com/", headers={"Accept": "text/plain"})

    # a prepared request where you overwrite the headers before sending will work
    session = requests.Session()
    prepped = session.prepare_request(
        requests.Request(
            method="GET",
            url="http://example.com/",
        )
    )
    prepped.headers = {"Accept": "text/plain"}

    resp = session.send(prepped)
    assert resp.text == "hello world"
```
Creating Custom Matcher
If your application requires other encodings or different data validation, you can build
your own matcher that returns Tuple[matches: bool, reason: str], where the boolean
indicates whether the request parameters match and the string gives a reason in case of
match failure. Your matcher can expect a PreparedRequest parameter to be provided by responses.

Note that PreparedRequest is customized and has the additional attributes params and req_kwargs.
Response Registry
Default Registry
By default, responses searches all registered Response objects and returns the first
match. If only one Response is registered for a request, the registry is kept unchanged
and the response can serve repeated requests. However, if multiple registered responses
match the same request, the first match is returned and then removed from the registry.
Ordered Registry
In some scenarios it is important to preserve the order of the requests and responses.
You can use registries.OrderedRegistry to force all Response objects to be dependent
on the insertion order and invocation index.
In the following example we add multiple Response objects that target the same URL.
As you can see, the status code depends on the invocation order.
```python
import requests
import responses
from responses.registries import OrderedRegistry


@responses.activate(registry=OrderedRegistry)
def test_invocation_index():
    responses.get(
        "http://twitter.com/api/1/foobar",
        json={"msg": "not found"},
        status=404,
    )
    responses.get(
        "http://twitter.com/api/1/foobar",
        json={"msg": "OK"},
        status=200,
    )
    responses.get(
        "http://twitter.com/api/1/foobar",
        json={"msg": "OK"},
        status=200,
    )
    responses.get(
        "http://twitter.com/api/1/foobar",
        json={"msg": "not found"},
        status=404,
    )

    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 404
    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 200
    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 200
    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 404
```
Custom Registry
Built-in registries are suitable for most use cases, but to handle special conditions you can
implement a custom registry, which must follow the interface of registries.FirstMatchRegistry.
Redefining the find method will allow you to create custom search logic and return the
appropriate Response.

The following example shows how to set a custom registry:
```python
import responses
from responses import registries


class CustomRegistry(registries.FirstMatchRegistry):
    pass


print("Before tests:", responses.mock.get_registry())
""" Before tests: <responses.registries.FirstMatchRegistry object> """


# using function decorator
@responses.activate(registry=CustomRegistry)
def run():
    print("Within test:", responses.mock.get_registry())
    """ Within test: <__main__.CustomRegistry object> """


run()

print("After test:", responses.mock.get_registry())
""" After test: <responses.registries.FirstMatchRegistry object> """

# using context manager
with responses.RequestsMock(registry=CustomRegistry) as rsps:
    print("In context manager:", rsps.get_registry())
    """ In context manager: <__main__.CustomRegistry object> """

print("After exit from context manager:", responses.mock.get_registry())
""" After exit from context manager: <responses.registries.FirstMatchRegistry object> """
```
Dynamic Responses
You can utilize callbacks to provide dynamic responses. The callback must return
a tuple of (status, headers, body).
```python
import json

import responses
import requests


@responses.activate
def test_calc_api():
    def request_callback(request):
        payload = json.loads(request.body)
        resp_body = {"value": sum(payload["numbers"])}
        headers = {"request-id": "728d329e-0e86-11e4-a748-0c84dc037c13"}
        return (200, headers, json.dumps(resp_body))

    responses.add_callback(
        responses.POST,
        "http://calc.com/sum",
        callback=request_callback,
        content_type="application/json",
    )

    resp = requests.post(
        "http://calc.com/sum",
        json.dumps({"numbers": [1, 2, 3]}),
        headers={"content-type": "application/json"},
    )

    assert resp.json() == {"value": 6}
    assert len(responses.calls) == 1
    assert responses.calls[0].request.url == "http://calc.com/sum"
    assert responses.calls[0].response.text == '{"value": 6}'
    assert (
        responses.calls[0].response.headers["request-id"]
        == "728d329e-0e86-11e4-a748-0c84dc037c13"
    )
```
You can also pass a compiled regex to add_callback to match multiple urls:
```python
import json
import re
from functools import reduce

import responses
import requests

operators = {
    "sum": lambda x, y: x + y,
    "prod": lambda x, y: x * y,
    "pow": lambda x, y: x**y,
}


@responses.activate
def test_regex_url():
    def request_callback(request):
        payload = json.loads(request.body)
        operator_name = request.path_url[1:]

        operator = operators[operator_name]

        resp_body = {"value": reduce(operator, payload["numbers"])}
        headers = {"request-id": "728d329e-0e86-11e4-a748-0c84dc037c13"}
        return (200, headers, json.dumps(resp_body))

    responses.add_callback(
        responses.POST,
        re.compile("http://calc.com/(sum|prod|pow|unsupported)"),
        callback=request_callback,
        content_type="application/json",
    )

    resp = requests.post(
        "http://calc.com/prod",
        json.dumps({"numbers": [2, 3, 4]}),
        headers={"content-type": "application/json"},
    )
    assert resp.json() == {"value": 24}


test_regex_url()
```
If you want to pass extra keyword arguments to the callback function, for example when reusing
a callback function to give a slightly different result, you can use functools.partial:
```python
import json
from functools import partial

import responses


def request_callback(request, id=None):
    payload = json.loads(request.body)
    resp_body = {"value": sum(payload["numbers"])}
    headers = {"request-id": id}
    return (200, headers, json.dumps(resp_body))


responses.add_callback(
    responses.POST,
    "http://calc.com/sum",
    callback=partial(request_callback, id="728d329e-0e86-11e4-a748-0c84dc037c13"),
    content_type="application/json",
)
```
Integration with unit test frameworks
Responses as a pytest fixture
Use the pytest-responses package to export responses as a pytest fixture.
pip install pytest-responses
You can then access it in a pytest script using:
```python
import requests


# the ``responses`` fixture is provided by the pytest-responses plugin
def test_api(responses):
    responses.get(
        "http://twitter.com/api/1/foobar",
        body="{}",
        status=200,
        content_type="application/json",
    )
    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 200
```
Add default responses for each test
When running unittest tests, this can be used to set up some
generic class-level responses that may be complemented by each test.
A similar approach can be applied in the pytest framework.
```python
import unittest

import requests
import responses
from responses import matchers


class TestMyApi(unittest.TestCase):
    def setUp(self):
        responses.get("https://example.com", body="within setup")
        # here go other self.responses.add(...)

    @responses.activate
    def test_my_func(self):
        responses.get(
            "https://httpbin.org/get",
            match=[matchers.query_param_matcher({"test": "1", "didi": "pro"})],
            body="within test",
        )
        resp = requests.get("https://example.com")
        resp2 = requests.get(
            "https://httpbin.org/get", params={"test": "1", "didi": "pro"}
        )
        print(resp.text)
        # >>> within setup
        print(resp2.text)
        # >>> within test
```
RequestMock methods: start, stop, reset
responses has start, stop, and reset methods analogous to those of
unittest.mock.patch.

These make it simpler to do requests mocking in setup methods or where
you want to do multiple patches without nesting decorators or with statements.
```python
import requests
import responses


class TestUnitTestPatchSetup:
    def setup(self):
        """Creates ``RequestsMock`` instance and starts it."""
        self.r_mock = responses.RequestsMock(assert_all_requests_are_fired=True)
        self.r_mock.start()

        # optionally some default responses could be registered
        self.r_mock.get("https://example.com", status=505)
        self.r_mock.put("https://example.com", status=506)

    def teardown(self):
        """Stops and resets RequestsMock instance.

        If ``assert_all_requests_are_fired`` is set to ``True``, will raise an error
        if some requests were not processed.
        """
        self.r_mock.stop()
        self.r_mock.reset()

    def test_function(self):
        resp = requests.get("https://example.com")
        assert resp.status_code == 505

        resp = requests.put("https://example.com")
        assert resp.status_code == 506
```
Assertions on declared responses
When used as a context manager, responses will, by default, raise an assertion
error if a URL was registered but not accessed. This can be disabled by passing
the assert_all_requests_are_fired value:
```python
import responses
import requests


def test_my_api():
    with responses.RequestsMock(assert_all_requests_are_fired=False) as rsps:
        rsps.add(
            responses.GET,
            "http://twitter.com/api/1/foobar",
            body="{}",
            status=200,
            content_type="application/json",
        )
```
When assert_all_requests_are_fired=True and an exception occurs within the
context manager, assertions about unfired requests will still be raised. This
provides valuable context about which mocked requests were or weren't called
when debugging test failures.
```python
import responses
import requests


def test_with_exception():
    with responses.RequestsMock(assert_all_requests_are_fired=True) as rsps:
        rsps.add(responses.GET, "http://example.com/users", body="test")
        rsps.add(responses.GET, "http://example.com/profile", body="test")

        requests.get("http://example.com/users")
        raise ValueError("Something went wrong")

# Output:
# ValueError: Something went wrong
#
# During handling of the above exception, another exception occurred:
#
# AssertionError: Not all requests have been executed [('GET', 'http://example.com/profile')]
```
Assert Request Call Count
Assert based on Response object
Each Response object has a call_count attribute that can be inspected
to check how many times each request was matched.
```python
import requests
import responses
from responses import matchers


@responses.activate
def test_call_count_with_matcher():
    rsp = responses.get(
        "http://www.example.com",
        match=(matchers.query_param_matcher({}),),
    )
    rsp2 = responses.get(
        "http://www.example.com",
        match=(matchers.query_param_matcher({"hello": "world"}),),
        status=777,
    )
    requests.get("http://www.example.com")
    resp1 = requests.get("http://www.example.com")
    requests.get("http://www.example.com?hello=world")
    resp2 = requests.get("http://www.example.com?hello=world")

    assert resp1.status_code == 200
    assert resp2.status_code == 777

    assert rsp.call_count == 2
    assert rsp2.call_count == 2
```
Assert based on the exact URL
Assert that the request was called exactly n times.
```python
import pytest
import responses
import requests


@responses.activate
def test_assert_call_count():
    responses.get("http://example.com")

    requests.get("http://example.com")
    assert responses.assert_call_count("http://example.com", 1) is True

    requests.get("http://example.com")
    with pytest.raises(AssertionError) as excinfo:
        responses.assert_call_count("http://example.com", 1)
    assert (
        "Expected URL 'http://example.com' to be called 1 times. Called 2 times."
        in str(excinfo.value)
    )


@responses.activate
def test_assert_call_count_always_match_qs():
    responses.get("http://www.example.com")
    requests.get("http://www.example.com")
    requests.get("http://www.example.com?hello=world")

    # One call on each url, querystring is matched by default
    responses.assert_call_count("http://www.example.com", 1) is True
    responses.assert_call_count("http://www.example.com?hello=world", 1) is True
```
Assert Request Calls data
Each Response object has a calls list whose elements correspond to the Call objects
in the registry's global list. This can be useful when the order of requests is not
guaranteed, but you need to check their correctness, for example in multithreaded
applications.
```python
import concurrent.futures
import json

import responses
import requests


@responses.activate
def test_assert_calls_on_resp():
    rsp1 = responses.patch("http://www.foo.bar/1/", status=200)
    rsp2 = responses.patch("http://www.foo.bar/2/", status=400)
    rsp3 = responses.patch("http://www.foo.bar/3/", status=200)

    def update_user(uid, is_active):
        url = f"http://www.foo.bar/{uid}/"
        response = requests.patch(url, json={"is_active": is_active})
        return response

    with concurrent.futures.ThreadPoolExecutor(max_workers=3) as executor:
        future_to_uid = {
            executor.submit(update_user, uid, is_active): uid
            for (uid, is_active) in [("3", True), ("2", True), ("1", False)]
        }
        for future in concurrent.futures.as_completed(future_to_uid):
            uid = future_to_uid[future]
            response = future.result()
            print(f"{uid} updated with {response.status_code} status code")

    assert len(responses.calls) == 3  # total calls count

    assert rsp1.call_count == 1
    assert rsp1.calls[0] in responses.calls
    assert rsp1.calls[0].response.status_code == 200
    assert json.loads(rsp1.calls[0].request.body) == {"is_active": False}

    assert rsp2.call_count == 1
    assert rsp2.calls[0] in responses.calls
    assert rsp2.calls[0].response.status_code == 400
    assert json.loads(rsp2.calls[0].request.body) == {"is_active": True}

    assert rsp3.call_count == 1
    assert rsp3.calls[0] in responses.calls
    assert rsp3.calls[0].response.status_code == 200
    assert json.loads(rsp3.calls[0].request.body) == {"is_active": True}
```
Multiple Responses
You can also add multiple responses for the same url:
```python
import responses
import requests


@responses.activate
def test_my_api():
    responses.get("http://twitter.com/api/1/foobar", status=500)
    responses.get(
        "http://twitter.com/api/1/foobar",
        body="{}",
        status=200,
        content_type="application/json",
    )

    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 500
    resp = requests.get("http://twitter.com/api/1/foobar")
    assert resp.status_code == 200
```
URL Redirection
In the following example you can see how to create a redirection chain and add a custom exception that will be raised in the execution chain and contain the history of redirects:

A -> 301 redirect -> B
B -> 301 redirect -> C
C -> connection issue
```python
import pytest
import requests
import responses


@responses.activate
def test_redirect():
    # create multiple Response objects where first two contain redirect headers
    rsp1 = responses.Response(
        responses.GET,
        "http://example.com/1",
        status=301,
        headers={"Location": "http://example.com/2"},
    )
    rsp2 = responses.Response(
        responses.GET,
        "http://example.com/2",
        status=301,
        headers={"Location": "http://example.com/3"},
    )
    rsp3 = responses.Response(responses.GET, "http://example.com/3", status=200)

    # register above generated Responses in ``responses`` module
    responses.add(rsp1)
    responses.add(rsp2)
    responses.add(rsp3)

    # do the first request in order to generate genuine ``requests`` response
    # this object will contain genuine attributes of the response, like ``history``
    rsp = requests.get("http://example.com/1")
    responses.calls.reset()

    # customize exception with ``response`` attribute
    my_error = requests.ConnectionError("custom error")
    my_error.response = rsp

    # update body of the 3rd response with Exception, this will be raised during execution
    rsp3.body = my_error

    with pytest.raises(requests.ConnectionError) as exc_info:
        requests.get("http://example.com/1")

    assert exc_info.value.args[0] == "custom error"
    assert rsp1.url in exc_info.value.response.history[0].url
    assert rsp2.url in exc_info.value.response.history[1].url
```
Validate Retry mechanism
If you are using the Retry features of urllib3 and want to cover scenarios that test your retry limits, you can test those scenarios with responses as well. The best approach is to use an OrderedRegistry:
```python
import requests
import responses
from responses import registries
from urllib3.util import Retry


@responses.activate(registry=registries.OrderedRegistry)
def test_max_retries():
    url = "https://example.com"
    rsp1 = responses.get(url, body="Error", status=500)
    rsp2 = responses.get(url, body="Error", status=500)
    rsp3 = responses.get(url, body="Error", status=500)
    rsp4 = responses.get(url, body="OK", status=200)

    session = requests.Session()

    adapter = requests.adapters.HTTPAdapter(
        max_retries=Retry(
            total=4,
            backoff_factor=0.1,
            status_forcelist=[500],
            # ``method_whitelist`` was renamed to ``allowed_methods`` in urllib3 2.0
            allowed_methods=["GET", "POST", "PATCH"],
        )
    )
    session.mount("https://", adapter)

    resp = session.get(url)

    assert resp.status_code == 200
    assert rsp1.call_count == 1
    assert rsp2.call_count == 1
    assert rsp3.call_count == 1
    assert rsp4.call_count == 1
```
Using a callback to modify the response
If you use customized processing in requests via subclassing/mixins, or if you
have library tools that interact with requests at a low level, you may need
to add extended processing to the mocked Response object to fully simulate the
environment for your tests. A response_callback can be used, which will be
wrapped by the library before being returned to the caller. The callback
accepts a response as its single argument, and is expected to return a
single response object.
```python
import responses
import requests


def response_callback(resp):
    resp.callback_processed = True
    return resp


with responses.RequestsMock(response_callback=response_callback) as m:
    m.add(responses.GET, "http://example.com", body=b"test")
    resp = requests.get("http://example.com")
    assert resp.text == "test"
    assert hasattr(resp, "callback_processed")
    assert resp.callback_processed is True
```
Passing through real requests
In some cases you may wish to allow certain requests to pass through responses
and hit a real server. This can be done with the add_passthru method:
```python
import responses


@responses.activate
def test_my_api():
    responses.add_passthru("https://percy.io")
```
This will allow any request matching that prefix, if it is otherwise not registered as a mock response, to pass through using the standard behavior.
Pass through endpoints can be configured with regex patterns if you need to allow an entire domain or path subtree to send requests:
```python
responses.add_passthru(re.compile("https://percy.io/\\w+"))
```
Lastly, you can use the passthrough argument of the Response object
to force a response to behave as a pass through.
```python
import responses
from responses import PassthroughResponse, Response

# Enable passthrough for a single response
response = Response(
    responses.GET,
    "http://example.com",
    body="not used",
    passthrough=True,
)
responses.add(response)

# Use PassthroughResponse
response = PassthroughResponse(responses.GET, "http://example.com")
responses.add(response)
```
Viewing/Modifying registered responses
Registered responses are available via a public method of the RequestsMock
instance. It is sometimes useful for debugging purposes to view the stack of
registered responses, which can be accessed via responses.registered().

The replace function allows a previously registered response to be
changed. The method signature is identical to add. Responses are
identified using method and url. Only the first matched response is
replaced.
```python
import responses
import requests


@responses.activate
def test_replace():
    responses.get("http://example.org", json={"data": 1})
    responses.replace(responses.GET, "http://example.org", json={"data": 2})

    resp = requests.get("http://example.org")
    assert resp.json() == {"data": 2}
```
The upsert function changes a previously registered response like
replace. If no matching response is registered, the upsert function
will register it like add.
remove takes method and url arguments and will remove all
matching responses from the registered list.

Finally, reset will remove all registered responses.
Coroutines and Multithreading
responses supports both coroutines and multithreading out of the box.
Note that responses locks the RequestsMock object, allowing only a
single thread to access it at a time.
```python
async def test_async_calls():
    @responses.activate
    async def run():
        responses.get(
            "http://twitter.com/api/1/foobar",
            json={"error": "not found"},
            status=404,
        )
        resp = requests.get("http://twitter.com/api/1/foobar")
        assert resp.json() == {"error": "not found"}
        assert responses.calls[0].request.url == "http://twitter.com/api/1/foobar"

    await run()
```
BETA Features
Below you can find a list of BETA features. Although we will try to keep the API backwards compatible with released versions, we reserve the right to change these APIs before they are considered stable. Please share your feedback via GitHub Issues.
Record Responses to files
You can perform real requests to the server, and responses will automatically record the output to a
file. Recorded data is stored in YAML format.
Apply the @responses._recorder.record(file_path="out.yaml") decorator to any function in which you perform
requests to record the responses to the out.yaml file.
The following code
```python
import requests
from responses import _recorder


def another():
    rsp = requests.get("https://httpstat.us/500")
    rsp = requests.get("https://httpstat.us/202")


@_recorder.record(file_path="out.yaml")
def test_recorder():
    rsp = requests.get("https://httpstat.us/404")
    rsp = requests.get("https://httpbin.org/status/wrong")
    another()
```
will produce the following output:
```yaml
responses:
- response:
    auto_calculate_content_length: false
    body: 404 Not Found
    content_type: text/plain
    method: GET
    status: 404
    url: https://httpstat.us/404
- response:
    auto_calculate_content_length: false
    body: Invalid status code
    content_type: text/plain
    method: GET
    status: 400
    url: https://httpbin.org/status/wrong
- response:
    auto_calculate_content_length: false
    body: 500 Internal Server Error
    content_type: text/plain
    method: GET
    status: 500
    url: https://httpstat.us/500
- response:
    auto_calculate_content_length: false
    body: 202 Accepted
    content_type: text/plain
    method: GET
    status: 202
    url: https://httpstat.us/202
```
If you are in the REPL, you can also activate the recorder for all subsequent requests:
```python
import requests
from responses import _recorder

_recorder.recorder.start()
requests.get("https://httpstat.us/500")
_recorder.recorder.dump_to_file("out.yaml")

# you can stop or reset the recorder
_recorder.recorder.stop()
_recorder.recorder.reset()
```
Replay responses (populate registry) from files
You can populate your active registry from a YAML file with recorded responses.
(See Record Responses to files to understand how to obtain such a file.)
To do that you need to execute responses._add_from_file(file_path="out.yaml") within
an activated decorator or a context manager.
The following code example registers a patch response, then all responses found in the
out.yaml file, and finally a post response.
```python
import responses


@responses.activate
def run():
    responses.patch("http://httpbin.org")
    responses._add_from_file(file_path="out.yaml")
    responses.post("http://httpbin.org/form")


run()
```
Contributing
Environment Configuration
Responses uses several linting and autoformatting utilities, so it's important that when submitting patches you use the appropriate toolchain:
Clone the repository:
```shell
git clone https://github.com/getsentry/responses.git
```
Create an environment (e.g. with virtualenv):
```shell
virtualenv .env && source .env/bin/activate
```
Configure development requirements:
Tests and Code Quality Validation
The easiest way to validate your code is to run the tests via tox.
The current tox configuration runs the same checks that are used in the
GitHub Actions CI/CD pipeline.
Run tox from the project root to validate your code against:
- Unit tests in all Python versions that are supported by this project
- Type validation via mypy
- All pre-commit hooks
Alternatively, you can always run a single test. See documentation below.
Unit tests
Responses uses Pytest for testing. You can run all tests by:
OR manually activate the required version of Python and run
And run a single test by:
```shell
pytest -k '<test_function_name>'
```
Type Validation
To verify type compliance, run the mypy linter:
OR
```shell
mypy --config-file=./mypy.ini -p responses
```
Code Quality and Style
To check code style and reformat it, run:
OR
```shell
pre-commit run --all-files
```