Sending requests
Ultimately, the core part of this whole package is the RequestHandler found in the request module.
This object will handle, amongst other things, these core processes:

- creating sessions
- sending requests
- processing responses as configured
- handling error responses, including backoff/retry time
- authorising, if configured
- caching responses, if configured

Each part listed above can be configured as required. Before we get to that though, let's start with a simple example.
Sending simple requests
```python
import asyncio
from typing import Any

from yarl import URL

from aiorequestful.request import RequestHandler


async def send_get_request(handler: RequestHandler, url: str | URL) -> Any:
    """Sends a simple GET request using the given ``handler`` for the given ``url``."""
    async with handler:
        payload = await handler.get(url)
    return payload


request_handler: RequestHandler = RequestHandler.create()
api_url = "https://official-joke-api.appspot.com/jokes/programming/random"

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
print(type(result).__name__)
```
To send many requests concurrently, we simply do the following:

```python
async def send_get_requests(handler: RequestHandler, url: str | URL, count: int = 20) -> list[Any]:
    """Sends ``count`` GET requests concurrently using the given ``handler``."""
    async with handler:
        # asyncio.gather returns the results as a list in request order
        payloads = await asyncio.gather(*[handler.get(url) for _ in range(count)])
    return payloads


results = asyncio.run(send_get_requests(request_handler, url=api_url, count=20))
for result in results:
    print(result)
```
Note

Here we use the RequestHandler.create() class method to create the object. We can create the object directly by providing a connector to a ClientSession, as seen below. However, it is preferable to use the RequestHandler.create() class method, which automatically generates the connector from the given kwargs.
```python
import aiohttp


def connector() -> aiohttp.ClientSession:
    return aiohttp.ClientSession()


request_handler = RequestHandler(connector=connector)
```
Here, we requested some data from an open API that requires no authentication to access. Notice that the data type of the object we retrieve is a string, even though the printed output shows that it is meant to be JSON data.
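Before we let the package do this for us, a raw string payload like this could be decoded manually with the standard library. A minimal sketch (the sample payload below is made up for illustration, not the API's actual response):

```python
import json

# A made-up sample of the kind of text payload returned above
raw_payload = '{"id": 1, "type": "programming", "setup": "A joke setup", "punchline": "A punchline"}'

data = json.loads(raw_payload)  # decode the JSON string into a Python dict
print(type(data).__name__)  # dict
print(data["type"])  # programming
```

Doing this for every request gets repetitive, which is exactly what the next section automates.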
Handling the response payload
When we know the data type we want to retrieve, we can assign a PayloadHandler to the RequestHandler to retrieve the data type we require.
```python
from aiorequestful.response.payload import JSONPayloadHandler

payload_handler = JSONPayloadHandler()
request_handler.payload_handler = payload_handler

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
print(type(result).__name__)
```
By doing so, we ensure that our RequestHandler only returns data in a format that we expect. The JSONPayloadHandler is set to fail if the data given to it is not valid JSON.
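What "fail" means here can be sketched with the standard library's json module. This is an illustration of strict parsing only, not aiorequestful's actual handler code:

```python
import json


def strict_json_payload(payload: str):
    """Sketch of a strict JSON payload handler: invalid JSON raises instead of passing through."""
    return json.loads(payload)


outcome = None
try:
    strict_json_payload("not valid json")
except json.JSONDecodeError:
    outcome = "rejected"  # invalid data never reaches the caller

print(outcome)  # rejected
```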
We may also assign this PayloadHandler when we create the RequestHandler.
```python
request_handler = RequestHandler.create(payload_handler=payload_handler)

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
print(type(result).__name__)
```
Additionally, if we have a ResponseCache set up on the RequestHandler, the PayloadHandler for each ResponseRepository will also be updated, as shown in the property setter below.
```python
@payload_handler.setter
def payload_handler(self, value: PayloadHandler):
    self._payload_handler = value

    if isinstance(self._session, CachedSession):
        for repository in self._session.cache.values():
            # all repositories must use the same payload handler as the request handler
            # for it to function correctly
            repository.settings.payload_handler = self._payload_handler
```
See also

For more info on how a ResponseRepository uses the PayloadHandler, see Caching responses.
See also
For more info on payload handling, see Handling payload data.
Caching responses
When making a large number of requests to a REST API, you will often find that it is comparatively slow to respond.
You may add a ResponseCache to the RequestHandler to cache the initial responses from these requests. This will help speed up future requests: the cache is checked first, and any matching response is returned from the cache before an HTTP request is made to get the data.
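The cache-first flow can be sketched generically, with an in-memory dict standing in for the cache backend. This is an illustration of the concept, not the library's implementation:

```python
# Generic cache-first sketch: check the cache before "sending" a request.
cache: dict[str, str] = {}
calls = 0


def fake_http_get(url: str) -> str:
    """Stand-in for a real HTTP request; counts how often it is actually called."""
    global calls
    calls += 1
    return f"response for {url}"


def cached_get(url: str) -> str:
    if url in cache:                    # hit: return the stored response, no HTTP call
        return cache[url]
    response = fake_http_get(url)       # miss: make the request...
    cache[url] = response               # ...and store it for next time
    return response


cached_get("https://example.com/a")
cached_get("https://example.com/a")
print(calls)  # 1 — the second call was served from the cache
```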
```python
from aiorequestful.cache.backend import SQLiteCache

cache = SQLiteCache.connect_with_in_memory_db()
request_handler = RequestHandler.create(cache=cache)

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
```
However, this example will not cache anything as we have not set up repositories for the endpoints we require. See Caching responses for more info on setting up cache repositories.
Note

We cannot dynamically assign a cache to an instance of RequestHandler. Hence, we always need to supply the ResponseCache when instantiating the RequestHandler.
See also

For more info on setting up a cache and the other supported cache backends, see Caching responses.
Handling error responses
Often, we will receive error responses that we need to handle. We can have the RequestHandler handle these responses by assigning StatusHandler objects.
```python
from aiorequestful.response.status import ClientErrorStatusHandler, UnauthorisedStatusHandler, RateLimitStatusHandler

response_handlers = [
    UnauthorisedStatusHandler(), RateLimitStatusHandler(), ClientErrorStatusHandler()
]
request_handler.response_handlers = response_handlers

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
print(type(result).__name__)
```
We may also assign these StatusHandler objects when we create the RequestHandler.
```python
request_handler = RequestHandler.create(response_handlers=response_handlers)

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
print(type(result).__name__)
```
Note

The order of the StatusHandler objects is important in determining which one has priority to handle a response when the status codes of the StatusHandler objects overlap.

In this example, the ClientErrorStatusHandler is responsible for handling all client error status codes (i.e. 400-499), the UnauthorisedStatusHandler is responsible for 401 status codes, and the RateLimitStatusHandler is responsible for 429 status codes. Because we supplied the UnauthorisedStatusHandler and the RateLimitStatusHandler first in the list, they take priority over the ClientErrorStatusHandler.

However, if we had done the following, then all 400-499 responses would be handled by the ClientErrorStatusHandler.
```python
response_handlers = [
    ClientErrorStatusHandler(), UnauthorisedStatusHandler(), RateLimitStatusHandler()
]
request_handler.response_handlers = response_handlers

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
print(type(result).__name__)
```
See also

For more info on StatusHandler objects and how they handle each response type, see Handling error responses.
Managing retries and backoff time
Another way we can ensure a successful response is to include a retry and backoff time management strategy. The RequestHandler provides two key mechanisms for these operations:

- RequestHandler.wait_timer manages the time to wait after every request, whether successful or not. This is object-bound, i.e. any increase to this timer affects future requests.
- RequestHandler.retry_timer manages the time to wait after each unsuccessful and unhandled request. This is request-bound, i.e. any increase to this timer only affects the current request and not future requests.
Retries and unsuccessful backoff time
As an example, if we want to simply retry the same request 3 times without any backoff time in-between each request, we can set the following:
```python
from aiorequestful.timer import StepCountTimer

request_handler.retry_timer = StepCountTimer(initial=0, count=3, step=0)
```
We set the count value to 3 for 3 retries, and all other values to 0 to ensure there is no wait time between these retries.
Should we wish to add some time between each retry, we can do the following:
```python
request_handler.retry_timer = StepCountTimer(initial=0, count=3, step=0.2)
```
This will now add 0.2 seconds between each unsuccessful request, waiting 0.6 seconds before the final retry.
This timer is generated as new for each new request so any increase in time does not carry through to future requests.
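The wait values this produces can be sketched with a small stand-in class. This is an illustration of the step-count behaviour described above, not the library's StepCountTimer implementation:

```python
# Sketch of a step-count timer: each increase adds `step`, up to `count` increases.
class StepCountSketch:
    def __init__(self, initial: float, count: int, step: float):
        self.value = initial
        self.count = count
        self.step = step
        self._increases = 0

    def increase(self) -> bool:
        if self._increases >= self.count:
            return False  # no retries left
        self._increases += 1
        self.value += self.step
        return True


timer = StepCountSketch(initial=0, count=3, step=0.2)
waits = []
while timer.increase():
    waits.append(round(timer.value, 1))

print(waits)  # [0.2, 0.4, 0.6] — 0.6 seconds before the final retry
```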
Wait backoff time
We may also wish to handle wait time after all requests. This can be useful for sensitive services that often return ‘Too Many Requests’ errors when making a large volume of requests at once.
```python
from aiorequestful.timer import StepCeilingTimer

request_handler.wait_timer = StepCeilingTimer(initial=0, final=1, step=0.1)
```
Each increase adds 0.1 seconds to this timer, up to a maximum of 1 second.
Warning

The RequestHandler is not responsible for handling when this timer is increased. A StatusHandler should be used to increase this timer, such as the RateLimitStatusHandler, which will increase this timer every time a 'Too Many Requests' error is returned.

This timer is the same for each new request, so any increase in time does carry through to future requests.
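The ceiling behaviour can be sketched similarly to the retry timer above. Again, this is an illustrative stand-in, not the library's StepCeilingTimer implementation:

```python
# Sketch of a step-ceiling timer: the value grows by `step` on each
# increase but never exceeds `final`.
class StepCeilingSketch:
    def __init__(self, initial: float, final: float, step: float):
        self.value = initial
        self.final = final
        self.step = step

    def increase(self) -> None:
        # round to avoid float drift, then cap at the ceiling
        self.value = min(round(self.value + self.step, 10), self.final)


timer = StepCeilingSketch(initial=0, final=1, step=0.1)
for _ in range(15):  # increase more times than needed to reach the ceiling
    timer.increase()

print(timer.value)  # 1 — capped at `final`
```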
Assignment on instantiation
As usual, we may also assign these Timer objects when we create the RequestHandler.
```python
retry_timer = StepCountTimer(initial=0, count=3, step=0.2)
wait_timer = StepCeilingTimer(initial=0, final=1, step=0.1)

request_handler = RequestHandler.create(retry_timer=retry_timer, wait_timer=wait_timer)

task = send_get_request(request_handler, url=api_url)
result = asyncio.run(task)

print(result)
```