Rate Limits

Understand request limits and how to handle throttling.

Limits by endpoint category

Rate limits vary by endpoint type:

| Endpoint category | Limit | Window |
| --- | --- | --- |
| Authentication (/auth/*) | 10 requests | per minute |
| API key operations (/api-keys/*) | 1,000 requests | per minute |
| General endpoints | 100 requests | per minute |
| File uploads | 20 requests | per minute |
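Because the server enforces these windows, it can help to throttle on the client side as well. The sketch below is a minimal, hypothetical token-bucket limiter sized to the general-endpoint quota (100 requests per 60 seconds); the class and its parameters are illustrative, not part of the API.

```python
import time

class TokenBucket:
    """Client-side rate limiter: refills tokens continuously so that at most
    `limit` requests are allowed per `window_seconds`."""

    def __init__(self, limit: int, window_seconds: float):
        self.capacity = limit
        self.tokens = float(limit)
        self.refill_rate = limit / window_seconds  # tokens per second
        self.last = time.monotonic()

    def try_acquire(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.refill_rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(limit=100, window_seconds=60)
# In a tight loop, roughly the first 100 acquisitions succeed and the rest
# are throttled locally instead of triggering a 429 from the server.
allowed = sum(1 for _ in range(150) if bucket.try_acquire())
```

Calling `try_acquire()` before each request keeps you under the quota without waiting for the server to reject you.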

Response headers

Every response includes rate limit headers so you can track your usage:

| Header | Description |
| --- | --- |
| X-RateLimit-Limit | Maximum requests allowed in the current window. |
| X-RateLimit-Remaining | Requests remaining in the current window. |
| X-RateLimit-Reset | Unix timestamp when the current window resets. |
| Retry-After | Seconds to wait before retrying. Only present on 429 responses. |

Example headers on a successful response:

HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 87
X-RateLimit-Reset: 1702656000

Handling 429 responses

When you exceed the limit, the API returns a 429 status with a Retry-After header:

HTTP/1.1 429 Too Many Requests
Retry-After: 30
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1702656030

{
  "error": {
    "code": "rate_limit_exceeded",
    "message": "Rate limit exceeded. Retry after 30 seconds.",
    "param": null
  }
}

Exponential backoff

Implement exponential backoff to handle rate limits gracefully:

import time
import requests

def make_request(url: str, headers: dict, max_retries: int = 3) -> requests.Response:
    for attempt in range(max_retries):
        response = requests.get(url, headers=headers)

        if response.status_code != 429:
            return response

        # Honor Retry-After when present; otherwise fall back to 2^attempt seconds.
        retry_after = int(response.headers.get("Retry-After", 2 ** attempt))
        time.sleep(retry_after)

    raise Exception("Max retries exceeded")
The same pattern in TypeScript:

async function makeRequest(url: string, headers: Record<string, string>, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, { headers });

    if (response.status !== 429) {
      return response;
    }

    // Honor Retry-After when present; otherwise fall back to 2^attempt seconds.
    const retryAfter = parseInt(response.headers.get("Retry-After") ?? String(2 ** attempt), 10);
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
  }

  throw new Error("Max retries exceeded");
}

Best practices

  • Implement exponential backoff. Always respect the Retry-After header and back off progressively on repeated 429s.
  • Cache responses. Store carrier and load data locally instead of fetching on every request.
  • Use batch endpoints. POST /loads/batch lets you create multiple resources in a single request, reducing total API calls.
  • Monitor remaining quota. Check X-RateLimit-Remaining and throttle proactively before hitting the limit.
  • Use webhooks instead of polling. Prefer push-based updates over repeated GET requests to reduce unnecessary calls.
  • Stagger requests. Spread batch operations over time rather than sending them in a burst.
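The last two practices, backing off progressively and staggering requests, combine naturally as exponential backoff with full jitter: each retry waits a random duration drawn from a growing range, so many clients retrying at once do not synchronize into a burst. A minimal sketch (the function name and the base/cap parameters are illustrative):

```python
import random

def backoff_delays(max_retries: int = 5, base: float = 1.0, cap: float = 30.0):
    """Yield one delay per retry attempt, drawn uniformly from
    [0, min(cap, base * 2**attempt)] — exponential backoff with full jitter."""
    for attempt in range(max_retries):
        yield random.uniform(0, min(cap, base * 2 ** attempt))

delays = list(backoff_delays())
# e.g. five delays bounded by 1, 2, 4, 8, and 16 seconds respectively
```

On a real 429, a Retry-After header should still take precedence over the jittered delay.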