# Error Handling
The OpenAI Responses API uses a consistent error format across all providers. This ensures that your integration remains robust even when switching between different underlying models.
## Error Object

All errors return a JSON response with the following shape:

```json
{
  "error": {
    "message": "The provided model is currently overloaded.",
    "type": "server_error",
    "param": null,
    "code": "model_overloaded"
  }
}
```
## Common Responses API Errors

| Type | Code | Description |
|---|---|---|
| `invalid_request_error` | `invalid_prompt` | The input prompt contains content that violates safety policies. |
| `invalid_request_error` | `invalid_api_key` | The provided API key is invalid or expired. |
| `rate_limit_error` | `rate_limit_exceeded` | You have sent too many requests in a short period. |
| `server_error` | `provider_error` | The underlying model provider returned an unrecoverable error. |
## Handling Errors in Code
When using the OpenAI SDK, you can catch specific exception types:

```python
from openai import OpenAI, APIError, RateLimitError

client = OpenAI()

try:
    client.chat.completions.create(...)
except RateLimitError as e:
    # Retry with exponential backoff rather than failing immediately.
    print("Handle rate limiting (e.g. exponential backoff)")
except APIError as e:
    # RateLimitError subclasses APIError, so this catches all other API errors.
    print(f"Handle API error: {e.message}")
```
For a full list of HTTP status codes and their meanings, refer to the General Error Codes page.