For errors, OpenRouter returns a JSON response with the following shape:
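A minimal sketch of that shape in TypeScript (field names follow the documented error body; `metadata` appears only for certain error classes, such as moderation or provider errors):

```typescript
// Sketch of OpenRouter's error response body.
type ErrorResponse = {
  error: {
    code: number;    // mirrors the HTTP status code
    message: string;
    metadata?: Record<string, unknown>; // present for e.g. moderation or provider errors
  };
};

// Illustrative payload:
const example: ErrorResponse = {
  error: { code: 402, message: "Insufficient credits" },
};
```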
The HTTP response will have the same status code as error.code, forming a request error when the failure occurs before the model starts processing (for example, an invalid request or an account that is out of credits).
Otherwise, the returned HTTP status will be 200, and any error that occurs while the LLM is producing output will be emitted in the response body or as an SSE data event.
Example code for printing errors in JavaScript:
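A sketch in TypeScript, assuming the error body carries `error.code` and `error.message` as described above (the `describeError` helper and the model slug are illustrative, not part of the API):

```typescript
// Hypothetical helper: turn an OpenRouter error body into a printable string.
type ErrorBody = { error?: { code: number; message: string } };

function describeError(body: ErrorBody): string | null {
  if (!body.error) return null;
  return `Error ${body.error.code}: ${body.error.message}`;
}

// Usage sketch: fetch a completion and print any error that comes back.
async function createCompletion(apiKey: string): Promise<void> {
  const res = await fetch("https://openrouter.ai/api/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "openai/gpt-4o", // assumed model slug, for illustration
      messages: [{ role: "user", content: "Hello" }],
    }),
  });
  const body = await res.json();
  const err = describeError(body);
  if (err) console.error(err);
}
```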
If your input was flagged, the error.metadata will contain information about the issue. The shape of the metadata is as follows:
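A TypeScript sketch of that metadata shape (field names are taken as indicative, not exhaustive):

```typescript
// Sketch of the metadata attached to moderation errors.
type ModerationErrorMetadata = {
  reasons: string[];     // why the input was flagged
  flagged_input: string; // the flagged text, possibly truncated
  provider_name: string; // provider that flagged the input
  model_slug: string;    // model the request was routed to
};
```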
If the model provider encounters an error, the error.metadata will contain information about the issue. The shape of the metadata is as follows:
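A TypeScript sketch of that metadata shape (again indicative; the raw payload's structure depends on the provider):

```typescript
// Sketch of the metadata attached to provider errors.
type ProviderErrorMetadata = {
  provider_name: string; // provider that returned the error
  raw: unknown;          // the provider's raw error payload
};
```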
Occasionally, the model may not generate any content. This typically occurs when the model is warming up from a cold start or when the system is scaling up to handle more requests.
Warm-up times usually range from a few seconds to a few minutes, depending on the model and provider.
If you encounter persistent no-content issues, consider implementing a simple retry mechanism or trying again with a different provider or model that has more recent activity.
Additionally, be aware that in some cases, you may still be charged for the prompt processing cost by the upstream provider, even if no content is generated.
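The retry suggestion above can be sketched as follows. This is a hypothetical helper, not part of any SDK; the empty-content check and the backoff schedule are assumptions:

```typescript
// Hypothetical retry helper for empty completions.
// `generate` is any function that returns the completion text
// (or "" / null when the model produced no content).
async function retryOnEmpty(
  generate: () => Promise<string | null>,
  maxAttempts = 3,
  baseDelayMs = 1000,
): Promise<string | null> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const content = await generate();
    if (content) return content; // got real output
    // Back off before retrying, giving a cold model time to warm up.
    await new Promise((r) => setTimeout(r, baseDelayMs * 2 ** attempt));
  }
  return null; // still no content after all attempts
}
```

Note that each attempt may still incur prompt-processing cost, per the caveat above.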
When using streaming mode (stream: true), errors are handled differently depending on when they occur:
Errors that occur before any tokens are sent follow the standard error format above, with appropriate HTTP status codes.
Errors that occur after streaming has begun are sent as Server-Sent Events (SSE) with a unified structure that includes both the error details and a completion choice:
Example SSE data:
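A hedged illustration of such an event, delivered on a `data:` SSE line (field values here are placeholders, not a verbatim capture):

```json
{
  "id": "gen-abc123",
  "object": "chat.completion.chunk",
  "created": 1700000000,
  "model": "openai/gpt-4o",
  "choices": [
    {
      "index": 0,
      "delta": { "content": "" },
      "finish_reason": "error"
    }
  ],
  "error": {
    "code": 502,
    "message": "Upstream provider error"
  }
}
```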
Key characteristics:
A choices array is included with finish_reason: "error" to properly terminate the stream.

The OpenAI Responses API (/api/alpha/responses) uses specific event types for streaming errors:
response.failed - Official failure event
response.error - Error during response generation
error - Plain error event (undocumented but sent by OpenAI)
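Dispatching on these three event types can be sketched as follows (the event payload is kept deliberately loose, since fields beyond `type` vary by event):

```typescript
// Hypothetical SSE event from a Responses API stream.
type ResponsesStreamEvent = {
  type: string;
  [key: string]: unknown; // other fields depend on the event type
};

// Returns true for the three streaming-error event types listed above.
function isErrorEvent(event: ResponsesStreamEvent): boolean {
  return (
    event.type === "response.failed" ||
    event.type === "response.error" ||
    event.type === "error"
  );
}
```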
The Responses API transforms certain error codes into successful completions with specific finish reasons (for example, a token-limit error may be surfaced with a length-related finish reason rather than as a failure).
This allows for graceful handling of limit-based errors without treating them as failures.
Different OpenRouter API endpoints handle errors in distinct ways:
Chat Completions API (/api/v1/chat/completions): errors before streaming begins use the ErrorResponse shape; errors during streaming are delivered in the choices array of the final response.
Responses API (/api/alpha/responses): streaming errors are delivered via dedicated event types (response.failed, response.error, error).

OpenRouter provides a debug option that allows you to inspect the exact request body that was sent to the upstream provider. This is useful for understanding how OpenRouter transforms your request parameters to work with different providers.
The debug option is an object with the following shape:
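A TypeScript sketch of that shape, based on the echo_upstream_body flag described below:

```typescript
// Sketch of the debug request option.
type DebugOptions = {
  // When true, OpenRouter echoes the transformed upstream request body
  // as the first chunk of the streaming response.
  echo_upstream_body?: boolean;
};
```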
To enable debug output, include the debug parameter in your request:
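For example (the model slug and messages are placeholders):

```json
{
  "model": "openai/gpt-4o",
  "messages": [{ "role": "user", "content": "Hello" }],
  "stream": true,
  "debug": { "echo_upstream_body": true }
}
```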
When debug.echo_upstream_body is set to true, OpenRouter will send a debug chunk as the first chunk in the streaming response. This chunk will have an empty choices array and include a debug field containing the transformed request body:
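A hypothetical illustration of such a chunk (the exact keys inside the debug object are an assumption here; only the empty choices array and the presence of a debug field follow from the description above):

```json
{
  "id": "gen-abc123",
  "object": "chat.completion.chunk",
  "choices": [],
  "debug": {
    "upstream_body": {
      "model": "gpt-4o",
      "messages": [{ "role": "user", "content": "Hello" }]
    }
  }
}
```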
The debug option only works with streaming mode (stream: true) for the Chat Completions API. Non-streaming requests and Responses API requests will ignore the debug parameter.
The debug flag should not be used in production environments. It is intended for development and debugging only, since the echoed request body may contain sensitive information that was not meant to be exposed elsewhere.
The debug output is particularly useful for:
Understanding Parameter Transformations: See how OpenRouter maps your parameters to provider-specific formats (e.g., how max_tokens is set, how temperature is handled).
Verifying Message Formatting: Check how OpenRouter combines and formats your messages for different providers (e.g., how system messages are concatenated, how user messages are merged).
Checking Applied Defaults: See what default values OpenRouter applies when parameters are not specified in your request.
Debugging Provider Fallbacks: When using provider fallbacks, a debug chunk will be sent for each attempted provider, allowing you to see which providers were tried and what parameters were sent to each.
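Putting this together, separating debug chunks from content chunks in an already-parsed stream can be sketched as follows (a hypothetical helper; the chunk shape follows the empty-choices-plus-debug-field description above):

```typescript
// Minimal chunk shape for this sketch.
type StreamChunk = {
  choices: unknown[];
  debug?: Record<string, unknown>;
};

// Split a parsed stream into debug chunks (empty choices + debug field)
// and ordinary content chunks.
function splitDebugChunks(chunks: StreamChunk[]): {
  debug: Record<string, unknown>[];
  content: StreamChunk[];
} {
  const debug: Record<string, unknown>[] = [];
  const content: StreamChunk[] = [];
  for (const chunk of chunks) {
    if (chunk.debug !== undefined && chunk.choices.length === 0) {
      debug.push(chunk.debug);
    } else {
      content.push(chunk);
    }
  }
  return { debug, content };
}
```

With provider fallbacks enabled, this would collect one debug entry per attempted provider.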
OpenRouter will make a best effort to automatically redact potentially sensitive or noisy data from debug output. Remember that the debug option is not intended for production.