| Total Context | Max Output | Input Price | Output Price | Cache Read | Cache Write | Input Audio | Input Audio Cache |
|---|---|---|---|---|---|---|---|
| 1.05M | 128K | $30/M (≤272K), $60/M (>272K) | $180/M (≤272K), $270/M (>272K) | -- | -- | -- | -- |
OpenAI: GPT-5.4 Pro
openai/gpt-5.4-pro
GPT-5.4 Pro is OpenAI's most advanced model, building on GPT-5.4's unified architecture with enhanced reasoning capabilities for complex, high-stakes tasks. It features a 1M+ token context window (922K input, 128K output) with support for text and image inputs. Optimized for step-by-step reasoning, instruction following, and accuracy, GPT-5.4 Pro excels at agentic coding, long-context workflows, and multi-step problem solving.
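As a rough illustration of the tiered pricing in the table above, the sketch below estimates a request's cost. Note one assumption: it applies the higher-tier rate to the entire request once the prompt exceeds 272K tokens; whether billing works that way or only charges the overflow at the higher rate should be verified against the provider's pricing docs.

```typescript
// Illustrative cost estimate for openai/gpt-5.4-pro.
// Rates are taken from the pricing table; the tier-boundary behavior
// (whole request billed at the higher rate past 272K) is an assumption.
const PRICING = {
  small: { input: 30, output: 180 }, // $ per 1M tokens, prompt ≤ 272K
  large: { input: 60, output: 270 }, // $ per 1M tokens, prompt > 272K
};

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  const tier = inputTokens <= 272_000 ? PRICING.small : PRICING.large;
  return (inputTokens * tier.input + outputTokens * tier.output) / 1_000_000;
}

// A 100K-token prompt with a 10K-token completion:
console.log(estimateCostUSD(100_000, 10_000)); // 3.00 + 1.80 = 4.80
```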
Providers for GPT-5.4 Pro
OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.
Weighted Average Prices
Input: $30.00 per 1M tokens
Output: $180.00 per 1M tokens
| Provider | Input $/1M | Output $/1M | Cache Hit Rate |
|---|---|---|---|
| OpenAI | $30.00 | $180.00 | 0.0% |
[Charts: input and output price per 1M tokens, trailing 7 days]
Reasoning Benchmarks
| Benchmark | Description | Score |
|---|---|---|
| CritPt | Research-level physics reasoning | 30.0% |
When an error occurs at an upstream provider, we recover by routing your request to another healthy provider, if your request filters allow it. You can access uptime data programmatically through the Endpoints API.
Learn more about our load balancing and customization options.
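To constrain which upstreams may serve a request, the chat completions endpoint accepts a provider preferences object alongside the usual fields. The sketch below uses the `order` and `allow_fallbacks` options described in OpenRouter's provider-routing documentation; verify the exact field names against the current API reference.

```typescript
// Sketch of a request body with provider routing preferences.
// Field names follow OpenRouter's provider-routing options; treat the
// exact shape as something to confirm against the live API reference.
const body = {
  model: "openai/gpt-5.4-pro",
  messages: [{ role: "user", content: "Hello" }],
  provider: {
    order: ["openai"],     // try these providers first, in this order
    allow_fallbacks: true, // route to other healthy providers on error
  },
};

console.log(JSON.stringify(body.provider));
```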
Sample code and API for GPT-5.4 Pro
OpenRouter normalizes requests and responses across providers for you.
OpenRouter supports reasoning-enabled models that can show their step-by-step thinking process. Use the reasoning parameter in your request to enable reasoning, and access the reasoning_details array in the response to see the model's internal reasoning before the final answer. When continuing a conversation, preserve the complete reasoning_details when passing messages back to the model so it can continue reasoning from where it left off. Learn more about reasoning tokens.
In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.
import { OpenRouter } from "@openrouter/sdk";

const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

// Stream the response to get reasoning tokens in usage
const stream = await openrouter.chat.send({
  model: "openai/gpt-5.4-pro",
  messages: [
    {
      role: "user",
      content: "How many r's are in the word 'strawberry'?"
    }
  ],
  stream: true
});

let response = "";
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    response += content;
    process.stdout.write(content);
  }
  // Usage information comes in the final chunk
  if (chunk.usage) {
    console.log("\nReasoning tokens:", chunk.usage.reasoningTokens);
  }
}

Using third-party SDKs
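When continuing a conversation, the assistant turn you send back should carry its reasoning_details unchanged, as described above. A minimal sketch of that bookkeeping follows; the message and field shapes here are assumptions based on the description, not verified SDK types.

```typescript
// Hypothetical message shapes for illustration; check the SDK's real types.
interface ChatMessage {
  role: string;
  content: string;
}

interface AssistantTurn extends ChatMessage {
  role: "assistant";
  reasoning_details?: unknown[]; // opaque reasoning blocks from the model
}

// Build the next request's messages, passing reasoning_details back
// verbatim so the model can resume reasoning where it left off.
function continueConversation(
  history: ChatMessage[],
  lastTurn: AssistantTurn,
  userFollowUp: string
): ChatMessage[] {
  return [
    ...history,
    lastTurn, // reasoning_details preserved untouched
    { role: "user", content: userFollowUp },
  ];
}
```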
For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.
See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.