

Pony Alpha

openrouter/pony-alpha

Created Feb 6, 2026 · 200,000 context
$0/M input tokens · $0/M output tokens

Pony is a cutting-edge foundation model with strong performance in coding, agentic workflows, reasoning, and roleplay, making it well suited for hands-on coding and real-world use.

Note: All prompts and completions for this model are logged by the provider and may be used to improve the model.

Providers for Pony Alpha

OpenRouter routes requests to the best providers that are able to handle your prompt size and parameters, with fallbacks to maximize uptime.

Latency: 6.61s
Throughput: 17 tps
Uptime: 100.0%
Total Context: 200K
Max Output: 131K
Input Price: --
Output Price: --
Cache Read: --
Cache Write: --
Input Audio: --
Input Audio Cache: --

Performance for Pony Alpha

Compare different providers across OpenRouter

Apps using Pony Alpha

Top public apps this month

1. Kilo Code: AI coding agent for VS Code (154M tokens)
2. Claude Code: The AI for problem solvers (113M tokens)
3. SillyTavern: LLM frontend for power users (72.7M tokens)
4. Janitor AI: Character chat and creation (50.2M tokens)
5. OpenClaw: The AI that actually does things (31.1M tokens)

Recent activity on Pony Alpha

Total usage per day on OpenRouter

Prompt: 840M
Completion: 103M
Reasoning: 0

Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.

Uptime stats for Pony Alpha

Uptime for the single provider currently serving Pony Alpha

When an upstream provider returns an error, we can recover by routing to another healthy provider, if your request filters allow it. You can access uptime data programmatically through the Endpoints API.
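As a sketch, the Endpoints API for a given model can be queried at `/api/v1/models/{author}/{slug}/endpoints`; the helper function below is illustrative, not part of any SDK:

```typescript
// Hypothetical helper: build the Endpoints API URL for a model slug
// like "openrouter/pony-alpha" (author/slug).
function endpointsUrl(modelSlug: string): string {
  const [author, slug] = modelSlug.split("/");
  return `https://openrouter.ai/api/v1/models/${author}/${slug}/endpoints`;
}

// Fetch endpoint/uptime data (sketch; requires network access):
async function fetchEndpoints(modelSlug: string): Promise<unknown> {
  const res = await fetch(endpointsUrl(modelSlug));
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return res.json();
}

console.log(endpointsUrl("openrouter/pony-alpha"));
// https://openrouter.ai/api/v1/models/openrouter/pony-alpha/endpoints
```

The response describes each provider endpoint for the model, which is where per-provider uptime data lives.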

Learn more about our load balancing and customization options.

Sample code and API for Pony Alpha

OpenRouter normalizes requests and responses across providers for you.

OpenRouter supports reasoning-enabled models that can show their step-by-step thinking process. Use the reasoning parameter in your request to enable reasoning, and access the reasoning_details array in the response to see the model's internal reasoning before the final answer. When continuing a conversation, preserve the complete reasoning_details when passing messages back to the model so it can continue reasoning from where it left off. Learn more about reasoning tokens.
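A minimal sketch of that continuation pattern, assuming the `reasoning` request field and `reasoning_details` response array described above; the `effort` option value and the shape of the detail objects are illustrative placeholders, not a definitive schema:

```typescript
// Message shape used for this sketch; reasoning_details is kept opaque
// and passed back verbatim when replaying the conversation.
type Message = {
  role: "user" | "assistant";
  content: string;
  reasoning_details?: unknown[];
};

const firstMessages: Message[] = [
  { role: "user", content: "Plan a 3-step refactor." },
];

const firstRequest = {
  model: "openrouter/pony-alpha",
  messages: firstMessages,
  reasoning: { effort: "high" }, // enable step-by-step thinking (illustrative value)
};

// Pretend this assistant message came back from the API:
const assistantReply: Message = {
  role: "assistant",
  content: "Here is the plan...",
  reasoning_details: [{ type: "reasoning.text", text: "..." }], // placeholder shape
};

// When continuing, include the assistant message with its
// reasoning_details unmodified so the model can resume reasoning:
const followUp = {
  model: "openrouter/pony-alpha",
  messages: [
    ...firstMessages,
    assistantReply,
    { role: "user", content: "Now apply step 1." } as Message,
  ],
  reasoning: { effort: "high" },
};
```

The key point is that `reasoning_details` travels back with the assistant message untouched; dropping or editing it would break the model's continuity.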

In the examples below, the OpenRouter-specific headers are optional. Setting them allows your app to appear on the OpenRouter leaderboards.

import { OpenRouter } from "@openrouter/sdk";

const openrouter = new OpenRouter({
  apiKey: "<OPENROUTER_API_KEY>"
});

// Stream the response to get reasoning tokens in usage
const stream = await openrouter.chat.send({
  model: "openrouter/pony-alpha",
  messages: [
    {
      role: "user",
      content: "How many r's are in the word 'strawberry'?"
    }
  ],
  stream: true
});

let response = "";
for await (const chunk of stream) {
  const content = chunk.choices[0]?.delta?.content;
  if (content) {
    response += content;
    process.stdout.write(content);
  }

  // Usage information comes in the final chunk
  if (chunk.usage) {
    console.log("\nReasoning tokens:", chunk.usage.reasoningTokens);
  }
}

Using third-party SDKs

For information about using third-party SDKs and frameworks with OpenRouter, please see our frameworks documentation.

See the Request docs for all possible fields, and Parameters for explanations of specific sampling parameters.

More models from OpenRouter