Headline benchmark standings and comparison context.
Top benchmark results for anthropic/claude-3-haiku.
Key dates, capabilities, and model metadata.
Key dates: 04 Mar 2024 · 13 Mar 2024 · 19 Feb 2026 · 19 Apr 2026
License: Proprietary
Detailed benchmark comparisons now live in the Compare tool.
Public apps observed in gateway request traffic for this model.
Based on the last hour of gateway traffic: we sum each provider's spend, divide by the total token volume, and express the result as USD per 1M tokens. Providers with more traffic naturally carry more weight.
Weighted Avg Input Price: -- per 1M tokens (past hour)
Weighted Avg Output Price: -- per 1M tokens (past hour)
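The weighted average described above can be sketched in a few lines of awk. The per-provider spend and token totals below are made-up illustrations, not live data from this page.

```shell
# Hypothetical per-provider hourly totals, one line each: "spend_usd total_tokens"
# Weighted avg price = total spend / total tokens, scaled to USD per 1M tokens
printf '%s\n' "12.50 50000000" "3.75 15000000" |
  awk '{spend += $1; tokens += $2}
       END {printf "%.3f\n", spend / tokens * 1e6}'
# -> 0.250
```

Because totals are summed before dividing (rather than averaging per-provider rates), providers with more traffic dominate the result automatically.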
| Provider | Input $/Million | Output $/Million | Cache Token % |
|---|---|---|---|
| Amazon Bedrock | $0.250 | $1.250 | -- |
| Anthropic | $0.250 | $1.250 | -- |
| Google Vertex | $0.250 | $1.250 | -- |
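With the table prices ($0.250 in, $1.250 out per 1M tokens), estimating a request's cost is simple arithmetic. The token counts below are arbitrary examples, not real usage figures.

```shell
# Estimated cost = in_tokens/1M * $0.250 + out_tokens/1M * $1.250
in_tokens=2000
out_tokens=500
awk -v i="$in_tokens" -v o="$out_tokens" \
    'BEGIN {printf "$%.6f\n", i / 1e6 * 0.250 + o / 1e6 * 1.250}'
# -> $0.001125
```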
Meter: Latency · Throughput · Uptime · Total Context · Max Output
Core latency and throughput trends from recent traffic.
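Throughput here is the usual output-tokens-per-second figure; a minimal sketch of the calculation, with illustrative rather than measured numbers:

```shell
# Throughput = generated tokens / wall-clock seconds
awk 'BEGIN {tokens = 512; seconds = 3.2; printf "%.1f tokens/s\n", tokens / seconds}'
# -> 160.0 tokens/s
```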
Start calling this model with endpoint-specific examples.
```shell
# 1) Set your key
export AI_STATS_API_KEY="aistats_***"

# 2) Send a request
curl -s https://api.phaseo.app/v1/responses \
  -H "Authorization: Bearer $AI_STATS_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-3-haiku",
    "input": "Give me one fun fact about cURL."
  }'
```