Claude Haiku 4.5 is Anthropic's fastest and most cost-efficient model, delivering near-frontier intelligence at a fraction of the cost of larger Claude models. It matches Claude Sonnet 4's coding performance at one-third the cost and over twice the speed, scoring 73.3% on SWE-bench Verified — placing it among the world's top coding models. With support for extended thinking, tool use, computer use, and a 200K-token context window, it is ideal for real-time applications, parallelized sub-agents, and high-volume deployments.
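Since the model is available through the API with extended-thinking support, a minimal request sketch may help. This builds the JSON body for Anthropic's Messages API; the model ID string and token budgets below are assumptions for illustration, not values confirmed by this page.

```python
import json

# Hypothetical Messages API request payload.
# The model ID "claude-haiku-4-5" and the token budgets are assumptions.
payload = {
    "model": "claude-haiku-4-5",
    "max_tokens": 4096,
    # Extended thinking is requested via a "thinking" block with a
    # token budget, following Anthropic's public Messages API shape.
    "thinking": {"type": "enabled", "budget_tokens": 2048},
    "messages": [
        {"role": "user", "content": "Summarize this diff in two sentences."}
    ],
}

# Serialized body for POST https://api.anthropic.com/v1/messages
body = json.dumps(payload)
```

In practice the same payload is usually sent through the official `anthropic` SDK rather than hand-built JSON; the dict form is shown here only to make the request shape explicit.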
Availability: Anthropic Free · Anthropic Pro · Anthropic Max (5x) · Anthropic Max (20x) · API
Capabilities: Vision · Reasoning · Web Search
License: Proprietary Model
Knowledge Cutoff: 2025-07
Context Window: 200K tokens in / 64K tokens out
AI Performance Evaluation
Arena Overall Score: 1408 ±3 (as of 2026-05-01)
Overall Rank: No. 98 (65,644 votes)
Arena by Ability
Hard Prompts: 1437 ±4 (No. 82)
Expert Knowledge: 1447 ±10 (No. 70)
Instruction Following: 1411 ±5 (No. 79)
Conversation Memory: 1422 ±6 (No. 76)
Creative: 1385 ±7 (No. 87)
Coding: 1478 ±6 (No. 60)
Math: 1391 ±10 (No. 123)
Arena by Occupation
Creative Writing: 1395 ±6 (No. 89)
Social Sciences: 1422 ±7 (No. 100)
Media: 1382 ±6 (No. 89)
Business: 1415 ±6 (No. 83)
Healthcare: 1417 ±10 (No. 116)
Legal: 1410 ±9 (No. 100)
Software: 1460 ±5 (No. 69)
Mathematics: 1420 ±11 (No. 83)
Source: Arena

Intelligence
Overall
AA Intelligence Index: 37% (↓2%)
LiveBench: 43% (↓18%)
ForecastBench: 59% (±0%)
Reasoning & Math
AA Math Index: 84% (↑9%)
GPQA Diamond: 67% (↓15%)
HLE: 9.7% (↓8%)
MMLU-Pro: 76% (↓5%)
AIME 2025: 84% (↑9%)
LB Reasoning: 34% (↓35%)
LB Math: 58% (↓16%)
LB Data: 45% (↓8%)
Coding
AA Coding Index: 33% (↓4%)
LiveCodeBench: 62% (↓4%)
LB Coding: 72% (↓1%)
LB Agentic: 33% (↓12%)
TAU2: 55% (↓26%)
TerminalBench: 27% (↓7%)
SciCode: 43% (↑1%)
Language & Instructions
IFBench: 54% (↓9%)
AA-LCR: 70% (↑8%)
Hallucination (HHEM): 9.8% (±0%)
Factual (HHEM): 90% (±0%)
LB Language: 57% (↓15%)
LB IF: 18% (↓33%)
Output Speed
Standard Mode: 99 tok/s (↑22), first output 0.51s
Reasoning Mode: 111 tok/s (↑24), first output 13.92s