Z.ai

GLM-5.1

2026-04-07

GLM-5.1 is Z.ai's latest open-source model released in April 2026 under the MIT license, a post-training upgrade to GLM-5 targeting coding and agentic performance through refined reinforcement learning. Built on the same 744B MoE architecture with 40B active parameters and a 200K-token context window, it scores 58.4% on SWE-Bench Pro — surpassing Claude Opus 4.6 (57.3%) — and can autonomously manage a full plan-execute-test-fix-optimize loop for up to eight hours without human intervention. It represents one of the strongest open-weight models available for long-horizon agentic engineering tasks.
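The plan-execute-test-fix-optimize loop described above can be pictured as a simple control loop with a time budget. The sketch below is purely illustrative: the function names (`plan`, `execute`, `run_tests`, `fix`) and the callback-based structure are assumptions for demonstration, not Z.ai's actual agent API, and the optimize phase is omitted for brevity.

```python
# Hypothetical sketch of a plan -> execute -> test -> fix loop with an
# eight-hour autonomy budget, as described in the card. All names here
# are illustrative, not part of any real Z.ai or GLM-5.1 API.
import time

MAX_SECONDS = 8 * 60 * 60  # the eight-hour budget mentioned above


def run_agent_loop(task, plan, execute, run_tests, fix, clock=time.monotonic):
    """Iterate execute -> test -> fix on a plan until tests pass or time runs out."""
    start = clock()
    state = plan(task)
    while clock() - start < MAX_SECONDS:
        result = execute(state)
        passed, feedback = run_tests(result)
        if passed:
            return result      # task solved within the budget
        state = fix(state, feedback)
    return None                # budget exhausted without passing tests


if __name__ == "__main__":
    # Toy demo: each "fix" increments a counter until the test passes.
    res = run_agent_loop(
        task=0,
        plan=lambda t: t,
        execute=lambda s: s,
        run_tests=lambda r: (r >= 3, "needs more work"),
        fix=lambda s, fb: s + 1,
    )
    print(res)  # prints 3
```

The key design point the card implies is that test feedback, not human review, drives each iteration; here that is modeled by threading `feedback` from `run_tests` into `fix`.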

Reasoning | Open Model | MIT
Knowledge Cutoff: 2025
Input → Output Format
Context Memory: 203K in / 66K out
Cost / 1M Words: $1.05 in / $3.50 out
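The listed rates ($1.05 per million input units, $3.50 per million output units) make per-request cost a straight linear calculation. A minimal sketch, with `estimate_cost` being an illustrative helper name rather than any official tool:

```python
# Estimate request cost from the card's listed GLM-5.1 rates.
# estimate_cost is a hypothetical helper, not an official calculator.
IN_RATE = 1.05    # dollars per 1M input units
OUT_RATE = 3.50   # dollars per 1M output units


def estimate_cost(input_units: int, output_units: int) -> float:
    """Return the dollar cost of a single request at the listed rates."""
    return input_units / 1_000_000 * IN_RATE + output_units / 1_000_000 * OUT_RATE


if __name__ == "__main__":
    # A request using the full 203K input / 66K output windows:
    print(f"${estimate_cost(203_000, 66_000):.2f}")  # prints $0.44
```

At these rates, even a maximally sized request (full 203K input, full 66K output) stays well under a dollar, which is the main practical takeaway of the pricing line above.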

AI Performance Evaluation

Arena Overall Score: 1471 ±6 (as of 2026-05-01)
Overall Rank: No. 18 (11,071 votes)
Arena by Ability
Hard Prompts: 1493 ±8 (No. 18)
Expert Knowledge: 1488 ±19 (No. 26)
Instruction Following: 1463 ±10 (No. 18)
Conversation Memory: 1477 ±14 (No. 25)
Creative: 1454 ±14 (No. 12)
Coding: 1524 ±11 (No. 10)
Math: 1469 ±21 (No. 19)
Arena by Occupation
Creative Writing: 1458 ±12 (No. 13)
Social Sciences: 1494 ±14 (No. 10)
Media: 1455 ±13 (No. 11)
Business: 1452 ±13 (No. 35)
Healthcare: 1472 ±21 (No. 41)
Legal: 1477 ±21 (No. 24)
Software: 1510 ±9 (No. 14)
Mathematics: 1473 ±23 (No. 20)
Overall
AA Intelligence Index: 51% (↑12%)
LiveBench: 71% (↑10%)
Reasoning & Math
GPQA Diamond: 87% (↑5%)
HLE: 28% (↑10%)
LB Reasoning: 73% (↑3%)
LB Math: 85% (↑11%)
LB Data: 63% (↑10%)
Coding
AA Coding Index: 43% (↑7%)
LB Coding: 75% (↑2%)
LB Agentic: 55% (↑10%)
TAU2: 98% (↑17%)
TerminalBench: 43% (↑9%)
SciCode: 44% (↑2%)
Language & Instructions
IFBench: 76% (↑13%)
AA-LCR: 62% (↑0%)
Hallucination (HHEM): 10% (↑0%)
Factual (HHEM): 90% (↑0%)
LB Language: 72% (↑0%)
LB IF: 68% (↑17%)
Output Speed
Standard Mode: 42 tok/s (↓35), First Output 1.37 s
Reasoning Mode: 52 tok/s (↓35), First Output 73.87 s