Z.ai

1,113 posts
Z.ai
@Zai_org
The AI Lab behind GLM models, dedicated to inspiring the development of AGI to benefit humanity. huggingface.co/zai-org discord.gg/QR7SARHRxK
user_feedback@z.ai
z.ai

Z.ai’s posts

Pinned
Introducing GLM-5.1: The Next Level of Open Source - Top-Tier Performance: #1 in open source and #3 globally across SWE-Bench Pro, Terminal-Bench, and NL2Repo. - Built for Long-Horizon Tasks: Runs autonomously for 8 hours, refining strategies through thousands of iterations.
Truly sorry for any confusion or frustration caused by unclear, misleading, or inappropriate rules in our moderation system and on our pages. OpenClaw, Hermes, and SillyTavern are now explicitly marked as supported under the GLM Coding Plan. Other general-purpose tools will be
Fantastic to see GLM being applied to such fresh, dynamic scenarios.
Quote
Jifan Yu
@yujifan_0326
Doing some stress tests on OpenMAIC’s Interactive Simulation with a DNA Replication case. 💻 Both powered by @Zai_org — with GLM-5.1 and GLM-5V-Turbo each generating these complex pedagogical simulations in real time. Can you spot the difference? The "Turbo" is catching up
GMI Cloud and Z.ai are bringing fast inference + frontier models around the globe. First stop: Singapore. Big congrats to all the builders at the GMI x Agent Hackathon in 🇸🇬: 100+ on-site, 16 teams shipped projects and battled it out for the top prizes with GLM.
GLM-5.1 by Z.ai is now #3 in Code Arena, surpassing Gemini 3.1 and GPT-5.4 and on par with Claude Sonnet 4.6. It is the first frontier-level open model to break into the top 3: a major +90-point jump over GLM-5, and +100 over Kimi K2.5 Thinking. Huge congrats to
Quote
Z.ai
@Zai_org
Introducing GLM-5.1: The Next Level of Open Source
I often discuss my three-level vision for opening GLM to the community. First, we focus on accessibility: lowering the barrier to entry and removing unnecessary constraints so developers can truly explore the model. Second, we provide a robust baseline that empowers everyone to
Quote
Peter Yang
@petergyang
Silicon Valley is quietly running on Chinese open source AI models. Here are the receipts: → Cursor confirmed last month that Composer 2 is built on Moonshot's Kimi K2.5 → Cognition's SWE-1.6 model is likely post-trained on Zhipu's GLM → Shopify saved $5M a year by x.com/petergyang/sta…
We partnered with Modal to bring GLM-5.1 to their platform, free to try as an endpoint for the next month. GLM-5.1 further improves upon GLM-5's coding abilities and long-horizon effectiveness.
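The post doesn't show how to call the endpoint. As a rough sketch, hosted GLM deployments typically expose an OpenAI-compatible chat-completions API; the base URL, model id (`glm-5.1`), and environment-variable names below are assumptions for illustration, not details confirmed by the post.

```python
import json
import os
import urllib.request

# Hypothetical endpoint details -- the post does not specify them.
BASE_URL = os.environ.get("GLM_BASE_URL", "https://example.invalid/v1")
API_KEY = os.environ.get("GLM_API_KEY", "sk-demo")


def build_chat_request(prompt: str, model: str = "glm-5.1") -> urllib.request.Request:
    """Build (but do not send) an OpenAI-style chat-completions request."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )


req = build_chat_request("Write a haiku about open-source models.")
print(req.full_url, json.loads(req.data)["model"])
```

Sending the request (e.g. with `urllib.request.urlopen(req)`) would require real credentials and the actual endpoint URL from the provider.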
Quote
Z.ai
@Zai_org
Introducing GLM-5.1: The Next Level of Open Source
INCREDIBLE
GLM-5.1 weights are now open source
> i’ve had early access to the weights for the past few days
> and yeah… this one matters a lot
benchmarks?
> SWE-Bench Pro: 58.4
> beats Opus 4.6 (57.3)
> beats GPT-5.4 (57.7)
> beats Gemini 3.1 Pro (54.2)
let that sink in
Quote
Z.ai
@Zai_org
Introducing GLM-5.1: The Next Level of Open Source
🚀 GLM-5.1 is now available on SiliconFlow Start a task, go to sleep. GLM-5.1 plans, executes, and self-improves for 8 hours and delivers high-quality results by morning. Still open-source. Big kudos to the team 👏 Now on SiliconFlow, you can get: 💰 Competitive
Introducing GLM-5.1 for understanding research papers 🚀 Highlight any section of a paper to ask questions and “@” other papers for quick context, comparisons, and benchmark references
If you’ve encountered garbled output like this while using GLM-5 or GLM-5.1 on our official service, the issue is now resolved. We've patched the underlying inference-side bugs and will be releasing an article shortly to dive into the technical details and our specific fixes.
The chart says GLM-5.1 scored 54.9 on coding benchmarks. Three points behind Claude Opus 4.6. Interesting but not the story. The story is what trained it. Zero Nvidia GPUs. 100,000 Huawei Ascend 910B chips. Every parameter. Every one of the 28.5 trillion training tokens.
Quote
Z.ai
@Zai_org
Introducing GLM-5.1: The Next Level of Open Source
Zhipu has taken open-source agents to a whole new level! GLM-5.1 is officially open source: ✅ #1 on SWE-Bench Pro among open models (58.4), #3 globally ✅ A true long-horizon agent: runs autonomously for 8 hours with thousands of iterations plus self-review loops ✅ Built a complete Linux desktop with 50+ apps from scratch (the video is wild) ✅ A straight 6x performance gain on Vector-DB-Bench
Quote
Z.ai
@Zai_org
Introducing GLM-5.1: The Next Level of Open Source
GLM-5.1 by Z.ai just launched in the Text Arena and is now the #1 open model. It outperforms the next best open model, its predecessor GLM-5, by +11 points, and Kimi K2.5 Thinking by +15. It shows strength in: - #1 open model in Longer Query (#4 overall) - #1 open model
Quote
Z.ai
@Zai_org
Introducing GLM-5.1: The Next Level of Open Source
Replying to
Check out the GLM-5.1 first impressions with Peter on our YouTube channel.