KNOWLEDGE ATLAS (02513) releases GLM-5.1: price raised 10% against the industry trend, open-source model surpasses closed-source rivals, programming capability on par with Claude Opus 4.6.

10:31 08/04/2026
GMT Eight
The release of GLM-5.1 marks the transition of domestic large models from "cost-effective competition" to "global benchmarking of capabilities", and also heralds that AI is accelerating from an efficiency tool to a new type of productivity with independent output capabilities.
On April 8, KNOWLEDGE ATLAS (02513) officially released its new-generation open-source model, GLM-5.1. According to OpenRouter, KNOWLEDGE ATLAS raised the price of the GLM series by a further 10% with this release. After the adjustment, GLM-5.1's cache-hit token price in coding scenarios approaches that of Anthropic's Claude Sonnet 4.6, marking the first time a domestic large model has achieved price alignment with leading overseas companies in a core scenario. A year ago, the industry was competing for market share with fee cuts of more than 90%; today, domestic models are anchoring to international standards with a performance premium. The shift shows they are gradually moving away from pure low-price competition, benchmarking performance against international levels, and officially entering a new stage of "value pricing".

GLM-5.1 remains a leader in programming capability. On the combined average of three major code benchmarks, SWE-bench Pro, Terminal-Bench, and NL2Repo, it ranks third globally, first among domestic models, and first among open-source models. On SWE-bench Pro, the benchmark closest to real software development, it became the first domestic model to surpass Claude Opus 4.6, setting a new world record.

Of particular note is its breakthrough on long-horizon tasks. GLM-5.1 is the only open-source model able to sustain eight hours of continuous work, and one of the few models worldwide, alongside Claude Opus 4.6, with this capability. Moving beyond the minutes-long interactions that limit current large models, it can work continuously and autonomously on a single task for up to eight hours: breaking the task down, retrying repeatedly, fixing issues, and ultimately delivering complete engineering-grade results.