ModelHub XC + Moore Threads: over a hundred model adaptations completed, with significant advantages on quantized models

GMT Eight | 14:55 25/11/2025
Paradigm Intelligence recently announced that its "ModelHub XC" platform has completed certification for 108 mainstream AI models on Moore Threads GPUs, covering tasks such as text generation, visual understanding, and multimodal question answering. The company plans to expand coverage to thousands of models within the next six months, injecting continuous energy into the domestic computing power ecosystem.

Notably, during this batch adaptation, Moore Threads, a domestic GPU company about to list on the STAR Market (Science and Technology Innovation Board), showed significant advantages on quantized models. Its GPUs provide hardware-level support for low-precision data types, along with optimized instruction sets and cache mechanisms, which effectively reduce model memory usage and improve inference speed, making quantized models both efficient and energy-efficient in practical deployments. Through fine-tuning and optimization, the adapted models not only improve performance but also keep inference accuracy within commercial deployment requirements.

Moore Threads reportedly launched its IPO officially on November 24th at an issue price of 114.28 yuan per share, the highest A-share IPO issue price so far in 2025.

With AI inference efficiency now a core challenge for industrial applications, running models efficiently and stably on domestic chips has become a key factor in maturing the computing power ecosystem. In response, Paradigm Intelligence relies on its self-developed EngineX engine to focus on breakthroughs in model compatibility and operational efficiency on domestic chips, significantly lowering the deployment threshold for developers. To date, "ModelHub XC" has completed adaptation verification on Moore Threads GPUs for models including the Mata, Qianwen, Deepseek, Hunyuan, and Open Sora series.
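The memory and speed benefits the article attributes to low-precision hardware support come from storing weights in fewer bits. A minimal illustrative sketch (not Paradigm's or Moore Threads' actual pipeline) of symmetric INT8 post-training quantization, the general technique behind such savings:

```python
# Illustrative only: symmetric per-tensor INT8 quantization.
# FP32 weights take 4 bytes each; INT8 takes 1 byte, a 4x reduction.
import struct

def quantize_int8(weights):
    """Map float weights to int8 values with a single per-tensor scale."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

weights = [0.62, -1.27, 0.05, 0.91]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)

fp32_bytes = len(struct.pack(f"{len(weights)}f", *weights))  # 16 bytes
int8_bytes = len(struct.pack(f"{len(q)}b", *q))              # 4 bytes
print(q, fp32_bytes, int8_bytes)
```

Real deployments use per-channel scales, calibration data, and hardware INT8 kernels, but the storage arithmetic above is why quantized models fit in less memory and move less data per inference step.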
The EngineX engine serves as the underlying support layer, enabling an "engine-driven, plug-and-play for multiple models" approach that addresses bottlenecks in model compatibility and scale support on domestic chips.

About ModelHub XC
ModelHub XC is an AI model and tool platform for the domestic computing power ecosystem. Combining community and service functions, it promotes AI innovation and implementation on domestic hardware platforms, providing end-to-end solutions covering model training, inference, and deployment.