Guotai Junan: Manus opens a new era of AI "digital agents" and is expected to achieve deep applications across multiple industries.

Date: 07/03/2025
Author: GMT Eight
Guotai Junan released a research report stating that the Monica team recently launched Manus, billed as the world's first general-purpose AI Agent product. Manus's success demonstrates the potential of AI as a "digital agent": it not only improves work efficiency but also provides a new paradigm for AI-human collaboration, allowing AI to act as an independent intelligent entity that takes on more complex tasks. Manus has shown strong cross-domain applicability and is expected to achieve deep applications in more industries in the future, such as healthcare, finance, education, and enterprise management.

Manus combines an innovative architecture and operating model with strong task-processing capabilities. It adopts a Multiple Agent architecture running in independent virtual machines, in which planning agents, execution agents, and verification agents coordinate to handle complex tasks more efficiently, while parallel computing shortens response time (a minimal illustrative sketch of this kind of loop appears at the end of this note). The architecture lets Manus work the way a person would: it breaks a complex task into executable steps and then calls the appropriate tools to complete them. In terms of task processing, Manus delivers complete task results directly rather than only offering suggestions or answers.

Manus's interaction design also focuses on user experience. Task progress is synchronized in real time, so users can clearly see how a task is being executed, which enhances their sense of control. Manus also has memory capabilities and can adapt the output format of subsequent tasks to user preferences, increasing satisfaction.

In the GAIA benchmark, Manus achieved new state-of-the-art (SOTA) performance across all three difficulty levels, surpassing OpenAI models of the same class. GAIA is an authoritative benchmark for evaluating how well general AI assistants solve real-world problems. Proposed in 2023 by research teams including Meta AI and Hugging Face, it consists of 466 questions across three difficulty levels and measures an AI system's abilities in reasoning, multimodal processing, web browsing, and tool invocation. Manus's results point to strong robustness on complex tasks and long-tail problems, and suggest that it not only plans and executes tasks well but can also flexibly call on a variety of tools through autonomous learning and cross-domain collaboration. Guotai Junan attributes much of this success to the Multiple Agent architecture, whose planning, execution, and verification agents work together to achieve efficient task decomposition and parallel processing. In addition, Manus's test configuration is kept consistent with its production version to ensure result reproducibility.

Recommended client-side SoC names: Shenzhen Bluetrum Technology (688332.SH), Amlogic (688099.SH), Rockchip Electronics (603893.SH). Related beneficiaries: Espressif Systems (688018.SH), Allwinner Technology (300458.SZ). Catalyst: progress in AI application adoption exceeding expectations. Risk warning: the actual application effect of Manus may fall short of expectations.
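
Appendix: as context for the Multiple Agent description above, the sketch below shows one generic way a planner / executor / verifier loop with parallel step execution can be organized. It is an illustrative assumption, not a description of Manus's actual implementation; all names and interfaces are hypothetical.

```python
# Hypothetical planner / executor / verifier loop (illustrative only; not Manus's real API).
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from typing import Optional


@dataclass
class Step:
    description: str
    result: Optional[str] = None
    verified: bool = False


def plan(task: str) -> list:
    """Planning agent: break a complex task into executable steps (stubbed)."""
    return [Step(f"{task} - step {i}") for i in range(1, 4)]


def execute(step: Step) -> Step:
    """Execution agent: call an appropriate tool and record its output (stubbed)."""
    step.result = f"output of '{step.description}'"
    return step


def verify(step: Step) -> Step:
    """Verification agent: check the result before it is accepted (stubbed)."""
    step.verified = step.result is not None
    return step


def run(task: str) -> list:
    """Coordinate the three agents; independent steps run in parallel."""
    steps = plan(task)
    with ThreadPoolExecutor() as pool:
        executed = list(pool.map(execute, steps))
    return [verify(s) for s in executed]


if __name__ == "__main__":
    for s in run("research a market and draft a report"):
        print(s.description, "->", s.result, "(verified)" if s.verified else "(unverified)")
```

The sketch only shows the coordination pattern (decompose, execute in parallel, verify, return completed results rather than suggestions); a production system would replace each stub with model calls and real tool invocations.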

Contact: contact@gmteight.com