Sealand: Bus interconnection drives the industrial development of AI models and applications; the "Recommended" rating on the computer industry is maintained.
Current mainstream interconnect protocols include NVLink, UALink, SUE, CXL, HSL, and UB.
Sealand released a research report stating that in the era of large models, Scale-Up is creating new demand for high-speed interconnect protocols. Bus interconnects play a key role in linking the nodes inside a super node, and vendors both at home and abroad are actively releasing new bus interconnect architectures to drive the industrial development of AI models and applications, helping to form a positive feedback loop from models to computing power. Sealand maintains a "Recommended" rating on the computer industry.
Sealand's main points are as follows:
1. High-speed interconnect protocols serve Scale-Up in the era of large models.
Computer buses connect systems and components; a bus interface is the physical channel over which data moves between the components of a system or a PCB, handling functions such as sending data, locating specific data, and controlling the operation of different parts of the system. Common server bus protocols include PCIe and Ethernet, with switch devices handling communication among the hosts in a Scale-Up domain and expanding the system's bandwidth and device count. In the era of large models, Scale-Up is creating new demand for high-speed interconnect protocols; the current mainstream interconnect protocols include NVLink, UALink, SUE, CXL, HSL, and UB.
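To make the role of the switch concrete, below is a minimal back-of-the-envelope sketch (Python, with illustrative device counts that are not taken from the report): a full mesh of N accelerators needs N*(N-1)/2 point-to-point links, while a switched fabric needs only one link per device, which is why switches are used to expand the device count within a Scale-Up domain.

```python
# Illustrative sketch (not from the report): why Scale-Up fabrics use switches.
# A full mesh of N devices needs N*(N-1)/2 point-to-point links, while a
# switched fabric needs only N links (one per device to the switch).

def full_mesh_links(n_devices: int) -> int:
    """Point-to-point links needed to connect every device directly to every other."""
    return n_devices * (n_devices - 1) // 2

def switched_links(n_devices: int) -> int:
    """Links needed when every device connects to a shared switch instead."""
    return n_devices

for n in (8, 16, 64):
    print(f"{n:>3} devices: full mesh {full_mesh_links(n):>4} links, "
          f"switched {switched_links(n):>3} links")
```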
2. NVLink leads among Scale-Up high-speed interconnect protocols, with many vendors catching up.
1) The PCIe protocol and PCIe switches form the traditional computer expansion bus standard. Although its data rate keeps iterating upward, communication between devices such as CPUs and GPUs still runs into speed bottlenecks, which gave rise to the CXL protocol. In addition, many vendors use interconnect protocols of their own, with NVLink in the leading position.
2) NVLink provides high-speed GPU-to-GPU interconnection within the Scale-Up domain; NVSwitch supplies the hardware for interconnecting many GPUs for inference with low latency, many lanes, high bandwidth, and high power consumption; NVLink C2C provides high-speed CPU-to-GPU interconnection within the Scale-Up domain. Fifth-generation NVLink runs at 200 Gbps per lane, versus 32 GT/s per lane for PCIe Gen5 (see the bandwidth sketch after this list).
3) Huawei's UB delivers hundred-nanosecond-level synchronous memory-access latency, asynchronous memory-access latency of 2-5 µs, and TB/s-class bandwidth between components. The UB Processing Unit supports the UB protocol stack, and its embedded UB Switch enables multi-level UB Switch network expansion and supports convergence with Ethernet switches through UBoE.
4) UALink builds on Ethernet infrastructure to implement Scale-Up. The UALink 1.0 specification supports a maximum data rate of 200 GT/s per lane; each group of four physical lanes forms a basic unit, providing up to 800 Gbps in each of the transmit (TX) and receive (RX) directions.
5) Broadcom's SUE (Scale-Up Ethernet) uses Ethernet to turn the network into a bus-like fabric; SUE delivers high bandwidth and low latency efficiently and supports multiple instantiations with good area and power efficiency.
6) At the 2025 Photosynthesis Organization Artificial Intelligence Innovation Conference, Hygon Information Technology released the Huaguang System Interconnect Bus Protocol (HSL) 1.0 specification and published an open three-year roadmap for HSL, aiming to break down technical barriers and promote collaborative innovation across the domestic computing industry ecosystem.
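To tie the per-lane figures in points 2) and 4) together, here is a small illustrative calculation (raw signaling rates only; encoding overhead and protocol efficiency are ignored, and any lane-count choices beyond those quoted above are assumptions for illustration, not figures from the report):

```python
# Illustrative arithmetic based on the per-lane figures cited above.
# Raw signaling rates only; encoding overhead and protocol efficiency ignored.

def group_bandwidth_gbps(per_lane_gbps: float, lanes: int) -> float:
    """Aggregate raw bandwidth of a lane group, one direction, in Gbps."""
    return per_lane_gbps * lanes

# UALink 1.0: 200 GT/s per lane, four lanes per basic unit -> 800 Gbps per direction.
print("UALink x4:", group_bandwidth_gbps(200, 4), "Gbps per direction")

# Fifth-generation NVLink signals at 200 Gbps per lane versus 32 GT/s per lane
# for PCIe Gen5, so at equal lane counts the raw rate differs by roughly 6.25x.
print("NVLink5 vs PCIe Gen5 per-lane ratio:", 200 / 32)
```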
3. NVLink moves toward openness, and interconnect technology must serve high bandwidth and low latency.
1) NVLink Fusion partially opens the NVLink ecosystem. MediaTek, Marvell, Alchip Technologies, Astera Labs, Synopsys, and Cadence are the first group of partners; the program supports custom chip design for Scale-Up to meet the needs of model training and inference, and cloud service providers can combine custom ASICs with NVIDIA's rack-scale systems and NVIDIA's end-to-end networking platform.
2) The evolution of computing demand places higher requirements on interconnect technology: high bandwidth and low latency. As model size, dataset size, and training compute grow, language-modeling performance improves smoothly, and reaching optimal performance requires scaling all three factors in concert. Many large language models are significantly undertrained, the result of chasing ever larger model scale while keeping the volume of training data roughly constant (a rough illustration follows below).
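As a rough illustration of the "undertrained" point, here is a minimal sketch assuming the commonly cited compute-optimal rule of thumb of roughly 20 training tokens per parameter; the ratio and the model sizes are illustrative assumptions, not figures from the report.

```python
# Illustrative sketch of the "undertrained" point above, assuming the commonly
# cited compute-optimal rule of thumb of ~20 training tokens per parameter.
# The ratio and model sizes are assumptions for illustration only.

TOKENS_PER_PARAM = 20  # assumed rule-of-thumb ratio

def compute_optimal_tokens(n_params: float) -> float:
    """Rough compute-optimal token budget for a model with n_params parameters."""
    return TOKENS_PER_PARAM * n_params

for params_b in (7, 70, 175):
    tokens_t = compute_optimal_tokens(params_b * 1e9) / 1e12
    print(f"{params_b:>4}B params -> ~{tokens_t:.1f}T tokens for roughly compute-optimal training")
```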