Another challenger! Amazon (AMZN.US) joins the AI chip "Three Kingdoms" battle with Trainium3; Citi: the NVIDIA-compatible strategy is highly flexible.
Google has just announced the expansion of external sales of its self-developed TPU chips, and shortly afterwards Amazon (AMZN.US) entered the fray with its self-developed Trainium3 chip. At its recent cloud technology conference, Amazon announced two key developments in the Trainium chip family: the official availability of Trainium3 and a preview of the more powerful Trainium4. Both chips make significant strides in compute, energy efficiency, and compatibility, directly targeting the core demands of large-scale generative AI deployment.
Amazon's move is seen as another giant, after Google, attempting to challenge NVIDIA's GPUs. In a subsequent research report, Citi noted that as Microsoft and Google accelerate their self-developed AI chip plans, the iteration of the Trainium series is helping Amazon maintain its lead in the "self-developed computing power ecosystem".
Part.01 Trainium3: The "compute multiplier" now in commercial use
As the flagship of the current Trainium family, Trainium3's core advantages center on performance improvement and cost optimization. Compared with Trainium2:
Compute: 4.4 times that of Trainium2, efficiently supporting more complex generative AI workloads (such as large language model inference and multimodal processing).
Energy efficiency: a 4-fold improvement, meaning that at the same compute output, customers' energy costs can fall by 75%, matching the core "cut costs, raise efficiency" demand of AI deployment.
Memory bandwidth: nearly 4 times higher, easing the data-transfer bottleneck of large models and reducing latency in both training and inference.
Commercial progress: now generally available; customers can access it directly through Amazon's cloud services without setting up additional hardware infrastructure.
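The energy-efficiency claim above is internally consistent: 4 times the efficiency at the same compute output implies a quarter of the energy use, i.e. a 75% cost reduction. A minimal sketch of that arithmetic (the function name is ours, not AWS terminology):

```python
def energy_cost_reduction(efficiency_gain: float) -> float:
    """Fraction of energy cost saved when efficiency improves by
    `efficiency_gain`x while compute output is held constant."""
    return 1 - 1 / efficiency_gain

# 4x efficiency -> 1/4 the energy for the same work -> 75% saving
print(f"{energy_cost_reduction(4.0):.0%}")
```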
Part.02 Trainium4: Compatible with NVIDIA Corporation interconnect technology
Amazon also disclosed the development progress of the Trainium4 chip, expected to become its next-generation AI compute core. Key expected specifications:
Performance: expected to be 6 times that of Trainium3, supporting training and inference of ultra-large models (such as trillion-parameter models).
Memory configuration: 4 times the memory bandwidth and double the memory capacity, further addressing large models' heavy demands on storage and data transfer.
Ecosystem compatibility: specially designed to support NVIDIA's NVLink Fusion chip-interconnect technology, meaning Trainium4 can work alongside NVIDIA GPUs in mixed-architecture deployments and avoid the limitations of a single chip ecosystem.
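Taking the article's per-generation multipliers at face value (a sketch that treats the headline figures as exact), the implied cumulative compute gain over Trainium2 can be worked out as:

```python
# Per-generation compute multipliers as stated in this article.
gains = {
    "Trainium3 vs Trainium2": 4.4,
    "Trainium4 vs Trainium3": 6.0,
}

cumulative = 1.0
for step, factor in gains.items():
    cumulative *= factor
    print(f"{step}: {factor}x (cumulative vs Trainium2: {cumulative:.1f}x)")
```

By this arithmetic, Trainium4 would offer roughly 26.4 times the compute of Trainium2, which illustrates why Citi frames the series as an iteration-driven advantage.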
It is worth noting that the AWS CEO, in introducing the Trainium series, emphasized close collaboration with NVIDIA. Citi reads this statement as an important signal of the chip strategy: the aim is not complete replacement, but giving customers more flexible compute choices through "self-developed chips + ecosystem collaboration".
Part.03 Trainium family deployments surpass one million chips
In addition to the new chip launches, Amazon disclosed the overall deployment and capacity picture for the Trainium family. The data show dual advantages of large-scale deployment and rapid production ramp-up, laying a hardware foundation for meeting generative AI demand.
Deployment scale: over one million chips deployed, forming a massive compute network.
To date, Amazon has deployed over one million Trainium chips across its global data centers, widely used in customers' AI model training, inference, and cloud-native computing scenarios, forming the current public…
Capacity ramp-up: Trainium2's production expansion sets record speed.
As Trainium3's predecessor, Trainium2 ramped production significantly faster than any previous AI chip. Citi noted in the report that Trainium2's capacity expanded at 4 times the rate of Amazon's earlier AI chips; this efficiency means it can quickly meet customers' mid-to-high-end AI compute demand and avoid business delays caused by hardware shortages.
Looking at the overall pace, the Trainium family now forms a layered lineup: Trainium2 as the base (meeting low-to-mid compute needs), Trainium3 as the main force (supporting large-scale AI deployments), and Trainium4 as the future (targeting high-compute scenarios), covering different customers' needs in tiers.
Part.04 Citi places high emphasis on Trainium chip iteration
Combining the Trainium roadmap with the overall business, Citi stated clearly in the report that the technical breakthroughs and large-scale deployment of Trainium chips are among the core supports for Amazon's expected 23% year-on-year revenue growth in 2026 and its 20%+ growth outlook through 2027. The logic has three parts:
Reducing customer AI deployment costs
Trainium3's high energy efficiency and Trainium2's scale deployment directly reduce customers' AI compute costs. Citi believes this will attract more small and medium-sized enterprises and traditional-industry customers to move generative AI projects from proof of concept to commercial deployment, driving growth in core cloud services.
Filling the gaps in computing infrastructure
In 2025, generative AI proof-of-concept projects surged, but some customers could not scale due to insufficient compute or high costs. Trainium3's general availability and the Trainium4 preview mean Amazon can offer a more ample and cost-effective compute supply in 2026, meeting this backlog of demand and becoming a new engine of revenue growth.
Consolidating market competitiveness in the cloud
With Microsoft Azure and Google Cloud accelerating their self-developed AI chip plans, the Trainium iteration helps AWS maintain its lead in the "self-developed computing power ecosystem". Citi's analysis holds that the performance advantages and ecosystem compatibility of Trainium chips (such as support for NVIDIA technology) will strengthen customer stickiness with AWS, further consolidating its leading position in the global cloud market.