During Amazon's recent earnings call, CEO Andy Jassy dropped a bombshell: Trainium2, the company's custom AI chip built for model training and inference, has exploded into a multibillion-dollar business. The chips are fully subscribed, demand is outpacing supply, and the growth trajectory is staggering — 150% quarter over quarter. What started as an in-house experiment has become the backbone of AWS's AI ambitions and a serious challenger to Nvidia's dominance in the AI hardware market.
Explosive Growth Revealed
The trading commentary account Wall St Engine highlighted how, according to the earnings transcript, Jassy made it clear that Trainium2 isn't just a promising product anymore — it's a revenue driver. "Trainium2 continues to see strong adoption, is fully subscribed, and is now a multibillion-dollar business that grew 150% quarter over quarter," he stated.
That kind of acceleration is rare, even in the fast-moving AI sector. It signals not only surging demand but also Amazon's ability to deliver infrastructure at scale while competitors struggle with chip shortages and capacity constraints. For investors and industry watchers, this marks a pivotal moment: Amazon has moved beyond being a cloud service provider to become a foundational AI hardware powerhouse.
Project Rainier: AWS's Massive AI Compute Cluster
Amazon also unveiled Project Rainier, a sprawling AI compute cluster that stretches across multiple U.S. data centers. The system currently houses nearly 500,000 Trainium2 chips, designed to handle the most demanding AI workloads. One of its first major clients is Anthropic, which is using Rainier to train and deploy Claude, its cutting-edge language model. Jassy confirmed that Amazon expects to have more than 1 million Trainium2 chips deployed by the end of the year — effectively doubling the infrastructure footprint in just months. This positions AWS as one of the largest AI infrastructure providers globally, competing head-to-head with Nvidia's dominance in GPU supply, as well as Google Cloud and Microsoft Azure.
Trainium2's Surge: Performance, Scale, and Profitability
The 150% quarter-over-quarter growth rate puts Trainium2 among the fastest-scaling AI hardware businesses in the world. Unlike general-purpose GPUs, these chips are purpose-built for AI — optimized for both training large models and running inference at lower cost and higher efficiency. Because the chips are designed, deployed, and sold entirely within AWS, this vertical integration gives Amazon a crucial advantage over competitors who depend heavily on external suppliers like Nvidia. Amazon also disclosed that it has added over 3.8 gigawatts of data center capacity in the past year alone — more than any other cloud provider. That's double its power capacity since 2022, with plans to double again by 2027. Much of this expansion is fueled by Trainium and Nvidia GPU deployments, reflecting the explosive growth of AI infrastructure demand across AWS's ecosystem.
Anthropic Partnership Strengthens Amazon's AI Ambitions
The collaboration between Anthropic and AWS runs deeper than a typical vendor relationship. By training Claude on Trainium2 hardware, Anthropic is reducing its dependency on Nvidia GPUs and proving that Amazon's custom silicon can handle the most sophisticated AI research. This partnership is part of a broader industry shift: leading AI companies are increasingly turning to custom chip solutions to control costs and improve scalability. Amazon's chips, paired with its global cloud infrastructure, position it uniquely to supply the next generation of large-scale AI models.
AI Infrastructure: AWS's New Growth Engine
What stands out most from Jassy's remarks is that Trainium2 has moved beyond being just a technology success — it's now a financial powerhouse. With full subscription capacity and triple-digit growth, Trainium2 has become one of AWS's most profitable business lines. This represents a turning point in Amazon's cloud strategy: AI infrastructure is no longer just a service offering. It's a standalone product category generating billions in revenue. AWS is positioning itself not only as a platform for running AI but as the core infrastructure provider powering the global AI economy.
Peter Smith