Meta Dumps Nvidia, Bets Big on Amazon Chips for AI

Meta signed a multibillion-dollar deal with Amazon for Graviton5 chips, signaling a strategic shift away from Nvidia dependency. For CMOs, this reshapes AI infrastructure costs and vendor negotiations across the industry.


On April 24, 2026, Meta signed a multibillion-dollar, multi-year deal with Amazon Web Services to use tens of millions of Amazon's Graviton5 processors for its AI operations. The same day, Nvidia hit a US$5 trillion market cap.

The timing was not a coincidence. It was a signal.

Meta is deliberately building an AI infrastructure that does not depend on any single supplier. If you are a business leader watching how the world's most powerful tech companies manage their AI costs, this move matters.

Why CPUs Now Matter for AI

For years, building AI meant buying Nvidia's graphics processing units (GPUs). These chips are purpose-built for the heavy math required to train large AI models. Nvidia controls 81% of the AI chip market, and its software tools created a lock-in effect that made switching painful.


But AI is entering a new phase. As models get deployed into real products, the work shifts from training to inference. Inference is the step where an AI model actually responds to your question or completes a task. It now accounts for 60-70% of total AI compute demand at major cloud companies, up from around 40% in 2024.

For inference, Nvidia's expensive GPUs are often overkill. General-purpose CPUs, like Amazon's Graviton5, can handle many of these workloads at a fraction of the cost. As Nafea Bshara, the Amazon vice president who co-founded its Annapurna Labs chip unit, put it: "The GPUs are useless if you don't have the CPUs next to them."

Meta's Multi-Supplier Strategy Is Not Accidental

This deal is one piece of a much larger pattern. Meta's total AI chip procurement now exceeds US$200 billion across Nvidia, AMD (a five-year, US$60 billion deal), Google's TPU processors, CoreWeave, Broadcom, Nebius, and now Amazon. Its 2026 capital spending guidance sits at US$115-135 billion, nearly double what it spent the previous year.

This is not diversification for its own sake. Meta learned a painful lesson in 2024 and 2025, when Nvidia GPU shortages delayed multiple AI projects. The company's head of infrastructure has now stated explicitly: "Diversifying our compute sources is a strategic imperative."

Meta is also building its own chips. Its MTIA chip line, co-developed with Broadcom, is already in production. New versions are slated to arrive every six months through 2027, starting at one gigawatt of capacity, a cadence far faster than the industry's typical release cycle of one to two years.

Amazon's Chip Business Reaches Scale

The other story here is Amazon. Its silicon unit (the group that builds Graviton, Trainium, and other custom chips) has quietly doubled to more than US$20 billion in annual revenue in 2026, up from US$10 billion just six months earlier. CEO Andy Jassy has suggested the division could be worth US$50 billion as a standalone business.

Amazon's Trainium2 chip already delivers roughly 30% better price-performance than comparable Nvidia GPUs. Demand has been so strong that Amazon reportedly turned away two large customers who tried to lock up all of its Graviton capacity for 2026.

Anthropic and OpenAI are both increasing their use of Amazon's custom chips. Anthropic alone has committed to spending US$100 billion on AWS over 10 years.


What This Means for Asian Business Leaders

JPMorgan projects that custom chips from Google, Amazon, Meta, and OpenAI will account for 45% of the AI chip market by 2028, up from 37% today. Nvidia is not disappearing. But its pricing power and its grip on AI infrastructure are both weakening.

For enterprises across Asia planning AI deployments, the strategic lesson is straightforward. Single-supplier dependency in AI infrastructure is a risk. The world's most capable AI operators have already moved beyond it. The question for your organization is when, not whether, you will do the same.
