Physical Address
304 North Cardinal St.
Dorchester Center, MA 02124

Even amid talk of an impending AI bubble, Amazon is the latest company to benefit from the AI arms race. Meta just signed a multibillion-dollar deal with Amazon to deploy AWS Graviton processors across its 32 data centers over the next three years. While Amazon did not disclose the full value of the deal, we have seen companies spend big to keep their AI ambitions growing.
Recently, Meta also signed a six-year contract worth $10 billion with Google Cloud, while OpenAI agreed to spend $20 billion with Cerebras over the next three years to use servers powered by the company's hardware, with the chips set to roll out over that period.
Graviton processors support cloud workloads running on Amazon Elastic Compute Cloud (Amazon EC2), and the company has long said they offer the best price-performance for cloud workloads.
What’s interesting here is that AWS Graviton is an ARM-based CPU, not a GPU. CPU refers to a computer’s central processing unit, which is the brain of the computer, while GPU is its graphics processing unit, which is typically used to train AI models.
“As we scale the infrastructure behind Meta’s AI ambitions, diversifying our compute resources is a strategic imperative,” said Santosh Janardhan, head of infrastructure at Meta, in a statement. “AWS has been a trusted cloud partner for years, and expanding into Graviton allows us to run CPU-intensive workloads behind agentic AI with the performance and efficiency we need at our scale.”
Meta has been a long-time AWS customer, so this chip deal comes as no surprise. What’s notable is that it includes a central processing unit (CPU) chipset instead of a graphics processing unit (GPU).
Typically, AI models are trained on GPUs. Once a model is trained, however, AI agents can run many of their workloads, such as writing and executing code, on CPUs.
Graviton chips are designed to be efficient at AI agent tasks. According to Amazon, the Graviton5 chipset has 192 cores and five times more cache than the previous generation, cutting communication latency between cores by 33%. It should also be more energy efficient, delivering 25% better performance than previous generations.
“It’s not just about chips, it’s about giving customers the infrastructure foundation, as well as data and inference services, to build AI that understands, predicts, and efficiently scales to billions of people around the world,” said Nafea Bshara, a vice president at AWS, in a statement.
Part of the impetus for this may also be that, earlier this month, Anthropic signed a deal to spend $100 billion on AWS to run Claude workloads on Amazon’s Trainium AI chips, while Amazon agreed to invest $5 billion back into Anthropic. Anthropic may have effectively locked up Amazon’s inventory of Trainium2 through Trainium4 chips, and the company also has the option to purchase future Amazon chips when they become available.
In addition to working with Amazon, Meta is developing its own silicon in-house, with work progressing on four iterations of its MTIA AI chip and an expanded partnership with Broadcom to design and build the chips. Meta also agreed to spend billions on chips and AI hardware from Nvidia and AMD, on top of another multibillion-dollar deal to use tensor processing units from Alphabet.
A Meta representative declined to share specific workloads but said the deal will support the company’s AI efforts, including Meta Superintelligence Labs (MSL).