© 2026 NervNow™. All rights reserved.

Meta Partners With AWS to Run Agentic AI on Graviton Chips
Meta signs a multibillion-dollar deal with AWS to deploy tens of millions of Graviton5 cores, powering its next generation of agentic AI workloads.

In a move that underscores the rapidly shifting landscape of artificial intelligence infrastructure, Meta has announced an agreement with Amazon Web Services (AWS) to bring tens of millions of AWS Graviton cores into its compute portfolio, making it one of the largest Graviton customers in the world.
The multibillion-dollar agreement represents far more than a routine chip procurement. Rather, it signals a fundamental rethinking of what AI infrastructure needs to look like in the age of intelligent, autonomous systems.
For years, the AI industry has revolved around GPUs, the powerful processors used to train massive language models. However, as AI systems grow more autonomous, that calculus is beginning to change. While GPUs remain essential for training large models, the rise of agentic AI is creating massive demand for CPU-intensive workloads, including real-time reasoning, code generation, search, and the orchestration of multi-step tasks.
In other words, agentic AI systems that can plan, reason, and execute complex tasks on their own depend heavily on CPUs, not just GPUs. As a result, agentic AI is becoming almost as big a CPU story as a GPU story.
AWS launched its Graviton5 CPUs in December 2025, claiming the hardware has an efficient design that reduces inter-core communication latency by up to 33 percent. Graviton5 also has a cache five times larger than the previous generation, enabling customers to scale up workloads while reducing infrastructure costs.
Moreover, the Graviton5 chip features 192 cores, which means faster data processing with greater bandwidth, key requirements for agentic AI systems. Additionally, the chips support Elastic Fabric Adapter technology, which enables low-latency communication across large clusters of servers.
It is also worth noting that AWS Graviton is an ARM-based CPU, a central processing unit that handles general computing tasks, not a GPU. This distinction matters, as it highlights how the infrastructure demands of AI are diversifying well beyond traditional GPU-centric approaches.
The deployment starts with tens of millions of Graviton cores, with the flexibility to expand as Meta’s AI capabilities grow. To put that in perspective, each chip has 192 cores, meaning if Meta were to deploy exactly 10 million cores, that would represent just over 52,000 chips.
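That cores-to-chips arithmetic can be sketched as a quick back-of-the-envelope calculation. The 10-million-core figure is illustrative, as in the article; only the tens-of-millions scale has been confirmed:

```python
# Back-of-the-envelope: how many 192-core Graviton5 chips
# a given core deployment would require.
CORES_PER_CHIP = 192  # Graviton5 cores per chip

def chips_needed(total_cores: int) -> int:
    """Ceiling division: a partial chip still counts as a whole chip."""
    return -(-total_cores // CORES_PER_CHIP)

# 10 million cores works out to just over 52,000 chips.
print(chips_needed(10_000_000))  # 52084
```

At tens of millions of cores, the same arithmetic implies deployments in the low hundreds of thousands of chips.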
Additionally, the deal will last at least three years, and it follows a combined $48 billion in AI infrastructure commitments Meta made in recent weeks with CoreWeave and Nebius. Taken together, these moves paint a picture of a company investing aggressively and strategically in compute diversity.
Meta’s Head of Infrastructure, Santosh Janardhan, was direct about the rationale behind the agreement. He stated that as Meta scales the infrastructure behind its AI ambitions, diversifying compute sources is a strategic imperative, and that expanding to Graviton allows the company to run the CPU-intensive workloads behind agentic AI with the performance and efficiency needed at its scale.
On the Amazon side, Nafea Bshara, Vice President and Distinguished Engineer at Amazon, emphasized that the deal is not just about chips, but about giving customers the infrastructure foundation, as well as data and inference services, to build AI that understands, anticipates, and scales efficiently to billions of people worldwide.
Beyond the technical details, this agreement carries significant strategic weight for both parties. The Meta deal allows Amazon to showcase a huge AI customer as a proof point for its homegrown CPUs, chips that compete with Nvidia's new Vera CPU, which is also ARM-based and designed to handle agentic AI workloads.
Meanwhile, for Meta, the agreement reflects the principle behind its portfolio approach to infrastructure: no single chip architecture can efficiently serve every workload. Consequently, the company is building a more resilient and diversified compute foundation to support its ambitious AI roadmap.
Ultimately, this deal reflects a broader industry trend. As AI products evolve, the industry is moving beyond model training alone. Newer agentic AI systems are expected to handle planning, coding, reasoning, and task execution in real time, creating heavy demand for CPUs alongside GPUs.
Therefore, what Meta and AWS have announced is not merely a chip deal; it is a directional statement about where AI infrastructure is heading next.
Disclaimer: This news is based on publicly available information. NervNow has not independently verified any claims.