Broadcom and Google Expand AI Chip Deal to Supercharge Anthropic’s Growth

Broadcom and Google have signed a landmark agreement to supply Anthropic with 3.5 gigawatts of next-generation TPU computing capacity, starting in 2027.

Broadcom and Google have signed a sweeping new agreement to supply Anthropic with multiple gigawatts of next-generation AI computing capacity, a deal that not only cements the trio’s growing interdependence, but also signals just how aggressively the AI infrastructure race is escalating in 2026.

The announcement, confirmed on April 6, 2026, via a regulatory filing and Anthropic’s official blog, expands an already significant relationship between the three companies. According to Anthropic’s CFO Krishna Rao, the partnership will bring approximately 3.5 gigawatts of TPU-based compute online starting in 2027, making it the largest single compute commitment Anthropic has ever made.

To begin with, it is important to understand the structure of this deal. Broadcom, which has served as Google’s primary custom chip design partner since 2016, will manufacture and deliver Google’s next-generation Tensor Processing Units (TPUs), specifically the seventh-generation Ironwood chips. These will, in turn, be made available to Anthropic through Google’s cloud infrastructure.

Furthermore, Broadcom confirmed in a securities filing that it has formalised a supply assurance agreement with Google that runs until 2031, solidifying the manufacturing pipeline that underpins Anthropic’s access to capacity. As a result, Broadcom shares climbed roughly 3% in after-hours trading following Monday’s announcement.

It is also worth noting that this latest agreement builds directly on an earlier deal. Back in October 2025, Anthropic and Google announced a landmark cloud partnership valued at tens of billions of dollars that gave Anthropic access to up to one million Google TPU chips and more than 1 gigawatt of compute capacity coming online in 2026. The new agreement significantly extends that commitment into 2027 and beyond.

Meanwhile, the financial context surrounding this deal is equally striking. Anthropic simultaneously disclosed that its annualised revenue run rate has now surpassed $30 billion, up sharply from approximately $9 billion at the close of 2025. That represents more than a threefold increase in just a few months.

Additionally, the company revealed that the number of enterprise customers spending over $1 million per year on Claude has more than doubled, from 500 in February 2026 to over 1,000 today. This rapid acceleration in enterprise adoption is, in large part, what is driving the urgent need for more compute capacity.

At this point, a reasonable question is: why not simply rely on Nvidia’s widely used GPUs? The answer lies in efficiency. Google’s TPUs are application-specific integrated circuits (ASICs) purpose-built for the matrix computations that underpin large language models. Unlike Nvidia’s general-purpose GPUs, ASICs are engineered for specific tasks, which translates into better performance per dollar for targeted workloads like training and inference.

Importantly, Broadcom’s role is evolving beyond that of a component supplier. Rather than simply manufacturing chips for Google to deploy, Broadcom is now delivering fully assembled Ironwood Racks: complete rack-level AI systems that Anthropic can deploy directly into its data centres. This marks a strategic shift in how Broadcom positions itself within the AI supply chain.

Despite the scale of this Google-Broadcom agreement, Anthropic has been careful to emphasise that it is not abandoning its multi-cloud approach. The company continues to train and run Claude across AWS Trainium chips, Google TPUs, and Nvidia GPUs, matching workloads to the hardware best suited for each task.

Taken together, these moves suggest that Broadcom is rapidly consolidating its position as the infrastructure backbone of the generative AI era: not merely a chip designer, but a full-stack delivery partner for the world’s largest AI labs.

One final detail worth highlighting is the geographic dimension of this partnership. Anthropic confirmed that the vast majority of new compute will be sited in the United States, positioning this agreement as a direct extension of its November 2025 pledge to invest $50 billion in American AI infrastructure. As regulatory and geopolitical pressures mount around AI supply chains globally, the decision to anchor capacity domestically carries both strategic and symbolic weight.

In summary, the Broadcom-Google-Anthropic agreement is not simply a procurement announcement. Rather, it represents a strategic alignment between three of the most consequential players in AI infrastructure, and a clear signal that demand for frontier AI compute is accelerating faster than almost anyone anticipated even six months ago. For Anthropic, it means the capacity to sustain its extraordinary growth trajectory. For Broadcom and Google, it means a deepening lock-in with the company whose Claude models are increasingly becoming the enterprise AI standard.

“This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure. We are making our most significant compute commitment to date to keep pace with our unprecedented growth.”
Krishna Rao, CFO of Anthropic

NN Desk
