© 2026 NervNow™. All rights reserved.

Google in Talks With Marvell to Design Two New AI Inference Chips
Google unit seeks custom memory processor and next-generation TPU; companies aim to finalize memory chip design as soon as 2027, before test production begins

Alphabet’s Google is in discussions with Marvell Technology to co-develop two new artificial intelligence chips targeting inference workloads, the task of running trained models rather than training them, according to people familiar with the matter cited by The Information. The talks signal Google’s push to expand its custom silicon strategy amid intensifying competition with Nvidia in the AI accelerator market.
One chip under discussion is a memory processing unit designed to work alongside Google’s existing tensor processing units, or TPUs. The second is a next-generation TPU built specifically for running, not building, AI models. The companies aim to lock down the memory chip’s design as soon as next year before moving to test production, per the report.
Google and Marvell did not immediately respond to requests for comment. Reuters reported it could not independently verify the discussions.
The move comes as Google works to position its TPU line as a credible alternative to Nvidia’s graphics processing units, which have dominated AI infrastructure spending for the past several years. TPU revenue has become a meaningful contributor to Google Cloud’s growth. Cloud computing sales climbed 47% to more than $16 billion in the fourth quarter of 2025, and the unit’s backlog expanded 55% from the prior quarter to $240 billion, per Alphabet’s earnings disclosure.
Whether the talks produce a formal agreement remains uncertain; in the highly competitive custom silicon market, partnerships frequently shift or dissolve before a chip reaches tape-out.
Google already works with Broadcom on TPU development. Broadcom disclosed a long-term agreement with Google to extend TPU collaboration through 2031, and separately announced an expanded arrangement with AI company Anthropic to provide TPU computing access via Google Cloud infrastructure.
Marvell, for its part, has built a growing custom silicon business serving hyperscale cloud customers. The company’s stock has advanced roughly 56% in 2026, reflecting investor confidence in demand for tailored AI silicon beyond Nvidia’s off-the-shelf GPU offerings.
Separately, Wells Fargo has estimated that Google’s custom chip intellectual property licensing could generate more than $10 billion in high-margin fees across 2026 and 2027, though that projection remains a Wall Street estimate and has not been confirmed by Alphabet.
Google is hosting its annual Google Cloud Next conference this week, beginning Wednesday; the company has historically used the event’s TPU announcements to court enterprise cloud customers. Alphabet’s first-quarter earnings are scheduled for April 29. The company has forecast 2026 capital expenditures of $175 billion to $185 billion, roughly double last year’s pace, a figure that has drawn investor scrutiny over return-on-investment timelines.
The inference chip push reflects a broader industry shift. As foundation model training cycles lengthen and stabilize, cloud providers are turning attention to the economics of serving those models at scale, a workload that demands efficient, high-throughput chips optimized for speed and memory bandwidth rather than raw training power.
Nvidia has not stood still. The company is developing new inference-focused silicon and has incorporated technology from AI chip startup Groq into its roadmap, according to industry reports.
Disclaimer: This news is based on publicly available information. NervNow has not independently verified any claims.