{"id":6403,"date":"2026-04-20T09:36:00","date_gmt":"2026-04-20T04:06:00","guid":{"rendered":"https:\/\/nervnow.com\/?p=6403"},"modified":"2026-04-20T09:36:01","modified_gmt":"2026-04-20T04:06:01","slug":"google-in-talks-with-marvell-to-design-two-new-ai-inference-chips","status":"publish","type":"post","link":"https:\/\/nervnow.com\/ro\/google-in-talks-with-marvell-to-design-two-new-ai-inference-chips\/","title":{"rendered":"Google in Talks With Marvell to Design Two New AI Inference Chips"},"content":{"rendered":"<p><strong><em>Google unit seeks custom memory processor and next-generation TPU; companies aim to finalize memory chip design as soon as 2027, before test production begins<\/em><\/strong><\/p>\n\n\n\n<p>Alphabet&#8217;s Google is in discussions with Marvell Technology to co-develop two new artificial intelligence chips targeting inference, the work of running AI models rather than training them, according to people familiar with the matter cited by The Information. The talks signal Google&#8217;s push to expand its custom silicon strategy amid intensifying competition with Nvidia in the AI accelerator market.<\/p>\n\n\n\n<p>One chip under discussion is a memory processing unit designed to work alongside Google&#8217;s existing tensor processing units, or TPUs. The second is a next-generation TPU built specifically for running, not building, AI models. The companies aim to lock down the memory chip&#8217;s design as soon as next year before moving to test production, per the report.&nbsp;<\/p>\n\n\n\n<p><strong>ALSO READ: <\/strong><a href=\"https:\/\/nervnow.com\/ro\/cerebras-systems-moves-ahead-with-ipo-after-2024-exit\/\" target=\"_blank\" rel=\"noopener\" title=\"\"><strong>Cerebras Systems Moves Ahead with IPO After 2024 Exit<\/strong><\/a><\/p>\n\n\n\n<p>Google and Marvell did not immediately respond to requests for comment. 
Reuters reported it could not independently verify the discussions.&nbsp;<\/p>\n\n\n\n<p>The move comes as Google works to position its TPU line as a credible alternative to Nvidia&#8217;s graphics processing units, which have dominated AI infrastructure spending for the past several years. TPU revenue has become a meaningful contributor to Google Cloud&#8217;s growth. Cloud computing sales climbed 47% to more than $16 billion in the fourth quarter of 2025, and the unit&#8217;s backlog expanded 55% from the prior quarter to $240 billion, per Alphabet&#8217;s earnings disclosure.&nbsp;<\/p>\n\n\n\n<p>Whether the talks yield a formal agreement remains uncertain; in the highly competitive custom silicon market, partnerships frequently evolve or fall apart before tape-out.<\/p>\n\n\n\n<p>Google already works with Broadcom on TPU development. Broadcom disclosed a long-term agreement with Google to extend TPU collaboration through 2031, and separately announced an expanded arrangement with AI company Anthropic to provide TPU computing access via Google Cloud infrastructure.&nbsp;<\/p>\n\n\n\n<p>Marvell, for its part, has built a growing custom silicon business serving hyperscale cloud customers. The company&#8217;s stock has advanced roughly 56% in 2026, reflecting investor confidence in demand for tailored AI silicon beyond Nvidia&#8217;s off-the-shelf GPU offerings.&nbsp;<\/p>\n\n\n\n<p>Separately, Wells Fargo has estimated that its custom chip intellectual property licensing could generate more than $10 billion in high-margin fees across 2026 and 2027, though that projection remains a Wall Street estimate and has not been confirmed by Alphabet.\u00a0<\/p>\n\n\n\n<p>Google is hosting its annual Google Cloud Next conference this week, beginning Wednesday, an event where it has historically used TPU announcements to court enterprise cloud customers. First-quarter earnings are scheduled for April 29. 
Alphabet has forecast 2026 capital expenditures of $175 billion to $185 billion, roughly double last year&#8217;s pace, a figure that has drawn scrutiny from investors over return-on-investment timelines.\u00a0<\/p>\n\n\n\n<p>The inference chip push reflects a broader industry shift. As foundation model training cycles lengthen and stabilize, cloud providers are turning their attention to the economics of serving those models at scale, a workload that demands efficient, high-throughput chips optimized for speed and memory bandwidth rather than raw training power.<\/p>\n\n\n\n<p>Nvidia has not stood still. The company is developing new inference-focused silicon and has incorporated technology from AI chip startup Groq into its roadmap, according to industry reports.<\/p>\n\n\n\n<p class=\"has-white-color has-palette-color-9-background-color has-text-color has-background has-link-color wp-elements-aa82dc399ed5e94ab97aec7e172de36a\"><strong><em>Disclaimer: This news is based on publicly <\/em><span style=\"margin: 0px;padding: 0px\"><a href=\"https:\/\/www.reuters.com\/business\/google-talks-with-marvell-build-new-ai-chips-inference-information-reports-2026-04-19\/\" target=\"_blank\" rel=\"noopener\" title=\"\"><em>available\u00a0<\/em><\/a><em>information<\/em><\/span><em>. 
NervNow has not independently verified any claims<\/em>.<br><br>MORE ON GOOGLE\u00a0<br><a href=\"https:\/\/nervnow.com\/ro\/google-releases-offline-ai-dictation-app-for-ios-with-no-subscription-fee\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Google Releases Offline AI Dictation App for iOS With No Subscription Fee<\/a><br><a href=\"https:\/\/nervnow.com\/ro\/google-expands-gemma-family-with-new-ai-models\/\" target=\"_blank\" rel=\"noopener\" title=\"\">Google Expands Gemma Family With New AI Models<\/a><\/strong><\/p>","protected":false},"excerpt":{"rendered":"<p>Google unit seeks custom memory processor and next-generation TPU; companies aim to finalize memory chip design as soon as 2027, before test production begins<\/p>","protected":false},"author":2,"featured_media":6404,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[103,94],"tags":[196,267],"class_list":["post-6403","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-now","category-news","tag-global","tag-google"],"blocksy_meta":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts\/6403","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/comments?post=6403"}],"version-history":[{"count":2,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts\/6403\/revisions"}],"predecessor-version":[{"id":6409,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts\/6403\/revisions\/6409"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/media\/6404"}],"wp:attachment":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/media?parent=6403"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/categories?post=6403"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/tags?post=6403"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}