{"id":4452,"date":"2026-02-25T09:01:42","date_gmt":"2026-02-25T09:01:42","guid":{"rendered":"https:\/\/nervnow.com\/?p=4452"},"modified":"2026-02-25T09:06:16","modified_gmt":"2026-02-25T09:06:16","slug":"matx-bags-major-funding-in-bid-to-rival-nvidia","status":"publish","type":"post","link":"https:\/\/nervnow.com\/ro\/matx-bags-major-funding-in-bid-to-rival-nvidia\/","title":{"rendered":"MatX Bags Major Funding in Bid to Rival Nvidia"},"content":{"rendered":"<p><em><strong>Former Google TPU engineers are building a purpose-designed processor aimed at accelerating large language models.<\/strong><\/em><\/p>\n\n\n\n<p>AI chip startup MatX has secured fresh capital to advance development of its custom processor designed specifically for large language model workloads, the company shared in an <a href=\"https:\/\/matx.com\/research\/series_b\" target=\"_blank\" rel=\"noopener\" title=\"\">official statement<\/a>.<\/p>\n\n\n\n<p>The $500 million Series B round was led by Jane Street and Situational Awareness LP, an investment fund founded by former OpenAI researcher Leopold Aschenbrenner. Participants include Spark Capital, Daniel Gross and Nat Friedman\u2019s fund, Patrick Collison and John Collison, Triatomic Capital, Harpoon Ventures, Andrej Karpathy, Dwarkesh Patel, Alchip Technologies, and Marvell Technology, among others.<\/p>\n\n\n\n<p>Founded in 2023 by former Google TPU engineers Reiner Pope and Mike Gunter, MatX is focused on building hardware tailored for AI training and inference at scale. Pope previously led AI software development for Google\u2019s tensor processing units, while Gunter worked on TPU hardware design before launching the startup.<\/p>\n\n\n\n<p>The company\u2019s flagship chip, MatX One, is based on what it describes as a splittable systolic array architecture. MatX says the design targets high throughput for large language models while maintaining low latency. 
Its architecture combines SRAM-like latency characteristics with high-bandwidth memory (HBM) to support longer-context workloads.<\/p>\n\n\n\n<p>The new funding will help complete development and scale manufacturing in partnership with Taiwan Semiconductor Manufacturing Co. (TSMC), with shipments planned for 2027.<\/p>\n\n\n\n<p>The Series B follows a roughly $100 million Series A raised in 2024. The company has not disclosed its latest valuation.<\/p>\n\n\n\n<p>MatX is entering a competitive AI hardware landscape where Nvidia remains the dominant supplier of GPUs for AI training and inference. However, sustained demand for AI compute has created room for startups developing specialized accelerators built specifically for large language models.<\/p>\n\n\n\n<p>Related:<br><a href=\"https:\/\/nervnow.com\/ro\/ai-hiring-startup-hirebound-raises-2m\/\">AI Hiring Startup HireBound Raises $2M<\/a><br><a href=\"https:\/\/nervnow.com\/ro\/nasdaq-rebounds-as-meta-teams-with-amd\/\">Nasdaq Rebounds as Meta Teams With AMD<\/a><br><a href=\"https:\/\/nervnow.com\/ro\/ibm-hits-25-year-low-in-single-day-drop-on-ai-fears\/\">IBM Hits 25-Year Low in Single-Day Drop on AI Fears<\/a><\/p>","protected":false},"excerpt":{"rendered":"<p>Former Google TPU engineers are building a purpose-designed processor aimed at accelerating large language 
models.<\/p>","protected":false},"author":2,"featured_media":4460,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"_gspb_post_css":"","om_disable_all_campaigns":false,"_monsterinsights_skip_tracking":false,"_monsterinsights_sitenote_active":false,"_monsterinsights_sitenote_note":"","_monsterinsights_sitenote_category":0,"footnotes":""},"categories":[103,94],"tags":[257,196,256],"class_list":["post-4452","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-ai-now","category-news","tag-ai-news","tag-global","tag-matx"],"blocksy_meta":[],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts\/4452","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/comments?post=4452"}],"version-history":[{"count":1,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts\/4452\/revisions"}],"predecessor-version":[{"id":4461,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/posts\/4452\/revisions\/4461"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/media\/4460"}],"wp:attachment":[{"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/media?parent=4452"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/categories?post=4452"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/nervnow.com\/ro\/wp-json\/wp\/v2\/tags?post=4452"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}