© 2026 NervNow™. All rights reserved.

MatX Bags Major Funding in Bid to Rival Nvidia
Former Google TPU engineers are building a purpose-designed processor aimed at accelerating large language models.

AI chip startup MatX has secured fresh capital to advance development of its custom processor built specifically for large language model workloads, the company announced in an official statement.
The $500 million Series B round was led by Jane Street and Situational Awareness LP, an investment fund founded by former OpenAI researcher Leopold Aschenbrenner. Participants include Spark Capital, Daniel Gross and Nat Friedman’s fund, Patrick Collison and John Collison, Triatomic Capital, Harpoon Ventures, Andrej Karpathy, Dwarkesh Patel, Alchip Technologies, and Marvell Technology, among others.
Founded in 2023 by former Google TPU engineers Reiner Pope and Mike Gunter, MatX is focused on building hardware tailored for AI training and inference at scale. Pope previously led AI software development for Google’s tensor processing units, while Gunter worked on TPU hardware design before launching the startup.
The company’s flagship chip, MatX One, is based on what it describes as a splittable systolic array architecture. MatX says the design targets high throughput for large language models while maintaining low latency, pairing SRAM-like latency characteristics with high-bandwidth memory (HBM) to support longer-context workloads.
The new funding will help complete development and scale manufacturing in partnership with Taiwan Semiconductor Manufacturing Co. (TSMC), with shipments planned for 2027.
The Series B follows a roughly $100 million Series A raised in 2024. The company has not disclosed its latest valuation.
MatX is entering a competitive AI hardware landscape where Nvidia remains the dominant supplier of GPUs for AI training and inference. However, sustained demand for AI compute has created room for startups developing specialized accelerators built specifically for large language models.