MatX Secures $500 Million to Build AI Chip Accelerators Promising 10x GPU Performance
MatX, a semiconductor startup designing custom accelerators for large language model training, closes a $500 million Series B led by Jane Street — claiming its chips deliver roughly 10x the performance of current GPUs for transformer workloads.
MatX, a semiconductor startup designing custom accelerators optimized for large language model training, has closed a $500 million Series B round led by Jane Street and Situational Awareness. The company claims its chips deliver roughly 10x the performance of current GPUs on transformer-based workloads, a claim that has attracted significant attention from AI labs seeking alternatives to Nvidia's dominant hardware.
Architecture and Performance Claims
MatX's chip architecture is purpose-built for the computational patterns of transformer models, the architecture underlying virtually all modern large language models. By optimizing for the attention mechanism, the matrix-multiplication structure, and the memory access patterns specific to transformers, MatX claims an order-of-magnitude improvement in performance per watt over general-purpose GPUs, which must support a broader range of workloads. These performance claims have not been independently verified at scale, though the company has shared benchmark results with investors and early-access partners.
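To see why such specialization is plausible, note that the core of a transformer layer reduces to a handful of large, regular matrix multiplications. The sketch below (illustrative only, not MatX's design; all shapes and names are hypothetical) shows single-head attention as plain NumPy matmuls, the pattern a transformer-specific chip would target:

```python
import numpy as np

# Hypothetical shapes for illustration: 128 tokens, 64-dim model.
rng = np.random.default_rng(0)
seq_len, d_model = 128, 64
x = rng.standard_normal((seq_len, d_model))
w_q = rng.standard_normal((d_model, d_model))
w_k = rng.standard_normal((d_model, d_model))
w_v = rng.standard_normal((d_model, d_model))

# Q/K/V projections: three dense matrix multiplications.
q, k, v = x @ w_q, x @ w_k, x @ w_v

# Attention: two more matmuls wrapped around a row-wise softmax.
scores = q @ k.T / np.sqrt(d_model)
weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
weights /= weights.sum(axis=-1, keepdims=True)
out = weights @ v  # shape (seq_len, d_model)
```

Five of the six operations here are matmuls; a chip that fixes this dataflow in silicon can drop the generality a GPU must carry for graphics, scientific computing, and other AI workloads.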
Market Opportunity
The timing of the raise reflects the enormous demand for AI training compute. Training a frontier language model currently costs between $500 million and $2 billion in compute alone, with the majority of that cost going to Nvidia GPU rentals. A chip that delivers 10x the performance on transformer workloads could dramatically reduce training costs, opening frontier AI development to a larger number of organizations. Even a more modest improvement of 2x or 3x, rather than the claimed 10x, would represent billions of dollars in annual savings for the AI industry.
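The back-of-the-envelope arithmetic behind that claim can be made explicit. Assuming, for simplicity, that compute cost scales inversely with chip performance (an idealization; real savings depend on chip pricing, utilization, and software maturity), the per-run savings at the article's cost figures look like this:

```python
# Training-run cost figures from the article: $500M to $2B per frontier model.
run_costs = {"low-end run": 500e6, "high-end run": 2e9}

for label, cost in run_costs.items():
    for speedup in (2, 3, 10):
        # Idealized assumption: cost drops in proportion to the speedup.
        savings = cost * (1 - 1 / speedup)
        print(f"{label}, {speedup}x speedup: ~${savings / 1e9:.2f}B saved")
```

At the high end, even a 2x speedup saves roughly $1 billion per training run, which is why investors are willing to fund the bet before the 10x claim is independently verified.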
Competitive Landscape
MatX joins a crowded field of AI chip startups, including Cerebras, Groq, SambaNova, and Tenstorrent, all targeting some segment of the AI compute market. MatX differentiates itself through its degree of specialization: while most competitors design chips for broad AI workloads, MatX is narrowly focused on transformer training. This specialization is a bet that transformers will remain the dominant architecture for years to come, a reasonable assumption given current trends but one that carries risk if fundamentally different architectures emerge.