
MiniMax Releases M2.5: Open-Source Model Matches Frontier Performance at 1/20th the Cost of Claude Opus

Shanghai-based MiniMax releases M2.5 under a modified MIT license, a 230-billion-parameter MoE model that achieves 80.2% on SWE-Bench Verified and runs at roughly one-twentieth the cost of Claude Opus 4.6.


TechDrop Editorial


On February 12, 2026, MiniMax, a Shanghai-based AI startup, released M2.5, an open-source model that achieves near-frontier performance at approximately one-twentieth the cost of Anthropic's Claude Opus 4.6. The model is available on Hugging Face under a modified MIT license in two variants: M2.5 and M2.5 Lightning.

Performance at a Fraction of the Cost

M2.5 is a mixture-of-experts model with 230 billion total parameters and 10 billion active parameters. On the SWE-Bench Verified benchmark — the standard evaluation for AI-assisted software engineering — M2.5 achieves 80.2%, placing it within range of closed-source frontier models. On Multi-SWE-Bench, which tests multi-repository code understanding, it scores 51.3%. On BrowseComp, a web navigation benchmark, it reaches 76.3%.

The cost differential is the headline number: M2.5 runs at approximately $1 per hour at 100 tokens per second — roughly one-twentieth the cost of Claude Opus 4.6 for equivalent workloads. The 37% speed improvement over the previous M2.1 model on SWE-Bench Verified indicates that MiniMax is improving both capability and efficiency simultaneously. The model was trained across more than 10 languages in 200,000+ real-world environments, suggesting a diverse training distribution.
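The per-token arithmetic behind that headline figure can be sketched as follows. This is a back-of-the-envelope illustration only, assuming the $1-per-hour and 100-tokens-per-second numbers above hold at sustained throughput; actual serving costs depend on hardware, batching, and utilization.

```python
def cost_per_million_tokens(dollars_per_hour: float, tokens_per_second: float) -> float:
    """Convert an hourly serving cost and throughput into cost per 1M tokens."""
    tokens_per_hour = tokens_per_second * 3600  # 100 tok/s -> 360,000 tokens/hour
    return dollars_per_hour / tokens_per_hour * 1_000_000

# At $1/hour and 100 tokens/second:
print(round(cost_per_million_tokens(1.0, 100.0), 2))  # -> 2.78 (dollars per million tokens)
```

Roughly $2.78 per million tokens, which is the scale at which the one-twentieth comparison against proprietary API pricing is being made.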

Open-Source Economics

The release under a modified MIT license means that any organization can download, run, and modify M2.5 without licensing fees. For enterprises that currently pay API fees to access frontier-class models, the availability of a competitive open-source alternative at a fraction of the operating cost creates immediate economic pressure on proprietary pricing. The gap between the best closed-source models and the best open-source models continues to narrow, and each narrowing makes the premium that proprietary API vendors can charge harder to justify.

The Chinese Open-Source AI Wave

M2.5's release comes alongside Alibaba's Qwen 3.5 and ahead of DeepSeek V4, continuing a pattern in which Chinese AI companies release capable open-source models at rapid intervals. The combined effect is a growing library of freely available models that cover coding, reasoning, and multimodal tasks at performance levels that were exclusive to proprietary models just months ago. For developers and enterprises building AI-powered products, the practical implication is that the cost of accessing frontier-class AI capabilities is falling faster than most pricing forecasts anticipated.
