
Callosum Raises $10.25 Million Seed to Orchestrate AI Workloads Across Chip Architectures

London-based Callosum, co-founded by Cambridge neuroscientists, raises $10.25 million in seed funding led by Plural to build software that intelligently distributes AI workloads across Nvidia, AMD, Intel, and custom accelerator chips.


TechDrop Editorial


Callosum, a London-based startup co-founded by Cambridge neuroscientists, has raised $10.25 million in seed funding led by Plural to build software that intelligently orchestrates AI workloads across diverse accelerator chip architectures — addressing the growing challenge of running AI models efficiently across Nvidia, AMD, Intel, and custom silicon.

The Problem

As the AI chip market fragments beyond Nvidia's dominance, organizations increasingly deploy workloads across multiple accelerator types. Different chips excel at different tasks: Nvidia GPUs remain dominant for training, but AMD and Intel accelerators offer cost advantages for inference, while custom ASICs from companies like Google (TPUs) and Amazon (Trainium) are optimized for specific model architectures. Currently, optimizing workload placement across these diverse architectures requires significant manual engineering, creating a bottleneck that Callosum aims to automate.

The Approach

Callosum's software sits between the AI application layer and the hardware layer, analyzing workload characteristics — model architecture, batch size, latency requirements, cost constraints — and automatically routing computations to the most efficient available accelerator. The system learns from execution telemetry, continuously refining its placement decisions as it observes real-world performance across different hardware configurations.

Market Context

The funding comes as the AI chip market is projected to exceed $200 billion by 2027, with Nvidia's market share under increasing pressure from AMD, Intel, and a wave of AI chip startups. Organizations that can efficiently utilize multiple chip architectures stand to reduce their AI compute costs by 30-50% compared to single-vendor deployments — a compelling economic incentive that creates a natural market for orchestration software like Callosum's.
