Ayar Labs and Wiwynn Unveil 1,024-GPU Photonic Rack System at OFC 2026
Silicon photonics startup Ayar Labs and server manufacturer Wiwynn have unveiled a rack-scale reference platform at the Optical Fiber Communication Conference that connects over 1,024 GPUs using optical interconnects instead of copper. The 100% liquid-cooled system promises dramatically lower power consumption while enabling the massive GPU clusters needed for next-generation AI training.
At the Optical Fiber Communication Conference in Los Angeles, silicon photonics startup Ayar Labs and server manufacturer Wiwynn are showcasing what could be the future of AI data center architecture: a rack-scale reference platform that connects over 1,024 GPUs using optical interconnects instead of traditional copper cables. The demonstration addresses one of the most pressing bottlenecks in AI infrastructure — the power, cooling, and bandwidth limitations of electrical interconnects at scale.
Why Optics Matter
Modern AI training clusters require thousands of GPUs to communicate with one another at extraordinary speeds. Current copper-based interconnects work well within a single server, but they become increasingly problematic at rack and cluster scale: they consume enormous power, generate significant heat, and impose strict distance limits that constrain data center design. A typical copper-based AI rack consumes 600 kilowatts or more, with a substantial portion of that power going to the interconnects themselves rather than to useful computation.
The Ayar Labs and Wiwynn reference design uses co-packaged optics (CPO), optical engines integrated directly onto the GPU package, to replace copper links with light. The result is a system that draws 100 to 200 kilowatts per rack, roughly a sixth to a third of the power of equivalent copper-based designs, while supporting higher bandwidth over longer distances.
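For a rough sense of scale, the back-of-envelope sketch below plugs in the figures quoted above. The 30% interconnect share is an illustrative assumption for this sketch, not a number published by Ayar Labs or Wiwynn.

```python
# Back-of-envelope power comparison using the figures quoted in the article.

COPPER_RACK_KW = 600           # "600 kilowatts or more" for a copper-based AI rack
OPTICAL_RACK_KW = (100, 200)   # quoted range for the co-packaged-optics design

for optical_kw in OPTICAL_RACK_KW:
    ratio = optical_kw / COPPER_RACK_KW
    print(f"{optical_kw} kW optical rack: {ratio:.0%} of the copper baseline, "
          f"saving {COPPER_RACK_KW - optical_kw} kW per rack")

# Illustrative assumption (not a vendor figure): if 30% of a copper rack's
# power goes to interconnects, that is 180 kW per rack that optics can target.
INTERCONNECT_SHARE = 0.30
print(f"assumed copper interconnect draw: "
      f"{COPPER_RACK_KW * INTERCONNECT_SHARE:.0f} kW per rack")
```

At the low end of the quoted range, a single optical rack would save roughly 500 kilowatts against the copper baseline, which is why the savings compound quickly across a multi-rack, 1,024-GPU deployment.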
Technical Details
The system uses Ayar Labs' TeraPHY optical engines powered by SuperNova remote light sources. TeraPHY converts electrical signals from the GPU into optical signals that travel over standard fiber-optic cables, enabling rack-to-rack communication at terabit-per-second speeds with minimal latency. The entire system is 100% liquid-cooled and uses high-voltage direct current (HVDC) power distribution for maximum efficiency.
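To see why moving bits over light instead of copper matters for the power budget, the sketch below converts energy-per-bit into sustained link power. Both picojoule-per-bit figures are illustrative placeholders, not published TeraPHY or copper-SerDes specifications.

```python
# Convert energy-per-bit into sustained power for a terabit-class link.
# Both pJ/bit values below are illustrative assumptions for this sketch.

LINK_TBPS = 1.0   # one terabit-per-second link, as described above
PJ_PER_BIT = {
    "copper SerDes (assumed)": 10.0,
    "co-packaged optics (assumed)": 5.0,
}

bits_per_second = LINK_TBPS * 1e12
for name, pj in PJ_PER_BIT.items():
    watts = bits_per_second * pj * 1e-12   # pJ/bit x bits/s -> watts
    print(f"{name}: {watts:.0f} W per {LINK_TBPS:.0f} Tbps link")

# copper SerDes (assumed): 10 W per 1 Tbps link
# co-packaged optics (assumed): 5 W per 1 Tbps link
```

A difference of a few picojoules per bit looks small in isolation, but multiplied across the thousands of terabit-class links in a 1,024-GPU system it adds up to tens of kilowatts per rack.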
The reference design supports NVIDIA, AMD, and custom accelerator packages — making it vendor-agnostic at the GPU level while standardizing the interconnect layer.
From Demo to Deployment
Ayar Labs, which recently raised $500 million in Series E funding backed by NVIDIA, AMD, and MediaTek, now has total outside funding of $870 million. The company's co-packaged optics technology has moved from research labs to customer evaluations, with hyperscale cloud providers and AI startups testing the platform for next-generation data center builds scheduled for 2027 and beyond.
OFC 2026 runs through March 19 at the Los Angeles Convention Center.