Corporate
Tuesday, December 23, 2025

Lucidean raises $18M seed to power AI data center optics

Source: Business Wire

TL;DR


Santa Barbara–based startup Lucidean announced an $18 million seed round on December 23, 2025 to develop its CohZero coherent-lite optical interconnects for AI and data center networks. The round was co-led by Entrada Ventures and Koch Disruptive Technologies, with several deep-tech and corporate VCs participating, and coincided with the appointment of photonics veteran James Raring as CEO.

About this summary

This article aggregates reporting from 1 news source. The TL;DR is AI-generated from original reporting. Race to AGI's analysis provides editorial context on implications for AGI development.

Race to AGI Analysis

Lucidean is going after one of the least glamorous but most important bottlenecks in the AI stack: optical interconnects between racks and clusters. As models grow and training runs span tens of thousands of GPUs, the old tradeoff between cheap, short-reach intensity-modulation/direct-detection (IMDD) links and expensive, power-hungry coherent optics starts to bite hard. Lucidean's CohZero architecture promises "coherent-class" reach and performance while slotting into today's IMDD ecosystem of lasers, DSPs, and pluggable modules, at much lower cost and power.

If the technology works as advertised, it could materially increase effective bandwidth for AI clusters without requiring a full rethink of data center network design. That’s strategically significant because it helps hyperscalers keep scaling model sizes and context windows without being constrained by the optical layer. The investor roster—Koch’s disruptive-tech arm, M Ventures, Cerberus and others—underscores how much capital is now flowing into “picks and shovels” for AI infrastructure, not just model companies.

From an AGI-race perspective, improvements in power-efficient, high-throughput interconnects effectively expand the feasible training budget for everyone who can afford large clusters. That tends to compress timelines: the easier it is to wire up massive GPU fabrics, the more experiments leading labs can run, and the more room they have to explore extremely large or more agentic architectures.
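
To make the "interconnects bound the training budget" point concrete, here is a minimal, bandwidth-only sketch of how per-GPU link speed caps gradient synchronization time in a ring all-reduce, the standard data-parallel collective. This is not based on any Lucidean or CohZero figures; the model size, cluster size, and link rates below are hypothetical placeholders for illustration.

```python
# Back-of-envelope sketch: per-GPU interconnect bandwidth vs. time to
# synchronize gradients with a ring all-reduce. All concrete numbers are
# hypothetical placeholders, not Lucidean/CohZero figures.

def ring_allreduce_seconds(grad_bytes: float, num_gpus: int, link_gbps: float) -> float:
    """Bandwidth-only estimate (latency and compute/comm overlap ignored).

    In a ring all-reduce each GPU sends and receives roughly
    2 * (N - 1) / N times the gradient size over its link.
    """
    link_bytes_per_s = link_gbps * 1e9 / 8          # Gb/s -> bytes/s
    traffic_per_gpu = 2 * (num_gpus - 1) / num_gpus * grad_bytes
    return traffic_per_gpu / link_bytes_per_s


if __name__ == "__main__":
    grad_bytes = 70e9 * 2          # hypothetical: 70B-parameter model, bf16 gradients
    num_gpus = 16_384              # hypothetical cluster size
    for gbps in (400, 800, 1600):  # hypothetical per-GPU optical link rates
        t = ring_allreduce_seconds(grad_bytes, num_gpus, gbps)
        print(f"{gbps:>5} Gb/s link -> ~{t:.2f} s per full gradient sync")
```

In this toy model, doubling link bandwidth roughly halves the bandwidth-bound sync term, which is why cheaper, lower-power long-reach optics translate fairly directly into faster iterations or headroom for larger models.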

May advance AGI timeline

Who Should Care

Investors · Researchers · Engineers · Policymakers