Cisco unveiled its Silicon One G300 switch chip at Cisco Live EMEA on February 10, 2026, delivering 102.4 Tbps of bandwidth for large AI clusters. The chip underpins new Cisco Nexus 9000 and Cisco 8000 systems and aims to cut AI job completion times by about 28%, putting Cisco in direct competition with Nvidia and Broadcom in AI networking.
Cisco’s G300 announcement is a pure infrastructure play, but it matters a great deal for the AGI race. Frontier models are increasingly bottlenecked not by raw GPU FLOPs but by how quickly data can move across tens of thousands of accelerators. A 102.4 Tbps Ethernet switch with deep buffers and congestion-aware routing effectively turns the network into part of the AI compute fabric. That’s exactly what hyperscalers and sovereign clouds need to run GPT‑5‑class models and beyond at scale. ([investor.cisco.com](https://investor.cisco.com/news/news-details/2026/Cisco-Announces-New-Silicon-One-G300-Advanced-Systems-and-Optics-to-Power-and-Scale-AI-Data-Centers-for-the-Agentic-Era/default.aspx))
Strategically, this is Cisco planting a flag in a market that Nvidia and Broadcom have largely owned. If Cisco delivers the promised utilization and energy-efficiency gains, it gives cloud providers a credible, standards-based Ethernet alternative to Nvidia’s tightly integrated stacks. That could lower the effective cost of large training and inference runs, which in turn enables more experiments, more specialized models, and more frequent iteration on frontier architectures.
Competitively, the move intensifies the arms race around AI data center plumbing: silicon, liquid cooling, optics, and observability software. For AGI-focused labs, better networking shrinks the penalty of scaling to ever-larger clusters. It also diversifies their supply chain so they’re less constrained by any one vendor’s roadmap or pricing power.