
OCP Global Summit 2025 — Reflections from the Heart of the AI Infrastructure Revolution

Posted by Frank Yang on Oct 23, 2025

It’s hard to put into words the energy at this year’s OCP Global Summit. With more than 11,000 attendees, the atmosphere in San Jose felt electric: part trade show, part think tank, part glimpse into the future. Everywhere you turned, people were talking about one thing: AI.

 

Gigawatt-scale Data Centers and the 1 MW Rack Era 

Hyperscalers are all in, building gigawatt-scale AI data centers and pushing the boundaries of what a single rack can handle: up to 1 megawatt (MW). Just a few years ago, the industry talked about 30 kW racks as “high density.” Today, that’s entry level for AI infrastructure. Power distribution and cooling are being completely reimagined to keep pace with this unprecedented demand.

See my comparison chart below, highlighting how far we’ve come between the 2021 and 2025 OCP Global Summits.

Meanwhile, neocloud providers are racing to differentiate themselves. Instead of just offering “GPU as a Service,” they’re exploring how to add value through software stacks, orchestration layers, and custom network designs—anything that can make their AI infrastructure smarter and more efficient. 

 

Optics Takes the Spotlight 

If one technology stood out above the rest, it was optics. The buzz around 3.2 Tb/s pluggable optics using 448G per lane was intense. With AI networks hungry for bandwidth, optical interconnects are quickly becoming the backbone of modern compute fabrics.

Debates over optical architectures dominated the sessions: traditional pluggable optics, Co-Packaged/Near-Packaged Optics (CPO/NPO), and Linear Pluggable/Linear Retimed Optics (LPO/LRO). My take is that CPO won’t replace pluggables at 1.6T or 3.2T; the ecosystem around pluggable optics is just too strong and mature. But CPO will find its place, especially in deployments where integration and thermal constraints demand it. The market will evolve to support both architectures.

 

The AI Conversation: Training vs. Inference 

It’s no surprise that AI dominated nearly every keynote, panel, and hallway conversation. Most of the attention, though, was focused on AI training—massive GPU clusters, high-power racks, and advanced cooling systems. 

But I believe AI inference deserves more of the spotlight. Training may build the intelligence, but inference brings that intelligence to life; it’s where real-world use cases exist.

Accelerating AI inference at the edge using IP over DWDM technologies will play a significant role in the future of intelligent networks. The edge is where scalability, latency, and connectivity truly converge, and where innovation will be most impactful in the years ahead. 

 

The Question Everyone Is Asking 

For all the excitement and momentum, one big question kept surfacing in the sessions, the hallways, and even the after-hours meetups: When will the return on these massive AI investments be realized? 

No one has an answer yet. But that uncertainty doesn’t dampen the enthusiasm—it fuels it.