Optica Executive Forum at OFC 2026

Don't miss the most important annual event for leaders in optical networking and communications. This one-day event features C-level panelists discussing the latest issues facing the industry and your business in an informal, uncensored setting. Leaders from top companies will discuss critical technology advancements and business opportunities that will shape the network in 2026 and beyond.

This is your opportunity to:

  • Spend a full day with senior executives
  • Connect with the leaders and decision-makers in the industry
  • Participate in high-level networking
  • Ask your challenging questions
  • Leave with critical information

Who Should Attend?

  • Service Provider Network Executives
  • Service Provider Technology Evaluators
  • Data Center Managers
  • Enterprise Network Managers
  • Business Development Executives
  • Communications Technology Development Managers
  • Optical Systems Developers and Managers
  • Core Router Developers
  • Test Equipment Vendors
  • Data Center Switch and Router Vendors
  • Semiconductor Manufacturers
  • Venture Investors
  • Financial and Market Analysts

Defining the New Scale on the Block: Scale Across Networking

By Sterling Perrin, Senior Principal Analyst, Optical Networks and Transport, GTM Telecom Insights and Advisory, OMDIA

The 2026 Optica Executive Forum took place in front of a sold-out audience; with 640 registrants, it was the largest audience I have seen at the annual pre-OFC summit. The undisputed theme of the Executive Forum – and of OFC itself – was “scale up, scale out, and scale across.”

Unsurprisingly, the intra-data center topics (scale up and scale out) dominated the agenda, but as the newest of the Nvidia-driven scaling concepts, “scale across” made its Executive Forum debut in a panel hosted by Acacia marketing vice president Tom Williams. A key question of the panel was: what exactly does “scale across” mean?

Microsoft Azure Senior Principal Network Developer Yawei Yin defined scale across as the connectivity for AI backend networks for large-scale distributed training as well as inference. For Microsoft, the key is that scale across addresses East-West AI backend connectivity in distributed clusters, at whatever distances can be accomplished. In fact, none of the panelists placed strict distance bounds on scale across beyond inter-data center “distribution.”

Cisco Fellow Rakesh Chopra drew a further distinction between scale across networking and “traditional” DCI, positioning the two as variants of data center interconnect with different use cases and very different attributes.

There was a strong consensus among panelists on what is driving the need for this new breed of DCI for AI: GPU requirements for AI training have outgrown the space and power available within a single data center.

To illustrate, training compute for frontier models increased by a factor of 750× every 2 years from 2018 to 2022, while transistor density increased just 2× every 2 years over the same period, according to research cited by Yin. More and more compute is needed to build more sophisticated models – which consumes data center space and also drives up power consumption. For example, while a traditional data center has a power capacity of 5-300 MW, next-gen training clusters require 1-5 GW, beyond the capability of a single site.
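
To see how quickly that gap compounds, here is a back-of-the-envelope sketch in Python. The 750× and 2× growth rates and the MW/GW capacities are the figures cited above; everything else is illustrative arithmetic.

```python
# Back-of-the-envelope: frontier training compute vs. transistor density,
# 2018-2022. Growth rates are the figures cited by Yin; the arithmetic is
# purely illustrative.

compute_growth_per_2yr = 750   # frontier-model training compute, per 2 years
density_growth_per_2yr = 2     # transistor density, per 2 years
periods = 2                    # 2018 -> 2022 spans two 2-year periods

compute_growth = compute_growth_per_2yr ** periods   # 562,500x over 4 years
density_growth = density_growth_per_2yr ** periods   # 4x over 4 years
gap = compute_growth / density_growth

print(f"compute grew {compute_growth:,}x, transistor density {density_growth}x")
print(f"-> a {gap:,.0f}x gap filled with more chips, more space, more power")

# Power: a traditional data center offers 5-300 MW; next-gen training
# clusters need 1-5 GW -- more than any single site delivers.
largest_site_mw = 300
for need_mw in (1_000, 5_000):
    print(f"{need_mw / 1_000:.0f} GW cluster = "
          f"{need_mw / largest_site_mw:.1f}x the largest single site")
```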

There is strong industry consensus that AI clusters must span multiple buildings while behaving as a single cluster. But what are the key attributes of the new scale-across network?

  • Optical capacity: Interconnect needs within the data center (i.e., scale up and scale out) are greatest, but Cisco estimates that scale across will require 14× the bandwidth of a traditional WAN/DCI network. The bulk of interconnectivity needs will be in campus and metro networks, but not exclusively. Several speakers noted the need for scale across connectivity at 1,000 km+ distances, again reinforcing that while most scale across applications will be <100 km, scale across is not confined to campus and metro networks or defined by a strict distance range.
     
  • Low latency and jitter: Low latency and low jitter are defining characteristics that separate scale across from traditional DCI. Yin noted that latency in AI training use cases means that GPUs/TPUs are waiting for data, which costs money. Inference use cases are especially sensitive to latency, with Yin stating that time to first token is an important service metric (fiber propagation delay alone sets a floor here; see the first sketch after this list). These inference use cases are driving renewed interest in edge data centers that process data close to users.

    Jitter (or packet delay variation) is especially problematic for AI training. In distributed training, collective operations require every participating GPU to take part in the same operation at the same time. Progress through the training step cannot continue past that collective until the operation has completed for all participants. If one lags, everyone else waits (a toy model of this straggler effect follows the list).
     
  • Reliability: Telco “5 nines” network reliability has been the gold standard in communications networks for decades, but AI workloads are introducing new, stricter metrics. The synchronized AI training operations that are sensitive to jitter (as described above) are also sensitive to failures. Loss of a single GPU limits progress for the entire cluster. The greatest number of interconnects are in the scale up and scale out networks, so most of the focus on laser and module reliability is there. However, the scale across network cannot become the weakest link in the distributed AI cluster chain. The goal must be no downtime caused by scale across link failures (the arithmetic after this list shows how quickly per-link availability compounds).
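
On the latency point, physics sets a floor: propagation delay through fiber alone determines how long distributed GPUs wait on each other. A minimal sketch, assuming the standard rule of thumb of roughly 4.9 µs per km for light in silica fiber; the distances correspond to the campus, metro, and 1,000 km+ ranges discussed above.

```python
# One-way propagation delay over standard single-mode fiber.
# Light travels at about c/1.47 in silica, i.e. roughly 4.9 us per km.
US_PER_KM = 4.9

for km in (10, 100, 1_000):   # campus, metro, long-haul scale across
    one_way_us = km * US_PER_KM
    rtt_ms = 2 * one_way_us / 1_000
    print(f"{km:>5} km: one-way {one_way_us:>8,.0f} us, round trip {rtt_ms:.2f} ms")
```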
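
The straggler effect described in the jitter bullet is easy to reproduce in a toy model: a synchronized collective finishes only when its slowest participant does. This is a hypothetical simulation with made-up numbers, not a model of any panelist's fabric.

```python
import random

# Toy model of a synchronized collective (e.g., all-reduce) across N GPUs:
# the training step completes only when the SLOWEST participant finishes,
# so one jittery link stalls the entire cluster. Numbers are illustrative.
random.seed(7)
N_GPUS = 1_024
BASE_MS, JITTER_MS = 10.0, 0.5   # nominal step time, small per-GPU noise

per_gpu = [BASE_MS + random.uniform(0, JITTER_MS) for _ in range(N_GPUS)]
print(f"healthy cluster:        step time = {max(per_gpu):5.2f} ms")

per_gpu[0] += 25.0               # one GPU hits a jittery scale-across link
print(f"one straggler (+25 ms): step time = {max(per_gpu):5.2f} ms")
```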
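
And on reliability: “5 nines” sounds strict until per-link availability is compounded across every link a distributed cluster depends on. The five-nines figure comes from the bullet above; the link counts and the assumption that any single failed link stalls the step are hypothetical.

```python
# How per-link availability compounds across a distributed AI cluster,
# assuming any single failed link stalls the synchronized training step.
# The 5-nines figure is from the text; link counts are hypothetical.
MINUTES_PER_YEAR = 365 * 24 * 60
five_nines = 0.99999

print(f"one 5-nines link: {MINUTES_PER_YEAR * (1 - five_nines):.1f} min/year down")

for n_links in (10, 100, 1_000):
    cluster_avail = five_nines ** n_links
    stall_min = MINUTES_PER_YEAR * (1 - cluster_avail)
    print(f"{n_links:>5} links -> availability {cluster_avail:.5f}, "
          f"~{stall_min:,.0f} min/year of stalled training")
```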

Scale across is the newest of the three AI “scales.” In his opening comments, Acacia’s Williams stated that it had been 207 days since Nvidia first coined the term. Newness is perhaps the best explanation for why hyperscaler performance metrics for scale up and scale out networks are much better defined and more specific than those for scale across today. Defining the new segment is an important first step, but as distributed AI clusters move toward the mainstream, scale across performance metrics must become clearer and more public.