
Hui Wu

Community Member

Tags: AI data center, AI factory, Data Center architecture

Scale-Up and Scale-Out IP for Optical Interconnect for Accelerated Computing

6 Feb 2026 • 4 minute read

Optical connectivity is foundational to modern data centers, enabling high-bandwidth, low-latency data movement across switches, routers, servers, and racks. With the rise of AI factories, its importance has increased dramatically. Optical links provide the massive bandwidth, ultra-low latency, and scalability required for GPU clusters, overcoming the physical limits of copper and eliminating bottlenecks in large-scale AI training and inference workloads. They enable both scale-up (linking hundreds or thousands of GPUs into a single, unified logical GPU) and scale-out (connecting multiple GPU clusters), which are essential for training large language models (LLMs) such as ChatGPT, Gemini, Copilot, and Claude.

AI Data Center Interconnect Architecture

AI data centers use a mix of copper and optical connectivity. Copper is typically used within a tray and inside a single rack, while optics serve the inter-rack domain where distances and bandwidth demands escalate. These optical interconnects make it possible to link thousands of GPUs into a single cluster and then to aggregate multiple clusters into systems containing tens of thousands of GPUs.
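The copper/optical split described above can be sketched as a simple reach-based selector. The distance thresholds below are rough, assumed ballpark values for modern high-speed signaling, not figures from this article:

```python
# Illustrative sketch (assumed thresholds, not Cadence data):
# copper serves intra-tray and intra-rack reaches, optics take over
# once the link leaves the rack.

def link_medium(distance_m: float) -> str:
    """Pick a plausible interconnect medium for a given reach."""
    if distance_m <= 0.3:          # within a tray: PCB traces / short copper
        return "copper (intra-tray)"
    if distance_m <= 2.0:          # within a rack: direct-attach copper cables
        return "copper (intra-rack DAC)"
    return "optical (inter-rack)"  # beyond the rack: pluggables / AOC / CPO

for d in (0.2, 1.5, 30.0):
    print(f"{d:>5} m -> {link_medium(d)}")
```

As per-lane rates rise, the copper thresholds shrink, which is exactly the pressure pushing optics deeper into the rack.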

Key technologies of optical connectivity include:

  • Pluggable optics
  • Active Optical Cables (AOCs)
  • Emerging Co-Package Optics (CPO)

All three technology types are critical for enabling high-performance AI infrastructures.

Pluggable optics are modular, hot-swappable transceiver modules that convert electrical signals to optical signals (light) for high-speed data transmission over fiber-optic cables. They plug into standardized ports on servers, switches, and routers, offering flexible, scalable, and cost-effective network connectivity, which is essential for AI infrastructure, data centers, and telecom. Common form factors include Small Form-factor Pluggable (SFP), Quad SFP (QSFP), and Octal SFP (OSFP), of which OSFP provides the highest density and is widely used for 800G and 1.6T deployments in AI centers. AOCs operate similarly but have fiber permanently attached to the transceivers at both ends.
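The density advantage of OSFP follows from simple arithmetic: aggregate module bandwidth is lane count times per-lane rate. The lane counts below reflect the common form factors named above (SFP: 1, QSFP: 4, OSFP: 8); treating 8 × 112 Gbps as the "800G" class and 8 × 224 Gbps as "1.6T" is a raw-signaling approximation that ignores FEC and encoding overhead:

```python
# Sketch: aggregate module bandwidth = lanes x per-lane electrical rate.
# Lane counts are the common values for each form factor; rates are raw
# signaling rates, ignoring FEC/encoding overhead.

def module_bandwidth_gbps(lanes: int, per_lane_gbps: int) -> int:
    """Raw aggregate bandwidth of a pluggable module."""
    return lanes * per_lane_gbps

for name, lanes in (("SFP", 1), ("QSFP", 4), ("OSFP", 8)):
    print(f"{name:5s} @ 112G/lane: {module_bandwidth_gbps(lanes, 112)} Gbps")
```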

Pluggable modules themselves come in three types: fully retimed optics (FRO); transmit retimed optics, also known as linear receive optics (TRO or LRO); and linear pluggable optics (LPO).

Block Diagram of FRO

Block Diagram of TRO (LRO)

Block Diagram of LPO

FRO modules offer the highest levels of signal integrity and interoperability by employing DSP-based retiming on both the transmit and receive paths. This superior signal performance comes at the cost of higher power consumption, increased latency, and greater expense.

LPO represents the opposite end of the spectrum by removing the DSP entirely from the module and relying on the host SerDes to manage retiming and signal conditioning. This approach achieves much lower power, latency, and cost but introduces challenges in signal robustness and interoperability.

TRO/LRO strikes a middle ground by applying DSP retiming only on the transmit side while depending on the host for receive-side processing. This configuration balances performance, cost, and power in a way that aligns well with many AI infrastructure designs.
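The taxonomy above reduces to one question per direction: does the module DSP retime that path? A minimal sketch of the three module types (the power/latency ranking is qualitative, per the text):

```python
# Sketch of the retiming taxonomy: where the module DSP sits in each
# pluggable type. More DSP stages mean better signal integrity but
# higher power, latency, and cost.

from dataclasses import dataclass

@dataclass(frozen=True)
class ModuleType:
    name: str
    tx_retimed: bool  # DSP retiming on the transmit (electrical -> optical) path
    rx_retimed: bool  # DSP retiming on the receive (optical -> electrical) path

FRO = ModuleType("FRO", tx_retimed=True, rx_retimed=True)      # fully retimed
LRO = ModuleType("LRO/TRO", tx_retimed=True, rx_retimed=False) # tx-only retimed
LPO = ModuleType("LPO", tx_retimed=False, rx_retimed=False)    # fully linear

for m in (FRO, LRO, LPO):
    stages = int(m.tx_retimed) + int(m.rx_retimed)
    print(f"{m.name:8s} module DSP stages: {stages}")
```

LPO's zero module-side DSP stages are precisely why it shifts the retiming burden onto the host SerDes.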

Block Diagram of CPO

CPO represents a transformative shift in high-speed optical interconnects. It integrates the optical engine directly alongside the host ASIC within the same package, eliminating the need for traditional pluggable modules. By dramatically shortening electrical channels, CPO reduces insertion loss and lowers overall power consumption while enabling the extreme bandwidth density required for the next generations of connectivity: 800G, 1.6T, and especially 3.2T. CPO optical engines are built on silicon photonics, where modulators, photodetectors, and waveguides form a tightly integrated optical integrated circuit. Pluggable external lasers enhance reliability and serviceability in these dense, high-power environments.
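The insertion-loss benefit of CPO follows from channel length: to first order, PCB loss scales linearly with trace length at a given frequency. The loss density and reach figures below are assumed ballpark values for illustration, not measurements from this article:

```python
# Rough first-order sketch of why CPO cuts insertion loss: shortening
# the electrical channel shrinks loss proportionally. All numbers are
# assumed ballpark values for a lossy PCB at high Nyquist frequencies.

LOSS_DB_PER_MM = 0.1  # assumed PCB loss density at the Nyquist frequency

def channel_loss_db(trace_mm: float) -> float:
    """First-order insertion loss of a PCB trace of the given length."""
    return LOSS_DB_PER_MM * trace_mm

pluggable_reach_mm = 200.0  # die -> faceplate pluggable cage (assumed)
cpo_reach_mm = 20.0         # die -> co-packaged optical engine (assumed)

print(f"pluggable channel: {channel_loss_db(pluggable_reach_mm):.0f} dB")
print(f"CPO channel:       {channel_loss_db(cpo_reach_mm):.0f} dB")
```

An order-of-magnitude shorter channel under these assumptions means an order-of-magnitude less loss to equalize, which is where CPO's power savings come from.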

However, the advantages of CPO are accompanied by significant challenges. Power density around the integrated optical engine is extremely high, demanding advanced thermal solutions. CPO, like LPO, depends on DSP and retiming within the host ASIC, which shifts complexity into the system architecture. Integrating CPO requires deep co-optimization between system and optics teams, alignment across a diverse supplier ecosystem, and careful navigation of standardization and vendor-related business considerations. Even so, CPO remains one of the most promising technologies for addressing the physical limits of copper and pluggable optics in the 3.2T era.

Comparison of Optical Connectivity Approaches

| Optics Type | FRO | LRO (TRO) | LPO | CPO |
|---|---|---|---|---|
| Performance | Excellent | Good–Excellent | Good | Good–Excellent |
| Latency | High | Medium | Low | Low |
| System Power | High | Medium | Low | Low |
| Deployment Cost | High | Medium | Low | Medium–High |
| Development Complexity | Low | Low | Medium | Very High |
| Interoperability | Excellent | Good | Poor | — |

Whether the optical interface is delivered through traditional pluggable modules or next-generation CPO designs, the foundational enabler at the heart of every solution is advanced SerDes PHY IP. These high-speed SerDes, together with their transmit and receive DSPs, must correct severe impairments in both electrical and optical domains. These corrections include loss, reflections, crosstalk, laser and photodetector noise, nonlinear characteristics of optical engines and fiber channel, and multi-path interference. They must operate reliably at extremely high speeds: 112 Gbps per lane for 800G, 224 Gbps for 1.6T, and 448 Gbps for 3.2T. Achieving this requires precise analog design and sophisticated, adaptive DSP algorithms capable of maintaining signal integrity across process, voltage, and temperature variations in large-scale AI deployments.
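The per-lane scaling named above can be made concrete: holding the lane count fixed at eight (an assumption matching OSFP-class modules, not a figure from this article), each generation doubles the per-lane SerDes rate to reach the next aggregate class:

```python
# Sketch of per-lane scaling across generations: with a fixed 8-lane
# module, doubling the SerDes rate doubles the aggregate bandwidth.
# Rates are raw signaling rates, ignoring FEC/encoding overhead.

LANES = 8  # assumed lane count per module/port (OSFP-class)

PER_LANE_GBPS = {"800G": 112, "1.6T": 224, "3.2T": 448}  # rates from the text

for gen, lane_rate in PER_LANE_GBPS.items():
    raw = LANES * lane_rate
    print(f"{gen}: {LANES} lanes x {lane_rate} Gbps = {raw} Gbps raw")
```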

Cadence provides high-performance, protocol-agnostic SerDes PHY IP that supports both electrical and optical connectivity for host switches and ASICs, NICs, and module DSPs, regardless of whether the architecture uses FRO, LRO, LPO, or CPO. These PHY solutions can be licensed on their own or paired with Cadence controller IP for UALink, ESUN, Ultra Ethernet, or Ethernet to deliver a complete interconnect stack. As a leader in high-speed SerDes technology, Cadence enables customers to build scalable, power-efficient AI infrastructure across every generation of 800G, 1.6T, and future 3.2T connectivity.

Learn more about Cadence IP for optical interconnect: Cadence Leads the Way at PCI-SIG DevCon 2025 with Groundbreaking PCIe 7.0 Demos


© 2026 Cadence Design Systems, Inc. All Rights Reserved.
