At last year's Photonics Summit (whose videos actually went live earlier this year due to technical issues), the keynote was given by James Jaussi, Senior Principal Engineer and Director of the PHY Research Lab in Intel Labs. It was titled Transitioning from Electrical to Optical I/O.
The ever-increasing movement of data from one server to another is taxing the capabilities of network infrastructure today. The industry is quickly approaching the practical limits of electrical I/O performance. As demand for compute continues to increase, electrical I/O power-performance scaling is not keeping pace and will soon limit available power for compute operations. It is possible to overcome this performance barrier by integrating optical I/O directly with compute silicon. Optical I/O has the potential to dramatically outperform electrical in the key performance metrics of reach, bandwidth density, power consumption, and latency.
The diagram above compares the best technology for different lengths of connection, from multiple kilometers at the apex to on-chip interconnect at the base, a centimeter or less. The numbers on the left of the triangle show the number of connections, ranging from trillions of on-chip connections up to thousands of long-distance links. Of course, these are inversely related: short distances have lots of them, but long distances have few. Today, electrical has its strengths on-die, chip-to-chip, and board-to-board. Above that is the strength of optical. There used to be a contention that electrical performance would never get above 10 Gb/s, so the transition to optical would happen sooner and lower on the triangle. But that never happened, and electrical performance has improved a lot (we are at 112G today with 224G coming in a couple of years, with papers already being presented on it). But there is a real practical limit to how far we can push this. The right part of the diagram is not very clear on its own. It shows that as link performance increases, the point at which it is economical to switch from electrical to fiber moves too. Above the blue lines, everything has to go optical; below, it should stay electrical. You have to think of the lines over time, starting with the one on the left.
The promise of silicon photonics is to bring these technologies together and get the best of both worlds. Intel has been working on this since 2000, and there have been lots of important milestones. But electrical has not been standing still, so Intel's focus has been on pluggable modules, going from 100G to 200G and (soon) 800G.
Take one datapoint, 200 Gb/s per lane, and see where things transition depending on length. Electrical with good materials like micro-coax can scale up to about 200 Gb/s/lane over about a meter. With multi-mode optics such as VCSELs (vertical-cavity surface-emitting lasers), the lower end is limited by cost (in comparison to copper) and the upper end by performance. Single-mode optics provides much longer distances and supports wavelength-division multiplexing (WDM), which brings in multiple light sources separated in the spectrum, providing additional scaling and performance. At these distances, electrical struggles to do anything close.
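The scaling argument for WDM is simple multiplication: aggregate fiber bandwidth is the per-wavelength rate times the number of wavelengths. A minimal sketch, with the 200 Gb/s figure from the text and illustrative wavelength counts (not numbers from the talk):

```python
# Aggregate bandwidth of one fiber carrying WDM channels.
# PER_LAMBDA_GBPS comes from the 200 Gb/s/lane datapoint in the text;
# the wavelength counts below are illustrative assumptions.
PER_LAMBDA_GBPS = 200

def fiber_bandwidth_gbps(num_wavelengths, per_lambda_gbps=PER_LAMBDA_GBPS):
    """Total bandwidth of a single fiber with WDM channels."""
    return num_wavelengths * per_lambda_gbps

for n in (1, 4, 8):
    print(f"{n} wavelength(s): {fiber_bandwidth_gbps(n)} Gb/s per fiber")
```

This is why WDM matters: the fiber itself does not need to change to carry more traffic, only the number of wavelengths launched into it.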
So the motivation to make the transition to optical is:
James next took a look at how datacenters are put together. The rack of servers is connected electrically up to the top-of-rack (TOR) switch, and that is connected optically to a leaf switch, which connects to multiple racks. So there is a clear divide between electrical and optical.
If we look at datacenter scale, we group a dozen racks to form a cluster and then group multiple clusters.
This gives him a way to look at the difference between co-packaged optics and optical I/O and how they are configured.
Co-packaged optics requires the modulator to be miniaturized (to fit in the package). The modulator varies the laser power between full power and a low value; the difference between those two levels encodes either a 1 or a 0. These ring modulators (on the right) are about 1000X smaller than the old way of doing things (on the left). The modulator either traps the light or releases it. Because these modulators are tuned to specific wavelengths, we can tile several of them together and so get (in the diagram above) 4 bits on different wavelengths. We can then slide these devices underneath the silicon and integrate many of them into a single package.
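The tiling idea can be sketched in a few lines: each ring modulator is tuned to one wavelength and encodes one bit as a high or low optical power level on that wavelength. The wavelength grid and power levels below are illustrative assumptions, not Intel's actual design values:

```python
# Hypothetical 4-wavelength grid (nm); values are illustrative only.
WAVELENGTHS_NM = [1271, 1291, 1311, 1331]

def encode(bits):
    """Map each bit to (wavelength, power level) on one fiber.

    A ring modulator tuned to each wavelength either releases the light
    (high power, bit = 1) or traps it (low power, bit = 0).
    """
    assert len(bits) == len(WAVELENGTHS_NM)
    HIGH, LOW = 1.0, 0.1  # assumed normalized power levels
    return [(wl, HIGH if b else LOW) for wl, b in zip(WAVELENGTHS_NM, bits)]

print(encode([1, 0, 1, 1]))
```

Because the channels are separated in wavelength, all four bits travel down the same fiber simultaneously and are demultiplexed at the receiver.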
So we can go directly from the server out through the network.
With that co-packaged optics technology, we can return to look at the topology of the network, with our priorities being cost and reach, bandwidth, power, and latency (in that order). We also need to preserve compatibility with existing Ethernet standards such as 112G (and, in the future, 224G).
Above is a 112 Gb/s CMOS PAM4 optical transmitter (on the left) and receiver (on the right). The transmitter has an integrated laser, a micro-ring, and a CMOS transmitter. On the receiver, we have the photodetector integrated with the receiver electronics. The CMOS is all in an advanced process node.
James then takes a look inside the server and the cores involved. Here the order of priorities is cost and reach, latency, power, and bandwidth. The sweet spot for power and data rate seems to be 32-64 Gb/s NRZ, leveraging additional wavelengths to meet bandwidth requirements.
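The trade-off here is arithmetic: running each lane at a modest NRZ rate and making up the difference with extra wavelengths. A quick sketch, with a 1 Tb/s aggregate target chosen as an illustrative assumption:

```python
import math

# How many WDM wavelengths does a lane at the 32-64 Gb/s NRZ sweet
# spot need to reach a given aggregate bandwidth? The 1 Tb/s target
# is an assumed example, not a figure from the talk.
def wavelengths_needed(target_gbps, per_lambda_gbps):
    """Smallest number of wavelengths meeting the aggregate target."""
    return math.ceil(target_gbps / per_lambda_gbps)

for rate in (32, 64):
    n = wavelengths_needed(1000, rate)
    print(f"{rate} Gb/s NRZ: {n} wavelengths for 1 Tb/s")
```

Keeping each lane at NRZ rates simplifies the electrical circuits and saves power per bit, at the cost of more optical channels.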
The simple version in the picture above shows a CPU/GPU/DPU (henceforth just XPU) and the optical I/O chip. Beneath it is a little more detailed look at the core components. Together, these are all the key building blocks needed to build a link.
Above is a demonstrator, using 4 different wavelengths over a single fiber.
On the left are Ethernet-compliant pluggable modules, with their performance underneath; in particular, look at bandwidth per shoreline, which is in the 5-20 Gbps/mm range.
Next, in the center, is co-packaged optics, which has much better bandwidth per shoreline at 50-200 Gbps/mm.
On the right is optical I/O. The bandwidth per shoreline can go as high as 10 Tbps/mm, with energy under 1 pJ/b (in a next-generation version).
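To see what those shoreline densities mean in absolute terms, multiply each density by the usable package-edge length. The densities below come from the comparison above; the 100 mm shoreline is an assumed example value:

```python
# Escape bandwidth = shoreline density x usable edge length.
# Densities (Gbps/mm) are the upper ends of the ranges in the text;
# SHORELINE_MM is an assumed example, not from the talk.
SHORELINE_MM = 100

densities_gbps_per_mm = {
    "pluggable module": 20,      # top of 5-20 Gbps/mm
    "co-packaged optics": 200,   # top of 50-200 Gbps/mm
    "optical I/O": 10_000,       # up to 10 Tbps/mm
}

for tech, density in densities_gbps_per_mm.items():
    total_tbps = density * SHORELINE_MM / 1000
    print(f"{tech}: {total_tbps:g} Tb/s total escape bandwidth")
```

The gap is stark: for the same package edge, optical I/O offers roughly 500X the escape bandwidth of a pluggable module.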
So, the summary is:
Watch the videos of the summit (including this keynote, which is the first one).
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.