You probably already know that much of the traffic inside the data center, and indeed on the backbone routes of the internet, travels over fiber optics (glass) rather than copper. There are a number of reasons, the main one being that it is easier to achieve high bandwidth. You've probably even heard the phrase "fiber to the curb", meaning that if we all want gigabit-per-second internet connections at home, it is not good enough to have fiber running around the neighborhood; it needs to cover that last connection from the pole in front of the house to the router. Coax (cable) or twisted pair (phone line) isn't good enough.
The amount of data continues to increase, with predictions that it is doubling every year or two, and the bandwidth requirements are extreme. When I taught the course on networking at Edinburgh University, one exercise I used to set was to calculate the bandwidth of a semi truck (we'd call them artics in the UK) going at 60 miles an hour full of magnetic tapes. Part of the exercise was to astound the students at how large a number it was compared with the sorts of numbers that networks delivered (about 64Kbps in those days). Just a couple of weeks ago, I read about Amazon's Snowmobile, which is...a truck full of disk drives going 60 miles per hour. It still takes two to three weeks to load the truck, since that is up to 100 petabytes of data, but then it can drive to an AWS datacenter and unload. Their slogan is "load exabytes of data per week." To give you an idea of how long this would take using a network, an exabyte of data over a 10Gbps link would take about 26 years.
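As a quick sanity check on those numbers, here is a back-of-the-envelope calculation. The 100PB payload and 10Gbps link figures come from above; the one-day drive time is my own assumed illustrative value, not anything Amazon quotes:

```python
# Back-of-the-envelope check of the "truck vs network" comparison.
PB = 1e15            # bytes in a (decimal) petabyte
EB = 1e18            # bytes in a (decimal) exabyte
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# An exabyte over a 10Gbps link takes roughly a quarter-century
years = (EB * 8) / 10e9 / SECONDS_PER_YEAR   # about 25 years

# Effective "bandwidth" of a Snowmobile-style truck: 100 PB delivered
# over an assumed one-day drive to the datacenter
truck_bps = (100 * PB * 8) / (24 * 3600)     # roughly 9 terabits/second
```

Even ignoring the weeks it takes to load, the truck comes out nearly a thousand times faster than the 10Gbps link, which is the point of the old tapes-in-a-truck exercise.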
Until recently, the way fiber was used was for the electronic data processing chips to drive signals across to a laser that would create the optical signals. That worked fine when only a few long-distance routes needed such high bandwidth (such as the Apollo North OALC-4 SPDA cable between the US and Europe, which comes ashore close to where my father lives in Cornwall), but when every connection between servers in a datacenter needs a fiber connection, this approach is too expensive and too power hungry.
There are other areas where fiber is attractive. Cars and planes have heavy wiring harnesses, perhaps 100kg in a plane and 50kg in a car. Almost all of this weight can be saved with fiber, and fiber is much less sensitive to electrical interference. Automotive Ethernet is so price sensitive that there are even standards for EOPOF, Ethernet Over Plastic Optical Fiber.
Recently, Gilles Lamant, a Cadence distinguished engineer, presented an introduction to silicon photonics for a large group of Cadence employees. Some of what he presented is confidential, but the basics of silicon photonics are something that every semiconductor designer should have at least a little knowledge about.
There are three ways to look at light. Depending on how far you got in physics, you will know about some of these: as rays that travel in straight lines and reflect and refract (geometric optics), as electromagnetic waves (Maxwell's equations), and as particles, photons (quantum mechanics).
Just as with cellular, where the dream is to get as many of the radios as possible on-chip, with photonics the dream is to get as much of the optical interface as possible on-chip. What if we could get it all on-chip? This is what silicon photonics is. We build chips that convert electrical signals into light that is emitted directly into a fiber, and at the other end we have light coming down a fiber that is converted directly into an electrical signal. In fact, there are two solutions: one where everything is integrated onto a single chip, and one with a traditional 3D stack where the photonics is on a separate die from the traditional electronics (in much the same way as CMOS image sensors are usually now mounted on top of their electronics in phones—and digital point-and-shoot cameras, if anyone still buys those). Both these solutions allow us to leverage the huge investment in fabrication facilities and technologies for semiconductors, rather than requiring completely new factories.
What we think of as an optical fiber is, in fact, just one case of a waveguide. A waveguide is made by creating two reflective edges; the light then bounces between them. One difference with light is that we can put different modes (frequencies, polarizations, etc.) down the same waveguide without them interfering, although some waveguides are designed to allow only a single mode, providing almost lossless transmission of that mode while attenuating everything else. For example, CWDM4, used for 100G optical Ethernet, uses four different frequencies/wavelengths. These can be merged and separated, and put down a single fiber rather than requiring four separate fibers.
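The multiplexing idea can be sketched numerically. The toy model below stands in four "wavelengths" as tones at distinct frequencies (the frequency and amplitude values are made up for illustration), superposes them on one signal, and then recovers each channel's amplitude by correlating against its own carrier:

```python
import math

n = 1000
fs = 1000.0   # samples per unit time (arbitrary units)
t = [i / fs for i in range(n)]

# Four "colors" modeled as tones at distinct frequencies (illustrative values)
freqs = [50.0, 100.0, 150.0, 200.0]
amps = [1.0, 0.5, 2.0, 1.5]

# Multiplex: superpose all four channels onto one "fiber"
muxed = [sum(a * math.cos(2 * math.pi * f * ti) for a, f in zip(amps, freqs))
         for ti in t]

# Demultiplex: correlate against each carrier. Because the tones sit at
# distinct frequencies they don't interfere with each other, and each
# amplitude comes back intact -- the essence of wavelength-division
# multiplexing.
recovered = [(2 / n) * sum(s * math.cos(2 * math.pi * f * ti)
                           for s, ti in zip(muxed, t))
             for f in freqs]
```

Each entry of `recovered` matches the corresponding entry of `amps`, even though all four channels shared one signal.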
On chip, designing waveguides is complicated by the fact that light doesn't like sharp bends, unlike electrical signals, which (until you get to RF, anyway) are unaffected. So waveguides need smooth curves so that the light is bent rather than reflected.
If silicon photonics is going to be practical, there are two key components: a way to convert electrical signals into optical signals, and a way to convert optical signals back into electrical signals at the other end.
Remember that light consists of photons, as particles. The density of free carriers (electrons and holes) in the material a photon travels through affects the speed at which it moves (look up the Soref equations for more details). A PN junction on a chip can change the number of electrons and holes in the junction, so you can "slow" the light signal by making it go through a loooooooong PN junction. It turns out these junctions are very sensitive to heat, which means that they can be tuned, but also that the temperature must be controlled.
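To get a feel for the numbers, here is a rough sketch of the free-carrier effect. The coefficients are the commonly quoted Soref-Bennett empirical fit for silicon near 1550nm; the injected carrier density and junction length are assumed illustrative values, not figures from any particular process:

```python
import math

wavelength = 1.55e-6   # meters (typical telecom wavelength)

# Assumed injected carrier densities, in cm^-3 (illustrative values)
delta_n_e = 1e18       # extra electrons
delta_n_h = 1e18       # extra holes

# Change in refractive index from the extra carriers
# (Soref-Bennett empirical fit for silicon at ~1550nm)
delta_n = -(8.8e-22 * delta_n_e + 8.5e-18 * delta_n_h ** 0.8)

# Phase shift accumulated along a junction of length L -- this is why
# the PN junction has to be so long: the per-unit-length effect is tiny
length = 1e-3          # 1 mm of junction (assumed)
delta_phi = 2 * math.pi * abs(delta_n) * length / wavelength
```

With these numbers the index change is only a few parts in a thousand, yet over a millimeter of junction it accumulates to more than a full pi of phase shift, which is what the modulator needs.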
This means that we can split the light into multiple beams using a multi-mode interferometer. Using the PN effect just described, one of the beams can be slowed; when the two signals are recombined they interfere constructively or destructively to produce amplitude-modulated light. This is called a Mach-Zehnder interferometer.
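For an ideal 50/50 split, the recombined intensity follows a simple cos-squared law in the phase difference between the two arms. A minimal sketch:

```python
import math

def mzi_transmission(delta_phi):
    """Output intensity fraction of an ideal Mach-Zehnder interferometer.

    The input splits into two equal arms; one arm picks up an extra phase
    delta_phi; recombining the arms gives cos^2(delta_phi / 2).
    """
    return math.cos(delta_phi / 2) ** 2

in_phase = mzi_transmission(0.0)          # constructive: all the light comes out
out_of_phase = mzi_transmission(math.pi)  # destructive: the light is extinguished
```

Driving the PN junction between these two phase conditions is exactly how the electrical data stream becomes amplitude-modulated light.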
We have succeeded at one of the keys to silicon photonics, building a transmitter: the capability to turn electrical signals into modulated light, which can then be transmitted via optical fiber to the receiver. One challenge is that, in practice, these devices are very large, at least compared to other structures on the silicon. How large? Well, they are about 6000um (6mm) long, which is huge compared to a 16nm transistor.
Turning light into electrical signals is not a new problem that needs to be solved for silicon photonics. Photodiodes have existed for a long time.
The basis of turning light into electrical signals is that the absorption of a photon by a photoconductive material results in the generation of a free electron. This scales up—a traditional solar cell is a huge photodiode. For silicon photonics, the challenge is that we need a fast response time, unlike for a solar cell.
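The one-photon-one-electron picture puts a hard ceiling on a photodiode's responsivity (current per watt of light), since each photon at a given wavelength carries a fixed energy. A quick sketch, where the 80% quantum efficiency is an assumed figure rather than a spec for any real device:

```python
# Ideal photodiode responsivity: R = eta * q * lambda / (h * c)
# Each absorbed photon frees at most one electron, so the current per
# watt depends on the photon energy h*c/lambda.
q = 1.602176634e-19    # electron charge, coulombs
h = 6.62607015e-34     # Planck constant, J*s
c = 2.99792458e8       # speed of light, m/s
wavelength = 1.55e-6   # typical telecom wavelength, meters
eta = 0.8              # assumed quantum efficiency (illustrative)

responsivity = eta * q * wavelength / (h * c)   # amps per watt
```

At 1550nm this works out to roughly one amp per watt, which is why receiver designs concentrate on fast, low-noise amplification of a fairly modest photocurrent.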
Obviously there is a lot more to silicon photonics than what I've described here. This is a bit at the level of showing that we can build a NAND gate using CMOS transistors and so...ta-da...we can build a microprocessor...and so smartphones, and self-driving cars.
Cadence has partnered with Lumerical Solutions and PhoeniX Software to create an integrated electronic/photonic design environment (EPDA). Lumerical provides the tools for creating optical components and analyzing the optical effects, PhoeniX provides curvilinear photonic layout generation, and Cadence, specifically the Virtuoso platform, provides all the electrical aspects and the overall integration.
The above diagram shows how it all fits together. The 50,000-foot view of the basic flow is:
Details of the solution are on the Cadence Photonics page.