At the recent DesignCon, Cadence and customer IBM presented a tutorial on Advanced IBIS-AMI Techniques for 32 GT/s and Beyond. If you are not well-versed in SerDes and signal integrity, the title itself probably needs some explaining.
I thought of a good analogy while I was in the audience: noise-reduction headphones. If you want to get music from your smartphone (the transmitter) to your brain (the receiver) without it being distorted by noise, these headphones do a good job. They play the music, but at the same time they sample the ambient sound around you and feed in an appropriate amount of a negative copy of it to suppress the background noise. It is quite uncanny the first time you wear a pair of noise-reduction headphones, especially the earbud ones, and flick the switch to turn on the noise reduction. The headphones work out what the ambient noise is going to do to the sound by the time it reaches your eardrum, and adjust the transmitted signal to compensate.
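The anti-noise trick can be sketched in a few lines of Python (a toy model with made-up numbers, not how real headphones are implemented):

```python
# Toy model of the anti-noise idea: what reaches the eardrum is the
# music, plus the ambient noise, plus an inverted copy of the noise
# estimate picked up by the microphone.
def at_eardrum(music, noise, mic_gain=1.0):
    # mic_gain models how accurately the microphone captures the ambient
    # noise: 1.0 means a perfect estimate and perfect cancellation.
    return [m + n - mic_gain * n for m, n in zip(music, noise)]

music = [0.5, -0.5, 0.5, -0.5]
noise = [0.25, -0.125, 0.5, 0.25]

print(at_eardrum(music, noise))        # perfect estimate: only the music remains
print(at_eardrum(music, noise, 0.5))   # imperfect estimate: residual noise
```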
SerDes transmitters and receivers don't work quite like that (it is more complicated), but part of the process is the same: work out what distortion the channel is going to add, and then compensate for that at the transmitter. There is a whole set of additional issues that need to be handled at the receiver, the equivalent, I suppose, of some of the audio processing that goes on in your brain. I'm sure you've seen eye diagrams (there's one on the right in case not). The receiver has two big things that it needs to do: make sure it has the right voltage level to discriminate between 0 and 1, and make sure its timing is right so that it is sampling in the middle of the eye. Both of these move around, hence the need for adaptive equalization (as opposed to something with all the parameters fixed at design time).
IBIS, the I/O Buffer Information Specification, is a twenty-year-old standard for modeling the transmitter and receiver buffers, and AMI is its Algorithmic Modeling Interface extension. Cadence was involved in both specifications, so it was appropriate that the tutorial was presented by Ken Willis, Kumar Keshavan, Mehdi Mechaik, and Ambrish Varma of Cadence, along with Greg Edlund of IBM.
The tutorial was split into a few sections, and different experts presented the different areas:
After the introductions, the very first slide of the presentation explained the old way of doing things, which was to use a circuit simulator. That worked acceptably in the era of parallel interfaces, which were slow and didn't require a lot of bits to be run to validate that things would work. Today, we use IBIS/AMI and channel simulation:
The implication of those facts is that to accurately simulate multi-gigabit serial links you need to simulate very large bit streams with fast and accurate simulation models. When using adaptive equalization, you will need to discard a lot of the first part of the simulation, since you can't get any useful information (such as whether the eye is open) until the receiver has locked in.
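As a sketch of what "discard the first part" means in practice, here is a toy eye-height measurement in Python that ignores an initial warm-up period before looking for the worst-case opening (the function, its names, and the 100,000-bit figure are all illustrative, not from any tool):

```python
# Sketch: ignore the adaptation (warm-up) period before measuring the eye.
IGNORE_BITS = 100_000     # illustrative; depends on the equalizer loops

def eye_height(samples_by_bit, ignore=IGNORE_BITS):
    """samples_by_bit: per-bit center-of-eye voltage samples, in order."""
    settled = samples_by_bit[ignore:]     # discard bits before lock
    ones = [v for v in settled if v > 0]
    zeros = [v for v in settled if v <= 0]
    # Worst-case inner opening: the lowest 1 minus the highest 0.
    return min(ones) - max(zeros)
```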
The above diagram shows why equalization is required. On the left is the serial data to be transmitted. In the center is the frequency response of the channel (attenuating high frequencies a lot, but not with a simple linear response). On the right is the signal as it arrives at the receiver. It is very distorted and difficult to recover the data (for example, the black arrow on the left is a 0, on the right is a 1, but the voltage for the 1 is lower than the 0). The transmission is also self-clocking, but it is hard to recover the clock from the distorted waveform.
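To see how a lossy channel produces that effect, here is a toy Python model (the impulse-response values are invented for illustration): the tail of each bit smears into the following bits, so a lone 1 after a run of 0s can arrive at a lower voltage than a 0 after a run of 1s.

```python
# Toy lossy channel: a discrete impulse response whose energy is smeared
# across several bit times (values made up for illustration).
channel = [0.4, 0.3, 0.15, 0.1, 0.05]

def transmit(bits, h):
    """Convolve a bit stream (0/1 mapped to -1/+1) with the channel."""
    tx = [2 * b - 1 for b in bits]
    out = []
    for n in range(len(tx)):
        # Each received sample is a weighted sum of the current bit and
        # the tails of previous bits: inter-symbol interference (ISI).
        out.append(sum(h[k] * tx[n - k] for k in range(len(h)) if n - k >= 0))
    return out

bits = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0, 1]
rx = transmit(bits, channel)
# The 0 following the run of 1s arrives at +0.2, while the 1 following
# the run of 0s arrives at -0.2: the 1 is lower than the 0.
print(round(rx[4], 3), round(rx[10], 3))   # 0.2 -0.2
```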
The solution is to add equalization. These all have acronyms that everyone in this space assumes you know. In order, from the transmitter, and through the chain of equalizers in the receiver, they are:
The diagram above shows the objective. The shaded blue area shows where the signals go. The white eye in the middle is open, which is good, meaning that both the DFE is working well (keeping the signals out of the eye in the vertical voltage direction) and the CDR is working well (keeping the signals out of the eye in the horizontal time direction). If the eye is open, then the data values and the clock can be recovered, and the whole SerDes transmission is working correctly. Some newer standards, such as DDR4 and the upcoming DDR5, define an area within the eye that must always be clear to meet the standard.
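A mask check of this kind is conceptually simple; here is a minimal Python sketch (real masks are usually diamond- or hexagon-shaped rather than rectangular, and the numbers here are invented):

```python
# Minimal sketch of an eye-mask check: no sampled trajectory may enter
# the keep-out region in the center of the eye.
def mask_violations(samples, t_lo, t_hi, v_lo, v_hi):
    """Return the samples that fall inside the keep-out region.

    samples: list of (time_in_UI, voltage) points from overlaid bits.
    """
    return [(t, v) for t, v in samples
            if t_lo < t < t_hi and v_lo < v < v_hi]

samples = [(0.1, 0.9), (0.5, 0.8), (0.5, -0.7), (0.48, 0.05)]
bad = mask_violations(samples, 0.35, 0.65, -0.3, 0.3)
print(bad)   # [(0.48, 0.05)] -- one trajectory crosses the mask: eye fails
```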
Having said that you can't use circuit simulation, the very first thing that you need to do is to... use circuit simulation. You need to put a step function on the input to the setup (the transmitter output stage, the channel, the receiver input stage) and measure the response. Almost surprisingly, that single simulation contains all the information required to measure the distortion the channel will introduce, and so, like the noise-reduction headphones, you can compensate for it in the transmitter. The diagram above shows that circuit simulation.
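The reason one step response is enough is linearity: the single-bit (pulse) response is just the step response minus a copy of itself delayed by one unit interval (UI), and the waveform for any bit pattern is a superposition of shifted pulse responses. A sketch in Python, with a made-up step response:

```python
# Because the channel is (approximately) linear time-invariant, one step
# response contains everything needed to predict any bit stream.
SAMPLES_PER_UI = 4

def pulse_response(step):
    # Pulse response = step response minus the same step delayed by 1 UI.
    delayed = [0.0] * SAMPLES_PER_UI + step[:-SAMPLES_PER_UI]
    return [s - d for s, d in zip(step, delayed)]

def waveform(bits, pulse):
    # Superpose a shifted copy of the pulse response for each 1 bit.
    out = [0.0] * (len(bits) * SAMPLES_PER_UI + len(pulse))
    for i, b in enumerate(bits):
        if b:
            for k, p in enumerate(pulse):
                out[i * SAMPLES_PER_UI + k] += p
    return out

# A made-up step response that settles to 1.0 over a few UI.
step = [0.0, 0.2, 0.45, 0.65, 0.8, 0.9, 0.95, 0.98, 1.0, 1.0, 1.0, 1.0]
pulse = pulse_response(step)   # rises to a peak, then tails off (ISI)
rx = waveform([1, 0, 1], pulse)
```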
The FFE consists of taps and presets that are put into the AMI model to ensure that a good signal gets through the channel. The above diagram shows the input signal at the top and the value at the receiver (before going through all the receiver equalization) at the bottom. Blue shows without equalization, green and red the effect of two of the tap values.
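Conceptually, the FFE is a small FIR filter running at the bit rate. Here is a toy 3-tap version in Python (the tap values are illustrative, not from any standard): bits next to a transition get extra swing, while runs of identical bits are de-emphasized.

```python
# Sketch of a 3-tap transmitter FFE: one pre-cursor tap, the main
# cursor, and one post-cursor tap (values are illustrative only).
TAPS = [-0.1, 0.8, -0.1]    # pre, main, post

def ffe(bits):
    tx = [2 * b - 1 for b in bits]          # map 0/1 to -1/+1
    out = []
    for n in range(len(tx)):
        acc = 0.0
        for k, c in enumerate(TAPS):
            i = n - k + 1                   # k=0 looks one bit ahead (pre-cursor)
            if 0 <= i < len(tx):
                acc += c * tx[i]            # bits off the ends contribute 0
        out.append(acc)
    return out

# Bits adjacent to the transition get boosted swing (0.8); bits in the
# middle of a run are attenuated (0.6, 0.7).
print([round(x, 2) for x in ffe([0, 0, 0, 1, 1, 1])])
```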
The first stage at the receiver is AGC (automatic gain control) that centers the waveform voltage ready for the next equalization stages. This is shown in the above diagram.
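A minimal sketch of such a gain control loop, with illustrative numbers (real AGC circuits are analog and rather more subtle):

```python
# Sketch of a simple AGC loop: adjust a gain so the average magnitude of
# the output approaches a target swing.
def agc(samples, target=1.0, rate=0.05):
    gain, out = 1.0, []
    for s in samples:
        y = gain * s
        out.append(y)
        # Nudge the gain up when the output is too small, down when too
        # large; `rate` sets how slowly the loop adapts.
        gain += rate * (target - abs(y))
    return out, gain

attenuated = [0.4, -0.4, 0.4, -0.4] * 50   # channel left only 0.4 of the swing
out, gain = agc(attenuated)
print(round(gain, 2))   # the gain has converged toward 1/0.4 = 2.5
```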
CTE (or CTLE) is used to "squeeze" the distribution of the voltages for 0 and 1 to keep the eye open. It is used with older and slower standards since it is lower power than DFE. Early versions of USB, MIPI, and others only used CTE at the receiver. Higher bandwidths require DFE.
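As a very crude discrete stand-in for what a CTLE does (a real CTLE is an analog peaking filter with specific pole and zero locations; the coefficient here is invented), one can boost high-frequency content by adding a scaled first difference:

```python
# Crude CTLE-like high-frequency boost: add a scaled first difference,
# which sharpens slow edges and helps squeeze the 0/1 levels apart.
def ctle_like(samples, a=0.5):
    out = [samples[0]]
    for prev, cur in zip(samples, samples[1:]):
        out.append(cur + a * (cur - prev))
    return out

# A slow rising edge gets sharpened; the flat regions are untouched.
print(ctle_like([0.0, 0.0, 0.5, 1.0, 1.0]))   # [0.0, 0.0, 0.75, 1.25, 1.0]
```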
DFE and CDR work together. CDR recovers the clock, but that recovered clock is required during DFE to both get the data and adjust the equalization periodically. On the other hand, CDR needs the signal cleaned up by DFE to recover the clock. The way these two elements work together is one of the reasons that it can take tens or hundreds of thousands of bits before the receiver "locks on" and is recovering good clock and data, and the eye is open.
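The feedback part of the DFE can be sketched in a few lines: before slicing each sample, subtract the ISI contributed by the previous decisions. The tap weights and sample values below are invented, and in a real receiver the weights would themselves be adapted.

```python
# Sketch of a 2-tap DFE: cancel the tails of the last two decided
# symbols before slicing the current sample (tap weights illustrative).
def dfe(samples, taps=(0.3, 0.15)):
    decisions = []
    for s in samples:
        for k, w in enumerate(taps):
            if len(decisions) > k:
                s -= w * (2 * decisions[-1 - k] - 1)   # decisions are 0/1
        decisions.append(1 if s > 0 else 0)
    return decisions

# Illustrative received samples: the +0.2 sample is really a 0 pulled
# high by residual ISI from earlier 1s; the feedback resolves it.
rx = [0.95, 0.2, -0.2, -0.9, 0.1]
print(dfe(rx))   # [1, 0, 0, 0, 1]
```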
CDR has to locate the sampling point, the "center" of the waveform. This is the essential starting point for adaptive equalization. It has to tolerate a certain amount of jitter, but also eliminate low-frequency jitter as the input signal drifts. There are two main types of CDR, known as bang-bang and Mueller-Müller. The CDR identifies the eye center, filters out uncorrelated (high-frequency) jitter, and rejects low-frequency jitter (by moving the clock window over the longer term). Over a number of samples (such as 16 or 48), the CDR sees whether most samples are early or late (that is, whether the clock transition is too early in the window, or too late), and then adjusts the clock phase accordingly. This used to be done with linear analog circuits, but these days it is all digital.
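The bang-bang voting logic described above might be sketched like this (the window size, step size, and phase encoding are all illustrative):

```python
# Sketch of bang-bang CDR phase logic: collect early/late votes from the
# data transitions over a window, then bump the sampling phase one step
# in the majority direction.
def cdr_update(phase, votes, window=16, step=1):
    """votes: list of +1 (clock late) / -1 (clock early) decisions."""
    early_late = sum(votes[:window])
    if early_late > 0:
        phase -= step     # majority says the clock is late: move earlier
    elif early_late < 0:
        phase += step     # majority says the clock is early: move later
    return phase          # a tie leaves the phase alone

votes = [+1] * 10 + [-1] * 6       # 10 "late" votes vs 6 "early" votes
print(cdr_update(phase=32, votes=votes))   # -> 31: phase nudged earlier
```

Taking one small step per window, regardless of how lopsided the vote is, is what makes it "bang-bang" rather than linear.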
The key tradeoff is between eye-opening and jitter tolerance. If the adjustments are made fast, then there is a lot more jitter tolerance at the expense of a smaller eye. Slower adjustments can result in a much larger eye but at the expense of jitter tolerance.
The adaptive equalizer chain compensates for channel loss, temperature, semiconductor process corner, and voltage, but not for crosstalk, high-frequency power supply noise, or EMI. Each equalizer adapts at its own rate, every N bits, with the equalizer nearest the package, the AGC, changing the fastest, and the one nearest the latch, the DFE, changing the slowest.
One obvious question is how these equalizer parameters are initialized. This is where AMI/IBIS is used with a back-channel from receiver to transmitter (that doesn't exist in reality) to close the loop between transmitter and receiver.
I said above that the equalizers adapt to the PVT (process, voltage, temperature) corner. But actually, these parameters don't exist in the model. There is an assumption that if the adaptation works for all the other effects, then it will work for process corner too, but that can be "a bit of a stretch", and so it is good to actually verify that this is true.
The half-day presentation covered a lot of detail about how you actually run these simulations in practice, such as injecting jitter, but that is beyond the scope of a blog post like this (and it is already long enough), so I'll leave it there.