Over a year ago I wrote a blog entitled, "Whatever
Happened to Statistical Timing?" A Cadence webinar earlier this week
provided one answer - it's becoming part of a continuum of capabilities for
on-chip variation (OCV) analysis, and will most likely join the mainstream at
32nm and below.
Entitled "Getting back timing margins: Traditional OCV
alternatives," the webinar was the first in a series of Digital
Implementation and Signoff webinars that will be rolling out over the next
few weeks. In this initial webinar Mike Jacobs, senior product manager at
Cadence, started with a basic explanation of OCV and then described a range of
potential analysis capabilities.
As the name suggests, OCV represents variation across the
die itself. This includes random variations (doping fluctuations, gate oxide
thickness) and systematic variations (lithography, CMP). OCV is serious
business at advanced process nodes. It can cause chips to fail, or result in
excessive margins that prevent chips from meeting their performance or power
targets. Traditional OCV uses a lumped de-rating factor and is, as Mike said,
"inaccurate and pessimistic."
A Range of Solutions
Mike described a range of solutions that might be
appropriate at different process nodes (numbers mine):
0 - No OCV at all. Probably okay at 130nm.

1 - Traditional OCV with a "lumped" de-rating factor, which typically applies only to clock nets. May work down to 65nm.

2a - Location-based OCV (LOCV). The foundry gives you a set of tables, most likely for stage-based (random) variation, with de-rating factors for different types of paths and cells. LOCV uses logic level, cell complexity, and physical location to select optimal de-rating factors. A good approach for 65nm, possibly applicable down to 28nm.

2b - Advanced stage-based OCV (AOCV). The main difference from LOCV is that you can apply weighting factors to individual cells in a design. Again, 65nm-28nm. (A small sketch of depth-dependent de-rating follows this list.)

3 - Design-specific statistical OCV. If you have access to a statistical library and don't want to run a full statistical static timing analysis (SSTA), you can use statistical analysis to generate OCV de-rating factors and then view the same timing reports you'd see with static timing analysis, only more accurate. The most likely use is at 32/28nm.

4 - Statistical static timing analysis. Full SSTA returns statistical distributions rather than best-case/worst-case absolute numbers, potentially saving dozens or hundreds of corner-case runs. In addition to a highly accurate OCV analysis, you can determine what percentage of chips will yield at a given frequency. (A toy illustration also follows below.) While used by a few early adopters today, the "sweet spot" for full SSTA will be at 32nm and below.
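As promised above, here is a small sketch of stage-based de-rating (options 2a/2b) with invented table values. Real flows read foundry-characterized tables, and AOCV additionally lets you weight individual cells; the key idea shown here is simply that the derate shrinks as the path gets deeper, because independent per-stage random variation partially cancels along a long path:

```python
# Sketch of stage-based (AOCV-style) derate selection, with invented table values.
# A real flow reads foundry-characterized tables; these numbers are illustrative.

import bisect

# Late derates indexed by path depth (number of stages). Deeper paths get a
# smaller derate because independent per-stage variation partially cancels.
DEPTHS      = [1,    2,    4,    8,    16,   32]
LATE_DERATE = [1.15, 1.11, 1.08, 1.06, 1.04, 1.03]

def aocv_late_derate(depth):
    """Pick the derate for the nearest tabulated depth at or below `depth`."""
    i = bisect.bisect_right(DEPTHS, depth) - 1
    return LATE_DERATE[max(i, 0)]

def derated_path_delay(stage_delays):
    """Apply one depth-dependent derate to the whole path (late/setup side)."""
    return sum(stage_delays) * aocv_late_derate(len(stage_delays))

short_path = [0.20, 0.25]              # 2 stages  -> 1.11 derate
long_path  = [0.06] * 20               # 20 stages -> 1.04 derate
print(derated_path_delay(short_path))  # 0.45 ns * 1.11 ~= 0.50 ns
print(derated_path_delay(long_path))   # 1.20 ns * 1.04 ~= 1.25 ns
```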
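And here is a toy Monte Carlo illustration of the kind of answer full SSTA (option 4) gives. Real SSTA propagates delay distributions analytically through the timing graph; this sample-based version, with invented sigma values, just shows that the output is a yield-versus-frequency answer rather than a single worst-case pass/fail:

```python
# Toy Monte Carlo illustration of the kind of answer SSTA provides.
# Real SSTA propagates distributions analytically; all numbers here are made up.

import random

def sample_path_delay(stage_means, sigma_frac=0.05):
    """One sample of a path delay with independent per-stage random variation."""
    return sum(random.gauss(mu, sigma_frac * mu) for mu in stage_means)

def yield_at_period(stage_means, period, n=100_000):
    """Fraction of sampled 'chips' whose critical path fits in `period`."""
    passing = sum(sample_path_delay(stage_means) <= period for _ in range(n))
    return passing / n

stages = [0.06] * 20                  # 20-stage critical path, 1.2 ns nominal
for period in (1.20, 1.23, 1.26):
    print(period, yield_at_period(stages, period))
# Prints roughly 0.5 at the nominal period and climbs toward 1.0 as margin is
# added -- a distribution of outcomes, not a single worst-case corner result.
```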
So rather than a new "tool" or methodology, SSTA is just an
additional capability -- albeit a very powerful one -- for analyzing variability.
This may help explain why SSTA, once seen by some observers as the "next big
thing" in IC design, has kind of faded from public view in the past few years. The
Cadence Encounter Timing System has quietly supported full SSTA for some time, along with the
other capabilities listed above.
Upcoming webinars in the Digital Implementation and Signoff
Webinar Series include the following. All take place at 10:00 am Pacific time.
Registration and information are available here.
The webinars will be archived here
roughly one week after each live presentation.
I'm curious if gate-based and cell-based SSTA tools will give way to transistor-level SSTA tools because of accuracy concerns. What do you think?
Dan -- I think there could be a place for both gate-level and transistor-level SSTA, but I would expect the latter only on a handful of the most critical paths.