
Paul McLellan

A History of Timing Signoff

26 Aug 2021 • 5 minute read

Today, when all timing signoff is done using static timing analysis with a tool such as the Tempus Timing Signoff Solution, you have to be a certain age to remember that static timing wasn't always around. In fact, at VLSI Technology we developed one of the first static timing engines, QTV, and we were the first semiconductor company to use static timing for signoff of chips. So what came before QTV? Well, in the case of VLSI, QSIM came before, and before that TSIM, and before that VSIM. Let's go back to the beginning.

Simulation

Before there was static timing, signoff was done using simulation. In fact, simulation was done before there was timing at all. The earliest of the simulators I just mentioned, VSIM, was a unit-delay simulator. That is, every transition took one time unit, and if you wanted to make sure that the more critical paths in the design actually worked fast enough for this to be reasonable, then you used SPICE (or ASPEC), which were circuit simulators. VSIM predated standard cells, gate-array macros, and the like. It was a transistor-level simulator. Given the non-existent sense of timing and the need for the designer to run circuit simulation on any important paths, digital design then was much more like analog design still is today. By the way, to give a sense of the timeline, SDA, the forerunner of Cadence, had not yet been founded. This is the early 1980s.
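To make "unit delay" concrete, here is a toy sketch in Python (my own illustration with a made-up two-gate netlist, nothing to do with VSIM's actual implementation): every element's output changes exactly one time unit after its inputs, so the simulator gives you the logical ordering of events but says nothing about real nanoseconds.

```python
# Toy unit-delay logic simulator (illustrative only, not VSIM).
# Every gate output changes exactly one time unit after its inputs,
# so the result has logical ordering but no real (nanosecond) timing.
from collections import defaultdict

# netlist: output net -> (logic function, list of input nets) -- made up
netlist = {
    "n1": (lambda a, b: not (a and b), ["a", "b"]),   # NAND
    "n2": (lambda a, b: not (a or b),  ["n1", "c"]),  # NOR
}

def unit_delay_sim(netlist, stimulus, time_units=3):
    values = defaultdict(bool)
    values.update(stimulus)                 # primary input values
    history = []
    for t in range(1, time_units + 1):
        # evaluate every gate from the previous values; the new value
        # takes effect one time unit later -- the "unit delay"
        new = {out: bool(fn(*(values[net] for net in ins)))
               for out, (fn, ins) in netlist.items()}
        values.update(new)
        history.append((t, new))
    return history

for t, nets in unit_delay_sim(netlist, {"a": True, "b": True, "c": False}):
    print(f"t={t}: {nets}")
```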

The next generation was TSIM. This was still a transistor-level simulator, but now there was timing. The interconnect model was very simple. Resistance was not modeled. Lateral capacitance was not modeled either since, in those days, interconnect lines were thin (vertically) and far apart, so it was still second order. Capacitance to the substrate, and where one piece of metal crossed over another, was modeled. Gate capacitance was modeled, of course, since it was significant; we were still in the happy days of Dennard scaling. Since resistance was not modeled, there was a single transition time for a node: when the output switched, after a delay depending on the total capacitance of the node, all the signals on the gates (in the transistor sense, not the NAND-gate sense) would transition. Slopes were not modeled; signals transitioned with an instantaneous rise or fall.
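A minimal sketch of that style of delay model, assuming an effective drive resistance for the switching transistor (the parameter names and numbers here are invented for illustration, not TSIM's actual code):

```python
# Illustrative lumped-capacitance delay model in the spirit of TSIM
# (not the actual TSIM code). Wire resistance and slopes are ignored:
# delay depends only on the driver's assumed effective strength and
# the total capacitance lumped on the node, and everything connected
# to the node sees the transition at the same instant.

def node_delay_fs(drive_resistance_ohms, capacitances_ff):
    """Delay of a node as the product of the driver's assumed effective
    resistance and the total lumped capacitance (ohms x fF -> fs)."""
    c_total = sum(capacitances_ff)
    return drive_resistance_ohms * c_total

# One switching node driving three transistor gates through some metal:
caps_ff = [
    12.0,            # capacitance to the substrate
     3.0,            # crossover capacitance where metal crosses metal
     8.0, 8.0, 8.0,  # gate capacitance of the three driven transistors
]
t_fs = node_delay_fs(drive_resistance_ohms=5000, capacitances_ff=caps_ff)
print(f"node switches (all fanout at once) after ~{t_fs / 1000:.0f} ps")
```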

For TSIM, the interconnect capacitance values and the transistor sizes came from our circuit extractor VLSIExtract, which I wrote. I once asked someone in our technology development (TD) department how accurate the values I'd been given for the extractor technology file were. "Pretty accurate, within about 30%." This was not because TD couldn't measure them well; that was the variability from wafer to wafer (5-inch wafers, by the way). And no, we didn't model that; we took what we guessed to be worst-case values. The big issue in those days was whether the critical paths were fast enough, so using worst-case timing was appropriate.

The first real gate-level simulator we had was QSIM (Quick Simulator). This was no longer transistor-based; it worked with standard cells (or gate-array macros). Slopes were not modeled during simulation but were during standard cell characterization, which was done with SPICE. An input slope was picked for the whole library and applied to the inputs of the gate, one at a time, and the output slope was measured. The "time" of the gate (for that input) was the delta between when the input slope reached 50% and when the output slope reached 40% or 60% (depending on whether it was falling or rising). In a few rare, embarrassing cases, the output would reach its threshold before the input reached 50%, so the gate showed up with negative timing, which we just set to zero. I forget whether we had setup/hold checks yet, or whether we just simulated how the gate would behave.
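Here is a sketch of that measurement, reconstructed from the description above rather than from any QSIM-era code. The waveforms are made-up (time, voltage-fraction) samples standing in for SPICE output, and I have assumed the parallel reading of the thresholds: 40% for a falling output, 60% for a rising one.

```python
# Sketch of the characterization measurement described above (my
# reconstruction, not QSIM code). Delay is the time from the input's
# 50% crossing to the output's 40%/60% crossing, clamped at zero.

def crossing_time(waveform, threshold):
    """Linearly interpolate the first time the waveform crosses threshold.
    waveform is a list of (time, voltage-fraction) samples."""
    for (t0, v0), (t1, v1) in zip(waveform, waveform[1:]):
        if (v0 - threshold) * (v1 - threshold) <= 0 and v0 != v1:
            return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)
    raise ValueError("threshold never crossed")

def cell_delay(input_wave, output_wave, output_rising):
    out_threshold = 0.6 if output_rising else 0.4   # assumed mapping
    delay = crossing_time(output_wave, out_threshold) - crossing_time(input_wave, 0.5)
    return max(delay, 0.0)   # negative "delays" were simply set to zero

# Rising input, falling output (e.g. an inverter); times in ns, made up:
input_rise  = [(0.0, 0.0), (0.1, 0.25), (0.2, 0.5), (0.3, 0.75), (0.4, 1.0)]
output_fall = [(0.0, 1.0), (0.25, 0.8), (0.35, 0.4), (0.5, 0.1)]
print(cell_delay(input_rise, output_fall, output_rising=False))  # ~0.15 ns
```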

Static Timing Verification

We then created QTV, for Quick Timing Verifier. This might have been the first commercial static timing tool, although it was built on work done in academia. Quad's Motive came into existence somewhere around the same time. I can't find much about it anymore, such as what models it used.
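For readers who have only ever known STA, the core idea fits in a few lines. The toy sketch below (my illustration, with a made-up timing graph; it has nothing to do with QTV's real implementation) propagates latest arrival times through the gates in topological order, which is what lets a static tool cover every path without needing any simulation vectors.

```python
# Toy static timing traversal (illustrative only, not QTV or Tempus).
# Latest arrival times are propagated forward through a made-up timing
# graph in topological order; no input vectors are needed.
from collections import defaultdict

# edges: (from_node, to_node, delay_ns)
edges = [
    ("IN", "U1", 0.20), ("IN", "U2", 0.35),
    ("U1", "U3", 0.40), ("U2", "U3", 0.15),
    ("U3", "OUT", 0.25),
]

def latest_arrival(edges, start="IN"):
    fanout, indegree, nodes = defaultdict(list), defaultdict(int), set()
    for u, v, d in edges:
        fanout[u].append((v, d))
        indegree[v] += 1
        nodes.update((u, v))
    arrival = {n: float("-inf") for n in nodes}
    arrival[start] = 0.0
    ready = [n for n in nodes if indegree[n] == 0]   # Kahn's topological order
    while ready:
        u = ready.pop()
        for v, d in fanout[u]:
            arrival[v] = max(arrival[v], arrival[u] + d)
            indegree[v] -= 1
            if indegree[v] == 0:
                ready.append(v)
    return arrival

print(latest_arrival(edges)["OUT"])   # ~0.85 ns worst-case path delay
```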

The big pioneering moment was when VLSI Technology decided to be the first ASIC company to switch timing signoff from gate-level simulation to static timing analysis. As I put it in my blog post OpenROAD: Open-Source EDA from RTL to GDSII:

I assume there were earlier static timing projects in academia but QTV was the first one to get used in earnest. VLSI led the industry in signing off all ASIC designs with static timing instead of gate-level simulation, starting in about 1990. Of course, all designs are signed off with static timing today, but back then it was considered "out there". Even by Tom Spyrou, who was a little worried that all VLSI silicon was being signed off using the tool that he had created almost single-handedly. I created VLSI's first circuit extractor and remember thinking something similar, that everyone's timing depended on the parasitics and transistor sizes my code calculated.

Tom would go on to create PrimeTime, which was the standard timing signoff tool for foundries for some time. Cadence then got serious about producing an industrial-strength timing signoff tool aimed at the largest designs running the largest datacenters. Tempus was released in 2013. I was still at SemiWiki at the time, but you can read my coverage of the announcement event in my post Tempus: Cadence Takes on PrimeTime.

As always, it is almost comical how small "big" designs were not all that long ago. The example design that Cadence used to show how good Tempus was at launch consisted of:

  • 28nm, 44M instances, 12 views
  • Existing flow: 10 days to fix hold violations, could only work on 7 views due to capacity limitations
  • Tempus: 99.5% of hold violations fixed in one ECO iteration with no degradation in setup timing. Before using Tempus there were 11,085 timing violations; afterwards there were just 54

I have been at HOT CHIPS for the early part of this week. The biggest chip discussed is, I think, Intel's Ponte Vecchio, at over 100 billion transistors on multiple tiles from multiple processes. Plus Cerebras (not to be confused with our new deep-learning-aware physical design tool Cerebrus) presented the latest iteration of systems built using their "chips" that are actually whole wafers. Transistors are so 2020; it has multi-millions of cores!

Learn More

The Tempus Timing Signoff Solution Product Page has links to download a datasheet and a white paper. There are also some videos if you prefer those.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
