
Paul McLellan

Omnia Simulation in Tres Partes Divisa Est

17 Aug 2016 • 3 minute read

"Omnia Gallia in tres partes divisa est" were the opening words to Julius Caesar's account of the Gallic war. All Gaul is divided into three parts. He went on to explain that they were the Belgae in the north east, the Aquitani south of the Garonne river, and the Celts in between, in the region that includes Normandy and Brittany. Well, simulation is divided into three parts, too. Four if you count gate level.

The zeroth phase was gate-level simulation before there were any hardware description languages. Everyone in EDA, most particularly the DMV, had their own simulator. The DMV were Daisy, Mentor, and Valid. Daisy went bankrupt, Mentor still exists, and Valid was acquired decades ago by Cadence. I was at VLSI Technology in that era, and we had two simulators. First was VSIM, which didn't even have timing. It seems amazing to me now that we could design chips without timing (and there was no STA back then, that came a lot later). Then, ta da, we added timing and had TSIM. Being a software guy, I had to learn that digital is really an illusion. Signals are actually analog, so you have to make a deliberate decision as to when the input counts as having transitioned and when the output does. We used 50% of the swing on the input and 60% (or 40% for falling signals) on the output. Even so, you could occasionally build a gate with negative delay, where the output met its threshold before the input did.
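The threshold scheme above can be sketched in a few lines. This is a hypothetical illustration, not VSIM or TSIM code: all the function names and waveforms are made up. It times the input at 50% of the supply swing and the output at 60% (rising) or 40% (falling), and shows how a slow input ramp driving a fast output edge can produce a negative delay.

```python
# Hypothetical sketch of threshold-based delay measurement, as described above.
# Input transition timed at 50% of the swing; output at 60% (rising) or 40% (falling).

def crossing_time(times, values, threshold):
    """First time a piecewise-linear waveform crosses threshold (interpolated)."""
    points = list(zip(times, values))
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if v0 != v1 and (v0 - threshold) * (v1 - threshold) <= 0:
            return t0 + (threshold - v0) * (t1 - t0) / (v1 - v0)
    return None

def gate_delay(t_in, v_in, t_out, v_out, vdd=1.0, output_rising=False):
    t_i = crossing_time(t_in, v_in, 0.5 * vdd)        # input at 50%
    out_thr = (0.6 if output_rising else 0.4) * vdd   # output at 60% / 40%
    return crossing_time(t_out, v_out, out_thr) - t_i

# A slow input ramp with a fast falling output that starts switching early:
# the output reaches its 40% threshold before the input reaches 50%.
delay = gate_delay(
    t_in=[0.0, 10.0], v_in=[0.0, 1.0],                        # input ramps 0 -> 1
    t_out=[0.0, 2.0, 4.0, 10.0], v_out=[1.0, 1.0, 0.0, 0.0],  # output falls 2 -> 4
)
print(delay)  # negative: output crossed its threshold before the input did
```

With these made-up waveforms, the input crosses 50% at t=5 but the output crosses 40% at t=3.2, giving a "delay" of -1.8.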

The big change came when hardware description languages started to be developed. The most significant was Gateway Design Automation's Verilog. It was created by Phil Moorby, who was just recently inducted into the Computer History Museum's Hall of Fame. I put on my tuxedo and went along to shake his hand a few months ago. Cadence acquired Gateway in 1989 for what, at the time, seemed a very high price. In retrospect, it has to be one of the best EDA acquisitions ever. You can see from the above page from Phil's notebook that we might all be using SystemLogol or SystemValgol today.

When Verilog was first developed, it was interpreted. I don't know the details of Verilog-XL but I assume the code was compiled into bytecodes and an interpreter ran them, like early versions of Java. That was the first era of simulation, the interpreted era. Of course, Verilog could also run gate-level simulations. In fact, part of its early success was that it was the fastest gate-level simulator of its era, so even if you didn't care about the Verilog language, it was the best signoff simulator for the ASIC designs at that time.

In his acceptance speech, Phil praised the team at Chronologic for developing VCS and thus driving the technology forward. VCS read in the Verilog, compiled it into C, and then compiled the C, thus creating something that ran much faster. Cadence created NC-Verilog (the NC stood for Native Compiled) that cut out the C step and read in Verilog and turned it straight into a raw stream of CPU instructions. Synthesis came of age and designs continued to get bigger and bigger. This was phase 2, the compiled simulation phase.
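The interpreted/compiled distinction can be sketched with a toy example. This is purely illustrative, not how Verilog-XL, VCS, or NC-Verilog actually work internally: the interpreted style walks a netlist data structure and re-evaluates each expression on every pass, while the compiled style translates the netlist once into straight-line code and then just runs it, analogous to Verilog going through C to machine code.

```python
# Toy illustration (hypothetical, not any real simulator's internals).
# The netlist is two combinational assignments: y = a & b, then z = y | c.
netlist = [("y", "s['a'] & s['b']"),
           ("z", "s['y'] | s['c']")]

def interpret_step(netlist, s):
    """Interpreted era: walk the netlist, re-evaluating each expression."""
    for name, expr in netlist:
        s[name] = eval(expr)          # parsed and dispatched on every evaluation
    return s

def compile_step(netlist):
    """Compiled era: generate straight-line code once, then just call it."""
    src = "def step(s):\n"
    src += "".join(f"    s['{name}'] = {expr}\n" for name, expr in netlist)
    src += "    return s\n"
    env = {}
    exec(src, env)                    # one-time translation, like Verilog -> C
    return env["step"]

step = compile_step(netlist)
inputs = {"a": 1, "b": 1, "c": 0}
assert interpret_step(netlist, dict(inputs)) == step(dict(inputs))  # same results
```

Both styles compute the same values; the compiled one pays the translation cost once instead of on every evaluation, which is where the speedup came from.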

We are now at the third phase. Performance of chips is no longer dictated by how much you can crank up the clock rate; the power constraints are too severe. Instead, the performance increase has to come from parallel architectures and multi-core. So lots of clock domains. To simulate these systems, we need to build a parallel simulation engine in order to cope with... tell me if you have heard this before... speed and capacity. That is what Rocketick's RocketSim did, and is why Cadence acquired them.

Each of the phases, interpreted, compiled, and parallel, has lasted about 20 years. Well, not parallel, that is just starting. But it is the basic foundation of simulation for the next couple of decades. Of course we don't know what the future will hold: quantum computing, biological system computing, something even newer. But the characteristic of all of these is to look at a problem in all of its parallel conditions at once, and so we will need simulation systems that can do the same. In the nearer term, the challenges are high resilience and redundancy. Remember, we can't increase the clock speed, so the performance increase (in the SoCs) has to come from more parallel structures. Managing power is the limiting factor to everything in semiconductor design. Managing parallel structures is the limiting factor in simulation.

Next: Palladium and Protium Platforms, the Hardware Twins

Previous: A Perspective on Perspec