Paul McLellan

Computational Logistics

13 Aug 2020 • 3 minute read

General Omar Bradley famously said: “Amateurs talk strategy. Professionals talk logistics.” And Napoleon (perhaps) said: “An army marches on its stomach.”

That's not to underestimate other aspects of armies, such as weapons and ammunition. But the Battle of Suomussalmi, in the Winter War between Finland and Russia, proved the point. The Russians were largely bogged down on the forest road, tanks and other vehicles with field kitchens in between. The Finns, on Nordic skis and in white camouflage, would swoop in unseen until the last moment, ignore the tanks, and destroy the field kitchens before vanishing back into the forest, where the Russians, having no skiing skills, could not follow. By destroying their food supply, the Finns did to the Russians in 1939 what the Russians did to Napoleon in 1812.

UPS has a similar proposition. It has vans, trucks, and planes. A truck is not better than a plane; it is different. But it is UPS's logistics in marshaling all these forces that is so important. UPS thinks of itself as a logistics company rather than an airline, despite having a big fleet of planes (not to mention 120,000 trucks). UPS has a phrase, "package throughput," which is engines times logistics. The best engines. The best logistics.

In verification, we need best-in-class engines. And at Cadence we have four: Xcelium for simulation, JasperGold for formal, Palladium Z1 for emulation, and Protium X1 for FPGA prototyping. These all have different purposes. But it is the computational logistics that makes everything work together. One of the big challenges in verification is to do everything just once. You don't need to run the same vectors in emulation if you've already run them in simulation. You don't need to use simulation to check an assertion if you've already formally checked it in JasperGold.
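To make "do everything just once" concrete, here is a minimal sketch of the bookkeeping a logistics layer has to perform: merge what each engine has already covered or proved, and schedule only what remains. All the engine results and coverage target names below are made up for illustration.

```python
# Minimal sketch: merge coverage already attained by each engine, then keep
# only the targets no engine has hit yet. All names here are hypothetical.

simulation_covered = {"fifo_full", "fifo_empty", "retry_path"}
formally_proved    = {"no_overflow_assert", "handshake_assert"}
emulation_covered  = {"os_boot", "dma_burst"}

all_targets = {
    "fifo_full", "fifo_empty", "retry_path", "dma_burst",
    "no_overflow_assert", "handshake_assert", "os_boot", "pcie_link_up",
}

already_done = simulation_covered | formally_proved | emulation_covered
remaining = sorted(all_targets - already_done)
print(remaining)  # ['pcie_link_up'] -- the only work left to schedule
```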

So verification throughput is like the UPS case: engines times logistics. The best engines, the best logistics.

The primary computational logistics tool is vManager. I wrote about it recently in my post vManager: One Manager to Rule Them All and also in Are We There Yet? Metric-Driven Signoff (which covered an STMicroelectronics presentation from CDNLive India a couple of years ago, but still stands up well today).

Yesterday, at CadenceLIVE Americas, Anirudh announced another tool in the computational logistics arsenal, Xcelium ML. You can read about it in more detail in my post Xcelium ML: Black-Belt Verification Engineer in a Tool. It uses machine learning to provide much more fine-grained guidance of randomized simulation, achieving the same coverage with as little as 20% of the vectors.
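As a toy illustration only, and not Xcelium ML's actual algorithm, the idea of ML-guided regression compression looks something like this: score each candidate random seed with a learned model and simulate only the fraction predicted to reach new coverage. The scorer here is a random stand-in for a model trained on earlier regression runs.

```python
# Toy sketch of ML-guided vector selection (hypothetical, not the real tool).
import random

random.seed(0)

def predicted_new_coverage(seed):
    # Stand-in for a trained model scoring a stimulus seed.
    return random.random()

candidate_seeds = list(range(1000))      # the full random regression
ranked = sorted(candidate_seeds, key=predicted_new_coverage, reverse=True)
to_run = ranked[: len(ranked) // 5]      # keep only the top 20% of vectors

print(f"Simulating {len(to_run)} of {len(candidate_seeds)} seeds")
```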

We already have machine learning in formal with JasperGold Smart Proof. For more details on that see my post from last year's Jasper User Group event, JasperGold: the Next Generation.

For complex system-level analysis, we have Perspec, which supports the Portable Stimulus Standard (PSS). For more details on Perspec and PSS, see my posts Portable Stimulus and Correct Designs and MediaTek's Experience with Perspec.

The focus of all this computational logistics driving verification is to find the most bugs per dollar per compute-day. It is an exaggeration to say that a run that doesn't find a bug is a run wasted, but ultimately everyone knows that the bugs are there, and finding them as fast as possible (using the least compute power) is what matters in moving towards verification closure. With good logistics and good engines, the design team can focus on verification throughput.
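To make that metric concrete, here is a worked example with invented numbers (nothing here comes from Cadence data):

```python
# Hypothetical numbers, purely to illustrate "bugs per dollar per compute-day".
bugs_found   = 12      # bugs flushed out by a regression
compute_days = 3.0     # compute time consumed
cost_dollars = 1500.0  # what that compute cost

throughput = bugs_found / (cost_dollars * compute_days)
print(f"{throughput:.4f} bugs per dollar per compute-day")  # 0.0027
```

Better engines raise the numerator per run; better logistics stop you paying the denominator twice for the same work.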

The workhorse for verification remains simulation, although formal approaches are increasing, and emulation and FPGA prototyping are used when huge amounts of throughput are required, such as booting the operating system and debugging software. As you can see from the table below, simulation is widely used across the industry, with many segments where 9 out of 10 companies use it.

[Table: simulation adoption by industry segment]
And now Xcelium ML pushes performance up again for random simulations, where machine learning increases performance by up to 5X.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.