Tags: DAC, ICE, System Design and Verification, Palladium, SoC, FPGA

We and Our Competitors Agree (Well, Almost!)

12 Nov 2009 • 2 minute read

It’s rare in EDA to see competitors agreeing, but an interesting article in EE Times Europe this week caught my eye, written by Lauro Rizzatti, EVE’s VP of Marketing. Lauro discussed a survey EVE ran during DAC, asking customers how they felt about the current state of hardware-assisted verification, what their priorities were, and so on.

One paragraph really stood out (emphasis mine): “More interesting was the ranking of six criteria in selecting the next hardware-assisted verification platform, including run-time performance, compilation performance, visibility into the design, in-circuit emulation (ICE), four-state support and price. Visibility into the design and compilation performance scored high, but run-time performance and price finished close behind.”

How ironic that he named exactly those criteria on which Palladium has competed for years, and consistently won! I could launch into a lecture on the strengths of Palladium, but I won’t (you can see the marketing literature here: http://www.cadence.com/products/sd/palladium_series/pages/default.aspx). But I do want to point out two major criteria missing from Lauro’s survey that are arguably more important than any of the six he mentioned. The first is flexibility/scalability. Whenever I talk to customers about to make a major investment in hardware-assisted verification, the key question everybody asks is “How can I maximally leverage this?” To maximize the leverage from one’s investment in hardware-assisted verification, one needs:

  1. Flexible capacity – i.e. the ability to change system gate capacity and configuration without having to change or re-lay-out boards, wrestle with timing problems, etc.
  2. Flexible loading – i.e. the ability to change the number of users
  3. Flexible usage – i.e. the ability to migrate between simulation, acceleration, and emulation; between directed tests and metric/coverage-driven verification; between different levels of abstraction; between hardware debug and software debug; etc. (This is one of the areas where Cadence invests the most, by far.)
  4. Portfolio of extensions/accessories – i.e. TBA VIPs, speed-rate adapters/bridges, etc.

The second key criterion is what many people call “debug loop time” (for lack of a better term, it’s the time required to find and fix a bug and resume whatever you were doing). This is actually more important than run-time performance per se; besides visibility into the design, controllability of the debug/run-time environment is also critical, as the back-of-envelope sketch below illustrates.
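
Here is a toy model, in Python, with purely hypothetical numbers of my own (nothing from EVE’s survey or Cadence data). It simply illustrates that total verification time scales with the whole debug loop (compile, run, find/fix), not with raw run-time alone:

  # Toy model: time to close out a set of bugs, where each bug costs one
  # full iteration of the debug loop (compile + run + find/fix).
  # All figures are hypothetical, for illustration only.

  def total_hours(bugs, compile_hrs, run_to_bug_hrs, find_fix_hrs):
      """One debug-loop iteration per bug: recompile, run to the
      failure, then find and fix it before resuming."""
      return bugs * (compile_hrs + run_to_bug_hrs + find_fix_hrs)

  # Platform A: fastest raw run-time, but slow compiles and poor visibility.
  a = total_hours(bugs=20, compile_hrs=8, run_to_bug_hrs=1, find_fix_hrs=6)

  # Platform B: runs 2x slower, but compiles quickly and offers enough
  # visibility/controllability to keep each find/fix cycle short.
  b = total_hours(bugs=20, compile_hrs=2, run_to_bug_hrs=2, find_fix_hrs=2)

  print(a, b)  # 300 120

Crude as it is, the arithmetic shows why compilation performance, visibility, and controllability can trump raw cycles per second once you are iterating on real bugs.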

The EVE/Lauro Rizzatti article mentioned neither of these two criteria. Maybe that’s because FPGA-based architectures are known to have issues with both from time to time? (Palladium’s architecture is processor-based.)

However, I think it was rather self-serving of EVE to neglect these other factors. Customers deserve a balanced picture. Ultimately, what customers really care about is: “How fast can I verify my current and future SoCs to a desired level of confidence?” and “How much will it cost me, now and in the future?”

Every customer has different needs. Some solutions are a better “fit” than others. Unfortunately, Consumer Reports doesn’t evaluate EDA tools (yet), so I would urge customers to do their homework and evaluate carefully.

Steve Svoboda
