As heralded in a prior post, we recently hosted some "alumni" of our Techtorials, along with other area customers, in a "deep dive" workshop focused exclusively on Metric Driven Verification (MDV) here on the San Jose campus. One of the architects of our MDV flow, and the creator of this specific workshop, is my colleague John Nehls. I've asked John to comment on how the program unfolded and to speak about the state of MDV in general.
Following John's remarks, I have some video comments on the event from our colleague Roxan Saint-Hilaire, an Application Engineer here in Silicon Valley, who has co-hosted this workshop with John. The import: we plan to roll out these workshops around the world in 2009, so the goal of this post is to give you a flavor of these events and to welcome you to sign up when they come to your area, ask us to set up an event in your area, or even host one on-site at your company.
[JOE] John, welcome! Before we dive into a recap of the event, please tell the readers a bit more about your background.
[JOHN] Thanks! My background is originally in design, with an EE degree from the University of Florida, followed by 5 years doing ASIC design in the DSP & communications space for Harris in Melbourne, Florida. From there I branched into EDA, initially supporting ASIC design and verification, then shifting into the verification area full time, where I've been for over 10 years now.
[JOE] What is metric driven verification ("MDV"), and how does it improve upon the more familiar coverage driven verification ("CDV") approach?
[JOHN] As with coverage driven verification, you plan functional coverage goals and measure progress against them. However, beyond that you also track the status and trends for a host of other important completion and efficiency metrics in the verification process versus the project's planned milestones. These can include details on failures and bug cycles, the effectiveness of simulations, "churn" rates in the code base, low-power compliance, closure rates, license and simulation farm utilization -- basically whatever metrics the team finds valuable for tracking the progress of a given project.
Note that this means metric collection & reporting for MDV extends to metrics outside the traditional simulation realm, including inputs from formal analysis tools, hardware/software co-verification tools (like our own "ISX" option to IES), and on into the ESL space and/or acceleration & emulation. Finally, in case it isn't obvious, this metric-driven approach scales from simple block-level verification all the way up to the system level, where data from each block, cluster, chip, and system is aggregated into a hierarchical roll-up.
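To make the hierarchical roll-up concrete, here is a minimal sketch of the idea in Python. The class and metric names are hypothetical illustrations, not the API of any actual MDV tool, which handles this aggregation automatically:

```python
# Hypothetical sketch: coverage counts collected per block are
# aggregated up through cluster, chip, and system levels.

class Node:
    """One level in the verification hierarchy (block, chip, system...)."""
    def __init__(self, name, covered=0, total=0):
        self.name = name
        self.covered = covered   # coverage bins hit at this level
        self.total = total       # coverage bins defined at this level
        self.children = []

    def rollup(self):
        """Aggregate covered/total counts from this node and all descendants."""
        covered, total = self.covered, self.total
        for child in self.children:
            c, t = child.rollup()
            covered += c
            total += t
        return covered, total

    def score(self):
        """Rolled-up coverage as a percentage."""
        covered, total = self.rollup()
        return 100.0 * covered / total if total else 0.0

# Build a toy hierarchy: system -> chip -> blocks
system = Node("system")
chip = Node("chip")
alu = Node("alu", covered=80, total=100)
fifo = Node("fifo", covered=45, total=50)
chip.children = [alu, fifo]
system.children = [chip]

print(f"{system.name}: {system.score():.1f}% covered")  # 125/150 -> 83.3%
```

The same roll-up applies to any metric that sums hierarchically, which is what lets a project manager see one number at the system level while engineers drill down to the offending block.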
[JOE] Tell me a little about these events themselves: is there a lot of Q&A before, during, and after each segment? What issues were at the top of the attendees' lists of concerns?
[JOHN] The workshops are very interactive, with many hands-on labs to give people a tangible feel for the material. The "students" pepper us with questions about how they might apply these ideas and methodologies to their specific projects. One thing that's clear from the Q&A is that people's existing planning & management processes and tools -- things like Excel spreadsheets and maybe the occasional wiki bulletin board or intranet page -- are very manually driven, and are becoming harder and harder to maintain as verification increases in complexity. This growing reality underscores one of the main points of the event: regardless of the tools used, rigorous planning up front, at the very beginning of the project, is critical to success. In a phrase, you just can't "wing it" with ad hoc methods anymore.
[JOE] What's the #1 verification challenge these Silicon Valley users were reporting?
[JOHN] I'd have to say that what I just mentioned is the #1 issue I heard from the Silicon Valley group: there is a lot of manually driven verification being done in the Valley, and customers are realizing the limits of manual processes. In a nutshell, the latest design spec/project will not "fit" their current processes, and there is a lot of uncertainty about how to adopt the new technologies and methodologies they know are needed. Put another way, many customers are coming to terms with the idea that they need help in adopting methodologies that get the most out of automation -- where "automation" is defined as something more than just a raft of clever Perl & Tcl scripts and adaptations of Excel spreadsheets.
[JOE] What is the most common misconception or false assumption the students had about a metric driven flow?
[JOHN] Given the familiarity with coverage driven verification, there is a temptation to assume that the metric driven approach we are supporting is merely a matter of aggregating different forms of "traditional" coverage -- code coverage, functional coverage, assertion coverage, etc. -- into one set of statistics. In fact, as noted above, we are going beyond these traditional metrics by adding the ability to take coverage on and/or record data for *arbitrary* metrics of the customer's choosing -- firmware coverage, man-hours, compute cycles -- whatever you, the end user, project manager, or executive management, see fit to track and report over the life of the project. When people try this out for themselves in the workshop, it's a real eye-opener!
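The idea of treating arbitrary, user-defined quantities as first-class metrics alongside traditional coverage can be sketched as follows. The store, metric names, and milestones here are hypothetical illustrations, not the interface of any actual MDV tool:

```python
# Hypothetical sketch: a metric store that records user-defined
# quantities (firmware coverage, engineer-hours, compute cycles) in
# exactly the same way as traditional coverage percentages.

from collections import defaultdict

class MetricStore:
    def __init__(self):
        # metric name -> list of (milestone, value) samples over the project
        self.samples = defaultdict(list)

    def record(self, metric, milestone, value):
        """Log one sample of any metric at a given project milestone."""
        self.samples[metric].append((milestone, value))

    def latest(self, metric):
        """Most recent recorded value, or None if never sampled."""
        return self.samples[metric][-1][1] if self.samples[metric] else None

store = MetricStore()
# Traditional coverage metrics...
store.record("functional_coverage_pct", "week_4", 62.0)
# ...and arbitrary project metrics, tracked through the same mechanism.
store.record("firmware_coverage_pct", "week_4", 38.0)
store.record("engineer_hours", "week_4", 410)

print(store.latest("engineer_hours"))  # 410
```

Because every metric flows through one mechanism, trend reports against planned milestones can mix engineering, coverage, and resource data in a single view.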
[JOE] I know you travel a lot, seeing customers around the country and around the world. This is a very loaded question, and it invites huge generalizations (but I can't help but ask anyway): how would you compare the level of "verification maturity" and experience with coverage & metric driven flows here in Silicon Valley with what you see in other regions?
[JOHN] You are right -- it's really hard to generalize by geographic region -- I see all different levels of "verification maturity" in all regions. I will note, er, generalize, that in the EU verification community many companies have always carved out time, and even had whole groups, focused on process and methodology development. As you would expect, at such customers this leads to a natural alignment with a metric driven approach to verification. By the way, there is a related trend that started in the EU and that I see spreading worldwide: the creation of teams of engineers dedicated exclusively to the task of verification.
[JOE] Finally, when & where will you be running these workshops next?
[JOHN] Our goal for 2009 is to run these workshops on a quarterly basis, and we are happy to hold them on-site at customer locations as well -- just let your friendly local Application Engineer know of your interest.
[JOE] Thanks for this report!
[JOHN] You're welcome! Again, I invite everyone to get in touch and/or contact us about hosting an event in your area.
More to the story: The following video segment is an evaluation of the workshop by our colleague Roxan Saint-Hilaire, an AE here in Silicon Valley. Roxan offered to co-present this instance of the workshop both to provide feedback on the content and to become familiar with the event's flow so he can replicate it on-site for interested customers. Here is Roxan's feedback:
Again, as noted above in the interview: If you are interested in hosting this specific workshop, or workshops on other verification and systems validation technology and methodology topics, please contact your local Cadence Application Engineer (or send me a note to forward your request to the corresponding person in our Field organization).
Finally, to the alumni of these Techtorials and workshops: Please feel free to post your feedback on these events below, or contact me offline.