As the leader of the Cadence OVM development team, I was reading Richard Goering's recent article about the Cadence, Mentor, and Synopsys support for the OVM and VMM class libraries, and I wanted to make sure some key technical points were not lost.
Before I get to that, I have to say I found it interesting that Synopsys does not plan to support the OVM. As Goering reported, "Bartleson said, however, that Synopsys has no intent to support OVM" - I'm not sure how that will sit with the 4,000+ customers who have downloaded the OVM from OVMWorld. Then again, Synopsys is supposedly not supporting the e language either, right Karen (wink, wink), and yet I have heard from multiple customers around the world that Synopsys has told them it is supporting the e language - including a thread at the end of last year on the Verification Guild where several customers discussed VCS e support.
I guess only time will tell, but I would be surprised if Synopsys really does not plan to support the IEEE 1800 SystemVerilog standard within VCS, which is all that is needed for them to support the OVM. Hopefully, if you read Richard's article, it is clear that both Mentor and Cadence are providing a migration path to help customers who have legacy VMM code move easily to the OVM.
When we initially decided to develop the OVM and donate it to the open-source community, one of our main objectives was to provide an open class library and, more importantly, a methodology to enable an industry-wide VIP ecosystem. I hear a lot of people talking about the VMM vs. OVM class libraries, but the class libraries just provide the basic building blocks for constructing reusable verification environments.
Beyond the class libraries, the key to having plug-and-play, reusable VIP is following a complete methodology for constructing the environments. VMM provided some methodology guidelines, but not enough to enable full verification-component reuse: customers have had to add a lot of custom class-library and methodology extensions, which yields customer-specific VMM variant methodologies and in turn limits reuse. While the OVM is new to SystemVerilog, its methodology is based on the most mature and widely used commercial verification methodology today - the eRM, originally developed by Verisity and Verisity customers in 2002 - and on the AVM developed by Mentor Graphics a few years ago (as JL Gray pointed out in his recent blog on this topic).
There are many hundreds of commercial and internally developed reusable eRM verification components in use by customers all over the world, and many examples of customers sharing these components both internally and across companies. With the OVM introduction at the end of last year, SystemVerilog customers have also started realizing these same reuse benefits. The OVM also offers many technical advantages over VMM for enabling better module-to-system and project-to-project verification IP reuse.
The other significant benefit of the OVM is that it was built for multi-language support (not as an afterthought) in order to provide VIP interoperability with SystemC models and eRM verification components (I recently blogged on this here).
Putting all the technical benefits of the OVM aside - when you consider the momentum and the fact that more than two-thirds of the EDA market is supporting the OVM, including support not only for SystemVerilog but also for SystemC and e, I think we are on track to fulfill the objective of enabling an industry-wide VIP ecosystem around the OVM. Unlike Karen, I'm not too concerned about customers getting confused about this...
Harry, you are right that there are many areas where VMM and OVM are incompatible. Beyond the items you mention, there are many other incompatibilities: controlling and coordinating stimulus across multiple interfaces, controlling and printing debug messages, verification-component architecture, DUT error messages, and so on. As I mention above, I also believe that VMM did not do enough to enable full reuse even between different VMM users - it went a long way from having no methodology at all, but it currently does not support the same level of reuse that OVM/eRM customers have experienced over the years. The point I am trying to make in my blog is that if you want to get to the next level of reuse, you should move to OVM rather than making your own custom extensions to VMM, since you will need to change anyway in order to get better VIP reuse and to leverage VIP from the larger industry VIP ecosystem. Since there are customers who have used various aspects of VMM, we now have a way to run that code on the Incisive simulator, and with AE help we now have a path for these customers to migrate to OVM. I never claimed we solved all of the VMM/OVM incompatibilities.
JL, this illustrates one of the key points I am trying to make - yes, VMM shows an "optional" way a user might try to mimic virtual sequences, but with OVM we provide a _standard_ way of modeling stimulus, so every testbench developer uses the same approach. This is key for enabling real plug-and-play reuse. If one user models stimulus one way and another user models it another way, reusing VIP between them is cumbersome. With OVM sequences, we prescribe one way to model stimulus, so regardless of where I get my OVM VIP, it will always have the same kind of stimulus interface. In addition, OVM sequences have much more power and flexibility than VMM scenarios - for example: late generation, which enables reacting to DUT state; unlimited layering, as opposed to just two layers of stimulus in VMM, to support high-level protocol layering (e.g., PCI Express with its three protocol layers); less setup for the user; and no need for explicit paths to the stimulus generators, which inhibit module-to-system reuse - to name a few.
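To make the "standard stimulus interface" point concrete, here is a minimal OVM-style sequence sketch. The transaction and sequence names (`bus_item`, `bus_rmw_seq`) are invented for illustration; only the `ovm_sequence` API and the `ovm_do` macros are the standard pieces being demonstrated.

```systemverilog
// Hypothetical transaction item - fields are illustrative only.
class bus_item extends ovm_sequence_item;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  `ovm_object_utils(bus_item)
  function new(string name = "bus_item");
    super.new(name);
  endfunction
endclass

// A sequence: the one standard OVM way to model stimulus.
// Because every sequence uses this same API, any OVM VIP
// exposes the same kind of stimulus interface.
class bus_rmw_seq extends ovm_sequence #(bus_item);
  `ovm_object_utils(bus_rmw_seq)
  function new(string name = "bus_rmw_seq");
    super.new(name);
  endfunction
  virtual task body();
    // `ovm_do randomizes an item and sends it through the sequencer.
    // Generation is "late": the sequence can observe DUT state
    // between items and react before generating the next one.
    `ovm_do(req)
    `ovm_do_with(req, { addr == 32'h100; })
  endtask
endclass
```

Sequences can also call other sequences, which is what gives the unlimited layering mentioned above.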
(Mike: Same comment posted on JL's blog). It seems to me that there are (at least) two areas that would need to be standardized to establish "compatibility" between the methodologies and the related VIP:
1) The simulation phases. AVM and eRM had different phases that ran the simulation (e.g. new, post_new, elaborate, pre_run, run, etc.). The OVM reconciles those differences by combining, borrowing, and merging them to create a new set of simulation phases. All OVM components support those phases via methods, so the verification components run in sync. VMM has a similar (but slightly different) set of phases that would have to be reconciled. I suppose this could be done with wrapper methods, but it would be cumbersome and would need to be done for each component.
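For readers less familiar with the phase methods being discussed, here is a rough sketch of how an OVM component participates in the common phases (the component name `my_monitor` is illustrative):

```systemverilog
// Sketch of a component hooking into the common OVM phases.
class my_monitor extends ovm_component;
  `ovm_component_utils(my_monitor)

  function new(string name, ovm_component parent);
    super.new(name, parent);  // construction happens in the "new" phase
  endfunction

  // build(): construct child components and read configuration.
  virtual function void build();
    super.build();
  endfunction

  // connect(): hook up TLM ports after everything is built.
  virtual function void connect();
  endfunction

  // run(): the time-consuming behavior; every component's run task
  // starts together, which is what keeps the environment in sync.
  virtual task run();
  endtask
endclass
```

Because the phasing is driven by the OVM base class, components from different vendors advance through these steps together without any per-component wrapper code.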
2) The class hierarchy. Again, AVM and eRM had different but similar classes that were merged as part of OVM. As I understand it, even more of that will appear in OVM 2.0. For VMM, the lower-level classes (e.g. monitors, drivers) could probably stay the same, since those would be inside the VIP, but the classes and methods used to communicate outside the VIP would need to be reconciled. VMM uses channels, whereas OVM uses TLM FIFOs. VMM also uses callbacks. I think this will be hard to reconcile.
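As a rough illustration of the OVM side of that channel-vs-FIFO gap, here is a sketch of two components communicating through a `tlm_fifo` - the approximate OVM analogue of a `vmm_channel` connection. The `my_producer`, `my_consumer`, and `my_txn` classes are assumed to exist and are named here only for illustration.

```systemverilog
// Assumed elsewhere: my_txn (transaction class), my_producer with an
// ovm_blocking_put_port #(my_txn) named put_port, and my_consumer with
// an ovm_blocking_get_port #(my_txn) named get_port.
class my_env extends ovm_env;
  `ovm_component_utils(my_env)

  my_producer        producer;
  my_consumer        consumer;
  tlm_fifo #(my_txn) fifo;

  function new(string name, ovm_component parent);
    super.new(name, parent);
  endfunction

  virtual function void build();
    super.build();
    producer = new("producer", this);
    consumer = new("consumer", this);
    fifo     = new("fifo", this);
  endfunction

  virtual function void connect();
    // Ports connect to the fifo's exports rather than directly to each
    // other, so either side can be swapped without touching the other.
    producer.put_port.connect(fifo.put_export);
    consumer.get_port.connect(fifo.get_export);
  endfunction
endclass
```

Bridging a VMM channel into this style would mean wrapping the channel behind equivalent put/get exports, which is exactly the reconciliation work described above.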
In short, although the VMM class library for Questa and IES is a necessary first step for running legacy VMM code, there is still a lot of work to be done to make the two actually play together in a simulation. Until someone figures out how that is done, any claims by either Mentor or Cadence that you can now run legacy VMM with OVM are marketing BS.
harry the ASIC guy
Welcome to the blogosphere, Mike ;-). Actually, the VMM does support layered stimulus generation in the form of VMM scenarios. Though few people seem to use them the way OVM sequences and virtual sequences are used, it is possible to create regular or multi-stream scenario generators in the VMM - check out the "Multi-Stream Generation" alternate guideline 5-32 in the VMM book. I believe this is similar to push-mode sequences in the eRM (which are not supported in the current version of the OVM). JL