I'd like to share with you a story from many, many, many moons ago when I first evaluated e as a potential verification language solution for the company I was working for. At the time, our verification group was using the basic Verilog behavioural constructs for verification (memories to represent data structures, events to synchronize on, tasks calling tasks calling tasks).
Sound familiar to anyone? In addition to running chip-level simulations, our small team of verification specialists was responsible for assembling top-level system simulations that stitched together about 5-6 hefty chips. We were finding it very tedious to implement and maintain these massive environments using Verilog alone ... constantly poring over bits and bytes, comparing memory contents to packet contents on paper, and relying on waveforms and print statements as our primary form of debug.
Additionally, we were struggling with ways to maximize reuse within our company, as many individual DUTs shared common interfaces and even common environment components (such as configuration mechanisms). Since we were a small team and needed to increase productivity, we decided that, for the next round of designs (already in the design stage, and considerably more complex), we would have to bring our verification environment to the next level of abstraction. We looked at all of our options at the time (accelerators, other languages, etc.) and decided it was time to start shopping around for a specialized Hardware Verification Language (HVL).
Narrowing the List
After weeding out several other solutions for our group moving forward (based on a long, carefully constructed list of criteria we had), we narrowed our list down to two: Verisity's e language and another solution offered by Verisity's biggest competitor at the time (it was the year 2000, so you can probably guess who it might have been). We chose a fairly significant block to develop a verification environment around and, for each vendor, started from ground zero with nothing other than a design specification and the DUT code to go on.
Each vendor had a two-week timeframe to build a verification environment with me in their respective toolset. It would be an intense engagement for any EDA vendor: I had no prior experience with object-oriented programming (other than in university) and I had never used any language other than Verilog for verification, so I would need to be significantly ramped up in the process.
First in the door was Verisity. An extremely sharp and enthusiastic AE at the time started filling my head with all sorts of strange terminology. Structs, subtypes, extensions, temporal assertions, Aspect Oriented Programming (AOP), coverage, file partitioning (according to functionality) and the like were all completely new to me. The language was almost like writing English sentences, which was a strange concept coming from the Verilog world.
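For readers who have never seen e, the flavor of those constructs looks roughly like this. This is a minimal sketch for illustration only; the packet struct and its fields are hypothetical, not from the project described here:

```e
<'
// Hypothetical packet struct: fields, an inline enumerated type, and a
// generation constraint, declared in e's compact, English-like style.
struct packet {
    len  : uint (bits: 16);
    kind : [SHORT, LONG];            // inline enumerated type
    keep kind == SHORT => len < 64;  // constraint on generated values

    // Subtype: this field exists only for LONG packets
    when LONG packet {
        payload_crc : uint (bits: 32);
    };
};
'>
```

The `when` subtype and the declarative `keep` constraints were exactly the kind of "strange terminology" that took getting used to coming from Verilog.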
A New Dimension for Verification
It took me almost a full week to finally grasp what the heck we were doing together. But then, when that light turned on in my head, it was like walking into a different dimension, where all of the verification pains I had encountered in the past using our previous solution vanished and a world of opportunity took their place. Items that would previously have taken our team months to develop and debug could now be done in weeks and, in some cases, days.
Development of verification components could easily be done in parallel, with each person developing specific functionality through extensions to the base code, creating little to no interference between developers. Thanks to e's powerful extensibility, paired with the eRM methodology, maintaining and reusing large amounts of code became a non-issue. Through extending base classes within tests, I found we could write completely random or very directed scenarios all in a few short lines of code.
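To illustrate that last point: a test in e typically just layers constraints onto the environment through extension, without modifying any base code. A minimal sketch, with a hypothetical environment file and an invented packet struct:

```e
<'
import my_env_top;  // hypothetical base environment file

// A directed-random test: the base code is untouched, we only extend it.
extend packet {
    keep kind == SHORT;
    // Bias generation toward a narrow length window; 'soft' keeps the
    // constraint overridable by other tests or by the environment.
    keep soft len in [32..48];
};
'>
```

Dropping the `keep` lines (or loosening the ranges) turns the same file back into a fully random test, which is why scenarios could be expressed in a few short lines.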
The amount of coverage we could achieve and the advanced modeling capabilities were mind-boggling. The sky was the limit! (Call me a geek for becoming so excited about a verification language but, after years of struggling with the limitations of Verilog, the e language was a dream come true.) It was quick to ramp up on, incredibly productive, and the methodology forced reuse to be front and center.
Next up was Verisity's closest competitor at the time. The competitor's solution was cool as well. Though it took a little longer to learn, and a few more lines of code to do each task, I did see clear benefits over our previous solution. Its syntax was fairly familiar to me (as it was close to Verilog) and I thought our team, and the wider audience of design teams, could ramp up in a decent timeframe. Rather than a light bulb going off when I finally understood what we were doing, it was more like a gradual dimmer switch being turned slightly brighter each day.
As we progressed further and further into our development efforts, though, I started noticing that the language had limitations. There were many times when the AE and I would sit at my desk, scratching our heads, wondering ... now how the heck can we model that? There were several times when the AE needed to call home base to ask R&D if they knew how to work around some issue we had encountered.
More often than not, we needed to move forward without a solution. The language was missing key checking features, and other features seemed clunky and unnecessarily complicated to use. Reuse was not front-and-center, and tests were more difficult to write. Modeling and maintaining our reusable verification components looked like it would still present a challenge. At the time, there weren't any AOP features present (they were later bolted onto the language, with some severe usage restrictions). Overall, the solution seemed like an incremental jump in our overall verification capacity, not the leap we were looking for.
After a careful evaluation that included 30+ criteria that were weighted and graded according to our needs, and a thorough review with several groups both inside and outside of our division, we selected e as our solution. In the next few months, it became increasingly apparent that we had made the right decision. Our team of 4-5 engineers became an army creating one of the most sophisticated re-use environments that I have seen to date, capable of assembling full chip and system level simulation environments in a matter of days.
Development time of key verification environment components was reduced by a factor of ~4X and the code written was not only easy to understand, but very easy to maintain. Additionally, by keeping the same number of verification engineers for much larger projects moving forward, we were also able to reduce our overall project costs in the process while continuing to hit our time to market window.
Not everyone within our division was sold on this strategy, though. Since the e language was new to our company at the time, it was decided that, for the next round of chips, there would be two environments for each chip. One would be a traditional Verilog verification environment that the design team used to write tests on and the second would be an e "pilot" environment created by one member of our team.
The findings were almost textbook. By adopting a Coverage Driven Verification (CDV) approach, where we ran random-seeded simulations and analyzed coverage results to direct our test writing, one member of our specialized verification team working with e was able to find more bugs with 16 tests than a team of engineers working on 100+ Verilog tests. Time and time again, we would present a complex corner-case bug to a designer only to hear them come back with "How the heck were you able to find this?"
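In e, the coverage that drives this loop is declared alongside the data itself. A minimal sketch of a coverage group, again with a hypothetical packet struct and invented field names:

```e
<'
extend packet {
    event pkt_done;  // assume the monitor emits this when a packet completes

    // Coverage group sampled on pkt_done; holes in these items tell the
    // test writer where to aim the next constrained-random test.
    cover pkt_done is {
        item kind;
        item len using ranges = {
            range([0..63],    "short");
            range([64..1023], "long");
        };
        cross kind, len;
    };
};
'>
```

After each nightly regression, the merged coverage report over these items is what pointed us at the corner cases still left to hit.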
Our team was blasting hundreds of random seeds in our regressions every night, shaking the bugs out of our DUTs at a very high rate compared with the traditional testbench. When the time came to run system simulations, the transition from chip- to system-level simulations was almost seamless, as chip-level environments were reused "as is," with extensions layered on to configure the environment properly.
Still the Most Productive Choice
Why am I sharing this story with all of you? Sometimes, the more things change, the more they stay the same. The competitor's solution referred to in the story above eventually became, for the most part, the SystemVerilog language. Just as we were looking 10 years ago, I know that there are many companies out there using Verilog/VHDL for verification that are looking for ways to improve their verification productivity and flows. Considering which language/methodology to use moving forward is one of many decisions that will need to be made.
For our team, choosing e and adopting an improved re-use methodology was the best decision we ever made to improve our productivity. Just as it did 10 years ago, the technical advantages of the e language continue to surpass its competitors and it is, by far, the most productive verification language in the industry today.
Enough about me and my story, though. I would be very interested in hearing from you about what led you to use e, and/or your experiences with using e for verification from a productivity perspective. Please feel free to share stories of your own successes by posting a comment; however, please refrain from mentioning specific company/project names.
Happy bug hunting!
I have been using 'e' for the last 6 years. I also got the chance to work on SV and, no doubt, I found 'e' better. The main reason for choosing 'e' is the AOP concept. It's very easy to maintain and reuse code with the help of some excellent AOP concepts.
A very interesting article was just posted where ST overviews their reasons for using e vs. SV here: www.cadence.com/.../user-view-is-e-or-systemverilog-best-for-constrained-random-verification.aspx
Nostalgic story :)
I used 'e' for 4+ years and still feel that it is a better HVL, even in comparison to SystemVerilog. The prime reason is its simplicity and achieving the same functionality in fewer lines of code.