Key Findings: Many more chip programs are crossing the tipping point and need advanced mixed-signal verification methodologies and technologies. A deterministic march to closure is needed. The Cadence party for mixed-signal verification is the hottest ticket in town.
With some important public events now behind us and more on the horizon, the agendas make it clear that there is mounting pain in the realm of verifying chips with a significant blend of analog and digital circuitry. While this is not news to those in the know, it is fresh pain for a growing legion of SoC verification teams. It is starting to look as though the industry may be ready to cross the proverbial chasm if the topics at these events are reliable indicators. And the good news is that there has been a mixed-signal verification party going on for more than 20 years at Cadence, and for all of the newcomers, we’d like to welcome you!
What’s driving party attendance?
Back in 2009, some really good work was done, probably about five years ahead of its time. That was when the industry's best practices for digital SoC verification were first applied to the mixed-signal SoC verification problem. But before diving into the particulars, let's review the driving forces.
At the speed of Moore’s Law, two worlds are colliding. Advanced nodes (28nm and below) have been key drivers in the evolution of analog design with needs for digital compensation and calibration on a steep rise. Massive integration capability of digital SoCs has led to the inclusion of more, and more complex, analog IP blocks. So whether you started from the “analog side” or the “digital side,” chances are you now have to be a lot more cognizant that it is a blended world and act accordingly.
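To make the "digital compensation and calibration" point concrete, here is a minimal sketch, in plain Python rather than any hardware description language, of digital logic trimming out an analog comparator's input offset with a successive-approximation search. Every name, value, and function here is invented for illustration; real calibration loops live in silicon and firmware, not scripts.

```python
# Hypothetical sketch: digital calibration of an analog comparator's input
# offset via successive approximation. All names and values are illustrative.

def comparator(vin: float, offset: float) -> bool:
    """Behavioral comparator: fires when the effective input is above 0 V."""
    return vin + offset > 0.0

def calibrate_offset(offset: float, dac_lsb: float = 0.001, bits: int = 8) -> float:
    """Find a trim-DAC voltage that cancels `offset` to within about one LSB.

    The analog input is held at 0 V while digital logic tests each trim bit
    from MSB to LSB, keeping a bit only if the comparator still fires.
    """
    code = 0
    for bit in reversed(range(bits)):
        trial = code | (1 << bit)
        if comparator(0.0, offset - trial * dac_lsb):
            code = trial
    return code * dac_lsb

trim = calibrate_offset(offset=0.1003)   # ~100 mV of comparator offset
residual = 0.1003 - trim                 # what is left after calibration
```

The point is the blend: an analog imperfection (offset) is corrected by purely digital control, which is exactly the kind of interaction that pulls verification across both domains.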
Figure 1: Drivers of advanced mixed-signal verification methodologies
Figure 1 shows how complexity comes from many directions, and how that complexity creates challenges that rapidly transform into critical business issues. Left unchecked, these issues are bad news, and what I hear consistently is that there has been plenty of it. But sometimes bad news is the impetus for change, and that can be a great thing.
When we talk about the industry best practices in mixed-signal SoC verification, we are, of course, talking about a host of elements that include technologies, capabilities, and methodologies. But let's not overlook the value of experience.
I am going to “limit” the scope to mixed-signal simulation technology and methodology. What we have proven in terms of formal, static, and other areas can be covered in coming posts (cheers).
So, the party is raging and everyone is having a great time, but they are all dancing around that big elephant right in the middle of the room. For mixed-signal verification, that elephant is named modeling. More about that in a coming installment of The Low Road.
Critical elements for a good party
Back to the party. Just like a great party has to have fun people, good food, and great music, there is a lot that goes into a great mixed-signal simulation solution. Starting from a basic-needs perspective, the crucial elements are these:
Let’s face it, most of us resist change unless there is some kind of pain or anticipated pleasure ahead (some may combine the two, but that’s a blog for a completely different forum). Thus far we have seen some common threads in how teams complete the journey from bifurcated analog and digital design verification to mixed-signal SoC verification. You really can’t get started until you clear the conceptual hurdle of multi-domain (continuous analog plus discrete digital) simulation. If there is not enough impetus in your chip projects to drive the adoption of multi-domain simulation, then there is probably not enough upside in looking toward metric-driven, UVM-based mixed-signal verification (MD-UVM-MS).
Currently, the most common drivers of mixed-signal verification methodology change are the runtime cost of co-simulating SPICE engines with RTL digital engines and the verification of complex power management modes. Figure 2 depicts this path from multi-domain simulation to real number modeling and power management.
Figure 2: Most common adoption paths observed for MD-UVM-MS
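To illustrate why real number modeling attacks the co-simulation runtime problem, here is a minimal sketch, assuming a first-order RC low-pass filter as the analog block. Instead of solving SPICE matrices, the block becomes a real-valued discrete-time update that a digital simulator can evaluate in an event-driven way. Python stands in for the HDL; the function name and constants are invented for illustration.

```python
# Illustrative real-number-model idea (not any vendor's actual RNM code):
# a first-order RC low-pass filter as a discrete-time real-valued update,
# trading SPICE accuracy for digital-simulator speed.

import math

def rc_lowpass_step(v_out: float, v_in: float, dt: float, tau: float) -> float:
    """One time step of an RC filter, using the exact first-order step response."""
    alpha = 1.0 - math.exp(-dt / tau)
    return v_out + alpha * (v_in - v_out)

# Drive a 1 V step through a tau = 1 us filter for 5 us in 1 ns steps.
dt, tau = 1e-9, 1e-6
v = 0.0
for _ in range(5000):
    v = rc_lowpass_step(v, 1.0, dt, tau)
# After five time constants the output has settled close to 1 V.
```

Each step is a handful of floating-point operations rather than a matrix solve, which is the essence of the orders-of-magnitude speedup that makes full-chip mixed-signal regression practical.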
Adoption of metrics-driven methodology and the rigor of formally connecting verification planning and management have generally followed. The industry-standard Universal Verification Methodology (UVM) has remarkable momentum and deployment amongst SoC verification teams. This methodology is extensible up to system verification and software-driven testbench approaches.
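The metrics-driven idea deserves a concrete shape: coverage results from many regression runs are merged against plan goals, so closure becomes a computed number rather than a gut feel. The sketch below is a toy model in Python; the plan sections, bin names, and helper functions are all invented for illustration, not part of any UVM library.

```python
# Toy model of metric-driven closure: merge coverage from multiple runs and
# score it against a verification plan. All names here are illustrative.

def merge_coverage(runs):
    """Union the covered bins reported by each regression run, per metric."""
    merged = {}
    for run in runs:
        for metric, bins in run.items():
            merged.setdefault(metric, set()).update(bins)
    return merged

def closure(plan, merged):
    """Fraction of planned bins hit, per plan section and overall."""
    per_section = {name: len(merged.get(name, set()) & goal) / len(goal)
                   for name, goal in plan.items()}
    overall = sum(per_section.values()) / len(per_section)
    return per_section, overall

plan = {"power_modes": {"on", "sleep", "retention", "off"},
        "adc_ranges":  {"low", "mid", "high"}}
runs = [{"power_modes": {"on", "sleep"}, "adc_ranges": {"low"}},
        {"power_modes": {"off"}, "adc_ranges": {"low", "high"}}]
per_section, overall = closure(plan, merge_coverage(runs))
```

The deterministic march to closure mentioned in the key findings is exactly this loop: run, merge, score against the plan, and aim new tests at the sections still short of their goals.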
In summary, it is not an all or nothing proposition to get to MD-UVM-MS. There are steps along the way that build in the right direction as familiarity, experience, and expertise are built in an organization.
And the party does not end there! Moving up to incorporate both early and ongoing system-level work into the SoC verification picture is an active avenue of innovation.
Come to the party. It is raging on, and there is a ton of productivity-filled, quality pleasure ahead!
-- Archived Webinar: Cadence, ARM Forge Design Flow for Mixed-Signal Internet of Things (IoT) SoCs