Nimish Modi is senior vice president for front end research and development at Cadence. In this interview, he discusses Cadence’s front end strategy in such areas as low power, mixed signal, system development, enterprise verification and predictive design. He also explains why he thinks EDA technology is undergoing a “paradigm shift” towards higher levels of abstraction, integration, and reuse.
Q: Nimish, what is Cadence’s Front End Group responsible for?
A: One can think of us as being responsible for everything from the netlist on up. This includes our system development activities, our enterprise verification efforts, and our logic design activities that encompass logic synthesis, formal checking, and design for test.
Q: How can the front end flow address the pain points that designers are experiencing?
A: The front end is the place to address designer pain points. It’s imperative that accurate design decisions are made early on, and that logic is architected and designed correctly, since the degrees of flexibility to address any design issues decrease dramatically as you go down the design hierarchy. Design executives are getting increasingly concerned about rising development costs for SoCs and time to market. This in turn translates into huge challenges as well as opportunities for improving what we call PPQ, or the Productivity, Predictability and Quality of designs. All these areas are significantly modulated by decisions made during the front end of the development cycle.
Q: How is the Front End Group approaching system-level design, and how does this differ from earlier attempts at ESL?
A: Earlier forays into ESL have met with varying degrees of success, but I think the one common theme is that all of them focused on addressing a specific, discrete issue, and as such were pretty much point-tool efforts.
Our approach is unique in that it leverages our very strong, differentiated, foundational technologies, which are woven together into a comprehensive TLM [transaction-level modeling] driven design and verification flow. This allows you to traverse the development flow seamlessly while preserving the reusability of your TLM-generated design IP and verification environment. We’re building our systems portfolio on top of our very strong implementation, verification and hardware pillars, which makes it uniquely qualified in this regard. Clearly, a migration to a higher level of abstraction is not going to be a step change that happens overnight, so we’re ensuring the flow works within the context of a mixed-level, multi-language environment.
Q: What’s needed to bring TLM-driven design and verification into mainstream use?
A: Given the rising complexity of today’s SoC designs with increasing integration of heterogeneous functionality, I think there’s a growing realization that continuing to operate at the RTL level is just not tenable. It’s not just a “want” now – there’s a strong need to move to the next level of abstraction. Culturally, I think there’s an acceptance of that fact, and that’s one big driver that will get folks to really look at what’s available.
Additionally, I think the technology has matured to the point where it does deliver on the promise of an integrated, seamless TLM-to-GDSII design and verification flow. It’s no longer about having just niche tools and being in a position where customers need to choose amongst equally important parameters, say between productivity and quality of results. Our technology delivers on the productivity promise while allowing informed tradeoffs between area, power and timing in the context of mixed-level designs.
Another aspect is that there’s no established engineering role today that specializes in operating at this level. Is it going to be a morphing of the software engineer who wants to understand more about hardware and be in a position to do design entry at TLM? Or is it the RTL designer who needs a better appreciation of software design and programming concepts? Pragmatically speaking, I think it’s going to be some of both, but the point is that, across the customer base, we need to develop the kind of engineer who can go off and do a good job of programming at this level.
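To make the abstraction jump concrete, here is a minimal, hypothetical C++ sketch of the transaction-level idea (invented names, not the actual SystemC/TLM-2.0 API): instead of toggling bus signals cycle by cycle as at RTL, an initiator hands a whole bus operation to a target through a single function call.

```cpp
#include <cstdint>
#include <map>

// Hypothetical transaction payload: one bus operation captured as a
// single object, rather than many cycles of address/data/strobe activity.
struct Transaction {
    enum class Cmd { Read, Write };
    Cmd      cmd;
    uint32_t addr;
    uint32_t data;   // filled in by the target on Read, by the initiator on Write
};

// A TLM-style memory target: services an entire transaction in one call.
class MemoryTarget {
    std::map<uint32_t, uint32_t> mem;
public:
    void transport(Transaction& t) {
        if (t.cmd == Transaction::Cmd::Write)
            mem[t.addr] = t.data;       // commit the write in one step
        else
            t.data = mem[t.addr];       // return the stored value (0 if unwritten)
    }
};
```

An initiator simply constructs a `Transaction` and calls `transport()`; that one function call stands in for what would be many simulated clock cycles of pin-level activity at RTL, which is why TLM simulation can run orders of magnitude faster.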
Q: How will Cadence help with embedded software development?
A: Embedded software development and validation are clearly big challenges. Our recent announcement, where we talked about having a TLM-driven IP design and verification platform, lends itself very well to helping out with embedded software development and complexity, as the TLMs we create can in turn be integrated with virtual platforms that are used for software development. Also, our Incisive Software Extensions enable verification testbenches to access the software that’s running on a hardware model. This unique capability allows the full power of our plan-based, metric-driven verification approach to be applied to software.
Q: What efforts are underway to help bring verification to a higher level of abstraction?
A: Our system development strategy is not just focused on design creation – it focuses on verification as well. Aside from extending our VIP portfolio to the TLM level, we’re looking into creating reusable verification IP that can traverse the abstraction stack across the continuum of TLM, RTL and the hardware environment, thereby greatly improving productivity and reducing risk.
Additionally, our C-to-Silicon Compiler generates cycle-accurate Fast Hardware Models that facilitate extremely fast simulation speeds, thereby significantly improving verification turnaround time.
Q: What’s Cadence’s current focus with respect to verification, and what’s distinctive about it?
A: We have innovated around a plan-to-closure approach, and we provide an open, scalable, metric-driven offering that has resonated very strongly with our customers. Our verification solution is multi-specialist and multi-domain in nature, comprehending both hardware and software and enabling an integrated view of verification closure. Key tenets include the multi-language OVM [Open Verification Methodology], which helps with verification scalability and reuse, and a broad verification IP portfolio that through reuse helps reduce time to market and verification costs. And all of this is provided in the context of an executable, plan-driven environment based on an enterprise metrics database, giving visibility into verification closure progress to both engineers and managers, thereby further improving predictability and reducing project risks.
These technologies are complemented by design and implementation formal verification technologies that we provide through the Conformal product line, which provides equivalency checking signoff as well as an automated ECO implementation and verification solution. Another key initiative is in the space of mixed-signal verification, where we’re extending our leading metrics-driven, methodology-based approach from the digital space to the mixed-signal environment.
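The plan-driven, metric-based closure idea can be illustrated with a small, hypothetical C++ sketch (the data model is invented, not the actual enterprise metrics database): each verification-plan item carries a coverage goal, and measured metrics roll up into a single closure-progress number that engineers and managers can track.

```cpp
#include <string>
#include <vector>

// Hypothetical verification-plan entry: a feature to verify, its
// coverage goal, and the coverage measured so far (both in percent).
struct PlanItem {
    std::string feature;
    double goal;
    double measured;
};

// Roll per-feature metrics up into one closure number: each item
// contributes the fraction of its goal reached, capped at 100 percent.
double closureProgress(const std::vector<PlanItem>& plan) {
    if (plan.empty()) return 100.0;          // empty plan: trivially closed
    double sum = 0.0;
    for (const auto& it : plan) {
        double frac = it.goal > 0 ? it.measured / it.goal : 1.0;
        sum += frac > 1.0 ? 1.0 : frac;      // no extra credit past the goal
    }
    return 100.0 * sum / plan.size();
}
```

The point of the rollup is visibility: a plan item at 50 percent of its goal pulls the overall number down, making it obvious where verification effort should go next.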
Q: Verification IP is a fairly new market for Cadence. Why get involved in this market, and why do customers find it important?
A: Verification is already a huge challenge for our customers, and we are working to do everything possible to contain customer investment and reduce their risks in this area. Derivative designs are becoming much more prevalent, reusability is increasing, and there’s more use of standards-based, non-differentiated IP. Providing relevant verification IP that enables customers to do high quality protocol verification with compliance checking reduces customers’ verification investments, risks and cycle time. Based on these trends, we recently expanded our multi-language OVM Verification IP portfolio and now lead the industry in both protocol breadth as well as automation depth.
Q: Mixed-signal is an important technology for Cadence. Where are you focusing R&D efforts in this area?
A: Virtually every SoC at 65nm and below is a mixed-signal SoC, and by some accounts analog circuitry accounts for half the respins even though it occupies only 10 to 15 percent of the area. Given that classical black-box approaches to AMS IC verification don’t scale well here, there are new challenges that need to be overcome.
While there are multiple aspects to the verification issue, there are two prevalent ones where customers are seeing big challenges. One is the ability to natively support analog behavioral models in a digital SoC environment, and be able to regress those at the full-chip level at digital speeds. We had an Incisive release last year that incorporated wreal modeling capabilities. That helps significantly in this regard, enabling comprehensive mixed-signal SoC verification.
The other area is connectivity checking. You’d be surprised at the number of issues that slip through when analog and digital IP blocks are integrated together, and there’s a great opportunity to address these through formal-based approaches.
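As a rough illustration of what a connectivity check catches, here is a small, hypothetical C++ sketch (the netlist representation and names are invented, not Conformal’s): it walks a flattened pin-to-net map and flags pins that are floating or tied to a net nothing else drives, exactly the kind of slip that occurs when analog and digital blocks are integrated.

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical flattened netlist: each block pin maps to the net it is
// tied to; an empty net name means the pin was left unconnected.
using Netlist = std::map<std::string, std::string>;  // pin -> net

// Return the pins that are floating, or dangling on a net that no other
// pin shares, i.e. connections that would silently break integration.
std::vector<std::string> findConnectivityErrors(const Netlist& nl) {
    std::map<std::string, int> fanout;               // net -> pin count
    for (const auto& [pin, net] : nl)
        if (!net.empty()) ++fanout[net];

    std::vector<std::string> bad;
    for (const auto& [pin, net] : nl)
        if (net.empty() || fanout[net] < 2)          // floating or dangling
            bad.push_back(pin);
    return bad;
}
```

A formal connectivity checker works against the real design database rather than a toy map like this, but the principle is the same: the check is structural and exhaustive, so it finds every broken connection without needing a simulation stimulus to happen to exercise it.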
Q: There’s a lot of discussion these days about low-power or power-efficient design. How is the Front End Group addressing needs in this area?
A: When you look at the design flow, you can say that 80 percent of a chip’s power is determined by the time you get to RTL. Because of that, we’ve always had a lot of focus on low power in the front end space. Cadence has the industry-leading low power solution spanning design, verification and implementation, with Front End products such as Conformal Low Power, Incisive Enterprise Simulator, RTL Compiler and Encounter Test being broadly utilized for power management. Moreover, we’ve been putting a lot of energy into power exploration and estimation, with technologies such as the Cadence Chip Planning Solution, C-to-Silicon Compiler, and Incisive Palladium Dynamic Power Analysis that help customers make early power tradeoffs and determine the right power architecture.
Q: What are you most excited about in the R&D area?
A: There is no one specific thing I can point to, but my overall feeling – and I’ve been in the industry for over 20 years – is that this is the most challenging environment I’ve ever seen. We are at a classic inflection point where the traditional approaches and linear improvements in automation that we’ve seen in the past are just not going to scale with the complexity and cost explosion of doing SoC development. The pressures we are seeing for productivity, predictability, and quality are immense and I think a real paradigm shift is needed to successfully address these.
It’s now about differentiating through reuse. It’s about faster integration. It’s about time to market. It’s about finding new ways to muzzle the verification beast. It’s not just about hardware now – it’s about the overall system with the attendant software stack. It’s about getting beyond talking about ESL and on to living ESL. And all of this is right smack dab in the Front End space. We’ve got many breakthrough technology solutions that address these challenges, with several more in the pipeline, and we are working at unprecedented levels of partnership with our bleeding-edge customers to collaboratively deliver on the promise…very exciting times indeed.