At a discussion at the ICCAD conference last week, EDA notables Jim Hogan and Paul McLellan talked about “what EDA needs to change for 2020 success.” One topic they emphasized is “software signoff,” and they encouraged those present – mostly bloggers – to go forth and write and tweet and blog about it. It’s an interesting concept, but I think it raises a number of questions, as listed below.
1. What is software signoff? A GDSII file is “software,” as is an application program written for an off-the-shelf microprocessor, so we have to clearly define what “software” we are talking about. Hogan, a former Cadence executive fellow and current private investor, defined software signoff as “signoff from behavior to an implementation in embedded software and/or a hardware implementation fabric.” Both he and McLellan, author of the EDA Graffiti blog, indicated they were talking about behavioral C/C++ code, not SystemC.
However, the word “behavioral” can mean many different things. Is it purely algorithmic, or does it define an architecture? What exactly is signed off, by whom, and to whom? How is existing silicon IP handled? Are constraints provided? In sum, what are the deliverables for software signoff?
2. What are the advantages and tradeoffs? The idea is appealing – you write some code in C/C++, and turn it over to machines and/or people who will convert it into a programmed system on chip (SoC) or FPGA. But what are the implications for performance, power, area, and unit cost?
3. What are the tooling requirements? Either the algorithm-to-silicon path will need to be automated, or you’ll be turning the behavioral software specification over to a hardware design team using traditional methodologies, in which case you’re moving work to a different location rather than actually reducing it. There has been tremendous progress in high-level synthesis with tools like the Cadence C-to-Silicon Compiler, but no tool automates the entire flow from algorithms to GDSII. Software signoff will also require some really good estimation, prototyping and profiling tools.
4. What are the silicon requirements? It seems to me software signoff will work best with some kind of predefined fabric, such as an FPGA, where someone has already taken care of issues relating to manufacturability, yield, and process variability. Otherwise, the implementation team will have to address those issues itself.
5. What about analog integration? Hogan commented that analog design is inherently “algorithmic.” True, but algorithm-to-transistor synthesis has not worked very well in the analog world. Nearly all SoCs going forward will be mixed-signal, and somebody has to design and integrate the analog portions.
6. How is verification handled? Somebody needs to verify that the implementation matches the spec, is functionally correct, and meets timing and power requirements. Who does that and how?
7. Where will software signoff make sense…and not? I don’t think software signoff will be used for applications that need highly optimized performance, power or area, or those that need a very low unit cost. It could work well, however, for people who want to accelerate applications using an FPGA or hardware acceleration platform, but who don’t want to, or can’t, do hardware design themselves. (I am assuming that software signoff involves some custom hardware creation or reconfiguration – otherwise, you’re just writing software for off-the-shelf hardware.)
In conclusion, I think “software signoff” is one way that some people will create embedded applications. But there will be other methodologies as well. Right now, the most logical move is from RTL to a SystemC transaction-level modeling (TLM) based design and verification flow, with high-level synthesis and virtual platforms. That flow is available today.
I really have only one prediction about EDA in 2020 – that one size will not fit all.
Note: Slides from the Hogan-McLellan presentation are available at the Si2 web site. Other blogs commenting on the discussion are listed at http://leepr.com/Home.html.