Last year, analyst Gary Smith proclaimed that a "working" electronic system level (ESL) flow had finally arrived. At the recent Electronic Design Process Symposium (EDPS 2014), a panel of ESL experts took a detailed look at the requirements, challenges, and opportunities for ESL flows.
Topics included ESL power simulation, SoC and software validation, emulation, interconnect verification, and lowering ESL costs with cloud computing. Smith was the chairman and moderator for the panel. The following panelists (shown left to right in photo below) gave brief presentations, followed by a Q&A session with workshop attendees.
In a presentation titled "The Heart of ESL," Swan related how he worked with Smith to analyze the evolution of today's "complete" ESL flow. Initially, the flow included the architect's workbench, a software virtual prototype, and a silicon virtual prototype. But power information wasn't getting back to the architect, and hardware designers couldn't meet their power budgets.
The hardware designers added accelerators, GPUs, and NPUs, and the flow added more refined virtual prototypes. This worked, but now the software virtual prototype model was wrong. Software developers discovered that they really needed hardware-based tools for acceleration and emulation. Finally, the flow was reshuffled once again so that firmware engineers could write hypervisors and schedulers.
The ESL flow is still a work in progress. "One problem with the flow is that we have no behavioral standards for SystemC," Smith said. "The architect is taken out of the flow early in the process. We need a behavioral SystemC standard so we can push the design up through the architect again."
Saving Power Upfront
Matter (Docea) emphasized the importance of making power decisions at the electronic system level. "For many years power was an afterthought, and performance and cost were number one," he said. "But we found that you can make the biggest impact during early exploration."
It's important to validate thermal and power mitigation schemes, Matter said. He showed a solution that captures a thermal model with an Excel-based tool, generates a compact thermal model, sets up a simulation environment, and provides a results dashboard. "We found a way we can generate compact thermal models, couple them with existing power models, and run realistic dynamic use cases with co-simulation of a virtual platform. We are running actual software," he said.
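The "compact thermal model" Matter described is typically a reduced-order model, often a small RC network, that can be stepped alongside a power model during co-simulation. As a rough illustration of the idea (not Docea's actual tool or model format; all names and values here are hypothetical), a single thermal node driven by a power trace might look like this:

```python
def simulate_thermal(power_trace, r_th=2.0, c_th=5.0, t_ambient=25.0, dt=0.1):
    """Step a one-node RC thermal model: C * dT/dt = P - (T - Tamb) / R.

    power_trace: per-step power dissipation in watts (e.g. from a power
    model driven by a virtual platform running actual software).
    Returns the temperature history of the node in degrees C.
    """
    temp = t_ambient
    history = []
    for p in power_trace:
        # Explicit Euler update of the node temperature.
        temp += dt * (p - (temp - t_ambient) / r_th) / c_th
        history.append(temp)
    return history

# A "use case": a burst of high activity followed by a near-idle period.
trace = [3.0] * 50 + [0.5] * 50
temps = simulate_thermal(trace)
print(f"peak: {max(temps):.1f} C, final: {temps[-1]:.1f} C")
```

A real flow would use many coupled nodes extracted from package and die geometry, but even this toy version shows why early co-simulation matters: the thermal response lags the power trace, so mitigation schemes have to be validated against realistic dynamic workloads, not just peak power numbers.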
Klein (Mentor) presented two scenarios for executing and debugging drivers using emulation. One approach is to place the entire design in the emulator, which provides high accuracy but very slow execution. The other approach is to run a fast instruction-set simulator on a Linux workstation. This is very fast, but may not be accurate enough for some verification tasks.
Software developers often want to use JTAG debugging, but this is "slow, intrusive, and expensive" according to Klein. An alternative called "trace-based debug" extracts information from the processor and replays it after the emulation run, providing response rates up to 100 MHz.
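The key idea behind trace-based debug is that execution events are captured once, at speed, and then browsed offline, so the debugger never stalls the emulator. A minimal sketch of that replay concept (the record format and class names are hypothetical, not Mentor's actual interface):

```python
class TraceReplayer:
    """Browse a captured execution trace offline, after the emulation run."""

    def __init__(self, records):
        # Each record: (cycle, program_counter, disassembly), captured live.
        self.records = records
        self.cursor = 0

    def step(self):
        """Advance one captured event, as a debugger 'step' would."""
        rec = self.records[self.cursor]
        self.cursor = min(self.cursor + 1, len(self.records) - 1)
        return rec

    def seek_cycle(self, cycle):
        """Jump to the first event at or after a given cycle -- cheap,
        because we scan a stored trace instead of re-running hardware."""
        for i, (c, _, _) in enumerate(self.records):
            if c >= cycle:
                self.cursor = i
                return self.records[i]
        raise ValueError("cycle beyond end of trace")

trace = [(0, 0x1000, "mov r0, #0"),
         (4, 0x1004, "ldr r1, [r0]"),
         (9, 0x1008, "bne 0x1000")]
replayer = TraceReplayer(trace)
print(replayer.seek_cycle(4))
```

Because stepping and seeking operate on stored data, the debugger's responsiveness is bounded by trace bandwidth rather than by a live JTAG link into the emulator.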
As the number of IP blocks per SoC design continues to rise, the challenge shifts from creating the blocks to integrating them, according to Schirrmeister (Cadence). In 2017, he said, SoCs may have an average of 180 IP blocks with more than 80% re-use. Software will be distributed across cores, and more than 60% of the effort will be in software.
System-level integration poses a number of requirements, Schirrmeister said. Given challenges such as multi-core cache coherency, complex use cases need to be verified in simulation and emulation. While virtualization continues to be a hot topic, in-circuit emulation - where the emulator is physically connected to rate adapters and boards - is still necessary. Not everything can be abstracted to a high level: today, the interconnect has become so complex that it is verified at the register-transfer level (RTL).
Sehgal (Intel) noted that emulators cost millions of dollars, and could potentially be shared with a number of people in a cloud computing environment. An EDA company could rent tools by the hour rather than selling licenses. But how will vendors recover their R&D costs? Further, Sehgal noted, while EDA tools are in private clouds at several large companies, a complete EDA flow is not available in public clouds. "One tool is not enough. The whole flow needs to be there."
Takeaways from Panel Discussion
Fewer designs are constrained by power
Up to 2011, the number of gates used in a design was restricted by power. By 2014 we came up with enough fixes in the power area so we could use a higher percentage of the gates. We went from about one-third to about 80% of the gates we were given. It's a major breakthrough. (Gary Smith)
Emulation cycles are the cheapest cycles you can run
This is absolutely right, if you do it on a cycle basis. But there is a barrier to entry - the price. In 1999 Cadence started QuickCycles, which is essentially a private cloud. The emulators are in a datacenter and they are accessible through a network. (Schirrmeister)
Help wanted - ESL expertise is missing
A lot of design centers are small. They don't have the resources to do the ESL flow that's being talked about today. They have their staff of RTL coders and their staff of software coders, and they don't have engineers to dedicate to TLM or SystemC. (Swan)
The value is in the connection
The role of emulation has changed and evolved, and one thing I would not like to see is for it to become a walled garden where it seals in all its value only for itself. Emulation becomes more valuable when it interconnects and plays well with others. (Matter)
Emulation or FPGA-based prototyping?
We have fast compile speeds for getting designs into the [emulation] box. That's why emulation is so much better than FPGA prototyping when you have to massage the RTL. Some customers map stable portions of the design into FPGA boards and connect them to the emulator, because that's where the RTL that is just a couple of weeks old can be compiled very quickly. (Schirrmeister)
Public clouds still problematic for EDA
We have projects today where we are hosting emulation for customers, but it's a one-to-one connection. We have seen two challenges for the Amazon [public cloud] model. One is that the security issue is really difficult. Another is that the amount of verification data that needs to be synchronized is huge. (Schirrmeister)
ESL standards are still needed
At the ESL level, we need definitions for models. Then we need to look at the parameterization of models, then characterization. You need stimulus - what drives the model? Is it timed or untimed? I don't believe standards are a way to solve everything, but there's an emerging role for standards. (Matter)
Slide presentations are available at the EDPS web site.
Related Blog Posts
Gary Smith Webinar: "The True ESL Flow is Now Real"
EDPS 2014: Creative Ways to Use Pre-Silicon Prototyping Platforms
EDPS 2014 Keynote: What Intel Needs from Pre-Silicon Prototyping