At CDNLive in Boston, Andrew Ross of AMD presented a wealth of practical information about how best to use emulation.
He broke things down into four main areas: hardware systems, workflow, stimulus, and design. He pointed out that there are really two phases in using an emulator: first bringing up the design, getting it functional, and perhaps tuning it; then running the tests. And then the next code drop arrives and it is back to bringup. What is appropriate to run can depend on which phase the design is in, especially if the company has a farm of emulators. Typically bringup will be done on just one emulator, and then the stable, tuned design can be spread across as many emulators as the team can get its hands on.
The above table shows Andrew's list of hardware-system-related problems. For example, the top line addresses long build times. Obviously, while you are getting the RTL ready for the emulator, you are not running the design. So it makes sense to do the build on the fastest servers available and, when you can, to use incremental compile.
However, the best way to get away from long build times is to build less often. One gotcha is that even if the RTL hasn't changed, a rebuild required to add other features invalidates any previous save point, even though from the user's perspective nothing has changed. This is a double whammy: not only do you lose the time to do the rebuild (which could be tens of hours), you also lose the time it took to reach the save point, which could also be hours. Often it takes an especially uninteresting activity, such as booting the target operating system for the umpteenth time, to get to the point where the emulation starts to become interesting and productive.
An emulator like the Palladium Z1 can be used simultaneously by many people; the limit is over 2,000 users. Of course, this means that emulation for many projects can run at the same time. But within a single project, many people can also share a lot of the effort. For example, it doesn't make sense for each user to do a full boot of the target operating system; it makes much more sense to do that once and create a save point. Then everyone else restores the save point and runs their own tests. For this process to be efficient, good communication is essential: everyone needs to know what save points exist, what vectors exist, and what tests have already been run.
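A back-of-the-envelope sketch shows why the shared save point pays off. All the numbers below are invented purely for illustration (the blog does not give timings); the point is the shape of the arithmetic: the boot cost is paid once instead of once per user.

```python
# Hypothetical timings, for illustration only -- real boot, restore, and
# test times depend entirely on the design and the workload.
BOOT_HOURS = 4.0      # assumed time to boot the target OS on the emulator
RESTORE_HOURS = 0.25  # assumed time to restore a save point
TEST_HOURS = 1.0      # assumed time for each user's own test run
USERS = 10

# Naive workflow: every user boots the OS from scratch before testing.
naive_total = USERS * (BOOT_HOURS + TEST_HOURS)

# Shared workflow: one boot creates the save point; everyone restores it.
shared_total = BOOT_HOURS + USERS * (RESTORE_HOURS + TEST_HOURS)

print(f"boot per user:     {naive_total:.1f} emulator-hours")
print(f"shared save point: {shared_total:.1f} emulator-hours")
```

With these made-up numbers the team goes from 50 emulator-hours to 16.5, and the gap only widens as more users share the snapshot, since the one-time boot cost is amortized across all of them.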
One area that can get complex is when the emulator is hooked up to real hardware such as a PC. If the PC is physically located in, say, California, it can be a problem for engineers in, say, India, if the PC needs to be rebooted. There needs to be some way to accomplish this remotely, or better still, a virtual solution. With real hardware, save and restore of the whole system is not possible, sometimes physical disks need to be swapped, cabling is complex, and so on. It is a lot more effective to dynamically reconfigure the targets via a virtual machine. But the need for real hardware (ICE) will never go away, since eventually you want to run against "the real thing" and not a model that might have unknown imperfections.
Using an emulator effectively is as much an art as a science, and experience is extremely important. So one big piece of advice is to make sure that there is at least one experienced engineer per project. Otherwise inexperienced engineers can become the critical path, both taking a long time to accomplish their own work and creating unnecessary additional work, such as extra builds. The Palladium Z1 helps solve the problem of what Andrew calls "emulator calendar tetris," since jobs can be relocated even once they are running. As long as the resources on the emulator are not exhausted, it should be possible to run other jobs. This doesn't completely solve the problem, because there are almost never enough emulation resources for everything that everyone wants to do, so there is still a need to prioritize access.
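The "calendar tetris" effect can be illustrated with a toy model. This is emphatically not the real Palladium scheduler: the sketch simply assumes an emulator's capacity is a row of domains and that a new job needs a contiguous free block, so that scattered free capacity is wasted unless running jobs can be relocated.

```python
def fits_contiguous(domains, need):
    """Return True if there are `need` adjacent free (None) domains."""
    run = 0
    for d in domains:
        run = run + 1 if d is None else 0
        if run >= need:
            return True
    return False

def compact(domains):
    """Relocate running jobs to one end, gathering free domains together."""
    busy = [d for d in domains if d is not None]
    return busy + [None] * (len(domains) - len(busy))

# Fragmented emulator: jobs A and B leave free domains scattered about,
# so a new job needing 3 domains cannot start even though 4 are free.
domains = ["A", None, "B", None, "A", None, None, "B"]

print(fits_contiguous(domains, 3))           # fragmented: no room
print(fits_contiguous(compact(domains), 3))  # after relocating jobs: fits
```

The same total capacity is free in both cases; the ability to move running jobs is what turns unusable fragments into a schedulable block, which is the gap that relocation closes. And as the post notes, even perfect packing does not remove the need to prioritize when demand exceeds supply.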
The "stimulus" on most systems is the target software. There is a tradeoff to be made here. Booting the operating system and running production code might not stress the system heavily. But using special close-to-the-metal code risks not covering the real-world cases, even though the amount of code needed to exercise the hardware might be orders of magnitude less.
There are many aspects of productivity that are design related. This is typically not related directly to the RTL, which is what it is, but to the other aspects such as debug output, monitoring, and assertions. One big problem that almost every team faces is the different level of maturity of different parts of the design. Some parts are very stable and others are incomplete. When the design fails at the system level, it can be hard to pin down exactly what caused the failure.
Andrew's conclusion is that you can get a lot more productive use out of emulation by taking a disciplined approach.
A copy of this presentation (and everything from CDNLive Boston) will soon be available on the CDNLive page.