It makes sense, right? When developing an expensive hardware/software system—think cell phone, server, car, or fighter jet—you might want to make a virtual model alongside it to make sure that it works the way you want it to, but also to do things with it that are hard or impossible to do in the final system. Seems like a no-brainer. So why is this the hot new thing that people are talking about? Gartner even names it as number four of the Top Ten Strategic Technology Trends for 2018.
This concept is called “digital twinning”: making a digital representation of a real-world entity or system. Data from multiple digital twins can be aggregated for a composite view across several real-world entities. Now, this concept isn’t new. Ever since we started using CAD to design stuff, we have been doing this to a certain extent. What makes a digital twin different from a regular CAD simulation is that the physical system and its digital twin co-exist, and they may even communicate with each other. The physical system operates in real space and is connected to a network; the digital twin gathers physical data from the real system in real time and uses that data to improve its simulation, which in turn can be used to improve the system. For instance, in the digital twin, an addition to the system can be tested out before actually building it. Or a digital twin can do things—extreme things—that are not advisable to do in the real world, like pushing a car to its top speed for prolonged periods of time.
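That feedback loop—measure the real system, recalibrate the twin, then ask the twin risky questions—can be sketched in a few lines. Everything here is invented for illustration: the sensor readings, the one-parameter model, and the calibration rule are stand-ins for whatever a real deployment would use.

```python
# A minimal sketch of the digital-twin feedback loop described above.
# The "physical" readings and the calibration rule are hypothetical;
# a real twin would ingest live telemetry over a network.

class DigitalTwin:
    def __init__(self, gain=1.0):
        self.gain = gain  # model parameter the twin keeps calibrated

    def predict(self, load):
        # The twin's simulated response to a given load.
        return self.gain * load

    def calibrate(self, load, measured):
        # Nudge the model toward what the real system actually did.
        error = measured - self.predict(load)
        self.gain += 0.5 * error / load

    def what_if(self, extreme_load):
        # Ask the twin questions too risky for the real system.
        return self.predict(extreme_load)

twin = DigitalTwin()
# Stream of (load, measured response) pairs from the real system.
for load, measured in [(10, 12.0), (20, 24.5), (15, 18.2)]:
    twin.calibrate(load, measured)

# Probe an operating point we would never risk in meatspace.
print(round(twin.what_if(500), 1))
```

The point of the sketch is the direction of the arrows: data flows from the physical system into the model, and insight flows back out—without the real system ever being pushed to the extreme.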
Imagine, if you will… a world where </Rod Serling voice> you can don an AR/VR headset to view a factory floor, and the system responds to your gestures so you can see what happens when you flip this switch or change that setting or adjust a process flow. The factory floor may exist in the real world (what my son calls “meatspace”), but to improve the process efficiency of that factory floor, you can test the ramifications of making a change to the flow. Before implementing the change (which may be costly and have unexpected consequences), you adjust the digital twin of that complicated factory system. Using that data, you can refine the process until you have the most efficient, highest-quality end product.
No Twilight Zone scenario here. This is a real, meatspace example of how digital twinning works. At embedded world in Germany earlier this month, Avnet had a demo of this exact scenario, using a Microsoft HoloLens. This is reality, folks! (For a photo and more information, see Frank Schirrmeister's article, Embedded World 2018: Security, Safety, and Digital Twins.)
Having a digital twin of a physical object also provides opportunities for monitoring, troubleshooting, and data acquisition for better iterative design. The question “what if” becomes one you can ask without potentially damaging the prototype. More accurate tests can also be conducted without the cost of building a complete physical replica—something that is especially valuable in industries where production is costly.
Now, imagine, if you will, trying to develop a new hardware/software system where the chip, the compiler, the operating system, and your own code are all moving pieces and are being developed in parallel.
Here again, this is hardly theoretical. This is the process flow that many companies have to follow: software development has to take place in parallel with the hardware design. Maybe the software developers don’t entirely trust the hardware team, so they have a set of use cases to check that they have good hardware. There is no point in a software developer trying to chase down a subtle bug if it turns out that the hardware is the source of the error. However, the purpose of these test suites is not to help hardware verification; it is to give the software team confidence that the hardware works correctly, at least in the areas they need it to.
Later in the design process, however, running a real software load allows, for instance, accurate power measurements to be gathered or performance to be analyzed—something that can’t really be done entirely at the RTL level, since good vectors for power analysis can’t simply be created within the chip design team. Users often start with pure virtual prototypes for software development using transaction-level models (TLM); then, once RTL is available, the software development process can continue on more accurate representations. Simulation—like Cadence’s Xcelium—allows lower-level software to execute. Then emulation—like Cadence’s Palladium—can bring up operating systems and execute software in the MHz range. Before the final system runs at full speed, FPGA-based prototyping—like Cadence’s Protium S1—enables software developers to run in the 10s of MHz range. The process that started with pure virtual TLM models continues with subsequent RTL drops until RTL is frozen, the chip tapes out, and, eventually, silicon is received. The wheel just churns away, with no real stop in sight, except for the hard deadline after which no more changes can be made.
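The ladder of platforms above trades speed against fidelity and debug visibility, and which rungs exist depends on whether RTL has been delivered yet. As a rough sketch only—the speed and visibility labels come from the ranges named above plus my own simplifications, and the helper function is illustrative, not any actual Cadence tool-flow API:

```python
# A simplified model of the pre-silicon platform ladder described above.
# Speeds/visibility are rough characterizations, not vendor specs.

PLATFORMS = [
    # (name, needs RTL?, rough speed, debug visibility)
    ("virtual prototype (TLM)", False, "near real time", "full, at TLM level"),
    ("RTL simulation (e.g. Xcelium)", True, "slowest", "full signal visibility"),
    ("emulation (e.g. Palladium)", True, "MHz range", "deep hardware debug"),
    ("FPGA prototype (e.g. Protium S1)", True, "10s of MHz", "more limited"),
]

def platforms_available(rtl_ready):
    """Which representations the software team can run on right now."""
    return [name for name, needs_rtl, _, _ in PLATFORMS
            if rtl_ready or not needs_rtl]

# Before the first RTL drop, only the virtual prototype exists...
print(platforms_available(rtl_ready=False))   # ['virtual prototype (TLM)']
# ...afterwards, all four representations are in play.
print(len(platforms_available(rtl_ready=True)))  # 4
```

The useful observation the sketch encodes is that the software team is never blocked: there is always at least one representation to run on, and each RTL drop adds faster, more accurate ones.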
Turning and turning in the widening gyre / The falcon cannot hear the falconer; / Things fall apart; the centre cannot hold…—William Butler Yeats, “The Second Coming”
It can be an endless cycle, coders fruitlessly tweaking and adjusting their code, which affects the hardware design, which then affects the software implementation, which then must be reviewed and debugged, which affects the hardware design… It’s a “widening gyre” of hurry up and wait and wait and wait.
Here come verification twinning tools to the rescue! Using digital twinning for early driver bringup, the task can be partitioned into multiple jobs that run at the same time, performing parallel verification of different drivers. With the Cadence Palladium and Protium platforms, it becomes easier—and, more importantly, faster—to examine any signal, capture the contents of memory, stop the system, fix a bug, and then see how that change affects the system. Palladium and Protium work hand-in-hand to reduce the constant churn of the development cycle. In the process above, users often actually create digital quadruplets: a virtual platform, RTL simulation, emulation, and an FPGA-based prototype—all operating at different fidelity, speed, and ability to probe signals and registers for debugging. When the actual chip and the hardware/software system arrive, these digital quadruplets can easily reproduce issues found in the actual hardware/software system, and they allow much better debug, making it easier to find and correct defects whose fixes can then be applied to the real system.
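The partitioning idea—each driver gets its own independent bringup job, running concurrently—can be sketched with ordinary concurrency primitives. The driver list and the verification step here are invented placeholders; in a real flow each job would launch an emulation or FPGA-prototype run and collect its results.

```python
# A sketch of partitioning early driver bringup into parallel jobs,
# as described above. Drivers and the "verify" step are hypothetical
# stand-ins for real emulation/prototyping runs.
from concurrent.futures import ThreadPoolExecutor

DRIVERS = ["uart", "i2c", "spi", "ethernet", "usb"]

def verify_driver(name):
    # In a real flow: launch the job, run the driver's test suite,
    # capture waveforms and logs for debug on failure.
    return (name, "pass")

# Each driver gets its own job; results come back independently,
# so one stuck driver doesn't serialize the whole bringup.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(verify_driver, DRIVERS))

print(results["ethernet"])  # pass
```

The design point is independence: because the jobs don't share state, a failing driver can be stopped, debugged, and rerun without disturbing the others.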
Using digital twins of the chip under development can save the developers oodles of design time, breaking that turning and turning in a widening gyre, where the center cannot hold in the endless cycles of tweaking.
P.S. Thanks to Paul McLellan and Frank Schirrmeister for their help in writing this post!