DVCon Europe took place last week for the third time. If you are in China, you have your own DVCon coming next year for the first time, April 19 in Shanghai. DVCon Silicon Valley will be February 27 to March 2. And there is a DVCon in India...dates to be announced. Presumably it will be in September, and DVCon Europe will then come around again in October.
One thread that ran through much of DVCon Europe was automotive. I think that this is for a couple of reasons. Germany is home to BMW, Daimler-Benz, Volkswagen, Audi, and Porsche in the OEM tier, and to Bosch as a major Tier-1 supplier. So there is plenty of interest. Also, automotive is growing in importance as ADAS and self-driving technology advances much faster than anyone ever expected, along with neural network technology, a key component of visual/lidar/radar processing. But I will cover the automotive aspects of DVCon in a separate post next week.
The keynote on the first day was by Hobson Bullman of ARM. He did a PhD in experimental physics at Cambridge but got sucked into the ARM vortex anyway, and had a career focused on software tooling. Now he is the GM of ARM's Technology Services Group (TSG). There are thousands of engineers at ARM and their partners. That results in hundreds of ARM products, leading to thousands of designs containing them, leading to billions of parts manufactured every year. Hobson said he would talk about how ARM does design and verification.
As he put it, "My team doesn't produce any of this, just the methodology." TSG is an unusual organization since it is part of engineering and part of IT, producing infrastructure for design in the broadest sense: design methodology, EDA tools, server farms, and more. The methodology, in particular, is not centralized and prescribed in a hard and fast way. Hobson regards his group as the "doctor, not the policeman." Having said that, outstanding technology infrastructure is a differentiator for ARM. They provide rules for quality and performance (along with some of ARM's secret sauce). A recent development has been hiring data scientists so that they can learn across projects and turn the diversity of their projects into an advantage.
In verification, their focus is similar to most design groups: attempting to shift left so that problems are discovered earlier. Since finding bugs is largely a case of running a lot of processor cycles, that translates into finding ways to run more cycles earlier. This is always problematic because the best ways to get a lot of cycles tend to come late in the design cycle. For example, you can't use an FPGA prototyping system until the project is stable, but that is pretty late.
However, that has been a trend and they have been using more FPGA prototyping. By using enterprise FPGAs (boards with six Xilinx Virtex-7 FPGAs) they can run petacycles early. That's the only way to get that sort of count other than on actual silicon, which is the very embodiment of "too late".
The result of the focus on shift-left has indeed been more bugs found earlier. One big payoff is the use of formal techniques: formal finds 35% of the bugs but uses only 11% of the CPU cycles.
Hobson admitted that one challenge with formal is that it is hard to recruit formal engineers ("if you want a job, come and talk to me") so they typically end up putting graduates on the problem. He defined graduates as "motivated bright guys who don't know what's impossible."
They will start to do more of their engineering on ARM-based servers, eating their own dogfood as the saying goes (or "drinking your own champagne" which sounds a lot better).
There was an Accellera panel presentation of the portable stimulus standard (PSS). Unfortunately, this was scheduled at the same time as the functional safety tutorial and I couldn't be in both places. However, Cadence is heavily involved in the standardization process so I will find out more about what was presented sometime in the next couple of weeks. This was apparently a "repeat" of what was presented at DVCon India last month.
Later in the day, Cadence's Larry Melling presented A Model-Driven Approach to Software-Driven Verification. In some ways, this was a continuation of the earlier PSS tutorial that Larry had organized. Despite the name, PSS is not so much about portable stimulus as about a portable model of the design that can be used to create stimulus. Cadence's product for doing this is called Perspec System Verifier (which always seems to autocorrect to "perspex" if you don't keep an eye on it). You can read an overview I wrote about it at A Perspective on Perspec System Verifier.
The purpose of a tool like Perspec is to make it possible to run complex verification scenarios that involve software and hardware. A typical large SoC might contain multiple cores, cache-coherent interconnect, operations that can run concurrently but compete for resources, a complex power-down architecture that might mean that blocks need to be woken up in a timely manner, and so on. Here's an example: we want to check that powering down a core doesn't mess up the cache-coherency. So at a 50,000-foot level, we want to do some stuff, power down a core, do some more stuff on the other cores, power the core back up again, and then carry on and check that everything is good. At the level of writing code and vectors, this test is extremely hard to create.
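The power-down scenario above can be sketched as an abstract sequence of actions. This is a minimal Python sketch of the idea, not Perspec or PSS syntax; the action names and structure are my own illustration:

```python
import random

def run_power_down_scenario(num_cores=4, seed=0):
    """Sketch of the 50,000-foot scenario: work, power a core down,
    keep the others busy, power it back up, then check coherency.
    Action names are hypothetical, purely for illustration."""
    rng = random.Random(seed)
    victim = rng.randrange(num_cores)              # randomly pick the core to power down
    others = [c for c in range(num_cores) if c != victim]
    trace = [
        ("do_work", list(range(num_cores))),       # warm the caches on all cores
        ("power_down", [victim]),
        ("do_work", others),                       # traffic continues on remaining cores
        ("power_up", [victim]),
        ("check_coherency", list(range(num_cores))),
    ]
    return victim, trace

victim, trace = run_power_down_scenario(seed=7)
```

The point of a tool like Perspec is that the engineer writes something at this level of abstraction, and the tool works out the actual code and vectors.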
The idea at the heart of both Perspec and PSS is that at a high enough level, blocks on a chip are simple: a camera can take a picture, an Ethernet interface can receive or transmit a packet, and so on. If the blocks are modeled at that level, then building the model is not a big task. The above picture shows some operations in YAMM, yet another mobile model.
The big gain from doing the modeling at this level is that the Perspec tool itself can formally reason about what needs to be done to create use cases. The above example shows reading some video from the USB, displaying it, and uploading it. Perspec "knows" how to read packets from the USB interface, how to transfer them into memory, and how to transmit them over the modem, for example. So it can take care of all those details, and at the same time add a lot of randomization of data and of places where there are choices (such as running on different cores or using different memories), generally exercising the system in as many ways as are consistent with the semantics. So unlike UVM, it is not just the data that is randomized but also the control paths.
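The control-path randomization can be illustrated with a toy resource model. This is a hedged sketch with made-up block and resource names, not the Perspec/PSS modeling language: each operation lists its legal resource choices, and the set of legal bindings is everything consistent with those constraints.

```python
import itertools
import random

# Hypothetical resource model -- operation, engine, and memory names are
# invented for illustration only.
OPS = {
    "usb_rx":   {"engine": ["dma0", "dma1"], "memory": ["ddr", "sram"]},
    "display":  {"engine": ["gpu"],          "memory": ["ddr", "sram"]},
    "modem_tx": {"engine": ["dma0", "dma1"], "memory": ["ddr"]},
}

def legal_bindings():
    """Enumerate every legal assignment of an engine and a memory to each
    operation -- each combination is a distinct control path to exercise."""
    per_op = [
        [(op, e, m) for e in spec["engine"] for m in spec["memory"]]
        for op, spec in OPS.items()
    ]
    return [
        {op: (e, m) for op, e, m in combo}
        for combo in itertools.product(*per_op)
    ]

bindings = legal_bindings()                 # 4 * 2 * 2 = 16 legal control paths
chosen = random.Random(1).choice(bindings)  # a tool would randomize over these
```

A constrained-random tool reasons over exactly this kind of space, but at full SoC scale and combined with data randomization, rather than a hand-enumerated table like this one.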
There is a lot more detail at my earlier post. One area of focus is ARM-based systems where a lot of pre-created libraries exist so putting together the Perspec model is even simpler than for a system built from the ground up.
That evening, Bob Smith of the ESD Alliance gave the dinner keynote. He assumed that few people in Europe knew much about the ESD Alliance, so he mostly presented an overview of what they do. In case the name seems unfamiliar, the ESD Alliance is what used to be called EDAC. When Bob took over, he wanted to make the name less EDA-centric since there are members who are involved in IP, embedded software, 3D packaging, and so on. Indeed, one of the things that he pointed out is that semiconductor IP revenue now exceeds CAE revenue (CAE is front-end design, the largest category in EDA).
This ecosystem is very heavily leveraged. EDA, IP, and embedded software are measured in tens of billions of dollars (depending a bit on what you include), which drives the semiconductor market measured in hundreds of billions of dollars (I think worldwide semiconductor revenue was around $350B the last time I saw a number), which drives the electronics industry measured in thousands of billions of dollars (aka trillions). If you include all the software that runs on datacenters, etc., then this is an enormous market. None of it would exist without the key technology that companies like Cadence provide.
One area in particular where the ESD Alliance has recently been focused is multi-die ICs (also known as 3DIC, one of the "More than Moore" trends). It remains to be seen just how the supply chain changes over time, but potentially there will be markets for known-good die (KGD, pre-tested die) and wafer-level assembly, and perhaps even panel-level assembly, which makes use of the same technology used for manufacturing flat-panel displays and TVs, which can be up to 80" today and will presumably only get larger. There is a Multi-Die Design Guide available from the ESD Alliance website. The diagram below shows some of the other areas where the ESD Alliance has committees serving different aspects of the ecosystem.