This week it has been the 13th European CDNLive, held in Unterschleißheim in the suburbs of Munich. The event starts on Monday afternoon with some technical tracks. The big day is Tuesday, starting with the keynotes and then going to a full palette of technical tracks. There are tutorials and other presentations on Thursday morning, although I had to leave on Tuesday evening to visit imec, which I'm sure I'll be writing about in more detail in the coming weeks.
Paul has a new job. He was in charge of the front end of digital design, all things Genus. But now he has all of verification: simulation, formal, emulation, and FPGA prototyping, plus the tools that wrap around those base technologies.
You've probably heard the phrase that data is the oil of the new economy. Paul had some market cap numbers to show just how true it is. A decade or two ago, the top companies by market cap were all oil companies. Now they are Apple, Alphabet (Google), Microsoft, Amazon, and Facebook. Yes, they are tech companies, but increasingly they are the companies with the big cloud datacenters full of...it's there in the name...data. Data is the currency of the world now, and it's driving the whole semiconductor market. That market has been ticking over at about 2.1% growth for years, but the forecast going forward is 6.2%, three times as much. The top five cloud companies are installing $60B per year of capital equipment, and that is growing 20-30% per year.
Automotive, off a much smaller base, of course, is growing at 105% CAGR. As it happened, Cadence had an announcement to make in that space that day. But Paul is a digital guy, so he brought up Vinod Kariat, the VP of automotive reliability, to do the honors.
Vinod came up on the stage to talk about the new Legato Reliability Solution. I had already written a post about it, which went live at almost the same moment. If you want the details, see my post Legato: Smooth Reliability for Automobiles, but it is basically a solution to make it easy to get reliability in the analog part of mixed-signal chips. The digital part can do all sorts of things, from running on-chip BIST at regular intervals to having safety processors implemented with triple redundancy. Analog has to have reliability designed in more directly, even though the chips run in hostile, hot environments and have to last for over 15 years, the lifetime of the car.
There are three thrusts to the Legato Reliability Solution:
Paul came back to talk a little about verification. Of course, it is one of those jobs that you can never finish, since you can never be certain that there isn't something you would have found if you had run just a few more vectors.
Paul said that he is pushing his engineering teams to deliver:
Paul talked a little about Palladium, where we have 50 new logos such as Cavium, Fujitsu, Huawei, Imagination, and more. It is "the ultimate debug machine" with our own custom processor under the hood, and the capability to examine any signal at any time. Protium is faster still, but with less debug capability, so it is more suitable for software development once the hardware is fairly stable. There are over 20 new logos for Protium, and repeat orders.
The focus with Xcelium has several themes. Incremental build (30X faster) and parallel build (3X faster) get the simulation started faster. There is a focus on single-thread performance, since that is very important for big UVM test suites that might run a huge number of relatively small simulations, but also on long runs making use of multiple cores. The single-core performance has improved 2X in the last couple of years, and Paul is confident "we have another 2X in the next couple of years."
Then on to formal, where 17 of the top 20 companies designing chips are using JasperGold. Business improved 30% year on year. Paul's PhD is actually in formal, but even with that background he never really expected it to become a mainstream part of chip design.
"The future is bright," Paul said to wrap up (although it didn't seem to be so bright you had to wear shades).
For a bio of Philipp Slusallek, see my post CDNLive EMEA Preview.
Philipp started with a picture of the Uber accident in Phoenix where a woman was killed. He had some recent news that the car had apparently recognized the person as a person, but classified them as irrelevant. He hadn't heard anything about why that might have happened. For more about the accident, see my post In Other News, 100 People Were Killed by Cars Driven by People.
DFKI, one of the places that Philipp heads up, is the largest AI research center in the world. It has five sites in Germany. He described it as building a "computer with eyes, ears, and common sense." It is a private/public partnership with a list of gold-plated shareholders such as BMW, Intel, Google, and Microsoft.
One major project is the AnyDSL compiler framework. The idea is simple, but the implementation is complicated. The plan is to have a single high-level representation of an algorithm, but then map it to CPU, GPU, FPGA, custom hardware, multicore processors, vector processors, and so on. The early results are already impressive: 10X smaller code, 25-50% faster than OpenCV on CPU and GPU. The work on FPGAs is ongoing.
One key part is that general algorithms can be optimized for special cases, such as baking in the parameter values for a Gaussian blur, rather than running a full general stencil over an image. The results are 25% faster than hand-optimized code on an Intel GPU, 50% faster on a CPU, and 45% faster on an NVIDIA GPU, all with the code written just once.
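AnyDSL itself uses its own language (Impala) and partial evaluation to do this specialization at compile time. As a rough illustration of the idea only, here is a minimal Python sketch (all names hypothetical) of a general stencil that gets specialized once the kernel, such as a Gaussian blur, is known:

```python
import numpy as np

def make_stencil(kernel):
    """Return a convolution function specialized for a fixed kernel.

    When the kernel is known ahead of time (e.g. a Gaussian blur),
    a partial evaluator like AnyDSL's can unroll these loops and fold
    the weights into constants; here we simply close over them.
    """
    kh, kw = kernel.shape

    def apply(image):
        h, w = image.shape
        out = np.zeros((h - kh + 1, w - kw + 1))
        for dy in range(kh):
            for dx in range(kw):
                # Each kernel tap adds a shifted, weighted copy of the image
                out += kernel[dy, dx] * image[dy:dy + out.shape[0],
                                              dx:dx + out.shape[1]]
        return out

    return apply

# A 3x3 Gaussian blur is just a special case of the general stencil
gauss = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 16.0
blur = make_stencil(gauss)

image = np.ones((5, 5))
result = blur(image)  # interior stays 1.0 since the weights sum to 1
```

The point of the real framework is that this specialization happens in the compiler, so the generated code for each target (CPU, GPU, FPGA) has no leftover generality to pay for.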
Philipp moved back to automotive, and how to prevent the type of accident that happened in Phoenix. Normal driving can perhaps be trained with real physical cars. But exception conditions cannot. Some are just not possible ("we can't send lots of kids in front of cars to generate data") and others happen too rarely for learning. He talked about AlphaGoZero at the end of last year, which used reinforcement learning to get to world champion level in a very short time (for more background on AlphaGoZero, see my post Deep Blue, AlphaGo, and AlphaGoZero). But Go and Chess are simple since we know the rules. In other tasks, like recognizing a melody or driving well, the rules have to be learned at the same time.
This is a "virtual crash test dummy" looking for the extremes of long-tail distributions. Once you have driven in a city a fair bit, whether a human or a computer, you won't learn more once you achieve proficiency. It is the critical situations that happen with very rare probability that are the challenge. This is where training the car in a virtual environment comes in. But where do you get a virtual environment? You can do it by hand, but that doesn't scale. Instead, his team is taking reality and building partial models of how the world works. "Then we can model realistic models of kids, for example." But this needs to consider variations (small kid, tall kid, left to right, right to left, night, etc). That model is used to create synthetic sensor data, matching what a real car's sensors would see in that situation. Now that synthetic data can be used to train the car.
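The variation space he describes amounts to a parameterized scenario model. As a minimal sketch, with all parameters and names hypothetical, sampling that space might look like this in Python; in a real pipeline each sampled scenario would drive a renderer that produces the synthetic sensor data:

```python
import random

# Hypothetical parameters for the "child crossing" scenario model:
# body height, crossing direction, lighting, and movement speed.
HEIGHTS_M = (0.9, 1.5)                       # small kid to tall kid
DIRECTIONS = ["left_to_right", "right_to_left"]
LIGHTING = ["day", "dusk", "night"]

def sample_scenario(rng):
    """Draw one synthetic 'child crossing' variation from the model."""
    return {
        "height_m": round(rng.uniform(*HEIGHTS_M), 2),
        "direction": rng.choice(DIRECTIONS),
        "lighting": rng.choice(LIGHTING),
        "speed_mps": round(rng.uniform(0.5, 3.0), 2),  # walking to running
    }

rng = random.Random(42)  # fixed seed so runs are reproducible
scenarios = [sample_scenario(rng) for _ in range(1000)]
```

Sampling densely in the rare corners of this space (night, fast, small child) is exactly what real-world driving data cannot provide, which is the argument for the virtual environment.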
This is all done in an open framework, since each model gets more valuable as more models are added, since it gets closer to a picture of how reality looks. They are also collaborating with TÜV SÜD (as is Cadence, see my post What is Automotive Tool Confidence Level 1?).
Philipp is trying to kick off a big EU-wide flagship proposal on building an open platform for research involving academics and industry. He uses CERN (the particle accelerator on the Swiss-French border) as a model. If I get the typography right, it seems to be Humane AI. The proposal was just submitted, for €100M over 10 years.
So definitely driving to the future...autonomously.
Sign up for Sunday Brunch, the weekly Breakfast Bytes email