At 11:10am Korean time this morning, Cadence's Elias Fallon delivered one of the keynotes at ISOCC (International System On Chip Conference), titled EDA and Machine Learning: The Next Leap in Semiconductor Design and Productivity. He wasn't actually in Korea; he had recorded his presentation from his spare bedroom earlier.
EDA is full of NP-hard problems, meaning that no known algorithm can find their optimum solutions efficiently (except, perhaps, for trivially small examples). As the size of the problem increases, the run-time goes up dramatically and soon becomes prohibitively long. One of the earliest problems proven NP-complete was, in fact, Boolean satisfiability, which is used in synthesis programs like Cadence's Genus Synthesis Solution. What's fun about EDA is the number of approaches we've discovered that find a reasonably good solution in a reasonable amount of time for all these problems where finding an optimum solution is intractable. Another well-known intractable problem is the traveling salesman problem, which has applications in routing. From a software engineering point of view, we need a toolbox of computational software algorithms to apply.
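The traveling salesman case makes this concrete. Here is a toy sketch (the points and heuristic are invented for illustration, not an EDA algorithm): exhaustive search finds the true optimum but scales factorially, while a simple nearest-neighbor heuristic finds a reasonable tour in polynomial time.

```python
import itertools
import math

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def exact_tsp(points):
    """Optimal tour by exhaustive search: O(n!), only viable for tiny n."""
    n = len(points)
    best = min(itertools.permutations(range(1, n)),
               key=lambda perm: tour_length(points, (0,) + perm))
    return (0,) + best

def greedy_tsp(points):
    """Nearest-neighbor heuristic: O(n^2), a good-enough tour at scale."""
    unvisited = set(range(1, len(points)))
    order = [0]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda j: math.dist(last, points[j]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tuple(order)

points = [(0, 0), (1, 5), (2, 2), (5, 1), (6, 6), (3, 4)]
opt = tour_length(points, exact_tsp(points))
approx = tour_length(points, greedy_tsp(points))
print(f"optimal {opt:.2f}, greedy {approx:.2f}")
```

At six points the exact search is instant; at forty it is hopeless, while the greedy pass still finishes in microseconds, which is the trade EDA algorithms make constantly.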
Now, machine learning and deep learning are starting to be integrated into every complex system. The EDA industry is enabling new machine-learning products through the chips that are being designed, and we will be able to use machine learning and deep learning to improve the design process for those chips and accelerators as well.
Most people in the audience at ISOCC are working on SoCs targeting the major industry trends illustrated above.
Three trends are coming together to drive a convergence in computational software.
Elias moved on to the details of where the opportunities lie for using machine learning in EDA. At the top level, most EDA tools are very similar, with one or more complex computational algorithms at the heart transforming the input into the output. Those algorithms are the core value of an EDA tool. There is more (databases, GUIs, infrastructure) that transforms the input into the form the algorithm requires to do its work, and then transforms the results into the form the user requires. This is actually where EDA engineers spend a lot of their time.
There are often lots of options to control the algorithm. We try to reduce the number to make the tool easier to use, but complex problems often lead to complex solutions. Chaining several EDA tools together gives a whole flow that brilliant designers can use to construct extremely sophisticated systems. This also leads to specialization. A designer who specializes in running the digital synthesis flow will probably be lost if asked to design and simulate an analog circuit (and vice versa). Different engineering teams would struggle to run each other's workflows, even if all the blocks end up on the same chip.
With this model in mind, let's look at two of the ways to apply machine learning to EDA problems. We call these approaches machine learning inside and machine learning outside. In both approaches, we use machine learning techniques to train a model that we can then use in an EDA flow. There are other interesting topics in machine learning for EDA, such as unsupervised learning, reinforcement learning, and perhaps using these approaches to replace some of the algorithms at the core of EDA tools. However, these are definitely longer-term EDA research topics.
These are the most promising approaches for near-term deployment in realistic EDA design flows. At a high level, we can categorize ML inside as using machine learning in ways that are not visible to the user; they are somewhat hidden inside the tool. These models might be used to determine algorithm inputs or hyperparameters, or to adapt the algorithm itself. One exciting example is using deep neural networks as a proxy cost function that can potentially be more accurate than human-crafted cost functions.
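The proxy-cost idea can be sketched as follows. This is a hedged toy, not Cadence's method: the cost function, the nearest-neighbor surrogate, and every name here are invented. The point is only the shape of the technique: an algorithm that can afford just a few exact cost evaluations trains a cheap learned proxy from them, then searches on the proxy instead of the expensive function.

```python
import math
import random

def exact_cost(x):
    """Stand-in for an expensive, exact cost evaluation."""
    return (x - 0.3) ** 2 + 0.1 * math.sin(8 * x)

def knn_proxy(samples, x, k=3):
    """Cheap learned proxy: average exact cost of the k nearest samples."""
    nearest = sorted(samples, key=lambda s: abs(s[0] - x))[:k]
    return sum(c for _, c in nearest) / len(nearest)

random.seed(0)
# A few expensive exact evaluations form the proxy's training set.
samples = [(x, exact_cost(x)) for x in (random.uniform(0, 1) for _ in range(12))]

# The search loop queries only the cheap proxy, never the exact cost.
grid = [i / 200 for i in range(201)]
best_x = min(grid, key=lambda x: knn_proxy(samples, x))
print(f"proxy picks x={best_x:.2f}, exact cost there {exact_cost(best_x):.3f}")
```

In a real tool the proxy would be a deep network and the exact cost might be a full timing or DRC evaluation, but the division of labor is the same: expensive evaluations train the model, cheap inferences drive the inner loop.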
Machine learning outside models are typically more visible to the user, used to set options, constraints, and control flows outside individual EDA tools.
Elias moved on to a few examples. One externally funded Cadence project just wrapping up was looking at full automation of an analog flow: optimization and layout generation. IC layout for analog circuits depends a lot on the designer's experience of how to group and inter-digitate devices that need good matching. In fact, that experience is often gained from chips that didn't work. The challenge is to use ML to learn from good hand-crafted layouts and move to a no-human-in-the-loop approach where we no longer rely on designers' experience about what to match; the goal is full automation.
Training is a challenge since we have access to only limited amounts of data from any given customer, and are even more restricted in how data from different companies can be mixed. Typically we have only five to seven designs to train on, so we look at each group of transistors and check whether they are grouped in the layout. That way we can take a few designs and turn them into thousands of datapoints, sufficient to train our ML model. So we've been able to take experience from existing designs and apply the model to new designs, making a process that used to be manual become automatic.
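The pairwise trick described above can be sketched in a few lines. Everything here is hypothetical (the device records, the feature set, and the grouping format are invented for illustration): each pair of devices in a design becomes one labeled example, with features taken from the schematic and the label taken from whether the hand-crafted layout grouped the two devices together.

```python
from itertools import combinations

def pair_features(a, b):
    """Schematic-side features for one device pair (toy feature set)."""
    return {
        "same_type": a["type"] == b["type"],
        "same_size": a["w"] == b["w"] and a["l"] == b["l"],
        "shared_net": bool(set(a["nets"]) & set(b["nets"])),
    }

def dataset_from_design(devices, layout_groups):
    """One design -> O(n^2) labeled pairs; label is 'grouped in layout'."""
    group_of = {d: g for g, members in enumerate(layout_groups) for d in members}
    rows = []
    for a, b in combinations(devices, 2):
        ga, gb = group_of.get(a["name"]), group_of.get(b["name"])
        label = ga is not None and ga == gb
        rows.append((pair_features(a, b), label))
    return rows

devices = [
    {"name": "M1", "type": "nmos", "w": 2.0, "l": 0.18, "nets": ["in_p", "tail"]},
    {"name": "M2", "type": "nmos", "w": 2.0, "l": 0.18, "nets": ["in_n", "tail"]},
    {"name": "M3", "type": "pmos", "w": 4.0, "l": 0.18, "nets": ["out", "vdd"]},
]
# M1/M2 form a matched pair in the hand-crafted layout; M3 stands alone.
rows = dataset_from_design(devices, layout_groups=[["M1", "M2"], ["M3"]])
```

Three devices already yield three labeled pairs; a design with a few hundred devices yields tens of thousands, which is how five to seven designs can become a usable training set.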
As you can see from the above results, the performance and yield both improve. This model seems to work well across different processes, depending more on the circuit design style. This is an example of ML outside since it creates a user-visible impact on the flow (the schematic groupings are visible).
Elias went over some other examples from the analog flow research.
Another group has been using a deep neural network to predict the probability that a PCB placement can be routed. It is not feasible to actually run the full router during PCB placement, but the heuristics used instead have historically often been poor predictors. The idea of using ML to predict the success of future flow steps has shown good results in university research, too.
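The routability-prediction idea can be sketched as a classifier over cheap placement features. In this toy version a simple logistic regression stands in for the deep network, and the two features and the training data are entirely synthetic; only the shape of the technique matches the text.

```python
import math
import random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.5):
    """Stochastic gradient descent for logistic regression."""
    w = [0.0] * (len(data[0][0]) + 1)  # feature weights + bias
    for _ in range(epochs):
        for x, y in data:
            xb = x + [1.0]
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, xb)))
            for i, xi in enumerate(xb):
                w[i] -= lr * (p - y) * xi
    return w

def routable_prob(w, x):
    """Predicted probability that this placement routes successfully."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x + [1.0])))

random.seed(1)
# Synthetic ground truth: dense, congested placements fail to route.
data = []
for _ in range(200):
    pin_density, congestion = random.random(), random.random()
    data.append(([pin_density, congestion],
                 1.0 if pin_density + congestion < 1.0 else 0.0))

w = train(data)
print(f"sparse placement: {routable_prob(w, [0.1, 0.1]):.2f}, "
      f"congested: {routable_prob(w, [0.9, 0.9]):.2f}")
```

During placement, evaluating this model is essentially free, so the placer can steer away from layouts the model says are unlikely to route, without ever invoking the full router.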
This is an example of ML that is in production use and incorporated into the shipping tool. It also involves using existing automation flows to generate data. You can see in a couple of examples that we get a 14-21% improvement in total negative slack, which results in improved PPA.
Elias wrapped up with four examples.
But this is just the beginning of the use of ML, both inside and outside the tools. This will be an important tool for EDA developers over the next few years.