
Paul McLellan
Tags: digital design, artificial intelligence, ml, deep learning, dl, machine learning, AI

DAC: Digital Lunch Does Not Mean Finger Food

27 Jun 2019 • 6 minute read

The Cadence lunch on Tuesday was the turn of digital, with the panel set to consider Machine Learning and Its Impact on the Digital Design Engineer. The panel was moderated by Professor Andrew Kahng of UC San Diego.

The panelists were:

  • Vishal Sarin, Analog Inference (neural network processor ICs)
  • Andrew Bell, Groq (software-defined compute ML platforms)
  • Haoxing Ren, NVIDIA
  • Paul Penzes, Qualcomm
  • Venkat Thanvantri, Cadence (VP R&D leading AI machine learning development in digital and signoff)

Note that there are two Andrews: the moderator and Groq's. I think the context makes things clear, so I won't bother to always disambiguate the two of them.

Andrew kicked off with a question to Paul (of Qualcomm): What makes today the opportune time for machine learning (ML) when the technology has been around for 30 or 40 years?

Paul answered that although AI has been around for a while, we're running out of steam in traditional ways to solve certain kinds of problems, and the alternatives are producing diminishing returns. So, at least in the short term, using AI looks like a good return on investment. In the meantime, there is a lot of design data available, and that is a good fit for certain types of ML. The third thing is the speedup in hardware: accelerators can speed up some types of inference.

Vishal agreed on the hardware speedup. "When I graduated, I looked into neuromorphics but at the time the hardware technology wasn't good enough."

Andrew asked Vishal if fundamentally different architectures are emerging.

Vishal said there were, especially if you focus on training or inference.

You can be more specialized if you only focus on one. But it's about more than accelerating MACs, which is where a lot of the effort is today, because of the von Neumann bottleneck between memory and compute. That lends itself to architectures that bring memory closer to compute. Neural networks require many trillions of operations. GPUs are there, but not good enough for that. What I see evolving are spiking neural nets, but the network models are not right yet. Analog neural networks can have very low energies and high performance.
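To put "many trillions of operations" in context, here is a quick back-of-the-envelope MAC count for a single convolutional layer (the layer dimensions are illustrative assumptions, not figures from the panel):

```python
def conv_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    """Multiply-accumulate count for one conv layer:
    each output element needs k_h * k_w * in_ch MACs."""
    return out_h * out_w * out_ch * (k_h * k_w * in_ch)

# Hypothetical ResNet-style layer: 56x56 output, 64 -> 64 channels, 3x3 kernel
macs = conv_macs(56, 56, 64, 3, 3, 64)
print(macs)  # 115,605,504 MACs for this one layer alone
```

Multiply that by dozens of layers and billions of inferences, and it is easy to see why moving operands across the memory-compute gap dominates the energy budget.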

Andrew Kahng noted that everyone in the room had a job to return to after DAC, back to squeezing implementation through synthesis and physical design. "Where do you look for ML to help you there the most?" he asked.

Andrew said he's seen ML both used and abused. It's good for finding bugs; it's abused for mining log files. So he's concerned that it can be deployed in both right and wrong ways. "Automated floorplanning is still behind what a good engineer can do."

Venkat said that on macro placement, some problems are deterministic and some we can apply ML to guide.

Andrew wants to reduce iteration times with partial builds that otherwise can be hours or days of turnaround:

If there were a way to do deltas in minutes, that would be a watershed moment. For speed and PPA, EDA tools should embrace statefulness and use what was learned from previous runs.

Paul said to look for places where you find yourself doing roughly the same thing time and time again in the same context. One clear example is power optimization (dynamic or leakage): you always do locally similar things and keep repeating them. DRC checking is another. You run for a fixed number of cycles and don't really check the rate of improvement, but you can often stop early and save a lot of runtime.
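Paul's early-stopping idea can be sketched as a loop that monitors the rate of improvement instead of always burning a fixed iteration budget (the metric, threshold, and toy cost function below are illustrative assumptions):

```python
def optimize(step, initial_cost, max_iters=100, min_rel_gain=0.005):
    """Run step() until the relative improvement per iteration drops
    below min_rel_gain, rather than always running the full budget."""
    cost = initial_cost
    for i in range(max_iters):
        new_cost = step(cost)
        rel_gain = (cost - new_cost) / cost if cost else 0.0
        cost = new_cost
        if rel_gain < min_rel_gain:
            return cost, i + 1  # converged early: remaining runtime saved
    return cost, max_iters

# Toy cost function that converges toward 50; the loop stops long
# before the 100-iteration budget is exhausted
final, iters = optimize(lambda c: 50 + (c - 50) * 0.5, 100.0)
print(final, iters)
```

In a real flow the "cost" would be a QoR metric such as total negative slack or leakage power, and `min_rel_gain` would be tuned per optimization phase.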

Andrew Kahng moved up into the cloud. "When you can launch 10,000 threads and only one needs to be acceptable, that's a whole new mindset. What kinds of new methodologies would you write a big check for, and what do you feel you are leaving on the table?"

Andrew said he didn't write checks, but:

High-level synthesis (HLS), since exploring the architectural space is very important. It's hard to know how much you are missing out on.

Vishal wanted tools to help get the power consumption numbers down. Silicon is getting very expensive, so if there are tools that can use AI to improve yield, he felt that could be another area.

Haoxing thought that EDA so far is just scratching the surface. It's all supervised learning and prediction. There's a lot more, such as unsupervised and reinforcement learning, which are not mainstream even in research. For many areas, computation is very expensive: for example, analyzing self-heating in FinFETs using SPICE when there are 10B devices. With ML, you can build a model that doesn't require SPICE. Combinatorial algorithms can often be improved with ML, too. Then there are problems that we don't know how to solve, like analog design automation.

Design automation is already superhuman, beyond human capacity, but for ML/DL to become mainstream, we need a sort of AlexNet moment in the EDA field, not just incremental improvement. We need to get people excited.

Andrew asked Cadence's Venkat whether the digital EDA flow is being disrupted.

Venkat said there are two ways Cadence is using ML for the flow:

ML inside: core engine improvement, incremental gains in PPA. ML outside: making the designer more productive, getting better recommendations to designers, and making every designer a star designer, with fewer and faster iterations. We have an analytics and big-data platform with a thin interface layer to all the digital tools, so we can put the metadata back in the database and provide Cadence analytics applications on top. Customers want to do that, too, with their own analytics and their own designs.

A question from the audience for Groq's Andrew about what he actually does. He gave one example: when doing timing closure, every violating path has 30 or 40 metrics in a tidy format, and he has built tools on top of that using Python. "I've found this is a really effective way to do timing closure, outside of the EDA tools themselves." Venkat said that Cadence is developing analytics and can get a big productivity gain. Haoxing said that the open-source ML community provides a lot; the hardest part is getting the data out of the tool.
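A sketch of the kind of tooling Andrew described, with per-path metrics in a tidy table triaged in Python (the column names, values, and thresholds are hypothetical; real timing reports vary by tool):

```python
# Hypothetical violating-path report: one record per path, one field per metric
paths = [
    {"endpoint": "u_core/reg_a", "slack_ns": -0.12, "levels": 18, "net_delay": 0.40},
    {"endpoint": "u_core/reg_b", "slack_ns": -0.05, "levels": 9,  "net_delay": 0.10},
    {"endpoint": "u_mem/reg_c",  "slack_ns": -0.30, "levels": 27, "net_delay": 0.75},
]

def triage(paths, top_n=10):
    """Sort violating paths by slack and flag a likely root cause:
    wire-dominated paths point at placement/routing fixes, deep
    paths at logic restructuring or retiming."""
    for p in paths:
        p["wire_dominated"] = p["net_delay"] > 0.5   # >50% of delay in wires
        p["deep_logic"] = p["levels"] > 20           # unusually long logic chain
    return sorted(paths, key=lambda p: p["slack_ns"])[:top_n]

for p in triage(paths):
    print(p["endpoint"], p["slack_ns"], p["wire_dominated"], p["deep_logic"])
```

The value of the "tidy format" Andrew mentioned is exactly this: once each violation is a row of named metrics, sorting, filtering, and bucketing by suspected cause is a few lines of script rather than manual report reading.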

Another audience question asked what type of data everyone is collecting for bug prediction. Haoxing said they have done a lot and have even published papers about verification coverage. The idea is that you can use ML models to classify tests and reduce the number of runs: many tests in a suite are effectively the same, and you don't need to run both if they are predicted to find the same bugs. "We can even predict whether a vector will improve coverage or not, but it's an active research area."
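Haoxing's point, skipping a test when a model predicts it would hit the same coverage as one already run, can be sketched with a simple feature-similarity filter (the feature vectors and threshold below are stand-ins for what a trained coverage model would produce):

```python
def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def select_tests(tests, threshold=0.95):
    """Greedily keep a test only if its feature vector is not a
    near-duplicate of an already-selected test; in practice the
    features would come from a model trained on past coverage data."""
    kept = []
    for name, feats in tests:
        if all(cosine(feats, kf) < threshold for _, kf in kept):
            kept.append((name, feats))
    return [name for name, _ in kept]

tests = [
    ("t_alu_basic",  [1.0, 0.0, 0.2]),
    ("t_alu_rerun",  [0.98, 0.01, 0.21]),  # near-duplicate: predicted redundant
    ("t_mem_stress", [0.1, 1.0, 0.0]),
]
print(select_tests(tests))  # ['t_alu_basic', 't_mem_stress']
```

Dropping the predicted-redundant run is where the regression-farm savings come from; the research question is making those predictions reliable enough to trust.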

The last question of the lunch. "What should folks keep an eye out for? And what should folks not hold their breath for?"

Venkat emphasized that ML is a top initiative inside Cadence and we've added a lot of resources.

We've already delivered products that support it, and we see both better silicon and better productivity. You will see more in the next six months.

Paul said that Qualcomm has a big investment in the underlying hardware, and some of that is already in your phone (if you have the right phone). They are actively looking at where ML is a natural candidate to solve problems.

Haoxing said that NVIDIA is using more GPUs in design flows. "But I don't think ML will replace designers, just empower them to improve QoR."

Andrew got in a quick bit of marketing: "Look out for Groq. That's my pitch. Hardware engineering jobs aren't in peril; engineers still have higher-order insights that ML can't touch."

Vishal told everyone not to hold their breath for new types of deep-learning silicon for general AI; that's further in the future. "Having said that, there are lots of new ideas in silicon."

With that, we all thanked the panelists and then went back to LVCC to see if we could collect all the squishies that Cadence was giving away at the expert bar.
