There was a recent panel discussion at Cadence on the future of EDA. If you didn't see it, then read yesterday's post first, which introduces everyone and has the view from academia: The Future of EDA: The View from Academia.
Anirudh has to deliver product every day, so he brought the discussion down to earth: what is happening, and how does it apply to us at Cadence? He described a slide he had seen recently with two boxes multiplied together to give a third. The two boxes multiplied were IoT and networking; to the right was the cloud.
In IoT, Cadence is well positioned, with a lot of experience in analog design. Networking we know, especially 5G, which is racing ahead. The cloud is an area where we have done lots of CPU designs, and machine learning is the new part, where Tensilica is well positioned.
He said that in his day at school (CMU) there were no machine learning courses; now it is a whole department. He took a course in convolutional neural networks and said that, at one level, it is just a non-linear optimization problem, which he already knows well, since his PhD was on circuit simulation. When he mentioned it to his dad (a mathematics professor at IIT Delhi), his dad said it was just first-order ordinary differential equations. There is a sense in which that is true, but in EDA we have to solve 10M or 100M of them at once, so it is no longer pure math; the challenge is applying it at very large scale. Cadence has done some good work with our "-US" products using 4, 8, or 16 CPUs, even 100. But we need to do a better job with massive parallelism: thousands or even hundreds of thousands of CPUs.
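The "just first-order ODEs" point can be made concrete with a toy version of the problem. Below is a minimal sketch (illustrative only, not any Cadence product) of circuit simulation as a single first-order ODE: one RC stage integrated with backward Euler, the kind of implicit method SPICE-class simulators favor for stiff systems. A real simulator solves millions of such equations coupled through sparse matrices; the math is the same, the scale is the challenge.

```python
# Illustrative only: circuit simulation reduces to first-order ODEs.
# One RC low-pass stage, dv/dt = (vin - v) / (R*C), integrated with
# backward (implicit) Euler. Real tools solve 10M+ coupled equations
# with sparse linear algebra; this is the one-equation version.

def simulate_rc(vin=1.0, r=1e3, c=1e-6, dt=1e-5, steps=1000):
    tau = r * c          # time constant, here 1 ms
    v = 0.0              # capacitor starts discharged
    for _ in range(steps):
        # Backward Euler: solve v_new = v + dt * (vin - v_new) / tau
        v = (v + dt * vin / tau) / (1.0 + dt / tau)
    return v

print(simulate_rc())     # after 10 time constants, v has settled near vin
```

Running it for 10 time constants shows the output converging to the input voltage, exactly what the differential equation predicts.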
We also need to move up a level. EDA always needs to move up a level. As Anirudh pointed out, Alberto has been working on system-level abstractions forever—"before you were born"—but we still need technologies that can work at the system level, and give us a framework like Virtuoso but at the system level, worrying about power, thermal, software interaction, rather than transistors.
Chi-Ping started off by answering the "Are we there yet?" question with a solid "No." There are 10^14 neurons in our brain; the largest chip-based networks are 10^8, so we have another factor of a million to go. If you look at IBM's neural nets, they are 10,000 times worse than the human brain in power efficiency, so there is a long way to go there too.
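The arithmetic behind those gaps is simple enough to check; a quick sketch using the round orders of magnitude quoted on the panel:

```python
# Back-of-envelope gap between the brain and today's chip-based
# neural networks, using the round figures quoted on the panel.
brain_neurons = 10**14        # neurons in the human brain
chip_neurons = 10**8          # largest chip-based networks today
scale_gap = brain_neurons // chip_neurons

power_gap = 10_000            # chips ~10,000x worse in power efficiency

print(scale_gap)              # the "factor of a million" still to go
```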
He talked about what used to be called the design gap: design cannot handle the complexity that manufacturing can afford. When we were doing 28nm and talking about 16nm, we estimated it would take one month of CPU time to run just one iteration from RTL to GDS.
Three more concrete areas that are opportunities for EDA:
Chi-Ping concluded that the future is bright (do we have to wear shades?) but full of challenges. We are still a factor of a million below what the brain can do, and 10,000 times less power efficient.
To be continued tomorrow...
Next: Future of EDA: The Q & A
Previous: The Future of EDA: The View from Academia