What does the term computational software mean? The shortest explanation is that computational software is computer science plus math. That is what is at the core of Cadence technology, and what the engineering tools industry has been creating since the late 60s—for more than 50 years. A lot of really smart people have made many contributions to the industry, enabling a $500B semiconductor market that is increasingly diversified and projected to reach $1T in just a few more years.
In EDA, examples of the original phase of computational software (1.0) are characterized by point tools. They are inside-out representations of the physics equations and are, for the most part, single-threaded implementations. They enabled the progress of Moore’s Law, primarily improving designer productivity to keep up with the vast number of transistors that today’s process technologies can fabricate on a chip.
Moore’s Law was driven by integration, by the hardware becoming more and more parallel, especially in the last 20 years. The next phase of computational software (2.0) is where the software also has to become more and more parallel. Cadence has continued to invest in parallelizing algorithms, in which the problems can be solved on a large number of CPUs, whether they are in one machine or multiple machines. This is true in EDA and has been true for all supercomputing applications, such as weather simulation.
The hunger for more capabilities and higher performance is driving the convergence of electrical and mechanical design, and of hardware and software, and these convergences are creating a new class of challenges that designers must overcome. These complex challenges cross traditional roles and require exploration. The emergence of simulation plus optimization, or simulation plus AI, is a very important development to address those challenges with automation. In this way, AI is now a crucial technology for enabling design—it is no longer an option for a narrow set of theoretical problems.
There is a potential issue with AI, in that everything new in software seems to be called AI. More precisely, AI here means a pattern-based algorithm, a data-driven approach. What is meant by data-driven? The simplest way to look at it is the simple function Y = f(x). Knowing f() in our business comes from the basic principles of physics or chemistry or mathematics. If one knows f() as a representation of a classical algorithm, then it can be simulated.
If one doesn't know f() and one only knows input and output, x and Y, then it is possible to interpret that in some way. And this can be a pattern-based or a data-driven algorithm.
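To make the distinction concrete, here is a minimal sketch (hypothetical, using NumPy; `f_known` and `f_learned` are illustrative names, not any real tool): when f() is known it can simply be evaluated, and when only (x, Y) samples are available, a pattern-based approximation can be fitted from the data.

```python
import numpy as np

# Case 1: f() is known from first principles -- simulate it directly.
def f_known(x):
    return 2.0 * x + 1.0  # stand-in for a physics-derived model

# Case 2: f() is unknown -- we only observe input/output pairs (x, Y),
# so we fit a data-driven approximation from those samples.
rng = np.random.default_rng(0)
x_samples = rng.uniform(0.0, 10.0, size=200)
y_samples = f_known(x_samples) + rng.normal(0.0, 0.1, size=200)  # noisy observations

coeffs = np.polyfit(x_samples, y_samples, deg=1)  # least-squares line fit
f_learned = np.poly1d(coeffs)

print(coeffs)  # fitted [slope, intercept], close to the hidden [2.0, 1.0]
```

The data-driven model never sees the formula inside `f_known`; it recovers an approximation of it purely from the (x, Y) pairs, which is the essence of the pattern-based approach described above.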
In reality, the combination of both classical and pattern-based approaches is critical. Without pattern-based methods, one has to know f(). If one doesn't know f(), then there are no other options. But there are a lot of times when f() isn't completely known, and it is possible to merge what is known with a pattern-based or data-driven approach. This combination of a more physics-based approach with a pattern- or data-based approach can provide true value. These innovations in EDA computational software have become our DNA and are applicable to other domains.
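One common way to merge the two, sketched here for a hypothetical one-dimensional system (the function names are illustrative, not any Cadence API): keep the physics-based model for the part of f() that is understood, and fit a data-driven model only to the residual that the physics misses.

```python
import numpy as np

def true_system(x):
    # The full behavior, which in practice cannot be written down exactly.
    return np.sin(x) + 0.3 * x

def physics_model(x):
    # The part of f() we *do* know from first principles (incomplete).
    return 0.3 * x

# Observed runs: inputs and measured outputs.
x_obs = np.linspace(0.0, 6.0, 50)
y_obs = true_system(x_obs)

# Data-driven piece: fit only the residual the physics model misses.
residual = y_obs - physics_model(x_obs)
correction = np.poly1d(np.polyfit(x_obs, residual, deg=7))

def hybrid_model(x):
    # Physics where we know it, learned pattern where we don't.
    return physics_model(x) + correction(x)
```

The hybrid model tracks the true system far more closely than the physics model alone, while needing much less data than a purely pattern-based fit of the whole behavior would.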
What does this mean in terms of chip and system design? There are certain classical things that are well understood. There is a natural process of applying AI to a variety of problems and seeing what works and what doesn't. Where a classical formulation has worked over the last 20 years, it has been adopted. But that approach is no longer working well in EDA, because tool design has historically focused on a single run: the user provides some inputs to perform a task, some simulation or optimization, which produces an output. There has been no mathematical way to transfer knowledge from one run to the next, yet designers never do only one run, no matter how good the results. On a chip or block design, they run a block, analyze the output using intuition, adjust, and run it again.
EDA has not provided a mathematical framework or an algorithmic approach to optimize across multiple runs, whether in logic simulation, where you're running a regression, or in physical design. That's where the data-driven approach is useful, because you need multiple runs. A physics-based formulation normally provides no mathematical machinery for carrying knowledge from one run to the next. Unlike classic algorithms that are tied to specific physics, the data-driven algorithms of AI can be applied to a variety of applications outside of pure EDA.
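A minimal sketch of what such a multi-run framework could look like (purely illustrative, not an actual EDA flow; `expensive_run` stands in for a full tool run): record the parameters and result of every run, fit a surrogate model to the accumulated history, and let the surrogate propose the next run, so each run benefits from all the runs before it.

```python
import numpy as np

def expensive_run(knob):
    # Stand-in for a full tool run (e.g., one pass of an optimization
    # with a single tuning parameter). Lower "cost" is better.
    return (knob - 3.2) ** 2 + 1.0

history_x, history_y = [], []

# Seed with a few exploratory runs, as a designer might.
for knob in (0.0, 2.0, 5.0):
    history_x.append(knob)
    history_y.append(expensive_run(knob))

# Each new run reuses everything learned so far: fit a quadratic
# surrogate to the history and jump to its predicted minimum.
for _ in range(3):
    a, b, c = np.polyfit(history_x, history_y, deg=2)
    next_knob = -b / (2.0 * a)  # vertex of the fitted parabola
    history_x.append(next_knob)
    history_y.append(expensive_run(next_knob))

print(min(history_y))  # best cost found across all runs
```

The key contrast with the single-run model described above is the history: instead of each run starting from scratch and the transfer happening only through the designer's intuition, the surrogate gives the knowledge carried between runs an explicit mathematical form.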
Looking ahead to the next phase, computational software (3.0) can be applied to a lot of other products, including electronic systems, mechanical systems, life sciences, and more beyond those. It's a very exciting field to be in. The last 10 years have been big in terms of the more procedural software that everybody is familiar with, especially social media and other applications that we see starting here in Silicon Valley. Over the next 10 to 20 years, even those kinds of software will become more and more computational in nature. The industry is witnessing an incredible burst of technology innovation enabled by computational software—everything from satellite internet, self-driving cars, home automation, and gesture recognition to the metaverse. It's an exciting time!