At the recent EDPS, Cadence's Patrick Groeneveld presented on a course that he and other EDA luminaries taught at Stanford last year. I say "Cadence's Patrick" since he currently works here, but he previously worked for me at Compass Design Automation, was then CTO of Magma, and so ended up at Synopsys for a time when Magma was acquired.
The other two "professors" on the course were Raúl Camposano (ex CTO of Synopsys) and Antun Domic (currently CTO of Synopsys). So that's a lot of years of EDA expertise teaching the course.
The motivation for the course was that EDA is not taught at universities as much as it used to be, yet in the machine learning era custom hardware is more important than ever. In fact, almost any computation today can only be sped up by custom hardware, since both the process people (end of Moore's Law) and the computer architecture people are out of ammunition for improving general-purpose processor performance.
So EE292A was born.
The idea was to expose students to the complexity of the EDA flow with its many abstraction levels and complex interactions. There was a hands-on lab to synthesize convolutional hardware. This was done using FPGAs for practical reasons.
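The post doesn't describe the exact lab design, but as a sketch of the kind of computation the students' FPGA hardware would implement, here is a generic 2D convolution in Python (a software reference model; the names and the example data are mine, not from the course):

```python
# Generic 2D convolution, the core operation of a CNN layer.
# This is an illustrative software reference model of the kind of
# computation convolutional hardware accelerates; the actual lab
# design is not described in the post.

def conv2d(image, kernel):
    """Valid-mode 2D convolution of a 2D image with a 2D kernel."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            # Multiply-accumulate over the kernel window: this MAC
            # loop is what gets unrolled into parallel multipliers
            # and adders when synthesized into hardware.
            acc = 0
            for ky in range(kh):
                for kx in range(kw):
                    acc += image[y + ky][x + kx] * kernel[ky][kx]
            row.append(acc)
        out.append(row)
    return out

# 4x4 image, 3x3 identity-diagonal kernel -> 2x2 output
image = [[1, 2, 3, 4],
         [5, 6, 7, 8],
         [9, 10, 11, 12],
         [13, 14, 15, 16]]
kernel = [[1, 0, 0],
          [0, 1, 0],
          [0, 0, 1]]
print(conv2d(image, kernel))  # [[18, 21], [30, 33]]
```

The inner multiply-accumulate loop is the part that maps naturally onto an FPGA's DSP blocks, which is why CNN inference is such a good fit for custom hardware.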
They assumed basic circuit design and basic programming skills; this was not an introductory course. It was structured as 18 classes of 80 minutes each, twice a week over 9 weeks in the spring (about 55 slides per session), plus homework and lab assignments. They capped the course at 38 students and, in a good-news/bad-news result, the course was over-subscribed. At least that many people are interested in EDA! Stanford supplied two TAs (who were apparently "great").
The course was popular, with great students (they all passed), and it was an uplifting experience for everyone involved. They plan two changes for next year: using Xilinx FPGAs on AWS instances as the programming fabric, and letting students experience running "real" EDA tools, which the students asked for. With the pedigree of the instructors, I'm sure the latter can be arranged.
The first decision was depth versus breadth: cover a few things deeply, or cover everything. They went for breadth, everything from system level to layout, on the basis that this is an era of the resurgence of what Carver Mead called "the tall thin engineer" who can do everything. Because of its topicality, they focused on hardware design for machine learning.
One bit of putting things in perspective for students was a bit like my table about how long various computer operations would take if the clock cycle were a second (see my post Numbers Everyone Should Know). If wires on a 10nm SoC were as wide as roads, then a chip is:
The USA is about:
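To see where this kind of analogy comes from, here is some quick back-of-the-envelope scaling (my arithmetic, not the course's actual figures): blowing a 10nm wire up to the width of a roughly 10m road scales everything by a factor of a billion, so even a 1cm die becomes continental in size:

```python
# Back-of-the-envelope scaling for the wires-as-roads analogy.
# The 10m road width and 1cm die edge are my illustrative
# assumptions, not numbers from the course slides.

wire_width_m = 10e-9      # a 10nm-wide wire, in meters
road_width_m = 10.0       # assume a road is about 10m wide
scale = road_width_m / wire_width_m
print(f"scale factor: {scale:.0e}")          # 1e+09

chip_edge_m = 0.01        # a 1cm-square die
scaled_chip_km = chip_edge_m * scale / 1000  # meters -> km
print(f"scaled chip edge: {scaled_chip_km:,.0f} km")  # 10,000 km
```

A 10,000km chip edge is wider than the continental USA, which is the point of the comparison.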
Another comparison was one that I have used before if you ever saw my EDA 101 presentation anywhere. An SoC has 4B transistors, costs $50M to develop, in 1 year, with 100 people (your mileage may vary). A new plane, say the Airbus A380, has 4M parts (1,000 times fewer), cost $28B to develop (560 times as much), over 10 years (10 times longer), with a development team of 10,000 people (100 times as large). Something I pointed out in EDA101 is that, unlike the real plane, the SoC is pretty much committed to ship unchanged in a product, equivalent to taking the plane for a couple of test flights and then taking passengers on board without making any modifications.
As part of the course pointing out that the death of Moore's Law might be somewhat exaggerated, they used the Apple Ax family of application processors as an example. The above diagram shows the progression over the last decade.
The student design started above what we call "system level" in the IC world, up at TensorFlow (any discussion about system level always reminds me of Pierre Paulin's comment that "system level is one level above whatever is the highest level you are working at").
Design is largely about levels of abstraction, with "synthesis" tools that move the design from one level to the next, and analysis tools that move some of the key data back up to earlier levels.
The physical hierarchy has 3 levels: standard cells that hide a lot of the physics and lithography, "blocks" that consist of millions of standard cells and their wiring, and then the chip level that consists of a few dozen top-level blocks. On the other hand, the logical hierarchy has many levels, under control of the designer, analogous to procedures in software.
If there is one word that comes up all the time in design, and EDA, it is hierarchy. All EDA algorithms rely on hierarchy and different levels of abstraction to manage complexity. There are simply too many low-level objects (transistors, say) to handle them all at a high level (RTL simulation is not SPICE on steroids; it operates at a completely different level).
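As a toy illustration of why hierarchy tames complexity (my example, not from the course), consider storing one definition of a standard cell and instantiating it a million times, rather than flattening the design into individual transistors:

```python
# Toy illustration: a hierarchical design stores one definition of a
# block and references it many times, instead of flattening everything
# down to individual transistors. Class names are mine, for illustration.

class Block:
    """A reusable definition, e.g. a standard cell or macro."""
    def __init__(self, name, num_transistors):
        self.name = name
        self.num_transistors = num_transistors

class Instance:
    """A use of a Block inside a parent; the definition is shared."""
    def __init__(self, block, count):
        self.block = block
        self.count = count

# One NAND-gate definition, instantiated a million times.
nand = Block("NAND2", num_transistors=4)
top = [Instance(nand, count=1_000_000)]

# Flat view: every transistor is a separate object to process.
flat_objects = sum(inst.block.num_transistors * inst.count for inst in top)
# Hierarchical view: one instance record plus one shared definition.
hier_objects = len(top) + 1

print(flat_objects)  # 4000000 transistors if flattened
print(hier_objects)  # 2 objects in the hierarchical view
```

Real tools are vastly more sophisticated, of course, but the principle is the same: a tool that can reason about the definition once, instead of once per use, stays tractable as designs grow.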
There is a lot of interest in EDA at the undergraduate level, despite its relative unsexiness. Machine learning, in particular, has made hardware and architecture relevant again. Two of the students on the 2018 course have signed up to be TAs for the 2019 version. A quick glance at the course directory for Stanford shows you can sign up for:
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.