Paul McLellan
Computational Fluid Dynamics, Software, and Chip Design

18 Jan 2022 • 5 minute read

There are a lot of commonalities between computational fluid dynamics (CFD), software, and chip design. All of them create a specification of the object of interest and eventually use that representation as part of the manufacturing process. In between, some sort of verification is done to check that the design is going to behave as expected. But there are huge differences, too.

My background is as a computer scientist, a fancy name for a software person. I've worked in the semiconductor and EDA industries all my career, so I like to say I have silicon in my veins. Despite taking some fluid dynamics courses as an undergraduate, I'm not an expert at all, and I have no real-world experience of doing CFD design. But since Cadence acquired NUMECA and Pointwise last year, I've learned a lot.

By the way, if you live in the Bay Area, when you drive past Moffett Field you can see NASA Ames Research Center. That big building you see in the distance is a wind tunnel. It is so large it can fit a full-size Boeing 737 inside. When they have an open day, you can go and stand inside and really appreciate both the size of the test chamber and the size of the fans. It is apparently the world's largest wind tunnel.

Of the three domains, let's start with software. I've been programming since the late 1960s, and the basic approach has not changed much since then. You write the "specification" in a programming language such as Fortran, C++, or Python. In the past, "manufacturing" might have involved magnetic tapes, CDs, or file transfers. But what goes on the CD is basically identical to what the programmer uses to "verify" (we call it debug for software), so there is no separate process. It is so cheap to compile and run a program that a separate verification process would cost more than it saves.

Modern chips are very different. The "specification" is typically written in SystemVerilog, but manufacturing a chip takes several months and costs tens of millions of dollars. So the verification stage is very extensive (and expensive), since it is simply not feasible in time or money to make lots of trial chips to check that the design is working. This has changed a lot over the years. When I started at VLSI Technology in the early 1980s, we had a problem with a big (for its time) chip that had a power-ground short, but this was before circuit extractors existed (I would end up writing one of the first ones a year or so later). The only solution was to plot out the layout, tape it all together, lay it out on the floor, and color the power nets red and the ground nets green (with as many people doing this as could fit around the plot) until the problem was located. Today, a lot more software is used (and hardware, in the form of emulators and prototyping systems) since chips are so much more complex. Sometimes IP providers build test chips, but generally, an SoC design is expected to work the first time, and there is no "compile it and run it" attitude like with software. Tapeout is expected to be final.

Interestingly, FPGA design used to be more like software: just compile it and see if it works. But as compile times have grown to 24 hours or more, doing a lot of chip-style verification has gradually become standard.

CFD used to be done in the physical world, hopefully not involving coloring a huge plot in red and green. Let's consider the body of a car. In the past, before good CFD software, modelmakers would shape the car out of clay, and aerodynamic verification meant putting this model into a wind tunnel. Some of this could be done with scale models, but building full-size models of the car and putting them in a large wind tunnel was very expensive. Today, the shape of the car is modeled in some sort of 3D modeling software, and the verification is done with CFD analysis software. Eventually, a model may be built and put in a wind tunnel, but since this is only done right at the very end, it is a lot cheaper than the wind-tunnel-only approach. Of course, you can't just transfer a car to manufacturing by pressing a button and having a factory spin up to volume production, like you can with a chip or software. In my EDA 101 presentation, I like to point out that chip design is more like designing a commercial aircraft while taking reservations for its maiden flight, then loading the passengers onto the plane and flying it.

I said software has not changed much, but increasingly, tools beyond just "running the program" have come into being. As Edsger Dijkstra famously said:

Program testing can be used to show the presence of bugs, but never to show their absence!

More powerful tools can be used to track down bugs, such as static code analyzers like Klocwork, tools like Valgrind for checking that memory is used correctly, and so on. As with the changes in CFD and IC design, this is a move towards replacing expensive manual approaches with large amounts of computer power. Compute, of course, has become increasingly cheap over the years (thanks, Dr. Moore), so the tradeoff between wind tunnels, or even coloring plots, and computational software has moved.
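As a concrete illustration (my own minimal sketch, not code from any of these tools), here is the kind of defect a memory checker like Valgrind flags: heap memory that is allocated and used but never freed. Running the compiled program under Valgrind reports the 64 bytes below as leaked.

```c
/* leak.c - a deliberately leaky program, as a minimal illustration.
   Build with debug info and run under Valgrind:
     gcc -g leak.c -o leak
     valgrind ./leak
   Valgrind's leak summary reports the 64 bytes as "definitely lost". */
#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(64);   /* allocate 64 bytes on the heap */
    if (buf == NULL)
        return 1;
    strcpy(buf, "hello");     /* the buffer is used... */
    return 0;                 /* ...but never freed, so Valgrind flags a leak */
}
```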

Underlying all of these computational approaches are matrix and graph operations on very large datasets. So while, at first sight, CFD has nothing to do with IC design, that's only true until you look under the hood at what is going on.

The other big trend of the last decade has been the move from using single, really big servers to making use of lots of cores and servers, either in dedicated datacenters or, increasingly, in the cloud. This has required big changes to these matrix and graph algorithms, since they don't become parallel automatically just by running them through some clever compiler.
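To make that concrete, here is a minimal sketch (my own, assuming OpenMP, not code from any Cadence product) of the kind of kernel all of these fields lean on: a matrix-vector multiply. The point is that the programmer must explicitly mark how the work is partitioned across cores; the compiler does not discover the parallelism on its own.

```c
/* mv.c - row-parallel matrix-vector multiply, y = A * x.
   A minimal sketch using OpenMP; build with: gcc -fopenmp mv.c -o mv */
#include <stdio.h>

#define N 1024

static double A[N][N], x[N], y[N];

int main(void) {
    /* Set up a simple test case: A = 2 * identity, x = all ones. */
    for (int i = 0; i < N; i++) {
        x[i] = 1.0;
        for (int j = 0; j < N; j++)
            A[i][j] = (i == j) ? 2.0 : 0.0;
    }

    /* The pragma is the explicit part: each thread takes a disjoint
       block of rows. Without it, the loop runs on a single core. */
    #pragma omp parallel for
    for (int i = 0; i < N; i++) {
        double sum = 0.0;
        for (int j = 0; j < N; j++)
            sum += A[i][j] * x[j];
        y[i] = sum;
    }

    printf("y[0] = %.1f\n", y[0]);  /* expect 2.0 */
    return 0;
}
```

Real CFD solvers and circuit simulators work on sparse matrices spread across many machines, where the partitioning and the communication between machines have to be designed just as explicitly.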

The result is that, increasingly, software analysis, CFD, and many aspects of IC design all depend on similar, highly parallel matrix and graph analysis.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
