
Paul McLellan
Tags: SI/PI, EMI, DesignCon, deep learning, power integrity, machine learning, signal integrity, DNN, CNN, neural network

DesignCon: PCB and Packaging Take Center Stage

2 Feb 2018 • 8 minute read

You wouldn't really know it from the name, but DesignCon is all about the design and analysis of printed circuit boards and packages. It's an exaggeration to say that IC design is never mentioned, but this is one of the premier conferences for the designers who worry about everything except the chips. I hope I don't offend anyone by saying that until five or ten years ago, this was a relatively sleepy backwater, with less advanced technology than the contemporary IC design tools.

But that's all changed.

Moore's Law scaling and increasing numbers of I/Os used to be the solution to every problem. But two things happened. Moore's Law stopped being the only game in town, with the More than Moore technologies such as 3D packaging becoming more than just research projects. And very high-speed serial interfaces, rather than ever more pins, took over as the way to get more data in and out of chips, leading to the need to handle signal and power integrity properly.

Yesterday, I covered the opening panel session on SI, PI, and EMI. This was mostly about how the three separate domains are coming together, which is both a technical challenge and an education challenge. Engineers have spent the last twenty years in one of these disciplines, and now they need to learn about the other ones. Just glancing through the sessions and the bootcamps on the DesignCon agenda makes it clear that the conference is a major resource for doing this.

Machine Learning

Like everywhere else, machine learning (ML) is a big area of interest. On the first day, there was a session on using machine learning in electronic design, with speakers from both industry and academia giving different perspectives. The panelists were:

  • Paul Franzon (North Carolina State University)
  • David White (Cadence)
  • Madhavan Swaminathan (Georgia Institute of Technology)
  • Sashi Obilisetty (Synopsys)

The panel was moderated by Chris Cheung of Hewlett-Packard Enterprise. This is actually the second year that DesignCon has run a machine learning panel. It is done under the umbrella of CAEML, the Center for Advanced Electronics through Machine Learning, which has a lot of industry partners, including Cadence. But they are not just EDA partners: GF, Samsung, and Xilinx are all members, as are Qualcomm, Nvidia, Cisco, and more. It is a hot area.

In fact, whenever I hear discussions on R&D for design tools and methodologies, the focus is on two things. First, how to replace human-driven iteration with machine learning, so that when, for example, static timing signoff fails, adjustments are made automatically to the design, the constraints, the tool parameters, or whatever, and the run is repeated. The holy grail is to be able to do implementation from Verilog to signed-off layout by throwing computer power, rather than black-belt experts, at the problem.
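
To make the idea concrete, here is a deliberately toy sketch of such a loop in Python. run_tool and SuggestionModel are hypothetical stand-ins I have made up for illustration; they are not real EDA or Cadence APIs.

import random

def run_tool(params):
    """Stand-in for an implementation plus static-timing-signoff run; returns worst slack."""
    return params["effort"] - 0.8 + random.uniform(-0.05, 0.05)

class SuggestionModel:
    """Stand-in for a learned model that proposes the next parameter tweak."""
    def suggest(self, slack, params):
        # Nudge the knob in proportion to how badly timing was missed.
        return {**params, "effort": params["effort"] + max(-slack, 0.05)}

params, model = {"effort": 0.1}, SuggestionModel()
for i in range(20):
    slack = run_tool(params)                # e.g. place-and-route followed by STA signoff
    if slack >= 0:                          # signoff passes: done, with no human in the loop
        print(f"closed timing in {i + 1} iterations with params {params}")
        break
    params = model.suggest(slack, params)   # the model replaces the expert's judgment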

The other big area of interest is using big data techniques so that a run of a tool doesn't start completely from scratch each time, as is generally the case today. On a typical run of a tool, a lot goes right even when the result is not perfect. When tools are taking a day, or even days, to run, throwing away everything each time is very expensive.

Chris started by talking about deep learning in the context of HPE disk drives. They monitor over 1M disk drives, which report back voltage, fan speed, temperature, and current through a standard called SMART (Self-Monitoring, Analysis and Reporting Technology). They want to predict when drives are going to fail, in time to replace them before they do. The three big challenges where they are using ML are accuracy (identifying all the drives that will fail), limited false positives (not replacing drives that turn out to be good), and failure lead time (it's not much good predicting a failure five minutes before it happens; a week or two is required).
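
To make the framing concrete, here is a small synthetic sketch in Python of training a classifier to flag drives expected to fail within a lead-time window, and scoring it against two of the goals Chris listed. The features, window, and model choice are my assumptions for illustration, not HPE's actual pipeline.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Pretend telemetry: one row per drive, with a few SMART-style features
# (say temperature, reallocated sectors, fan speed, current).
n = 20000
X = rng.normal(size=(n, 4))
# Label = 1 if the drive fails within the next two weeks (the lead-time window).
y = (X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.5, size=n) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("recall (failing drives caught):", recall_score(y_test, pred))        # accuracy goal
print("precision (avoiding false alarms):", precision_score(y_test, pred))  # false-positive goal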

Paul talked in a little more detail, specifically about EDA. ML is mostly about learning from data, especially massive labeled data, and using that to train a neural network. He emphasized that humans do not learn this way: "you, as a child, did not see 10,000 cats before you knew what a cat was."

He divided learning into three categories: offline learning (the usual approach of training the network, then doing inference later), incremental learning (where additional training is done to correct the weights during inference), and inline learning (learning in the field, often with unlabeled data, the canonical example being a robot that builds a map of a building as it explores).
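
The first two of these are easy to contrast in a few lines of Python. Below is a toy sketch (synthetic data, not any panelist's example) using scikit-learn, where the offline model is trained once and frozen, while the incremental model keeps correcting its weights as new labeled data arrives; inline learning with unlabeled data is harder to show this compactly.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
X0 = rng.normal(size=(1000, 3))
y0 = (X0[:, 0] > 0).astype(int)                      # initial labeled training set

# Offline learning: train once on the batch, then only run inference afterwards.
offline = SGDClassifier(random_state=0).fit(X0, y0)

# Incremental learning: keep correcting the weights as new labeled data arrives.
incremental = SGDClassifier(random_state=0)
incremental.partial_fit(X0, y0, classes=[0, 1])
for _ in range(10):                                  # fresh data trickles in during deployment
    Xn = rng.normal(size=(100, 3))
    yn = (Xn[:, 0] > 0).astype(int)
    incremental.partial_fit(Xn, yn)                  # weights updated without a full retrain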

However, there is more to ML than just neural networks for identifying cats. One approach that shows a lot of promise in the EDA environment is surrogate modeling. This is where a difficult-to-evaluate model (like a complicated numerical solver) is replaced by a model that is much faster. Historically, in EDA, we have done this manually. We come up with timing models that match SPICE but don't require running SPICE, we extract resistance and capacitance without a full-blown 3D solver, and so on.

Madhavan also talked about this: the capability to use a "fast-to-evaluate learned model to replace a detailed slow model in design, which may be 100 times faster."
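
Here is a minimal sketch of the surrogate-modeling idea in Python, with a cheap stand-in function playing the role of the slow solver; the Gaussian-process choice and the toy function are my assumptions, just to show the shape of the technique.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def slow_solver(x):
    # Stand-in for a detailed, expensive analysis (a SPICE run, a 3D field solve, ...).
    return np.sin(3 * x) + 0.3 * x ** 2

# A modest number of expensive evaluations becomes the training set.
X_train = np.linspace(0.0, 2.0, 25).reshape(-1, 1)
y_train = slow_solver(X_train).ravel()

surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=0.5)).fit(X_train, y_train)

# The learned surrogate now answers design queries far faster than the solver would.
X_query = np.linspace(0.0, 2.0, 1000).reshape(-1, 1)
y_fast, y_std = surrogate.predict(X_query, return_std=True)
print("surrogate prediction at x ~ 1.0:", y_fast[500])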

Another promising approach is Bayesian optimization, where you start with a very small model and use the errors in the predictions to evaluate what should be simulated next, and so incrementally improve the model.
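
A minimal Bayesian-optimization-style loop, again an illustrative sketch rather than anything shown on the panel, looks like this: fit a small model to the few points simulated so far, use its predictive uncertainty to pick what to simulate next, and fold the result back in.

import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):
    # Stand-in for the slow analysis we want to call as few times as possible.
    return (x - 0.7) ** 2 + 0.1 * np.sin(20 * x)

X = np.array([[0.0], [1.0]])                     # start from a very small model: two samples
y = expensive_simulation(X).ravel()
candidates = np.linspace(0.0, 1.0, 200).reshape(-1, 1)

for _ in range(10):
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-6,
                                  normalize_y=True).fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    # Lower-confidence-bound acquisition: simulate where the model either predicts
    # a good result or is still very uncertain about its prediction.
    nxt = candidates[np.argmin(mean - 1.5 * std)].reshape(1, -1)
    X = np.vstack([X, nxt])
    y = np.append(y, expensive_simulation(nxt).ravel())

print("best design point found so far:", X[np.argmin(y), 0])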

In the feasibility studies using these approaches, they have got physical implementation down to three iterations, compared with the 20 iterations a human designer takes. Not iteration-free, but promising.

Cadence's David White needed some machine learning applied to his PowerPoint parameters, because somehow in moving his slides from the Cadence format to the format used for DesignCon, the text had ended up very light grey on white, so it was impossible to read. Luckily, much of the information was in the diagrams.

David talked about crossing the machine learning chasm. There are five phases in a practical machine learning solution. First, get the algorithms to work. Second, a cool prototype. Then comes the chasm. On the other side are scalable deployment, further learning and adaptation in the real world, and eventually human-like interactions and personalization. Obviously, if you can get across the chasm, you provide a lot of value. But before that, you can do prototypes (and get PhDs).

Virtuoso has used ML since 2013 for fast parasitic extraction, basically training it to recognize patterns and thus avoid using a field solver. The next step is electrically driven (analog) place and route.

One challenge is that Cadence has to deliver tools for advanced nodes, so by definition there is not much training data, maybe just a couple of test cases. But he had some promising data showing that even with two test cases, training on the first was within 97.7% on the second, and training on the second was within 91% on the first (obviously, each got very close to 100% when used on the same design it was trained on).

Rick Merritt from EETimes asked the only question there was time for, about how far things have come since last year. The panel thought that CNNs and Bayesian methods in DFM have advanced significantly, and that the field is now looking at inline learning during the design process itself. But one challenge is access to data. Real-world data has to come from companies, and it's mostly proprietary, so effort is going into how academia can work with industry to create models. I presume this means running some software at the company and only returning the NN weights, or something similar.

Anyway, ML has not taken over the EDA world completely, but every year it is advancing. I assume there will be a panel again next year, so check back at DesignCon in January 2019, or look for my blog post on the panel then. 

Other Sessions

I didn't manage to make the keynote on reliability for autonomous vehicles. Paul Kocher, who is a legend in the security world due to his discovery of differential power analysis of chips, was presenting about Spectre and Meltdown. He left Rambus a couple of years after his company, Cryptography Research, was acquired. So, at a loose end, he poked around microprocessors a bit, and discovered the Spectre vulnerability last summer. He is the lead author on the paper. I went to his presentation, which I will cover in its own post soon.

I did manage to make the keynote about the New Horizons spacecraft journey to Pluto and beyond. It was fascinating. I'd have liked a little more technical detail about the electronics, but I'll cover that in a separate post.

The other big area, to me, was the increasing importance of IBIS-AMI (Algorithmic Modeling Interface), which has allowed proprietary models to be created without needing to expose all the secrets that go into them. Cadence has an AMI builder GUI that people who are not C programmers or MATLAB experts can use to create their own models. AMI has been most heavily used for modeling the channel for high-speed SerDes. With SerDes data rates going from 16Gbps, to 25Gbps, to 56Gbps, to 112Gbps, this is an area that will become more important. Perhaps even more significantly, the technology is being adapted for DDR memory interfaces. These are not pure point-to-point links, since typically there are multiple DIMMs on a motherboard, meaning reflections and other complications. You won't be able to ignore AMI, since the next-generation DDR standards will incorporate it. I'll summarize all the stuff going on in AMI, from a panel session and a couple of other presentations I attended, in a separate post.

An unexpected treat was a presentation on "What Happens in a Patent Lawsuit?" given by John Strawn and Thomas Millikan. Thomas is a patent lawyer who used to be a semiconductor designer at TI before going to law school. John is a highly technical expert witness with several patent trials under his belt. Any patent lawsuit is basically two battles going on at the same time, one in the legal sphere and one in the technical sphere. Watch for that post in a week or two.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.