
Author

Paul McLellan

Community Member


The Chiplet Summit

30 Jan 2023 • 8 minute read

Last week it was the Chiplet Design Summit in San Jose. Actually, the organizers called it the First Annual Chiplet Design Summit. Since everything was oversubscribed — not enough chairs in the keynote ballroom, not enough box lunches — this doesn't seem all that arrogant. And in fact, the date for next year's summit has already been announced. It will be January 23rd to 25th, 2024, still at the DoubleTree (although I wouldn't be surprised if it gets moved to a bigger venue).

I will cover some of the presentations in more detail in future blog posts, but today I will focus on a few themes that ran through many of the presentations. The conference opened with a day of tutorials on Tuesday, followed by keynotes on Wednesday morning; the rest of the conference ran as several parallel technical tracks.

One obvious question is whether it was worth attending and, in particular, whether you should plan to attend next January. I thought it was excellent, especially the tutorial day, so "yes" would be my answer. If you have anything to do with system-on-chip (SoC) integration, where everything is on a single die, then you will get involved in chiplets in the future. That is not to say that there will be no more monolithic integration, but it is clear that for the most advanced nodes (3nm, 2nm, etc.), only the part of the design that will benefit from the most advanced process will be designed in that process, and everything else will be put onto chiplets in older nodes, often called N-1 or N-2 nodes (so for 3nm, N-1 is 5nm, and N-2 is 7nm).

A decade or two ago, every presentation about EDA and design started off with a generic Moore's Law graph (and often a generic "design gap" slide). Moore's Law may be dead or dying, but Gordon Moore is still the man to quote, even at a chiplet conference, because of something else he wrote in the same Electronics Magazine article where he used four data points to predict that the number of transistors on a chip would double every couple of years. He said that trend would last for about ten years; in fact, it lasted over 50. His other observation in that article was:

It may prove to be more economical to build large systems out of smaller functions, which are separately packaged and interconnected.


Well, after 50 years, that day has arrived. Yole Group forecasts the chiplet-based semiconductor market to be over $205B by 2032. Samsung Foundry estimated that over 50% of advanced node designs are chiplet-based.

The Story So Far

The biggest chips are now so large that they either exceed the maximum reticle size for manufacture, or they are so big that they simply will not yield well. One example pointed out during the summit was that four 10x10mm die yield 30% more good die than a single 20x20mm die. Pioneers in using chiplets to address this have often presented at HOT CHIPS in recent years, and Cisco's "Suds" Sudhakar revealed that Cisco has been working on chiplets for over a decade; it just didn't talk about it in public. The most public early interposer-based design was Xilinx's, which split a large FPGA into four smaller die on a silicon interposer. I think this was a proof of concept as much as anything. It happened before Breakfast Bytes started, but you can still read my 2013 post on the topic at Semiwiki.
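The arithmetic behind that yield claim can be checked with a first-order Poisson defect model, Y = exp(-D·A). The defect density below is back-solved so the model reproduces the ~30% figure quoted at the summit; it is an illustrative assumption, not a number from any presentation:

```python
import math

# First-order Poisson yield model: Y = exp(-D * A), where D is defect
# density (defects/cm^2) and A is die area (cm^2). D is back-solved
# here to match the ~30% advantage quoted at the summit (assumption).
D = math.log(1.3) / 3  # ~0.087 defects/cm^2

def die_yield(area_cm2: float, defect_density: float = D) -> float:
    """Fraction of die that are defect-free under the Poisson model."""
    return math.exp(-defect_density * area_cm2)

y_big = die_yield(2.0 * 2.0)    # one 20mm x 20mm die = 4 cm^2
y_small = die_yield(1.0 * 1.0)  # one 10mm x 10mm die = 1 cm^2

# Four small die cover the same silicon area as one big die, so the
# ratio of per-die yields is also the ratio of good silicon per wafer.
print(f"20x20mm die yield: {y_big:.1%}")
print(f"10x10mm die yield: {y_small:.1%}")
print(f"good-die advantage of 4 small die: {y_small / y_big - 1:.0%}")
```

The key point is that yield falls off exponentially with area, so splitting one large die into four smaller ones recovers good silicon even before considering the reticle limit.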


The very latest news on this topic was the keynote by Lisa Su, AMD's CEO, at CES in January. She announced (and showed) the Instinct MI300. As I said in the update post linked below:

Make no mistake, the Instinct MI300 is a game-changing design - the data center APU blends a total of 13 chiplets, many of them 3D-stacked, to create a chip with twenty-four Zen 4 CPU cores fused with a CDNA 3 graphics engine and 8 stacks of HBM3. Overall the chip weighs in with 146 billion transistors, making it the largest chip AMD has pressed into production.

See my posts:

  • Xilinx and TSMC: Volume Production of 3D Parts (Semiwiki)
  • HOT CHIPS: Chipletifying Designs
  • HOT CHIPS Day 1: Hot Chiplets - Breakfast Bytes
  • HOT CHIPS Day 2: AI...and More Hot Chiplets
  • HOT CHIPS: Two Big Beasts
  • Linley: Chiplets for Infrastructure Silicon
  • Linley: Enabling Heterogeneous Integration of Chiplets Through Advanced Packaging with AMD/Xilinx
  • 3D Heterogeneous Integration (3DHI)
  • CES 2023: AMD, Stellantis, Cadence, and More and January 2023 Update: Automotive Security, Chiplets...and Roman Emperors!

Companies like AMD and Intel have done fairly complex multi-chiplet designs. NVIDIA and Apple have both created designs where two big die are joined together using an interconnect bridge to make an even bigger design, Grace-Hopper in the case of NVIDIA, and Apple's M1 Ultra, which consists of two M1 Maxes. The thing that is common to all these chiplet-based designs is that they are done within a single company. The chiplets are designed to work together to build up a system, in many cases using proprietary interfaces. There is no technical sense, let alone commercial sense, in which, for example, someone other than AMD could use one of its chiplets.


One of the themes that ran through the summit is that everyone wants to be able to go to the chiplet store with their supermarket cart and pick whatever chiplets they want off the shelf, and then be able to put together a system-in-package (SiP) that relies on them all working together. On the other hand, anyone who put forward any sort of timetable for this said "five to ten years." The one big exception to this is HBM, high-bandwidth memory. Nobody builds their own HBM, but there is a market (the various generations of HBM have been standardized by JEDEC). One of the panel sessions on the last afternoon of the summit was How to Make Chiplets a Viable Market. It was standing room only, which never happens for a panel session. I will cover it in a separate post next week.

The intermediate case is that someone with a key chiplet, such as a processor, creates an ecosystem around it. In the panel session I just mentioned, Ventana said that it was doing just this since its datacenter processor is available as a chiplet. A processor cannot stand on its own (it cannot boot an operating system, for a start), so it has to be surrounded by other chiplets to create a full system.

So the situation today is that single-company multi-chiplet designs are shipping in volume, tentative steps are being made with some chiplets to build ecosystems of partners around them, and the dream of a chiplet store is sufficiently far off as to remain a dream for the time being.

Why Chiplets?

[Diagram: Moore's Law scaling for logic, memory, and analog]

The diagram above, from Denis Dutoit of CEA-List in Grenoble, shows one of the big motivations for using chiplets at the most advanced nodes. The straight diagonal line shows Moore's Law on the assumption it applies equally to logic, memory, and analog. The line that flattens out shows how scaling really works. Analog doesn't scale much, if at all, and memory scales much slower than logic. Indeed, it is unclear if 3nm memory is going to be any smaller than 5nm memory, which is the ultimate lack of scaling.

When scaling operates like that, moving analog and large memories into the latest process node gains little in area and costs a lot more. The obvious response is, "Well, don't do that," and the way to not do that is to put the memory and analog on separate chiplets manufactured in less advanced processes (so potentially much cheaper). For example, AMD's famous Zen 2 SiP has a varying number of processor chiplets (in 7nm, I believe) and an I/O die built in 12nm.
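A back-of-the-envelope sketch of why that partitioning pays off. All the wafer prices, areas, and the packaging overhead below are made-up illustrative assumptions, not figures from the presentations:

```python
# Compare a monolithic advanced-node SoC against a split design:
# logic stays in the advanced node, analog/IO moves to a mature node.
# All numbers are illustrative assumptions for the sketch.
ADV_COST_PER_MM2 = 0.30   # $/mm^2 in the advanced node (assumed)
OLD_COST_PER_MM2 = 0.08   # $/mm^2 in the mature node (assumed)

logic_mm2 = 60      # logic scales well, so it earns its advanced node
analog_io_mm2 = 40  # analog/IO barely shrinks: same area either way

monolithic = (logic_mm2 + analog_io_mm2) * ADV_COST_PER_MM2

chiplet = logic_mm2 * ADV_COST_PER_MM2 + analog_io_mm2 * OLD_COST_PER_MM2
chiplet += 3.0      # assumed packaging/interconnect overhead per part

print(f"monolithic: ${monolithic:.2f}   chiplet: ${chiplet:.2f}")
```

Because the analog/IO area is the same in either node, every square millimeter moved off the advanced die is bought at the mature-node price instead, and the split wins as long as that saving exceeds the packaging overhead.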

Another reason for putting I/O onto a separate chiplet when designing in a very advanced node is to avoid having test chips for the SerDes (Ethernet, PCIe, etc.) being on the critical path. If you put a SerDes on the most advanced node, you have to build a test chip and characterize the silicon before the real chip can tape out. It is much easier to use a SerDes that already exists and has seen silicon in an older node, or even, like AMD, in a completely different process technology.

Chiplet Connectivity

There are a number of interconnect standards (as well as some proprietary ones). Most chiplet-based designs currently in progress seem to use the Open Compute Project's (OCP's) Bunch of Wires (BoW).

The other standard, which has a lot of heavyweights behind it, is UCIe. For more details, see my post from when it was announced, Universal Chiplet Interconnect Express (UCIe), and then from when we announced our product, UCIe PHY and Controller—To Die For. There is a product page for the PHY and controller, which contains a summary of capabilities:

The UCIe physical layer includes the link initialization, training, power management states, lane mapping, lane reversal, and scrambling. The UCIe controller includes the die-to-die adapter layer and the protocol layer. The adapter layer ensures reliable transfer through link state management and parameter negotiation of the protocol and flit formats. The UCIe architecture supports multiple standard protocols such as PCIe, CXL, and streaming raw mode.

Some aspects of the UCIe standard are still in development, but I would say the received wisdom at the summit was that "UCIe will win once it is finished" given all the companies that are behind it.

Chiplet Marketplace

I will cover this in a separate post next week.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.



© 2023 Cadence Design Systems, Inc. All Rights Reserved.
