
Author

Paul McLellan

Community Member

Tags: system-in-package, Memory, 3DIC, imec, CMOS

Imec Visit Part Two

28 Jun 2022 • 6 minute read

This is the second post about my visit to imec last week. The first, My Second Visit to imec, appeared yesterday.

I had three presentations from technologists at imec:

  • Geert Van der Plas on advanced packaging
  • Sri Samavedam on future CMOS technologies
  • Arnaud Furnémont on memory and storage

Geert Van der Plas

[Image: 3D IC partitioning]

The above image shows the 3D technology requirements for multicore processor partitioning. On the left, the "wires" leading into the blue cuboids are silicon photonics fibers. In the center are the processors, partitioned into chiplets, probably with logic on memory. On the right is some variety of HBM, with DRAM chiplets stacked up on a logic die at the bottom containing all the memory interfaces (xDDRx). At imec, work is underway on the base packaging technologies, in particular making all the bumps smaller (closer together) and thinner (so that the stack of die and bumps can be thin). The base is a cheap organic interposer, what Geert calls "plastic silicon." Between the HBM memory stack and the cores is something akin to Intel's EMIB that allows very high-bandwidth linking of logic and memory (in this example). In the center, the long-term goal is to get the density and parasitics of the connectivity down enough that no special interface, such as UCIe, is required. Buffers on one die simply drive gates on the other die, in the same way as if it were just a long wire crossing a single die.
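
A rough way to see why imec cares so much about bump pitch: the number of die-to-die connections scales with the inverse square of the pitch, so shrinking from microbump to hybrid-bond pitches buys orders of magnitude in raw connectivity. The pitches and the per-connection data rate in the sketch below are purely illustrative assumptions, not imec figures.

```python
# Back-of-the-envelope: die-to-die connection density vs. bump/pad pitch.
# The pitches and the 4 Gb/s per-connection data rate are illustrative
# assumptions, not numbers from imec or Geert's presentation.

def connections_per_mm2(pitch_um: float) -> float:
    """Pads on a square grid: one connection per pitch-by-pitch cell."""
    return (1000.0 / pitch_um) ** 2

for pitch_um, label in [(40, "microbumps"), (10, "fine-pitch bumps"), (1, "hybrid bonding")]:
    density = connections_per_mm2(pitch_um)
    bandwidth_tbps_per_mm2 = density * 4e9 / 1e12   # assumed 4 Gb/s per connection
    print(f"{label:18s} {pitch_um:5.0f} um pitch: "
          f"{density:12,.0f} connections/mm^2, "
          f"~{bandwidth_tbps_per_mm2:8.1f} Tb/s per mm^2")
```

Once the density is that high, the energy and latency per signal look like an ordinary on-die wire, which is why no UCIe-style PHY would be needed.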

[Image: 3D thermal management at imec]

One of the big challenges with advanced 3D packaging is thermal: getting the heat out. The more die are packed into a given volume, the greater the thermal density. Also, one die acts as both a heatsink and a thermal barrier to the die underneath. The current state of the art, using heat-spreading plates, is to dissipate about 100W per square centimeter. Using an approach like the one in the above diagram, imec thinks this can be pushed up to 300W/cm2. The fluid used is simply purified water ("DI water"), which is used in fabs in vast quantities. There is no heatspreader; the coolant takes the heat directly from the die stack. The key words for this are "direct liquid jet impingement cooling."
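
To get a feel for what 300W/cm2 means for the coolant, here is a back-of-the-envelope energy balance using the heat capacity of water; the allowed temperature rise and the die area are my assumptions, not imec numbers.

```python
# Back-of-the-envelope: DI-water flow needed to carry away 300 W/cm^2.
# The 1 cm^2 die area and the 15 C allowed coolant temperature rise are
# assumptions for illustration only.

heat_flux_w_per_cm2 = 300.0      # target from the slide
die_area_cm2 = 1.0               # assumed hot-spot area
c_p = 4186.0                     # J/(kg*K), specific heat of water
delta_t = 15.0                   # K, assumed allowed coolant temperature rise

power_w = heat_flux_w_per_cm2 * die_area_cm2
mass_flow_kg_per_s = power_w / (c_p * delta_t)   # Q = m_dot * c_p * dT
litres_per_minute = mass_flow_kg_per_s * 60.0    # ~1 kg of water is ~1 litre

print(f"{power_w:.0f} W needs about {mass_flow_kg_per_s*1000:.1f} g/s of water, "
      f"i.e. ~{litres_per_minute:.2f} L/min per cm^2 of die")
```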

This construction assumes a backside power delivery network, so in effect, electricity flows in at the bottom of the diagram and heat flows out of the top. The power is delivered at a higher voltage than the chip needs, and voltage regulators are built into the substrate. This is more efficient than trying to deliver 100W at the chip-level voltage; instead, the voltage is stepped down by a ratio of between 4:1 and 8:1.
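
The benefit of the step-down is just Ohm's law: for the same power, quadrupling the delivery voltage cuts the current by four and the I²R distribution loss by sixteen (ignoring the regulators' own conversion losses). The voltages and the path resistance below are illustrative assumptions.

```python
# Why deliver power at a higher voltage and convert near the die?
# For a fixed distribution resistance, I^2*R loss scales with the square of
# the current. The core voltage and the 1 mOhm path are assumed values, and
# the regulators are treated as lossless for simplicity.

power_w = 100.0          # power delivered to the chip
v_core = 0.7             # assumed core supply voltage
r_dist_ohm = 0.001       # assumed resistance of the delivery path

for ratio in (1, 4, 8):                  # 1 = deliver directly at core voltage
    v_in = v_core * ratio
    current = power_w / v_in             # current drawn through the package
    loss_w = current ** 2 * r_dist_ohm   # resistive loss in the delivery path
    print(f"step-down {ratio}:1  deliver at {v_in:4.1f} V  "
          f"I = {current:6.1f} A  I^2R loss = {loss_w:5.1f} W")
```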

In the nearer term, instead of putting all of a microprocessor (core logic, IO, and cache memory) on a single die, it is more cost-effective to build a three-layer sandwich (like a club sandwich) as in the diagram above. Some parts are assembled using wafer-to-wafer bonding, and others use die-to-wafer bonding. Where it says "CMOS N-1," that means one process generation behind the one used to manufacture the processor itself. This has the advantage of being cheaper, and it also takes things like the qualification of high-speed SerDes in that process off the critical path for the current process generation and onto the next-generation design.

I have seen a presentation, I think from IBM, arguing that you should put the processor on top, closest to the heatsink, and power it using TSVs that run through the memory die. But that was in the era before backside power delivery networks were a thing, and the received wisdom at imec today is that you put the memory on top, the processor in the middle, and the IO die at the bottom of the stack.

Sri Samavedam

Sri discussed upcoming developments in CMOS over roughly the next ten years, going from 2nm, which is imminent in the next year or so, down to 5Å (A5 in imec process naming terminology). The assumption is that high-NA EUV will come into production in about three or four years' time, which is very aggressive. For more about high-NA EUV, see my post What Is High-NA EUV? Until we have high-NA EUV, we will need to use some double patterning, even with EUV's 13.5nm wavelength.
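
The reason double patterning is still needed comes straight from the Rayleigh resolution criterion, half-pitch ≈ k1·λ/NA. With a practical k1 of around 0.3 (my assumption, not a number from Sri's talk), today's 0.33 NA optics bottom out in the mid-20s of nanometers of pitch, while 0.55 NA high-NA optics get down to around 15nm pitch in a single exposure.

```python
# Rayleigh criterion for the minimum resolvable half-pitch: HP ~= k1 * lambda / NA.
# k1 = 0.3 is an assumed practical value, not a figure from the presentation.

wavelength_nm = 13.5
k1 = 0.3

for na in (0.33, 0.55):                  # current EUV optics vs. high-NA EUV optics
    half_pitch_nm = k1 * wavelength_nm / na
    print(f"NA = {na:4.2f}: single-exposure half-pitch ~ {half_pitch_nm:4.1f} nm "
          f"(pitch ~ {2 * half_pitch_nm:4.1f} nm)")
```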

[Image: imec logic scaling roadmap]

That allows something like the above logic scaling roadmap. Nanosheets (albeit under various proprietary names) are already on all the advanced foundry roadmaps, as is backside power distribution. The next scaling booster is to use at least some ruthenium (Ru), gradually increase the aspect ratio of the metals, and add air gaps to keep performance high. Over to the right, there are two changes: first, switching the interconnect from a pure metal to alloys of multiple materials, and then, for the transistors, stacking one on top of the other, known as a complementary FET or CFET.
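
To see why ruthenium comes into the picture, consider that copper needs a diffusion barrier and liner that do not scale with the wire, so at the narrowest widths the barrier eats a growing fraction of the cross-section. The geometry, barrier thickness, and effective resistivities in the sketch below are my illustrative assumptions, not imec data.

```python
# Crude comparison of a barrier-lined Cu line against a near-barrierless Ru line
# at the same drawn dimensions. All dimensions and resistivities are assumed,
# illustrative values only.

def resistance_per_um(width_nm, height_nm, barrier_nm, rho_uohm_cm):
    """Resistance per micron of line, treating the barrier/liner as
    non-conducting on both sidewalls and the bottom."""
    w = width_nm - 2 * barrier_nm          # conductor width after sidewall barriers
    h = height_nm - barrier_nm             # conductor height after bottom barrier
    rho_ohm_nm = rho_uohm_cm * 10.0        # 1 uOhm*cm = 10 Ohm*nm
    return rho_ohm_nm * 1000.0 / (w * h)   # R = rho * L / A, with L = 1 um = 1000 nm

# Assumed line: 12 nm wide, 24 nm tall (aspect ratio 2).
cu = resistance_per_um(12, 24, 2.0, 6.0)   # Cu with a ~2 nm barrier, ~6 uOhm*cm assumed
ru = resistance_per_um(12, 24, 0.3, 9.0)   # Ru with a very thin liner, ~9 uOhm*cm assumed

print(f"Cu with barrier: {cu:5.0f} ohm per um of wire")
print(f"Ru, thin liner : {ru:5.0f} ohm per um of wire")
```

At these assumed dimensions the two come out roughly even; shrink the line further and the fixed barrier takes an even bigger bite, which is when the near-barrierless metal wins outright.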

Sri discussed the two primary options for constructing CFETs, known as monolithic CFET and sequential CFET. He thinks that CFETs will first be built using the monolithic approach on the left, but that there are advantages to the sequential approach if the technical challenges can be solved. In the monolithic approach, there are a huge number of steps, since all sorts of protection is needed to preserve the bottom transistor while the top transistor is being constructed. In the sequential approach, the top transistor is built on a separate wafer, which is then flipped over. The big challenges are the gate-to-gate connection (the red line that runs through the blue in the diagram on the right) and the need for void-free wafer bonding at transistor-level scales.

Arnaud Furnémont

Any new memory approach, even a not-so-new one like phase change, has to cope with the fact that displacing anything in the established memory hierarchy requires enormous financial gains to justify the change, and the existing memory types (on-chip SRAM, DRAM, NAND, HDD, and so on) all have billions of dollars per year invested in them.

The biggest challenge in memory right now is finding a way to take DRAM into the third dimension, in the same way that 3D NAND has already done. DRAM is very (very, very) price-sensitive, so any 3D approach has to deliver cheaper memory per bit. Arnaud and I discussed the possible approaches to this. The challenge is to build a lot of bits at once vertically and then do something similar to the very high aspect ratio etch that 3D NAND uses to drop a word line through the stack.
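
The economics are blunt: stacking only lowers the cost per bit if the extra processing cost grows more slowly than the number of layers. Here is a toy break-even model; every cost number in it is made up purely for illustration.

```python
# Toy cost-per-bit model for 3D DRAM: stacking L layers multiplies the bits per
# wafer by roughly L but also adds process cost for every extra layer. All the
# numbers here are made up; only the break-even logic is the point.

base_wafer_cost = 3000.0        # $ for a planar DRAM wafer (made up)
extra_cost_per_layer = 600.0    # $ of added deposition/etch per extra layer (made up)
bits_per_planar_wafer = 1e14    # bits from one planar wafer (made up)

def relative_cost_per_bit(layers: int) -> float:
    wafer_cost = base_wafer_cost + extra_cost_per_layer * (layers - 1)
    planar_cost_per_bit = base_wafer_cost / bits_per_planar_wafer
    return (wafer_cost / (bits_per_planar_wafer * layers)) / planar_cost_per_bit

for layers in (1, 2, 4, 8, 16):
    print(f"{layers:2d} layer(s): cost per bit = {relative_cost_per_bit(layers):.2f}x planar")
```

The cost per bit only keeps falling as long as the per-layer adder stays small relative to the base wafer cost, which is why etching through many layers in one very high aspect ratio pass matters so much.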

Here are all the memory technologies being worked on at imec. If you don't know what all this soup of acronyms stands for, then Google will be your secret decoder ring.

[Image: imec memory areas of research]

imec does a lot of work on the conventional memory technologies, although obviously the big memory companies are spending a lot more. Anything marked with a star, which is everything, is being researched in Leuven. And, off the chart, specialized memories for AI are another area of active research.

[Image: cold storage technologies]

For cold storage (aka deep storage), it is necessary to use the dimension of time. At the bottom are recirculating bits, which take time to access but can be packed in densely. At the top is the ultimate science-fiction approach to storage, using the bases of DNA to hold the bits. The density is huge, but to get the bits out, the DNA needs to be sequenced (and synthesized to write new data). There is work being done on biology at imec, but when I mentioned this, Arnaud said that the biologists say this is, at most, organic chemistry and not biology.
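
For a sense of why the density is "huge": each DNA base can encode up to two bits, and a base pair has a molar mass of roughly 650 g/mol, which puts the theoretical ceiling in the hundreds of exabytes per gram. The sketch below uses those textbook approximations; real encodings carry far less once redundancy, indexing, and synthesis practicality are accounted for.

```python
# Back-of-the-envelope ceiling for DNA storage density.
# Assumes 2 bits per base pair (four possible bases) and ~650 g/mol per base
# pair; both are textbook approximations, not imec figures.

AVOGADRO = 6.022e23
grams_per_mol_bp = 650.0     # approximate molar mass of one DNA base pair
bits_per_bp = 2.0            # four bases -> up to 2 bits of information each

base_pairs_per_gram = AVOGADRO / grams_per_mol_bp
bits_per_gram = base_pairs_per_gram * bits_per_bp
exabytes_per_gram = bits_per_gram / 8 / 1e18

print(f"~{base_pairs_per_gram:.2e} base pairs per gram")
print(f"theoretical ceiling ~{exabytes_per_gram:.0f} exabytes per gram")
```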

More

I will be attending ITF USA in San Francisco, which takes place during DAC, so I expect all this and more will make an appearance.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.


