The afternoon of the Monday of SEMICON West is always the Imec Technology Forum (ITF) USA, held in the Marriott. Now that DAC is co-located with SEMICON West, this doesn't work so well, since presentations at the two events overlap. However, ITF is one of the few events that promises the presentations will be available after the conference and actually follows through, emailing out links the very next day.
If you don't know who imec is, then read my posts from when I visited them a few weeks ago:
There is a reason both imec and the EU headquarters are in Belgium: it is small and neutral. The forerunners of the EU wouldn't countenance the HQ being in Paris or Berlin. Similarly, the major semiconductor companies and the major equipment and materials companies would not be keen on an imec equivalent in Silicon Valley, Tokyo, or Hsinchu. But in little Belgium, with no real semiconductor industry of its own, all the leading-edge manufacturers of the semiconductor ecosystem can fund the organization and work together on pre-competitive research. Obviously, not all research pans out, so not everything tried at imec makes it into high-volume manufacturing (HVM). But I can't think of anything that has made it into HVM that wasn't first worked on at imec. For example, imec is doing a lot of work on next-generation interconnect materials: what comes after copper? It's a big list, and not all the materials will work out, but it is a safe bet that whatever does come next will be on that list.
So when imec talks about the semiconductor future, it is worth listening to.
ITF USA always opens with a keynote from Luc Van den hove, the CEO of imec. Since imec works in biology and medicine (and more) as well as semiconductors, Luc covered those areas too. His presentation was titled Deep Tech: The Lodestar to Meet the Challenges of the 21st Century. I am going to focus on the semiconductor roadmap part of his presentation, although the first third or so was about pandemic-related technology mixing biology and silicon, such as accelerating genome sequencing by leveraging the power of silicon, or breath-based PCR testing. Silicon-based medical device platforms have the potential to enable affordable, personalized therapies.
It seems to be compulsory for all keynotes to include graphs showing the explosive growth of data (bonus points for saying data is the new oil) and the compute needs for artificial intelligence (AI). Until about 2010, the compute needs for AI were doubling every two years, but since then, they have been doubling every six months. This brings to mind economist Herb Stein's adage that if something cannot go on forever, it will stop. However, this does point out the need for a more aggressive semiconductor roadmap, and Luc's target is 100X.
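To see why Stein's adage applies, it helps to put numbers on those two doubling rates. A minimal back-of-the-envelope sketch (using only the doubling periods quoted above; the `growth` helper is my own illustration, not anything from the talk):

```python
def growth(years, doubling_period_years):
    """Multiplicative increase in compute demand over `years`,
    given a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

# Pre-2010 pace: doubling every 2 years -> 2^5 = 32x per decade
per_decade_old = growth(10, 2.0)

# Post-2010 pace: doubling every 6 months -> 2^20 = 1,048,576x per decade
per_decade_new = growth(10, 0.5)

print(f"{per_decade_old:.0f}x vs {per_decade_new:,.0f}x per decade")
```

A million-fold increase in compute demand per decade is what makes the trend unsustainable without a much more aggressive hardware roadmap.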
There are several walls in the way, though:
Moore's Law is simply not going to stop, and there are various ways to continue it:
Going beyond the 2nm node will require high-NA EUV. I wrote about this recently in my post What Is High-NA EUV?, so I won't repeat all that here. One challenge Luc pointed out is that the big mirror (manufactured by Zeiss) needed to collect the light from the tin vaporization is 1 meter in diameter with an accuracy of 2 picometers. If you enlarged the mirror to the size of the earth, the biggest aberration would be the thickness of a human hair.
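That analogy is easy to sanity-check. A quick sketch, assuming a mean Earth diameter of about 12,742 km (the mirror size and 2 pm figure come from the talk; the Earth value is a standard reference figure):

```python
MIRROR_DIAMETER_M = 1.0
ACCURACY_M = 2e-12              # 2 picometer figure accuracy
EARTH_DIAMETER_M = 12_742_000   # mean Earth diameter, ~12,742 km

# Scale the mirror up to the size of the Earth and scale the
# surface error by the same factor.
scale = EARTH_DIAMETER_M / MIRROR_DIAMETER_M
scaled_aberration_m = ACCURACY_M * scale

print(f"Scaled aberration: {scaled_aberration_m * 1e6:.1f} micrometers")
```

The result is roughly 25 micrometers, which is indeed in the range of a fine human hair, so the analogy holds up.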
The first generation of EUV was notoriously late, taking about ten years to go from the first NXE3100 prototype, available in 2010, to the actual insertion of the NXE3400 into production in 2019. The first high-NA EXE5000 prototype will be available next year in the joint ASML and imec lab in Veldhoven. The plan is for insertion of the EXE5200 to take place in 2026, just three years later.
Today, mainline processes use FinFETs, although Samsung started volume manufacturing of its 3nm gate-all-around (GAA) process just a few weeks ago. After FinFET, all the leading-edge manufacturers are going to some version of nanosheet. The next development that imec has been researching is the forksheet, whereby various manufacturing advances allow a tighter gap between the N and P transistors. The next step is the CFET, where the N and P transistors are stacked vertically, further reducing the size of a gate. That is followed by the atomic channel.
I covered imec's work on forksheet transistors in my post What Comes after 2nm GAA? These extend logic and SRAM scaling roadmaps to the 1nm generation.
Next is the CFET, or complementary FET, where the N and P transistors are stacked one above the other. The big challenge with this approach is getting connections to the bottom transistor without wasting a lot of space.
Finally, using 2D materials to build atomic channel transistors (see the above photomicrograph).
But it's not all about the transistors; there is also interconnect. For more details on imec's work on new interconnect materials, see my post mentioned earlier, What Comes after 2nm GAA?
One of the big potential changes in interconnect is backside power distribution.
Putting all that together leads to the imec potential roadmap extension going out to 2036.
For memory, the big upcoming challenge is going 3D. For flash memory, we already have 3D NAND, and there is no reason the number of layers cannot be increased even more. But moving DRAM into the third dimension is still a big challenge. See the second post about my recent imec visit, Imec Visit Part Two.
This is taking the advanced packaging processes developed in the last 5 to 10 years and putting them on steroids.
Here, for example, is a 3D stacked core processor SoC, with an SRAM layer on top, the processor logic underneath, then the backside power distribution network, and finally the real I/O to get out of the package.
The future of scaling needs system-level thinking. For example, reducing the need for "dark silicon" increases the density of useful silicon, which is what we really care about. For the last few nodes, there has been a lot of Design Technology Co-Optimization, or DTCO. This is a combination of process features that don't directly allow increased scaling, such as buried power rail, but which allow routed logic (standard cells) and SRAM to be built at increased density by allowing, for example, the number of tracks in standard cells to be reduced. But that approach has reached a limit, since you can't take any more tracks out, nor reduce the number of fins in a FinFET to fewer than one.
To go further requires System Technology Co-Optimization. When we ran out of power with single-core microprocessors, we switched to multi-core. But another big potential change is adding domain-specific architectures. There is clearly lots we can learn from the brain, what Luc calls "brain-inspired computing".