So the new Doctor Who will be a woman. Who(!) would have guessed?
As it happens, I saw the very first episode of Dr Who (in B&W). Because of a macabre coincidence, I can even tell you the date: it aired on November 23, 1963, the day after President Kennedy's assassination (which, British time, happened the night before). William Hartnell was the first doctor, but he became too sick to continue after a couple of seasons, and so the idea of "regeneration" was invented to allow the series to continue. In those days, videotape was so expensive that most shows were not preserved, and all the early episodes have been lost except for the regeneration scene in which Patrick Troughton takes over as the second doctor. It survived only because it was shown on the children's show Blue Peter, which started in 1958 and is still on the air, in an episode that was not erased. Of course, it is now on YouTube.
But that's the fourth dimension, time travel. Here on earth we are in the process of moving semiconductors from two dimensions into a third.
Flash memory went first. Memory is under scaling pressure, too, for some of the same reasons as semiconductors in general, and for some special reasons of its own. DRAM relies on charge stored on a capacitor, and flash on charge stored on a floating gate. As the capacitors and gates get smaller, the number of electrons that can be stored shrinks too, with all sorts of fairly obvious challenges.
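To get a feel for how few electrons are involved, here is a back-of-the-envelope sketch. The capacitance and voltage figures are illustrative assumptions of mine, not numbers from the talk:

```python
# Rough electron count on a DRAM storage capacitor: N = C * V / q.
# The 10 fF and 1 V figures below are illustrative assumptions.
ELECTRON_CHARGE = 1.602e-19  # coulombs

def stored_electrons(capacitance_farads, voltage_volts):
    """Number of electrons held on the storage node."""
    return capacitance_farads * voltage_volts / ELECTRON_CHARGE

print(stored_electrons(10e-15, 1.0))  # ~62,000 electrons
print(stored_electrons(1e-15, 1.0))   # shrink C by 10x: ~6,200 electrons
```

Lose a factor of ten in capacitance and you lose a factor of ten in electrons, which is why sense margins and retention get harder every generation.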
2D flash reached the point where the next process generation would be very challenging, both because it would require expensive lithography and because of charge storage problems. Instead, a new set of challenges was attacked: if the memory could be built up into the third dimension, not only could a lot more bits be stored in the same die area, but the lithography could be less aggressive and the charge storage would have fewer issues. And we are not talking about just a couple of bits vertically, we are talking 24, 32, 64, or even 128 layers. This means a huge number of process steps, and some very challenging etching to get down through the whole stack. To get an idea of just how complex this is and how many steps it takes, watch this movie Coventor made with their virtual fabrication software:
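The area win from stacking can be sketched with some simple arithmetic. The pitches and layer count below are illustrative assumptions, not real process numbers:

```python
# Compare planar flash at an aggressive pitch with a 3D stack at a
# relaxed pitch. All dimensions are illustrative assumptions (nm).
planar_cell_area = 4 * 19 * 19      # ~4F^2 planar cell at F = 19 nm
string_footprint = 100 * 100        # one vertical string at ~100 nm pitch
layers = 64                         # bits stacked in each string

area_per_bit_3d = string_footprint / layers
print(planar_cell_area, area_per_bit_3d)
print(planar_cell_area / area_per_bit_3d)  # ~9x more bits per unit area
```

Even with much coarser (cheaper) lithography laterally, 64 layers more than make up the difference in bits per unit of die area.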
At the imec technology forum (ITF, or maybe we have to say itf since imec is really into lower case), Diederik Verkest talked about the challenges of moving logic into the third dimension. Memory has the advantage that it is regular, but logic does not. As traditional area scaling has run out of steam, we have moved towards design technology co-optimization (DTCO) where we optimize tools, IP, and usually some process kickers to get higher density anyway (things like buried power that allow us to make smaller standard cells).
The above diagram shows the challenges. Back on the left, what An Steegen (also of imec) always calls "happy scaling", the focus was on scaling devices and wires. In the DTCO area, it was on scaling logic cells with tweaks to the process. In the future, we will need to scale system functions.
One big gain from moving things into the third dimension is the complementary FET (CFET), where the P and N devices are stacked vertically with a common gate. This also simplifies access to the devices, which is one place where a lot of area is lost. It should allow standard cells to be scaled down to four tracks (and maybe three with buried power).
The next step in "going vertical" is vertical transistors (VFET). In the above diagram, you can see immediately the advantages and disadvantages. One advantage is that the gate length can be varied without it changing the device footprint (since the channels are vertical). The devices are also structurally isolated. But...while it is easy to get to the top electrode, getting to the bottom one is a challenge.
For SRAM, these VFET approaches can be used to extend SRAM scaling. As usual, regular structures are easier to scale. In particular, there are only a few word lines and bit lines (along with power).
But keeping Moore's Law on track requires logic to scale, too. The problem for logic cells is that they don't scale when the bottom electrode is hidden. The interconnect needed to get to it loses all the gains.
One approach is to add the regularity back in and embed logic in nanofabrics. This reminds me of when I first learnt anything about IC design, in the era before standard cells and automated place and route, when we used programmable logic arrays (PLAs), which had the same advantage of regularity. But working out where to put the implants was tough, since synthesis hadn't yet been invented and there wasn't really any automated approach for it. We were still in the Karnaugh map era.
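A PLA is easy to sketch: a regular AND plane of product terms feeds a regular OR plane, and "where to put the implants" amounts to choosing those terms. Here is a toy model in Python; the example function and signal names are my own, purely for illustration:

```python
# Toy programmable logic array (PLA): an AND plane of product terms
# feeding an OR plane. Illustrative sketch, not any real fabric.

# Each product term maps input name -> required value; inputs not
# mentioned are don't-cares. These two terms implement a XOR b.
AND_PLANE = [
    {"a": 1, "b": 0},   # a AND (NOT b)
    {"a": 0, "b": 1},   # (NOT a) AND b
]

# The OR plane lists, for each output, which product terms feed it.
OR_PLANE = {"xor_ab": [0, 1]}

def eval_pla(inputs):
    """Evaluate every product term, then OR the selected ones."""
    products = [all(inputs[name] == val for name, val in term.items())
                for term in AND_PLANE]
    return {out: any(products[i] for i in terms)
            for out, terms in OR_PLANE.items()}

for a in (0, 1):
    for b in (0, 1):
        print(a, b, eval_pla({"a": a, "b": b})["xor_ab"])
```

The regularity is the point: the array structure is fixed, and only the "programming" (which crosspoints are implanted) changes per design.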
Nanofabrics have the potential to reduce the amount of random interconnect, just as FPGAs do. However, just as with FPGAs, a completely new logic synthesis flow will be required. As an aside, one of the dirty secrets of synthesis is that FPGA synthesis is actually harder than synthesis for SoCs, but people think it is easier because the FPGA companies give it away, which constrains what third parties can charge. The nanofabric flow will require a similar approach, based on LUTs (look-up tables), although not necessarily field-programmable.
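The LUT idea itself is simple: any k-input boolean function can be implemented by storing its 2^k truth-table entries and indexing them with the inputs, which is what makes the fabric regular. A minimal sketch (the function names here are my own, not from any real synthesis flow):

```python
# Minimal sketch of a k-input look-up table (LUT): store the truth
# table of a boolean function, then index it with the input bits.

def make_lut(func, k):
    """Enumerate all 2**k input patterns and record func's output."""
    table = []
    for i in range(2 ** k):
        bits = [(i >> j) & 1 for j in range(k)]
        table.append(func(*bits))
    return table

def lut_eval(table, bits):
    """Pack the input bits into an index and read the stored entry."""
    index = sum(b << j for j, b in enumerate(bits))
    return table[index]

# A 3-input majority function mapped into a 3-LUT.
majority = make_lut(lambda a, b, c: int(a + b + c >= 2), 3)
print(lut_eval(majority, [1, 0, 1]))   # -> 1
print(lut_eval(majority, [1, 0, 0]))   # -> 0
```

The synthesis problem then becomes covering a netlist with k-input functions, which is exactly what FPGA technology mapping does today.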
Diederik's conclusions were:
Basically, there'll be some changes made.