Paul McLellan

SEMI Industry Strategy Symposium: The Technology

3 Feb 2021 • 7 minute read

  At the recent SEMI Industry Strategy Symposium, the second day had a section devoted to technology. I covered some of the first day in my earlier post SEMI Industry Strategy Symposium: The Outlook.

In the technology section, there were three presentations:

  • "The Challenges of Parts Per Quadrillion" from the Materials Supplier Perspective by Paul Burlingame of Air Liquide Advanced Materials. By the way, Air Liquide is a French company, and so "liquide" is pronounced something like "lickeed"
  • Logic Leadership in the PPAC Era by Scotten Jones of IC Knowledge, and also a blogger at my old stomping ground of SemiWiki
  • The End of 2D Scaling Leading to Revolutionary Advancements in Computing Architecture by Jason Abt of TechInsights

Air Liquide

The title in the agenda was "The Challenges of Parts Per Quadrillion" from the Materials Supplier Perspective. However, Paul's first slide had a punchier title: Every Atom Matters: The Parts Per Quadrillion Opportunity.

 Air Liquide is big, with over 66,000 employees. But like many companies that are in the supply chain for important products, you might never have heard of them. But it is a name I'm very familiar with. I lived in France for 5 years, and they are a public company there, one of the companies in the CAC40 ("cack karraunt") index, so one of the stocks that would regularly be mentioned on France Info, a radio station that carries financial news in the evening when I would drive home from the office (remember offices?).

The purity of gases and other materials used in semiconductor manufacturing is obviously important, but it's quite a stretch from the areas Breakfast Bytes mostly focuses on, so I'll stick to the high level. Paul started by giving some comparisons for what one part per quadrillion (PPQ) really means. It is like finding a single hair on a single individual in the whole world (~8 billion people). It is like 2.5 minutes out of the 4.5 billion years that Earth has been in existence.
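As a sanity check on those analogies (my arithmetic, not Paul's), here is the back-of-the-envelope version; the ~100,000 hairs per person figure is the usual rough estimate:

```python
# Back-of-the-envelope check of the parts-per-quadrillion (PPQ) analogies.
# One part per quadrillion is 1e-15.

# Analogy 1: one hair on one person among everyone in the world.
people = 8e9                 # ~8 billion people
hairs_per_person = 1e5       # rough estimate, ~100,000 hairs per head
total_hairs = people * hairs_per_person
print(f"1 / {total_hairs:.1e} hairs = {1 / total_hairs:.1e}")   # ~1.25e-15

# Analogy 2: 2.5 minutes out of the age of the Earth.
earth_age_years = 4.5e9
minutes_per_year = 365.25 * 24 * 60
earth_age_minutes = earth_age_years * minutes_per_year
print(f"2.5 / {earth_age_minutes:.1e} min = {2.5 / earth_age_minutes:.1e}")  # ~1.1e-15
```

Both come out within a whisker of 1e-15, so the analogies hold up.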

Why is this important? There is a mixture of psychological and business reasons, along with a change in the products in which semiconductors are used, from ones (like phones) where a failure is inconvenient to others (like medical and automotive) where a failure can be life-threatening:

  1. Our industry is full of perfectionists
    • Driven to attain accomplishment
    • Leave no stone unturned
  2. Criticality
    • Core functionality
    • From inconvenient to dangerous/lethal
    • Price of repair enormous
  3. Business Drivers
    • End market requirement
    • Liability

One of the big challenges that the semiconductor materials industry faces is that the raw materials tend to be byproducts of mining or other industrial streams. The business volumes are tiny. For example, most semiconductor interconnect is copper today, so it is critical — but the amount of copper used in semiconductor manufacture is tiny compared to the amount used for other purposes like plumbing.

  • Semiconductor materials are a very small percentage of these suppliers' business
  • Serving the semiconductor market requires significant investment with uncertain returns
  • Suppliers often lack the core competencies and sophistication that semiconductor manufacturers require
  • Meeting purity requirements incurs significant cost, requires extensive effort, is inconsistent from supplier to supplier, and is therefore risky

One big change is moving to real-time process control and using AI techniques to handle the huge amount of data produced. This requires a tighter partnership and integrated data-sharing.
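To make that concrete, here is a minimal sketch of the kind of real-time excursion monitoring he was alluding to, flagging outliers in a hypothetical purity-sensor stream against a rolling baseline. The function and parameter names are my own invention, and production fab analytics are far more sophisticated than a z-score test:

```python
# Minimal sketch: flag excursions in a (hypothetical) gas-purity sensor
# stream by comparing each reading to a rolling mean/standard deviation.

from collections import deque
from statistics import mean, stdev

def purity_monitor(samples, window=50, n_sigmas=4.0):
    """Yield (index, value) for readings that deviate from the rolling baseline."""
    history = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(history) == window:
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(x - mu) > n_sigmas * sigma:
                yield i, x  # possible contamination excursion
        history.append(x)

# usage: for i, x in purity_monitor(sensor_readings): raise an alert
```

Detecting the excursion is the easy part; the point Paul emphasized is that acting on it in real time requires the supplier and the fab to be looking at the same data, hence the tighter partnership.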

Paul's summary:

  • Greater materials control creates tremendous value
  • Higher purity and greater control are not easy but can be achieved
  • Data and AI are key
  • Full value only achieved through enhanced long-term partnerships

IC Knowledge

Scotten Jones presented a history of how Intel went from being clearly the leader in developing semiconductor processes (an activity that often gets the somewhat generic-sounding name "Technology Development" or "TD") to clearly behind the foundries. Remember, Intel was the first to FinFET at 22nm (they called it TriGate). All the foundries stuck with planar transistors for their 20nm processes and, because of the excessive leakage current and thus excessive static power of planar transistors at that node, phased 20nm out as fast as possible once they had 14/16nm FinFET with the same 20nm BEOL.

 The above image is not from Scotten's presentation, but it shows the dramatic falloff of the number of companies that are capable of funding a competitive process at various nodes. The old joke used to be that processes were named by the number of fabs for that process, so there would be 5 fabs at 5nm. But in fact, the falloff has been even faster. There are only 3 companies trying to manufacture at 5nm (we'll count Intel's 7nm in that mix since the widths and spacings are pretty much the same as the foundries' 5nm).

Scotten's presentation was titled Logic Leadership in the PPAC Era. Everyone knows that PPA is power, performance, area. The "C" is for cost. Increasingly, Moore's Law is slowing down not so much because of any of the three PPA parameters, but for economic reasons. The cost per transistor is not coming down fast enough, and the cost of doing a design successfully is too high to amortize except at the largest production volumes. Another letter that some are adding is time-to-market, giving PPACt. The complexity of the process, in terms of the number of steps required, is driving cycle times up. EUV has given a little respite versus multiple patterning, but that is a one-time saving, and EUV will itself need to be double patterned on some layers going forward to 3nm.
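To see the amortization problem concretely, here is a toy calculation with entirely made-up numbers (mine, not Scotten's): a hypothetical $500M design cost spread over different production volumes:

```python
# Toy illustration of why design cost limits who can use a leading-edge node.
# All numbers are invented for illustration, not from the presentation.

def cost_per_chip(design_nre, units, unit_manufacturing_cost):
    """Amortized cost per chip = manufacturing cost + NRE spread over the volume."""
    return unit_manufacturing_cost + design_nre / units

nre = 500e6  # hypothetical $500M to design a leading-edge SoC
for units in (1e6, 10e6, 100e6):
    print(f"{units:>12,.0f} units -> ${cost_per_chip(nre, units, 50):,.2f} per chip")
# At 1M units the NRE adds $500 per chip; at 100M units it adds only $5.
```

Only products shipping in the tens or hundreds of millions of units can swallow that kind of fixed cost, which is exactly why so few designs move to the newest node.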

Linear scaling has not kept up, and increasingly Moore's Law can only be continued by design technology co-optimization, or DTCO. Principally, this means reducing the number of tracks in standard cells by adding process features that enable another track to be knocked out: things like supervias, buried power rail (BPR), or backside power distribution. For more on DTCO, see my post IEDM: Novel Interconnect Techniques Beyond 3nm.
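As a rough illustration of why knocking out a track matters (my own toy numbers, not the presentation's): cell height is the track count times the metal pitch, so every track removed shrinks every standard cell on the die:

```python
# Toy standard-cell height calculation showing the DTCO effect of dropping a track.
# The pitch is an illustrative round number, not any foundry's actual value.

def cell_height_nm(tracks, metal_pitch_nm):
    """Standard-cell height = number of routing tracks * metal pitch."""
    return tracks * metal_pitch_nm

pitch = 30  # hypothetical minimum metal pitch in nm
for tracks in (6, 5):
    print(f"{tracks}-track cell: {cell_height_nm(tracks, pitch)} nm tall")
# Going from 6 to 5 tracks (e.g., by burying the power rail) shrinks every
# cell's height by ~17% without shrinking the transistors themselves.
```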

Scotten ran through all the process nodes from foundry 28nm (and Intel 22nm) down to foundry 3nm (and Intel 7nm), which looks ahead roughly a year: both of those processes are in TD but not yet in HVM. I'm not going to run through all the nodes, partially due to lack of space, but also because Scotten has done my work already. I mentioned he was a blogger at SemiWiki, and he has blogged his own presentation in great detail: 2,500 words, not even counting the words on all of his slides, all of which he has included in the post.

Let me just reproduce the first process node slide, which takes us back in time a decade to 2011:

And the last process node slide, which takes us ahead roughly a year to 2022:

You can see all the intermediate process nodes at Scotten's blog post I linked to above. Finally, Scotten's conclusions: HNS is "horizontal nanosheet", a gate-all-around (GAA) transistor, which Samsung is using at 3nm.

Once again, a link to Scotten's own commentary on his presentation.

TechInsights

Jason Abt of TechInsights presented The End of 2D Scaling Leading to Revolutionary Advancements in Computing Architecture. I'm going to skip some of what he said since I think we all accept that 2D scaling is reaching a limit.

The chart above summarizes how gate length scaling has been slowing for a decade, compared to the ITRS/IRDS roadmaps. Also, Intel's scaling has been further delayed waiting for 10nm to ramp, which, at CES, Intel said was finally happening in Q1 2021.

Jason then took a look at memory, specifically 3D NAND flash, using SK Hynix's 128 layer as an example.

Some of the learning that has gone into, and continues to go into, 3D flash can be extended to logic, which is gradually moving towards fine-grained 3D, first with vertical GAA, and eventually true 3D stacking at the transistor level. This is all drawn from the latest 2020 IRDS roadmap, which you can download from the IEEE (it is a 37-page PDF).

Going 3D, with transistor-on-transistor stacking, makes it possible to switch from the von Neumann architecture, a horizontal architecture with computation and memory separated, to in situ data transfer with much denser interconnect. As you can see in the little table at the bottom above, connections go from hundreds with wire bonds to hundreds of thousands with 3D transistors (but that is perhaps a decade into the future, so maybe 2030).
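The jump from hundreds to hundreds of thousands of connections falls out of simple geometry: wire bonds line the die perimeter, while 3D connections can use the whole die area. A quick illustration with made-up pitches (mine, not the numbers from Jason's table):

```python
# Perimeter vs. area connectivity. Pitches are illustrative round numbers.

die_mm = 10.0  # hypothetical 10 mm x 10 mm die

wirebond_pitch_um = 60.0
perimeter_bonds = 4 * (die_mm * 1000 / wirebond_pitch_um)

via_pitch_um = 20.0
area_connections = (die_mm * 1000 / via_pitch_um) ** 2

print(f"Perimeter wire bonds:      ~{perimeter_bonds:,.0f}")     # hundreds
print(f"Area-array 3D connections: ~{area_connections:,.0f}")    # hundreds of thousands
```

Perimeter connectivity grows linearly with die size while area connectivity grows quadratically, which is why the gap is three orders of magnitude.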

 

 Jason took a look at two of the most advanced AI systems around today, Graphcore and Cerebras. These are using regular 2D processes of course. But imagine combining the techniques here with 3D transistor-on-transistor processes.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.