
Author

Paul McLellan

Tags: lithography, 193i, RET

The History of Lithography, Part 1: From Stones to Lasers

2 Jan 2020 • 7 minute read

Lithography was originally a way of printing using a flat stone. Lithos (or λίθος) is the Greek word for stone. The stone would be coated with wax, then a drawing made in the wax with some sort of stylus. You had to use a special type of limestone that is dissolved by acid, so the acid would etch away the parts of the stone not protected by the wax. That's an example of lithographic limestone in the exciting picture at the top of the post. You could then print by covering the stone with ink and pressing paper onto it. The ink touched the paper, and thus printed, only where the stone had not been etched away. Later, lithography developed into a printing technique still used today, using aluminum plates and photographic chemicals, which are rather more convenient than stones and wax. I worked in a print shop when I was a teenager, so I've actually made such plates and printed paper with them.

If you are reading this blog, you probably think of lithography as having something to do with manufacturing silicon wafers. It is, in some ways, a combination of the ancient and modern printing lithography. Instead of coating the stone with wax, the wafer is coated with a special layer called photoresist. This is actually put on the center of the wafer, then the wafer is rotated fast so that it is spread into a uniform thin layer (with the excess flying off the wafer completely). The wafer is then exposed to laser light through a mask (more accurately called a reticle, but most people still say mask). The unexposed resist is then dissolved away. Finally, something is done through the photoresist that only affects the exposed areas: diffuse impurities, oxidize, plasma etch, ion implant. In the early days, a mask really did cover the whole wafer, but as dimensions scaled, that was too inaccurate, and a reticle was used that was 5X or 10X the size of the design and was stepped across the wafer exposing one die at a time. That's a highly oversimplified version of semiconductor lithography.

Here's a video of spinning on a layer of photoresist. This is being done manually on a 4" wafer, but this way you can see what is going on. In a modern fab, this is done in a completely automated flow on a 12" wafer.

The rest of this post is going to ignore everything that goes on in the fab except lithography, of which there are two main parts:

  • Given the layout we want, what do we put on the mask?
  • How do we expose the wafer through the mask with light to pattern it?

The two questions are heavily inter-related since what we need to put on the mask depends on how we are going to use the mask to transfer the pattern to the wafer. At a CEDA event a few years ago, before I rejoined Cadence, Lars Liebmann, then of IBM, now of Tokyo Electron (TEL), gave a presentation. I'm going to steal his diagram that he called the Rosetta Stone of Lithography, but also bring it up to date with the current status of lithography that is centered around extreme-ultra-violet, better known as EUV.

The first thing I need to warn you about is that lithography people and process people (especially marketing people at foundries) use different terminology. Process names are just names these days: there is nothing that is 7nm on a 7nm process. What actually matters is the minimum pitch allowed on a layer. For example, at 20/22nm, the minimum pitch is 80nm; at 7/10nm, it is 48nm. These numbers vary slightly from company to company, but not by much. Also, Intel's and imec's naming typically runs one process node later, so Intel 10nm is roughly equivalent to the foundries' 7nm. Sometimes the half-pitch is used instead. Unsurprisingly, that is half the pitch, and if the width and spacing on the layer are the same, it equals them both.
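To make the terminology concrete, here is a tiny sketch using just the two node-to-pitch pairs quoted above (remember these vary slightly from foundry to foundry), along with the half-pitch for each:

```python
# Node "names" versus the minimum pitch that actually matters.
# The two values below are the ones quoted in this post; they are
# approximate and vary slightly by company.
min_pitch_nm = {
    "20/22nm": 80,
    "7/10nm": 48,
}

for node, pitch in min_pitch_nm.items():
    # Half-pitch: if line width equals spacing, it equals them both.
    print(f"{node:>8} node: min pitch {pitch} nm, half-pitch {pitch / 2:.0f} nm")
```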

The fundamental equation of lithography is that the resolution (half-pitch this time) is k1 × λ ÷ NA where:

  • k1 is the Rayleigh parameter, which is a measure of the lithography complexity. Yield is affected if it drops below about 0.65, at which point we need to do something about it (what, we will get to later).
  • λ (lambda) is the wavelength of the light. Since we are using a laser (or EUV), only certain values are available. In fact, for what seems like forever, the answer has been 193nm, as we will see.
  • NA is the numerical aperture, which is the sine of the largest angle captured or emitted by the lens. We want NA to be as big as possible, but it is really hard to manufacture a lens with NA > 0.5. Worse, the depth of field scales as 1/NA², making the planarity of the wafer more and more critical as NA increases. If you are a photographer, NA is related to f-stops, where there is a similar tradeoff between how much light you collect and depth of field.
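The equation can be evaluated directly. As an illustration (the k1 and NA values here are my own plug-in numbers, not from this post; only λ = 193nm comes from the text), here is how the half-pitch falls out:

```python
def half_pitch_nm(k1: float, wavelength_nm: float, na: float) -> float:
    """Rayleigh criterion: resolution (half-pitch) = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

# 193nm light with an aggressive k1 and a high (immersion-class) NA.
# These specific values are illustrative assumptions.
print(half_pitch_nm(k1=0.34, wavelength_nm=193, na=1.35))  # roughly 48.6 nm
```

Note how a bigger NA or a smaller k1 both shrink the printable half-pitch, which is exactly why those became the two remaining scaling levers once λ froze at 193nm.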

In the early days of lithography, before the Rosetta Stone diagram even starts, we scaled by scaling λ, the wavelength of the light. First, we used G-line at 436nm, and (in about 1984) we went to I-line at 365nm. Then in 1989, we switched to KrF light sources at 248nm, and in 2001 to ArF at 193nm, and then...nothing. We are still using 193nm for all non-EUV lithography. The expectation was that we would next go to 157nm, but that never happened, since it was too difficult to build effective optics and masks. So the next step after that turned out to be EUV at 13.5nm with all its complications (most notably, it has to be in a full vacuum with reflective optics; more about that in Part 2).

The Rosetta Stone of Lithography

Here is Lars' Rosetta Stone of Lithography. It starts at 130nm (or 0.13um, as we actually called it back then), which was the first generation process using 193nm light. There is a huge amount of information in this chart. Across the top: the technology node name and the minimum pitch at that node; the wavelength (always 193nm, as I said above); and the NA of that era. The middle grid of the chart shows the k1 value. The left columns show the lithographers' way of looking at the world. The right-hand column shows the solution view, mostly EDA solutions. Across the bottom is a sort of description of the type of design, going from litho-friendly design to double-patterning-aware design, and lastly what Lars called GRATE, for gratings of regular arrays and trim exposures, which is a cute way of saying lines and cut masks.

But let's start at the top, with 130nm. Life was simple, we could just flash the laser through the reticle onto the photoresist with only the most rudimentary corrections on what we printed on the mask (to stop corners getting rounded off, for example).

Then, since we couldn't scale the wavelength (λ), we had to scale by doing things to NA and k1. We could scale NA in three ways.

First, just better lenses.

Second, we used off-axis and asymmetric illumination. Without going into all the details, one of the inputs to the equation for the angle at which to tilt the illumination is the pitch of the patterns on the wafer. So for DRAM, not such a big issue, but for logic, we had to introduce a lot of rules about the dominant direction on a layer, and increasingly complicated design rules, since some pitches were simply not allowed.

Third, we switched from having air between the lens and the wafer to having water in the gap, which has a higher refractive index. This was known as 193i, with the "i" standing for "immersion", and the whole approach known as immersion lithography. This all came to an end at 28nm, since it was impossible to manufacture better lenses, and we had already made the one-time gain of switching to immersion. That left k1 as the only scaling lever remaining.
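The one-time immersion gain can be sketched numerically: NA is really n·sin(θ), where n is the refractive index of the medium between lens and wafer, so swapping air (n ≈ 1.0) for water (n ≈ 1.44 at 193nm) lifts the achievable NA by the same factor. The specific sin(θ) value below is an illustrative assumption:

```python
N_AIR = 1.0
N_WATER_193NM = 1.44  # approximate refractive index of water at 193 nm

def numerical_aperture(sin_theta: float, n_medium: float) -> float:
    """NA = n * sin(theta), theta being the lens's capture half-angle."""
    return n_medium * sin_theta

sin_theta = 0.93  # an aggressive dry-lens capture angle (illustrative)
dry = numerical_aperture(sin_theta, N_AIR)
wet = numerical_aperture(sin_theta, N_WATER_193NM)
print(f"dry NA = {dry:.2f}, immersion NA = {wet:.2f}")
```

Since the gain is just the ratio of refractive indices, it could only ever be claimed once, which is why the chart's NA column stops improving after the immersion transition.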

By 90nm, we also needed resolution enhancement technology (RET) to scale k1. This is also known as optical proximity correction (OPC). RET turned masks into something more like a diffraction grating, depending on the interference of the light waves in just the right way to give us something approaching the pattern we desired. I pinched the example here from Wikipedia: the designers' layout (out of Innovus or Virtuoso) is the neat blue shape (hard to see). What has to go on the mask after RET is the weird green shape. What comes out after lithography is the rounded red shape.

So we couldn't get square corners, since RET is a sort of low-pass filter: vias came out more circular than square. From an EDA point of view, RET couldn't correct everything, so we needed tools to check the design, locate so-called hotspots that OPC would fail to correct, and get the designer to fix them. This was one reason that design rule deck sizes started to explode (the processes got more complex in other ways too).
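The "low-pass filter" intuition can be sketched with a toy 1-D model: take an ideal square feature, keep only the low spatial frequencies (standing in for what the lens can pass), and watch the sharp edges round off. This is a cartoon to illustrate the idea, not an actual lithography simulation:

```python
import numpy as np

# Ideal "layout": a 1-D square feature (1 inside the shape, 0 outside).
n = 256
layout = np.zeros(n)
layout[96:160] = 1.0

# The lens acts as a low-pass filter: zero out the high spatial frequencies.
spectrum = np.fft.fft(layout)
cutoff = 8                       # keep only the lowest few harmonics
spectrum[cutoff:n - cutoff] = 0.0
printed = np.fft.ifft(spectrum).real

# The printed edges are smeared: the steepest slope drops well below the
# hard step of the original layout.
edge_slope_layout = np.max(np.abs(np.diff(layout)))    # 1.0, a hard step
edge_slope_printed = np.max(np.abs(np.diff(printed)))  # noticeably smaller
print(edge_slope_layout, round(edge_slope_printed, 3))
```

This is why corners round off and vias print circular: the sharp transitions live in exactly the high frequencies that the optics cannot deliver, and no amount of mask cleverness can put them back.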

Tomorrow: Multi-patterning and EUV

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.


© 2023 Cadence Design Systems, Inc. All Rights Reserved.
