At the epicenter of the nanometer yield challenge are increasing defects due to systemic design-process interactions. There are areas in which design features and process limitations are so close to the edge that they cross the line, resulting in subtle defects, typically categorized as timing, transition, or small delay defects (SDD). Process technology advances, including changes in materials and in the aspect ratios of wires and vias, have introduced an increasing number of resistive bridges and vias, resulting in resistive shorts and opens. And more aggressive reticle enhancement techniques introduce more physical complexity and more inaccuracy in modeling. Traditional “in-line” yield diagnostic methodologies based on physical diagnostic analysis (PDA), including optical scanning and defect classification, do not adequately account for these broader-scale design-process interactions. While process control and in-line inspection techniques can minimize process variability, they have no knowledge of the design, and therefore they cannot productively or predictively identify design-process defects. And this predictability is the “Holy Grail” of profitable yield.
In addition to the technological challenges, there are macro-economic influences: a consumer-driven industry with shrinking product windows, increasing volumes, and relentless price pressure requires foundries and IDMs to optimize yield in a much narrower time frame. Under these conditions, IC products do not reach expected yields during their product lifetimes. This is unsustainable! Foundries are trying to cope by expanding the number and type of design rules. However, these too add to the overall complexity. Simply put, physical analysis (i.e., precision failure analysis) and verification are not keeping pace with the increasing number of design-process interactions, resulting in systemic yield loss.
So how do we resolve this? As indicated earlier, the answer has been with us for some time, but fundamentals remain absent from some of today’s attempts at implementation.
Thanks so much for sharing your comments ... and knowledge.
Understood - combined R&D investment/focus on in-line and volume diagnostics methods, plus efficient feedback of the resulting discoveries into design and process status (intelligence), will close the yield (and profitability) gap.
Ed... Wow... that's certainly a lot to parse through!
The primary comment I would make is that from my point of view, it's not a question of traditional inline methods vs. volume diagnostics. In-line methods are improving and continue to prove enormously useful. For instance, inspection tools can use the physical design to determine critical areas that require more attention, and therefore address some of the design/process interactions.
From my point of view, the "Holy Grail" is to utilize inline methods along with methods such as volume diagnostics to complement each other in contributing to an overall understanding of processes, products, and the interactions between them. Part of this vision includes feedback of the results of this understanding to both design and process "monitors" that span the spectrum from DFM rules to in-line inspection recipes.
Tom, thank you for your thoughtful comments.
Given the increasing number of parameters and associated variability, the points you raise are necessary for implementing a "smart" yield learning system, one that leads to greater accuracy and predictability between GDS and actual silicon.
Our concern with design closure of key quality parameters (area, timing, power, testability) naturally extends to the actual silicon behavior through modeling accuracy. Reducing iterations in both physical design and silicon production (respins) is critical to timely, profitable production.
Awareness of and concern about these issues must be shared with an equal sense of importance (and urgency) across logic, test, physical, and modeling design teams.
You state "Traditional “in-line” yield diagnostic methodologies ... do not adequately account for ..."
I would argue that equally important is enhancing "traditional" delay test capabilities. Using SDQL (Statistical Delay Quality Level) analysis to drive a timing-intelligent ATPG tool ensures you are targeting the most prevalent small delay defects: those that are neither "timing redundant" (so small they would never cause a circuit failure) nor so large that a generic delay test with no timing information would detect them anyway.
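To make that idea concrete, here is a minimal Python sketch of an SDQL-style weighting. The defect-size distribution, fault names, and slack values are purely hypothetical, and the integration is a crude numeric sum; it is meant only to show how the functional (system) slack and the slack of the path actually exercised by the test bound the window of defect sizes that escape, so a timing-aware ATPG flow can spend patterns on the highest-risk sites first.

```python
import math

# Hypothetical delay-defect size density F(s): likelihood that a defect adds
# s nanoseconds of delay. A decaying exponential is a common assumption; a
# real distribution would come from process/foundry data.
def defect_size_density(s_ns, scale_ns=0.05):
    return math.exp(-s_ns / scale_ns) / scale_ns

def fault_sdql(system_slack_ns, test_slack_ns, step_ns=0.001):
    """Escape contribution of one small-delay fault site.

    Defects smaller than the functional (system) slack are timing-redundant:
    they never cause a failure, so targeting them wastes patterns. Defects
    larger than the slack of the path the test actually sensitizes are already
    detected. Only sizes in between escape, so we integrate the defect-size
    density over [system_slack, test_slack].
    """
    if test_slack_ns <= system_slack_ns:
        return 0.0  # the test exercises a path at least as long as the functional worst case
    total, s = 0.0, system_slack_ns
    while s < test_slack_ns:
        total += defect_size_density(s) * step_ns
        s += step_ns
    return total

# Rank candidate fault sites so patterns go where escape risk is highest.
# Each tuple: (fault_name, system_slack_ns, test_slack_ns) -- values invented.
faults = [("u1/A->Z", 0.10, 0.60), ("u2/B->Z", 0.05, 0.12), ("u3/A->Z", 0.40, 0.45)]
for name, sys_slack, tst_slack in sorted(faults, key=lambda f: fault_sdql(f[1], f[2]), reverse=True):
    print(f"{name}: SDQL contribution = {fault_sdql(sys_slack, tst_slack):.4f}")
```

In a real flow the slacks would come from static timing analysis and the ATPG tool's path reports, and the defect-size distribution from the foundry, rather than the toy values above.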
Under "Modeling Accuracy" I would add that knowledge of the physical layout can be an extremely valuable input to effectively using an ATPG tools "broad range of fault models, ...". That physical knowledge could be applied to all instances of a specific library cell in the form of specific patterns to be applied or via lef/def physical layout information that identifies two nets that are physically close and are thus more likely to have a bridging defect.