The way in which IC designers model potential faults has a direct impact on the quality of silicon and the cost of test. At advanced nodes, fault models must evolve to more accurately represent real silicon effects - without significantly increasing time on the tester. This conflicting set of demands is one of the key challenges facing design for test (DFT) methodologies today.
I wrote some articles about DFT in the 1980s and 1990s, and fault models were fairly simple back then. Designers were representing silicon defects with stuck-at-0 and stuck-at-1 fault models. The idea was that internal nodes could literally be "stuck" at one of these values due to a manufacturing error.
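The stuck-at idea can be illustrated with a minimal sketch (plain Python, not any DFT tool's API): a fault forces a node to a constant value, and a test pattern "detects" the fault when the faulty circuit's output differs from the good circuit's output.

```python
from itertools import product

# Hypothetical two-input AND gate; a stuck-at fault forces its output
# node to a constant 0 or 1 regardless of the driving logic.
def and_gate(a, b, stuck_at=None):
    out = a & b
    return out if stuck_at is None else stuck_at

# A pattern detects a fault when faulty and fault-free outputs differ.
def detects(pattern, stuck_at):
    a, b = pattern
    return and_gate(a, b) != and_gate(a, b, stuck_at)

# Only (1, 1) detects output stuck-at-0; any pattern whose good
# output is 0 detects stuck-at-1.
sa0_tests = [p for p in product((0, 1), repeat=2) if detects(p, 0)]
sa1_tests = [p for p in product((0, 1), repeat=2) if detects(p, 1)]
print(sa0_tests)  # [(1, 1)]
print(sa1_tests)  # [(0, 0), (0, 1), (1, 0)]
```

This is why stuck-at test generation stays compact: one detecting pattern per fault suffices, independent of timing.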
A recent conversation with Mike Vachon, engineering group director for Encounter Test at Cadence, brought me up to date. He noted that the "stuck-at" fault models were typically all that was needed until the early 2000s, when designers working below 180nm discovered defect types that were not well explained by the stuck-at fault model. This led to the introduction of transition faults, which occur when a node is slow to switch from a 0 to a 1 or a 1 to a 0.
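Unlike a stuck-at fault, a transition fault is a timing defect, so detecting it requires a two-pattern test: the first vector initializes the node, the second launches the transition, and the output is captured at speed. A minimal sketch of the slow-to-rise case (illustrative Python, not tool functionality):

```python
# Model a slow-to-rise defect: a 0 -> 1 transition that has not
# settled by capture time is still observed at the old value.
def observed_value(initial, final, settled):
    if initial == 0 and final == 1 and not settled:
        return 0  # transition launched but too slow: capture reads 0
    return final

# Good silicon settles in time and captures 1; the defective node
# still reads 0, so the two-pattern test detects the fault.
print(observed_value(0, 1, settled=True))   # 1
print(observed_value(0, 1, settled=False))  # 0
```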
Crossing the Bridge
As technology nodes continued to shrink, designers noticed defects that behaved as if two nets were shorted together. This led to the introduction of the bridging fault model. This is different from earlier fault models, Vachon noted, because the behavior of one net in the silicon is dependent on another net.
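That cross-net dependence is the defining feature of a bridge. Common resolution models, sketched here in illustrative Python (the model names are standard in the test literature, not tool-specific syntax), show how the observed value of each net depends on what drives the other:

```python
# Resolution models for a bridge between nets A and B. Each function
# takes the values the two drivers attempt to place on the nets and
# returns the pair of values actually observed.
def wired_and(a, b):
    return a & b, a & b   # short pulls both nets low if either is 0

def wired_or(a, b):
    return a | b, a | b   # short pulls both nets high if either is 1

def dominant(a, b):
    return a, a           # net A's driver overpowers net B's

# With opposite driven values, the bridge corrupts at least one net.
print(wired_and(1, 0))  # (0, 0): net A flipped from 1 to 0
print(dominant(1, 0))   # (1, 1): net B flipped from 0 to 1
```

Exciting a bridge thus requires driving the two nets to opposite values, which is why bridging tests need net-pair information rather than single-node fault sites.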
Cadence, meanwhile, has offered a pattern fault model since its acquisition of IBM test technology in 2002. The pattern fault allows Encounter Test users to create different kinds of fault models to drive their test pattern generation. Basically, pattern faults can model any defect whose behavior can be described in terms of input values and transitions, and expected output values and transitions.
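Conceptually, a pattern fault pairs an excitation condition with the faulty response, as described above. A minimal data-structure sketch (field names are illustrative, not Encounter Test syntax):

```python
from dataclasses import dataclass

# A pattern fault, conceptually: the input values (or transitions)
# required to excite the defect, plus the good and faulty responses.
@dataclass
class PatternFault:
    inputs: dict        # required pin values to excite the fault
    good_output: int    # expected fault-free response
    faulty_output: int  # response when the defect is present

# Example: a defect observable only when pin A=1 and pin B=0.
fault = PatternFault(inputs={"A": 1, "B": 0},
                     good_output=1, faulty_output=0)
print(fault.inputs)  # {'A': 1, 'B': 0}
```

Because the excitation condition can name any combination of pins and values, one mechanism can represent stuck-at, bridging, and more exotic defect behaviors.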
Pattern faults can model any type of bridge behavior in Encounter Test, which automatically creates bridging fault models from net pair lists. Encounter True-Time ATPG generates delay (transition) tests that target bridging fault models, and Encounter Diagnostics uses the same faults to isolate bridges.
"Pattern fault modeling provides the flexibility to let users do all kinds of experimentation with robust fault modeling," Vachon said. For example, the pattern fault model can support gate exhaustive testing, in which a set of test patterns exercises all possible input combinations for a given library cell. Cell exhaustive testing takes this up one level by generating patterns that exercise all possible input combinations at cell boundaries.
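The pattern-count implication of exhaustive testing is easy to see: an n-input cell requires 2^n static patterns. A quick sketch (plain Python, for illustration):

```python
from itertools import product

# Gate exhaustive testing applies every input combination to a cell:
# 2**n static patterns for an n-input cell (more still if two-vector
# transition sequences are also enumerated).
def exhaustive_patterns(num_inputs):
    return list(product((0, 1), repeat=num_inputs))

patterns = exhaustive_patterns(3)  # e.g. a 3-input NAND cell
print(len(patterns))  # 8
print(patterns[0], patterns[-1])  # (0, 0, 0) (1, 1, 1)
```

The exponential growth in patterns per cell is exactly the test-cost pressure discussed below.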
There's been some discussion lately about a cell-aware fault model. By targeting defects inside a library cell, this approach creates additional faults for test pattern generation to target. The technique involves looking inside library cells, analyzing the physical layout, and identifying where potential defects might occur. The pattern fault model can support the cell-aware fault model, and Cadence is currently partnering with customers to evaluate the benefits and costs of the cell-aware approach.
Evaluating Test Costs
Advanced fault models do not come for free. "As you go through this progression of adding more and more faults to your fault model, you are also driving up your test costs," Vachon said. "More patterns means more tests, and more tests means more time in the test socket." It may also demand more expensive testers with more memory. Thus, he said, "customers are very wary about adding new faults to their fault models until they have very solid proof that they're going to have a payback" in terms of better yields.
The cell-aware approach, for example, increases test costs, and generating a cell-aware library is not easy. It may make sense in one situation but not another. It's thus important, Vachon observed, that customers have the ability to choose the best fault modeling approach for a given project. Pattern fault modeling, he said, "gives customers the flexibility to do their own experimentation and make their own decisions, as opposed to having a dictated solution."
So what's the future of fault modeling, given that some IC designs are now targeting a billion faults?
"I think the future is one of very cautiously moving to fault modeling beyond static, transition, and bridging," he said. "We need to cautiously grow the fault population based on real silicon data, as opposed to a blanket movement to a much more expensive fault model that will drive up test costs without justification."
You don't hear much about DFT these days, but it is far from a solved problem. Developments and decisions are underway that will ultimately affect all IC designs.
Related Blog Post
Front-End Design Summit: The Future of RTL Synthesis and Design for Test