The way in which IC designers model potential faults has a direct impact on the quality of silicon and the cost of test. At advanced nodes, fault models must evolve to more accurately represent real silicon effects - without significantly increasing time on the tester. This conflicting set of demands is one of the key challenges facing design for test (DFT) methodologies today.
I wrote some articles about DFT in the 1980s and 1990s, and fault models were fairly simple back then. Designers were representing silicon defects with stuck-at-0 and stuck-at-1 fault models. The idea was that internal nodes could literally be "stuck" at one of these values due to a manufacturing error.
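The stuck-at idea can be sketched in a few lines. This is an illustrative toy, not any Cadence tool: a hypothetical circuit `out = (a AND b) OR c` with internal node `n1`, where a fault forces one node to a constant, and a test pattern "detects" the fault when the faulty and fault-free outputs differ.

```python
# A stuck-at fault ties one node to a constant 0 or 1. A test pattern
# detects the fault when the faulty circuit's output differs from the
# fault-free output. Toy circuit: out = (a AND b) OR c, node n1 = a AND b.
def simulate(a, b, c, stuck=None):
    """Evaluate the circuit; `stuck` optionally forces one node to a
    constant, e.g. ("n1", 0) models n1 stuck-at-0."""
    def v(name, value):
        # Override the node's value if it is the faulty node.
        return stuck[1] if stuck and stuck[0] == name else value
    a, b, c = v("a", a), v("b", b), v("c", c)
    n1 = v("n1", a & b)
    return v("out", n1 | c)

def detects(pattern, fault):
    """True if the pattern exposes the fault at the circuit output."""
    a, b, c = pattern
    return simulate(a, b, c) != simulate(a, b, c, stuck=fault)

print(detects((1, 1, 0), ("n1", 0)))  # True: good out=1, faulty out=0
print(detects((0, 0, 0), ("n1", 0)))  # False: the fault is not excited
```

Real ATPG works the same way in principle, just at the scale of millions of nodes: find a pattern that excites each fault and propagates its effect to an observable output.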
A recent conversation with Mike Vachon, engineering group director for Encounter Test at Cadence, brought me up to date. He noted that the "stuck-at" fault models were typically all that was needed until the early 2000s, when designers working below 180nm discovered defect types that were not well explained by the stuck-at fault model. This led to the introduction of transition faults, which occur when a node is slow to switch from a 0 to a 1 or a 1 to a 0.
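Unlike a stuck-at fault, a transition fault needs a pair of patterns: one to initialize the node to its starting value and one to launch the transition and capture the late result. A common simplification is to treat a slow-to-rise node as stuck-at-0 on the capture cycle. A minimal sketch of that two-pattern requirement (hypothetical helper names, with a trivial pass-through circuit):

```python
def buf(x):
    # Trivial "circuit": the output directly observes the node.
    return x

def detects_slow_to_rise(p1, p2):
    """p1 must initialize the node to 0; p2 launches the 0->1 transition.
    On the capture pattern the slow node behaves like stuck-at-0, so the
    fault is detected when the good capture value differs from 0."""
    initialized = buf(p1) == 0          # node held at 0 by the first pattern
    good = buf(p2)                      # fault-free value on capture
    faulty = 0 if good == 1 else good   # slow-to-rise node still reads 0
    return initialized and good != faulty

print(detects_slow_to_rise(0, 1))  # True: valid launch of a 0->1 transition
print(detects_slow_to_rise(1, 1))  # False: node was never initialized to 0
```

The practical consequence is that transition tests must be applied at speed, which is part of why they cost more tester time than static stuck-at tests.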
Crossing the Bridge
As technology nodes continued to shrink, designers noticed defects that behaved as if two nets were shorted together. This led to the introduction of the bridging fault model. This is different from earlier fault models, Vachon noted, because the behavior of one net in the silicon is dependent on another net.
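A common way to capture that dependence is a resolution function on the two shorted nets: wired-AND, wired-OR, or one net's driver dominating the other. The sketch below is illustrative of the general modeling idea, not of Encounter Test's internal representation:

```python
# Three classic bridging-fault resolution models for shorted nets a and b.
def wired_and(a, b):
    return a & b, a & b   # both nets pulled to the AND of the pair

def wired_or(a, b):
    return a | b, a | b   # both nets pulled to the OR of the pair

def a_dominates(a, b):
    return a, a           # net a's driver overpowers net b's

# With a=1 and b=0, each model predicts different silicon behavior:
print(wired_and(1, 0))    # (0, 0)
print(wired_or(1, 0))     # (1, 1)
print(a_dominates(1, 0))  # (1, 1)
```

Which resolution actually occurs depends on the relative drive strengths in the layout, which is why bridge tests are typically generated against specific extracted net pairs rather than generically.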
Cadence, meanwhile, has offered a pattern fault model since its acquisition of IBM test technology in 2002. The pattern fault allows Encounter Test users to create different kinds of fault models to drive their test pattern generation. Basically, pattern faults can model any defect whose behavior can be described in terms of input values and transitions, and expected output values and transitions.
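Conceptually, a pattern fault is a condition/observation pair: if the inputs take certain values or transitions, the defective cell's output differs from the good output in a specified way. The data structure below is a hypothetical illustration of that idea, not Encounter Test's actual fault syntax:

```python
# A pattern fault as an excitation condition plus good/faulty behavior.
# Example: a defective 2-input NAND whose output reads 1 instead of 0
# whenever both inputs are 1.
pattern_fault = {
    "excite": {"a": 1, "b": 1},   # input values that excite the defect
    "good":   {"out": 0},         # fault-free expected output
    "faulty": {"out": 1},         # output observed when the defect is present
}

def excited(fault, inputs):
    """True if the applied inputs satisfy the fault's excitation condition."""
    return all(inputs.get(pin) == val for pin, val in fault["excite"].items())

print(excited(pattern_fault, {"a": 1, "b": 1}))  # True
print(excited(pattern_fault, {"a": 0, "b": 1}))  # False
```

Because the excitation condition and the faulty response are both user-describable, one generic mechanism can express stuck-at, bridging, and more exotic defect behaviors.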
Pattern faults can model any type of bridge behavior in Encounter Test. The tool automatically creates bridging fault models from net pair lists. Encounter True-Time ATPG generates delay (transition) tests that target bridging fault models, and Encounter Diagnostics uses the same faults to isolate bridges.
"Pattern fault modeling provides the flexibility to let users do all kinds of experimentation with robust fault modeling," Vachon said. For example, the pattern fault model can support gate exhaustive testing, in which a set of test patterns exercises all possible input combinations for a given library cell. Cell exhaustive testing takes this up one level by generating patterns that exercise all possible input combinations at cell boundaries.
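Gate exhaustive testing is simple to state precisely: for an n-input cell, apply all 2^n input combinations. A minimal sketch (hypothetical function name):

```python
from itertools import product

def gate_exhaustive_patterns(n_inputs):
    """All 2**n input combinations for an n-input library cell."""
    return list(product((0, 1), repeat=n_inputs))

patterns = gate_exhaustive_patterns(3)
print(len(patterns))              # 8 patterns for a 3-input cell
print(patterns[0], patterns[-1])  # (0, 0, 0) (1, 1, 1)
```

The exponential growth in patterns per cell is exactly the test-cost trade-off discussed below: exhaustive coverage of each cell is thorough, but it multiplies tester time.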
There's been some discussion lately about a cell-aware fault model. By targeting defects inside a library cell, this approach can create additional faults for test pattern generation to target. The technique involves looking inside library cells, analyzing the physical layout, and determining where potential defects might occur. The pattern fault model can support the cell-aware fault model, and Cadence is currently partnering with customers to evaluate the benefits and costs of the cell-aware approach.
Evaluating Test Costs
Advanced fault models do not come for free. "As you go through this progression of adding more and more faults to your fault model, you are also driving up your test costs," Vachon said. "More patterns means more tests, and more tests means more time in the test socket." It may also demand more expensive testers with more memory. Thus, he said, "customers are very wary about adding new faults to their fault models until they have very solid proof that they're going to have a payback" in terms of better yields.
The cell-aware approach, for example, increases test costs, and generating a cell-aware library is not easy. It may make sense in one situation but not another. It's thus important, Vachon observed, that customers have the ability to choose the best fault modeling approach for a given project. Pattern fault modeling, he said, "gives customers the flexibility to do their own experimentation and make their own decisions, as opposed to having a dictated solution."
So what's the future of fault modeling, given that some IC designs are now targeting a billion faults?
"I think the future is one of very cautiously moving to fault modeling beyond static, transition, and bridging," he said. "We need to cautiously grow the fault population based on real silicon data, as opposed to a blanket movement to a much more expensive fault model that will drive up test costs without justification."
You don't hear much about DFT these days, but it is far from a solved problem. Developments and decisions are underway that will ultimately affect all IC designs.
Related Blog Post
Front-End Design Summit: The Future of RTL Synthesis and Design for Test