In the EDA history of design rule checking (DRC), there have been two distinct eras so far. The first is the era of a single run on a single CPU, which enabled designers to run DRC on technology nodes ranging from 1um to 0.18um; we refer to this as DRC 1.0. The second is the era of multi-threaded DRC runs, introduced to meet the performance requirements of process technologies from 0.18um down to 28nm and below; we refer to this as DRC 2.0. The demarcation between these two eras was clearly driven by the need for faster turnaround time, as design size and design rule complexity increased from node to node. Fortunately, the parallel rise of multi-core CPU platforms enabled this needed transition.
Incidentally, the leading tool that dominated the DRC 1.0 era could not architecturally be enhanced to serve the market's need to run DRC on technologies that required multi-threaded performance. Therefore, as happens in a competitive market, a new technology emerged from an EDA vendor in the late 90s and grew to dominate the DRC 2.0 era. The main reason for the change in technology leadership was the ability to handle multi-threaded processing, which required a brand-new infrastructure and could not be achieved by enhancing the existing technologies of the DRC 1.0 era. Interestingly, it took approximately a decade for the newcomer to become the leading DRC tool in the semiconductor market.
Having a robust multi-threaded processing platform enabled the transition from DRC 1.0 to DRC 2.0, and this has served the DRC market for years, enabling designers to run their full-chip physical signoff DRC step overnight. Fast forward to the deep submicron process technologies, where the introduction of multi-color decomposition and FinFET technologies, along with the growth in design size and design rule complexity, has highlighted the limitations of the current DRC software tools on the market. Now, designers can no longer get their full-chip DRC runs completed overnight, even when running market-leading DRC tools on 100+ CPUs.
To address the expectations gap, multi-threaded processing systems gave way to distributed processing ones, accompanied by the rise of cloud computing. Even though EDA vendors have worked hard to make their DRC 2.0 software leverage distributed processing, it is not sufficient to meet the ongoing performance improvements required by designers at the advanced technology nodes. This has forced designers to schedule additional time into the design cycle to compensate for the performance gap in market tools. The issue has become amplified at the most advanced nodes, as there is no new technology in the DRC market to address designers' true turnaround time needs and give them back their overnight full-chip DRC run.
Learning from the DRC 1.0 to DRC 2.0 transition, it is clear that instead of trying to enhance the current DRC tools, the real solution is to enable the DRC 3.0 era with a tool that offers massive scalability and truly distributed processing, so that it can leverage any cloud computing environment. But DRC 3.0 cannot happen by patching DRC 2.0; it will need to be developed from the ground up using a new architecture and new technologies. Of course, as of today such software is not available on the market, and even when it is, it will take time for the market to transition, not only from an ecosystem standpoint (I will touch on this topic in a subsequent blog) but also from a habit standpoint.
Right now, designers plan their tapeout timeframe to include the additional time needed due to DRC performance limitations. Inconvenient as it is, this costs them an additional one to two days against their time-to-tapeout window, and they have no alternative. Where is DRC 3.0?