What are the "gotchas" as design teams move to 40 nm process
nodes and below? The best way to find out is to hear from someone who's been
there. At Management
Day at the recent Design Automation Conference, Jitendra Khare, director of
central engineering at AppliedMicro,
presented the most comprehensive and informative list I've seen of the
challenges that emerge as SoCs move to 40 nm.
While Management Day looked at both the technical and
business challenges of complex SoCs, Khare's presentation, in a paper session I
moderated, stayed on the technical side. (I previously blogged about a Management
Day panel on which Khare and four other presenters appeared). Management
Day was sponsored by Cadence.
Khare opened his presentation by talking about trends that
are driving SoCs to lower process nodes, including multiple embedded cores,
complex interfaces, smart power management, and cost concerns. The need to
support a variety of applications with low-cost hardware is a key overall driver.
But there are some things that "need to be done differently
for 40 nm SoCs," as Khare said. Here are some of the challenges he cited.
Hard IP Procurement
There could be dozens of IP blocks on large SoCs, and many
are "hard" IP blocks that have already gone through physical implementation.
"One thing we notice is that timing corners just explode at 40 nm," Khare said.
"You need to negotiate all corners in advance with IP vendors." The use of
low-power design techniques causes the number of corners to increase even more.
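To get a feel for why the corner count explodes, here is a rough sketch (my own illustration, not from the talk) that counts signoff corners as a cross product of process, voltage, temperature, and extraction conditions, then multiplies by the operating modes that low-power techniques introduce. All of the specific values are hypothetical.

```python
# Illustrative only: counting signoff corners as a cross product of conditions.
# The specific corner names and counts below are assumed, not from the talk.
from itertools import product

process     = ["ss", "tt", "ff"]                           # slow, typical, fast silicon
voltage     = ["0.99V", "1.10V", "1.21V"]                  # nominal +/- 10%
temperature = ["-40C", "25C", "125C"]
extraction  = ["cworst", "cbest", "rcworst", "rcbest", "typical"]

base_corners = list(product(process, voltage, temperature, extraction))
print(f"Base PVT x extraction corners: {len(base_corners)}")   # 3*3*3*5 = 135

# Low-power techniques (voltage scaling, power gating, retention) add operating
# modes that each need their own analysis, multiplying the corner count again.
power_modes = ["normal", "low-voltage", "sleep-retention"]
total = len(base_corners) * len(power_modes)
print(f"Corners across power modes: {total}")                   # 405
```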
Khare noted that hard IP must also be compliant with design
for manufacturability (DFM) requirements, and must be available for all of the
metal stack options that may be employed in the SoC design (or, I would think, at least the options actually under consideration).
"The main thing at 40 nm is leakage power," Khare said. "You
cannot underestimate the significance of leakage. It can kill your chip." At
high temperatures, he said, leakage power can be twice the dynamic power at 40
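For context, these are the standard power relations (not something Khare derived in the talk): total power splits into a dynamic and a leakage term, and the subthreshold component of leakage grows roughly exponentially as temperature rises and the threshold voltage falls, which is why the leakage-to-dynamic ratio can flip at high temperature.

```latex
P_{\text{total}} = P_{\text{dyn}} + P_{\text{leak}}, \qquad
P_{\text{dyn}} = \alpha\, C_{\text{eff}}\, V_{DD}^{2}\, f, \qquad
P_{\text{leak}} = V_{DD}\, I_{\text{leak}}, \qquad
I_{\text{sub}} \propto e^{-q V_{th} / (n k T)}
```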
So what can you do? For memory leakage, you can't do much
about bit cells, but you can use high threshold-voltage (HVT) decoders, memory
sleep modes, and latch-based RAMs. Standard cell leakage can be controlled with
HVT cells, but there's a performance tradeoff. Another possibility is using a
50 nm cell library. This improves leakage and timing performance, but as you'd
expect, there's an area penalty.
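As a back-of-envelope illustration of the HVT tradeoff (my own sketch; the cell count and per-cell leakage numbers below are made up, not figures from the presentation):

```python
# Hypothetical estimate of leakage saved by swapping standard-Vt cells for
# high-Vt (HVT) cells on non-critical paths. All numbers are illustrative.
cells_total = 2_000_000      # assumed standard-cell instance count
svt_leak_nw = 20.0           # assumed leakage per standard-Vt cell, nW
hvt_leak_nw = 3.0            # assumed leakage per HVT cell, nW

def leakage_mw(hvt_fraction: float) -> float:
    """Total standard-cell leakage in mW for a given HVT swap fraction."""
    hvt_cells = cells_total * hvt_fraction
    svt_cells = cells_total - hvt_cells
    return (svt_cells * svt_leak_nw + hvt_cells * hvt_leak_nw) / 1e6

for frac in (0.0, 0.5, 0.8):
    print(f"{int(frac * 100):3d}% HVT cells -> {leakage_mw(frac):5.1f} mW leakage")

# The tradeoff: HVT cells are slower, so timing-critical paths keep faster cells.
```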
Test and Reliability
Khare talked at length about this topic. Key points include:
- The main message here is that this is a very important consideration, especially with the current push towards low-cost packaging.
- You need to simulate every package design for power and signal integrity.
- On-package capacitors are becoming necessary.
- Packages must be designed for current surges due to at-speed scan tests (a rough illustration follows below).
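To make the decoupling point concrete, here is a simple, hypothetical droop estimate using the usual V = L·di/dt + I·R relation and a Q = C·ΔV sizing argument; none of the numbers come from Khare's presentation.

```python
# Hypothetical power-delivery estimate for an at-speed scan current surge.
# Every value here is an assumed, illustrative number.
i_step = 5.0      # A, extra current when the at-speed capture burst fires
t_rise = 5e-9     # s, how fast that current ramps up
l_loop = 100e-12  # H, effective package/board power-loop inductance
r_pdn  = 2e-3     # ohm, effective power-delivery-network resistance
vdd    = 1.1      # V, nominal supply

droop = l_loop * i_step / t_rise + i_step * r_pdn
print(f"Estimated droop: {droop * 1000:.0f} mV ({droop / vdd * 100:.1f}% of VDD)")

# Size on-package decoupling to supply the charge during the ramp (Q = C * dV),
# assuming the board-level regulator cannot respond within t_rise.
droop_budget = 0.05 * vdd
c_decap = i_step * t_rise / droop_budget
print(f"Decap for a 5% droop budget: {c_decap * 1e9:.0f} nF")
```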
Khare noted that high-speed interfaces require
chip/package/board co-simulation for signal integrity effects. And hard IP
blocks must be simulated as well. "You cannot trust what the IP vendor is
telling you. You have to do simulations in house," he said.
Why Bother to Move?
With a list like this, you may wonder why people don't just
stay at 65 nm or 90 nm (or even 130 nm or 180 nm, which are still commonly used
for analog design). Many design teams will, for now. But die size,
multi-function performance, and unit cost requirements will drive an increasing
number of design teams to move to 40 nm and below. To make it more practical,
we do need to get SoC development costs under control. The recent EDA360 vision paper has some suggestions in this area.
The good news is that design tools and libraries are ready
for 40 nm, and that there's good information available from those who have
paved the way. This Management Day talk was one example.