I was asked the following question recently.
No longer are we seeing increasing amounts of functionality being crammed into chips, except under very special circumstances. Chips today are trending towards the leaner side. Is this only when power is a primary concern? Does this apply to all technology nodes or only to larger ones? What about pins – I/O has often been the limiter. What impact will this have on IP? Will this derail any notion of chiplets? Does it impact EDA tools?
So here are my observations and opinions.
Before we debate whether this observation is correct or not, let’s take a quick walk down memory lane. Remember how this super-integration started? Somewhere around the early 80s, we were designing single-board computers (SBCs) with 7400-family logic chips. Remember those 16-pin, 24-pin and 28-pin DIP (dual-in-line package) chips? The NAND, NOR and XOR chips and the D flip-flops. The familiar acronyms in those days were SSI and MSI; we were not quite at LSI yet. Then we saw the arrival of gate arrays, followed by cell-based ICs (CBICs), which then morphed into ASSPs with the emergence of chipsets for the PC market. This was the genesis of the fabless industry. We witnessed the benefits of Moore’s Law and saw more integration in each new generation of chips, even when the process technology did not change – you just built a larger die, because we were not yet stressing the limits of the process technology or die size. Consolidating “discrete logic designs” into single-chip solutions also unleashed an unprecedented desktop PC market that drove chip industry innovations for years to come. This trend lasted for more than 25 years, riding the process technology wave from 2 micron down to 90nm, which is when the system-on-chip became popular. At 90nm, you really could put a lot more functionality on a chip. You no longer needed a full chipset to build a laptop – the guts of a laptop were just an Intel processor, a Northbridge and a Southbridge chip, and of course DRAMs.
Then we saw the emergence of Palm Pilots and Microsoft Pocket PCs using highly integrated SoCs, to be followed by the massive adoption of SoCs for consumer electronics. Think DVD players, digital still cameras, digital video cameras, hand-held PCs, etc., and then networking and routers as the industry grew with the emergence of the internet.
Then something happened when we hit 45/40nm. With the arrival of the iPhone, we entered the golden era of super-integration and complex SoCs, and the rest is history. We saw SoCs with heterogeneous architectures, where you have a CPU core as the compute engine, a graphics core for the display engine, a DSP for the modem, and lots of embedded SRAM. This trend is continuing as we see newer and more dedicated functions being added to new chips (we now call them applications processors, or super-computers on a chip). Just look at the latest generation of applications processors in 12nm/10nm/7nm – they now have multi-core CPU engines, multi-core GPUs, dedicated DSPs for audio, dedicated neural network engines for AI, integrated RF, dedicated cellular modems, WiFi, and specialized protocol interfaces for memory, display, camera, networking and wireless, all crammed into a die no bigger than 9.7mm on a side.
Now, the reality . . .
As technology reached 28nm and smaller geometries, we still got the density benefits of Moore’s Law. But for the first time, we became concerned about the tradeoffs between performance, power and cost. At sub-28nm, the cost of design has skyrocketed due to process technology complexity: we now have to deal with lithography effects, multi-patterning and FinFET design, among many other technical challenges. Just look at the mask costs for 28nm versus 16nm versus 10nm. Dare we ask how much a 7nm mask set costs? And then there is the proliferation of, and migration to, newer and higher-speed interface protocols: DDR2/3/4/5, LPDDR2/3/4/4X/5, USB 2/3, PCIe 2/3/4, 28G/56G/112G SerDes, 10G/25G/40G Ethernet, and so on.
Unless you have an SoC product that can ship 50+ million units, or you are building (high-value, expensive) high-performance networking chips for data centers, you probably cannot afford to start a design at 7nm. Luckily for the semiconductor industry, there are quite a few “unicorn SoCs” that fit those criteria (applications processors, 4G modems, crypto-mining chips, etc.). The needs of the end markets and applications are driving demand for higher and higher performance (optical networking, servers and switches for data centers, as well as chips for autonomous driving).
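To make the economics behind that statement concrete, here is a rough back-of-the-envelope sketch. Every figure in it – the NRE (masks, IP, design and verification), the per-unit silicon cost, and the volumes – is a hypothetical placeholder, not an actual foundry or design cost. The point is simply that a fixed NRE has to be amortized over shipped units, so a leading-edge node only pencils out for very-high-volume (or very-high-value) products.

```python
# Back-of-the-envelope NRE amortization for a hypothetical SoC.
# All dollar figures and volumes are illustrative assumptions,
# NOT actual 28nm/7nm design or mask costs.

def per_unit_cost(nre_dollars, unit_silicon_cost, units_shipped):
    """Total cost per chip = amortized NRE + per-unit silicon/packaging cost."""
    return nre_dollars / units_shipped + unit_silicon_cost

# Hypothetical scenarios: (description, NRE $, per-unit silicon $, units shipped)
scenarios = [
    ("28nm, mid-volume ASSP",  50e6, 5.0,  5e6),
    ("7nm, mid-volume ASSP",  300e6, 8.0,  5e6),
    ("7nm 'unicorn' SoC",     300e6, 8.0, 50e6),
]

for name, nre, silicon, volume in scenarios:
    total = per_unit_cost(nre, silicon, volume)
    print(f"{name:24s} -> ${total:6.2f} per unit "
          f"(${nre / volume:5.2f} of that is amortized NRE)")
```

With these placeholder numbers, the same 7nm NRE adds $60 to every chip at 5 million units but only $6 at 50 million – the kind of gap that keeps everyone except the unicorns on older nodes.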
So, why are we talking about chip dis-integration?
The semiconductor industry is very innovative. There are no challenges that we cannot overcome. So, how do we cope with this trend?
Another reason we are seeing more dis-integration is that some new features are not conducive to integration on the same die because of their specific process requirements – RF, wireless, MRAM, etc. Some functions may need GaAs, GaN or other esoteric processes, while mainstream features will continue to rely on bulk CMOS. We have seen the transition from PolySiON to HKMG to FinFETs, and are now beginning to see the first implementations of EUV. We are not that far from 3nm, where there will be another major technology shift to carbon nanotubes or GAA (gate-all-around) technology.
By and large, I do see some dis-integration going on, but the economics of chip making still favor riding Moore’s Law for another generation or two. In the end, it depends on the end product you are trying to build.