
Tensilica and Design IP Blogs

TomWong
27 Jun 2018

Chip Dis-integration

I was asked the following question recently.

We are no longer seeing increasing amounts of functionality being crammed into chips, except under very special circumstances; chips today are trending toward the leaner side. Is this only when power is a primary concern? Does it apply to all technology nodes or only to larger ones? What about pins – I/O has often been the limiter. What impact will this have on IP? Will this derail any notion of chiplets? Does it impact EDA tools?

So here are my observations and opinions.

Before we debate whether this observation is correct, let’s take a quick walk down memory lane. Remember how this super-integration started? Somewhere around the early 80s, we were designing single-board computers (SBCs) with 7400-family logic chips. Remember those 16-pin, 24-pin and 28-pin DIP (dual in-line package) chips? The NAND, NOR and XOR chips and the D flip-flops. The familiar acronyms in those days were SSI and MSI; we were not quite at LSI yet. Then we saw the arrival of gate arrays, and then cell-based ICs (CBICs). These morphed into ASSPs with the emergence of chipsets for the PC market. This was the genesis of the fabless industry. We witnessed the benefits of Moore’s Law and saw more integration in each subsequent generation of chips, even when the process technology did not change – you just built a larger die because we weren’t stressing the limits of the process technology or die size. Consolidating “discrete logic designs” into single-chip solutions also unleashed an unprecedented desktop PC market that drove chip industry innovation for years to come. This trend lasted for more than 25 years, riding the process technology wave from 2 micron to 90nm. This is when system-on-chip became popular. At 90nm, you really could put a lot more functionality on a chip. You no longer needed a full chipset to build a laptop; the guts of a laptop were just an Intel processor, a Northbridge and a Southbridge chip, and of course DRAMs.

Then we saw the emergence of Palm Pilots and Microsoft Pocket PCs built on highly integrated SoCs, followed by the massive adoption of SoCs in consumer electronics – think DVD players, digital still cameras, digital video cameras and hand-held PCs – and then in networking gear and routers as the industry grew with the emergence of the internet.

Then something happened when we hit 45/40nm. With the arrival of the iPhone, we entered the golden era of super-integration and complex SoCs. And the rest is history. We saw SoCs with heterogeneous architectures, where you have a CPU core as the compute engine, a graphics core for the display engine, a DSP for the modem, and lots of embedded SRAM. This trend is continuing as we see newer and more dedicated functions being added to new chips (we now call them applications processors, or super-computers on a chip). Just look at the latest generation of applications processors in 12nm/10nm/7nm – they now have multi-core CPU engines, multi-core GPUs, dedicated DSPs for audio, dedicated neural network engines for AI, integrated RF, dedicated cellular modems, WiFi, and specialized protocol interfaces for memory, display, camera, networking and wireless, all crammed into a die no bigger than 9.7mm on a side.

Now, the reality . . .

As technology reached 28nm and smaller geometries, we still get the density benefits of Moore’s Law, but for the first time we are concerned about the tradeoffs between performance, power and cost. At sub-28nm, the cost of design has skyrocketed due to process technology complexity. We now have to deal with lithography effects, multi-patterning and FinFET design, among many other technical challenges. Just look at the mask costs for 28nm versus 16nm versus 10nm. Dare we ask how much a 7nm mask set costs? And then there is the proliferation of, and accelerating migration to, newer and higher-speed interface protocols: DDR2/3/4/5, LPDDR2/3/4/4X/5, USB 2/3, PCIe 2/3/4, 28G/56G/112G SerDes, 10G/25G/40G Ethernet, etc.

Unless you have an SoC product that can ship 50+ million units, or you are building (high-value, expensive) high-performance networking chips for data centers, you probably cannot afford to start a design at 7nm. Luckily for the semiconductor industry, there are quite a few “unicorn SoCs” that fit those criteria (applications processors, 4G modems, crypto-mining chips, etc.). The needs of the end market and applications are driving demand for higher and higher performance (optical networking, servers and switches for datacenters, as well as chips for autonomous driving).
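To put some rough numbers behind that volume argument, here is a minimal back-of-envelope sketch in Python. The NRE figures (design, verification, masks, IP licensing) per node are purely illustrative assumptions, not quoted costs; the point is only how quickly per-unit NRE falls as volume climbs toward that 50-million-unit mark.

```python
# Back-of-envelope NRE amortization per unit. The cost figures below are
# purely illustrative assumptions, not quoted foundry or EDA pricing.

def nre_per_unit(nre_total_usd, units_shipped):
    """Spread one-time design/mask (NRE) cost over shipped units."""
    return nre_total_usd / units_shipped

# Hypothetical total NRE (design, verification, masks, IP licensing) by node.
nre_by_node = {
    "28nm": 50e6,    # assumed ~$50M
    "16nm": 120e6,   # assumed ~$120M
    "7nm":  300e6,   # assumed ~$300M
}

for node, nre in nre_by_node.items():
    for volume in (1e6, 10e6, 50e6):
        print(f"{node}: {volume/1e6:.0f}M units -> "
              f"${nre_per_unit(nre, volume):.2f} NRE per chip")
```

With these assumed figures, 7nm NRE adds hundreds of dollars per chip at a million units but only a few dollars at 50+ million units, which is why only very-high-volume or very-high-value products can justify the leading node.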

So, why are we talking about chip dis-integration?

Six things:

  • Escalating costs of adopting next-generation process
  • Costs associated with chasing the next new and higher-speed interface protocols
  • Technical challenges in designing and implementing new protocols and designing with FinFETs
  • Rapid transition from established standards to the next new thing
  • Shorter product life cycles (application processors have a lifecycle of 18 months and a design cadence of 12 months! A new chip every Xmas)
  • Lack of IP and design reuse is driving even higher costs and delaying time to market 

The semiconductor industry is very innovative. There are no challenges that we cannot overcome. So, how do we cope with this trend? 

  • Instead of designing a new chip at every new node, skip one generation. Many products can benefit from this.
  • Instead of doing everything in-house, leverage the industry ecosystem (3rd party IP).
  • For difficult and expensive functions (such as 25G/50G SerDes), design them as chiplets so you can reuse them across two generations by leveraging new packaging technologies such as 2.5D interposers.
  • Don’t migrate the entire complex SoC from one node to the next. Divide and conquer: migrate only the portion of your design that needs the highest performance offered by the next process node. Keep the complex functionality IP that you have spent so much time verifying, continue to use it in the form of chiplets, and utilize advanced packaging such as 2.5D interposers. Maximize your investment before moving to the next node. Remember, the cost of a chip is proportional to die size – a 7x7mm chip will yield better than a 12x12mm chip (a back-of-envelope comparison follows this list). Yes, it is no longer just a technology decision; it is an economic decision! 
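As a rough illustration of why the 7x7mm versus 12x12mm comparison matters economically, here is a minimal sketch using the classic Poisson yield model, Y = exp(-A·D0). The defect density, wafer cost and gross-die approximation below are assumed values chosen only to show the trend, not a foundry calculator.

```python
import math

# Back-of-envelope die-cost comparison using the Poisson yield model
# Y = exp(-A * D0). Defect density and wafer cost are assumed values,
# chosen only to illustrate the trend, not real foundry numbers.

D0 = 0.2               # assumed defect density, defects per cm^2
WAFER_COST = 10_000    # assumed cost of a processed 300mm wafer, USD
WAFER_DIAMETER_MM = 300

def gross_dies_per_wafer(die_side_mm):
    """Common approximation: wafer area / die area, minus an edge-loss term."""
    die_area = die_side_mm ** 2
    wafer_area = math.pi * (WAFER_DIAMETER_MM / 2) ** 2
    edge_loss = math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area)
    return int(wafer_area / die_area - edge_loss)

def cost_per_good_die(die_side_mm):
    die_area_cm2 = (die_side_mm ** 2) / 100.0     # mm^2 -> cm^2
    die_yield = math.exp(-die_area_cm2 * D0)      # Poisson yield model
    good_dies = gross_dies_per_wafer(die_side_mm) * die_yield
    return die_yield, WAFER_COST / good_dies

for side in (7, 12):
    y, cost = cost_per_good_die(side)
    print(f"{side}x{side} mm die: yield {y:.1%}, cost per good die ${cost:.2f}")
```

Under these assumptions the 12x12mm die costs nearly four times as much per good die, even though its area is only about three times larger – exactly the economics that favor keeping each die small and splitting the rest into chiplets.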

Another reason we are seeing more dis-integration is that some new SoC features are not conducive to integration on the same die because of their specific process requirements – RF, wireless and MRAM, for example. Some functions may need GaAs, GaN or other esoteric processes, while mainstream features will continue to rely on bulk CMOS. We have seen the transition from PolySiON to HKMG to FinFETs, and are now beginning to see the first implementations in EUV. We are not that far from 3nm, where there will be another major technology shift to carbon nanotubes or GAA (gate-all-around) technology.

By and large, I see some dis-integration going on. But the economics of chip making still favor riding Moore’s Law for another one or two generations – though that depends on the end product you are trying to build.

Tags: chiplets | IoT | Design IP and Verification IP | moore's law | 2.5D interposer