Author: TomWong, Community Member
Tags: Design IP, IP, LPDDR, PCIe Gen4, MIPI, USB, SerDes

Designing for the Future - Managing the Impact of Moore's Law

15 May 2019 • 3 minute read

Under Moore’s Law, the industry has long assumed that moving from one geometry to the next finer node automatically brings performance gains, and for many years chip designers have leveraged improvements in process technology to get exactly that. Let’s examine whether this assumption is still valid.

At more mature technologies, such as the move from 90nm to 65nm, there were observable and immediate benefits. But at 28nm and below, SoC performance is dictated more by the interconnect (the metal system) than by transistor performance. You have probably noticed that mainstream CPUs for PCs and laptops have hovered between 2GHz and 3GHz, because Moore’s Law scaling can no longer deliver performance gains in terms of clock speed. Something other than migrating to the next finer node has to be done to get more performance, which is when CPU designs went from single core to dual core to quad core. Running devices at a high clock rate can also get you into trouble with heat (thermal issues) and high packaging and cooling costs. Unless you are designing for servers in the datacenter, low power is the most important spec. Even chips used in modern-day datacenters are not designed for performance at all costs: one of the major costs of running a datacenter is electricity, which is why you see mega-scale datacenters located near hydroelectric power plants, where the price of electricity is lower.
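
To see why chasing clock rate runs into heat and cost trouble, here is a back-of-the-envelope sketch using the classic CMOS dynamic-power relation P ≈ α·C·V²·f. The activity factor, capacitance, voltages, and frequencies below are illustrative assumptions, not measurements from any real design.

```python
# Minimal sketch: dynamic switching power P = alpha * C * V^2 * f.
# All numbers below are illustrative assumptions, not silicon data.

def dynamic_power_watts(alpha, cap_farads, vdd_volts, freq_hz):
    """Classic CMOS dynamic-power estimate: activity * capacitance * V^2 * f."""
    return alpha * cap_farads * vdd_volts ** 2 * freq_hz

baseline = dynamic_power_watts(alpha=0.2, cap_farads=1e-9, vdd_volts=0.9, freq_hz=2.0e9)
pushed = dynamic_power_watts(alpha=0.2, cap_farads=1e-9, vdd_volts=1.1, freq_hz=3.0e9)

# Raising the clock usually also means raising the supply voltage,
# so power grows much faster than frequency.
print(f"2 GHz @ 0.9 V -> 3 GHz @ 1.1 V: {pushed / baseline:.2f}x the power for 1.5x the clock")
```

Roughly 2.2x the power for a 1.5x clock bump is one reason designs moved to more cores rather than higher frequencies.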

Fast forward to 2019, when we spend lots of time talking about SoCs for smartphones, drones, ML/AI, crypto mining, cameras, automotive, and more. Let’s take a look at applications processors. The industry has been innovating here for more than 10 years, and all apps processors now have a heterogeneous architecture to achieve the features and performance needed by the end applications. It is not uncommon to see an SoC with a cluster of CPU cores, a cluster of graphics cores, and a cluster of DSPs for audio, communications, video processing, and so on. We are also seeing the adoption of newer protocols to gain performance and reduce power. For example, most smartphones today use LPDDR4-3200, and there is a migration to LPDDR4/4X-4266 to improve performance in the memory subsystem. The MIPI D-PHY interface has moved from v1.1 (1.5Gbps per lane) to v1.2 (2.5Gbps per lane), again to increase system throughput. Similarly, PCIe 2.0 (5GT/s) interfaces have given way to PCIe 3.0 (8GT/s) in SSD controllers and flash storage interfaces. Specialized low-power DSP cores are now being used for audio, video, and baseband applications, and we are seeing the emergence of neural network processors to aid in applications such as facial recognition, object detection, and the slew of computational requirements of autonomous vehicle chips.
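
For a sense of what those interface upgrades buy, here is a quick peak-bandwidth sketch using the per-pin and per-lane rates quoted above. The 32-bit LPDDR channel width is an assumption for illustration only; actual phone SoCs differ in channel count and width.

```python
# Rough peak-bandwidth arithmetic for the interface steps mentioned above.
# The x32 LPDDR channel width is an assumed example; real SoCs vary.

def lpddr_peak_gb_per_s(data_rate_mbps_per_pin, bus_width_bits=32):
    """Theoretical peak bandwidth of one LPDDR channel in GB/s."""
    return data_rate_mbps_per_pin * 1e6 * bus_width_bits / 8 / 1e9

print(f"LPDDR4-3200,  x32: {lpddr_peak_gb_per_s(3200):.1f} GB/s")   # ~12.8 GB/s
print(f"LPDDR4X-4266, x32: {lpddr_peak_gb_per_s(4266):.1f} GB/s")   # ~17.1 GB/s

# Per-lane PCIe throughput after line coding:
# Gen2 runs 5 GT/s with 8b/10b coding, Gen3 runs 8 GT/s with 128b/130b coding.
pcie2_gb_per_s = 5e9 * 8 / 10 / 8 / 1e9
pcie3_gb_per_s = 8e9 * 128 / 130 / 8 / 1e9
print(f"PCIe 2.0 lane: {pcie2_gb_per_s:.2f} GB/s, PCIe 3.0 lane: {pcie3_gb_per_s:.2f} GB/s")
```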

With heterogeneous architectures, you can no longer rely on a simple bus architecture. Case in point: take a look at modern SoCs and see what they have in common. Fabric. They all have a NoC (network on chip) to connect these specialized cores and manage the traffic in these complicated systems. This all started with complex apps-processor SoCs about 10 years ago and is now the standard methodology for designing complex chips, from advanced apps processors to ADAS chips to ML/AI SoCs. This trend will likely continue for many more years.
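
As a purely illustrative toy, the sketch below shows one textbook NoC idea: dimension-ordered (XY) routing on a 2D mesh, where each block sits at a grid coordinate and packets travel along X first, then Y. It is not a model of any particular commercial fabric.

```python
# Toy sketch of dimension-ordered (XY) routing on a 2D mesh NoC.
# Hypothetical coordinates; not a model of any specific fabric.

def xy_route(src, dst):
    """Return the list of mesh hops from src to dst, routing X first, then Y."""
    (x, y), (dst_x, dst_y) = src, dst
    path = [(x, y)]
    while x != dst_x:              # travel along the X dimension first
        x += 1 if dst_x > x else -1
        path.append((x, y))
    while y != dst_y:              # then along the Y dimension
        y += 1 if dst_y > y else -1
        path.append((x, y))
    return path

# e.g. a CPU cluster at (0, 0) sending to a DSP cluster at (3, 2):
print(xy_route((0, 0), (3, 2)))
# [(0, 0), (1, 0), (2, 0), (3, 0), (3, 1), (3, 2)]
```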

On the I/O front, we have witnessed the move to lower signal swing as well as the adoption of differential signaling. This will become a challenge in a few years’ time, because signal swing cannot be scaled indefinitely: you can only lower it so far before the signal-to-noise ratio says you cannot go any further.
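
One way to see that limit is Shannon’s capacity formula, C = B·log2(1 + SNR): as swing drops against a roughly fixed noise floor, SNR falls and so does the data rate a channel can carry. The channel bandwidth and SNR values in the sketch below are hypothetical.

```python
import math

# Illustrative Shannon-capacity sketch: C = B * log2(1 + SNR).
# The 10 GHz channel bandwidth and SNR values are hypothetical examples.

def shannon_capacity_gbps(bandwidth_ghz, snr_db):
    """Upper bound on data rate (Gbps) for a channel of the given bandwidth and SNR."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_ghz * math.log2(1 + snr_linear)

for snr_db in (30, 20, 10, 5):
    print(f"SNR {snr_db:>2} dB -> ~{shannon_capacity_gbps(10, snr_db):.0f} Gbps over a 10 GHz channel")
```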

What we are seeing now is really the end of Moore’s Law as we know it. Innovation will need to come from doing things smarter. We have already eaten all the low-hanging fruit!

