
Paul McLellan

Rob Aitken of ARM Research on System Design

8 Dec 2015 • 2 minute read

I wrote yesterday about the transition going on as system companies discover that they need to do their own semiconductor design if their products are to be differentiated from the competition. To control their destiny, they need to deal with everything from application software down to transistors. At a presentation today, Rob Aitken of ARM Research made an almost identical point (although he went all the way down to process, which is probably a step too far for most system companies, who have to live with what they get from their foundry). Integration is the key, but not all integration approaches are equal. Architecture becomes a question of integrating everything in the right way.

Semiconductor Architecture

One of the major drivers of this need is that everything has a power envelope, and the tradeoffs differ depending on the rest of the system architecture. Rob had a very interesting slide showing what you could do with 100 picojoules. He made the very real point that everything costs energy and that the whole system has to be optimized for what you want to achieve. Computation takes energy. Access to memory takes energy. Transmitting data takes energy. And obviously driving an electric car takes energy, although it is interesting just how much is needed: it takes a lot of femtojoules to go a hundred miles.

100 picoJoules
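To make the point concrete, here is a back-of-envelope sketch in Python. The energy-per-operation constants are illustrative round numbers of roughly the right magnitude, not figures from Rob's slide:

# Back-of-envelope energy budget in picojoules (pJ).
# These constants are illustrative round numbers, not figures from the slide.
PJ_PER_ALU_OP = 1.0         # ~1 pJ for a simple 32-bit arithmetic operation
PJ_PER_DRAM_ACCESS = 100.0  # an off-chip memory access costs around two orders of magnitude more
PJ_PER_BIT_RADIO = 10000.0  # sending a bit over a radio link costs far more again

def task_energy_pj(alu_ops, dram_accesses, radio_bits):
    """Total energy for a task, in picojoules."""
    return (alu_ops * PJ_PER_ALU_OP
            + dram_accesses * PJ_PER_DRAM_ACCESS
            + radio_bits * PJ_PER_BIT_RADIO)

# With a 100 pJ budget you can afford a few dozen arithmetic operations,
# a single memory access, or a tiny fraction of one transmitted bit.
print(task_energy_pj(alu_ops=50, dram_accesses=0, radio_bits=0))   # 50 pJ
print(task_energy_pj(alu_ops=0, dram_accesses=1, radio_bits=0))    # 100 pJ
print(task_energy_pj(alu_ops=0, dram_accesses=0, radio_bits=1))    # 10000 pJ

The lesson of the slide is exactly this imbalance: where the data lives and how far it has to travel matters at least as much as how much you compute on it.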

As silicon has scaled, the type of optimization possible has changed. In the days of single-core processors, the main constraint was fitting everything on the chip, and competition was about frequency. Next we entered the era of multi-core chips, where the key constraint was power and the question was how much throughput was possible. Today we have both multi-core processors and lots of specialized offload processors such as GPUs and audio processors, and the real measure is throughput per joule; otherwise thermal effects mean we face the problem of "dark silicon", where we cannot fire up the whole chip at the same time.

multi-core constraints
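The dark silicon problem falls out of simple arithmetic. The sketch below uses hypothetical numbers for the thermal budget, per-core power, and per-core throughput (they are not ARM's figures) to show why the whole chip cannot run flat out at once:

# Toy "dark silicon" check: how much of a chip can run at once within a
# thermal budget. All numbers are assumed, chosen only to illustrate the
# throughput-per-joule trade-off described above.
THERMAL_BUDGET_W = 5.0            # e.g., a phone SoC's sustainable power
CORES_ON_CHIP = 8
WATTS_PER_CORE_FULL_SPEED = 1.5   # power of one core at maximum frequency
OPS_PER_SEC_PER_CORE = 2e9        # throughput of one core at that frequency

cores_we_can_power = int(THERMAL_BUDGET_W // WATTS_PER_CORE_FULL_SPEED)
active = min(cores_we_can_power, CORES_ON_CHIP)
dark_fraction = 1 - active / CORES_ON_CHIP

throughput = active * OPS_PER_SEC_PER_CORE
power = active * WATTS_PER_CORE_FULL_SPEED
print(f"{active} of {CORES_ON_CHIP} cores active ({dark_fraction:.0%} dark)")
print(f"throughput/joule = {throughput / power:.2e} ops per joule")

With these assumed numbers only three of the eight cores can run at full frequency at once; the rest of the silicon stays dark, which is why lowering energy per operation, rather than raising frequency, is now the way to buy more performance.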

The big challenge is that there is an insatiable demand for more of everything. This is especially true in the smartphone market, where bandwidth and performance go up a lot every year; since the battery is not very large, this is one of the most overconstrained design areas. But even in datacenters, which have huge amounts of power, or in automotive, where batteries are more forgiving, power still drives almost everything.
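A rough budget shows why the phone is the tightest case. The battery capacity and average power draw below are assumed, typical-magnitude values, not measurements:

# Rough battery budget for a smartphone.
# Capacity and average draw are assumed, typical-magnitude values.
BATTERY_WH = 12.0                   # a ~3200 mAh cell at 3.8 V is roughly 12 Wh
joules_available = BATTERY_WH * 3600

avg_power_w = 1.0                   # average draw with screen, radio, and SoC active
hours = joules_available / (avg_power_w * 3600)
print(f"{hours:.1f} hours of use at {avg_power_w} W average draw")

Every extra tenth of a watt of average power comes straight out of those hours of battery life, which is why every block in the system has to justify its share of the energy budget.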

Rob didn't use the phrase System Design Enablement, but he was clearly talking about the same thing: optimizing the entire system concurrently, rather than trying to build systems out of individually optimized building blocks, which leads to a suboptimal global solution.