Paul McLellan


Why Create an SoC?

11 May 2020 • 10 minute read


 I've been in the semiconductor and EDA industries for nearly forty years. One thing I've noticed that hasn't changed is that we love our chips and the tools to design them. Designing a chip that is a perfect match to a unique purpose is the pinnacle of what we do. For some customers, it is exactly what they need, too. But not all.

I'm sure tailors all think that every man should have a bespoke suit. And in the abstract, most men who need a suit would probably agree. But what we actually do is go to Men's Wearhouse or Nordstrom and come home with a suit the same day, for a small fraction of the price of the Savile Row experience.

ASIC

I started my career at VLSI Technology, and we (along with LSI Logic) pretty much invented what came to be called ASIC, which stands for Application-Specific Integrated Circuit. We were actually using the name CSIC, for Customer-Specific Integrated Circuit, which is a much more accurate name. True application-specific chips came along later, and we had to find a new name for those, so we called them ASSPs, for Application-Specific Standard Products.

ASIC was a business model as well as a type of chip. The customer had all the system knowledge and would do the front-end design. The semiconductor company (we didn't call them foundries in those days) had all the semiconductor knowledge and so would do the layout and the manufacturing. Later, system companies took over doing all the design, including the physical design, and this was called COT, for customer-owned tooling (not to be confused with what the military calls COTS, for commercial off-the-shelf). Later still, as these chips got bigger, we started to call them systems-on-chip, or SoCs. I actually think this name obscures a key difference: whether a company is building a chip to sell in the merchant market, or building a chip for itself. From the point of view of an EDA company like Cadence, they are similar chips requiring similar tools. But for an end customer, they are as different as chalk and cheese.

Why Design an SoC?

Cadence's Intelligent System Design is about designing all sorts of systems, not just chips: boards, packages, connectors, cables, antennas, thermal, and more. But it is true that the heart of most modern systems is one or more chips.

(This is not a post specifically about advanced packaging and chiplets. So I'm just going to use the word "SoC" to also cover systems implemented with multiple die in the same package, sometimes called system-in-package or SiP. The only word we seem to have today to cover both SoC and SiP is "system", which is so generic as to be meaningless. And to say "SoC or SiP" for a whole post will annoy both of us.)

This brings me back to my opening point. We love the perfectly optimized chips and we think everyone else's babies are ugly. We think that every company wants to design its own chips, or at least should. But the reality for the customer is that designing its own chip is the last thing it wants to do if it can possibly avoid it. It is incredibly expensive in terms of up-front cost. It requires a team of specialized engineers who need to be recruited. It takes a long time. It has all sorts of risks. It is a very unforgiving endeavor. In my EDA101 talk, I like to point out that designing an SoC is like designing a large passenger aircraft...except that when you finish the design, you press a button, and a few months later an automatic factory delivers the plane and you are expected to load it up with passengers and enter revenue service without changing anything.

Many companies see designing a chip as equivalent to designing an airplane in another way: it is expensive, it is risky, and they don't know how to do it. Much easier to just go and buy a standard product from Boeing or Airbus. Or Broadcom or MediaTek.

When I was VP of Engineering at Ambit, I remember having a meeting with someone senior at Rio. You probably don't remember Rio, but it was the first company to build a successful portable MP3 player. It was FPGA-based, with 32MB (yes, MB, not GB) of memory to hold the music. Rio was selling an order of magnitude more players than in its wildest dreams, and so it was considering turning the FPGA into an ASIC to cut its costs substantially. Of course, I wanted it to do that, and to use Ambit synthesis to do it. Rio really needed a bespoke suit, and I had the sewing machine.

But the reality was that Rio had a single engineering team, so its choice was either to cost-reduce its current design or to do an FPGA-based Mark II design, which is the choice it eventually made. Of course, the iPod came along a few years later and Rio couldn't compete. But note that the iPod was built entirely out of standard products, too. It wasn't until the iPhone 4 that Apple's custom A4 application processor arrived on the scene (there never was an A1, A2, or A3, at least not in a real product).

Many companies make the same decision: they can build a product out of standard products and software. It may not be as elegant as a bespoke SoC, but it's a better fit for what the customer regards as valuable enough to pay for. The Rio guys assumed that customers would rather have a Rio Mark II than a slightly cheaper Rio Mark I. I'm sure iPods cost more than Rios did, so in that sense they were correct.

The Reason to Build an SoC

We in the industry want everyone to build an SoC that fits their needs perfectly. But their needs include cost, ability, schedule, and risk. A lot of what EDA tools do is help minimize those. Verification is such a big percentage of design precisely because it minimizes risk (the plane has to fly). Design tools scale into the cloud to compress the schedule. IP often "reduces" the ability required: most design teams could not design their own DDR5 controller even if they were foolish enough to try. But EDA tools and IP do not reduce the overhead of building an SoC, versus using a standard product, to zero. Savile Row suits are always going to be more expensive than off-the-rack.

If a company is going to design its own SoC, it needs a good reason. And there is really only one: it will create a better product in a way that its customers care about, and the benefit will more than cover the costs.

 I've pointed out before that all the leaders in the smartphone market design their own application processors. And they do have a choice. By the time you get down to the mid-range smartphones, they mostly use merchant chips.

At the investor conference where Tesla introduced its self-driving computer (SDC), Elon Musk said it was cheaper than the NVIDIA GPU in the previous version of its hardware. I'd be willing to bet that is only true if you don't amortize all the design costs across the volume. Tesla built 357K cars last year, so a volume of about 700K chips per year (there are two in each car). That's not nothing, but it's clear the motivation for doing the design was not cost reduction. You can read my original coverage of the SDC in my posts Tesla Drives into Chip Design and HOT CHIPS: The Tesla Full Self-Driving Computer.
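To make the amortization point concrete, here is a back-of-envelope sketch in Python. Only the volume comes from the numbers above; the design cost and the amortization window are purely hypothetical placeholders, picked just to show the shape of the calculation.

    # Back-of-envelope amortization sketch. Only the volume comes from
    # the post (~357K cars/year, two chips per car); the design cost and
    # amortization window are hypothetical placeholders.
    design_cost = 300e6            # hypothetical one-time design cost, dollars
    chips_per_year = 357_000 * 2   # two self-driving chips per car
    years = 3                      # hypothetical amortization window

    per_chip = design_cost / (chips_per_year * years)
    print(f"Amortized design cost per chip: ${per_chip:,.0f}")  # ~$140

Even under these generous assumptions, the amortized design cost adds on the order of a hundred dollars per chip, easily enough to erase a unit-cost advantage over a merchant part.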

I mentioned Amazon/AWS building its own data center chips. This is Project Nitro. Again, it may or may not save some cost, especially once you amortize the design costs, but the motivation for the project was to improve the services that AWS provides (and, presumably, make them more profitable). You can read about Project Nitro in my two posts HOT CHIPS: The AWS Nitro Project and Xcelium Is 50% Faster on AWS's New Arm Server Chip.

Features that Make an SoC Attractive

So what are the features of an SoC that make it attractive, at least sometimes?

Most electronic systems have an enormous software component. Just consider the examples I mentioned: smartphone APs, autonomous driving computers, hyperscale data centers with software-defined everything. So why not just buy a microprocessor and run the software? Typically because it is orders of magnitude too slow, dissipates far too much power, and requires too many additional chips to hold all the peripherals, making the system physically too large. Given the software, the ideal SoC is, in essence, the perfect chip to run that software. Now that processors don't get faster just by waiting for the Moore's Law crank-handle to make another turn, increasingly that means implementing specialized functions on specialized processors rather than just adding more general-purpose cores. This is especially the case for neural-network functionality, for which general-purpose CPUs are especially ill-suited.
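To illustrate (this is not any particular vendor's design), here is the kind of multiply-accumulate kernel that dominates neural-network inference, written as naive Python. A general-purpose CPU grinds through the inner loop a few operations per cycle; a specialized neural-network block on an SoC can execute thousands of these MACs every cycle at a fraction of the power.

    # Naive fully-connected layer: out[j] = sum_i(in[i] * w[i][j]) + b[j].
    # The multiply-accumulate on the inner line is the operation that a
    # specialized NPU parallelizes across thousands of hardware MAC units.
    def dense_layer(inputs, weights, biases):
        outputs = []
        for j in range(len(biases)):
            acc = biases[j]
            for i in range(len(inputs)):
                acc += inputs[i] * weights[i][j]  # one MAC
            outputs.append(acc)
        return outputs

    # Tiny example: 4 inputs, 2 outputs.
    print(dense_layer([1.0, 2.0, 3.0, 4.0],
                      [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7, 0.8]],
                      [0.0, 0.0]))   # [5.0, 6.0]

A real network runs billions of these MACs per inference, exactly the sort of regular, parallel work that justifies dedicated silicon.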

Another big motivation for an SoC is security. Board-level systems are simply less secure: the traces are more accessible, components are more easily replaced or substituted with clones, and so on. Even FPGA-based systems suffer from these problems. I attended a scary presentation at GOMACTech last year (this is basically a conference of government and military managers) where a professor from the University of Florida showed how his team had read out the AES security keys from a Xilinx FPGA of the type widely used in the electronics those managers ran. He didn't even need to decap the chip (dissolve its plastic packaging). This allows the root-of-trust bitstream to be attacked, and either the FPGA reprogrammed or the secure boot of any software compromised.

I'll just mention one other aspect, since this post is getting too long already. Going back to when I was at VLSI, we had a family of gate arrays. Some were seas of logic gates (since one big use for gate arrays was "glue logic" in the era before you could embed a processor in a chip and have enough room left over to do anything else). But most uses required some memory, so we had bases configured with various combinations of logic gates and memory, trying to cover all the right combinations. They were never right. The big advantage of gate arrays is supposed to be that when you place an order, the wafers have already had all the transistors manufactured (the front end of line, or FEOL) and are sitting in the wafer bank; it is only necessary to add the interconnect (the back end of line, or BEOL), which is much faster. But in practice, every array needed its own ratio of gates to memory, so there would not be any wafers sitting waiting for orders. Customers risked ending up with all the disadvantages of gate arrays (bigger and slower) without the compensating advantage (a shorter time from order to delivery). We would then switch them to a cell-based implementation and cut the die size in half.

Well, in the same way, SoCs also all have lots of memories on them, and one of the advantages of designing your own SoC is that the area devoted to memory versus the area devoted to everything else can be optimally partitioned. The alternative is not a gate-array, of course. But it may be a standard product without enough memory. Or too much, but not enough of something else.
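A toy calculation shows why the partition matters. The numbers below are invented purely for illustration: they just show how a standard product's fixed logic/memory split leaves you paying for silicon you can't use, while a custom SoC matches the split exactly.

    # Hypothetical numbers: an application needs 30 mm^2 of logic and
    # 50 mm^2 of SRAM. A standard product fixes the split; a custom SoC
    # can match it exactly.
    def wasted_area(need_logic, need_mem, chip_logic, chip_mem):
        """Silicon paid for but unused, in mm^2 (None if it doesn't fit)."""
        spare_logic = chip_logic - need_logic
        spare_mem = chip_mem - need_mem
        if spare_logic < 0 or spare_mem < 0:
            return None  # the standard product doesn't fit at all
        return spare_logic + spare_mem

    print(wasted_area(30, 50, 60, 55))  # standard product: 35 mm^2 wasted
    print(wasted_area(30, 50, 30, 50))  # custom SoC: 0 mm^2 wasted

The standard product either wastes die area you are paying for or, worse, doesn't have enough of something and doesn't fit the application at all.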

Hammers and Tailors

 There's a well-known saying that "if all you have is a hammer, everything looks like a nail." And if you are a tailor, every man looks like a walking customer for a bespoke suit. And if you are an EDA company, every system company looks like a candidate for an SoC design.

Often, there are great reasons for doing an SoC design, and many areas where only an SoC design will work—mobile phones are not made out of FPGAs for a reason. But many readers of this blog have a hammer. Or a sewing machine.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.