Paul McLellan

Tags: cloud, accelerator, GSA Silicon Summit, GSA, datacenter

Designing for the Cloud

23 Jun 2016 • 4 minute read

At the recent GSA Silicon Summit, there was a panel session on designing for the cloud. The panel was moderated by Linley Gwennap of the Linley Group. The panelists were Ivo Bolsens, the CTO of Xilinx, Ian Ferguson from ARM, and Stephen Pawlowski of Micron.

I think that there are two fundamental questions about the direction of the cloud. One is whether Intel's domination of the processor socket will continue or whether other architectures, in particular ARM, will get more than a toehold. The other is whether accelerators of some sort will become an important factor in the market.

Linley opened with his survey of the market. Many applications can benefit from hardware accelerators, but Intel doesn't provide them, although, with the Altera acquisition, Intel may provide some FPGA-based solutions in the future. He said that new workloads are coming with a different balance between CPU, memory, and I/O. Within a couple of years, he expects that ARM® partners will have processors with performance competitive with Intel (today, the Xeon E5 defines the mainstream; the E7 is too expensive except for a few applications).

Ivo said that the world is changing with major new datacenter workloads such as video transcode, machine learning, NFV, in-memory databases, neural network training, and more. These workloads have different compute requirements, with less emphasis on heavy-duty floating point. Regular cache memory requires very structured access to data, so we will also need new memory architectures. Accelerators are going from being slaves to peers. One important standard is CCIX, the Cache Coherent Interconnect for Accelerators, which was recently announced. It already has backing from industry heavyweights such as AMD, ARM, Huawei, IBM, Mellanox, Qualcomm, and, of course, Xilinx.
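
To make the "slave versus peer" distinction concrete, here is a minimal toy sketch in Python. Everything in it (the ToyAccelerator class and its method names) is invented purely for illustration; it is not the CCIX API or any vendor's interface. The point is simply the difference in data movement between the two models.

```python
import numpy as np

class ToyAccelerator:
    """Stand-in for an offload device; the 'kernels' here just run on the host CPU."""

    def __init__(self):
        self.device_memory = {}

    # --- "slave" model: separate device memory, explicit staging ---
    def alloc(self, name, shape):
        self.device_memory[name] = np.empty(shape)

    def copy_to(self, name, host_array):
        self.device_memory[name][:] = host_array   # host -> device copy

    def run_on_device(self, name):
        self.device_memory[name] *= 2               # kernel works on the device copy

    def copy_from(self, name):
        return self.device_memory[name].copy()      # device -> host copy

    # --- "peer" model: coherent shared memory, no staging copies ---
    def run_in_place(self, host_array):
        host_array *= 2                              # kernel sees the host buffer directly

data = np.arange(4, dtype=float)
acc = ToyAccelerator()

# Slave-style offload: three extra data-movement steps around the kernel.
acc.alloc("buf", data.shape)
acc.copy_to("buf", data)
acc.run_on_device("buf")
print(acc.copy_from("buf"))

# Peer-style offload: the kernel operates on the same buffer the CPU uses.
acc.run_in_place(data)
print(data)
```

In the slave model, the host spends three extra steps moving data around the kernel; in the peer model, cache coherence lets the accelerator work directly on the buffer the CPU already owns.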

Ian from ARM said his plan was to "be brief, be brilliant, be gone." He pointed out that ARM is okay with compromising on single-thread performance for non-linear gains in energy efficiency. But really it is about the whole system. Once the CPU is "good enough" then the most important areas are memory, I/O, and on-chip hardware accelerators.

Of course, the big question with ARM in the server space is, "Are we there yet?" He said boxes are now being deployed. China is leading other geographies, which is one reason that it may seem from afar that little is happening. There are other silicon platforms, not yet announced, that are in the hands of evaluators. But it does take work. You can't just take a system written for Intel and drop it onto a 64-core ARM server; the software ecosystem needs to move from ported to optimized.

Steve, from Micron, was the day's memory expert. He pointed out that new memory technologies are rare and need overwhelmingly compelling value to be introduced in high volume. There is currently a focus on RRAM (resistive RAM) due to DRAM scaling concerns, but DRAM scaling will continue, although latency will not improve. There is increasing focus on total bandwidth. He didn't really talk about Micron's 3D XPoint technology (jointly developed with Intel), which has the potential to disrupt the traditional memory hierarchy.

China is a big part of the cloud story. Linley pointed out that three of the seven largest datacenters in the world are in China. China's push to become more self-sufficient in semiconductors (the country spends more importing semiconductors than it does importing oil, an amazing statistic) means that western companies need to partner. IBM is active there with OpenPOWER, AMD is licensing x86, and ARM is active too. It probably isn't all going to come down to the CPU architecture.

Ivo was asked what it would take to get accelerators deployed. He said that Microsoft is saying 30% of servers will be equipped with accelerators. There's lots of development going on with libraries and frameworks to make it all seamless, so that the same code runs with or without the accelerator (obviously a lot faster with it). Google has just introduced a neural network chip, the TPU. The figure of merit for a machine-learning platform is just not the same as for a regular CPU.
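
Here is a minimal sketch of that "same code with or without the accelerator" idea, assuming a hypothetical library with a single dispatching entry point. The names accelerator_available and matmul_on_accelerator are invented for illustration, not any real framework's API.

```python
import numpy as np

def accelerator_available():
    """Placeholder probe; a real framework would query its runtime or driver here."""
    return False   # pretend this machine has no accelerator attached

def matmul_on_accelerator(a, b):
    """Stand-in for an offloaded kernel (FPGA, GPU, or a neural-network chip)."""
    raise NotImplementedError("no accelerator in this sketch")

def matmul(a, b):
    """The one entry point the application calls; the library decides where it runs."""
    if accelerator_available():
        return matmul_on_accelerator(a, b)   # fast path when hardware is present
    return a @ b                             # functionally identical CPU fallback

a = np.random.rand(64, 64)
b = np.random.rand(64, 64)
c = matmul(a, b)   # identical application code with or without an accelerator
print(c.shape)
```

The application only ever calls matmul(); whether the work lands on an accelerator or falls back to the CPU is a library decision, which is exactly what those frameworks are trying to make seamless.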

Steve said that, as a memory guy, he loves machine learning: it really pushes memory performance, and it is one of the things that will push towards special capabilities. Hybrid Memory Cube (HMC) is starting at the high end and will migrate into the mainstream as costs come down.

What is clear is that the hardware is going to get more complex. You can't just get a speedup by waiting a couple of years for Moore's Law to deliver faster processors; that era is over. But that means that programming is going to get harder and that semiconductor companies are going to have to invest more in software. Even things like microcontrollers are getting more complex because they are hooked back into the cloud, rather than being standalone and independent.

On IoT, everyone agreed that the money will be in the services (over-the-air updates, device provisioning, delivering services securely). For example, Joe Costello's company Enlighted replaces all your lightbulbs, optimizes energy use, and is paid a share of the savings on the energy bill. Something like that is more likely to be the model than a company going out and purchasing hundreds of smart lightbulbs and linking them back to its own server farms.

One thing that nobody mentioned was RISC-V. I think that there is a good possibility that it could become a significant force in future datacenters. Imagine if Facebook or Google (or Facebook and Google) said they were standardizing on RISC-V. Or their Chinese counterparts like Tencent and Baidu were. If you don't know much about RISC-V now, I think you will keep hearing more and more about it. You can get an introduction from my post RISC-V—Instruction Sets Want to Be Free.
