At the recent Linley Spring Microprocessor Conference, Linley Gwennap kicked off with the opening keynote on what is clearly the biggest thing to hit processors in a long time: deep learning.
Linley started with an overview of deep learning and the latest trends. I'm going to skip that since I've covered the basics in many posts already. Start here if you need to get up to speed:
Even if you don't have time to read these, a quick summary would be that:
You probably already know that NVIDIA and Intel dominate in the datacenter. Intel has added AI-boost instructions to the new Cascade Lake that triple performance, at least for things like 8-bit arithmetic. Intel presented that processor at the conference, and I'll cover it in another post. Or, if you want a preview, Intel gave some details about it at HOT CHIPS last summer, which I covered in Intel's Cascade Lake: Deep Learning, Spectre/Meltdown, Storage Class Memory.
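The post doesn't spell out how those instructions work, but the rough idea behind Cascade Lake's AVX-512 VNNI extension is that each 32-bit accumulator lane absorbs four 8-bit products in a single instruction, replacing a longer multi-instruction sequence on earlier cores. A minimal, non-saturating sketch of one lane of the VPDPBUSD-style operation (my illustration, not Intel's definition):

```python
def vpdpbusd_lane(acc: int, a: list[int], b: list[int]) -> int:
    """acc += dot(a, b) for one 32-bit lane: four unsigned-8-bit values in a,
    four signed-8-bit values in b, accumulated as a 32-bit integer."""
    assert len(a) == len(b) == 4
    assert all(0 <= x <= 255 for x in a)      # unsigned 8-bit inputs
    assert all(-128 <= x <= 127 for x in b)   # signed 8-bit inputs
    return acc + sum(x * y for x, y in zip(a, b))

# One fused step: 10 + (1*5 + 2*-6 + 3*7 + 4*-8)
print(vpdpbusd_lane(10, [1, 2, 3, 4], [5, -6, 7, -8]))  # -8
```

Doing four multiply-accumulates per lane per instruction, rather than one, is where a roughly 3x speedup for 8-bit inference can come from.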
Today, the most popular training option is the NVIDIA V100 Volta, a GPU whose tensor cores deliver 125 Tflop/s (FP16) at 300W. For inference, the NVIDIA T4 is a PCIe accelerator with integer-optimized cores, delivering an estimated 80 TOPS of 8-bit arithmetic at 70W.
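A back-of-envelope calculation from those figures shows why the inference part looks so different from the training part (peak datasheet numbers, not measured throughput, and FP16 flops aren't directly comparable to INT8 ops):

```python
# Figures quoted above: V100 for FP16 training, T4 for INT8 inference.
v100_tflops, v100_watts = 125.0, 300.0
t4_tops, t4_watts = 80.0, 70.0

v100_eff = v100_tflops / v100_watts   # ~0.42 TFLOPS/W (FP16)
t4_eff = t4_tops / t4_watts           # ~1.14 TOPS/W (INT8)
print(f"V100: {v100_eff:.2f} TFLOPS/W, T4: {t4_eff:.2f} TOPS/W")
```

On raw peak-per-watt, the inference-tuned T4 comes out well ahead, which is exactly the tradeoff low-precision integer arithmetic buys you.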
But new challengers are emerging.
In addition, cloud providers have their own:
The general trend is to move AI to the edge. Today, products like Alexa and Siri send a compressed voice recording up to the cloud for voice recognition and natural language processing. But, especially in aggregate, there is a huge amount of processing power at the edge, and it makes sense to use that rather than adding more cloud capacity. It also reduces latency, works offline, and is better from a privacy point of view.
High-end smartphones have AI capability:
This AI inference is creeping down into both mid-range phones and also IoT devices such as voice assistants, smart security cameras, and drones. In particular:
One particular type of "edge device" is the automobile. (I won't cover trends in automotive here, since I have covered them repeatedly in the last few years. If you need a kick-start, begin with Automotive Summit: The Road to an Autonomous Future.) The current competitors in the merchant market are NVIDIA and Intel.
It was too late to get a mention at the conference, but Tesla recently announced their FSD processor SoC. See my recent post Tesla Drives into Chip Design.
If you have been reading Breakfast Bytes regularly, you already knew that. But this post contains a lot of detail about who the players are and what their current (and in some cases, future) offerings are.
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.