
Paul McLellan


Linley Spring Processor Conference—In Your Own Living Room

31 Mar 2020 • 4 minute read


The annual Linley Spring Processor Conference is coming up next week. It was planned to be in the Santa Clara Hyatt as always, spread over two full days. Well, that's obviously not happening. Instead, the event is going virtual and is spread over four mornings, from Monday, April 6 to Thursday, April 9. Each day starts at 9:00am and ends soon after 1:00pm (the exact end time varies from day to day).

The format is roughly the same as normal, with what would have been the first day spread over Monday and Tuesday mornings, and what would have been the second day spread over Wednesday and Thursday mornings. The most noticeable change, apart from it being virtual, is that each day ends with an hour-long breakout session with all the speakers from that morning. I'm not quite sure how that's going to work, but I assume we can ask them all questions from our living room couches to theirs. The plan is that each speaker will be assigned a separate "meeting room".

Keynotes

As usual, there are two keynotes. These would normally have opened the two days. With this format, they open the sessions on Monday and Wednesday.

The conference kicks off, as usual, with Linley Gwennap presenting The Next Generation of AI Processors. The hottest area in the semiconductor market in general, and in processors in particular, is AI: specifically, specialized processors for training neural networks and specialized processors for inference at the edge. AI training was initially done on server CPUs, but GPUs turned out to be great for it too, since graphics and neural network training both come down to doing a lot of matrix operations as fast as possible. That is changing: GPUs are not the optimal solution in the data center, and they are too costly in silicon area and power dissipation for most edge devices. Linley's abstract is:

In the data center, new architectures are emerging to challenge the GPU's dominance in AI training and inference. Embedded systems such as smart cameras, smart vehicles, and smart robots require powerful accelerators. AI accelerators are even moving into IoT and smart-home devices, running simple neural networks on milliwatts of power. This presentation will describe the latest trends in AI acceleration while addressing how these accelerators are used across this range of end applications.

 The "second day" keynote is always an invited industry luminary. This year it opens Wednesday morning. The presenter is Geoffrey Burr of IBM with a presentation titled Advancing Broad AI with Algorithms and Architectures for Digital and Analog AI Acceleration.

The rapidly evolving field of AI-hardware accelerators has led to a proliferation of approaches. Most use commercial CMOS technology, and to scale performance, flexibility and accuracy are often traded off. We will present results from our co-development of algorithms and accelerators, including materials and technology innovations, that achieve large gains in performance without degrading flexibility or accuracy. This approach is central to enabling the transition from single-task single-mode "narrow" AI workloads to multi-modal multi-task "broad" AI.

Sessions

The rest of the four mornings is split into presentations grouped by topic area. I won't reproduce the entire schedule here; there's a link at the end of this post. The topic areas are:

  • Monday: AI for Ultra-Low Power Applications (starting after Linley's keynote)
  • Tuesday: Accelerating AI and Other Embedded Workloads
  • Wednesday: 5G and AI at the Network Edge (after Geoff's keynote)
  • Thursday: Data Center Processors and Accelerators, followed by Processor Technology

Cadence

 Cadence's Yipeng Liu is presenting on Tuesday from 10:40am to 11:00am. Her presentation is titled Efficient Machine Learning on DSPs Using TensorFlow.

Historically, neural network models have often been hand-coded to fit within the power and memory constraints of edge devices. But TensorFlow (and TensorFlow Lite) can provide a more automated path: models feed directly into the Tensilica compilation flow, which, together with the Tensilica neural network libraries, forms an end-to-end framework. Yipeng will walk through the details of building a keyword-detection model and deploying it on Cadence's industry-leading Tensilica HiFi DSP using this toolchain.
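To give a flavor of the kind of flow Yipeng will be describing: the standard first step in deploying a TensorFlow model to a constrained device is converting it to TensorFlow Lite with quantization. The sketch below is my own illustration using the public TensorFlow Lite API, not Cadence's actual toolchain or model; the tiny model, its input shape, and the 12-keyword output are hypothetical stand-ins for a real keyword-detection network.

```python
import tensorflow as tf

# Hypothetical stand-in for a keyword-detection model: input shaped like
# a spectrogram/MFCC feature map, output over 12 keyword classes.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(49, 10, 1)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(12, activation="softmax"),
])

# Convert to TensorFlow Lite with default (dynamic-range) quantization,
# shrinking the model before it is handed to a DSP-side compiler.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

# The flatbuffer is what a downstream embedded toolchain would consume.
with open("kws_model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"Converted model: {len(tflite_model)} bytes")
```

From there, a vendor flow such as Cadence's maps the TFLite operators onto optimized DSP kernels; the conversion step itself is the same regardless of target.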

Attend

Full details of the conference, including both a detailed agenda and a link to register (it's free for qualified attendees) are on the Linley Group's website. There are still sponsored breaks in the middle of each morning. I was hoping the CEOs of the sponsoring companies were going to show up at my front door with coffee and bagels. But apparently, the sponsors of the original breaks will be able to display stuff on our screens while we make our own coffee.

I plan to attend, so you can expect some Breakfast Bytes posts on the conference in the week or two afterward.

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.