PaulaJones
Tags: architecture, Vision C5, Tensilica, vision, dnn, CNN, neural nets, embedded

What Will It Take to Bring DNN to Embedded?

12 Jul 2017

If you missed Michelle Mao’s presentation at the recent AutoSens conference in Detroit, “What Will It Take to Bring DNN to Embedded?”, you missed an important look at four fundamental things designers can do to lower the power budget and bring deep neural networks (DNNs) to embedded systems:

  • Optimize the network architecture
  • Optimize the problem definition
  • Minimize the number of bits per computation
  • Use optimized DNN hardware
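
The third point, minimizing the number of bits per computation, is commonly done through quantization: weights trained in 32-bit floating point are mapped to narrow integers for inference. As a rough illustrative sketch (not code from the talk), here is a symmetric 8-bit quantization of a weight tensor, which cuts storage 4x and lets the hardware use cheap integer multipliers:

```python
import numpy as np

def quantize_int8(weights):
    """Map float32 weights to int8 with a symmetric per-tensor scale."""
    scale = float(np.abs(weights).max()) / 127.0  # largest magnitude maps to +/-127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(256).astype(np.float32)
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
# Rounding to the nearest of 255 levels bounds the error by half a step (s/2),
# which is why 8-bit inference often loses little accuracy in practice.
```

In a real embedded flow the scale would typically be chosen per layer (or per channel) from calibration data rather than from the raw weight range, but the principle is the same.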

Paul McLellan, Cadence’s Breakfast Bytes blogger, wrote two in-depth posts on her talk. The first, “CactusNet: One Network to Rule Them All,” introduces CactusNet, Cadence’s state-of-the-art CNN benchmark optimized for embedded applications and used to optimize DNNs. There is too much to cover here, so please read Paul’s post.

His second post, “CactusNet: Moving Neural Nets from the Cloud to Embed Them in Cars,” discusses how to optimize the problem definition, minimize the number of bits per computation, and use an optimized DNN architecture. Again, this is really worth reading, with lots of meaty information.
