Paul McLellan

Embedded Neural Network Summit—How to Build a Silicon Brain

19 Jan 2016 • 2 minute read

On February 9, Cadence is hosting an all-day Embedded Neural Network Summit, with a focus on "Extending Deep Learning into Mass-Market Silicon." It will take place in the Building 10 auditorium on the Cadence campus at 2655 Seely Avenue, San Jose.

Neural networks can be used to process large amounts of data to make intelligent decisions in various application areas, notably image recognition, pattern recognition, speech recognition, natural language processing, and video analysis. The application areas for convolutional neural networks (CNNs) are growing in the mobile, automotive, consumer, and IoT segments. Google, Facebook, and others use the technology to recognize faces and other objects in photos.

A neural network consists of a number of artificial “neuron” circuits, with the inputs of some neurons connected to the outputs of others. As in a real neuron, the connections have weights associated with them. The values of the weights are typically determined during a training process and then used during the recognition process to perform the actual identification. A CNN is a special case of a general neural network, inspired by the visual cortex in the brain. It consists of a number of layers, each of which receives input from a small part of the previous layer (or, for the first layer, the image) and extracts primitive features such as edges and corners that can then be combined into more complex features. A number of these convolutional layers are followed by fully connected layers that perform the classification itself. The process of training and using a neural network is sometimes known as deep learning or deep machine learning.

[Figure: Typical block diagram of a CNN]
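To make the structure concrete, here is a minimal sketch in Python/NumPy of the two kinds of layers just described: a convolutional layer that scans a small window over its input, and a fully connected layer that does the classification. The sizes, names, and the example edge-detection kernel are illustrative assumptions, not anything specific to the summit or to any particular hardware.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small kernel over the image; each output value is a
    weighted sum (multiply-accumulate) of a local neighborhood."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return np.maximum(out, 0)  # ReLU non-linearity

def fully_connected(features, weights, bias):
    """Every input feature connects to every output neuron."""
    return weights @ features + bias

# Toy example: an 8x8 "image", a 3x3 edge-detecting kernel, and a
# fully connected layer producing scores for two classes.
rng = np.random.default_rng(0)
image = rng.random((8, 8))
edge_kernel = np.array([[-1, 0, 1],
                        [-2, 0, 2],
                        [-1, 0, 1]])            # classic vertical-edge filter
feature_map = conv2d(image, edge_kernel)        # 6x6 feature map
flat = feature_map.ravel()                      # 36 features
W = rng.random((2, flat.size))
b = np.zeros(2)
print("class scores:", fully_connected(flat, W, b))
```

In a real CNN the kernels are not hand-chosen like the edge filter above; their weights are learned during training, exactly as the paragraph describes.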

For example, Google's image-recognition algorithm is described as follows:

The software is based on the structure of biological brains, and it is trained by being shown millions of images. It constantly adjusts until it is able to accurately recognize, say, a schnauzer or a stove. Information will filter from neuron layer to neuron layer until it reaches the final layer and delivers its response.

The focus of the day will be on on-chip neural networks. There are a number of ways to implement CNNs, ranging from networks of analog cells that are much closer to real neurons, up to programs running on general-purpose computers (or many of them). But for embedded applications, neither of those extremes is ideal. What is needed is a special-purpose core (or cores) that gives the best tradeoff between the flexibility of software and the power/performance of custom silicon. The requirement is billions of multiply-accumulate operations (MACs) per second, together with high memory bandwidth and low power.
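A quick back-of-envelope calculation shows where those billions of MACs come from: each output value of a convolutional layer is a dot product over a small window across all input channels. The layer dimensions below are illustrative assumptions (one 3x3 layer with 64 input and 64 output channels at 224x224 resolution), not figures from the summit.

```python
def conv_macs(out_h, out_w, out_ch, k_h, k_w, in_ch):
    """Each output value is a dot product of k_h * k_w * in_ch terms."""
    return out_h * out_w * out_ch * (k_h * k_w * in_ch)

# One 3x3 layer, 64 input and 64 output channels, 224x224 output:
per_frame = conv_macs(224, 224, 64, 3, 3, 64)
fps = 30
print(f"{per_frame:,} MACs per frame")          # ~1.85 billion
print(f"{per_frame * fps / 1e9:.1f} GMAC/s")    # ~55 GMAC/s for one layer
```

And that is just one layer; a full network stacks many of them, which is why dedicated MAC-heavy cores rather than general-purpose processors are attractive for embedded use.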

Speakers at the summit will include:

  • Jeff Bier, BDTI president and founder of the Embedded Vision Alliance
  • Bill Dally, NVIDIA chief scientist and SVP of research, Stanford professor
  • Sumit Gupta, IBM vice president, HPC and OpenPOWER
  • Sumit Sanyal, minds.ai founder and CEO
  • Pete Warden, Google staff research engineer
  • Michael Leventhal, Xilinx technical manager, data center acceleration
  • Anshu Arya, MulticoreWare solution architect
  • Samer Hijazi, Cadence engineering director, CTO office
  • Chris Rowen, Cadence CTO, IP group
  • Panel session

Register for the Embedded Neural Network Summit.