On February 9, Cadence is hosting an all-day Embedded Neural Network Summit, with a focus on "Extending Deep Learning into Mass-Market Silicon." It will take place in the Building 10 auditorium on the Cadence campus at 2655 Seely Avenue, San Jose.
Neural networks can process large amounts of data to make intelligent decisions in various application areas, notably image recognition, pattern recognition, speech recognition, natural language processing, and video analysis. The application areas for convolutional neural networks (CNNs) are growing in the mobile, automotive, consumer, and IoT segments. Google, Facebook, and others use the technology to recognize faces, objects, and other content in photos.
A neural network consists of a number of artificial “neuron” circuits, with the inputs of some neurons connected to the outputs of others. As in a real neuron, the connections have weights associated with them. Typically, the values of the weights are determined during a training process and then used during the recognition process to actually perform identification. A CNN is a special case of a general neural network, inspired by the visual cortex in the brain. It consists of a number of layers, each receiving input from a small part of the previous layer (or of the image, for the first layer), that can extract primitive features such as edges and corners, which can then be combined into more complex features. A number of these convolutional layers are followed by fully connected layers that perform the classification itself. The process of training and using such a network is sometimes known as deep learning or deep machine learning.
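To make the structure concrete, here is a minimal sketch in Python/NumPy of the forward pass just described: a convolution extracting features from a small patch of the input, an activation, and a fully connected layer producing class scores. The kernel and weights here are illustrative stand-ins; in a real CNN they would be learned during training.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2D convolution: each output value is a weighted sum
    (a series of multiply-accumulates) over a small input patch."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# Hypothetical 3x3 kernel that responds to edges; a trained CNN
# would learn these weights rather than have them hand-set.
edge_kernel = np.array([[-1, -1, -1],
                        [-1,  8, -1],
                        [-1, -1, -1]], dtype=float)

image = np.random.rand(8, 8)                          # stand-in grayscale image
features = np.maximum(conv2d(image, edge_kernel), 0)  # ReLU activation

# A fully connected layer then maps the flattened feature map
# to scores for (here) 3 hypothetical classes.
fc_weights = np.random.rand(3, features.size)
scores = fc_weights @ features.flatten()
```

A real network stacks many such convolutional layers, each with many kernels, before the fully connected classifier.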
For example, Google's image-recognition algorithm is described as follows:
The software is based on the structure of biological brains, and it is trained by being shown millions of images. It constantly adjusts until it is able to accurately recognize, say, a schnauzer or a stove. Information will filter from neuron layer to neuron layer until it reaches the final layer and delivers its response.
The focus of the day will be on on-chip neural networks. There are a number of ways to implement CNNs, ranging from networks of analog cells that are much closer to real neurons, up to programs running on general-purpose computers (or lots of them). But for embedded applications, neither of those solutions is ideal. What is needed is a special-purpose core (or cores) that gives the best tradeoff between the flexibility of software and the power/performance of custom silicon. The requirement is billions of multiply-accumulates (MACs) per second, plus high bandwidth and low power.
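A quick back-of-envelope calculation shows where the billions of MACs come from. The layer dimensions below are illustrative assumptions, not figures from the summit: a single 3x3 convolutional layer on a 224x224 feature map, processing video at 30 frames per second.

```python
# Illustrative (assumed) dimensions for one convolutional layer
out_h, out_w = 224, 224   # output feature-map size
in_ch, out_ch = 3, 64     # input and output channels
k = 3                     # 3x3 kernel
fps = 30                  # video frame rate

# Each output value needs k*k MACs per input channel,
# for every output channel and every output position.
macs_per_frame = out_h * out_w * in_ch * out_ch * k * k
macs_per_second = macs_per_frame * fps

print(f"{macs_per_frame:,} MACs per frame")       # 86,704,128 MACs per frame
print(f"{macs_per_second / 1e9:.1f} GMAC/s")      # 2.6 GMAC/s
```

That is roughly 2.6 billion MACs per second for just one early layer; a full network with dozens of layers quickly pushes the total well past what a general-purpose embedded CPU can sustain within a mobile power budget.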
Speakers at the summit will include:
Register for the Embedded Neural Network Summit.