At the recent MWC Barcelona, the conference fka Mobile World Congress, Cadence had a booth, as usual. Our focus was on using Tensilica for various applications relevant to the customers who attend MWC. Nobody wanders into our booth looking for static timing tools. So what are these applications?
Before I get to the applications, let me talk about how we run demos on Tensilica. We have three platforms:

- a Xilinx FPGA-based board, used when a design adds instructions that are not in any existing silicon
- an SoC-based board from Dreamchip
- an SoC-based board from HiSilicon
Pictures of these boards appear in the descriptions of the demos that were running on them.
Alango provides software for voice enhancement. It takes input from an array of microphones, which is processed on a Tensilica HiFi 3 DSP. The little black components on the circular board are microphones, although only a single ring of them is used at a time. These are the same type of microphone arrays as in devices like Amazon Echo or Google Home. The array allows the direction of arrival to be detected, which improves the ability to suppress noise and echo and deliver a clean signal.
Compared with a single microphone, the technology works over a greater distance and in noisier environments, and delivers better results.
The demo was set up with an automated barista with which you could have a natural conversation and order a coffee. Unfortunately, there was no coffee machine connected, so you could see that the system had correctly recognized your order, but it never delivered the coffee.
Nestwave was showing an enhanced GPS (or, more generically, GNSS). You almost certainly have some form of GPS in your phone, used for navigation and other location-based services. Your phone is always on, and so its fix can gain accuracy over time. IoT devices have a different problem: they spend most of their time asleep, and when they wake, they need a location fix as quickly as possible so that they can do whatever it is they do and then return to deep sleep to preserve battery power. That is where Nestwave comes in. Their technology, based on a Tensilica Fusion F1 DSP with additional TIE instructions, delivers a fix in 200ms and uses less than 10% of the power of competing solutions.
Their demo was running on a Xilinx board rather than one of the SoC-based boards, since the additional instructions obviously can't be implemented after the fact on an SoC. It also achieves breakthrough indoor accuracy, using novel multipath mitigation and machine learning. The design has been created to use hybrid geolocation, triangulating on network base stations and WiFi routers as well as satellite-based location.
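As a rough illustration of the hybrid-geolocation idea, here is a minimal sketch of a least-squares position fix from range measurements to known anchors (base stations or WiFi routers standing in for satellites). The anchor layout and the Gauss-Newton solver are my own illustrative assumptions, not Nestwave's algorithm.

```python
import numpy as np

# Known anchor positions (e.g. cell towers / WiFi APs), in meters -- illustrative.
anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([30.0, 70.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)  # measured distances (noise-free here)

def trilaterate(anchors, ranges, iters=20):
    """Least-squares position fix from range measurements (Gauss-Newton)."""
    pos = anchors.mean(axis=0)                  # initial guess: centroid of anchors
    for _ in range(iters):
        diffs = pos - anchors                   # (n, 2) vectors anchor -> guess
        dists = np.linalg.norm(diffs, axis=1)   # predicted ranges from the guess
        residuals = dists - ranges
        jac = diffs / dists[:, None]            # Jacobian of range w.r.t. position
        step, *_ = np.linalg.lstsq(jac, residuals, rcond=None)
        pos = pos - step
    return pos

fix = trilaterate(anchors, ranges)              # converges to (30, 70)
```

In practice the ranges are noisy and come from heterogeneous sources (cellular, WiFi, satellite), so a real solver weights measurements by their reliability rather than treating them equally as this sketch does.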
Sonarax uses a Tensilica HiFi 3 to do ultrasonic networking. This uses commodity speakers and microphones, not special ultrasonic transducers. The advantage of this style of networking is that it doesn't require setup or special equipment, and it can work where other networks are not available. The ultrasonic approach works in a noisy environment (such as MWC...trade shows are not quiet), and it is compatible with any device that has a microphone and speaker.
The demo used a normal Android phone and was transmitting keystrokes as they were typed, even from outside the demo room and with music being played to create even more background noise.
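One common way to send data over commodity speakers and microphones is frequency-shift keying in the near-ultrasonic band, where audible background noise has little energy. The sketch below is a toy round-trip of that general idea; the frequencies, symbol length, and noise level are my assumptions, not Sonarax's actual modulation scheme.

```python
import numpy as np

SAMPLE_RATE = 48000          # commodity audio hardware rate
F0, F1 = 18000.0, 19000.0    # near-ultrasonic tones for bit 0 / bit 1 (assumed)
SYMBOL_LEN = 480             # samples per bit (10 ms)

def modulate(bits):
    """Encode a bit string as a sequence of near-ultrasonic FSK tones."""
    t = np.arange(SYMBOL_LEN) / SAMPLE_RATE
    return np.concatenate(
        [np.sin(2 * np.pi * (F1 if b == "1" else F0) * t) for b in bits])

def tone_energy(frame, freq):
    """Energy of `frame` at `freq`, via correlation with a complex exponential."""
    t = np.arange(len(frame)) / SAMPLE_RATE
    return abs(np.sum(frame * np.exp(-2j * np.pi * freq * t)))

def demodulate(audio):
    """Recover bits by comparing per-symbol energy at the two tone frequencies."""
    bits = []
    for i in range(0, len(audio), SYMBOL_LEN):
        frame = audio[i:i + SYMBOL_LEN]
        bits.append("1" if tone_energy(frame, F1) > tone_energy(frame, F0) else "0")
    return "".join(bits)

# Round-trip a byte through the "air channel", with added broadband noise
# standing in for trade-show background music.
rng = np.random.default_rng(1)
payload = "01101001"
tx = modulate(payload)
audio = tx + 0.5 * rng.standard_normal(len(tx))
```

Because the decision compares energy at two well-separated tones, the decoder tolerates substantial broadband noise, which is consistent with the demo working on a noisy show floor.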
Cadence showed a number of AI demos covering different forms of inference at the edge. One was running an image recognition algorithm. You can see the setup above, with the camera looking at the screen on the laptop. It is not just identifying objects in the image but also their context. It does this by running two networks: an Inception v3 for feature extraction, and an LSTM-based RNN for attention over the image. It is optimized with a fixed-point implementation using both 8- and 16-bit values. The end result is a description like "a man eating a hot dog in a bun" or "a woman in a kitchen with a refrigerator".
This was running on the Dreamchip SoC board.
Another demo showed off the Cadence toolchain for compiling neural networks into code for the Tensilica processor, using the HiSilicon board. The flow starts from one of the standard NN frameworks such as Caffe; the network is taken through the Tensilica Neural Network Compiler, and the resulting code is run on the board, in this case the HiSilicon one shown below.
Plus, I filmed that week's Sunday Brunch Video Version at the booth (although I completely forgot to say where I was):
Sign up for Sunday Brunch, the weekly Breakfast Bytes email.