Paul McLellan

Community Member


MWC: Voice Enhancement, GPS, Ultrasound, and More

12 Mar 2019 • 4 minute read

At the recent MWC Barcelona, the conference formerly known as Mobile World Congress, Cadence had a booth, as usual. The focus of what we showed was using Tensilica processors for various applications relevant to the customers who attend MWC; nobody wanders into our booth looking for static timing tools. So what are these applications?

Before I get to the applications, let me talk about how we run demos on Tensilica. We have three platforms:

  • A Xilinx FPGA board, on which we can run a synthesized version of the appropriate Tensilica core, including TIE custom instructions.
  • A Dreamchip SoC board, which is an automotive SoC designed by the company Dreamchip and fabricated by GlobalFoundries in Dresden. For more details about it, see my post from when they presented it at CDNLive Dream Chip: A Vision for Your Car. The chip contains four Vision P5 cores, although when we use the board for demos we only use one.
  • A HiSilicon (Huawei subsidiary) HiKey 960 evaluation board. The chip contains a Vision P6 core.

Pictures of these boards appear in the descriptions of the demos that were running on them.

Alango Voice Enhancement

Alango provides software for voice enhancement. It takes input from an array of microphones and processes it on a Tensilica HiFi 3 DSP. The little black components on the circular board are microphones, although only a single ring of them is used at a time. This is the same type of microphone array as in devices like the Amazon Echo or Google Home. The array allows the direction of arrival to be detected, which improves the ability to suppress noise and echo and deliver a clean signal.

Compared to a single microphone, the technology works over a greater distance and in noisier environments, and delivers a cleaner signal.

The demo was set up as an automated barista with whom you could have a natural conversation and order a coffee. Unfortunately, there was no coffee machine connected, so you could see that the system had correctly recognized your order, but the coffee never arrived.
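The core idea behind steering a microphone array toward a talker is delay-and-sum beamforming: delay each microphone's signal so that sound arriving from the chosen direction lines up, then average. A minimal sketch of that idea (this is the textbook technique, not Alango's actual algorithm; all names and parameters here are illustrative):

```python
import math

def delay_and_sum(signals, mic_positions, angle_deg, fs, c=343.0):
    """Steer a linear microphone array toward angle_deg.

    signals:       list of per-microphone sample lists (same length)
    mic_positions: microphone x-coordinates in metres along the array axis
    fs:            sample rate in Hz; c is the speed of sound in m/s
    """
    angle = math.radians(angle_deg)
    n = len(signals[0])
    out = [0.0] * n
    for sig, x in zip(signals, mic_positions):
        # Plane-wave arrival delay for this mic, rounded to whole samples
        delay = int(round(x * math.sin(angle) / c * fs))
        for i in range(n):
            j = i - delay
            if 0 <= j < n:
                out[i] += sig[j] / len(signals)
    return out
```

Signals from the steered direction add coherently while off-axis noise partially cancels, which is what improves the signal-to-noise ratio over a single microphone.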

Nestwave Positioning

Nestwave was showing an enhanced GPS (or more generically, GNSS). You almost certainly have some form of GPS in your phone, used for navigation and other location-based services. Your phone is always on, and so its position accuracy can improve over time. IoT devices have a different problem: they spend most of their time asleep, and when they wake up, they need to get a location fix as quickly as possible so that they can do whatever it is they do and then go back into deep sleep to preserve battery power. That is where Nestwave comes in. Their technology, based on a Tensilica Fusion F1 DSP with additional TIE instructions, acquires a fix in 200ms and uses less than 10% of the power of competing solutions.
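Why fix time matters so much for a duty-cycled IoT device comes down to simple energy arithmetic: energy per fix is power multiplied by the time the receiver is awake, so cutting both the power and the fix time multiplies the savings. A back-of-the-envelope sketch (the specific milliwatt figures are my own illustrative assumptions, not Nestwave's published numbers):

```python
def energy_per_fix_mj(power_mw, fix_time_s):
    """Energy consumed per location fix, in millijoules (mW * s = mJ)."""
    return power_mw * fix_time_s

# Illustrative comparison: a 200 ms fix at a tenth of the power vs.
# a conventional receiver that stays on for 2 s at full power.
fast = energy_per_fix_mj(power_mw=10, fix_time_s=0.2)   # 2 mJ per fix
slow = energy_per_fix_mj(power_mw=100, fix_time_s=2.0)  # 200 mJ per fix
```

Under those assumed numbers, each fix costs 100x less energy, which is what translates directly into battery life for a device that wakes, fixes, reports, and sleeps.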

Their demo was running on the Xilinx FPGA board, rather than one of the SoC-based boards, since the additional TIE instructions obviously can't be added to an SoC after the fact. The technology also achieves breakthrough indoor accuracy, using novel multipath mitigation and machine learning. The design uses hybrid geolocation, triangulating on cellular base stations and WiFi routers as well as on satellites.
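Hybrid geolocation of this kind ultimately reduces to a position solve from range measurements to known anchor points (satellites, base stations, WiFi routers). A minimal 2D sketch of the standard trilateration math (a generic textbook method, not Nestwave's implementation):

```python
def trilaterate(anchors, dists):
    """2D position from three anchor points and measured distances.

    Subtracting the first circle equation from the other two turns
    the quadratic system into two linear equations, solved exactly.
    """
    (x1, y1), (x2, y2), (x3, y3) = anchors
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)
```

Real receivers solve the 3D version with many noisy ranges by least squares, and the indoor multipath mitigation mentioned above is precisely about cleaning up those range measurements before the solve.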

Sonarax Ultrasonic Networking

Sonarax uses a Tensilica HiFi 3 to do ultrasonic networking, using commodity speakers and microphones rather than special ultrasonic transducers. The advantage of this style of networking is that it requires no setup or special equipment. It can work where other networks are not available. The ultrasonic approach works in noisy environments (such as MWC...trade shows are not quiet). And it is compatible with any device that has a microphone and a speaker.
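One simple way to send data over commodity audio hardware is binary frequency-shift keying near the top of the audible band, where most adults can't hear but ordinary speakers and microphones still work. A minimal sketch of that idea (a generic FSK scheme with illustrative frequencies and rates, not Sonarax's protocol):

```python
import math

F0, F1 = 18500.0, 19500.0  # near-ultrasonic "0" and "1" tone frequencies
FS = 48000                  # a sample rate commodity audio hardware supports
SAMPLES_PER_BIT = 480       # 10 ms per bit -> 100 bit/s

def tone_energy(chunk, f):
    # Energy in one DFT bin: correlate against sin and cos at frequency f
    s = sum(x * math.sin(2 * math.pi * f * n / FS) for n, x in enumerate(chunk))
    c = sum(x * math.cos(2 * math.pi * f * n / FS) for n, x in enumerate(chunk))
    return s * s + c * c

def modulate(data: bytes):
    """Each bit (MSB first) becomes a short burst at F0 or F1."""
    samples = []
    for byte in data:
        for k in range(7, -1, -1):
            f = F1 if (byte >> k) & 1 else F0
            samples.extend(math.sin(2 * math.pi * f * n / FS)
                           for n in range(SAMPLES_PER_BIT))
    return samples

def demodulate(samples):
    """Decide each bit by comparing energy at the two tone frequencies."""
    bits = []
    for b in range(len(samples) // SAMPLES_PER_BIT):
        chunk = samples[b * SAMPLES_PER_BIT:(b + 1) * SAMPLES_PER_BIT]
        bits.append(1 if tone_energy(chunk, F1) > tone_energy(chunk, F0) else 0)
    out = bytearray()
    for i in range(0, len(bits), 8):
        byte = 0
        for bit in bits[i:i + 8]:
            byte = (byte << 1) | bit
        out.append(byte)
    return bytes(out)
```

A real system would add synchronization, error correction, and robustness against room acoustics; the energy-comparison detector is what lets it tolerate broadband background noise like a trade-show floor.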

The demo used a normal Android phone and transmitted keystrokes as they were typed, even from outside the demo room and with music playing to create even more background noise.

Cadence AI at the Edge

Cadence showed a number of AI demos covering different forms of inference at the edge. One was running an image-recognition algorithm. You can see the setup above, with the camera looking at the screen of the laptop. It identifies not just the image but also its context. It does this by running two networks: Inception v3 for feature extraction, and an LSTM-based RNN for attention over the image. The implementation is optimized with fixed-point arithmetic using both 8-bit and 16-bit values. The end result is a description like "a man eating a hot dog in a bun" or "a woman in a kitchen with a refrigerator".

This was running on the Dreamchip SoC board.
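The fixed-point optimization mentioned above rests on quantization: mapping floating-point weights and activations onto small signed integers with a shared scale factor. A minimal sketch of symmetric quantization (the general technique, not the specific scheme used in this demo; the scale choice here is illustrative):

```python
def quantize(values, bits=8):
    """Symmetric fixed-point quantization to signed bits-wide integers.

    The scale maps the largest magnitude in the tensor onto the
    largest representable positive integer.
    """
    qmax = 2**(bits - 1) - 1          # e.g. 127 for 8-bit
    scale = max(abs(v) for v in values) / qmax
    q = [max(-qmax - 1, min(qmax, round(v / scale))) for v in values]
    return q, scale

def dequantize(q, scale):
    """Recover approximate floats; error is bounded by the scale."""
    return [x * scale for x in q]
```

Using 8-bit values where the accuracy loss is tolerable and 16-bit values where it is not is what lets a network like this run efficiently on a DSP without floating-point hardware.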

Another demo showed off the Cadence toolchain for compiling neural networks into code for a Tensilica processor, using the HiSilicon board. The flow starts from one of the standard NN frameworks such as Caffe; the network is then taken through the Tensilica Neural Network Compiler, and the resulting code is run on the board, in this case the HiSilicon one shown below.

Sunday Brunch

Plus, I filmed that week's Sunday Brunch Video Version at the booth (although I completely forgot to say where I was):

 

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.