Cadence has been at Mobile World Congress all week. Actually they are still there today, but by now I am in Nürnberg in Germany where it snowed last night, so not exactly Barcelona weather. Tensilica came to MWC for several years before Cadence acquired them, and it is still the focus of what Cadence exhibits on their booth at the show.
One of the challenges with a conference like MWC is that it spans such a huge range. There are network operators being wooed by the equipment vendors for contracts in the billions of dollars. There are handset vendors announcing new products hoping to be the new "hot" must-have handset that gets some traction against Apple and Samsung. There are companies supplying every component you might need in a phone (sensors, for example) or a basestation (any number of antennas). There are chip companies like Intel, Mediatek, and Qualcomm. Then there are IP companies such as ARM and Imagination. And, of course, Cadence.
On the booth were some Tensilica demos that I had not seen before. One had a live camera catching people walking by, identifying where their entire bodies were, and outlining them in real time. As Mark Zuckerberg pointed out in his discussion here, a lot of what people call AI is actually recognition and classification. Much of what is required for autonomous driving, for example, is recognizing and classifying street signs, traffic lights, other vehicles, pedestrians, lane markings, and irrelevant extraneous objects like mailboxes and outdoor cafés. The technology for doing this is convolutional neural networks (CNNs), and the challenge is to run them in a device with a limited compute budget and a limited power budget, as opposed to using a few hundred cores in a datacenter. But lots of specialized compute power within a limited power envelope pretty much sums up a couple of Tensilica's major value propositions. Another demo showed facial recognition.
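None of the demos disclosed their internals, but the basic building block of every CNN is the same: sliding a small learned kernel across the image to produce a feature map. Here is a minimal pure-Python sketch of that operation; the kernel and image values are made up for illustration (a real network learns its kernel weights from training data):

```python
# Illustrative sketch: the core operation of a convolutional layer,
# written in plain Python with no frameworks. Embedded vision DSPs run
# heavily optimized (typically fixed-point) versions of this arithmetic.

def conv2d(image, kernel):
    """Slide a small kernel over a 2D image, producing a feature map."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            acc = 0.0
            for dy in range(kh):
                for dx in range(kw):
                    acc += image[y + dy][x + dx] * kernel[dy][dx]
            # ReLU non-linearity: keep only positive responses
            row.append(max(acc, 0.0))
        out.append(row)
    return out

# A hand-written kernel that responds where brightness increases
# left-to-right (a vertical edge); learned kernels play this role in a CNN
edge_kernel = [[-1, 0, 1],
               [-1, 0, 1],
               [-1, 0, 1]]

image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]

feature_map = conv2d(image, edge_kernel)   # strong response at the edge
```

A full CNN stacks many such layers, interleaved with pooling, which is why the multiply-accumulate throughput of the processor dominates the compute budget.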
One of the big challenges is to do recognition in a noisy environment. I don't mean noisy in the sense of people shouting in the background, although for speech recognition that is one of the challenges, but noisy in the sense of signal-to-noise ratio. For example, one of the demos shows a Tensilica core removing fog from a scene. On the left above is the input with the fog, and on the right it has been cleaned up and most of the fog has been taken out. In this demo nothing is being done with the cleaned-up image, but, obviously, if you were going to try to recognize what is in the picture, the right-hand image is a much better starting point.
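Cadence does not say which algorithm the demo uses, but a widely used model in the dehazing literature is the atmospheric scattering equation, I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed foggy pixel, J the true scene radiance, A the airlight (fog brightness), and t the transmission. Given estimates of A and t (in practice derived from the image itself, for example via the dark channel prior), restoring the scene is just inverting that equation per pixel. A hedged sketch, with the airlight and transmission values assumed rather than estimated:

```python
# Hedged sketch of fog removal under the atmospheric scattering model:
#   I = J * t + A * (1 - t)   =>   J = (I - A) / t + A
# The airlight and transmission here are assumed inputs; real dehazing
# pipelines estimate both from the image.

def dehaze_pixel(observed, airlight, transmission, t_min=0.1):
    """Invert the scattering model for one channel value (0..255)."""
    t = max(transmission, t_min)          # avoid amplifying noise where t ~ 0
    restored = (observed - airlight) / t + airlight
    return min(max(restored, 0.0), 255.0)  # clamp to the valid pixel range

# A washed-out pixel (200) under bright fog (airlight 220) at half
# transmission is restored to a darker, higher-contrast value (180)
restored = dehaze_pixel(200, 220, 0.5)
```

The restoration stretches each pixel away from the fog color, which is why contrast (and hence recognizability) improves in the cleaned-up image.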
The challenge with recognizing street signs is not doing it with the perfectly pristine images that appear in the driving manual or online, it is doing it with signs that have mud on them, or a bit of added graffiti, or are seen in the rain. The perfect signs could probably be classified by simply looking at a grid of perhaps a hundred dots across the sign. But recognizing the noisy ones requires a CNN, trained on a large collection of photographs of signs to derive all the weightings for the network. That training is done in a datacenter, but the actual recognition has to be done in real time on the vehicle itself.
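The "grid of dots" idea can be sketched as simple template matching; the tiny 3×3 binary "signs" below are invented for illustration. Its brittleness once the sign is obscured is exactly what pushes real systems to trained CNNs:

```python
# Sketch of the naive approach mentioned above: classify a "perfect"
# sign by comparing a coarse grid of sample points against stored
# templates. The 3x3 binary "signs" are invented for illustration.

def classify_by_grid(sign, templates):
    """Return the template name whose grid cells match the sign best."""
    def score(name):
        return sum(1 for a, b in zip(sign, templates[name]) if a == b)
    return max(templates, key=score)

templates = {
    "stop":  [1, 1, 1,
              1, 0, 1,
              1, 1, 1],
    "yield": [1, 1, 1,
              0, 1, 0,
              0, 1, 0],
}

clean = [1, 1, 1, 1, 0, 1, 1, 1, 1]   # pristine stop sign: classified correctly
dirty = [1, 1, 1, 0, 1, 0, 0, 1, 1]   # same sign heavily obscured: now
                                      # matches the wrong template
```

The weightings a CNN derives from thousands of muddy, defaced, and rain-streaked training photographs do the job these hand-written templates cannot.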
I've written before about USB Type-C and how it is going to take over the world (see One Connector to Rule Them All: USB Type-C). At MWC Cadence managed to get hold of one of the first smartphones using USB Type-C. It is a Microsoft Lumia phone. If you look at the picture above you can see the phone (it has its screen facing away so it is not that obvious). The USB Type-C connector is plugged into it. If you assumed that the phone was being charged through that connector you would be right. But see the big display, the keyboard, the mouse (okay, that is out of the picture, but you can see its connecting wire). They are all also running over that same USB Type-C connector. The phone is acting as a "laptop" driving the display and taking data through the keyboard and mouse.
The connector has the potential to disrupt parts of the compute ecosystem. If your smartphone is as powerful as your laptop already, and has lots of storage, then do you really need a laptop as well as a smartphone? Why not just keep everything in your pocket (and the cloud) and then connect it to displays and keyboards when you need a big screen and to be able to touch-type (although my daughter can already type faster with two thumbs on her iPhone than she can on a regular keyboard, my thumbs are not so nimble)?
There were other demos showing the Tensilica HiFi audio processors that are semi-standard as offload processors for all the various audio standards. These have built up a surrounding ecosystem so that audio engineers who know next to nothing about semiconductors or processors can tune up their solutions.