Cadence has attended MWC ever since it acquired Tensilica, and Tensilica attended for years before that. Tensilica DSPs are used in many areas within mobile: audio processing, video processing, high-performance modems, embedded neural networks, and more.
The most dramatic silly-looking thing to put on your head is the Microsoft HoloLens. It only looks silly to people watching you, since when you are wearing it the experience is completely immersive. One game I played starts by having you look around the room so that the game can work out where the walls are, because robotic scorpions are going to appear on the walls and you have to shoot them before they shoot fireballs at you (and if they do, you can duck out of the way). To access menus, you just look at a menu item and tap your finger as if you were clicking a mouse, which, of course, looks especially silly to anyone watching you.
Under the hood (or perhaps, since it is on your head, as part of the hood) are 24 Tensilica processors with a couple of hundred special instructions, created through the Tensilica Instruction Extension (TIE) language. Between them, these processors play the game, detect the movement of your head, track where your eyes are pointed, sense the room around you, and more. The headset sits on your head, contains no fans, and can't be allowed to get hot, so the power constraints are pretty severe for the amount of computing power provided.
It is hard to describe clearly; you really have to put a headset on and experience it for yourself. Currently, this is more of a "concept car" than something ready for market. Anyone can buy one, but you have to think it is worth $3,000. At that price it is not aiming for volume; it is getting early adopters to experiment with it. Presumably, at some point, it will be priced closer to a consumer price point.
The next silly-looking thing to put on your head is the Waves 3D Audio demo. This allows the sound to be subtly altered as you move your head. One obvious application is if you are wearing a virtual reality headset, but it can also add realism to just listening to music or even watching a video on a smartphone. In all of these cases, the sound field feels more real if it adjusts as you move your head.
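Waves has not published how its demo works, but the core idea behind head-tracked audio can be illustrated with a standard textbook model: as the head turns, the delay between the two ears (the interaural time difference, or ITD) for a fixed sound source changes, and the renderer updates it continuously. A minimal sketch, using Woodworth's spherical-head approximation and a hypothetical source position (none of these numbers come from Waves):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, in air at room temperature
HEAD_RADIUS = 0.0875    # m, a commonly used average head radius

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head approximation of the arrival-time
    difference (in seconds) between the two ears for a source at the
    given azimuth (0 = straight ahead, positive = to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# As the listener turns their head, the source's apparent azimuth
# changes, so the per-ear delay is recomputed each frame. This is
# what makes the sound feel anchored in the room rather than glued
# to the headphones.
for head_yaw in (0, 30, 60, 90):
    source_azimuth = 45 - head_yaw  # hypothetical source fixed at 45 degrees
    itd_us = interaural_time_difference(source_azimuth) * 1e6
    print(f"head yaw {head_yaw:3d} deg: ITD {itd_us:8.1f} us")
```

A real renderer would also adjust interaural level differences and spectral (HRTF) filtering, but the principle is the same: head orientation in, per-ear signal changes out.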
Waves is a company that originally focused on taking old tube (valve, in British English) equipment and re-creating its sound using high-precision digital processing. The results are sold as plug-ins that can be used with pretty much any mixing board and related equipment. As an example, one of their latest plug-ins models Abbey Road Studios in London, where they had full access to gather the data. They can model the sound of cutting a vinyl master and then pressing records from it. They can do it selectively, adding the "warmth" of vinyl without the hiss, pops, and motor noise and imperfections, or they can add all that stuff too. You too can be in London recording The Beatles. Obviously, this technology is targeted at professionals in studios and on live tours.
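Waves' actual models are far more sophisticated, but the simplest ingredient of that "warmth" is nonlinear waveshaping: passing the signal through a gentle saturation curve that adds harmonics, the way analog circuits do. A toy illustration (not Waves' algorithm), using a normalized tanh soft clipper:

```python
import math

def warm_saturate(sample, drive=2.0):
    """Toy soft-clipping waveshaper. The tanh curve compresses peaks
    and adds harmonic distortion (the kind of coloration loosely
    called 'warmth'), while the normalization keeps a full-scale
    input mapping to a full-scale output."""
    return math.tanh(drive * sample) / math.tanh(drive)

# A clean sine cycle gains harmonics through the shaper: mid-level
# samples are pushed up (peak compression), but nothing exceeds 1.0.
clean = [math.sin(2 * math.pi * n / 64) for n in range(64)]
warm = [warm_saturate(s) for s in clean]
print(f"clean sample: {clean[8]:.4f}  shaped: {warm[8]:.4f}")
print(f"peak after shaping: {max(abs(s) for s in warm):.4f}")
```

A real emulation would add frequency-dependent behavior, noise modeling, and (for vinyl) wow and flutter on top of a static curve like this one.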
Waves Maxx is their consumer product line, although the 3D demo is not yet a product, just a prototype. You can see from the picture on the right that the packaging is not exactly ready for prime time, with a little circuit board stuck on the top.
Vision recognition is an area of increasing importance, most notably for automated driving, but also for other things such as security and identification. We showed a demo of AlexNet (which didn't involve anything silly on your head).
AlexNet is a widely available convolutional neural network for image recognition, originally developed at the University of Toronto. The demo at MWC was set up in two parts. A monitor displayed pictures, and AlexNet, running on an FPGA-based implementation of a Tensilica Vision processor (so not at full SoC silicon speed), gave the probabilities for what was being seen: a peacock, a baseball, and so on. The implementation was not cheating: there was no communication between the program showing the images and AlexNet. In fact, you can put any image you want in front of the camera, not just the ones pre-programmed into the demo cycle, and it will do its best. In a demo at an embedded vision conference a couple of years ago, Yann LeCun showed his team's equivalent network at the time. When shown a doughnut, it said it was a bagel. "Hey, it was trained in New York," he said, where he was a professor before he joined Facebook.
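The per-class probabilities the demo displayed are, in networks like AlexNet, produced by a softmax over the final layer's raw scores (logits): exponentiate each score and normalize so the results sum to one. A minimal sketch with made-up scores and class names (AlexNet itself outputs 1,000 such scores, one per ImageNet class):

```python
import math

def softmax(logits):
    """Convert raw final-layer scores into probabilities summing to 1.
    Subtracting the max first avoids overflow without changing the result."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for a few classes, for illustration only.
classes = ["peacock", "baseball", "bagel", "doughnut"]
logits = [4.2, 1.1, 0.3, 2.5]
probs = softmax(logits)
for name, p in sorted(zip(classes, probs), key=lambda t: -t[1]):
    print(f"{name:9s} {p:.3f}")
```

This is also why a network can be confidently wrong in an understandable way: softmax always ranks the classes it knows, so a doughnut that scores highest as "bagel" is reported as a bagel.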
For more information on the Cadence Tensilica Vision product line, start here.