
Paul McLellan

DAC Tuesday 2017: Siemens, SiP, Simon & Lucio, Neural Nets, Nenni, Denali, and More

21 Jun 2017 • 15 minute read

Yesterday was Tuesday at DAC here in Austin. For reports on the previous couple of days, see Guide to Austin for Newbies, DAC Sunday Kickoff, and DAC Monday 2017: Joe, Lip-Bu, China, Under 40s, Smurfs, and More.

Siemens PLM (Mentor's German Overlord)

[Photo: Chuck Grindstaff giving the Tuesday keynote at DAC]

The opening keynote was by Chuck Grindstaff, who heads up Siemens PLM (product lifecycle management). He himself was acquired into Siemens when they acquired UGS (formerly Unigraphics), and his division is the one that acquired Mentor Graphics. His keynote was titled The Age of Digital Transformation.

As part of the motivation for acquiring Mentor, Chuck pointed out that the IC is the heart of innovation in the digital world. Increasingly, with cars being the most obvious example, innovation in many areas is centered on electronics in general and semiconductors in particular.

Siemens is "digitalizing" as a company, not just from design through manufacturing but on into the operational phase. For example, they used to sell wind turbines as a product. But the business models are changing, and now they sell the capacity of a wind farm. That means they are responsible for keeping it running optimally, so they need to instrument it, predict failures, and optimize usage. Similar things are happening in many industries.

Another example. If you are in the automotive industry, your business model today is based on selling cars to people who will park them unused for 95% of the time. That business model is at risk. "My kids don't care about cars," Chuck said. "They just want to get where they are going."

It is a threat too. "Digital is the main reason that half the companies in the Fortune 500 have disappeared since 2000" is how Pierre Nanterme, CEO of Accenture, put it. The stark reality is that if you can't make the transition and innovate fast enough, you are at risk, or at least at a competitive disadvantage. To show how fast things are changing, Chuck showed the chart below. On the right is the familiar Moore's Law: transistors, falling costs, the stuff we live with every day. On the left are some specific examples. These are not costs compared to 1960 or something; this is 2007 to 2013-15. A robot that used to cost $0.5M is now $20K.

[Chart: the decline in the cost of things over the last ten years]

In the past, systems were relatively separate; now they need to be designed together. For example, the heating in a car had nothing to do with propulsion other than using some waste heat, but in an electric car it has to use the same batteries and so needs to be tied into the other aspects of battery management. There used to be boundaries between disciplines like electrical, electronic, mechanical, and environmental, but now there is more and more co-design.

Look at the IoT device below. It is tiny and low cost, but it contains almost everything: MEMS, a PWB (printed wiring board, a much better term than PCB), RF, and more.

[Image: IoT device showing all the components]

One thing that Chuck pointed out, which is one of the themes that underlies System Design Enablement at Cadence, is that system companies are doing more and more chip design: Huawei, Amazon, Tesla, Google, ZTE, and, of course, Apple. High-tech companies such as Amazon, Google, Apple, and Samsung are becoming mobility providers too. A result of these changes is that a growing portion of EDA is focused on systems, such as emulation used for software development.

[Image: examples of system companies designing chips]

Chuck put his Siemens hat on again for a moment and pitched MindSphere, which is their cloud analytics platform for handling all the data generated by the "stuff" that they build.

In the Q&A, Chuck mentioned that he hadn't talked much about the mechanical side of the business, since this is an EDA conference, but Siemens has three key areas of focus, which they call:

  • Ideation: capture, model, simulate, and get ready to commit
  • Realization: manufacture, solve any yield issues, ramp production, etc.
  • Utilization: after shipment, at the customer

With this high-level methodology, a factory is just another complex machine.

Chuck was asked what acquiring Mentor does to their competitive position. He feels good that it improves both Mentor's position and Siemens PLM's position. Obviously it puts some pressure on partnerships, since Siemens is partnered with other EDA companies and Mentor is partnered with other PLM companies. Of course, they are not "done" with integration. Luckily, Siemens prides itself on being agnostic about who can connect to them, and Mentor has generally been similar.

A final question was whether people coming out of university are prepared for this era of mass customization. He thought that the bigger issue was not university but one level down, at technical schools and junior colleges. The journeyman level working in a factory needs to understand networks and programming. "It is not just a guy hitting metal with a hammer."

Silicon in Package

Dick James, who no longer works for Chipworks but still lives in Canada (eh?), gave a content-rich presentation on what is happening in SiP. Short answer: a lot. Longer answer: the content was so rich that I changed my plan to write about it here, and will instead give it a post of its own next week.

Just to whet your appetite: hot off the press just last week, Nokia (which is now a networking company; the handset business went to Microsoft, which killed it) announced the FP4 service router SiP. It has bandwidth of over 100TB/s. It is (probably) on a silicon interposer. There are 22 chips in the package, including custom memory. It won't be cheap, but it is going into an enterprise router, which is far from a consumer product.

7nm Lunch

Every day at DAC, Cadence has a lunch. Today's was 7nm for lunch (it's a very small lunch; every couple of years the boxes get half the size). The moderator was Jim Hogan. Actually, moderator isn't really a word applicable to Jim; he is more the herdsman of the cats. Today's cats were:

  • Rob Christy (ARM)
  • Anand Rajagopalan (MediaTek)
  • Kazuhiro Takahashi (Renesas)
  • Tom Quan (TSMC)
  • Mitch Lowe (Cadence)

[Photo: the Cadence 7nm panel at DAC 2017]

I will cover the details of what was discussed in a separate post next week.

Simon Segars (in Japan) and Lucio Lanza (in Austin)

Ed Sperling, fresh from interviewing Aart before lunch, was back after lunch with his next CEO. Simon Segars, CEO of ARM, was present in mind but not body, by teleconference from Japan (where it was 4 in the morning). Physically in Austin was Lucio Lanza of Lanza Tech Ventures. The topic was billed as Exploring the Connection between the Digital World and the Physical World, but everyone knew that was just a long title for IoT, the Internet of Things.

[Photo: Simon Segars and Lucio Lanza interviewed by Ed Sperling at DAC 2017]

Simon kicked off by pointing out that there is a lot of experimentation going on as to what can be supplied to consumers and industry; it is still a very early stage. Lucio said IoT has taken off as a term, but many of the "things" have been around for a long time. It's not as if our houses didn't have thermostats before Nest; we just didn't classify them that way before. IoT, in terms of its impact on society, is just beginning.

Looking forward, Lucio reckoned we can draw a parallel with the Internet era (and his hair-being-black era). It was the Internet of computers, and the dream was to connect millions of computers with cables. Then it became the Internet of people, and we connected billions of them. Now it is the Internet of Things, and we will connect trillions of them. But just as we didn't anticipate e-commerce or social media, a lot of IoT will be things nobody has thought of yet. Simon agreed: today we are installing cameras and sensors and collecting the data, but we will see a lot of innovation and new types of companies. There will be surprises, for sure.

Ed's favorite all-purpose question (mine too) is: how do we solve security? Lucio said it will go by segment; what you need for a car is different from what you need for a thermostat. Simon is more pessimistic and doesn't think you can "solve" security in the short term, just manage it. It is so easy to connect things to the Internet, and every time you do, you create a new potential security hole. It needs to become a managed problem and not an "apologize when something goes horribly wrong" problem. Ed is suspicious that nobody is willing to pay for security. In some ways, Lucio said, it depends on who got hurt yesterday. Not that your company was compromised, but one like it.

Another security challenge is that some of these devices last ten years or more. Simon says you have to get humans out of the equation, especially in the consumer market. The type of attacks will change over the device's lifetime. Who updates the firmware on their router? It's just too hard. It needs to be automated. Lucio is optimistic that this will get solved and "people will get security whether they understand the issue or not."

Another of Ed's favorite questions is that medical is the next big thing, coming next year, but people have said that every year for 20 years. Simon says it is starting to be here. You can now get heart pacemakers that will upload data from your smartphone to your doctor. Lower costs of delivering healthcare will come. The issue will get more acute as society ages and chronic managed diseases dominate the cost (I think that is already the case). Lucio reckons that machine learning will play a big part. You will have so much information on yourself that you will need machine learning to sort through it all. Otherwise you need to go to the doctor to find out if you ought to go to the doctor. Somehow, in the cloud, there will be devices to manage all the data, based on more medical information than any individual doctor could learn in a lifetime.

Simon went further, saying "I think that machine learning is the whole point of IoT." Data on its own is not interesting. Some will be in the cloud and some at the edge. The edge, at the very least, has to work out which is the useful data. The benefit of gathering data is to gain insight from it.

Simon got the last word since it was now 4.30am in Japan. Design can be much more distributed than before, he said, and the EDA community can help make that a reality. Look at how smartphones grew once they became a platform for everything. He expects to see the same thing with IoT. Once people can access data securely in a friendly way, companies will be created all over the world (cue company being created in Japan in the background behind Simon...no, didn't happen while we were watching).

Neural Networks

[Photo: the neural network panel at DAC 2017]

The ubiquitous Jim Hogan, having put down his guitar from last night and escaped from the Cadence 7nm lunch, popped up in the DAC pavilion to talk about neural networks. Well, mostly he recruited three other people who really knew what they were talking about. Each of them gave a short presentation, then there was a discussion.

First up was Chris Rowen as himself (well, actually as his consulting and investment company, Cognite Ventures). If you have seen any of his presentations recently, you know he leads off with the pixel explosion; if you haven't, you can dip your toe in the water with my post Rowen: How to Start an Embedded Vision Company. There are now more image sensors than people, generating a petabyte per second, which is more than any network can transmit or any storage system can store. So most camera data has to be consumed locally, meaning vision silicon. There also needs to be a computer architecture revolution, since for years we have worked on the basis of writing code that the computer follows exactly. But deep learning (DL from now on) is different, more statistical. You want a pretty good answer to a problem of such complexity that you couldn't write a normal program to do it (think of recognizing breeds of dogs for an archetypal example). We can do two orders of magnitude better, maybe more, than using a regular CPU.
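To make that contrast concrete, here is a minimal sketch (mine, not anything from Chris's talk) of the statistical style of programming, in Python with PyTorch: instead of hand-writing rules for what distinguishes a poodle from a beagle, you run the image through a network whose behavior was learned from data. The model choice and the dog.jpg filename are illustrative assumptions.

```python
# Hedged sketch: classification by a learned model rather than explicit code.
# The model and input file are placeholders, not from the talk.
import torch
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(pretrained=True)  # behavior learned from ImageNet, not hand-coded rules
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    # standard ImageNet normalization statistics
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

img = Image.open("dog.jpg")             # hypothetical input image
x = preprocess(img).unsqueeze(0)        # add a batch dimension
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1).item())      # most likely ImageNet class (about 120 of them are dog breeds)
```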

[Image: 300 startups working in AI, June 2017 (linked to the huge wallchart-sized image)]

Chris has been looking into the startup scene. There are maybe 3,000 AI startups. A lot are not really AI, but it is so hot that everyone wants to say that's what they do. There are about 15 new silicon startups and, Chris continued, "I have identified 3 or 4 more since finishing this wallchart 3 or 4 weeks ago."

Jim pointed out that there wasn't a single funded semiconductor startup in the US in 2014, so the fact that there are now 6 or 8 is a big deal (and these have $100M-type funding requirements).

Next up was James Gambale of Lomasoft Corp, who used to be a patent attorney for Qualcomm. I can say, having worked on the other side of the negotiating table from them, that Qualcomm's IP attorneys are really good. He talked about the cognitive application ecosystem: to master machine learning, there is a daunting number of technologies to learn. Caffe from Berkeley is widely used and continues to be important for developing training models fast, which can then be compressed for implementation. The big change was using GPUs for gradient descent and training NNs. Google has TensorFlow, Facebook has Caffe2, both for training and deployment. You need to learn these tools and this ecosystem.
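As a rough illustration of what "using GPUs for gradient descent" looks like in practice, here is a toy PyTorch training loop (my sketch, not anything James showed); the network shape, learning rate, and random data are placeholder assumptions.

```python
# Toy sketch of GPU-accelerated gradient-descent training. Everything here
# (network size, data, learning rate) is a placeholder assumption.
import torch
import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10)).to(device)
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # plain stochastic gradient descent
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(256, 64, device=device)              # toy training inputs
y = torch.randint(0, 10, (256,), device=device)      # toy labels

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)                      # how wrong is the model right now?
    loss.backward()                                  # autograd computes the gradients
    opt.step()                                       # take a step down the gradient
```

The same loop runs unchanged on CPU or GPU; the device selection at the top is the only difference, which is a big part of why GPUs changed NN training.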

To execute at the edge there are many different schemes. ARM, Intel, Xilinx, NVIDIA, and others all have their own, most of which rely on OpenCL, except for NVIDIA with CUDA.

The final panelist was Raik Brinkmann, CEO of OneSpin, which makes formal verification products. Raik pointed out that theorem proving was one of the first application areas for AI back in the 50s and 60s. Some of those concepts then moved into what became formal verification (in the EDA sense), although that is not statistical. There is an interesting circle: silicon made NNs better, and now we are using DL techniques to improve EDA to make better silicon. Deep learning is an engineering approach to building things that work, rather than worrying about the philosophical aspects of AI. The big challenge is power. We can do much better by giving up some flexibility, using a GPU or DSP, but these are still way too thirsty for edge devices like wearables. The received wisdom is that moving data closer to computation is the way to go.

We have to get to the edge for latency, reliability, and privacy, but we need low power, reconfigurability, and security. Doing learning at the edge is a big challenge; incremental learning will come. There is a huge verification question, since retraining and the statistical nature of the models make this different from regular IC design. You can no longer "prove" anything about the system, only that it works in most cases.
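One concrete way to "give up some flexibility" for power and memory when pushing a trained model to the edge is to quantize it after training. Here is a hedged sketch using PyTorch's dynamic quantization; the toy model is an assumption, and this is just one of many compression schemes, not necessarily what the panel had in mind.

```python
# Hedged sketch: post-training dynamic quantization, one common compression
# scheme for edge deployment. The model is a toy placeholder.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 10))
model.eval()

quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8   # store Linear weights as 8-bit integers
)

# The quantized model computes with int8 weights, cutting memory and energy
# per inference at some cost in accuracy.
out = quantized(torch.randn(1, 64))
print(out.shape)   # torch.Size([1, 10])
```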

Jim asked everyone what their biggest concern was. James said privacy and security; the implications are not well understood. Chris went for how we convert the current enthusiasm into real products, since the number of really knowledgeable people is so small. When Jim went to SJSU he studied general engineering, since there was not yet a computer science school; now SJSU graduates 4,000 students a year. He thinks that in ten years there will be schools of cognitive science both at the research universities and at places like SJSU that fill the companies around Silicon Valley. Raik's biggest worry was reliability and trust. Statistical models can't be explained in the same way as "code" that an engineer wrote.

The general conclusion was that the entire area is moving fast and deep learning is going to be a significant change; just how significant, it is too early to tell.

Dan Nenni on Semiwiki

At the Minalogic Showcase, Dan Nenni of Semiwiki gave the keynote. Full disclosure: I wrote Fabless with Dan...and you can buy a copy from me for $50...or download it for free from Semiwiki, like 150,000 other people have done. It was definitely "a labor of love", which is one way of saying we didn't make any money. Dan gave a little of the history. He started his own blog about the foundries and, by the end of the year, thousands of people were reading it. Around the same time, I started EDAgraffiti covering EDA. Dan decided to start Semiwiki to see if he could monetize his content, and invited me to join. It turned out to be much more successful than I expected.

Here are some analytics:

  • 2M people have used Semiwiki, coming from 85K different domains (some of this double-counts people on their laptops and phones, for sure)
  • 4,000 blog posts
  • 19M views
  • average 4,500 views per post

The traffic comes 41% from search, 32% direct, 19% from social media and 8% from referral. 

One encouraging sign is that the age profile keeps going down, meaning that it is not just old fogies like me that read it; young semiconductor and EDA professionals do too.

Denali Party

So, as is traditional, Lip-Bu Tan, Nimesh Modi, and Qi Wang were game enough to dress up as this year's giveaway...Smurfs.

But it really was Lip-Bu and the team. Want proof?

As always, Disco Inferno played. But after all the disco stuff, the wigs came off and they played more recent music...well, OK, only 80s and 90s, but post-disco. Here's the set list: