Next month at Arm TechCon, one of the key discussion topics will be the internet of things (IoT), especially after Masayoshi Son, CEO of Arm's parent company SoftBank, took the stage last year and boldly predicted that "more important, in the next 20 years, we will see 1 trillion internet of things devices."
So that means 1 trillion devices in 2035. That’s not too far in the future! Are we on the right track? Well, let’s first clearly identify the target and put it in perspective. By 2035, the world population is expected to approach 9 billion. That means each person will have about 113 devices.
Where are we today? Taking my house as an example, I have about 30 light switches with associated bulbs, so about 60 devices right there. My alarm has three cameras and an additional seven sensors through various rooms. Add in the items I use daily (my Fitbit tracker for workouts, my step tracker, my phone, devices in my car and on my bicycle), plus the devices controlling my solar installation, my PG&E wireless measurements, my "Ring" doorbell with its companion devices throughout the house, and my home wireless network with one switch and three routers, and I easily get to 90 devices and counting.
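For fun, the arithmetic behind both tallies fits in a few lines of Python. The population figure and the household counts are the rounded numbers from the paragraphs above, not precise data:

```python
# Back-of-envelope for the "113 devices per person" target and my household tally.
# All numbers are the rounded figures from the text, not precise data.
TARGET_DEVICES = 1_000_000_000_000   # Masayoshi Son's 1 trillion prediction
POPULATION_2035 = 8.8e9              # world population "approaching 9 billion"

per_person = TARGET_DEVICES / POPULATION_2035
print(f"Target: about {per_person:.0f} devices per person by 2035")

# My house, roughly as itemized above (some counts are approximate)
household = {
    "light switches and bulbs": 60,
    "alarm cameras": 3,
    "alarm sensors": 7,
    "wearables and phone": 3,
    "car and bicycle": 3,
    "solar and PG&E metering": 3,
    "Ring doorbell and companions": 3,
    "network switch and routers": 4,
}
print(f"My household today: about {sum(household.values())} devices")
```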
The majority of them are very small, and suddenly the keynote that Enlighted CEO Joe Costello gave at this year's DAC becomes very personal: embedding IoT capabilities into lighting touches about one third of all my devices today. Most importantly, 113 devices on average seems a conservative estimate once you take into account the classic adoption curve, which reaches 50% adoption when the "early majority" arrives. A set of good adoption curves is here, and most recent consumer devices reached substantial penetration within the 20-year target timeframe. Add in non-consumer items like health and industrial devices and you can really say "A trillion devices or more, here we come." So we are on the right track.
But there is a cost, and there are several potholes, landslides, and car wrecks to avoid along the way. In my mind, it comes down to ethics, security, safety, and system architecture. Is it right to let a car decide which human to hit once a collision has become unavoidable?
Safety and security need to be sorted out. Can a pacemaker be hacked remotely to kill somebody, as shown in the television series "24"? How do we prevent the catastrophic failure of a city's infrastructure under a hacker attack, as shown in the action movie "Live Free or Die Hard"? There are tough decisions and implementation choices ahead.
And system architecture becomes crucial. Can we just record everything, upload it to the cloud, and process it there? A brilliant Robin Williams in "Final Cut" comes to mind, but there are basic issues around energy (hint: there is not enough energy to transmit everything) and data processing/transmission: do I process at the node, at the hub, or in the cloud? These issues are real today, as this Google example shows: three minutes of voice recognition by all Android users would require twice as many datacenters as currently exist.
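To make the scale of that Google example concrete, here is a hedged back-of-envelope in Python. The user count and speech bitrate are my own illustrative assumptions, not Google's numbers; the point is the order of magnitude, not the exact figure:

```python
# Rough size of "everyone streams audio to the cloud". Every parameter is an
# illustrative assumption chosen only to show the order of magnitude.
ANDROID_USERS = 2.5e9        # assumed number of active Android devices
MINUTES_OF_AUDIO = 3         # the three minutes from the Google example
BITRATE_BPS = 32_000         # assumed compressed speech bitrate

bytes_total = ANDROID_USERS * MINUTES_OF_AUDIO * 60 * BITRATE_BPS / 8
print(f"About {bytes_total / 1e15:.1f} petabytes of audio to upload and process")
```

Even with these modest assumptions, you are shipping petabytes of raw audio; recognizing the wake word or the whole command at the node means only results cross the network, which is exactly the node/hub/cloud trade-off above.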
Skimming through the Arm TechCon program, the keynotes alone make it look like an event not to miss for anyone who wants to understand the future and how to get there. And I'll be participating in a panel on the very topic we're discussing right now: the road to 1 trillion devices.
In a recent discussion with Rob Aitken of Arm, he mentioned some of the aspects that I am sure will be detailed in his keynote "How to Build and Connect a Trillion Things." For example, a trillion small devices would take up about 30% of TSMC's current wafer production; that seems doable, and it would be nice growth. Energy is an issue, as mentioned above: there is a classic conflict between the robust design needed in many areas of the IoT and low-power design, and deep sub-threshold design will be crucial. Bandwidth to transmit data will become an issue too (it goes back to the system architecture aspects above), both in terms of the physical bandwidth available and the regulatory question of how to administer that bandwidth. 5G, as futuristic as it sounds today, will just be a stepping stone.
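The wafer estimate is easy to sanity-check. The die size, yield factor, and TSMC capacity below are my own placeholder assumptions (not Rob's figures), chosen only to show the shape of the calculation:

```python
import math

# Sketch of the "~30% of TSMC's wafer production" estimate. All parameters
# here are placeholder assumptions, not Arm's or TSMC's actual figures.
DEVICES_TOTAL = 1e12
YEARS = 20                                 # Son's 20-year horizon
DIE_AREA_MM2 = 5.0                         # assumed tiny IoT die
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2  # 300 mm wafer
USABLE_FRACTION = 0.85                     # assumed loss to yield and edge dies
TSMC_WAFERS_PER_YEAR = 13e6                # assumed 300 mm-equivalent capacity

dies_per_wafer = WAFER_AREA_MM2 * USABLE_FRACTION / DIE_AREA_MM2
wafers_per_year = (DEVICES_TOTAL / YEARS) / dies_per_wafer
share = wafers_per_year / TSMC_WAFERS_PER_YEAR
print(f"{wafers_per_year / 1e6:.1f}M wafers/year, about {share:.0%} of assumed capacity")
```

With these placeholder numbers the answer lands in the same ballpark as the 30% figure, which is what makes the prediction feel plausible rather than fantastical.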
And of course, the design and verification effort itself will be a challenge. There is a natural limit to the number of RTL developers, IC design teams, and software developers writing code. Making chip design more efficient, and especially making it "plug and play" at the edge node, will be required to fuel all the silicon needed to get to a trillion devices. Our joint demo at DAC, connecting Arm SoCrates to Cadence verification tools (you can read Jim Wallace's blog on it here), is a good step in the right direction. And of course emulation, prototyping, and smart verification will be key going forward.
So will it be worth it to get there? Absolutely! To me, Simon Segars's keynote, "Humanizing Technology," sounds like it will touch on the elements that excite me.
Assuming we figure all this out right, life will be much more human and enjoyable, with the proper support of technology.