Paul McLellan

Software 2.0

16 Nov 2022 • 5 minute read

I recently came across the idea of "software 2.0". I was watching a Lex Fridman interview with Andrej Karpathy called Andrej Karpathy: Tesla AI, Self-Driving, Optimus, Aliens, and AGI | Lex Fridman Podcast #333. I'll embed the podcast at the end of this post, but I should warn you that it is nearly three and a half hours long (it won't drag in the slightest, although you won't really lose anything by watching at 1.5X speed). Andrej was, until recently, the Senior Director of AI at Tesla. As it says on his own website:

I was the Sr. Director of AI at Tesla, where I led the computer vision team of Tesla Autopilot. This includes in-house data labeling, neural network training, the science of making it work, and deployment in production running on our custom inference chip. Today, the Autopilot increases the safety and convenience of driving, but the team's goal is to develop and deploy Full Self-Driving to our rapidly growing fleet of millions of cars.

I covered a lot of what Tesla is up to in AI in my post Tesla AI Day 2022, NOT CHIPS: Tesla's Project Dojo, and HOT CHIPS Day 2: AI...and More Hot Chiplets.

Andrej was also a founding member of OpenAI, which created the DALL-E engine that generates artwork from simple text descriptions. I wrote about DALL-E in my post What Is a Lagrange Point in Space? And DALL·E 2.

Software 2.0

If you want to hear Andrej on Software 2.0 in the Lex Fridman podcast, skip to 1:06 (that's one hour, six minutes); the discussion runs for about ten minutes, until 1:16.

The basic concept is that until recently, to get anything to work on a computer, an expert had to develop algorithms and write an explicit program. So, for example, a vision recognition system would look for edges and patches of color. Then computer vision had what I've heard called its "ImageNet moment". I wrote about this in my post ImageNet: The Benchmark that Changed Everything. The one-sentence summary is that ImageNet is a crowd-sourced database of annotations for millions of images on the internet (ImageNet contains just the annotations, not the actual images). Suddenly, there was enough data that it was possible to train neural networks on all the images.

(Figure: ImageNet challenge results.)

In the 2012 ILSVRC (ImageNet Large Scale Visual Recognition Challenge), AlexNet, a neural network, cut the top-5 error rate to around 15%, compared to roughly 26% for the best traditional approach. Over the following years, results improved to the point that neural networks became better than humans at image recognition on this benchmark.


This is Software 2.0: instead of developing and programming an algorithm, a basic neural network skeleton is created; then, using training data, the weights for the network are learned (training), and those weights are used to do the actual work (classification of images in this case, but not limited to that).
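To make that workflow concrete, here is a minimal sketch in PyTorch (my choice of framework for illustration; nothing in the post prescribes one). The architecture, the "skeleton", is written by hand, but the program's behavior comes from the learned weights. The data is synthetic, standing in for something like ImageNet.

```python
import torch
import torch.nn as nn

# 1. The Software 2.0 "skeleton": a human writes the architecture...
model = nn.Sequential(
    nn.Linear(64, 32),  # e.g., a tiny 8x8 "image" flattened to 64 features
    nn.ReLU(),
    nn.Linear(32, 2),   # two classes, say "red light" vs. "green light"
)

# 2. ...but the behavior is "written" by the optimizer from training data.
inputs = torch.randn(256, 64)         # stand-in training images
labels = torch.randint(0, 2, (256,))  # stand-in labels
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):               # training: search over weight space
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), labels)
    loss.backward()
    optimizer.step()

# 3. Deployment: the learned weights now *are* the program.
with torch.no_grad():
    prediction = model(torch.randn(1, 64)).argmax(dim=1)
print(f"predicted class: {prediction.item()}")
```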

Andrej says that in the early days of Tesla, they used algorithmic approaches to do things like recognize traffic lights and decide whether they were red or green. But over time, they switched pretty much everything from Software 1.0 to Software 2.0.

On the podcast, Andrej mentioned a paper he had written on software 2.0 (actually a Medium article). It is just called Software 2.0. It's actually a few years old now, but well worth reading (Medium says it is a nine-minute read, so not the same investment of time as the podcast!). Here's the opening paragraph:

I sometimes see people refer to neural networks as just “another tool in your machine learning toolbox”. They have some pros and cons, they work here or there, and sometimes you can use them to win Kaggle competitions. Unfortunately, this interpretation completely misses the forest for the trees. Neural networks are not just another classifier, they represent the beginning of a fundamental shift in how we develop software. They are Software 2.0.

And later:

It turns out that a large portion of real-world problems have the property that it is significantly easier to collect the data (or more generally, identify a desirable behavior) than to explicitly write the program. Because of this and many other benefits of Software 2.0 programs that I will go into below, we are witnessing a massive transition across the industry where a lot of 1.0 code is being ported into 2.0 code. Software (1.0) is eating the world, and now AI (Software 2.0) is eating software.

Electronic Design Automation

Cadence has been adding increasing amounts of AI to its portfolio of design tools. I think that we started with Jasper Formal Verification. Unlike most EDA tools, formal verification is like a mathematical proof, and there are multiple proof engines. If any one of them succeeds in proving the property, then it is done, and it doesn't matter that all the other engines failed. However, you can waste a lot of time trying those other engines, so choosing which engines to run, and in what order, is a great task for Software 2.0, even though the actual proof engines have to be written algorithmically in Software 1.0 style.
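Conceptually, the engine-picking problem looks something like this sketch. Everything in it is hypothetical (the engine names, the property features, the success predictor); it illustrates the idea of a learned model ordering classical proof engines, not how Jasper actually implements it.

```python
from typing import Callable

# An "engine" here is any classical (Software 1.0) prover: it takes a
# description of the property and returns whether it proved it.
Engine = Callable[[dict], bool]

def pick_and_run(prop: dict,
                 engines: dict[str, Engine],
                 predict_success: Callable[[str, dict], float]) -> str | None:
    """Try engines in order of a model's predicted chance of success.
    The predictor is the Software 2.0 part; the engines are Software 1.0."""
    ranked = sorted(engines, key=lambda name: predict_success(name, prop),
                    reverse=True)
    for name in ranked:
        if engines[name](prop):  # exact, algorithmic proof attempt
            return name          # one success is enough; skip the rest
    return None                  # no engine proved the property

# Toy usage: the predictor favors "bdd", which fails here, so "sat" wins.
engines = {"bdd": lambda p: p.get("small", False), "sat": lambda p: True}
print(pick_and_run({"small": False}, engines,
                   lambda name, prop: 0.9 if name == "bdd" else 0.5))  # sat
```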

More recently, we announced Cadence Cerebrus (see my post Cadence Cerebrus - Intelligent Chip Explorer). Physical design is notorious for having literally thousands of knobs and switches that can be set. So again, setting those switches and deciding which approaches are looking the most promising is a great Software 2.0 application, even though the actual placement and routing algorithms are just that, Software 1.0 algorithms. By leveraging the cloud (or big datacenters), many settings can be tried in parallel in a sort of competition, and settings that perform poorly can be culled before running to completion.
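That cull-the-losers competition is essentially a successive-halving search. Here is a sketch of the idea; the settings, the scoring function, and the halving schedule are all made up for illustration and are not Cerebrus's actual strategy.

```python
import random

def run_partial(setting: dict, budget: int) -> float:
    # Stand-in for running place-and-route with these settings for
    # `budget` units of runtime and measuring quality of results (QoR).
    # More budget means a less noisy, more trustworthy estimate.
    return setting["quality"] + random.gauss(0, 0.1 / budget)

# Sixteen candidate knob settings; "quality" is a hidden stand-in for how
# good each setting really is (the search only sees noisy run scores).
candidates = [{"id": i, "quality": random.random()} for i in range(16)]
budget = 1
while len(candidates) > 1:
    scored = [(run_partial(c, budget), c) for c in candidates]  # parallel in practice
    scored.sort(key=lambda pair: pair[0], reverse=True)
    candidates = [c for _, c in scored[: len(scored) // 2]]  # cull the bottom half
    budget *= 2  # survivors earn longer, more accurate evaluations
print("winning setting:", candidates[0]["id"])
```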

(Photo: Verisium at Levi's Stadium.)

Just a couple of months ago, we announced Verisium (see my post Verisium AI-Driven Verification Platform), which takes a similar approach to optimizing verification. At the same time, we announced the Cadence JedAI platform (see my post Cadence JedAI Platform: The Foundation of EDA 2.0). JedAI is short for Joint Enterprise Data and AI. To pull one paragraph from that post:

The Cadence JedAI Platform is a cross-Cadence big data analytics solution that has been built from the ground up to support EDA-type data, such as design data, RTL, netlist, waveforms, workflow data, tools, and methodology. Also workload data such as runtime, memory usage, disk space usage, and so on. The Cadence JedAI Platform also provides comprehensive application programming interfaces (APIs) and industry-standard scripting tools, such as Python, Jupyter Notebook, REST APIs, enabling AI-driven, big data analytic applications (apps) to be created by the user, allowing engineering teams to visualize data and trends, and automatically generating practical design improvement strategies.
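To give a flavor of what those APIs make possible, here is a purely illustrative sketch: pull workload records over REST and summarize them with pandas. The endpoint URL and field names are hypothetical placeholders, not the actual Cadence JedAI API.

```python
import requests
import pandas as pd

# Hypothetical REST endpoint and fields, for illustration only.
resp = requests.get("https://jedai.example.com/api/v1/runs",
                    params={"project": "my_chip", "limit": 500})
runs = pd.DataFrame(resp.json())  # columns (assumed): tool, runtime_s, peak_mem_gb

# Summarize runtime and memory per tool to spot outliers and trends.
summary = (runs.groupby("tool")
               .agg(mean_runtime_s=("runtime_s", "mean"),
                    max_mem_gb=("peak_mem_gb", "max")))
print(summary.sort_values("mean_runtime_s", ascending=False))
```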

Lex Fridman Podcast

(Embedded video: Andrej Karpathy on the Lex Fridman Podcast #333.)

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.
