Paul McLellan
25 Apr 2022

DesignCon: Google on AI for Non-AI People

At the recent DesignCon, one of the keynotes was given by Google's Laurence Moroney, titled The Realities of AI & Machine Learning: Cut through the Hype and Move to Production. Laurence wrote the book AI and Machine Learning for Coders. Both the book and the keynote were aimed at people who are not AI experts with in-depth knowledge of neural networks. Note that if you already know a lot about deep learning and neural networks, you won't learn much from this post since it is very basic.

Laurence started in AI in 1992, when he was unemployed in the UK. The government wanted a plan for developing AI and introducing it into the British technology industry. Not surprisingly, like most such plans, it failed totally.

He started off with some graphs looking at GDP and how technology has been a major driver, in particular the emergence of the internet (let's say 1992, when he started his career); many of today's top companies came out of that technological revolution. Then came the smartphone in 2007. AI is clearly the next big thing.

For example, here are numbers drawn from the World Economic Forum "jobs of tomorrow" report:

  • Data and AI +37%
  • Engineering and cloud computing +34%
  • People and culture +18%
  • Product development +27%
  • Sales and marketing +30%

Or the Forbes report, which forecasts:

  • Global ML market $1.58B in 2017, going to $20.82B by 2024
  • AI software revenue $10.1B in 2018, going to $126B in 2025

Laurence also looked at LinkedIn and discovered there are 44,864 job openings in the US in the AI space. And globally, there are 98,371. One of his points is that these positions are not all going to be filled by existing AI experts, because existing AI experts are all employed already. He sees a lot of Google's job, and his in particular, as making AI techniques accessible to people who do not already have an AI background: people who have other backgrounds but can take advantage of these techniques if they are easy enough to use.

He had an example of generating a script for a new episode of Stargate based on AI analysis of the existing dialogue. He was a consultant on the project, but the script rapidly degenerated into gibberish during the table read. As he put it:

This is absolutely no threat to a professional writer

He didn't show this video in the keynote, but he did say that you could easily find it on YouTube. So, I did.

However, one magic thing is that one actor, David Hewlett, realized he could use those models to rehearse technobabble and do better in auditions.

So now that AI is becoming practical, Laurence says:

The goal of what we are trying to do at Google is to educate people about AI, what it is, what it isn’t, so you can get productive.

How AI and Neural Networks Work

The traditional programming model is that a programmer writes the rules (a program), data is fed in, and the program produces answers.

The machine learning paradigm turns this around: data and answers are input, and the machine learning algorithm then works on them and produces rules, also known as a model.
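To make the contrast concrete, here is a minimal sketch of the same tiny problem seen both ways (my own illustration, not code from the keynote):

```python
# Hypothetical illustration of the two paradigms (not from the keynote).

# Traditional programming: the programmer writes the rule; data goes in, answers come out.
def rule(x):
    return 2 * x - 1

# Machine learning: only data and answers are supplied; training has to discover the rule.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]   # data
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]  # answers (each happens to be 2x - 1, but the machine is not told that)
```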

The way the actual learning works is that an initial guess is made, the accuracy is measured, and the guess is adjusted so that (hopefully) the next iteration will be a better guess, and the accuracy will be higher.
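Written out in plain Python, that loop might look something like the following, with my own illustrative choices of data, learning rate, and iteration count (again, not code Laurence showed):

```python
# Guess/measure/adjust: fit y = 2x - 1 by gradient descent on mean squared error.
xs = [-1.0, 0.0, 1.0, 2.0, 3.0, 4.0]
ys = [-3.0, -1.0, 1.0, 3.0, 5.0, 7.0]   # the known answers

w, b = 0.0, 0.0   # initial guess at the "rules"
lr = 0.05         # how far to adjust the guess on each iteration

for step in range(500):
    # measure: how wrong is the current guess? (gradient of the mean squared error)
    dw = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / len(xs)
    db = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / len(xs)
    # adjust: nudge the guess so the next iteration is (hopefully) more accurate
    w -= lr * dw
    b -= lr * db

print(w, b)   # ends up close to w = 2, b = -1
```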

Finally, during inference, the rules/model are used: new data (which has never been seen before) is the input, and the output is the inference ("it is a dog" or "it is a 30-mile-an-hour speed limit sign").
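Putting training and inference together, a minimal TensorFlow sketch of the full cycle looks like this (my illustration in the spirit of the simple examples Laurence teaches with, not the code he showed):

```python
import numpy as np
import tensorflow as tf

# Training: data and answers go in; the "rules" come out as the model's learned weights.
xs = np.array([[-1.0], [0.0], [1.0], [2.0], [3.0], [4.0]], dtype=float)
ys = np.array([[-3.0], [-1.0], [1.0], [3.0], [5.0], [7.0]], dtype=float)  # answers: y = 2x - 1

model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),   # a single neuron is enough to learn a linear rule
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=500, verbose=0)

# Inference: apply the learned rules to data the model has never seen before.
print(model.predict(np.array([[10.0]])))   # close to 19, i.e. 2*10 - 1
```

A single neuron is, of course, massive overkill for a straight line, but the same fit-then-predict pattern scales up to the image and speech models Laurence went on to describe.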

As another demonstration of deep learning in operation, Laurence turned on supertitles, automatically generating text from his speech in real-time. It was surprisingly accurate, although, as he pointed out, his was one of the voices used for the training, so it worked especially well for him.

He then worked through an example in TensorFlow. That's too much detail for a post like this. He pointed out that TensorFlow and other Google educational material are aimed at developers as opposed to academics, the theme that ran all through this keynote. The aim:

Widen access to AI for everybody. This is the beginning of a new class of applications that were not previously possible, without you needing to be a genius.

Ophthalmology

His big example was deep learning being used for inspection of people's retinas in India.

The goal is not to get rid of ophthalmologists but to help them scale. There is a huge shortage of ophthalmologists, and almost half of the patients have permanent injury to the retina or even go blind before they get to see one.

The accuracy of diagnosis for diabetic retinopathy (DR) is better than that of the average ophthalmologist (but not as good as the best). But the model turns out to be able to do things that ophthalmologists cannot do. One is to estimate the age of the patient, accurate to within 3.25 years. And the other is to be able to tell if the patient is male or female just from an image of their retina.

What's Next?

Laurence's summary of what is coming next is:

  • More AI research. In fact, just the day before, Google had published a paper on using AI to explain jokes, which you might have seen.
  • Using AI to detect and avoid IT problems
  • AI and ML operations, all the stuff that goes around the actual neural network. There's a whole new ecosystem to think about.
  • The talent squeeze (those 100,000 open positions I mentioned at the start of this post)
  • Ethics and bias
  • Explainability, such as "how do you know this is a cat and not a dog?" It turns out the AI just needs the eyes, not the rest of the animal.

SCCC 

An off-topic pet peeve. <rant> I don't know how the Santa Clara Convention Center (and the Hyatt hotel next door) managed to find the smallest escalators in the world. Most of the time it doesn't matter, but when a keynote like this ends, you suddenly have literally hundreds of people trying to get down an escalator that is just one person wide. Of course, there are stairs, which are the sensible option. I compare it to the RSA security conference held in Moscone West. The keynotes are held on the top floor, and there are several times more people there than at DesignCon (too many to fit in the auditorium, in fact). But they have four big wide escalators, and they reverse one of the up escalators to run down after a keynote, so there is a huge capacity to get down, making everything run smoothly. </rant>

Sign up for Sunday Brunch, the weekly Breakfast Bytes email.


Tags:
  • DesignCon
  • google
  • neural networks
  • designcon 2022
  • AI