
MeeraC
17 Jun 2019

IEEE WIE ILC and Diversity

Sultry, sultry, Austin, Texas. This was the site of the IEEE WIE ILC conference held on May 23-24, and I had the privilege of attending both days, along with 1400 other attendees. The conference is described on its website as follows:

Launched 5 years ago, the IEEE Women in Engineering International Leadership Conference (IEEE WIE ILC) provides professional women in technology, whether in industry, academia, or government, the opportunity to create communities that fuel innovation, facilitate knowledge sharing and provide support through highly interactive sessions designed to foster discussion and collaboration. IEEE WIE ILC focuses on providing leading-edge professional development for mid-level and senior women.

Overall, I attended a dozen keynotes and sessions, two breakfasts, two lunches, and a networking happy hour, and I moseyed through the exhibitor hall, looking at the booths and chatting with many people.

One theme that emerged, at least considering the sessions I attended, is the concept of diversity. Obviously, diversity in the workplace is important—and there were sessions about that, too—but that’s not what I’m talking about. This conference had keynotes and sessions on AI, and many of the presenters were concerned with the diversity of training data for AI systems.

The Sessions

Candice Worley, chief technology strategist at McAfee, pointed out that measurement bias, algorithmic bias, sample bias, and prejudicial bias all exist in training any new AI system. Training itself is inherently biased because training data is created by humans, who are also inherently biased. The problem is how to train AI to have the subtlety that humans use. Common sense doesn’t exist for AI systems—even little kids can tell you that hippos can’t ride bikes, but an AI system needs to be told this. How do you teach a system about things that aren’t written down?
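To make one of those terms, sample bias, concrete (this toy illustration is mine, not something from the talk): a classifier trained on data dominated by one group tends to stumble on the group it rarely saw. A minimal Python sketch, using made-up synthetic data:

```python
# Toy illustration of sample bias (my example, not from the talk):
# a classifier trained mostly on group A performs worse on group B,
# simply because B was underrepresented in the training data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 2-feature data; the label boundary differs per group."""
    X = rng.normal(shift, 1.0, size=(n, 2))
    y = (X[:, 0] + 0.5 * X[:, 1] > shift).astype(int)
    return X, y

# Group A dominates the training set; group B is barely present.
Xa, ya = make_group(2000, shift=0.0)
Xb, yb = make_group(50, shift=2.0)
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# Evaluate on fresh, equally sized samples from each group.
Xa_test, ya_test = make_group(1000, shift=0.0)
Xb_test, yb_test = make_group(1000, shift=2.0)
print("accuracy on group A:", model.score(Xa_test, ya_test))
print("accuracy on group B:", model.score(Xb_test, yb_test))
```

On a typical run, the well-represented group scores nearly perfectly while the underrepresented group does noticeably worse, purely because of what the training set contained.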

In another session, Conrad Tucker, associate professor at Penn State, presented a new technology that detects a patient’s heartbeat using AI facial recognition. As shown on their website, Video Vitals uses an RGB camera that amplifies the colors and plots relevant points on the vascular areas of the face, specifically the patient’s cheeks and between the eyebrows. The system detects minute changes of color in the user’s face, which indicate their heartbeat. This technology is a great first step in gathering vital information about patients in a remote medical environment.
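The underlying idea is what is usually called remote photoplethysmography: average a color channel over a patch of skin in each video frame, then find the dominant frequency of that signal within the plausible heart-rate band. The real Video Vitals pipeline is surely more sophisticated, but here is a minimal Python sketch of the concept; the cheek-region coordinates, frame rate, and test signal are invented for illustration:

```python
# Minimal sketch of the rPPG idea behind heart-rate-from-video
# (not the actual Video Vitals algorithm): average the green channel
# over a cheek patch per frame, then find the dominant frequency
# in the normal heart-rate band (0.7-3 Hz, i.e., 42-180 bpm).
import numpy as np

FPS = 30  # assumed camera frame rate

def cheek_signal(frames, rows=slice(60, 90), cols=slice(40, 70)):
    """Mean green-channel value of a (hypothetical) cheek region per frame."""
    return np.array([f[rows, cols, 1].mean() for f in frames])

def estimate_bpm(signal, fps=FPS):
    """Dominant frequency of the detrended signal, restricted to 0.7-3 Hz."""
    sig = signal - signal.mean()
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    power = np.abs(np.fft.rfft(sig)) ** 2
    band = (freqs >= 0.7) & (freqs <= 3.0)
    return 60.0 * freqs[band][np.argmax(power[band])]

# Demo with synthetic frames: a 72 bpm (1.2 Hz) flicker buried in noise.
rng = np.random.default_rng(0)
t = np.arange(10 * FPS) / FPS
frames = [
    np.full((120, 120, 3), 128.0)
    + 0.5 * np.sin(2 * np.pi * 1.2 * ti)
    + rng.normal(0, 0.2, (120, 120, 3))
    for ti in t
]
print("estimated heart rate:", round(estimate_bpm(cheek_signal(frames)), 1), "bpm")
```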

[Image: The technology in action]

You can probably already see the challenge of using this technology, though: given the vast differences in facial structures, skin tones, facial adornments, use of cosmetics, facial hair, glasses, and other ways people change their appearance, how can this system accurately detect something so vitally important when its presentation varies so widely from person to person?

The answer to this conundrum, of course, is enough training of the AI. Conrad Tucker and his team have traveled, and continue to travel, the world, capturing faces and heart rates and training their system to take as many factors into account as possible. For instance, he and his team hadn’t thought to take the bindi—the colored dot applied to the center of the forehead, worn by some people in the Indian subcontinent—into account when training the AI. A trip to India brought this oversight to their attention, and now the AI can recognize the adornment. The system had been biased until it “learned” how to handle this subtle yet significant factor affecting an entire population of people.
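The bindi story is really a story about dataset coverage. One toy way to make that kind of gap visible (my sketch, not the team’s actual tooling) is to audit the training metadata for attribute values that are missing or rare:

```python
# Toy dataset-coverage audit (illustrative only): count how often each
# attribute value appears in the training metadata and flag values that
# are missing or underrepresented, such as "bindi" before the India trip.
from collections import Counter

def coverage_report(samples, attribute, expected_values, min_count=50):
    """Flag expected attribute values that are rare or absent in the data."""
    counts = Counter(s.get(attribute, "unknown") for s in samples)
    for value in expected_values:
        n = counts.get(value, 0)
        status = "OK" if n >= min_count else "UNDERREPRESENTED"
        print(f"{attribute}={value}: {n} samples ({status})")

# Hypothetical metadata records for captured face videos.
training_metadata = [
    {"skin_tone": "III", "facial_adornment": "none"},
    {"skin_tone": "V", "facial_adornment": "glasses"},
    # ... thousands more records ...
]
coverage_report(training_metadata, "facial_adornment",
                expected_values=["none", "glasses", "facial_hair", "bindi"])
```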

[Image: Indian eyes with a bindi]

Technology and Women in Engineering

How do these presentations fit in with a conference for women? Good question.

Sexism (and racism, and ageism, and all the other -isms) is the result of oversimplifying a complex system. People who engage in discriminatory behavior may not understand nuance; they live in a binary world, incapable of acknowledging and appreciating the complexity of the world around them.

People and machine learning systems are more likely to have biased responses, too, if they have been inadequately “trained”, that is, if they haven't been introduced to a large enough dataset. I posit that any system—human or otherwise—that has been presented with inadequate or insufficient training data, whether IRL or digital, ends up biased in its behavior or its worldview.

Intelligent people and machines have the ability to detect nuance. They have been “trained” with a set of data objective enough for them to perform a function, and they have learned to detect tiny subtleties. Granted, the line drawn between people and AI is the fact that humans can never be truly objective about anything, but at least people can recognize their own subjectivity. Machines should be trained with a set of data large enough to leave as little bias as possible in their image recognition, language processing, or whatever it is they are supposed to do. If a machine learning system can be truly objective, it should be able to recognize when its own training has incorporated a bias—just as humans can, if they choose to.

Just as we can’t predict how a feather will fall in the wind, there is no way to evaluate literally all the factors that go into making any kind of (morally ambiguous) decision. Chaos theory alone makes sure of that (see my blog post about it!). Even with adequate training, AI lacks common sense and has no moral compass, which brings us to a moral precipice: we are looking over the edge as AI becomes as ubiquitous as electricity.

 

Thank you to Cadence for sending me to this conference, which allowed me to think more about this stuff! If you're interested in exploring more moral questions regarding AI, also see my blog post, Moral Machines, published last October.

—Meera

Tags:
  • IEEE WIE ILC
  • complexity
  • Cadence on the Beat
  • gender
  • ethics
  • machinelearningdeeplearning
  • saving the world