So I just completed (and graduated!) AI guru Andrew Ng's AI for Everyone course on Coursera. I liked it. A lot. It answered many of the lingering questions I had about AI like when the rocketship will land, and when we should plan for complete annihilation by the little green men with those sweet phaser guns and no clothes on. Wait. Wrong cartoon. When we can expect these hundreds of millions of jobs worldwide to be stolen by AI inducing mass hysteria, blocks-long bread lines, insurmountable hipster unemployment, and a digital apocalypse that forces us all back to the safety of those Princess rotary phones to avoid mass hackery by "them bots." I can say, unequivocally, calm TF down. It's not that deep...yet.
We are only at the advent. Which means we still have tons to learn and perfect. We still have teams to build, AI strategies to create, and structures, guidelines, and standards to establish. Learning how AI works at its most granular level has eradicated most of my fears about AI and how businesses will adopt and eventually embrace it as an integral part of their organizations. Let me reiterate: most of my fears. Let me explain.
Garbage In, Garbage Out
One of the learning modules in Mr. Ng's course focused on the ethics of AI. If you've read my articles you already know that I have serious concerns about the lack of diversity and inclusion in business as it stands. More specifically, the lack of true willingness by those in power to make change as the disparities don't really affect their wallets or standing, thus creating no real onus to fix the problem. I dive into that in great detail (and candor) in my upcoming book, "(Business) AS I SEE IT."
Mr. Ng notes that much of the data being extracted from the Internet is filled with racial and gender stereotypes and tacit misperceptions, and is producing outputs that are quite biased and can unfairly exclude, for instance, certain job candidates based on these stereotypes. Remember the headlines generated when Amazon fired its resume-reading AI because it was guilty of sexism? I'm curious to know how many women were nexted as a result of that AI before the faulty language was discovered, with hopes that they circled back and reassessed all of those candidates. (I'm confident they did. They're thorough like that.)
So, I'm not going to sugarcoat this and just put it out there straight, no chaser. If we're already having issues with diversity and inclusion in the Engineering ranks, doesn't this have massive implications for AI development? If biases such as gender and racial stereotypes are being extrapolated from that unchallenged data, then fed into AI development by a pretty non-diverse group of Engineers who likely aren't as acutely aware of these biases due to the lack of diversity in their ranks, won't this only exacerbate the D&I issue? Unless specific roles and code are created to detect and "zero out" these biases, they'll be deeply embedded in the source code of a potentially ubiquitous product.
Mr. Ng discussed a few analogies from a research paper by Tolga Bolukbasi, et al. with regard to existing word embeddings learned from data already on the Internet. Here's an example without diving too deeply into the minutiae: ask a model trained on Internet text to complete the analogy "man is to computer programmer as woman is to X," and it answers "homemaker."
As you can clearly see, we have a little problem. In short, the word embeddings the Bolukbasi team discovered, which machine learning extrapolated and reasoned from the Internet, rest on an ages-old stereotype about women being homemakers when they are just as capable of being (and are!) Computer Programmers, and the reverse roles are completely plausible.
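For the curious, here's roughly how that analogy trick works under the hood. This is a toy sketch, not code from the course or the paper: the vectors are made up for illustration (real embeddings like word2vec have hundreds of dimensions learned from billions of words), but the arithmetic is the real mechanism.

```python
import numpy as np

# Hand-made toy vectors purely for illustration. The first dimension plays
# the role of a "gender" axis that a real model would learn implicitly
# from biased Internet text.
embeddings = {
    "man":        np.array([ 1.0,  0.1, 0.3]),
    "woman":      np.array([-1.0,  0.1, 0.3]),
    "king":       np.array([ 1.0, -0.7, 0.5]),
    "queen":      np.array([-1.0, -0.7, 0.5]),
    "programmer": np.array([ 0.9,  0.8, 0.2]),  # biased: leans "male"
    "homemaker":  np.array([-0.9,  0.8, 0.2]),  # biased: leans "female"
}

def cosine(a, b):
    """Cosine similarity: how closely two vectors point the same way."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def analogy(a, b, c):
    """Solve 'a is to b as c is to ?' via vector arithmetic: b - a + c."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    candidates = {w: v for w, v in embeddings.items() if w not in (a, b, c)}
    return max(candidates, key=lambda w: cosine(candidates[w], target))

print(analogy("man", "king", "woman"))        # -> 'queen' (harmless)
print(analogy("man", "programmer", "woman"))  # -> 'homemaker' (the problem)
```

The "queen" answer looks like magic; the "homemaker" answer is the same magic faithfully reproducing a stereotype baked into the training data. The math has no opinion either way, which is exactly why humans need to check it.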
The main issue is that the Internet is chock full of these word embeddings harboring subversive stereotypes and biases. Even with a healthy dose of awareness that they exist, it is imperative that we have enough diversity within the ranks of the Engineers creating the AI source code we will all encounter in some form down the road.
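The "zero out" fix mentioned above isn't hypothetical, by the way. One idea from the Bolukbasi paper is, roughly: estimate a gender direction from word pairs like man/woman, then project gender-neutral occupation words so they carry no component along that direction. Here's a minimal sketch using the same kind of toy vectors as before; the numbers are invented and a production version would be considerably more careful.

```python
import numpy as np

def neutralize(v, bias_direction):
    """Remove the component of v lying along the bias direction,
    keeping only the part orthogonal to it (the 'zero out' step)."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return v - (v @ b) * b

# Toy setup: estimate a "gender direction" from a definitional pair.
man   = np.array([ 1.0, 0.1, 0.3])
woman = np.array([-1.0, 0.1, 0.3])
gender = man - woman                     # points along the first axis here

programmer = np.array([0.9, 0.8, 0.2])   # biased: leans toward 'man'
debiased = neutralize(programmer, gender)

print(debiased)  # gender component zeroed; other meaning preserved
```

After neutralizing, "programmer" sits equidistant from "man" and "woman," so the analogy trick above can no longer pair it with a gendered stereotype. The catch, of course, is that someone has to decide which words should be neutral, which is precisely where diverse teams matter.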
Sadly, we're already guilty of these transgressions with early adopter AI bots already in the wild. For instance, has anyone created a voice-activated Assistant named Shaniqua or Chante' or Lupe or Shu Mai or Sangita yet? Nope. Amy, Alexa, Siri, Cortana, Jane, and Clara currently rule the roost. Why not a man's name for the bots? Because females are [stereotypically] "Assistants," and because female voices test most favorably in extensive consumer research. Anyone smell the same implicit bias I'm smelling?
I had a lengthy, blatantly honest conversation with my mother last night re: my upcoming book and what I really think about racial and gender bias in business. After completing Mr. Ng's course I'm worried that we're about to throw gasoline on a fire that's already burning out of control. If we haven't already made serious strides, globally, to eradicate bias, discrimination, and underrepresentation of minority groups in business, the introduction and rapid adoption of AI is going to make the situation exponentially worse. And it will allow companies to hide any number of these biases that benefit (or don't impact) them financially deep within the code, providing a convenient defense when they're eventually called onto the carpet for various forms of discrimination #becauseInternet.

Yes, I want to be optimistic about what miracles AI will provide and how it will quickly revolutionize and benefit business and society in the not-too-distant future. But my gut tells me otherwise. These biases are already inherent in the data being extrapolated from a biased Internet. They're unintentionally perpetuated by a buncha dudes under the gun to push out product and be first-to-market. And execs blithely ignore the rather serious issues of diversity and inclusion because, again, they don't negatively affect the bottom line. Put those together and we're essentially creating an even more nefarious and ubiquitous tool for discrimination, one that will be nearly impossible to course correct once it's released into the wild and numerous specialized versions are created, all with the same biased source code.
"In short, we're fucked," is how my Mom and I concluded our conversation, and we went back to celebrating her birthday and strategizing about her breast cancer surgery, which occurred without issue today. By a FEMALE surgeon, no less. (Mom thanks everyone for the well wishes, btw.)
I'm no AI expert. I'm geeking out tho, which is a good (read: dangerous) thing. Now, more than ever, I'm eager to find a way to make an impact during AI's advent: to ensure that companies staff up and attack these biased word embeddings during extrapolation, before going live with AI products, so we can head off instances of unanticipated discrimination like the one experienced by Amazon's recently sunsetted, resume-reading AI. Additionally, I want the business community to truly understand and own their complicity in today's race and gender bias issues and actually do something noteworthy and consistent, starting tomorrow, to avoid this massive, impending trainwreck we're heading toward. Otherwise, we are fucked. All of us. And headed for a high-tech civil war of epic proportions. Call me an alarmist if you want to. I'll happily accept the moniker. But after almost 27 years in the C-suite watching the abysmally slow progress we've made in that time with regard to race and gender disparities and bias in business, I can assure you my predictions will prove hauntingly accurate in the not-too-distant future.
I'm curious to know your thoughts, worries, and concerns in the comments below. Also, if you're an AI Engineer or expert, please chime in and allay any fears and/or challenge anything you may feel is unjust or just WRAWNG in anything that I've written above. I'm here for discourse and TRUTH. Not simply to be right. We all deserve a say, which gives us a wide array of perspectives from which we can form our own, informed opinions. As always, keep it classy. Trolls will get blocked. Pants-ed first, then blocked. I have that power.
I highly recommend Andrew Ng's course AI for Everyone. Find it here:
It's informative, well worth the $50, and will put you well ahead of the majority of people on this platform with regard to basic, working knowledge of AI. Knowledge is power. And, in this case, knowledge is survival. EAs, ask your execs if you can expense this as continuing education. It will benefit you both, greatly.