Human lives have been affected by artificial intelligence (AI) in large and small ways. One could encounter AI while applying for a credit card or have a routine question answered by a chatbot.
It’s become a cliché that more information is produced every year than in all of civilization before it. It’s true that there is a vast sea of data, one that may hold the keys to helping people work, transact, and live better. But there’s too much of it for mere humans to synthesize, which is why AI and machine learning have become essential.
AI is far from perfect. Paradoxically, given the popular belief that it’s destined to be smarter than humans, machine learning is only as good as it’s programmed to be. And though smart people are doing the programming, they tend to be too similar in outlook and demographics, which builds bias into the results and limits the problems AI can solve. Much like Hollywood, politics, corporate boards, and many other areas of life, AI has a major representation problem.
Years ago, a viral news story explained how Google image search learned to identify cats, not from file metadata or because researchers taught it what a “cat” was, but through pattern recognition in the images themselves, successfully distinguishing cats from not-cats.
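For a rough sense of how pattern recognition alone can separate two kinds of images, here is a minimal, purely illustrative sketch (not Google’s system): an unsupervised clustering algorithm groups toy “images” by their pixel values, without ever being given a label. The striped-versus-plain images are hypothetical stand-ins for cats and not-cats.

```python
# Minimal sketch (not Google's actual system): clustering toy "images" by pixel
# patterns alone, with no labels or metadata, to show how structure can emerge.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

def make_image(kind, size=16):
    """Generate a toy 16x16 grayscale image; 'striped' vs 'plain' stands in for cat vs not-cat."""
    img = rng.normal(0.5, 0.1, (size, size))
    if kind == "striped":
        img[::2] += 0.4  # add a repeating pattern to every other row
    return img.ravel()

# Build an unlabeled dataset: half striped, half plain, then shuffle.
images = [make_image("striped") for _ in range(100)] + [make_image("plain") for _ in range(100)]
true_kind = np.array([1] * 100 + [0] * 100)
order = rng.permutation(len(images))
X, true_kind = np.array(images)[order], true_kind[order]

# K-means sees only pixels, yet groups the two visual patterns apart.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
agreement = max((clusters == true_kind).mean(), (clusters != true_kind).mean())
print(f"Cluster/pattern agreement: {agreement:.0%}")  # typically close to 100%
```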
Such computing talents are the basis for plenty of the machine learning in use today. A website chatbot doesn’t really understand the question, “How much is my insurance premium at age 40?” But it parses the terms “how much,” “premium,” and “40” to infer the intent of the question, referring to databases or libraries for a coherent answer. Ninety percent of the time it’ll be correct, freeing up customer-service staff to field more complex inquiries (and reducing wait times).
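That intent-matching idea can be sketched in a few lines. This is a hedged illustration, not any real insurer’s chatbot; the intents, keywords, and premium table below are invented.

```python
# Hedged sketch of keyword-based intent matching; the intents, keywords, and
# premium table are invented for illustration, not any real insurer's system.
import re

INTENTS = {
    "premium_quote": {"much", "premium", "cost", "price"},
    "claim_status": {"claim", "status", "progress"},
}

PREMIUM_BY_AGE = {30: 95, 40: 120, 50: 160}  # hypothetical monthly premiums

def answer(question: str) -> str:
    words = set(re.findall(r"[a-z]+", question.lower()))
    # Score each intent by how many of its keywords appear in the question.
    scores = {intent: len(words & keywords) for intent, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    if best == "premium_quote" and scores[best] > 0:
        age_match = re.search(r"\b(\d{2})\b", question)
        if age_match:
            price = PREMIUM_BY_AGE.get(int(age_match.group(1)))
            if price:
                return f"At age {age_match.group(1)}, your estimated premium is ${price}/month."
    return "Let me connect you with an agent."  # fall back to a human

print(answer("How much is my insurance premium at age 40?"))
```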
Anyone who has ever had a credit card stopped or received a verification call from a bank after buying from a shipper in China has experienced one of AI’s most important applications. A financial fraud algorithm knows the buyer has been spending money in his or her local area for weeks, so acting on the assumption that nobody can be in two places at once, the algorithm flags the Chinese purchase as suspect.
The buyer then explains the situation to the bank, it clears the purchase, and the algorithm learns a new rule. The next time a purchase at the same website shows up, the algorithm will assume it’s legitimate, having incorporated new information about people’s shopping habits into its “thinking.”
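The flag-then-learn loop the buyer experiences might look something like this simplified sketch. The home region, merchant name, and trust rule are assumptions made for illustration; real fraud systems weigh many more signals.

```python
# Minimal sketch of the fraud-flagging logic described above; thresholds,
# merchant names, and the feedback rule are illustrative assumptions.
home_region = "Minneapolis, US"
trusted_merchants = set()  # grows as the cardholder confirms flagged purchases

def review(purchase):
    """Flag a purchase that breaks the 'nobody is in two places at once' pattern."""
    if purchase["merchant"] in trusted_merchants:
        return "approved"
    if purchase["region"] != home_region:
        return "flagged"
    return "approved"

def confirm_legitimate(purchase):
    """Cardholder tells the bank it was genuine; the system learns a new rule."""
    trusted_merchants.add(purchase["merchant"])

order = {"merchant": "shenzhen-electronics.example", "region": "Shenzhen, CN", "amount": 89.0}
print(review(order))       # flagged: outside the usual spending area
confirm_legitimate(order)  # buyer explains the situation to the bank
print(review(order))       # approved: the same merchant is now trusted
```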
Medicine and the sciences produce and collect reams of data no human could synthesize effectively, but AI systems fed that data have already detected breast cancers in mammography scans, identified the genetic causes of diseases, and even predicted responses to cancer immunotherapy.
Then there’s predictive marketing, which forecasts what groups of customers are likely to respond to, based on what they’ve done before, and advises advertisers accordingly.
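A bare-bones version of that idea: train a model on what customers did before and use it to estimate who is likely to respond next. The features and data below are invented purely for illustration.

```python
# Illustrative sketch only: predicting which customers are likely to respond to
# an offer based on what they did before. All features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Past behavior: [purchases last year, emails opened]; label: responded to a past offer.
X = rng.integers(0, 30, (200, 2)).astype(float)
y = (0.05 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(0, 0.5, 200) > 1.5).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new group of customers and target the most promising ones.
new_customers = np.array([[25.0, 20.0], [2.0, 1.0]])
print(model.predict_proba(new_customers)[:, 1])  # estimated likelihood of responding
```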
But there’s an inherent challenge to it all. As Tonya Custis, PhD, director of AI research at Autodesk Research, says, “No tech is neutral.” She’s referring to how important it is that technology be built by people from all walks of life, so that not only the usual STEM markers of gender and ethnicity but also religious background, cultural experience, and even income bracket check one another’s biases.
It’s a common myth that technology is morally neutral, like money or evolution, and that it can’t be inherently good, evil, or discriminatory—it might only appear so due to the intent of its user.
Consider an AI-designed bus route. Once upon a time, the engineers and programmers were all white males in Silicon Valley who drove luxury cars and knew nothing about the culture, needs, or infrastructure of a city’s bus services. They would have made assumptions up front that a young Black woman or an elderly retiree would have spotted as completely wrong. Biases against the way real people use buses would have been built in from the start, and the growing algorithm would never learn better, more accurate information.
An algorithm that has been designed beautifully might work perfectly during testing but then behave very differently when applied to different populations in the real world. Such shortcomings are why AINow came to be. The New York University institute is producing interdisciplinary research and public engagement to help ensure AI is accountable to the communities and contexts where it’s applied.
Bias in AI is not a trivial problem. The findings of AINow’s 2019 report on discrimination in the field are quite damning:
“There is a diversity crisis in the AI sector across gender and race.”
“The AI sector needs a profound shift in how it addresses the current diversity crisis.”
“The overwhelming focus on ‘women in tech’ is too narrow and likely to privilege white women over others.”
“The use of AI systems for the classification, detection, and prediction of race and gender is in urgent need of re-evaluation.”
Most disturbing is that discrimination issues are not just a systemic problem; some in the industry want to deliberately undermine solving them. AINow reports, “We observe a small but vocal countermovement that actively resists diversity in the industry and uses arguments from biological determinism to assert that women are inherently less suited to computer science and AI.” Clearly, AI needs work.
But why is diversity the answer? The whole point of machine learning is that it teaches itself, isn’t it? Program a few examples into the algorithm, establish some false-positive examples to watch out for, click “start,” and the result is a smart and inclusive application—right?
Actually, Custis says, the axiom “garbage in, garbage out” very much applies here—even more so than in traditional programming because of how fast biased results can scale. “AI is pretty dumb; it only learns what we show it,” she says. “It’s an interdisciplinary sport. Even in normal programming, when you have two people locked in a room writing a bunch of rules, it’s probably a bad idea. In AI or machine learning, the same problem applies; it’s just that most people don’t understand the data can be biased to begin with and the decisions we’re letting software make affect different people differently.”
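Custis’s point about garbage in, garbage out can be made concrete with a toy example: train a model on historical decisions that were biased against one group, and it faithfully reproduces that bias for equally qualified people. Everything below is synthetic and purely illustrative.

```python
# Hedged toy example of "garbage in, garbage out": a model trained on biased
# historical decisions reproduces that bias. Groups and numbers are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
qualification = rng.normal(0, 1, n)   # equally distributed in both groups
group = rng.integers(0, 2, n)         # 0 and 1 are arbitrary demographic groups

# Historical decisions: group 1 needed a much higher score to be approved.
threshold = np.where(group == 1, 1.0, 0.0)
approved = (qualification > threshold).astype(int)

model = LogisticRegression().fit(np.column_stack([qualification, group]), approved)

# Two equally qualified applicants from different groups: the model copies the old bias.
same_score = np.array([[0.5, 0], [0.5, 1]])
print(model.predict_proba(same_score)[:, 1])  # group 1 gets a much lower approval probability
```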
But gender, age, and racial diversity isn’t achieved by throwing everything into the mix and seeing what sticks; it’s about carefully managing the viewpoints allowed in. “Models that have been trained on the internet are seeing a lot of different data examples, but a lot of it ends up being hate speech or gender discrimination, stuff you wouldn’t want in your model,” Custis says. “There’s a big movement right now toward more curated data. You have to curate for diversity.”
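In code, “curating for diversity” can be as simple as two passes over the raw data: drop examples containing content you never want the model to learn, then rebalance so no one group dominates. The blocklist and records below are placeholders, not a real pipeline.

```python
# Sketch of the "curated data" idea: filter out unwanted examples and rebalance
# the rest before training. The blocklist and records are illustrative only.
import random

BLOCKLIST = {"slur_a", "slur_b"}  # placeholder tokens for content you would exclude

raw_examples = [
    {"text": "great customer question", "group": "A"},
    {"text": "contains slur_a", "group": "A"},
    {"text": "another useful example", "group": "B"},
] * 50 + [{"text": "useful example", "group": "B"}] * 10

# Step 1: drop examples containing disallowed content.
clean = [ex for ex in raw_examples if not (BLOCKLIST & set(ex["text"].split()))]

# Step 2: rebalance so no single group dominates the training set.
by_group = {}
for ex in clean:
    by_group.setdefault(ex["group"], []).append(ex)
smallest = min(len(v) for v in by_group.values())
curated = [ex for group in by_group.values() for ex in random.sample(group, smallest)]

print(len(raw_examples), "raw ->", len(clean), "clean ->", len(curated), "curated")
```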
There are many examples proving why Custis is right, such as the discovery in 2016 that an AI algorithm calculating the likelihood of released prisoners reoffending was clearly biased against Black people.
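Audits like the one that surfaced that bias typically compare error rates across groups, for example the false-positive rate: how often people who did not reoffend were labeled high risk. The sketch below uses synthetic data and an intentionally biased scorer just to show the measurement.

```python
# Hedged sketch of the kind of audit that exposes recidivism-score bias: compare
# false-positive rates across groups. The predictions and labels here are synthetic.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
group = rng.integers(0, 2, n)       # 0/1 stand in for two demographic groups
reoffended = rng.integers(0, 2, n)  # ground truth (synthetic)

# A biased scorer: more likely to label group 1 "high risk" at the same true rate.
predicted_high_risk = (rng.random(n) < np.where(group == 1, 0.6, 0.3)).astype(int)

for g in (0, 1):
    mask = (group == g) & (reoffended == 0)  # people who did NOT reoffend
    fpr = predicted_high_risk[mask].mean()   # how often they were wrongly labeled high risk
    print(f"Group {g}: false-positive rate {fpr:.0%}")
```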
And because machine learning is a black box, it’s not clear how it’s arriving at discriminatory decisions. Software programmer David Heinemeier Hansson (of Ruby on Rails fame) and his wife, Jamie, applied for Apple credit cards and were shocked when he was approved for a credit limit 20 times higher than Jamie’s.
When they questioned provider Goldman Sachs, they were told credit limits were calculated by AI; nobody had the power to peer behind the curtain and determine why the algorithms considered Hansson’s wife such an outsize risk. When the scandal broke, Goldman Sachs told the media—apparently, with a straight face—that credit decisions were based on creditworthiness alone, not gender, age, sexual orientation, or race.
But there’s good news, according to Custis. Society has a unique opportunity to get diversity in machine learning right—and the time is now. “We’re super lucky,” she says. “There’s so much good AI talent out there. There are often a number of qualified people to choose from when hiring.”
Having established the benefits of diversity, what practical steps will lead to true diversity in artificial intelligence? Much in the way companies engage in greenwashing of environmental issues, it’s possible to pay lip service to AI diversity, checking boxes to look good rather than making sure algorithms are getting the best information.
First, it’s important to recognize that machine learning is expanding. It’s still rooted mostly in math and computer science, but it’s now essential to have product or project managers conversant with the technology to help develop products people want and need. User-experience designers also have to figure out the best way for consumers or workers to interface with AI.
“There are a lot more entry points now,” says Custis, who studied music and linguistics in addition to computer science. “It not only helps diverse teams; it gives more people opportunities.”
In fact, Custis is a model for what to watch for: In addition to diverse people, look for people with diverse skills. “When you work in AI, you’re not working in a vacuum; the data is usually from a specific domain or about something,” she says. “You’re not just doing computer science; you’re applying it to something: architecture, engineering, construction, or media and entertainment. Those are pretty specialized domains.”
But it’s also about more than just hiring people. Custis practices what she calls “attacking the pipeline”: “If you’re only trying to address it at hiring time, it’s often too late,” she says. “It makes more sense to address it earlier, with your interns and grad students and contributors. You also need to get involved with machine learning groups—share knowledge, bring people in for talks, et cetera. Fostering those connections early gives us relationships with people, and addressing the pipeline like that is the most effective. It creates a more organic diversity.”
Above all, practice what you preach. With a team spread across San Francisco, Toronto, and London, Custis is based in Minneapolis. She says that when people interview for positions, they like that her team is led by a woman—she hopes it makes them feel comfortable, like it’d be an inclusive place to work from the get-go. “If people walk into an AI lab, and it’s all Silicon Valley bro dudes, that can be intimidating,” she says.
Today, the field is populated by a lot more than those Silicon Valley bro dudes. It’s up to everyone to find diverse talent and bring them in to make tomorrow’s AI better for everybody.
This article has been updated. It was originally published in September 2017.
After growing up knowing he wanted to change the world, Drew Turney realized it was easier to write about other people changing it instead. He writes about technology, cinema, science, books, and more.