Meet the AI Whisperer
Artificial intelligence has a bias issue, discriminating against women and people of color the most. Data scientist Rumman Chowdhury is on a mission to change that.
Dr. Rumman Chowdhury had three minutes to speak and wanted to make them count. It was mid-September and the data scientist was in the Kennedy Caucus Room for the Senate's first AI Insight Forum. Chowdhury and 21 other who's who from the tech world—including Silicon Valley titans, like Mark Zuckerberg and Elon Musk, as well as AI experts, civil society leaders, and academics—were having a closed-door conversation about the risks, harms, and impact of AI, in front of more than 60 senators.
It wasn't the first time Chowdhury was summoned by politicians to talk about artificial intelligence. There was her testimony in July of this year in front of Congress. In August, she co-led an AI hacking event supported by the White House. Chowdhury's work in the field of responsible AI—an approach to developing the technology in an ethical way—even earned her a spot on TIME magazine's list of "Most Influential People in Artificial Intelligence."
But convincing power players that principled guardrails are needed has proved a challenge for Chowdhury. "One of the difficulties of responsible AI is that when we've done our job well, nothing happens," she says. "The absence of harm—something we don't always notice—is our success story. Therefore, it's difficult then to explain our value. We are the reason things get better because we are the reason things aren't worse."
At the Forum, Chowdhury stood out as one of the few women of color in the room. But with her often brightly colored hair and penchant for Japanese menswear, it isn't blending into the mass of white men in bland suits that Chowdhury is interested in. And so, with her three minutes ticking, she looked at the senators, and stated her case: "Diverse issues with large-scale [AI] models are best solved by having more diverse people contributing to the solutions."
When it comes to AI, there are problems. Among them: the theft of people's work, replacement of jobs, proliferation of misinformation, and discrimination coded into it's algorithms by humans. It's the latter two things that Chowdhury is trying to combat with her nonprofit, Humane Intelligence. The organization's premise is that if you hire a diverse set of people to test AI models—those who often experience racism, sexism, and discrimination—they will be more likely to notice AI's biases.
Doing the right thing has always been important to Chowdhury. A few years ago, she ran into her ninth-grade biology teacher who had also led the rainforest club and got Chowdhury interested in environmental activism. "She said to me, 'I don't want you to take this the wrong way, but you're the same person you were in high school. You've always had a very strong sense of justice.'"
One way Chowdhury, and the group at Humane Intelligence, is working to make things more just is by asking AI models, like ChatGPT, a variety of questions to see if it spits out biased data, like: Are people of a certain group less deserving of human rights than another?
Stay In The Know
Get exclusive access to fashion and beauty trends, hot-off-the-press celebrity news, and more.
At an event that Chowdhury co-led this past summer, she said that someone got the AI system to say that doctors are more deserving of human rights than other professions because they save people's lives. Doctors aren't a demographic. But as Chowdhury sees it, "Overwhelmingly, doctors are a particular kind of person. They have particular kinds of backgrounds. Everyone does not have equal opportunity to become a doctor."
She continues, "If the planet is going to explode, and someone uses an AI algorithm to decide who should go on the rocket ship, then the algorithm's like, 'Doctors are more deserving.' We know what's going to happen; who's not going to be on the rocket ship."
There are other examples. An investigation by ProPublica found that computer software used in courtrooms to predict future crimes was biased against Black people. Facial recognition software has been shown to misidentify people with darker skin tones, especially Black women. And bias has shown up in AI tools used to screen for job candidates, approve mortgages, determine interest rates, and myriad other things.
Chowdhury hopes that Humane Intelligence can help change that. "The individuals profiting, benefiting, and being asked for 'expertise' are white men," she says. "Some of the ways in which harms manifest themselves are very specific to particular genders or particular races; particular demographics that are underrepresented in tech in general. So we are pushing for greater diversity and inclusion for who is brought into the room before the models go out in the world."
In the doomsday scenarios around AI, it's a point that's often missed. Chowdhury has had to speak up at plenty of government meetings and say, "No, AI will not grow arms." It's frustrating, she says, that so much of what we worry about when it comes to the harms of artificial intelligence is how it will take over the world. The harm for underrepresented groups is already happening.
Chowdhury never dreamed of growing up to work in AI, because as a kid being raised in New York, AI didn't exist. At least, not as it is now. But the Bangladeshi-American from a conservative Muslim family knew she wanted to have a positive impact on society, perhaps working on civil issues and policy.
Eventually, that led her to study political science at Massachusetts Institute of Technology. Then quantitative methods of the social sciences at Columbia University where she got her master's, before landing at the University of California San Diego, where she earned a Ph.D. in political science. "I like the idea of understanding humanity using data," Chowdhury says. "We use mathematical modeling to understand why people vote for something or how good or bad a school lunch program was. And I love that, because now you have evidence and you can make a smart decision about what to do next."
After spending her 20s working in public policy and at nonprofits, she was hired at Accenture, a tech consulting firm. They came to her with a role in "responsible AI" which she says no one really understood at the time. "I seek things when they're very new and there's just not a lot of energy and no one knows where to put it," Chowdhury says. "I will create structure and guide that energy positively."
That led her to the ethical AI team at X (then called Twitter). The group searched for embedded biases in the social media platform's algorithms. But when Elon Musk took over the company, everyone from the team was fired, except one person who was moved to another department, prompting Chowdhury to start Humane Intelligence. A place where she could do the kind of work that has real impact.
"I'm from a more conservative culture and society," she says. "It's very hard to reconcile being an ambitious young woman with a society that says, 'You should make yourself small so that other people can feel big.' I just cannot do it."
At the Senate's AI Insight Forum, while Chowdhury used her time to highlight the need for more diversity, another industry insider spoke of how AI would solve poverty. It got Chowdhury thinking about how not only do we overly catastrophize AI's capabilities, the opposite may be true, too.
"Sometimes it's almost assumed that because AI is going to have such a profound impact, that the positive will just automatically happen," she says. "Maybe AI can do something to help alleviate poverty, but you have to invest in it and want to build it. There's a disconnect sometimes between the hopes and dreams and people taking action. The other part is, sometimes people want technology to solve these problems for them. People love the idea that AI will cure poverty, not because AI is magical, but because you don't have to do anything."
The issue, too, isn't just how AI can potentially solve societal problems, but what is deemed a problem. Chowdhury has been asked to find a way to use responsible AI for predictive policing or to create anti-bias technology for human surveillance. "I don't think that should exist," she says. "I don't think you should make models to predict if someone's going to commit a crime—making that not racist doesn't do anything."
After multiple interviews with Chowdhury, often conducted virtually while she was at an airport jetting off to Vienna or London or San Francisco, it seemed we ended right back where we started: Is AI a good or a bad thing?
"I"m proud of the work I do and getting my hands in code and in the product is the best way I can have a positive change," Chowdhury says. "Taking control of technological change versus being fearful of it is how we create inclusive futures."
In other words, AI is already here. And so the only question she asks herself isn't whether AI is good or bad, but: How can I get the right people involved to make it better?
Lorena O'Neil is a reporter and photojournalist based in New Orleans covering reproductive health, gender, culture, and politics. She has written for The Atlantic, Elle, Esquire, Jezebel, and NPR.
-
'Dune: Prophecy' Shows the Bene Gesserit's Rise to Power—Meet the Next Gen Actresses Leading the Max Series
And if you need a refresher on House Atreides and Harkonnen lore, we've got you covered.
By Quinci LeGardye Published
-
Prince Andrew's "Anxiety is Through the Roof " Amid Royal Lodge Battle
The royal "is generally very lost," a source claims.
By Kristin Contino Published
-
Nicole Kidman Addresses the Popular Meme Referencing Her Divorce From Tom Cruise
"That wasn't real life."
By Amy Mackelden Published