
Understanding Gender and Racial Bias in AI, Part 3

January 4, 2021

In my four-part series about gender and racial biases in artificial intelligence (AI) and how to combat them, Part 1 focused on educating UX designers about bias in voice- and facial-recognition software and the AI algorithms and underlying data that power them. Part 2 discussed how our everyday tools and AI-based software such as Google Search influence what we see online, as well as in our design software—often perpetuating our biases and whitewashing our personas and other design deliverables. Now, in Part 3, I’ll provide a how-to guide for addressing your own implicit biases during user research, UX design, and usability testing.

If your 2020 went anything like mine, you may have put up your Black Lives Matter poster, read How to Be an Antiracist, and subscribed to the Code Switch podcast. Perhaps you even watched Coded Bias, this year’s eye-opening documentary on facial-recognition software. (If you haven’t watched it, you should.) Perhaps you then read Anthony Greenwald’s interview with Knowable Magazine and discovered: “Making people aware of their implicit biases doesn’t usually change minds.” (PBS News Hour republished it.) What should you do next?


Take the Implicit-Bias Test

As a first step toward overcoming your own biases, I highly recommend taking the Project Implicit test on Social Attitudes. This is the implicit-bias test. The Kirwan Institute at Ohio State University defines implicit bias, otherwise known as unconscious bias, as “the attitudes or stereotypes that affect our understanding, actions, and decisions in an unconscious manner.” There’s no need to confess your results—either positive or negative—on social media. Just know that, if you pass—that is, show no signs of implicit bias—you’re in the minority. Implicit bias shows up in 70–75% of Americans who take the test, according to Dr. Greenwald. He goes on to say, “Even people with the best intentions are influenced by these hidden attitudes, behaving in ways that can create disparities.”

Using AI to redress these inequalities has failed “because the historical databases used to develop the algorithms that make these decisions turn out to be biased, too.” Thus, structural and institutional sexism and racism are alive and well in America and reflected in AI-driven technologies that help institutions and individuals make decisions about our own and other people’s lives.

Just ask the esteemed AI researcher Dr. Timnit Gebru, co-founder of Black in AI and former co-lead of the Ethical Artificial Intelligence Team at Google. In December 2020, Google forced her to retract her latest research paper and, in her words, “resignated her.” They forced her out of her job. They wanted to enjoy the optics of a young, black, immigrant woman conducting groundbreaking research in artificial intelligence and co-leading their ethics in AI team, without suffering her research findings or professional opinions. Dr. Gebru smashes barriers every day. Most recently, she looked into potential biases in OpenAI’s GPT-3 and Google’s BERT, the natural-language processing models that power predictive text. This is the technology behind the word and phrase prompts that pop up in your text messages and email messages. For many of us, these autogenerated, fill-in-the-blanks suggestions are comical and creepy in equal parts. Imagine you’re playing Mad Libs with HAL 9000.

Unfortunately, in conversations about vulnerable populations—for example, minorities and women—Ars Technica says, the predictive algorithms “could regurgitate racial and other biases that are contained in the underlying training material.” Or worse, the predictive-text software could exacerbate hate speech anywhere people type: in email messages, SMS messages, social-media posts, and in-app messaging conversations.

I cannot stress enough the irony of Dr. Gebru’s firing from Google’s ethics in AI team for drawing the world’s attention to the unethical and potentially dangerous side effects of Google’s algorithms. Former Googler Leslie Miley does not believe Google would have handled the situation the same way if Gebru were a white man. As Miley put it, “You fired a black woman over her private email while she was on vacation. This is how tech treats black women and other underrepresented people.” When the BBC asked Dr. Gebru whether she thinks Google, as an organization, is institutionally racist, she said, “Yes, Google itself is institutionally racist.” Bias begets bias.

Fake It Till You Make It

The good news is that there is some good news. A 2004 metastudy by Dr. Nilanjana Dasgupta [1] showed, “People’s awareness of potential bias, their motivation, and opportunity to control it, are a few of the factors that influence whether attitudes translate into action.” Dasgupta says, “Implicit prejudice and stereotypes are malleable and … they do not always produce discriminatory action.” In her article, she describes how people’s unwillingness to have others perceive them as prejudiced and their desire to avoid conflict resulted in better, less-explicitly biased behavior toward minority and other disadvantaged groups—including the overweight, elderly, and LGBTQ communities. Even if you are not a fully woke, hyper-politically correct, committed antiracist, you can turn your attitudes into actions. In other words, fake it till you make it.

Here are some concrete tips for actively combating your own implicit or unconscious bias.

Recruit Outside Your Usual Base

I’ve discussed recruiting outside your usual base in Part 1 and Part 2 of this series, but it’s worth mentioning again. As UX researchers and designers, we must force ourselves and our product teams outside of our usual, mostly white participant pools. We must listen to every group we are claiming to represent and every group whose needs we are attempting to meet. If that means driving to a community center in a historically black neighborhood and navigating slippery sidewalks while heavily pregnant on a bitter New England afternoon, just do it. We need to hear these voices. This is not an act of bravery. It’s a job requirement. (Pregnancy and hostile New England weather are optional.)

Record and Transcribe Your Research Sessions

Once you’ve recorded and transcribed your focus groups, interviews, field studies, and usability studies, let your findings sit for a spell. Work on something else for just long enough to forget who said what. Ideally, you should anonymize the transcripts by removing participants’ names from these conversations. You want to separate the responses from the individual participants and the words from the faces and voices of the people who spoke them. You’ll be more inclusive and less likely to report results that reflect your implicit biases.
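
If your transcription service doesn’t anonymize for you, even a few lines of script can swap names for neutral participant IDs before you start coding the data. Here is a minimal sketch in Python; the participant names, file names, and simple find-and-replace approach are hypothetical stand-ins for whatever your own study and tools look like.

# Minimal sketch: replace participant names with neutral IDs in a transcript.
# All names and file names here are hypothetical examples.
participants = {
    "Alicia Chen": "P1",
    "Jamal Washington": "P2",
    "Maria Lopez": "P3",
}

with open("session1_transcript.txt", encoding="utf-8") as infile:
    transcript = infile.read()

for name, participant_id in participants.items():
    # Replace full names first, then first names used on their own.
    transcript = transcript.replace(name, participant_id)
    transcript = transcript.replace(name.split()[0], participant_id)

with open("session1_transcript_anonymized.txt", "w", encoding="utf-8") as outfile:
    outfile.write(transcript)

Review the output before you share it, because simple string replacement won’t catch nicknames, misspellings, or identifying details in the conversation itself.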

If you don’t have time to let your transcripts simmer, ask teammates or colleagues who did not observe your research sessions to score your transcripts and analyze your findings. An unbiased set of eyes and ears can make a big difference to your research. Teammates who did not personally interact with participants won’t bring their preconceived notions or biases to the analysis process. Doing this requires some trust and a lot of collaboration—both of which would likely improve the accuracy and validity of your research report.

Run Your Transcripts Through a Natural Language Processor

If you have a savvy developer sitting near you, ask her to run your transcripts through a natural-language processing (NLP) tool. She can score your results using the text-sentiment analysis services from Microsoft or Google. Despite all my warnings about bias in machine learning and its underlying data, I mostly trust these text-sentiment APIs. These AI-driven tools take human bias out of the initial scoring. They rate words and phrases—or sentences and paragraphs, depending on what you give them—on a scale from -1, most negative, to +1, most positive. The underlying data, in this case, is essentially a sentiment dictionary. Yes, I know the dictionary isn’t perfect either, but this is a known problem, and it is being addressed. No tool is perfect, but an NLP sentiment analysis of your transcripts quickly tells you whether your research participants feel good or bad about your product or service. Treat these scores as a supplement to, not a replacement for, human evaluation of your findings. As a bonus, when you harness this technology to innovate your UX process, your workmates may finally recognize you as the nerd you truly are.
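
To make this concrete, here is a minimal sketch that uses Google’s Cloud Natural Language client library for Python, which returns sentiment scores in that same -1.0 to +1.0 range. It assumes you’ve installed the google-cloud-language package and configured API credentials; the transcript file name is a hypothetical placeholder, and Microsoft’s Azure Text Analytics service follows a similar pattern.

# Minimal sketch: score an anonymized transcript with the Google Cloud
# Natural Language API. Requires the google-cloud-language package and
# application credentials; the file name is a hypothetical placeholder.
from google.cloud import language_v1

client = language_v1.LanguageServiceClient()

with open("session1_transcript_anonymized.txt", encoding="utf-8") as infile:
    text = infile.read()

document = language_v1.Document(
    content=text,
    type_=language_v1.Document.Type.PLAIN_TEXT,
)
response = client.analyze_sentiment(request={"document": document})

# The document-level score runs from -1.0 (most negative) to +1.0 (most positive).
print(f"Overall sentiment: {response.document_sentiment.score:+.2f}")

# Sentence-level scores show which comments drove the overall number.
for sentence in response.sentences:
    print(f"{sentence.sentiment.score:+.2f}  {sentence.text.content}")

Whichever service your developer chooses, keep the sentence-level scores alongside the document-level score; they help you trace an overall number back to the specific comments that drove it.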

Keep an Open Mind

Even if you’re flying solo on these endeavors, be sure to listen to, capture, and address user needs and frustrations—regardless of whether you share them. Every idea, suggestion, complaint, and question is valid, even when you feel like your usability-test participants punched you in the gut and called your baby ugly. Turn their casual criticisms into next-generation business solutions. Don’t turn your nose up at problems for which you can’t imagine a solution. Sometimes seemingly unsolvable problems produce the most innovative solutions. Anyway, that’s what brainstorming sessions are for. Try to avoid prescreening your results by imagining all the things your boss or teammates might discount, disregard, or just plain discard. Of course, your colleagues might surprise you. Every once in a blue moon, your development lead might say something is easy to implement. Even if he’s never done that before, there’s a first time for everything. Don’t anticipate rejection. Just play back what you heard in its entirety.

Bring Back the Demographic Data

Once you’ve scored the unidentified voices in your transcripts, go back to see who said what. Add demographic data such as age, race, and gender to your research report. One of two things might happen: Either you’ll discover that your white users and nonwhite users share the same goals and needs, or you’ll discover unique opportunities among your nonwhite customers. In the first case, you truly are designing for everyone. In the second case, you have an opportunity to identify niche jobs to be done among minority consumer groups. This can simultaneously expand your professional design challenges and your organization’s sales in previously untapped markets. Talk about a win-win.
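
A lightweight way to run this comparison is to join your per-participant sentiment or rubric scores back to a participant table and look at group averages. Here is a minimal sketch using pandas; every column name, participant ID, and score is a hypothetical example, and with typical UX sample sizes these averages are directional signals rather than statistics.

# Minimal sketch: rejoin demographic data with per-participant scores
# and compare group averages. All values are hypothetical examples.
import pandas as pd

demographics = pd.DataFrame({
    "participant_id": ["P1", "P2", "P3", "P4"],
    "race": ["White", "Black", "Latina", "Asian"],
    "gender": ["F", "M", "F", "F"],
    "age": [34, 52, 27, 41],
})

scores = pd.DataFrame({
    "participant_id": ["P1", "P2", "P3", "P4"],
    "sentiment": [0.45, -0.20, 0.10, 0.35],
})

merged = demographics.merge(scores, on="participant_id")

# Compare average sentiment across demographic groups to see whether
# white and nonwhite participants share the same goals and frustrations.
print(merged.groupby("race")["sentiment"].mean())
print(merged.groupby("gender")["sentiment"].mean())

If the averages diverge, dig into the underlying quotes before drawing conclusions; a handful of participants can’t speak for an entire demographic group.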

In Conclusion

Help all users to find their joy. This is what User Experience should be about. As Don Norman and Jakob Nielsen put it, “The first requirement for an exemplary user experience is to meet the exact needs of the customer, without fuss or bother. Next comes simplicity and elegance that produce products that are a joy to own, a joy to use.” We all know that a joyful user experience is better for business. It feels good, and it’s good for everyone. Frankly, we could all use a little more joy as we say goodbye to this dumpster fire of a year.

Stay tuned for Part 4, the final part of this series in February 2021. Happy new year! 

Endnote

[1] Dasgupta, Nilanjana. “Implicit Ingroup Favoritism, Outgroup Favoritism, and Their Behavioral Manifestations.” Social Justice Research, Vol. 17, No. 2, May 2004.

Principal/Founder, Black Pepper

Brookline, Massachusetts, USA

Sarah Pagliaccio

Sarah is Founder and UX Guru at Black Pepper, a digital studio that provides customer research, UX design, and usability consulting services. Sarah designs complex mobile and desktop Web apps for healthcare, financial-services, not-for-profit, and higher-education organizations. Her focus is on UX best practices, repeatable design patterns, and accessible design solutions that are based on data-driven user research. Sarah researches and writes about bias in artificial intelligence (AI)—harnessing big data and machine learning to improve the UX design process—and Shakespeare. Sarah teaches user-centered design and interaction design at the Brandeis University Rabb School of Graduate Professional Studies and the Lesley University College of Art + Design.
