In my four-part series about gender and racial biases in artificial intelligence (AI) and how to combat them, Part 1 focused on educating UX designers about bias in voice- and facial-recognition software and the AI algorithms and underlying data that power them. Part 2 discussed how our everyday tools and AI-based software such as Google Search influence what we see online, as well as in our design software—often perpetuating our biases and whitewashing our personas and other design deliverables. Now, in Part 3, I’ll provide a how-to guide for addressing your own implicit biases during user research, UX design, and usability testing.
If your 2020 went anything like mine, you may have put up your Black Lives Matter poster, read How to Be an Antiracist, and subscribed to the Code Switch podcast. Perhaps you even watched Coded Bias, this year’s eye-opening documentary on facial-recognition software. (If you haven’t watched it, you should.) Perhaps you then read Anthony Greenwald’s interview with Knowable Magazine and discovered: “Making people aware of their implicit biases doesn’t usually change minds.” (PBS News Hour republished it.) What should you do next?
In this month’s edition of Ask UXmatters, our panel of UX experts discusses how UX design for artificial intelligence (AI) applications differs from UX design for traditional applications. A panelist warns the questioner about the dangers of over-relying on artificial intelligence instead of defining a product that truly meets users’ needs.
Our experts then consider the role of User Experience in the creation of AI applications—especially those that rely on machine learning (ML). Their discussion covers the importance of user advocacy, the value of doing user research, how to avoid bias, what defines high-quality training data, transparency to users, and how to gain users’ trust by ensuring that they feel in control of an AI application. This column concludes with a brief discussion of the need for UX design best practices for AI applications.
Remember the computer-science maxim Garbage in, garbage out? The same holds for big data in artificial intelligence (AI). The historical data we use to train machines in AI research continue to reflect biases that many of us had hoped to relegate to the past. But, if you ask the machines, women belong in the home and Black men belong in prison. So what should you do if your company requires you to design systems that rely on big data that might be faulty or on technologies such as voice or facial recognition that have proven to perpetuate gender and racial biases? Start by understanding the problem so you can avoid the mistakes of the past.
Voice Recognition: The Can-You-Hear-Me-Now? Problem
Most of us now have voice-recognition tools in the palms of our hands. Many of us have them on our kitchen counters, too. Every time you tell Alexa to play music, ask Siri to set a timer, or request directions from Google, you are relying on your Echo, iPhone, or Android phone to recognize your speech. Your device could respond with either perfect accuracy or seemingly capricious inaccuracy. Because these are low-stakes requests, you might either persist or give up, depending on how badly you want to listen to “Blush” by Wolf Alice while stirring your risotto.