In this month’s edition of Ask UXmatters, our panel of UX experts discusses how UX design for artificial intelligence (AI) applications differs from that for traditional applications. A panelist warns the questioner about the dangers of over-relying on artificial intelligence instead of defining a product that truly meets users’ needs.
Our experts then consider the role of User Experience in the creation of AI applications—especially those that rely on machine learning (ML). Their discussion covers the importance of user advocacy, the value of doing user research, how to avoid bias, what defines high-quality training data, transparency to users, and gaining users’ trust by ensuring that they feel in control of an AI application. This column concludes with a brief discussion of the need for UX design best practices for AI applications.
Artificial-intelligence (AI) technology is capable of behaving with human-like intelligence. With recent advances, AI has become more pervasive. Insurance companies use AI in processing claims, and banks rely on it for automated stock trading. People can perform self-checks for skin cancer, using smart apps such as Skinvision or HealthAI-Skin Cancer, or they can interact with intelligent services through user interfaces such as Google Home or Amazon Echo, which are themselves smart because they understand natural-language queries and provide answers in natural language as well.
Most users of AI technologies do not have sufficient insight into their inner workings to understand how they’ve arrived at their outputs. This, in turn, makes it hard for people to trust the technology, learn from it, or correctly predict its behavior in future situations.
In my four-part series about gender and racial biases in artificial intelligence (AI) and how to combat them, Part 1 focused on educating UX designers about bias in voice- and facial-recognition software and the AI algorithms and underlying data that power them. Part 2 discussed how our everyday tools and AI-based software such as Google Search influence what we see online, as well as in our design software—often perpetuating our biases and whitewashing our personas and other design deliverables. Now, in Part 3, I’ll provide a how-to guide for addressing your own implicit biases during user research, UX design, and usability testing.
If your 2020 went anything like mine, you may have put up your Black Lives Matter poster, read How to Be an Antiracist, and subscribed to the Code Switch podcast. Perhaps you even watched Coded Bias, this year’s eye-opening documentary on facial-recognition software. (If you haven’t watched it, you should.) Perhaps you then read Anthony Greenwald’s interview with Knowable Magazine and discovered: “Making people aware of their implicit biases doesn’t usually change minds.” (PBS News Hour republished it.) What should you do next?