
The Sound of Silence: What We’re Not Saying About Siri and Her AI Gal Pals

July 9, 2018

Ask Siri to self-identify, and you’ll get an existential but noncommittal answer. To most consumers’ ears, however, Siri, whose name is Norse for “beautiful woman who leads you to victory,” has a decidedly feminine voice. Siri isn’t alone in the sorority of feminine digital assistants. She joins a lineage that started in 1952 with Bell Labs’ Audrey, which could recognize spoken numbers. Since then, technology companies have produced an array of female digital assistants, including Viv, Alexa, Cortana, and Ooma.

More recently, a couple of male voices have joined the chorus, including Google Assistant’s Voice II and Samsung’s Bixby. But Bixby was lambasted on the Internet as sexist, and Voice II’s very name implies that it is a token feature. Even Siri now has a male-voice alternative, but the name and default setting say it all.

After more than six decades of voice assistants, it’s time to ask an uncomfortable question: Why do we—Americans, anyway—prefer female voices for our digital assistants?


The Digital Voice Default

Ask most UX design professionals why they opt for female voices, and they’ll usually cite practicality. They’ll say that user research proves that everyone wants to get information from a woman, but the data doesn’t actually bear that out. Or they’ll argue that men and women hear female voices more clearly—again, a view that the numbers do not support. They may throw their hands up in frustration, claiming that the voice’s gender is simply a matter of stakeholder preference they have to design around.

While stakeholder preference might sound like a perfectly good reason at first, it hides an ugly reality. To make this clear, let me tell you a story about a talented young woman whom I managed. She designed voice features for our clients’ prototypes. Although she created a voice that was meant to be genderless, the client kept referring to the voice in feminine terms. In other words, he heard what he expected to hear.

However, had that client hailed from a culture other than the United States, he might not have responded well to a female voice assistant. Arabic, Dutch, French, and British iPhones default to a male voice for Siri. Plus, BMW learned the hard way that female voices aren’t always the right route to take when German drivers of its 5 Series vehicles complained about “taking directions from a woman.” Yes, really.

So what’s going on? If female voices are neither universally preferred nor clearer to the listener, why do so many technology companies keep using them by default?

What’s Gender Got to Do with It?

Consider again that different cultures prefer digital voices of different genders. These preferences are learned. What about a society might predispose its members to prefer an assistant that has a female voice?

For the sake of simplicity, let’s think about Americans. What do the female-dominated occupations in the US have in common? They’re all low-paying, low-status roles such as service workers, office secretaries, domestic workers, and teachers. In other words, they’re the helpers, not the leaders.

In fact, a spokesman for Microsoft explained to PC Magazine that helper traits are at the core of its choice to create Cortana with a female voice. In making this decision, Microsoft conducted gender research and found that people associate helpfulness, trustworthiness, and supportiveness with women’s voices, not men’s.

The goal for voice-assistant creators such as Microsoft seems to be for people to forget that they’re speaking with a nonhuman entity. In contrast, when UX designers create voices they intend to be transparently robotic, they audition male voice actors, not female ones.

Take IBM’s Watson, for example. When IBM gave Watson a female persona, users didn’t realize it was a computer at all. IBM ostensibly gave Watson a male voice to eliminate this confusion. But is it possible that Watson’s male voice, speaking for what is arguably the world’s smartest artificial intelligence (AI) technology to date, is meant to subtly convey leader and confident rather than helper and supportive?

If you’re skeptical about that idea, look at how listeners reacted to female producers on “This American Life,” whose speech patterns exhibited vocal fry: a low, raspy tone, especially at the ends of sentences. Host Ira Glass commented that the letters the program received after putting the producers on air were some of the angriest he’d seen. Yet he has never been criticized for his own low, raspy voice in the way his female colleagues have been. The real objection seems to be that, in their lower, more gravelly register, the women’s voices sound assertive and masculine.

Our tendency toward using female-sounding voice assistants and Glass’s observations about vocal fry really demonstrate two sides of the same coin: prejudice against women and their voices.

Intersectionality and Unconscious Bias

Our society’s gender bias stems from women historically being limited to supportive careers. Today, bias against women shows up not only in the voices of our digital assistants, but also all over the media and technology industries.

Just 29 percent of the top films of 2016 featured female protagonists, roughly half the share that starred men. Even video games overwhelmingly cast female characters as assistants rather than leads. Generation after generation, these stereotypes have become deeply ingrained in our society.

The technology sector suffers from a particularly virulent strain of gender bias. Sixty percent of women working in Silicon Valley have experienced unwanted sexual advances, and two-thirds say they’ve been excluded from socializing and networking events.

This sort of sexism stands in stark contrast to the role of a UX designer. Curating the user’s experience requires empathy. Without it, a designer can’t understand the context in which a user experiences a product or service. A UX designer who doesn’t understand the user’s context risks designing their personal biases into the user’s experience, furthering the cycle of prejudice against women. Recognizing that women in the media and technology industries, and voice assistants specifically, are not operating on an even playing field is the first step toward leveling that field.

Why do American companies create virtual assistants that sound like women? The truth lies somewhere in the gray areas of cultural stereotypes, sexism, and prejudice. Do UX designers have the responsibility to take the risk of encouraging users to buck their subconscious beliefs? Is making a bold statement worth manipulating a technology’s user experience? These are questions that Siri can’t answer. But we—as UX designers, consumers, and the members of an unequal society—must. 

Product Designer and UX Consultant at Yeti LLC

San Francisco, California, USA

Jay Amico

Jay is a product designer and UX consultant at Yeti, a product-focused development and design studio in San Francisco. A self-styled UX geek, Jay excels at understanding user habits, perceptions, and behaviors. After working for years in litigation and trial law, Jay attended Hackbright Academy, kicking off a career in software engineering. Jay then worked as a UX designer for Perforce Software and General Assembly before joining the Yeti team.
