
What Is a Good User Experience?

A Better Future

Designing for good in a changing world

A column by Sarah Pagliaccio
October 4, 2021

Welcome to the first edition of my new column: A Better Future: Designing for good in a changing world. My hope is that this column will be a natural extension of the two series I’ve previously written for UXmatters, “Understanding Gender and Racial Bias in AI” and “The State of UX Design Education.” My goal is to continue educating myself and this community on the good, the bad, and the ugly of the future of design, with a focus on the perils of designing in an artificial intelligence (AI)–powered world and what we, as UX designers and researchers, can do to address these challenges.

To get us all on the same page about capital-G Good design, I’m kicking off my column with a discussion of design ethics. This topic feels particularly relevant given the recent news from Menlo Park, California. As I write this, The Wall Street Journal has released The Facebook Files, investigative research that concludes what many of us have suspected for years: Facebook has special rules for elite users. Instagram is toxic for teenage girls. Facebook is an angry place and makes the world an angrier place.


Facebook is not responding to the alarms its employees have raised regarding the treatment of minorities and vulnerable populations in the developing world. None of these revelations should come as a surprise. Sean Parker, the founding president of Facebook, warned us of this very thing in 2017. He admitted that Facebook’s goal is to “consume as much of your time and conscious attention as possible.” The social-media giant is guilty of “exploiting a vulnerability in human psychology.” Parker said that he, Mark Zuckerberg, and Kevin Systrom, co-founder of Instagram, “understood this consciously. And we did it anyway.”

The Ethics of AI and the Big-Five Tech Giants

How could this happen? In the absence of strict government regulations, the men behind the algorithms—“billionaire overlords,” as Parker refers to them and himself—at Facebook, Google, Apple, Amazon, and Microsoft have been left to police themselves. With growth as their only goal, there’s no motivation for these tech giants to slow their roll, take stock, or ask whether they are doing the right thing. If Facebook had wanted to do the right thing, it would have scrapped Instagram for Kids, its product targeting children under thirteen, as soon as executives saw the results of their own research on what Instagram does to young teenagers. The company would have chucked its Ray-Ban Stories smart glasses in the bin when journalists pointed out how these glasses could result in the exploitation of women, children, and other vulnerable populations. (If you’re unconvinced regarding Zuckerberg’s growth goals, check out Ben Grosser’s film Order of Magnitude, in which Grosser compiles every public instance of Zuckerberg talking about becoming more and bigger.)

Are these tech giants considering the ethical impacts of their products? In 2016, Facebook, Google and its DeepMind subsidiary, Amazon, Microsoft, and IBM teamed up to create the nonprofit Partnership on AI. Apple joined the group in 2017. According to The Verge, the group’s two key goals are to educate the public about AI and to collaborate on AI best practices, with input from researchers, philosophers, and ethicists. Even if we overlook the fact that the group should address ethical considerations before selling the benefits of AI to the general public, it has nevertheless been conducting solid research and asking the right questions. Mustafa Suleyman, co-founder of DeepMind and co-chair of the Partnership on AI, has said that the Partnership is committed to an open dialogue about the ethics of AI, while also admitting that the group is not a regulatory body and cannot enforce rules on its member organizations. Even so, the Partnership has published a set of tenets to guide the organization. Sadly, the only way to find these tenets today is to dive into the Internet Archive’s Wayback Machine. Tenet 3 states, “We are committed to open research and dialogue on the ethical, social, economic, and legal implications of AI.” However, it’s impossible to engage in an open dialogue when the guiding principles of these organizations are hidden from public view.

The Health Ethics & Policy Lab published a report in 2018 that mapped and analyzed the state of ethical AI in private companies, research institutions, and governments around the globe. This report describes convergence around five of eleven key principles: transparency, justice and fairness; nonmaleficence, a technocratic do-no-harm policy; responsibility, and privacy. Of the 84 ethical principles and AI guidelines that they’ve identified, organizations in the United States published 20 of them. For-profit companies have written many of these guidelines, including Google, IBM, Intel, and Microsoft, while Amazon, Apple, and Facebook were conspicuously absent from this list. Amazon executives say they are doing work in this area, but are uninterested in discussing what they are doing with anyone outside of Amazon, which is too bad because they do not have a great track record for developing their own AI solutions. For an example, just look at their sexist AI recruiting tool. Facebook claims to have an in-house ethics team, but they are also keeping mum about their work.

Microsoft, the oldest of the big five, has the most mature Guidelines for Human-AI Interaction. These guidelines are really tactics in support of Microsoft’s AI principles, but they are posted on a separate Web site. The first tactic: “Help the user understand what the AI system is capable of doing.” This is impossible. One cannot provide transparency into a thing that is a black box by design. In many cases, even the engineers and data scientists who develop these software systems don’t understand what they can do. The US military’s own report on AI notes the lack of explainability that is common to all AI tools. How could any nonexpert know what the software can do when the dudes who create it can’t explain it?

Speaking of the military-industrial complex, Google’s principles state that they don’t do business with the US military, but they do—or at least their software does—according to reporting from NBC News. Google, Amazon, Microsoft, Dell, IBM, Hewlett Packard, and Facebook all benefit from contracts with governmental agencies, including Immigration and Customs Enforcement (ICE), the Federal Bureau of Prisons (BOP), and the Department of Homeland Security (DHS). They hide the details of these deals in subcontractor agreements, so any mentions of accountability or transparency in their ethics statements are a farce.

Many of us who are doom-scrolling Facebook or Instagram or Googling from our Apple, Microsoft, or Android device are far removed from the real physical harm that these platforms are causing. But we are still being manipulated in obvious and subtle ways. These platforms are harvesting our data whether we realize it or not. Some of us may be more complicit than others, but we are all victims of these privateers.

The Ethics of AI in the UX Design Community

Shortly after our Facebook feeds served up the latest breaking news on Facebook, a long-time friend and colleague and I had a conversation on Facebook about the Facebook Files. (Where else would we have this conversation?) She has spent her “career in and around software development, and there have definitely been clients, or even just features, that teams would not work on for moral reasons.” She remarked, “This [revelation from Facebook] makes dark UX patterns look like child’s play.” Where were all the designers? she wondered.

To answer her question for all UX professionals, here’s a summary of what the UX community has had to say about ethics in research and design in recent years. The good news is that the practice of user research has reached ethical maturity. Although not every UX professional or organization might be practicing ethical user research, there are well-established best practices in this area. Josephine Scott shares real-life dilemmas that arise when conducting big-data research and how to solve them—for example, when doing live A/B testing, conducting large-sample unmoderated testing, or using survey-intercept tools.

Regarding ethics in UX design, enough UX professionals have questioned our role that Jakob Nielsen felt compelled to speak out on “The Role of Design Ethics in UX” at a recent conference. He says that we should never deceive through design. Treating users well makes them loyal customers, which drives long-term business value. If we have to ask whether something is ethical, it probably isn’t. To paraphrase, if something truly is useful, usable, and desirable, we can be honest about it and sell more widgets.

On UXmatters: Juned Ghanchi discusses the challenges of incorporating ethics into his design process and says we can do it without making value judgments. We can balance business needs against equity and accessibility goals. Peter Hornsby talks about the jaded jargon and industry doublespeak that we engage in or have bought into, fooling ourselves into thinking that we’re designing for good. Hornsby adapts Isaac Asimov’s Three Laws of Robotics to UX design:

  1. “A UX designer may not injure a user or, through inaction, allow a user to come to harm.
  2. “A UX designer must meet the business requirements except where such requirements would conflict with the First Law.
  3. “A UX designer must develop his or her own career as long as such development does not conflict with the First or Second Law.”

I love the simplicity and familiarity of Hornsby’s approach, but worry about the welfare of those of us who cannot afford to quit a job because we disagree with the intended or unintended consequences of something we’ve designed.

Chris Kiess looks at the big picture of UX design ethics and divides the problem into three categories: existential values, ill or misdirected intent, and benevolent intent. He then dives into specific design challenges such as dark patterns, influence, distraction, and hidden information. Vikram Chauhan takes Kiess’s discussion a step further and questions at what point our persuasive designs become evil. Both agree that dark patterns have no place in Good UX design. (I think we can all agree that dark patterns have no place in UX design, but they still exist.) Both of these authors also question who is ultimately responsible when things go awry. Is it the UX designers, or does the fault lie with our stakeholders, business owners, project managers, or others who demand alterations to our designs? According to the beating drum of journalists at Harvard Business Review, “Everyone in Your Organization Needs to Understand AI Ethics.” So maybe we’re all to blame.

While our UX design work may be well intentioned and useful, we often lack carved-in-stone standards. Xinru Page and her colleagues have proposed a set of standards that are specific to responsible privacy design in social media. Huatong Sun has written guidelines for cultural sustainability in Global Social Media Design, explaining how to create local-global online networks that are sensitive to their cultural and commercial contexts.

Dorothy Shamonsky has examined the professional code that governs architects and translated their principles to User Experience by placing an emphasis on usability and accessibility. The outcome is a list of proposed standards encompassing accessibility, ergonomics, safety, appropriate attention, movement, beauty, transparency, security, mind, community, and innovations for designing holistic, ethical user experiences. She admits that we need to do more work in this area. Perhaps because we lack one organization to rule us all, we’ll never have just one set of ethical guidelines.

The UX community does not have the kind of organization our graphic-design partners have in AIGA. The folks at AIGA took a stand 20 years ago when they published the first edition of Design Business and Ethics, which is now in its third edition. This publication includes standards of professional practice that outline the designer’s responsibility to their clients, to other designers, and, even more importantly, to the public, our broader society, and the environment. These standards of practice feel both fresh and prescient, offering commandments to “avoid projects that result in harm to the public, … communicate the truth in all situations, … and respect the dignity of all audiences.”

It’s this last point that really sticks with me. Treating people with dignity is what matters above all else. Dignity appears on the list of eleven principles in the global landscape of ethics from the Health Ethics & Policy Lab, to which I referred earlier. “Dignity is believed to be preserved if it is respected by AI developers in the first place.” (The emphasis is mine.) This sums up what many of us are already saying: Good design is the result of good intentions. Only by designing and creating useful, usable, and desirable products that treat everyone with dignity—regardless of gender, race, culture, ethnicity, and ableness—can we do good design. 

Principal/Founder, Black Pepper

Brookline, Massachusetts, USA

Sarah Pagliaccio

Sarah is Founder and UX Guru at Black Pepper, a digital studio that provides customer research, UX design, and usability consulting services. Sarah designs complex mobile and desktop Web apps for healthcare, financial-services, not-for-profit, and higher-education organizations. Her focus is on UX best practices, repeatable design patterns, and accessible design solutions that are based on data-driven user research. Sarah researches and writes about bias in artificial intelligence (AI)—harnessing big data and machine learning to improve the UX design process—and Shakespeare. Sarah teaches user-centered design and interaction design at the Brandeis University Rabb School of Graduate Professional Studies and Lesley University College of Art + Design.
