Designing Ethical Experiences: Understanding Juicy Rationalizations

June 23, 2008

From “The Big Chill”: [1]

Michael: “I don’t know anyone who could get through the day without two or three juicy rationalizations. They’re more important than sex.”

Sam Weber: “Ah, come on. Nothing’s more important than sex.”

Michael: “Oh yeah? Ever gone a week without a rationalization?”

Designers rationalize their choices just as much as everyone else. But we also play a unique role in shaping the human world by creating the expressive and functional tools many people use in their daily lives. Our decisions about what is and is not ethical directly impact the lives of a tremendous number of people we will never know. Better understanding of the choices we make as designers can help us create more ethical user experiences for ourselves and for everyone.

In Part 1 of this series on Designing Ethical Experiences, “Social Media and the Conflicted Future,” I explored the familiar dynamic in which design mediates unresolved conflicts between business stakeholders and users by making unethical compromises and looked at changes in technology and culture that make this unhealthy dynamic more likely in the future. In Part 2, “Designing Ethical Experiences: Some Practical Suggestions,” I outlined some practical techniques for effectively resolving ethical conflicts during our design efforts by adapting existing user experience tools and methods.


In this third installment, I’ll explore the surprising mix of misperceptions, biases, and cognitive mechanisms underlying the decisions people make when facing ethical choices, which unfortunately encourage us to come up with juicy rationalizations for unethical decisions.

Not As Ethical As We Think

“People believe they will behave ethically in a given situation, but they don’t. They then believe they behaved ethically when they didn’t. It is no surprise, then, that most individuals erroneously believe they are more ethical than the majority of their peers.”—Ann E. Tenbrunsel, Kristina A. Diekman, Kimberly A. Wade-Benzoni, and Max H. Bazerman [2]

As in the mythical town where every child is above average, it is obviously impossible for everyone to be more ethical than his fellows. Surprisingly, however, consistent findings from psychology, management, sociology, and economics research show our ethical behavior is quite a bit worse than we imagine. We not only choose unethical options more often than we think when facing ethical dilemmas, but after the fact, we also change our ethical standards and our memories to justify the many unethical decisions we make. [2]

Considerable research on decision making and business ethics shows a powerful combination of cognitive distortions, shifting perceptions, and personal biases—a combination that recalls the dialogue about juicy rationalizations from “The Big Chill”—heavily affects the choices we make when faced with ethical dilemmas. To shed light on the mechanisms that affect our decisions, I will summarize some of the most relevant research, with an emphasis on how these findings relate to user experience design. It seems we all carry a bit of Edgar Allan Poe’s “Imp of the Perverse.”

Bounded Ethicality

Perhaps the most important thing to understand is that we do not consistently apply our own ethical standards. Instead, we place boundaries on how our morals apply, doing so “in systematic ways that favor self-serving perceptions.” [2] Bounded ethicality involves more than simply deciding when and where to apply ethical standards to our decisions. Researchers studying the subject find “people develop protective cognitions that regularly and unwittingly lead them to engage in behaviors that they would condemn upon further reflection or awareness.” Bounded ethicality helps explain how “an executive can make a decision that not only harms others, but is also inconsistent with his or her conscious beliefs and preferences.” [2]

Many ordinary self-perceptions play a role in bounded ethicality, including the following:

  • the desire to see ourselves as moral and competent despite contrary evidence
  • ignorance of our own prejudices
  • holding overly positive views of ourselves
  • failure to recognize our own conflicts of interest
  • unconscious discrimination that favors in-groups or people similar to us

Looking at the work of UX professionals, these perceptual biases appear in many forms such as the following:

  • the assumption that we can speak for users without consulting them
  • our conviction that our recommendations are inherently valid, even when uninformed
  • placing greater weight on the views of people we happen to identify with
  • believing our decisions always benefit users more than ourselves

Two Selves

Within the self-serving boundaries of our ethics, we listen to conflicting internal voices when we make decisions. Poe called this deep-seated desire to be contradictory the “Imp of the Perverse,” describing it as “an innate and primitive principle of human action, a paradoxical something, which we may call perverseness.” [3]

Ethics and psychology researchers call these competing perspectives the Want Self and the Should Self—though this is not to suggest we all literally harbor multiple distinct personalities. The Want Self “is reflected in choices that are emotional, affective, impulsive, and ‘hot headed’.” [2] In contrast, the Should Self is “rational, cognitive, thoughtful, and ‘cool headed’.” [2] Where the Should Self “encompasses ethical intentions and the belief that we should behave according to ethical principles,” the Want Self “reflects actual behavior that is characterized more by self-interest and a relative disregard for ethical considerations.” [2]

Figure 1—Our two selves

Like the miniature angelic and devilish versions of the cartoon character Daffy Duck arguing over what course of action he should take, the presence of these two voices generates internal conflicts when we face decisions with ethical aspects. Their inevitable clashes can lead to some types of counter-intuitive behavior that are familiar to user experience professionals—such as making unreasonable design and delivery promises we cannot possibly meet or making misleading statements to persuade others or win business—sometimes called the Guru Effect.

The Three Phases of Decision Making

To understand how we make choices, psychologists divide our decision-making process into three phases called prediction, action, and evaluation. Prediction refers to the time before we act or make a decision, action occurs during the period when we act or make a decision, and evaluation occurs when we look back on our choices to assess them.

Based on their differing concerns and perceptions, our Want and Should Selves experience tension and trade control over our perceptions and actions at alternating stages of the decision cycle. During the prediction phase, our Should Self leads us to believe we will choose to act ethically, in a manner consistent with our beliefs. During the action phase, our Want Self dominates, and we behave unethically. Finally, in the evaluation phase, our Should Self retrospectively alters our perceptions of our behavior, crafting juicy rationalizations to bring our choice in line with our ethical beliefs. [2] Figure 2 illustrates this pattern.

Figure 2—Want and Should Selves in decision making

This separation of our Want and Should Selves during the different stages of the decision cycle “allows us to falsely perceive that we act in accordance with our Should Self when we actually behave in line with our Want Self.” [2] Overcoming this separation across the timeline of decision making is the focus of most recommendations for improving our ethical choices.

In addition to the tension between our Want and Should Selves, two other mechanisms come into play over the course of the decision cycle: ethical fading and cognitive distortions.

Ethical Fading

Ethical fading is the “process by which, consciously or subconsciously, the moral colors of an ethical decision fade into bleached hues that are void of moral implications.” [2] Ethical fading occurs primarily during the action phase of decision making and is the mechanism that allows the Want Self to take over and choose the unethical path.

Many factors contribute to ethical fading, including a natural tendency to make mistakes when predicting our actions, shifting interpretations of events and situations, and the language we use to describe problems. Let’s look briefly at a few of the most common factors.

Forecasting Errors

Studies conducted since the 1980s demonstrate that people do not accurately predict their own future behavior in a wide range of situations. They underestimate the time they’ll require to complete tasks; predict they’ll stand firm in important interpersonal conflicts, then fail to assert themselves; and declare they’ll donate to charities without following through.

Ethical challenges are especially likely to inspire inaccurate predictions. People’s “self-predictions generally reflect their hopes and desires rather than realistic self-understanding,” and most people want others to perceive them as ethical or moral, because it is socially desirable. [2]

Temporal Construal

Our interpretations of events change with time—a mechanism psychologists call temporal construal. We think of events in the distant future using abstract representations, or high-level construals, while we think of events in the near future in terms of concrete details, or low-level construals. [2] Temporal construal increases the tension between our Want and Should Selves. The Should Self is more likely to view situations in the future in terms of their ethical aspects. The Want Self responds to the immediate context of events at hand and emphasizes concrete factors.

At the beginning of a community design project, for example, we may initially think in terms of the ethical aspects of the user experience—such as how to ensure we protect members’ privacy. Yet once design is under way, we shift viewpoints and think primarily in terms of the length of time it will take to design the different pieces of the community’s infrastructure. The shift in viewpoints lets us reduce or eliminate planned privacy-protection mechanisms, because they require more time to complete than the schedule allows.

Figure 3 shows the changes in perspective that occur during decision making, which result from temporal construal.

Figure 3—Temporal construal and decision making

Framing and Context

Context and the initial framing of a choice are among the most important contributors to ethical fading. [2] The way we frame a choice—that is, whether we identify it in advance as a decision with ethical implications—encourages or discourages ethical fading. Studies on decision making show “those individuals who saw the decision as an ethical one were more likely to behave ethically than those for whom ethical fading had occurred.” [2]


Euphemism and other deceptive language practices that mask the ethical aspects of a situation—think of ‘creative accounting’ and the recent credit crisis—“allow us to deceive ourselves that the decision is void of ethical implications.” [2]

Understanding and speaking the language of users during the design effort—working with their vocabularies for values and emotions, tasks and goals, roles, concepts, and actions—is a natural way for UX designers to discourage the use of euphemisms that hide unethical behavior and set a good precedent for design decisions.

Ethical Numbing

Ethical numbing, a loss of sensitivity to ethical concerns that develops when people repeatedly encounter unethical behavior, increases the likelihood of ethical fading. [2] Genuine ethical numbing is an indicator of sustained organizational failure and a good sign to designers that it is time to move on.

Visceral Factors

Visceral factors such as hunger, anxiety, and fear become very influential during the action phase and increase ethical fading—for example, when salespeople make unreasonable promises to customers to meet their sales quotas. [2] Designers under pressure to avoid delays in timelines and project plans can experience powerful visceral feelings!


Our predictions about the choices we will make often reflect things we see as desirable—such as behaving ethically under challenging circumstances. At the time we actually make a decision, however, we often choose the most immediately feasible course of action—such as making a sale by misleading a customer or agreeing to an unreasonable deadline to avoid conflict about time estimates. [2]

Cognitive Distortions

Ethical fading lets us choose an unethical course during the action phase. A collection of cognitive distortions then takes effect during the evaluation phase, when the Should Self returns to prominence, and helps us craft juicy rationalizations for our bad behavior. This broad palette of mental paints lets us retrospectively change our picture of what happened and the ethical standards in play at the time, supporting our belief that we acted ethically, in spite of knowledge and evidence to the contrary, and lets us satisfy our need to feel ethical. [2]

Memory Revisionism

Memory revisionism is “a process in which people selectively and egocentrically revise their memory of their behavior.” [2] Designers who retroactively downplay or disregard important information—such as the findings of user research, whether their own or that of others, that contradicts their decisions—are engaging in memory revisionism.

Shifting Standards

People adjust their definitions of what is ethical to fit their behavior into an acceptable ethical standard. For example, “they may tell themselves that lies told during [a] negotiation fell within the mutually understood rules of the game.” [2]

Creeping Normalcy

Under the creeping normalcy effect, people compare their actions to their personal ethical standards on the basis of their past behavior. If the difference between their recent unethical behavior and the ethical standard is small, they do not notice the discrepancy, and the unethical choice becomes the new standard. [2]

The Omission Bias

Ethics research “...shows that most people consider acts of commission that cause harm to be morally worse than omissions, or the failure to act, that cause harm.” [2] Under the influence of the omission bias, people “... allow themselves to believe that they were not unethical because they did not act to create additional harm.” [2] Like saying the end justifies the means, the omission bias shifts the focus of our assessments of what is ethical to our initial intent rather than the decision itself. The social network’s member registration process I discussed in Part 1 of this series, which included specific UX design features that misled new members into inadvertently granting the network permission to send invitations to everyone in their email address books, is a good example of the omission bias in action.

The Outcome Bias

“People too often judge the ethicality of actions based on whether harm follows rather than on the ethicality of the choice itself.” [4] The outcome bias leads to validation of unethical choices that happen to yield good results. This cognitive distortion is behind the school of thought advising people to ask for forgiveness, not permission when considering a course of action, assuming that if no harm results, the unethical action is retroactively acceptable. [4]

An excellent example of the outcome bias is TechCrunch’s positive reporting from May 2007 on the social network discussed in Part 1, which specifically mentioned rapid growth and profitability achieved through “controversial” methods. [5]

Figure 4 shows how cognitive distortions and ethical fading take effect over the course of the decision-making cycle.

Figure 4—Cognitive distortions and ethical fading

Seeing No Evil

Several factors independent of the decision cycle make it easier for us to engage in and overlook unethical behavior.

Motivated blindness is the term psychologists use to refer to the ways people “evaluate evidence in a selective fashion when they have a stake in reaching a particular conclusion or outcome.” Motivated blindness means people “selectively see evidence supportive of the conclusion they would like to reach, while ignoring evidence that goes against their preferences or subjecting it to special scrutiny.” [4]

In addition to the judgment distortions that result from motivated blindness, people “do not view indirect harms to be as problematic as direct harms.” [4]

The invidious distinction between direct and indirect harms helps explain the “identifiable victim effect,” which “suggests that people are far more concerned with and show more sympathy for identifiable victims than statistical victims. Simply indicating that there is a specific victim increases caring, even when no personalizing information about the victim is available.” [4]

Motivated blindness, our insensitivity to indirect harm, and the identifiable victim effect directly affect our perceptions and evaluations of ethical choices. The combination of these biases leads people to delegate unethical behavior to subordinates in organizational settings and feel they’ve done no wrong.

In design settings, this pattern takes a familiar form: Stakeholders under the influence of motivated blindness frame ethical conflicts as business decisions and leverage organizational structures and cultures to direct designers to craft user experiences that achieve business goals at the expense of doing indirect harm to the beliefs and values of anonymous users, or unidentifiable victims.

Designers then face the dilemma of whether to take up the role of misplaced mediator—as I described in Parts 1 and 2 of this series—and champion an ethical cause rather than focus on design. Prudence and other visceral factors often dictate our going with the flow, so the Want and Should Selves work together to craft juicy rationalizations for unethical design decisions.

Managing the Imp of the Perverse

This is a powerful collection of biases, misperceptions, and slippery cognitive mechanisms, and it seems designers all carry a bit of the “Imp of the Perverse.” Shedding light on such factors as the normal changes in our perspectives and our competing voices makes it easier to understand why designers sometimes make unethical choices, then craft juicy rationalizations to justify them.

The future of ethical user experiences may seem dim, but never fear, designers! In the next installment of this series, “Improving Our Ethical Choices: Managing the Imp of the Perverse,” I will share recommendations from psychology and ethics experts for making our design choices more ethical and suggest how designers can quickly put these recommendations into practice. 

You can hear Joe Lamantia discussing Part 1 of this series on Jeff Parks’s I.A. Podcast.


[1] The Internet Movie Database. “Memorable Quotes for ‘The Big Chill’.” The Internet Movie Database. Retrieved May 5, 2008.

[2] Tenbrunsel, Ann E., Kristina A. Diekman, Kimberly A. Wade-Benzoni, and Max H. Bazerman. “Why We Aren’t as Ethical as We Think We Are: A Temporal Explanation,” HBS Working Paper #08-012 (2008).

[3] Poe, Edgar Allan. “The Imp of the Perverse.” Wikisource. Retrieved May 1, 2008.

[4] Gino, F., D. A. Moore, and M.H. Bazerman. “See No Evil: When We Overlook Other People‘s Unethical Behavior.” HBS Working Paper #08-045 (2008).

[5] Arrington, Michael. “ Turns Profitable—May Be Fastest Growing Social Network.” TechCrunch, May 9, 2007. Retrieved February 11, 2008.

Head of UX, Commercial Card, at Capital One

Boston, Massachusetts, USA

A veteran architect, consultant, and designer, Joe Lamantia has been an active member and leader in the UX community since 1996. He has crafted innovative, successful user experience strategies and solutions for clients ranging from Fortune 100 enterprises to local non-profit organizations, digital product companies, and social media startups, in a wide variety of industries. Joe is the creator of EZSort, the leading, freely available tool for card sorting, as well as the Building Blocks for Portals design framework. He is also a frequent writer and speaker on topics including future directions in user experience; the intersection of business, culture, and design; and systems thinking. Joe is currently based in New York, working as a UX strategy consultant for the enterprise architecture group of a global IT services firm.
