
Misinformation and Disinformation Online: What Design Can Do to Remedy This Problem

Envisioning New Horizons

A critical look at UX design practice

A column by Silvia Podesta
April 8, 2024

We refer to narratives that unintentionally “contradict or distort common understandings of verifiable facts” [1] as misinformation. In contrast, disinformation is a form of propaganda—the deliberate spread of false information to mislead people, effect changes in their thinking, and ultimately, manipulate society as a whole. Both are rampant on social media and on other, less visible parts of the Web.

The multifarious potential threats that misinformation and disinformation pose to society and public welfare justify our worrying about their spread. In addition to fueling polarization and social conflict, the spread of conspiracy theories and other forms of disinformation often undermines people’s perceptions of the validity of objective or scientific truths, potentially making them prey to liars and con artists or encouraging harmful behaviors.


Freedom of opinion and critical thinking are, in principle, good things, to be cherished and encouraged online. However, to quash harmful social movements, some public entities and social-media platforms have chosen to adopt unpopular censorship measures, thereby either reducing the visibility of certain content or removing the offending content altogether. But such top-down interventions often spur backlashes and, thus, risk reinforcing the very conspiracy theories they’re trying to stave off in the first place.

Other, more constructive approaches to addressing misinformation and disinformation online include providing users with a means of understanding whether certain information is trustworthy. [2] However, there has been little study of the relationship between UX design and the development of reliable means of distinguishing fact from opinion or outright fiction.

In this column, I’ll highlight some findings from research that can guide UX designers in creating what I’ll refer to as counter-misinformation features. Arguably, demand for such competencies could spike in the future as governments and content regulators increasingly intervene to tackle extremist and misleading narratives.

Now, let’s consider some effective approaches to helping users assess the veracity of the information they consume online.

Minimizing User Effort

In dealing with misinformation, the first thing that comes to mind is obviously giving users access to reliable resources for fact-checking. Spotting and debunking misinformation online is by no means always easy—particularly when it comes from people we know or from what we think of as authoritative sources.

However, there are plenty of fact-checking resources available online—such as Google Fact Check Explorer or FactCheck.org. Plus, some platforms and industries are making fact-checking resources available that are specific to a particular domain. Examples include Facebook’s COVID-19 misinformation center, as well as the practice of labeling posts that contain misinformation or disinformation.

One factor worth considering is so-called emotional labor: the strain on users who are actively trying to identify fake news.

When providing resources for fact-checking, place particular emphasis on their ease of use and on the immediacy of access to the most relevant content. The aforementioned Google Fact Check Explorer provides a good example; a brief code sketch after the following list shows how an application might retrieve comparable results programmatically. The tool offers users a simple search bar for retrieving topics and displays legitimate results for a query by highlighting the following information:

  • the title of the claim being checked
  • the name of the person or organization that posted it
  • a rating of its veracity
  • a link to a relevant source page
  • a thumbnail image, which serves users mostly by helping them navigate quickly through the various results
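
Here is that sketch: a minimal example that assumes Google’s Fact Check Tools API, the programmatic counterpart to Fact Check Explorer. The endpoint and field names reflect that API’s claims:search response as I understand it, so treat them as assumptions to verify against the current documentation.

  import requests

  # Google's Fact Check Tools API endpoint. (An assumption; verify against the current documentation.)
  API_URL = "https://factchecktools.googleapis.com/v1alpha1/claims:search"

  def search_fact_checks(query: str, api_key: str, limit: int = 5) -> list[dict]:
      """Search for fact-checks on a topic, distilling each result into
      the fields that a results list such as Fact Check Explorer's displays."""
      response = requests.get(
          API_URL,
          params={"query": query, "key": api_key, "pageSize": limit},
          timeout=10,
      )
      response.raise_for_status()

      results = []
      for claim in response.json().get("claims", []):
          # Each claim can carry one or more reviews; take the first.
          review = (claim.get("claimReview") or [{}])[0]
          results.append({
              "claim": claim.get("text"),                            # the claim being checked
              "claimant": claim.get("claimant"),                     # who made or spread it
              "rating": review.get("textualRating"),                 # the fact-checker's verdict
              "source": review.get("url"),                           # link to the full fact-check
              "publisher": review.get("publisher", {}).get("name"),  # who checked it
          })
      return results

For example, calling search_fact_checks("5G causes COVID-19", api_key="YOUR_KEY") would return a short list of claim, rating, and source records that a user interface could render as quickly scannable result cards.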

Tailoring Responses to Misinformation Online

Responding to misinformation or disinformation that someone has posted online is perhaps the greatest challenge of all. When engaging in a debate on a controversial topic, people tend to hold back for two different reasons, as follows:

  1. The reasonable fear of escalating an argument with someone they might not know, especially in an online setting
  2. The fear of confronting a loved one, friend, colleague, or acquaintance and of thereby causing a possible rupture in their relationship

In one study [2], participants expressed the desire for a sort of “recommendation system” that could help them tailor the best response to misinformation or disinformation, not only on the basis of the content’s topic, but also of the person posting or spreading it.

This is an area where generative artificial-intelligence (AI) tools could shine. AI-powered assistants could help users write the most appropriate responses, tailoring them to their audience and context.
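
As a simple illustration, here’s a hypothetical sketch of how such an assistant might assemble a prompt for a generative-AI model, tailoring the suggested reply to both the topic and the user’s relationship with the poster. The relationship categories, tone mappings, and function name are illustrative assumptions, not any product’s actual design.

  # A hypothetical prompt builder for an AI response assistant.
  # The relationship-to-tone mapping below is an illustrative assumption.
  TONE_BY_RELATIONSHIP = {
      "stranger": "polite, neutral, and non-confrontational",
      "acquaintance": "friendly but factual",
      "close friend or family member": "warm, empathetic, and face-saving",
  }

  def build_response_prompt(post_text: str, topic: str, relationship: str) -> str:
      """Compose a prompt that tailors a suggested correction to the
      topic and to the poster's relationship with the user."""
      tone = TONE_BY_RELATIONSHIP.get(relationship, "respectful and calm")
      return (
          f"Someone I know ({relationship}) posted this about {topic}:\n"
          f'"{post_text}"\n\n'
          f"Draft a short reply that gently corrects the misinformation, "
          f"points to reliable sources, and keeps a {tone} tone. "
          f"Avoid ridicule, and avoid escalating the conversation."
      )

The key design point is that the tailoring lives in the prompt rather than in the model: a user interface could expose the relationship as a simple control, while the underlying generative-AI model remains interchangeable.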

Making Users Autonomous in Their Critical Thinking

Crucially, the desire for recommendations that are tailored to a particular situation must go hand in hand with the need for user autonomy.

As has consistently emerged in several studies, users who engage in countering misinformation and disinformation want both of the following:

  1. Comprehensive access to relevant knowledge for fact-checking
  2. Tailored suggestions for the best possible responses, should they choose to respond

However, they also want to retain control of their responses.

The design of assistive tools for crafting tailored responses should support users’ agency and decision-making. Possible AI features include ways for users to provide examples that teach the system the right tone of voice and style, or dedicated settings that let them tweak the system’s prompts to obtain the best possible text outputs from the generative AI. The sketch that follows suggests one way of combining these ideas.
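
Here’s a minimal sketch, assuming a generic chat-style generative-AI interface, of how user-supplied example replies could serve as few-shot demonstrations of tone and style, alongside user-editable settings. Every name and setting here is an illustrative assumption.

  from dataclasses import dataclass, field

  @dataclass
  class AssistantSettings:
      """Settings the user can tweak to steer the generated draft."""
      tone: str = "calm and respectful"
      max_sentences: int = 3
      # Replies the user has written, which the system should imitate.
      style_examples: list[str] = field(default_factory=list)

  def build_messages(settings: AssistantSettings, post_text: str) -> list[dict]:
      """Assemble a chat-style prompt: instructions first, then the user's
      own replies as few-shot demonstrations of tone and style, and
      finally the post the user wants to respond to."""
      system = (
          f"You draft replies to posts that contain misinformation. "
          f"Tone: {settings.tone}. Use at most {settings.max_sentences} sentences. "
          "Always return a draft for the user to edit; never post anything yourself."
      )
      messages = [{"role": "system", "content": system}]
      for example in settings.style_examples:
          messages.append({"role": "user", "content": "Here is a reply in my voice:"})
          messages.append({"role": "assistant", "content": example})
      messages.append({"role": "user", "content": f"Draft a reply to this post:\n{post_text}"})
      return messages

Because this function assembles only a draft request for whatever model a product might use, the user retains control: she can adjust the settings, revise the generated text, or discard it entirely, preserving exactly the kind of autonomy these studies call for.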

Endnotes

[1] A. M. Guess and B. A. Lyons. “Misinformation, Disinformation, and Online Propaganda.” In N. Persily and J. A. Tucker, eds., Social Media and Democracy: The State of the Field, Prospects for Reform. Cambridge, UK: Cambridge University Press, 2020. Retrieved March 27, 2024.

[2] Malhotra et al. “User Experiences and Needs When Responding to Misinformation on Social Media.” Misinformation Review, 2023. Retrieved March 27, 2024.

Silvia Podesta

Innovation Designer at IBM

Copenhagen, Denmark

As a strategic designer and UX specialist at IBM, Silvia helps enterprises pursue human-centered innovation by leveraging new technologies and creating compelling user experiences. Silvia facilitates research, synthesizes product insights, and designs minimum-viable products (MVPs) that capture the potential of these technologies in addressing both user and business needs. Silvia is a passionate, independent UX researcher who focuses on the topics of digital humanism, change management, and service design.
