Communicating the Algorithm: Why Language Matters in AI Ethics

November 3, 2025

Users don’t see algorithms; they see the words that communicate decisions. These words can provide clarity and dignity or quietly take both away. That is why—alongside our efforts to build fair AI models, clean up biased data, and improve accuracy—the communication layer needs equal care. Only then can a machine’s decisions become a human experience.

Writing ethically is how we show care for those moments of decision—not in abstract terms, but in the daily reality all product teams share: shipping product features under pressure, working within business and technical constraints, and trying to do right by the people on the other side of the screen. In this article, I’ll present a principle-based, targeted approach that represents a mindset shift for those critical moments when our words matter most.


Why Communication Is an Ethics Layer

When an artificial intelligence (AI) system makes decisions that affect people’s lives—for example, approving or denying credit or determining whether someone receives immediate healthcare—those decisions don’t arrive as code, but as words on a screen. Those words shape everything: how users understand what has just happened, whether they feel respected or blamed, and what they believe they can do next.

Research on algorithmic lending has shown that AI systems, when poorly explained, disproportionately harm people from marginalized communities. If users are already navigating systemic barriers—for example, an immigrant with a limited credit history or a gig worker whose income looks irregular to an algorithm—a vague rejection message doesn’t just feel frustrating; it feels like they’re being locked out with no key to get back in.

UX writers providing words for products sit at the last mile of AI decisions and are, therefore, uniquely positioned to prevent the harms that poorly explained model decisions can cause. Engineers build these models and define a product’s logic; writers translate outcomes into words that people read. That translation determines whether a system feels fair and leads to clear next steps.

Practical Principles for Ethical AI Communication in Products

Checklists are helpful for compliance, but they don’t always help us think through messy, real-world situations in which every design decision involves trade-offs. Instead, a shared mindset is important. There are three principles that can guide us through any AI decision moment, no matter how complex or constrained.

These principles are grounded in transparency, accessibility, and accountability. They are especially useful for teams working within strict timelines, collaborating with legal teams, and trying to balance users’ needs with business goals. Think of them as guides that can help us navigate such contexts, not rigid rules.

Let’s consider each of these principles in greater depth.


1. Be transparent about what has happened.

Transparency is not just about telling people that the AI made a decision. It’s also about helping them understand what the decision is and why it happened, in language they can verify and act upon.

For instance, if someone’s loan application gets denied, a message like “Your application was denied” is technically transparent because it communicates the outcome. But it’s not meaningfully transparent. It doesn’t help the person understand what went wrong or what they could do differently next time.

Now compare that message to the following message: “Your loan application was denied because your debt-to-income ratio is 48%, which exceeds our 40% threshold.” That’s meaningful transparency. It names a specific factor behind a decision—debt-to-income ratio—gives the actual number (48%), and explains the standard (40%). Users reading such a message would know exactly where they stand and what would need to change.

In sectors such as finance, specificity matters if we want to create truly transparent experiences with AI. Vague language like “based on your profile” or “our algorithms determined” doesn’t give people anything to work with. It just creates more questions. And in lending, where regulatory requirements like the Equal Credit Opportunity Act (ECOA) already mandate adverse action notices, specificity isn’t just good UX design; it’s legally necessary.
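To make this concrete, here is a minimal sketch in Python of how a product team might bind a decision’s specific factor, the applicant’s actual number, and the applied threshold into one message, rather than falling back on vague boilerplate. The function name and values are hypothetical, for illustration only:

```python
def denial_message(dti: float, threshold: float) -> str:
    """Compose a loan-denial message that names the factor,
    the applicant's value, and the standard that was applied.

    Hypothetical example: the factor here is debt-to-income
    (DTI) ratio, expressed as a fraction (0.48 means 48%).
    """
    return (
        f"We couldn't approve your loan because your "
        f"debt-to-income ratio is {dti:.0%}, which exceeds "
        f"our {threshold:.0%} threshold."
    )

# The applicant's DTI and the lender's threshold come from the
# decision system; the copy simply surfaces them verbatim.
print(denial_message(0.48, 0.40))
```

The point of the sketch is that the specific numbers already exist inside the decision system; meaningful transparency is often a matter of passing them through to the copy instead of discarding them.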

Admittedly, things can get tricky because transparency has limits. Sometimes we can’t reveal everything because the reasons are too complex to explain in a short message. Sometimes revealing the exact logic would create privacy concerns—for example, if the AI revealed something sensitive about someone’s health or relationships. And sometimes, in areas such as fraud detection, explaining too much would expose vulnerabilities in our systems.

So we’re looking for balance: tell people enough to act on, but not so much that it overwhelms or harms them. This might mean using progressive disclosure, starting with the most critical information and letting people click for more details.
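The progressive-disclosure idea above can also be sketched in code. This is a hypothetical data structure, not a prescribed implementation: the critical fact is always shown, and supporting detail is revealed only when the user asks for it:

```python
from dataclasses import dataclass, field

@dataclass
class DisclosureMessage:
    """A message with a short, always-visible summary and
    optional detail layers a UI can reveal on demand
    (for example, behind a Learn More control)."""
    summary: str
    details: list[str] = field(default_factory=list)

    def render(self, expanded: bool = False) -> str:
        # Collapsed view: only the most critical information.
        if not expanded:
            return self.summary
        # Expanded view: summary followed by the detail layers.
        return "\n".join([self.summary, *self.details])

# Hypothetical content, echoing the loan-denial example.
msg = DisclosureMessage(
    summary=(
        "We couldn't approve your loan: your debt-to-income "
        "ratio (48%) exceeds our 40% threshold."
    ),
    details=[
        "Debt-to-income ratio compares your monthly debt "
        "payments to your monthly income.",
        "You can reapply once your ratio is below 40%--for "
        "example, by reducing debt or documenting more income.",
    ],
)
```

Calling `msg.render()` yields only the summary, while `msg.render(expanded=True)` appends the explanatory layers, which lets the interface stay scannable without withholding the reasoning.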

One more thing we should watch for is the use of passive voice. For instance, “Your application was denied” hides agency. It’s important to know who denied it—a system or a person? When we say, “We couldn’t approve your loan because…,” we’re being more honest about who is responsible. That small shift in language makes a big difference in how people experience the decision.

2. Keep messages accessible by using plain language.

Plain language allows us to respect the user’s time, cognitive load, and lived experience. In the context of AI decisions, where people are often stressed, confused, or encountering systems for the first time, using plain language becomes a matter of fairness.

The user might be reading on a mobile device with a cracked screen, juggling three jobs and catching the message at 11:00 pm, speaking English as a second language, or living with a cognitive disability that makes dense text harder to process. The user might be a 70-year-old encountering algorithmic decision-making for the first time. If your copy works for these users, it will probably work for everyone.

So what does plain language look like in practice? Short sentences. Active voice. Everyday words instead of jargon. It means saying “prove your identity” instead of “verification required.” It means saying “We need more information” instead of “Additional documentation is requested.”

Accessibility goes beyond using easy-to-read language, though. It’s also about structure. Using headings. Breaking information into scannable chunks. Not burying the most important thing in the third paragraph. People don’t read user interfaces; they scan them. We can make it easier for them to find what matters.

3. Make accountability visible.

People need to know who is responsible for a decision and what they can do about it. That’s accountability. In the world of AI, where decisions can feel faceless and opaque, making accountability visible is one of the most important things we can do as writers.

Phrases such as “the algorithm” or “company policy” might feel safer from a legal perspective, but they create distance. They make it sound like no one is actually responsible—like the decision just happened on its own.

But accountability isn’t just about attribution; it’s also about creating a path to fixing things. When users receive a decision they don’t understand or with which they disagree, what can they actually do about it? That answer should be right there in your copy.

Visible accountability becomes especially critical for people from marginalized communities—for instance, people who might face more denials and who often have the hardest time getting such decisions reviewed or reversed. When we make accountability visible, we’re giving people the tools they need to advocate for themselves. We’re acknowledging that the system isn’t perfect and they have a right to challenge it.

Policies on algorithmic accountability and data usage consistently emphasize the need for contestability—the ability for people to challenge automated decisions and get a human in the loop. Our copy is where that contestability begins. If a message doesn’t include a clear path to appeal or review, we’re making the system unaccountable by default.

Plus, visible accountability is not one-sided; it actually protects our organization, too. When people understand who makes decisions and how and have a clear path to contesting them, they’re less likely to feel that the system is arbitrary or discriminatory. This builds trust, reduces complaints, and makes AI systems more defensible, not less.

Applying the Principles in Real Scenarios

Let’s look at some situations that many people have encountered or might encounter. For each scenario, I’ll show what the copy often looks like, why it creates problems, and how we might apply the three principles for ethical AI communication that I described earlier to improve messaging.

Scenario 1: Account Flagged for Fraud

Fraud detection is critical for protecting users, but false positives can happen. These false positives disproportionately affect people with irregular income patterns such as gig workers, freelancers, or immigrants who send money across borders. Being locked out of your account isn’t just inconvenient. It can mean not paying your rent on time, not being able to buy groceries, or losing access to funds when you need them most.

There’s also the emotional weight of being flagged for fraud. However implicitly, this can be stigmatizing. For users who have done nothing wrong, it can feel like the system doesn’t trust them or is treating them unfairly. Figure 1 shows a message that might appear when a user’s account is flagged for fraud.

Figure 1—An account flagged for fraud without explanation
An account flagged for fraud without explanation

What’s problematic here? The message doesn’t explain what activity triggered the flag and doesn’t clarify what restricted means. Can users see their balance? Can they receive money? The message doesn’t tell them what to do next or how long this restriction might last. Plus, the word suspicious carries judgment, implying wrongdoing without naming what that might be.

Apply the principles, as follows:

  • transparency—Name the specific pattern that triggered the flag.
  • plain language—Explain what paused actually means for the user’s account.
  • accountability—Provide clear next steps, who to contact, and a timeline.

Figure 2 shows an improved message that explains why the user’s account was flagged for fraud.

Figure 2—An account flagged for fraud with an explanation
An account flagged for fraud with an explanation

Now the person knows exactly what has happened (three login attempts from a specific location), what unusual means (activity that doesn’t match the user’s typical behavior), and what options are available to remedy the situation, depending on whether the user actually took this action. The timeline tells the user what to expect, and the tone is informative, not accusatory.

Scenario 2: Loan Application Denied

Loan denials affect wealth building, especially for communities that have historically faced discrimination in lending. If AI-driven lending systems are not carefully designed, they can perpetuate such disparities—especially if their decisions are not well explained. An unexplained denial doesn’t just block access to credit; it denies someone the information they need to improve their situation.

This is a particularly challenging situation when denials are not specific because people can’t tell whether they’re being treated fairly. They don’t know whether the decision was based on legitimate risk factors or hidden bias. This ambiguity erodes trust and makes it impossible to challenge unfair outcomes. Figure 3 shows a message that might appear when a user’s application for a loan is denied.

Figure 3—A loan denial with no explanation
A loan denial with no explanation

This message tells the user almost nothing. What credit threshold was used? How far off was the user from qualifying? Is there anything the user can do to improve their chances? The word unfortunately adds a sympathetic tone, but doesn’t add substance to the message. Plus, “at this time” is vague. When can the user reapply? Figure 4 shows an improved message that explains why the user’s loan application was denied.

Figure 4—A loan denial with an explanation
A loan denial with an explanation

This version provides actionable information. The user knows the exact metric (debt-to-income ratio), where they currently stand (45%), what the standard is (40%), and what they would need to change: reducing debt or increasing income. The reapplication timeline is clear. There’s no need to wonder whether something else was happening behind the scenes. The message feels fair.

Legal Concerns Versus User Needs

The tension UX writers probably face most often is balancing legal concerns with users’ needs. Legal teams are trained to minimize risk. From their perspective, vague language might feel safer. “We cannot approve your application at this time” doesn’t commit to anything specific. It’s legally defensible.

A better approach is to work with legal teams, not around them. Show them examples of transparent copy that’s still legally sound. You can frame this problem in terms of risk mitigation: “If we’re vague, users might assume discrimination and file complaints. If we’re specific, we show that we’re applying consistent, defensible standards.”

We can be specific without becoming vulnerable. “We couldn’t verify your income” is more actionable than “application incomplete,” and it doesn’t expose the organization to any additional risk. It’s just clearer about what is necessary.

Conclusion

AI is scaling decisions that affect millions of people’s lives in domains that include credit, housing, employment, healthcare, and justice. In every one of these domains, there’s a writer somewhere crafting the messages that deliver those outcomes. Our words are the bridge between algorithmic power and human dignity. That bridge can be sturdy and clear, or it can be shaky and full of gaps.

I’m not suggesting that writers can fix every issue. I’m not suggesting that writers can solve bias in machine learning or redesign broken systems. What I’m suggesting is that writers can take responsibility for the part they control: the language. They can make it transparent. They can make it accessible. They can make it accountable. When they do all of this, they protect people in ways that ripple out far beyond what we can see. 

Senior UX Writer

Raleigh, North Carolina, USA

Ademola Adepoju

Ademola is a Senior UX Writer and graduate student in technical communication at North Carolina State University. As a content designer, he has helped shape product outcomes for some of Africa’s biggest financial-technology (Fintech) companies, as well as higher-education software teams. His current work and research explores justice-grounded frameworks for making artificial intelligence (AI) systems more usable for people within marginalized groups, especially in the areas of credit and lending.
