Designing AI Experiences Beyond the User Interface

October 20, 2025

Every few decades, a breakthrough in computing changes the way people interact with machines. The mouse and graphical user interfaces (GUIs) made computers accessible and easier to use. Touchscreens turned phones into pocket-sized hubs for our lives. Now, many are betting on artificial intelligence (AI) being the next leap.

Perhaps that’s why OpenAI spent $6.4 billion buying io Products. Who better to bring on board when inventing the one AI device to rule them all than Jony Ive, the man who designed the iPhone? Many are already wondering what the AI interface will look like. Will it be entirely voice driven? Will there be wearables? Holographic projections? Most people’s only point of reference so far has been text-prompt windows such as those in ChatGPT and Gemini. What’s the next logical step for an AI assistant’s user interface?

In truth, we probably spend too much time thinking about the user interface, when the deeper shift will be something a lot less visible. The defining contribution of AI assistants will not be replacing taps with speech or gestures with gaze. It will be their ability to anticipate people’s needs, act within a specific context, adapt to individual users, and complete complex tasks across tools—without having to expose the user interface of each service.

If this prediction ends up being true, it means the technology industry needs to think about designing AI assistants that go beyond the traditional notion of user interfaces and apps altogether.

With AI, Inputs Aren’t the Whole Story

Historically, design has revolved around guiding people through layers of menus, icons, and screens. Booking a restaurant or making a travel change still requires multiple steps across one or more apps—or at least multiple user interfaces. AI promises to erase some of that friction.

This is why voice interactions are often cast as the next frontier. It’s probably also why we’ve seen the Humane AI Pin and Rabbit r1 position voice-first control as a smartphone replacement. But they haven’t yet replaced our phones because changing the input method alone doesn’t magically solve the underlying problem.

One could phrase a command to an AI agent in a thousand ways, but unless the assistant can understand, and even anticipate, the context of the user’s needs, it won’t be all that useful. People want an assistant that doesn’t just react to prompts but has been built with anticipatory design in mind. When the user asks what’s playing at a nearby cinema, it shouldn’t just list the current movies. It should notice that the user loves Christopher Nolan films and that the cinema the user passes when commuting home on the train isn’t just showing his next film, The Odyssey, but also hosting a question-and-answer (Q&A) session with the cast and crew. For optimal convenience, it might even suggest that the user leave work early because of ongoing train strikes.

This level of support doesn’t depend on whether a trigger is a tap, a word, or a glance. It depends on the intelligence of an assistant that is working quietly in the background.

Powerful AI Assistants Are Useless If They’re Inaccessible

Rich data and context are essential to AI. But unless AI assistants become reliably available to a broad swathe of people, it doesn’t really matter how powerful the AI gets.

AI assistants will have to focus on overcoming other obstacles, too, like latency and connectivity. If an AI assistant lives up to its full potential only on a flagship phone while connected to a blazing-fast network, it’s just going to alienate many users. That’s exactly why we must design and build AI assistants to handle offline scenarios and function well within low-bandwidth environments.
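As a rough illustration of that principle, the sketch below routes a request to whatever model path the current connection allows rather than failing outright. All function names here are hypothetical stand-ins, not a real API; a production assistant would cache context and sync when connectivity returns.

```python
# Illustrative sketch (all functions are hypothetical stubs): degrade
# gracefully from a full cloud model to a lighter path, and finally to
# an on-device model when the user is offline.

def cloud_model(query: str) -> str:
    return f"cloud answer to: {query}"       # stub for a full-capability cloud model

def cloud_model_lite(query: str) -> str:
    return f"lite answer to: {query}"        # stub for a trimmed, low-bandwidth response

def on_device_model(query: str) -> str:
    return f"local answer to: {query}"       # stub for a small on-device model

def answer(query: str, online: bool, bandwidth_kbps: float = 0.0) -> str:
    """Pick a model path based on connectivity, so the assistant always responds."""
    if not online:
        return on_device_model(query)        # offline: stay functional
    if bandwidth_kbps < 500:
        return cloud_model_lite(query)       # slow link: trim payloads
    return cloud_model(query)                # fast link: full capability
```

The point isn’t the thresholds, which are arbitrary here, but the shape: every branch returns something useful, so a cheaper phone or a patchy network never leaves the user with nothing.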

Look no further than the car industry for a poster-child example of learning that lesson the hard way. Car makers once pursued elaborate, high-fidelity digital dashboards, only to discover that those experiences wouldn’t scale down to more affordable vehicles. The software on in-vehicle screens today still looks great, but the industry learned to design consistent experiences across a range of performance levels rather than showcase demos. AI development needs that same pragmatism.

Yes, of course, we do also need to address the user interface. While it might not ultimately be the deciding factor in whether AI reaches mainstream adoption, people will need to interact with it somehow. So buttons, touchscreens, and gestures aren’t likely to go away any time soon. Why? People like tactile feedback. Sometimes, the quickest, most reliable interaction still requires a physical button or a simple tap. That said, we should design and build AI assistants with diversity in mind. Given the language and cultural differences that shape how people prefer to interact with machines, there will never be a single, universal mode of AI use.

Some people might prefer voice input, but want to see a visual change on a menu as the output—not spoken or long-form textual feedback. Others will prefer touch. Ultimately, we must design AI assistants to cater to everyone’s needs and desires, mixing different input and output methods depending on the user’s personal preferences or what is most suitable for the current context and use case.
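One way to picture that mixing of modalities is a small routing rule in which situational context can override a stated preference. This is a minimal sketch with hypothetical names, not a real framework; a real assistant would learn these rules rather than hard-code them.

```python
# Illustrative sketch (all names hypothetical): pick an output modality
# from the user's stated preference plus situational context, letting
# context override preference when it must.

from dataclasses import dataclass

@dataclass
class Context:
    is_driving: bool = False        # eyes-free situations demand voice
    is_noisy: bool = False          # loud environments make speech hard to hear
    screen_available: bool = True   # wearables may have no display at all

def choose_output_modality(preference: str, ctx: Context) -> str:
    """Return 'voice', 'visual', or 'text' for the assistant's reply."""
    if ctx.is_driving:
        return "voice"              # safety overrides preference
    if ctx.is_noisy and ctx.screen_available:
        return "visual"             # show the change on screen instead
    if not ctx.screen_available:
        return "voice"              # no display to fall back on
    return preference               # otherwise honor the user's choice
```

For example, a user who normally prefers visual output would still get a spoken reply while driving, which is exactly the kind of context-dependent switching the paragraph above describes.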

The Most Important Design Principle Isn’t Novelty

The real test of AI design won’t be whether it can impress people with flashy visuals or science-fiction (sci-fi) style voice interactions. We’ll achieve successful design when the technology feels predictable, trustworthy, and helpful.

The AI assistants that remember user preferences without being intrusive, act within a context without overstepping, and reduce complexity without demanding the user’s attention will be the ones people come back to again and again. They should feel not like tools, but like partners, or even friends.

Some industries will quickly embed AI intelligence, while others will struggle. But as costs fall and AI processing shifts closer to the devices themselves, expect to see context-aware, personalized assistants become increasingly possible, even for users with modest hardware.

This revolution in technology doesn’t have to be dramatic—at least not on the outside. AI will arrive quietly, through assistants that gradually earn users’ trust by consistently making life and work easier. And when the technology is so easy to use that it feels almost invisible, AI assistants will stop being a novelty and start becoming a necessity. 

Director of User Experience at Qt Group

Helsinki, Uusimaa, Finland

Sondre Ager-Wick

Sondre is a design director, product leader, and strategist with over 20 years of design and technology-industry experience. Since 2006, he has worked for some of the world’s biggest global brands, including Cisco, Microsoft, and Nokia. He has also worked as a design and strategy consultant with both global brands and startups. Sondre believes in the power of empowered, cross-functional teams to create successful products. As a UX designer, his goal is to design products and services that deliver real benefits to people and the planet.
