
The User Interface Vanishes: How Smartware Will Change the User Experience

Smartware

The evolution of computing

February 19, 2018

In this column on the future of computing, we’ve examined how a handful of technological advances, including the Internet of Things (IoT), sciences of human understanding such as neuroscience and genomics, and emerging delivery platforms such as 3D printers and virtual-reality (VR) headsets, will together transform software and hardware into something new that we’re calling smartware.

Smartware comprises computing systems that require little active user input, integrate the digital and physical worlds, and continually learn on their own. Now, in this final edition of our column on smartware, we’ll consider how the powerful capabilities of smartware will enable new interactions and user experiences that, over time, will become seamlessly integrated into our digital lives.


The Tyranny of the User Interface

We’ve integrated more computing user interfaces into our lives than ever before—a smartphone, a tablet, a notebook computer, a desktop computer, along with multiple input devices and maybe one or more large monitors, depending on the requirements of our work and personal lives. We wear smartwatches and fitness-monitoring devices such as the Fitbit. In our homes, we have Internet-enabled televisions, gaming systems, smart toys, and home-management devices such as the Nest, Amazon Echo, or Google Home. Today, we are inundated with user interfaces.

The abundance of user interfaces in our lives reflects the immaturity of our current technology, as well as the plethora of companies that are independently trying to monetize these technologies. We are still exploring our connected world. The consequence is a messy, crowded, poorly designed computing ecosystem that succeeds in spite of itself. However, the power of these diverse devices, products, and services is such that, at least for now, we’re simply so glad to have them that we fail to realize the extent to which the user-experience ecosystem we inhabit is fractured, clumsy, and inefficient.

The earliest personal computers were small and simple: a monochromatic monitor and keyboard. Things didn’t change that much until the smartphone revolution, whose primary impact was to make computing a core activity at the center of our lives. Only then did nascent or perhaps previously ignored Internet-enabled technologies really take root and spread wildly. It is hard to believe that all of this began only about ten short years ago!

But we think our crazy, screen-filled world will become one of invisible interfaces. Why? First, because of the impact of artificial intelligence. Once computing devices have a good idea of what we want, there will no longer be a need for user interfaces that let us tell them what to do. Plus, the current generation of user interfaces is neither user friendly nor optimized for the contexts and tasks in which we use it. For example, typing instead of speaking—or, someday, merely thinking—to communicate commands is inefficient, unergonomic, and terribly inconvenient.

Almost everything we do on our smartphones is a poor user experience. Given the power of these devices, we generally excuse or ignore how bad these user experiences are. We hold our smartphone in our hand, with our head bent down and our thumbs or fingers dancing, as we drive, walk, eat, and stand. We really should improve this experience—and we will.

The identity graphs of the future will be another important driver of the invisible interface. Advances in neuroscience and genomics will enable us to use big data—integrating a deep understanding of the individual user—to drive AI-empowered systems that will behave predictively and in hyper-customized ways.

The final factor that will transform smartware into invisible user interfaces will be progress in the underlying technologies. Every year, things that were not previously possible become possible. As power management, battery life, and voice recognition improve, increasing miniaturization makes new frontiers in computing interfaces possible.

The Modern Personal Computing Ecosystem

To understand how the future we’ve described in our column will become possible, let’s first examine what a standard, modern computing setup looks like.

Smartphone / Mobile Use

For most of us, our multi-purpose smartphone is the device we use the most. It bridges our work and personal contexts. While a smartphone is a poor substitute for a notebook or desktop computer, it provides uniquely mobile services such as real-time GPS navigation, a camera that lets us capture photographs and video, and access to just-in-time services such as ride-sharing apps.

We now get our greatest computing utility from mobile. By superimposing a software layer over our experience as we travel through the world, the smartphone enables us to enjoy the greatest benefit from a wide variety of technologies. But, while the smartphone is currently the enabling device for mobile experiences, other technologies may absorb its capabilities over time. What is most important is the ability to leverage just-in-time computing capabilities, using a layer of functionality that overlies our movement through the world.

Notebook / Creative Use

For those of us who need to accomplish work using a computing device, our notebook or desktop computer is our primary device for getting things done. It’s necessary for a wide range of work and personal activities—as varied as filing our taxes, word processing, and creating PowerPoint presentations. These computers provide a greatly enhanced experience over the smartphone for most things that don’t require just-in-time mobile services.

The only reason we still have notebook- and desktop-computing devices is to enable various types of creation—whether 3D animations, financial spreadsheets, or presentation slides. Creating these in an efficient way requires a robust computing environment that includes a large display, a full, sturdy keyboard, and a mouse, trackpad, or tablet rather than the gestural interactions that define small-screen, mobile computing. For the foreseeable future, we will continue to have creative computing needs that require a more powerful computer.

Tablet / Consumptive Use

Our tablet is a supplementary device that we generally use for consuming instead of creating—unless we augment it with input devices that turn it into a makeshift notebook. Tablets’ larger, often higher-resolution screens provide a better computing experience than the smartphone. Plus, tablets are more ergonomic, convenient, and comfortable for consumptive use than notebook computers are.

We spend most of our computing time consuming content—watching videos, listening to music, browsing Facebook, or reading in a Web browser or app. These activities require minimal power and resolution in a computing device. Some are important functions that translate our past behavior into a more modern context. After all, we’ve been watching television for about 70 years and listening to music on mobile players for almost 40. Others are consequences of our current technologies. For example, while Facebook feeds may seem ubiquitous today, we’ll likely leave them behind in the future. The bottom line, though, is that we’re primarily consumers when using our computing devices. Thus, the computing we do on notebook computers is often similar to what we do on our mobile devices, at lower levels of fidelity, or on our tablets, for a richer experience.

Specialized Devices / Narrow Use Cases

There are so many of these devices and they are so varied that it is just about impossible to consider all of the possibilities. However, most of us have some devices that support specific, specialized tasks or extend our core computing ecosystem to other user interfaces around our home.

By breaking down our computing into these use contexts, we can begin to see patterns in our needs and why we rely on certain devices. The smartphone brought a revolution because it offers so much computing utility that is appropriate for mobile devices—in the form of just-in-time services and entertainment. But we still need a more robust environment in which to create and work. Paradoxically, our computing needs are much simpler than those that are supported by the complex, crowded ecosystem we currently inhabit, shown in Figure 1, but nuanced enough that our future of invisible interfaces must support diverse ways of computing.

Figure 1—The clutter of a modern computer ecosystem

Image source: Domenico Loia on Unsplash

Envisioning the Smartware Ecosystem

Considering all of this, how will user interfaces disappear in the future?

Input Devices

A substantial portion of our computing devices’ physical footprint relates to input—the keyboard and mouse or the smartphone screen that, while clumsy, is worlds better than the face of a smartwatch. These input devices are changing—thanks to advances in both hardware and software. On the hardware side, we’re moving toward integrated rather than external computing devices. We see this prominently in consumer products such as Apple’s Siri or Amazon’s Alexa, which use voice control to replace typing, clicking, and tapping. There are also examples of using a person’s eye movements to communicate intent to a machine. While these technologies are still in their early days and far from stellar, they give us a glimpse into a future of hardware that blends into our environment and even into ourselves, freeing us from the need to hold something in our hand or bend our head down unnaturally to see it.

Plus, future computers will simply need fewer inputs. They may know what we want and need better than we do. They’ll often know better ways to do things, giving us the eye of an artist or the judgment of an analyst without our needing to do a single, simple thing. In many cases, even the requirement to provide input to power on a machine will simply go away. Instead of machines requiring human input, software will power the computer’s ability to serve us, so the need for clunky, cumbersome inputs will disappear as well.

Output Devices

The monitor as an output technology predates computers, going back to the days of early television and even film—arguably, all the way back to cave paintings! While audio outputs also play a role, our eyes watching a display is viscerally core to the computing experience. Crucially, the monitor has always been an external device—one that sits on our desk or in our hand or on our lap. Thus, computing requires the presence of a device that is alien to the way we would otherwise go about our day. But this is now changing thanks to advances in virtual-reality technology.

Remember Google Glass, which looked like a futuristic pair of eyeglasses? In 2013, this computing device enjoyed a hype-fueled release. Many technologists purchased Glass and proudly tweeted using the device. Yet, it quickly faded away as just another over-hyped new technology product. The reason so many smart people were excited about Glass was the superior idea behind it: shifting device output from a large screen to a head-up display.

But the technology came nowhere near being able to deliver on the concept. The experience of using Glass and the capabilities the device offered paled in comparison to what we foresee in future devices. Today, virtual-reality (VR) systems such as the Oculus Rift, HTC Vive, and Sony PlayStation VR are introducing us to a different take on integrated outputs—one in which the content is much richer, as well as more worthwhile. Titles include Eagle Flight, Arizona Sunshine, and Minecraft VR. The problem with VR today is that, unlike Google Glass, the systems are big, chunky wraparounds like those depicted in Figure 2. These headsets have more in common with a helmet than a pair of glasses. We’re heading toward a future now—in years, not decades—in which solutions will have more ergonomic, natural form factors, even as the quality of the experiences they provide continues on an upward trajectory.

Figure 2—Clunky VR headsets

Image source: Guido van Nispen on Flickr (CC BY 2.0)

Advanced Processors

Of course, nothing happens in computing without a processor. Thanks to perpetual advances in hardware and software, the footprint of the core engine that drives our computing devices has gotten ever smaller while, at the same time, becoming more powerful. As much as we continue to muddle through with clumsy input and output devices, the engines of these wonderful machines are already great and will just keep getting better. Famously, the computing power inside a garden-variety smartphone today is significantly greater than the entire computing capability that ran the Apollo 11 mission, which put the first people on the moon. This is the very reason our smartphones can be so small, yet do so many helpful things.

Historically, or at least in the years before the smartphone revolution, the processor and memory in a desktop computer were separate from the interface devices, enabling better ergonomics. The monitor, keyboard, and mouse could be positioned without concern for the often-bulky innards that powered the user experience. Today, each device in our more complicated computing ecosystem generally has its own integrated processor. The ability to store our files on portable, external storage devices or in the cloud means we no longer need cavernous storage capacity on every device. This distributed system uses computing resources more sensibly and efficiently and, at the same time, decouples the extent of our machine’s capabilities from whatever its physical form factor is able to support.

This model is helping to define the smartware computing ecosystem. We may wear a processor on our wrist or something like a key fob that powers entirely distinct input and output devices. We already know that the processors of the future will be more and more powerful. The invisible interfaces they power will not necessarily be integrated into the same devices that accept our inputs and serve their outputs.

Figure 3—Virtual user interfaces in a computing ecosystem of the future

Image source: COMSALUD on Flickr (CC BY 2.0)

Conclusion

We believe the future of computing will be invisible—and integrated. Instead of having a notebook computer and a smartphone and a tablet and a smartwatch, we’ll have computing capabilities that fit organically and seamlessly into the contexts of our lives. For mobile and consumptive computing, we’ll communicate with our machines with our eyes and voice more than with our hands. We’ll still need a screen—perhaps in the form of an apparatus we wear like glasses, similar to the fanciful user interface shown in Figure 3, or an accessory we wear like a watch. However, it will always be with us as we move through the world—more like clothing we wear than a pocketbook we carry.

Yes, there will also be computing environments in which we create, which require high-fidelity inputs and outputs, as well as more processing power. But even these will get better as their screens blend into our environments and improve in resolution, as the combination of artificial intelligence and more natural input paradigms reduces the need for typing and clicking, and as transitions from our creative platform to mobile devices happen seamlessly.

The digital world has already become nothing short of magical, but the years and decades ahead will usher in a world that—for all but the youngest of us—will feel like it’s from the pages of a science-fiction book rather than the reality of our everyday lives. Let’s enjoy and make the most of the experience together!

Dirk Knemeyer

Managing Director, SciStories LLC

Co-owner of Genius Games LLC

Boston, Massachusetts, USA

As a social futurist, Dirk envisions solutions to system-level problems at the intersection of humanity, technology, and society. He is currently the managing director of SciStories LLC, a design agency working with biotech startups and research scientists. In addition to leading SciStories, Dirk is a co-owner of Genius Games LLC, a publisher of science and history games. He also cohosts and produces Creative Next, a podcast and research project exploring the future of creative work. Dirk has been a design entrepreneur for over 15 years, has raised institutional venture funding, and has enjoyed two successful exits. He earned a Master of Arts from the prestigious Popular Culture program at Bowling Green.

Jonathan Follett

Principal at GoInvo

Boston, Massachusetts, USA

At GoInvo, a healthcare design and innovation firm, Jon leads the company’s emerging technologies practice, working with clients such as Partners HealthCare, the Personal Genome Project, and Walgreens. Articles in The Atlantic, Forbes, The Huffington Post, and WIRED have featured his work. Jon has written or contributed to half a dozen non-fiction books on design, technology, and popular culture. He was the editor for O’Reilly Media’s Designing for Emerging Technologies, which came out in 2014. One of the first UX books of its kind, the work offers a glimpse into what future interactions and user experiences may be for rapidly developing technologies such as genomics, nano printers, or workforce robotics. Jon’s articles on UX and information design have been translated into Russian, Chinese, Spanish, Polish, and Portuguese. Jon has also coauthored a series of alt-culture books on UFOs and millennial madness and coauthored a science-fiction novel for young readers with New York Times bestselling author Matthew Holm, Marvin and the Moths, which Scholastic published in 2016. Jon holds a Bachelor’s degree in Advertising, with an English Minor, from Boston University.
