
Smartware: The Evolution of Computing


September 25, 2017

In this column on the future of computing, we’ll look at how a handful of advances—including artificial intelligence (AI), the Internet of Things (IoT), sciences of human understanding like neuroscience and genomics, and emerging delivery platforms such as 3D printers and virtual-reality (VR) headsets—will come together to transform software and hardware into something new that we’re calling smartware.

Smartware comprises computing systems that require little active user input, integrate the digital and physical worlds, and continually learn on their own.

A Tribute to Dead Machines

Humanity and technology are inseparable. Not only is technology present in every facet of civilization, it even predates archaeological history. Each time we think we’ve identified the earliest cave paintings, stone tools, or use of wood for fuel, some archaeologist finds evidence that people started creating or using them even earlier. Figure 1 shows one such early cave painting, by an unknown artist. Indeed, while our own species, Homo sapiens, is only about 300,000 years old, the earliest stone tools are more than 3 million years old! Even before we were what we now call human, we were making technology.

Figure 1—Cave painting in the Tassili n’Ajjer mountains

Image source: Wikimedia Commons

It is ironic that the technology of today may outlive humanity itself. If we project how current technologies such as AI, nanotechnology, and materials science will continue to develop and converge, we can imagine creating a species of human-like machines that outlive us—or, in a doomsday scenario, even bring about our extinction.

But that’s all fantasy—something that will not happen during our lifetimes, if it ever does. Compelling as this fantasy or horror may be, depending on your disposition, focusing on the distant future distracts us from the exciting opportunities that current and emerging technologies offer. This tempestuous brew of technologies will bring our machines to life. Not life comparable to that of another person or even a beloved pet, but machines that are fully functional without a human intermediary. The implications of this future are vast.

A History of Human Technology

We’ll start our new column by looking at human history through the lens of technology to understand where we came from and why. Ours is a history full of incredible advances, one that demonstrates our ancestors’ uncanny ability to make magic over and over again.

Up to 10,000 BCE

Over the course of the approximately 300,000 years leading up to 10,000 BCE, we evolved into human beings and, at the very end of this period, created the first agricultural revolution. For most of that time, humans lived as hunter-gatherers. Instead of establishing permanent residences, we migrated in pursuit of food and comfort. As the seasons changed and different plants and animals sprouted and prospered, humans adapted to their surroundings.

10,000 BCE to 4,000 BCE

During the next 6,000 years, humans established permanent settlements that centered around farming and the domestication of animals. These early settlements evolved into the more advanced environments we now think of as civilizations. Humans shifted from being migratory drifters to permanent landholders. Rather than seeking out the best available environment within their limited ability to travel, humans learned to bend the environments they inhabited to their needs and desires.

4,000 BCE to 1450 CE

During this period—which we may consider the first 5,500 years of civilization—humans first established cities, along with an array of technologies relating to their comfort, safety, and prosperity. This period produced the rough structure of human civilization as we know it, albeit in a form that would be unrecognizable compared to the way we live today.

1450 CE to 1950 CE

Over the course of this 500-year period, humanity advanced further than it had during the preceding hundreds of thousands of years of human history. We translated the discoveries and knowledge of the Renaissance and the Enlightenment into the Industrial Revolution, completely remaking the face of the world in the process. In fact, despite humanity’s many great advances during the subsequent 70 years, the world today continues to look very much like the one at the end of this period, which Figure 2 depicts.

Figure 2—Industrial pollution in late-19th-century Widnes, England

Image source: Wikimedia Commons

1950 CE to the Present

This brings us to the current period of human history, during which the technological precursors from World War II have led human beings directly to the dawn of the digital age.

Starting just after World War II, when technologists produced the first computers with stored programs—computers with decidedly Cold War–era names such as the SSEM, EDSAC, EDVAC, BINAC, and MESM—we’ve lived in a world of digital computing. For the first 30 of these years, progress continued in research institutions—for example, at NASA’s Langley Research Center, where researchers used the IBM Type 704 Electronic Data Processing Machine, shown in Figure 3, to perform computations for aeronautical research. But other than the occasional article in a newspaper or magazine, this technology was invisible to the public. Important things were happening, but they were not yet impacting people’s everyday lives.

Figure 3—IBM 704 at NASA’s Langley Research Center

Image source: Wikimedia Commons

Everything changed in the 1970s with the advent of digital consumer products. Along with personal computers such as the Apple II shown in Figure 4, other interesting products built on computing technologies began to surface—including autofocus cameras, home video-game consoles, and video-cassette recorders (VCRs). This wave of consumer products brought digital technology into our homes and our everyday lives.

Figure 4—Apple II ad in Byte Magazine, December 1977

Image source: Wikimedia Commons

Our daily familiarity with technology contributed to the speed at which people adopted the Internet once the World Wide Web became publicly available in 1991, an adoption that made computing core to our home and work lives. By the end of the 1990s, most knowledge workers had shifted to email as their primary form of business communication, there was a computer on every desk, and typing had become an essential skill for every professional. In less than 30 years, we had gone from being part of a society that encountered no digital technology on a regular basis to having homes full of digital consumer products and work that centered on a small plastic box sitting on our desk.

Still, personal computing was not a core activity in most people’s personal lives until 2007, when Steve Jobs’s ability to create and market desirable computing products culminated in the release of the iPhone, shown in Figure 5. The iPhone was so successful that, in the course of just a few years, it transformed ours into a society of heavy computer users. Going beyond "a smartphone in every pocket," the ways in which smartphones interfaced with cloud computing resulted in applications that almost everyone valued. We could now take, edit, and share photos on our phones. Simple text messaging incorporated images and emojis. Mapping software pinpointed our current location to enhance search and ratings functionality. Home entertainment transformed from an experience that centered on one TV, with a cable or satellite television subscription, to an array of networked devices and Internet subscriptions.

Figure 5—First-generation iPhone

Image source: Carl Berkeley (CC BY-ND 2.0)

It is hard to believe that, just a decade ago—when a phone was just a phone and a computer was a small plastic box on a desk—none of this was part of mainstream computing reality. My, how times have changed.

Getting to Smartware: Some Early Examples

Today, digital technology provides us with an array of tools that are core to our lives and in daily use—tools ranging from email to collaborative document creation to text-messaging services. However, they do all of this in a state of blissful ignorance.

Your iPhone obliviously keeps open every program you’ve used, even briefly, over a period of weeks or longer. Your browser reopens—or, all too often, fails to reopen—whatever workspaces you were using when you were last on your computer. Your voice assistant does not understand your clearly enunciated commands and fails to provide what should be a straightforward service. So the much-ballyhooed AI revolution feels as though it is still a long way off. It isn’t. Considered in the larger context of smartware’s potential, it’s actually pretty close, even if our daily experience doesn’t make it seem that way. Indeed, the foibles of our current technology make it easy to forget how far we’ve come and how quickly. Today’s machines are much more convenient and powerful, and they offer a much wider array of capabilities. And they are certainly trying to be smart, even as they fail. Let’s consider three contemporary examples.

Tesla Automobiles

Tesla automobiles represent this difficult stage in the development of smartware quite well. Amazingly, all new Teslas have self-driving capabilities, while almost no other cars on the road do. Ironically, Tesla’s documentation advises drivers to keep their hands on the steering wheel and put their foot on the brake to stop. So it’s a self-driving car that requires a fully engaged human driver. Tragically, some drivers have already died in crashes while using this capability. At this point, it is unclear whether this feature represents a safety upgrade, and it has not yet made the driving experience more convenient. Therefore, while the potential is there, it is far from fully realized.

Amazon Echo

The Alexa voice service lets us speak commands to the cloud—even when we’re not near any other kind of computing device—and receive spoken information in response. We can place orders on the parent Amazon service or initiate and host a call with another person. But the results are, to put it generously, mixed. The user experience might actually lead you to unplug the device and put it away rather than let it scream throughout the house every time it tethers to your notebook computer.

Google Chrome

The Google Chrome browser is a self-contained workstation, particularly when you’re using Google Docs or Sheets and cross-referencing with Internet research. The Google Docs autosave feature is certainly a boon. However, as windows and tabs proliferate, it is all too easy to close your work environment accidentally. Of course, you can reverse this by pressing Control-Shift-T to reopen a closed tab or window. Unfortunately, most users are unaware of this feature. It harkens back to one of the worst UX design decisions in computing history: the Control-Alt-Delete standard that IBM set in 1981, which is still in force today. Significantly, Bill Gates recently apologized for this long-lasting, but much-derided feature.

Conclusion

So we still don’t have cars that simply take us safely wherever we need to go while we take a nap. We don’t have voice services that just work. We don’t have computing workspaces that show an understanding of our work and prevent our ever having that “Oh $*!%!” moment when we accidentally lose—or think we’ve lost—hours of work. We can do so much with our computing devices, but—similar to the mechanical dumbwaiter that was invented in 1883—nothing happens until we push a button. Today, these devices are still simply dumb.

For you and me, our families and communities, our employers and nations, for each and every one of us, smartware is soon going to make technology smart in ways that delight us, challenge us, and offer us opportunities that were unthinkable just a few short years ago. While the prospect of smartware is certainly exciting, our experience of smartware absolutely should not be confusing, uncertain, or destabilizing. Over the course of this series, we’ll explore the technologies that are contributing to the capabilities of smartware and the changes we can expect to experience.

If you would like to explore this topic further, you might enjoy our podcast on smartware and the history of computing, The Digital Life Podcast. You can listen to episode 224, which appears in Figure 6. 

Figure 6—Podcast: Smartware: A Tribute to Dead Machines

Dirk Knemeyer
Managing Director, SciStories LLC
Co-owner of Genius Games LLC
Boston, Massachusetts, USA

As a social futurist, Dirk envisions solutions to system-level problems at the intersection of humanity, technology, and society. He is currently the managing director of SciStories LLC, a design agency working with biotech startups and research scientists. In addition to leading SciStories, Dirk is a co-owner of Genius Games LLC, a publisher of science and history games. He also cohosts and produces Creative Next, a podcast and research project exploring the future of creative work. Dirk has been a design entrepreneur for over 15 years, has raised institutional venture funding, and has enjoyed two successful exits. He earned a Master of Arts from the prestigious Popular Culture program at Bowling Green.

Jonathan Follett
Principal at GoInvo
Boston, Massachusetts, USA

At GoInvo, a healthcare design and innovation firm, Jon leads the company’s emerging technologies practice, working with clients such as Partners HealthCare, the Personal Genome Project, and Walgreens. Articles in The Atlantic, Forbes, The Huffington Post, and WIRED have featured his work. Jon has written or contributed to half a dozen nonfiction books on design, technology, and popular culture. He was the editor of O’Reilly Media’s Designing for Emerging Technologies, which came out in 2014. One of the first UX books of its kind, it offers a glimpse into what future interactions and user experiences may look like for rapidly developing technologies such as genomics, nano printers, and workforce robotics. Jon’s articles on UX and information design have been translated into Russian, Chinese, Spanish, Polish, and Portuguese. Jon has also coauthored a series of alt-culture books on UFOs and millennial madness, as well as a science-fiction novel for young readers, Marvin and the Moths, written with New York Times bestselling author Matthew Holm and published by Scholastic in 2016. Jon holds a Bachelor’s degree in Advertising, with an English minor, from Boston University.
