
Smartware, AI, and Magical Products

Smartware

The evolution of computing

November 20, 2017

Do you remember the first time you saw magic? Something that stretched your imagination beyond what you thought possible? For Dirk, this happened in a most unlikely place: a Sears store in a sleepy Midwestern shopping mall, circa 1977, at a demonstration of the Home Pong console, which was, at the time, the latest technological wonder. A small crowd had gathered in awe around a chunky tube TV, and children and adults alike turned the control wheels with delight, bouncing a pixelated ball back and forth. Although, as a child, Dirk had experienced a variety of traditional magic shows involving cards, rings, and pigeons, it was that Pong demonstration that stayed with him. In that moment, the television transformed into a machine with which he could interact, and he began a newfound relationship with the screen.

The interactivity that so enthralled Dirk that day is, in fact, core to computing. Ever since consumers adopted the earliest personal computers, we’ve input commands to yield desired outputs. Today, however, interactivity is changing, becoming far less direct. Using artificial intelligence (AI), services such as Amazon and Netflix have mapped a detailed identity graph for each of their customers. Machine learning enables these services to recommend products that customers are likely to buy and new shows that viewers are likely to enjoy.
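To make that idea a little more concrete, here is a minimal, hypothetical sketch of item-to-item collaborative filtering, one common recommendation technique. The ratings, titles, and customers are invented, and the production systems at Amazon and Netflix are vastly more sophisticated.

```python
import numpy as np

# Invented ratings matrix: rows are customers, columns are titles; 0 = not yet watched.
ratings = np.array([
    [5, 4, 0, 0, 1],
    [4, 5, 5, 0, 0],
    [0, 1, 0, 5, 4],
    [1, 0, 0, 4, 5],
], dtype=float)
titles = ["sci-fi box set", "space drama", "alien documentary",
          "cooking show", "baking series"]

def cosine_similarity(a, b):
    """How alike two titles' rating columns are."""
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return 0.0 if denom == 0 else float(a @ b) / denom

def recommend(customer, top_n=2):
    """Score unwatched titles by similarity to titles this customer already rated highly."""
    row = ratings[customer]
    scores = {}
    for j, title in enumerate(titles):
        if row[j] > 0:
            continue  # already watched
        scores[title] = sum(row[k] * cosine_similarity(ratings[:, j], ratings[:, k])
                            for k in range(len(titles)) if row[k] > 0)
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend(0))  # ['alien documentary', 'cooking show'] -- the sci-fi fan gets sci-fi first
```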


Such recommendation engines are great, early examples of what we call smartware. Without prompting, a machine discerns the best fit for what we want and need and proactively serves it to us. A combination of automation and design lets us bypass a slow, potentially frustrating discovery process, instead offering us something better, just in time. While the more interesting and impactful applications of this technology remain in a nascent stage, we can see the promise of our smartware future even in these simple product and movie recommendations. Like Dirk’s revelatory encounter with the Home Pong console—that seemingly magical device shown in Figure 1—these experiences promise something transformative.

Figure 1—The Atari Pong Home Console

Image source: Wikimedia Commons

Smartware and AI

As we outlined in “The Smartware Transformation,” AI and machine learning are coming of age at the same time as advances in neuroscience and genomics and new technologies that include the Internet of Things (IoT), additive fabrication / 3D printing, and virtual reality (VR). Collectively, these advances are enabling smartware that promises to create a radical inflection point at the same scale as personal computers in the 1970s, the Internet in the 1990s, and mobile computing in the 2000s.

In human-computer interaction, the round trip of each interaction has always been: user input > algorithmic processing > machine output. But, with the AI of smartware, there will be far fewer user inputs and machine outputs and much more algorithmic processing. The term artificial intelligence is often misunderstood. Artificial intelligence is just software. Smarter software. Software that does more on its own, without user inputs, but software nonetheless. AI is not a black box that is mysteriously supercharging computing. It is simply an evolutionary stage in the development of software—one in which software is less reliant on the human user and better able to gather, analyze, and act on data itself.
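A purely illustrative sketch of that shift, with invented functions and thresholds, might look like this: the first function waits for an explicit command, while the second gathers context and decides on its own.

```python
from datetime import datetime

# Traditional model: the user supplies every input; the machine returns an output.
def thermostat_set(user_requested_temp: float) -> float:
    return user_requested_temp

# Smartware model: far less user input; the software gathers context and decides.
def thermostat_infer(sensor_temp: float, occupied: bool, now: datetime) -> float:
    if not occupied:
        return 16.0          # save energy in an empty house
    if now.hour >= 22 or now.hour < 6:
        return 18.0          # cooler during sleeping hours
    return 21.0 if sensor_temp < 21.0 else sensor_temp

print(thermostat_set(21.5))                          # explicit command
print(thermostat_infer(19.2, True, datetime.now()))  # proactive decision
```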

A Short History of AI

AI began as a dedicated field of practice in 1956, at Dartmouth College, in the United States. Eleven intellectuals met to discuss the future of thinking machines, or artificial intelligence, as the group’s organizer John McCarthy, shown in Figure 2, christened it. That workshop began a 50-year ebb and flow, in which periods of great funding, excitement, and interest in AI yielded few practical gains, bringing on fallow periods of limited investment and attention once the hope turned out to be merely hype. This cycle repeated over decades. But, with the rise of machine learning, it seems that this 50-year effort has finally culminated in a true inflection point.

Figure 2—John McCarthy, American computer scientist and AI pioneer

Image source: null0 (CC BY 2.0)

Machine Learning

In machine learning, an AI continually learns and makes new rules beyond those initially set by a human operator. Previously, an expert-system, or rule-based, approach to AI predominated. In that model, humans program the software, creating the logic and rules the AI follows to behave intelligently. However, this approach doesn’t scale: changes and unanticipated cases cause it to bog down, limiting its effectiveness. Machine learning—while also having a core of human-defined and programmed rules—enables the software to adapt based on the data and user feedback it receives. While an expert system requires software to solve problems in a specific, prescribed way, the machine-learning approach teaches the software to find the best answer by itself. In this way, machine learning enables flexibility and resiliency.
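The contrast is easier to see in code. The toy sketch below is our own illustration, not any particular product: the first classifier encodes hand-written rules, while the second derives its rules from a handful of labeled examples, here using scikit-learn’s decision tree.

```python
from sklearn.tree import DecisionTreeClassifier

# Expert-system approach: a human writes every rule the software follows.
def is_spam_rules(message: str) -> bool:
    triggers = ["free money", "act now", "winner"]
    return any(phrase in message.lower() for phrase in triggers)

# Machine-learning approach: the software derives its own rules from labeled examples.
training_messages = [
    "free money act now",
    "winner you have won a free prize",
    "meeting moved to three",
    "notes from the design review",
]
labels = [1, 1, 0, 0]  # 1 = spam, 0 = not spam

vocabulary = ["free", "money", "winner", "prize", "meeting", "notes"]

def featurize(message: str) -> list:
    """Crude bag-of-words counts over a tiny vocabulary."""
    words = message.lower().split()
    return [words.count(term) for term in vocabulary]

model = DecisionTreeClassifier(random_state=0)
model.fit([featurize(m) for m in training_messages], labels)

new_message = "claim your free prize"
print(is_spam_rules(new_message))                        # False -- no hand-written rule matches
print(bool(model.predict([featurize(new_message)])[0]))  # True -- learned from the examples
```

The expert system stops at its hand-written triggers; the learned model generalizes from whatever examples we feed it, for better or worse.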

Of course, it can also allow unintended consequences. In one example, a machine-learning AI for identifying different kinds of dogs instead focused on the settings the dogs inhabited, becoming adept at identifying pictures of snow, but not at identifying dogs, as the developers intended. Effective machine learning generally requires a degree of human guidance—at least with today’s technology. Yes, there are plenty of examples of machine-learning AIs that we’ve turned loose and that are effectively learning in the wild by themselves. However, especially in situations where there are specific goals or desired outcomes, providing a guidance layer that teaches the machines before they begin learning on their own is the more predictable, efficient model for AI. For example, Google DeepMind’s AlphaGo AI used just such an approach—defeating some of the best human champions at the ancient game of Go over the last two years, before retiring in May 2017.
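Here is a toy illustration of that failure mode, with invented features and labels: because snow correlates perfectly with huskies in the training photos, the model latches onto the background rather than the dog, which is exactly the kind of outcome a human guidance layer and carefully chosen test data are meant to catch.

```python
from sklearn.tree import DecisionTreeClassifier

# Invented features for each photo: [pointy_ears, snowy_background].
# Label 1 = husky, 0 = labrador. In the training photos, snow happens to
# correlate perfectly with huskies, while ear shape is a noisier signal.
train_X = [[1, 1], [1, 1], [0, 1], [1, 0], [0, 0], [0, 0]]
train_y = [1, 1, 1, 0, 0, 0]

model = DecisionTreeClassifier(random_state=0).fit(train_X, train_y)

# A husky photographed on grass: the spurious cue (snow) is gone.
husky_on_grass = [[1, 0]]
print(model.predict(husky_on_grass))  # [0] -- the model learned "snow", not "dog"
```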

The Weak and the Strong

We can broadly split AI into two types:

  • Weak, or narrow, AI—Weak AI focuses on just one narrow task and uses machine-learning techniques to overcome that specific problem. This is the reality of artificial intelligence today. Even with machine learning and the ability to conquer the best human players in any strategy game, these silicon conquerors are merely one-trick ponies. This is an amazing advance that is core to the potential of smartware, but a far cry from the artificial life form in Ex Machina, shown in Figure 3, or the virtual companion in Her.
  • Strong AI—In contrast, strong AI refers to a more general artificial intelligence with capabilities that are equal to or greater than that of the human mind. Sometimes strong AI is broken into two categories: artificial general intelligence, an AI that can do various things and, thus, is similar to human intelligence, and superintelligence, an AI that exceeds the capabilities of humans and can potentially disrupt existence—for example, with a Singularity, as promised by a small group of futurists. All of this is the stuff of tabloids and science fiction.
Figure 3—The artificial life form in Ex Machina

Image source: Ex Machina Movie Web site

Given what we see in art and the media, we can be excused for thinking that AI is on the verge of replacing knowledge workers everywhere. Machine learning certainly sounds like real intelligence. If AIs have become the undisputed masters of strategy games by defeating humans, perhaps this portends a new species of superintelligence? The obsolescence of human workers is one of the biggest questions and fears surrounding AI, one that the many contemporary science-fiction movies depicting seemingly credible artificial life play upon. The truth is far less dramatic.

AI in Action: IBM Watson Health

What is remarkable about artificial intelligence today is that we can harness it to perform tasks that were previously unthinkable for a machine. For example, IBM Watson Health has partnered with the Mayo Clinic to provide a virtual adviser for physicians. Here’s how it works; a simplified sketch of the advise-and-feedback loop follows the list:

  • IBM’s framework combines Watson’s AI technology with the domain expertise of their health business.
  • The Mayo Clinic contributes rules and domain knowledge from its experts, along with patient data that informs Watson.
  • Watson continually learns, taking in data from every possible source. For example, Watson consumes orders of magnitude more medical journals than any one doctor could.
  • As Team Mayo works with a patient, Watson analyzes the patient’s complete health record, along with all additional inputs that Team Mayo records.
  • Watson provides advice on diagnoses and treatments, backing it up with data and statistics, for Team Mayo to consider in the management of their patient.
  • Team Mayo provides feedback on the quality of Watson’s recommendations, continuing and refining the learning process.
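The sketch below is a hypothetical skeleton of that advise-and-feedback loop, written for illustration only; the class, methods, and sample documents are invented and bear no relation to IBM’s actual APIs.

```python
from collections import defaultdict

class ClinicalAdviser:
    """Hypothetical skeleton of the advise-and-feedback loop described above;
    this is not IBM Watson's API, just an illustration of the workflow."""

    def __init__(self):
        self.literature = []               # ingested journals, guidelines, trial reports
        self.feedback = defaultdict(list)  # clinician ratings, keyed by recommendation

    def ingest(self, document: str) -> None:
        """Continually take in data from every available source."""
        self.literature.append(document)

    def advise(self, patient_record: dict) -> list:
        """Return guidance relevant to this patient, ranked by past clinician feedback."""
        matches = [doc for doc in self.literature
                   if patient_record["condition"] in doc.lower()]
        return sorted(matches, key=lambda doc: -sum(self.feedback[doc]))

    def record_feedback(self, recommendation: str, rating: int) -> None:
        """The care team's ratings feed back in, refining future rankings."""
        self.feedback[recommendation].append(rating)

adviser = ClinicalAdviser()
adviser.ingest("Guideline: lifestyle changes for mild hypertension")
adviser.ingest("Study: drug A lowers blood pressure in adults with hypertension over 50")
advice = adviser.advise({"condition": "hypertension", "age": 62})
adviser.record_feedback(advice[0], rating=1)  # the team marks the top suggestion as useful
```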

IBM Watson is also partnering with Froedtert & the Medical College of Wisconsin to provide an engine for clinical-trial matching, shown in Figure 4. According to IBM, the software can “quickly complete the data-intensive process of matching patients with clinical trials and provide doctors the information they need to advise their patients about relevant studies.” Now that IBM Watson Health has been in the wild for a couple of years, complaints about its quality and effectiveness are growing. So, while the reality of IBM Watson Health is still not equal to the hype, the potential we see just down the road remains enticing.
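Stripped to its essentials, the matching step is a constraint check of patient data against structured eligibility criteria. The sketch below is purely illustrative; the trials, criteria, and patient fields are invented, and the real system also parses unstructured clinical notes at far greater scale.

```python
# Hypothetical eligibility criteria for two invented trials.
trials = [
    {"id": "TRIAL-A", "condition": "breast cancer", "min_age": 18, "max_age": 75,
     "excluded_prior_treatments": {"drug-x"}},
    {"id": "TRIAL-B", "condition": "breast cancer", "min_age": 50, "max_age": 80,
     "excluded_prior_treatments": set()},
]

def matching_trials(patient: dict) -> list:
    """Return the IDs of trials whose structured criteria this patient satisfies."""
    return [
        t["id"] for t in trials
        if t["condition"] == patient["condition"]
        and t["min_age"] <= patient["age"] <= t["max_age"]
        and not (t["excluded_prior_treatments"] & patient["prior_treatments"])
    ]

patient = {"condition": "breast cancer", "age": 62, "prior_treatments": {"drug-x"}}
print(matching_trials(patient))  # ['TRIAL-B'] -- TRIAL-A excludes patients who received drug-x
```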

Figure 4—Clinical-trial matching with IBM Watson

Image source: IBM / Froedtert & MCW Cancer Network

Traditionally, the doctor has been the keeper of expert knowledge in the medical system. However, in comparison to the totality of available medical information, any doctor’s relatively limited knowledge is a bottleneck—even for the best and brightest of physicians. The quality of our healthcare is subject to their biases, experience, and exposure to information. In extreme or unusual cases, they might consult with other physicians, or patients might seek a second opinion. Either way, the overworked, time-constrained human doctor can bring only a fraction of the possible analytical considerations to bear on our treatment.

With Watson, expert knowledge shifts to a machine that has, for all practical purposes, infinite time and processing power to consider information. Even though Watson might be imperfect, if it is at least competent, it can already improve on the physician’s advice. While IBM, trying to be nonthreatening to their customers and prospects, positions Watson Health as a doctor’s aide, it is not much of a leap to re-envision a healthcare system without doctors as we know them today. Imagine how the doctor’s role could shift from being the primary medical-knowledge expert to that of a relationship expert who is acquainted with the organizations that patients hope can provide them with further care. Or the doctor’s role might become that of a psychological expert who can help patients manage the fear and uncertainty inherent in their need for healthcare. The human’s role would shift from knowledge holder and subject-matter expert to manager of the human beings within a medical world of super-genius, silicon brains.

Conclusion

Of course, many of the impacts of good artificial-intelligence systems will be far less dramatic than a redefinition of the roles of doctors and other alphas at the top of the knowledge-work pyramid—particularly in the short term. AI will enable invisible interfaces—computing environments that require significantly less user input and integrate more naturally into human lives. It will also make our simple software smarter. Rather than our needing to learn a unique topography for each of the many software applications we use, AIs will more adeptly anticipate our needs and provide just-in-time functionality. There will come a time when only the most expert professional computer operators will need to memorize different key combinations to get their software to do the things they want.

As NVIDIA CEO Jensen Huang has said on many occasions—including in an interview with MIT Technology Review, in May 2017—“Software is eating the world, but AI is going to eat software.” He’s playing off Marc Andreessen’s commentary in The Wall Street Journal, in August 2011, that software is disrupting industries across the board. But the overall idea is correct. AI will eventually permeate all of the software we use. As you try to get comfortable with artificial intelligence, remind yourself that AI is just the next logical step in the evolution of software. It is not some mysterious other thing. It is just smarter software. It is smartware. 

Managing Director, SciStories LLC

Co-owner of Genius Games LLC

Boston, Massachusetts, USA

Dirk Knemeyer
As a social futurist, Dirk envisions solutions to system-level problems at the intersection of humanity, technology, and society. He is currently the managing director of SciStories LLC, a design agency working with biotech startups and research scientists. In addition to leading SciStories, Dirk is a co-owner of Genius Games LLC, a publisher of science and history games. He also cohosts and produces Creative Next, a podcast and research project exploring the future of creative work. Dirk has been a design entrepreneur for over 15 years, has raised institutional venture funding, and has enjoyed two successful exits. He earned a Master of Arts from the prestigious Popular Culture program at Bowling Green.

Principal at GoInvo

Boston, Massachusetts, USA

Jonathan Follett
At GoInvo, a healthcare design and innovation firm, Jon leads the company’s emerging technologies practice, working with clients such as Partners HealthCare, the Personal Genome Project, and Walgreens. Articles in The Atlantic, Forbes, The Huffington Post, and WIRED have featured his work. Jon has written or contributed to half a dozen non-fiction books on design, technology, and popular culture. He was the editor for O’Reilly Media’s Designing for Emerging Technologies, which came out in 2014. One of the first UX books of its kind, the work offers a glimpse into what future interactions and user experiences may be for rapidly developing technologies such as genomics, nano printers, or workforce robotics. Jon’s articles on UX and information design have been translated into Russian, Chinese, Spanish, Polish, and Portuguese. Jon has also coauthored a series of alt-culture books on UFOs and millennial madness and coauthored a science-fiction novel for young readers with New York Times bestselling author Matthew Holm, Marvin and the Moths, which Scholastic published in 2016. Jon holds a Bachelor’s degree in Advertising, with an English Minor, from Boston University.
