A Day at Frontiers of Interaction 06
Published: November 20, 2006
There are not many interaction design conferences in Italy, so you can imagine the interest a conference about the frontiers of interaction design engendered. Frontiers of Interaction 2006—the second edition of this conference—had as its mission the exploration of the future of interaction design. Though I’m Swiss, my native language is Italian, and this topic attracted my attention.
The event took place in Milan, Italy, at the Bicocca University. The slide shown in Figure 1 provided the backdrop in the hall where the organizers greeted participants.
Figure 1—Frontiers of Interaction logo
The day started with a video from Dirk Knemeyer, in which he said, “It’s a weird world that we live in, where I’m sitting here in the Santa Clara studio for my company Involution Studios and able to be present there with you in Milano this morning, at the same time. It’s pretty cool.” Dirk told us, “In design, you have to love and care about and empathize with those who you are designing for. You have to really pay attention to them—what their contexts of use are, what their mindset is, who they are, how they behave and why—and design for that. And it’s by paying attention to them that you can really design for the things that will delight and surprise and create usable and delightful environments.”
Matteo Penzo began his speech with a very interesting idea: “The best products solve a specific problem for a particular person.” He spoke about the problems people sometimes have when shooting photos on the street, using their smartphones to capture the fleeting moments of daily life. Matteo noted that these people’s objective is to shoot and publish photos quickly, without regard for their quality, so they’re ready to shoot again within a very short time. To help such people, Matteo created Mobup, a very intuitive application for smartphones that lets people shoot photos and publish them directly to Flickr.
Speaking about “The World Around Our Screens,” Fabio Sergio began with the origin of the word avatar, which in Sanskrit refers to a personification of divinity. In a software product, the concept of an avatar refers to a personification or entity on the screen. Fabio cited Paul Dourish’s ideas regarding embodiment and how we can bring the virtual into the physical world, using the analogy of looking through a pair of binoculars backwards.
I really appreciated Fabio’s idea of creating objects for daily use that provide an emotional interface for common, complex activities. He highlighted the advantages of this approach through an anecdote. After the introduction of the use of magnetic cards on buses in Milan, an old lady told Fabio, “In the past, you had to stamp the ticket. Now you simply caress the machine.”
In the future, this approach could create an “Internet of things” that is populated by many little emotional actors like the one shown in Figure 2.
Figure 2—One Nabaztag as an actor in an Internet of things
Christian Peter, of the Fraunhofer Institute, spoke on “Emotion in Human/Computer Interaction.” During his speech, his ability to synthesize the concepts of emotion and human/computer interaction amazed me.
Christian began with a clear premise: Computers must accommodate people, not vice versa. So, when designing computing systems, we must respect the fact that human beings feel emotions when using such artifacts. Christian warned that it’s possible we might make people very angry if we do not respect human emotions and, to illustrate his point, presented videos and images showing people destroying their computers.
Christian’s conclusions were very clear and pragmatic: Good product design must respect human nature and motivate users to use a product, without creating stress.
After the morning break, it was time for Leandro Agrò’s talk, “Virtual Assistant: Work or Fun?” He noted, “Maybe it’s not essential for a computer to have a face, but it’s undoubtedly true that, in a dialogue, interacting with a human face provides the best experience.”
Leandro presented some videos showing emotional user interfaces, in which virtual assistants proactively helped users to perform certain tasks. For more information, see Leandro’s article on UXmatters, “From GUI to E(motional) UI.” The audience in the hall was buzzing—probably because they were envisioning the potential applications of this technology in their daily lives and the revolution that its adoption would entail.
The next speaker via video was Dario Nardi of UCLA. During his talk, “Social/Cultural Cues for More Effective Human/Machine Interaction,” he presented a video about his SocialBot, a sort of avatar that can assist users through social interactions—both expressing itself through and understanding verbal communication, facial expressions, and gestures. Dario said, “Humor is so important,” and SocialBot can understand some simple jokes.
Andrea Benassi gave a long talk during which he showed us the magical world of Second Life. Many users transfer their social lives to Second Life and even create commercial ventures there. I remember Andrea asking this question: “Has the metaphor of the page excluded other types of metaphors, affecting the evolution of virtual reality?”
During lunch, we discussed all of the presentations we’d seen in the morning, and particularly the moral and social implications of the emotional user interfaces Leandro presented.
After lunch, Antonio Rizzo explained how human beings—and in particular children—are innately able to foresee events in their surrounding environment. By observing cause and effect—or reactions to certain events—we can determine the likely sequence of events that will occur.
For example, if one puts an inclined board on a table, hiding a part of its length behind a panel, then rolls a ball down the board, a child can follow its path. However, if one then blocks the ball’s progress down the board at the point where it is hidden behind the panel, the child is amazed to see it disappear.
Antonio then discussed other instincts of human beings such as the ability to recognize people’s expressions and told us that these instincts should not be thwarted, but encouraged.
In her keynote address, Pabini Gabriel-Petit spoke about “Getting from Concept to Realization: The Role of UX in Product Development.” She first defined what user experience is, then described the workflow on a product team that shares ownership of the user experience, and finally, spoke about the value of user experience. She gave us many ideas on which to reflect, and I think I’ll find some of these useful when working with my clients. Three points interested me a lot.
The first point was her definition of user experience. Pabini noted the differences in various people’s definitions of user experience, showing Peter Morville’s facets of user experience and Jesse James Garrett’s elements of user experience. She spoke about the multidisciplinary nature of user experience and the importance of defining the field to include all professionals whose work impacts the user experience. She cited Don Norman’s definition of user experience, which is very broad, but fully encompasses the world of UX.
The second point was about the professional domains that share ownership of the user experience. Pabini noted that there are three key actors on product teams: the Product Manager, the UX Designer, and the System Architect. The problem is that, in many cases, either the Product Manager or the System Architect dominates the product development process, which can damage the user experience. To avoid this problem, there should be balance among all three actors and clear communication among these members of a multidisciplinary product team, as Figure 3 shows.
Figure 3—Sharing ownership of UX
Finally, what struck me was Pabini’s emphasis on the importance of communicating design solutions effectively by using wireframes, prototypes, flowcharts, design patterns, specifications, and other formal documentation that is all too often absent from UX projects. And I agree!
During the afternoon break, I met Pabini, and we discussed some of the ideas from her presentation.
After the break, Nathan Shedroff blew away everyone present with his videotaped speech. He explained what elements are essential in the experience of every artifact, including triggers, duration, intensity, breadth, significance, and interaction. He summarized all of these elements in a single sentence: “A satisfactory experience is meaningful for the one who lives it.”
The last three talks were brief. Some researchers from Bari University gave the first of these. They spoke of the importance of the environment in which we live and interact and showed us some experiments in which robots (an AIBO) and avatars helped users to solve problems within specific environments in the real world.
Francesco Di Nocera gave the second of these talks. Since it’s possible to monitor and measure our brain activity, he posed this question: Isn’t it possible for machines to adapt to these measured parameters? His ideas were really interesting and introduced the concept of neuroergonomics, but some ethical doubts emerged, particularly regarding the correctness of tracking and manipulating people’s emotions, as shown in Figure 4.
Figure 4—Ethical issues
The last speech was by me, Luca Mascaro. I gave a presentation on “Emotional Design on the Market,” showing product development during various market phases, from introduction to maturity, and explaining the essential investments for technology, marketing, and design, as shown in Figure 5. My final question was this: “Are the software and Web markets good enough to justify the necessary investments?”
Figure 5—Comparative vision