
Frontiers of Interaction 06: Conference Report

November 20, 2006

On June 16, 2006, Frontiers of Interaction was held at the University of Milano-Bicocca, in Milan, Italy.

Welcome

Speaker: Dirk Knemeyer

Dirk Knemeyer is an information and experience designer and a brand strategy expert. In 2004, he founded Involution Studios LLC, a company building applications and providing consulting services to leading technology companies.

In Dirk’s opinion, virtual reality is becoming more and more important—both as a topic worthy of deep reflection and as a design issue. When we design, we must begin with people—with customers. First of all, it’s necessary to do an in-depth analysis of the context in which people will use a product and the behavioral aspects of an interaction. Only in this way can we truly satisfy future customers.


Mobup: The Emotional Aspects of Ubiquitous Computing Through a Camera Phone

Presenter: Matteo Penzo

Matteo Penzo began his talk by sharing his personal conception of interaction frontiers: Considering users’ emotions and instincts is the new frontier and the challenge we must face in designing digital products.

Mobup, shown in Figure 1, demonstrates perfectly what Matteo means. Mobup is a small application that manages photo uploads to Flickr from a camera phone. He conceived of and developed it to meet the needs of bloggers on the move. A user can shoot a photo, tag it, and send it, using only his phone, taking advantage of various features of the phone in the process. For example, if a user has a Bluetooth® GPS (global positioning system) device, he can add geographic information when tagging photos.

Figure 1—Mobup
Mobup
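
Mobup's actual client ran on the phone itself, so the sketch below is not Matteo's implementation. It is a minimal Python illustration of the shoot-tag-send flow, assuming a pre-authorized Flickr API key and auth token and omitting Flickr's request-signing step; the geo machine tags follow Flickr's published convention for embedding coordinates in tags.

```python
# Conceptual sketch of a Mobup-style upload: photo + tags + optional GPS data.
# API_KEY and AUTH_TOKEN are placeholders; a real client must also sign
# each request per the Flickr API documentation.
import requests

UPLOAD_URL = "https://up.flickr.com/services/upload/"
API_KEY = "your-api-key"        # placeholder
AUTH_TOKEN = "your-auth-token"  # placeholder

def upload_photo(path, tags, lat=None, lon=None):
    """Upload one photo; encode GPS coordinates as geo machine tags."""
    if lat is not None and lon is not None:
        tags = tags + [f"geo:lat={lat}", f"geo:lon={lon}"]
    with open(path, "rb") as photo:
        return requests.post(
            UPLOAD_URL,
            data={"api_key": API_KEY,
                  "auth_token": AUTH_TOKEN,
                  "tags": " ".join(tags)},
            files={"photo": photo},
        )

# Example: a blogger on the move posts a tagged, geolocated snapshot.
response = upload_photo("duomo.jpg", ["milano", "frontiers06"],
                        lat=45.4642, lon=9.1900)
print(response.status_code)
```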

The World Around Our Screens

Presenter: Fabio Sergio

Nowadays, the GUI (graphical user interface) limits our physical interactions with the digital world, according to Fabio Sergio. Starting with the idea of embodiment—physical and social phenomena unfolding in real time and space as part of the world in which we are situated, right alongside and around us—Sergio firmly hopes that the Internet can become “the Internet of things.” Interactions with digitally enhanced artifacts can be enriched by people’s deep emotional relationships with things, or their emotional affordances. Figure 2 shows two examples of “the Internet of things,” with which it will be very easy for users to interact emotionally:

  • Cellular squirrel prototype—a device, with a Bluetooth peripheral inside, for managing mobile phone conversations.
  • Nabaztag—a small interactive rabbit, with a Wi-Fi connection to the Internet, which is an avatar rather than an agent.

The original meaning of the term avatar is embodiment, which fits Nabaztag perfectly. Nabaztag is one of the most fully developed avatars available on the Internet.

Figure 2—Avatars
Avatars

Such new objects—even simpler ones—can become agents that convey, transmit, and distribute information. They can communicate with users and play an important role in their decision-making processes. In the future, these new agents will provide user interfaces for our mobile phones or PDAs (personal digital assistants). What might our interactions with these devices be like?

The new Wii™ controller from Nintendo®, shown in Figure 3, provides an illuminating example. It is a peripheral that offers a better way of interacting with a video game console. It has sensors that can recognize a user’s movements, letting games respond to a player’s gestures, and it can also play sounds. Beyond the screen, interactions through such a device could take advantage of new kinds of user interfaces.

Figure 3—Wii—the new controller from Nintendo
Wii
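
To make the idea concrete, here is a toy Python sketch of threshold-based gesture detection of the kind accelerometer input makes possible. The axes, the sample values, and the 2.0 g threshold are illustrative assumptions, not Nintendo's actual algorithm.

```python
# A toy sketch of threshold-based gesture detection from accelerometer
# samples. The sample values and the 2.0 g threshold are invented for
# illustration; this is not Nintendo's gesture-recognition code.
from math import sqrt

def detect_swing(samples, threshold_g=2.0):
    """Return True if any sample's acceleration magnitude exceeds the threshold."""
    return any(sqrt(x * x + y * y + z * z) > threshold_g for x, y, z in samples)

# Example: a burst of (x, y, z) readings during a tennis-style swing.
readings = [(0.1, 0.9, 0.2), (0.8, 1.5, 1.9), (1.6, 2.1, 2.4)]
if detect_swing(readings):
    print("Swing detected: trigger the racket animation and sound.")
```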

Emotion in Human-Computer Interaction

Presenter: Christian Peter

Christian Peter works for the Human-Centered Interaction Technologies department of the Fraunhofer Institute for Computer Graphics (IGD), in Rostock, Germany.

How is it possible to interact with computers in a way that takes a user’s emotions into consideration? First, to create a satisfying interaction, it’s necessary to focus on the user, to pay attention to his emotions, and to create a comfortable situation that lets the interaction develop in the best way. People show their emotions through their facial expressions, the intonations of their voices, their gestures, their posture, and physiological changes. So where do emotion-recognition technologies stand today?

Researchers have investigated the physiological aspects of emotional expression, which are already being used in real applications—for instance, biofeedback. Recognition of the visual expressions of users who are in motion is still difficult, but researchers are conducting laboratory experiments to solve this problem. At present, voice analysis is still the most difficult aspect of this research, but even in this case, there have been positive results in laboratory experiments. The interpretation of gestures and posture analysis are both promising, but there has been less research in these areas.

However, to be useful in interaction design, emotion-recognition technologies must be minimally intrusive—or better, not at all intrusive—easy to use, accurate, valuable to customers, easy to apply, robust, and reliable.

At the moment, IGD is testing a sensor system in a special glove that collects data about physiological responses that reflect a user’s emotions. As shown in Figure 4, the first glove still had wires, but the current glove is wireless and has a memory card and a microcontroller. The sensors in this glove can measure heart rate, skin conductivity, and skin temperature. The next generation of the glove, shown in Figure 5, will be lightweight and have a more refined aesthetic design.

Figure 4—Gloves containing sensors that register physiological responses
Gloves with sensors
Figure 5—Next generation of the glove
Next generation glove
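
As a rough illustration of how software might consume such readings, the sketch below models one sample of the three measures the glove captures and flags arousal as a rise over a resting baseline. The data structure, baseline values, and percentage thresholds are invented for illustration; they do not represent IGD's processing.

```python
# A minimal sketch of representing and screening readings from a sensor
# glove like IGD's. The dataclass fields match the three measures named
# above; the baseline values and the 15%/20% thresholds are invented.
from dataclasses import dataclass

@dataclass
class GloveSample:
    heart_rate_bpm: float       # heart rate, beats per minute
    skin_conductance_us: float  # skin conductivity, microsiemens
    skin_temp_c: float          # skin temperature, degrees Celsius

def looks_aroused(sample: GloveSample, baseline: GloveSample) -> bool:
    """Flag physiological arousal as a rise over the user's resting baseline."""
    return (sample.heart_rate_bpm > baseline.heart_rate_bpm * 1.15
            and sample.skin_conductance_us > baseline.skin_conductance_us * 1.20)

baseline = GloveSample(68.0, 4.0, 33.5)
current = GloveSample(82.0, 5.4, 33.9)
print(looks_aroused(current, baseline))  # True: both measures are elevated
```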

This system could have very interesting applications in laboratory tests and for some tasks that users perform in their jobs, but also for education and games.

For the present, emotions are not taken into account in our interactions with technological systems, but they can be. Certainly, this is an essential aspect of enriching human/computer interactions.

Virtual Assistant: Work or Fun?

Presenter: Leandro Agrò

Leandro Agrò proposed that we shift from the idea of a graphical user interface (GUI) to an emotional user interface (EUI) and told us that we can bridge that gap using the technological capabilities we already have. He said we must strive for a complete integration of the necessary technologies. Agrò’s research on virtual assistants is a big step toward this goal. Figure 6 shows two frames from his demo videos.

Figure 6—Virtual assistants
Virtual assistants

In Agrò’s opinion, it is not essential that a virtual assistant understand every element of a user’s communication. However, it is important that an assistant be proactive in managing a situation, even though it may not necessarily solve a user’s problems.

Social/Cultural Cues for More Effective Human/Machine Interaction

Presenter: Dario Nardi

Dario Nardi, PhD, teaches at the University of California at Los Angeles and is a founding faculty member of its new program in Complex Human Systems. He teaches the modeling and simulation of social systems.

Nardi spoke about his project SocialBot, which began in 1997 with the goal of improving natural-language-parsing engines and the efficiency of chatbots, or virtual chat agents. Figure 7 shows the SocialBot user interface. In 2002, Nardi created a conversational agent that had almost 4000 behaviors. A year later, he improved the chatbot by giving it a changeable identity and made it possible for other people to program it. The project’s underlying purpose was to validate the importance of socially relevant behaviors in conversations.

Figure 7—SocialBot user interface
SocialBot

Adding some personal information to a message can improve the quality of an interaction—for example, a clue about a person’s status or a recollection of his mother’s favorite color. As a matter of fact, adding only small details to a message can contribute dramatically to a more satisfying interaction—a more human communication. The aim is to create virtual agents that have practical abilities, in order to achieve a more natural, pleasing, and useful interaction.
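
A toy sketch makes the point: even one remembered fact, woven into an otherwise generic reply, changes the feel of the exchange. The memory store and phrasing below are invented for illustration; this is not SocialBot's code.

```python
# A toy illustration of personalizing a reply with one remembered fact.
# The memory store and phrasing are invented; this is not SocialBot's code.
memory = {
    "anna": {"status": "your recent promotion",
             "mothers_favorite_color": "blue"},
}

def reply(user, base_message):
    """Weave a remembered personal detail into an otherwise generic reply."""
    facts = memory.get(user, {})
    if "status" in facts:
        return f"Congratulations on {facts['status']}! {base_message}"
    return base_message

print(reply("anna", "How can I help you today?"))
# Congratulations on your recent promotion! How can I help you today?
```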

The Second Life of the Net

Presenter: Andrea Benassi

Andrea Benassi is a learning designer at INDIRE, the Italian center for educational development, which is located in Florence.

According to Andrea Benassi, the extraordinary success of the Web might, somehow, have limited the evolution of other potential user interfaces and modes of interaction. The use of the page metaphor has perhaps excluded other interactions.

However, for video games, designers have developed different user interfaces and interactions that let users manage situations in ways that simulate personal experience. These modes of interaction can go beyond the game playing for which they were created and also be useful in education. Unfortunately, the use of such interactions is still limited, especially in teaching and learning.

Situations in which a player plays a role in a game—whether a single-player or multi-player game—are very interesting. However, the most interesting situation might be one in which a player interacts with a virtual world as himself. This means that, as a user shifts from a virtual world to a synthetic world, that world remains closely related to the real world.

The game Second Life® is a perfect example of a synthetic world—a digital simulation of the physical world. Second Life is a 3D virtual world built entirely by its inhabitants—the players—who also own it. But Second Life is something more than just a game: It’s a powerful way for people to communicate and express themselves. That is why 25 universities use Second Life for educational purposes.

Right Between the Eyes: Recognize the User’s Functional Status Through Ocular Movements

Presenter: Francesco Di Nocera

Francesco Di Nocera spoke about analyzing ocular movements to determine a user’s status. Adopting such a technique lets us shift from the idea of satisfaction to the idea of pleasure, from interaction to relationship, focusing attention more and more on the emotional involvement of users. Obviously, if we have comprehensive, synthesized data on the emotional state of users, we can design systems that provide more effective feedback to them. However, using these techniques requires an eyetracking system. This comes very close to manipulating users’ emotions, so we may have to face some ethical issues.
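
Techniques like this typically begin with fixation data from the eyetracker. One spatial statistic from the eyetracking literature is the nearest neighbor index (NNI) of fixation positions, which indicates how clustered or dispersed a user's fixations are relative to a random scatter; researchers have related such distribution measures to a user's functional state. The sketch below is a minimal Python illustration under invented assumptions (the screen area and sample fixations); it does not represent Di Nocera's actual method or tools.

```python
# A sketch of the nearest neighbor index (NNI) of fixation positions, a
# spatial statistic from the eyetracking literature. The screen area and
# sample fixations are invented; this is not a specific product's API.
from math import sqrt, dist  # math.dist requires Python 3.8+

def nni(fixations, area):
    """Observed mean nearest-neighbor distance / expected value for random points."""
    n = len(fixations)
    observed = sum(
        min(dist(p, q) for q in fixations if q is not p) for p in fixations
    ) / n
    expected = 0.5 * sqrt(area / n)  # expected for a uniform random pattern
    return observed / expected       # below 1: clustered; above 1: dispersed

fixations = [(100, 120), (110, 130), (400, 300), (420, 310), (700, 500)]
print(round(nni(fixations, area=1024 * 768), 2))  # about 0.41: clustered gaze
```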

Getting from Concept to Realization: The Role of UX in Product Development

Keynote Speaker: Pabini Gabriel-Petit

Pabini Gabriel-Petit is the founder of Spirit Softworks LLC, a Silicon Valley user experience design and strategy consultancy. She’s also the founder, publisher, and editor in chief of UXmatters.

There are many definitions of user experience. UXnet defines user experience design as “an emerging field concerned with improving the design of anything people experience: a Web site, a toy, or a museum. User experience is inherently multidisciplinary, synthesizing methods, techniques, and wisdom from many fields, ranging from brand design to ethnography to library science to architecture and more.” Gabriel-Petit described user experience design as “a holistic, multidisciplinary approach to the design of user interfaces for digital products.”

But beyond definitions, taking a multidisciplinary approach to the design process ensures the best possible user experience. This implies teamwork, with shared ownership of the user experience, as follows and as Figure 8 shows:

  • Product Management is responsible for determining what’s needed and, therefore, valuable to users.
  • UX Design is responsible for what’s usable, useful, and desirable.
  • Engineering is responsible for what’s possible and what’s not.

Figure 8—Multidisciplinary approach to the design of user interfaces for digital products
Multidisciplinary approach

However, a multidisciplinary approach sometimes encounters obstacles in corporate cultures—especially in regard to sharing information, sharing ownership, and collaboration during the design process. Some metaphors that are representative of common communication problems on product teams are self-explanatory:

  • documents thrown over walls—There are communication problems among groups that should collaborate.
  • silos—Individual groups lock information away in boxes that are unapproachable from the outside rather than sharing it with other groups.

In the ideal process, a product team shares responsibility for a product’s vision, function, form, and goals:

  • Vision—The product manager defines a product’s value proposition; the software architect defines its system architecture; and the UX designer designs a holistic product—together establishing the vision for the product.
  • Function—The product manager defines the product’s essential functionality; the UX designer designs user interactions; and Engineering implements the specified functionality.
  • Form—The product manager sets the features; the UX designer designs the visual interface; and Engineering implements the user interface.
  • Goals—The product manager creates a Marketing Requirements Document; the software architect, an Engineering Requirements Document; and the UX designer, a Usability Requirements Document.

There are many effective means of communicating design solutions, including wireframes, site maps, design patterns, storyboards, detailed specifications, and usage scenarios.

Gabriel-Petit suggested the following ways in which a UX designer can support a development team:

  • Act as a consultant to the development team and be available to answer questions.
  • Clarify your specifications so they cover the questions developers raise.
  • Provide error messages as needed.
  • Revise your specifications as necessary in light of technical constraints developers encounter.
  • Adjust the scope of your specifications as requirements change.

Working closely with a development team offers several advantages, as follows:

  • Gives you opportunities to sell your design to the developers and ensure they understand it.
  • Helps you to understand engineering constraints better.
  • Encourages developers to perceive you as a member of their team.
  • Makes it more likely that the developers will build to your specifications.

Living Interfaces Versus Windows

Presenter: Antonio Rizzo

Antonio Rizzo, of the University of Siena, emphasized that, from the moment we are born, we are attracted in a special way to visual stimuli that remind us of the human face. The ability to recognize a human face is an inborn faculty—as are the abilities to perceive physical objects and ascribe certain characteristics to them. However, while both primates and human beings possess this “physical knowledge,” only human beings possess “social knowledge.” This is particularly true of the faculty of recognizing and using intentionality in communication. Reading, ascribing, and communicating intentions are natural human faculties. To date, we have used these abilities in digital technologies only marginally. In developing interactions with digital devices, surely we will try to use human-like components to which people find it easier to ascribe intentionality. However, we must be aware that this path is full of hazards, because the recognition of human intentionality is very complex.

A simplified simulation is not sufficient and can be counterproductive. But at the same time, it’s very attractive, because it enriches our conception of knowledge—which is connected to our ability to understand—with our ability to feel emotions, giving us knowledge plus feeling. This suggests the processes of imitation and dialogic alternation as the first principles we can use when developing interactions with new technologies. We must develop this approach to designing interactions with new technologies in a complementary, but completely different way from the techniques we now use in designing visual interfaces.

Making Meaning: Emotion, Values, and Meaning in the Interface

Presenter: Nathan Shedroff

Nathan Shedroff is the author of the book Making Meaning: How Successful Businesses Deliver Meaningful Customer Experiences.

According to Shedroff, the most important aspect of interface design, in the wider sense of the term, is meaning. But first, it’s necessary to speak about experience—that is, the way in which we relate to objects and the physical world. Designers create experiences—sometimes deliberately, sometimes unconsciously. Shedroff identified six dimensions of a successful user experience, as shown in Figure 9. Each dimension of user experience has its value, which derives from appropriate design choices.

Figure 9—Shedroff’s six dimensions of user experience
Six dimensions of UX

Shedroff defined these six dimensions of user experience, as follows:

  • Significance—The most important aspect of the experience.
  • Breadth—The extent of the experience—for example, the audience, the context, the brand, the channels. Every aspect of user experience represents an opportunity and has characteristics that influence the experience.
  • Intensity—The level of a user’s involvement in an experience.
  • Duration—Time in relation to the experience. All steps must be studied. Nothing should be omitted—the beginning, the time over which an experience continues, the end, repetition.
  • Triggers—Through their senses, people can perceive all aspects of what we design. Many objects communicate only through the visual layer, even though all of our five senses are always active.
  • Interaction—The passage from a passive state, to an active state, and then to an interactive state.

There are five levels of meaning—from the most superficial to the deepest: function, price, emotions/lifestyle, status/values, and meaning. Meaning is closely connected with the whole decision-making process, from the user’s first interest to the act of choosing. The most superficial layers—emotions, for example—make an object attractive, but the deepest layers are decisive for its purchase. Sometimes these deeper layers work at an unconscious level.

Identifying meanings and the priorities of meanings is an important step in the development of new products. In this way, the design process requires the ability to evoke—not create—meanings. Triggers play an important role in this process. If we are able to understand which meanings are important to customers and what the related values are, we succeed in improving the user experience.

The emphasis on meaning also requires rethinking the whole product development process. We must take the importance of meaning into account from the first steps—from the definition of corporate strategy. Throughout the process, we must be extremely coherent. Every choice we make in the name of meanings and values becomes a starting point for the next step. At the end of the process, every decision we have made must be coherent with the product goals we have set.

Laura Caprio

Product Manager at Matrix

Milan, Italy

Laura has been active in Web site design since 1998—first as a Web designer, then as a content designer and editor. In 2001, Laura joined the new Information Architecture team at DNM—now Fullsix. Since then, she has been active in information architecture, user-centered design, and usability testing. Currently, Laura is working as a product manager at Matrix, a company in the Telecom Italia Group—the most important telco player in Italy. In her role at Matrix, she is responsible for UX design and information architecture and is the editor of www.alice.it, one of the best-known Web portals in Italy. Laura is co-founder of www.informationarchitecture.it, the first Italian Web site focusing on information architecture and user-centered design. She is also co-author of the book Information Architecture, the only book in Italian on this topic. For the last five years, Laura has been promoting information architecture and user-centered design through articles, courses, and panels at seminars and conferences, including the Italian IA Summit.
