Controlling Privacy on Social Networking Services

April 19, 2010

Among the challenges facing social networking services, concerns about security and privacy are becoming increasingly significant. In particular, even if we trust a given social networking service provider, the mechanisms for restricting who can see the information we publish are usually inadequate. Despite all of their claims to offer fine-grained control over who can see what, they provide far more control over the what than the who.

First, I’ll describe an Object-Actor-Action permissions model and survey some social networking services’ current approaches to privacy control. Then, I’ll propose two specific constructs—privacy onions and privacy tags—that attempt to address control over the Actor dimension at the appropriate level of granularity. Finally, I’ll outline the advantages of the privacy tag approach.


Privacy Concerns

One of the major concerns about social networking services is users’ control over the privacy of their information—or more accurately, control over who gets to view what they publish. Such services should offer privacy protection that’s more substantial than a simple differentiation between stuff users’ friends can see and stuff that is open to the public. But providing more fine-grained control might impose a complex user interface that the majority of users couldn’t easily follow.

What we really need is a unifying metaphor—and its corresponding user interface—that captures various levels of interpersonal trust in a way that is both comprehensive and simple.

Historical Background

In the beginning, the World Wide Web provided a mechanism for a small number of information providers to publish information to a large audience. During the early 1990s, users could navigate and search this growing information space, but were fundamentally consumers who had no ability to contribute. [1] In this one-way medium, privacy issues were limited to cases where an organization deliberately published someone’s personal information on its Web site, and people dealt with such breaches of trust via personal communications and legal processes outside the medium.

During the late 1990s, the Web changed from an information space to an application space. In addition to finding information, users could interact with the published content—particularly by making product purchases online. This extended the Web into a two-way medium, but a relatively small number of participants controlled the scope of interaction—namely, the companies who hosted Web sites. It remained the case that only information deliberately published into the public space became visible to other participants. Nevertheless, the privacy of personal information such as credit card details was often a concern. Technological solutions could deal with one aspect of this concern—the fear that some third party might intercept the information people provided over the Web—so this problem has gradually disappeared with the increasing use and understanding of HTTPS/SSL since its introduction in 1994. Beyond that concern, however, the key privacy issue was whether Web site hosts were trustworthy. Web users often provided personal information as part of their transactions, in the belief that an organization collecting the information would both secure it from other parties and use it only for purposes relating to those transactions.

More recently, the Web has morphed again, this time into a social space where privacy becomes a more fundamental concern. The emphasis of Web 2.0 on social networking, sharing, collaboration, folksonomies, communal blogs, the wisdom of crowds, and user-generated content makes today’s Web more significantly a two-way medium than the Web was previously. A vast array of new sites promote the self-publishing of text, audio, and images, juxtaposing the right of attribution with the desire to limit distribution.

Outdated copyright laws provide some protection from what people may do with your published works. However, at its core, the Web has always been a public space where people share content freely. The mechanisms that allow private interactions within that public space—such as authentication schemes, Virtual Private Networks (VPN), levels of trust, and uses of encryption in systems like The Onion Router (Tor)—are extensions rather than essential components of the Web. As people increasingly use the Web for social networking, the need for control over privacy becomes an essential part of using the Web, even though it is not essential to the Web’s technology.

The Expectation of Privacy

Governments do not universally recognize privacy as a right, whether in the real world or cyberspace. Although some countries have legislated privacy matters, there are still important disagreements over individual versus communal rights, including claims that the state’s right to know things about its citizens sometimes overrules its citizens’ rights to privacy.

Putting aside political and philosophical issues, however, a central premise of any practical discussion about privacy on the Web is the fact that information on the Web is, at least by default, public. People’s complaints about social networking sites indiscriminately distributing their personal data on the Web are unconvincing when they have already made an explicit choice to expose that data in a public space.

Legislative and commercial pressures have caused most Web properties to include a privacy statement that describes the host organizations’ responsibilities with regard to how they treat users’ data. But those responsibilities do not extend to the protection of users from any unintended consequences of self-publishing in a public space.

Nevertheless, many people who use social networking services do so with the intention of limiting access to their published material to a select audience. They publish information with a reasonable expectation that these services will respect their intentions. Therefore, it behooves the providers of social networking services to implement privacy mechanisms that are both simple to use and sufficiently fine-grained to enable users to have adequate control over access to the material they publish. So, an effective privacy mechanism needs to deal with the following dimensions of a permissions model and answer these questions:

  • Objects—Which personal information Objects do users choose to expose to other people, or Actors?
  • Actors—To whom do users choose to reveal those Objects?
  • Actions—What do users let the permitted Actors do with the Objects to which they have access?
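We can sketch these three dimensions as a minimal data model. This is only an illustrative sketch in Python; the class names, the Actor names, and the Action strings are my own inventions, not taken from any real service.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Actor:
    """A person who may view or act on a published Object."""
    name: str

@dataclass
class InfoObject:
    """A piece of published information, such as a photo or comment."""
    title: str
    # Maps each permitted Actor to the set of Actions they may perform.
    permissions: dict = field(default_factory=dict)

def may_perform(obj: InfoObject, actor: Actor, action: str) -> bool:
    # Access is denied unless the owner has granted this Actor the Action.
    return action in obj.permissions.get(actor, set())

# Example: the owner exposes a photo to Sam for viewing and commenting.
photo = InfoObject("Office party photo")
sam = Actor("Sam")
photo.permissions[sam] = {"view", "comment"}
```

Framed this way, a privacy mechanism is simply a policy for populating and querying that permissions mapping without exposing its full complexity to users.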

Together, these requirements constitute just one special case of a broader task that information technology (IT) has addressed in many forms in the past. Virtually every operating system and every database management system has implemented some mechanism for controlling who has what type of access to which resources. The more mature examples of such mechanisms are role based rather than based on individuals—that is, administrators associate permissions with roles such as Accountant or Data-Entry Operator and grant or deny individuals’ access according to their assigned roles.

What differentiates privacy on social networking services from previous implementations of access control is their requirement for simplicity of use. In other contexts, trained IT system administrators who can cope with complex systems control the configuration of access rules. Consequently, breadth of functionality and fine granularity dominate the design task rather than usability. When a privacy system structures information objects hierarchically—as in a filing system—the inheritance of permissions compounds the complexity. For example, the file-level security of Microsoft Windows provides a very high degree of control, but few people can use it with confidence. In my own experience, even competent system administrators fiddle around with permissions until they achieve the desired behavior. Nobody I know gets it right the first time.

When users might be anyone with an Internet connection and a Web browser, designers must elevate simplicity to their highest priority. Unfortunately, to date, this has meant the sacrifice of important functionality.

Privacy Controls on Social Networking Sites

Current practice in social networking privacy focuses primarily on Objects, with only elementary regulation of Actors and virtually no attention to Actions. As a result, there has been a growing number of well-publicized stories about the painful consequences of social networking sites’ distributing personal material beyond its owner’s intention.

There is not a lot of variation in the approaches the major social networking sites have taken. One reason for this results from the nature of the interactions such sites promote. For instance, on Twitter, once you have set your account to either Public or Protected, there is no further control—and there doesn’t need to be. Your tweet stream is visible to either everyone or just your followers. That’s as complex as it needs to be.

At the opposite end of the spectrum, the basis of tools like Google Wave, instant messaging applications, and online meeting services such as Cisco MeetingPlace, GoToMeeting, and WebEx is a conversation metaphor that naturally restricts each conversation to invited participants.

The sites where explicit control of privacy becomes important are those that let users publish multiple types of information Objects—such as photos, personal information including demographic details, address books, personal opinions, and other types of user-generated content. Most sites of that nature let users restrict access across every type of Object their service supports. However, they do not match that fine level of granularity when it comes to defining the Actors or the Actions on those Objects. In virtually all cases, they support only one Action: the ability to view the exposed Object.

Throughout the rest of this article, I’ll focus on the granularity of control over Actors. This, I believe, is the area requiring urgent attention, and I’ll make some suggestions for improvements in that area.


I’ll start with Facebook, because it seems to attract most of the flak, [2] while at the same time offering the most comprehensive set of controls on any site I have seen. On Facebook, dialog boxes similar to that shown in Figure 1 control access to most Objects. An Object’s owner can restrict access to the following Actors:

  • Only Me—which provides complete privacy
  • Some Friends—which permits an explicit list of people to access the Object
  • Only Friends—which permits all friends to access the Object—of course, implying friends within the Facebook network
  • Friends of Friends—which broadens access to all of a user’s friends, plus friends of their friends
  • Everyone on Facebook—which makes the Object completely public

The Facebook options imply a hierarchy of privacy, including the ability to explicitly deny access to a list of selected people.

Figure 1—Privacy control on Facebook

During the past year, Facebook seems to have expended a lot of effort modifying their privacy settings—both to make them easier to use and to extend the range of Objects to which they apply. But even after their most recent changes on December 10, 2009, it still looks like the site provides control over Actors at the wrong level of granularity. The categorization of Friends, Friends of Friends, and Everyone is too coarse; but the ability to specify a list of individuals is too fine.

I say it looks like this is the case, because there is actually a very powerful, though hidden, feature whose level of granularity is in the Goldilocks zone—that is, just right. How many people know they can define arbitrary groups of friends via Facebook’s friend lists? My guess is that the sheer difficulty of finding that feature results in its being seriously underused. Facebook has not even assigned a proper name to such lists. Users can place friends in as many named lists as they like. For example, a user could create one friend list called Business Associates and another called People I Don’t Trust, then, after selecting the Some Friends option shown in Figure 1, specify that anyone on the Business Associates list can view an Object, except those who are also on the People I Don’t Trust list.

Even if users are clever enough to know about friend lists, it is not immediately obvious that they can use them in this way. The tip text in Figure 1, Start typing the name of a friend or friend list…, erroneously led me to think I could type only a comma-separated list of individual friends. It was only when researching this article that I learned from Nick O’Neill that this option lets users add not only individual friends, but also predefined friend lists.


On LinkedIn, as Figure 2 shows, users can make the various components of their personal profiles visible to either:

  • My Connections—people with whom there has been an explicit interchange of invitation and acceptance
  • My Network—people within three degrees of separation on LinkedIn, plus those with whom a user shares membership in any LinkedIn Group
  • Everyone—all LinkedIn users

Figure 2—Privacy control on LinkedIn

Under Privacy Settings, one very odd feature is Viewing Profile Photos, shown in Figure 3, which provides the ability to control which photos of other people a user can see! I assume this setting cannot overrule others’ settings for the privacy of their own photos. Perhaps this is really a misplaced preference setting for reducing screen clutter and download waste.

Figure 3—Viewing Profile Photos on LinkedIn

Members of a Facebook Group or a LinkedIn Group can post comments, photos, and discussions in a space that is accessible only to members of that Group. Plus, LinkedIn recently introduced the ability to create subgroups within Groups, a feature that potentially adds another level to the privacy hierarchy.


The Ning platform, which claims to host 1.8 million social networking sites, provides a hierarchy of progressively looser privacy settings that is similar to that on Facebook and LinkedIn, using the nomenclature Just Me, Just My Friends, Members, and Anyone.


MySpace displays a warning to new users that includes this sentence:

MySpace is a public space. Don’t post anything you wouldn’t want the world to know.

Its Privacy Settings page lets users define default access for many categories of information, as in the Photos group shown in Figure 4.

Figure 4—Privacy control on MySpace

In Figure 4, the check box labeled Allow my photos to be shared/emailed represents a small step toward the Actions component of privacy. Once someone can see a user’s information, what are they allowed to do with it?

Within each section of MySpace—such as Photos or Blog—a user can limit access to Everyone, Friends Only, or Just Me. In some contexts, users can also define a Preferred List—that is, a subset of their Friends in a private space or a subset of all MySpace users in a public space.


Plaxo applies the predefined categories Business, Friends, or Family to all of a user’s contacts. Users can then use those same categories to define who can view items within their Plaxo account, as shown in Figure 5.

Figure 5—Privacy control in Plaxo


On Flickr, users control access to each of their photos individually, with options to make a photo visible to everyone—including people who visit the site but aren’t Flickr members—visible only to friends or family, or completely private. [3]


Picasa provides the following basic access options for photo albums:

  • Public—which opens access to everyone
  • Unlisted—which is still public, but uses obscure URLs that make an album’s location unguessable
  • Sign-in required to view—which limits access to a specified set of individuals, each of whom must have a Google account

Using personally defined Google Contact Groups makes selecting people easier [4, 5] and also makes it possible for users to define groups of people specifically for the purpose of controlling Object distribution, similar to the process I described earlier for Facebook’s friend lists.

Two Possible Approaches to Access Control

I’ll now present two possible approaches [6] to controlling who can access what information on social networking sites—both of which improve on the current approaches I’ve just described. The functionality of these approaches goes no further than the use of groups by Facebook and Picasa. My intent is to improve the usability of that functionality. After describing the first of these ideas, I’ll point out some difficulties with it that led to the second, superior idea.

Privacy Onions

The basis of privacy options on the example social networking sites I’ve already described is a built-in set of categories that typically define progressive supersets of people. We can represent such a hierarchy as a series of concentric circles, or what I call a privacy onion, as depicted in Figure 6.

Figure 6—A simple privacy onion

On many social networking sites, the categories are predefined and unalterable, but we can readily extend the idea of a privacy onion to a user-definable set of categories, as shown in Figure 7. From an information architecture perspective, a social networking site could allow users to define and name their own categories, then nest those categories however they want. Thus, users could augment any Object with a category that defines what type of Actor has access.

Figure 7—A user-definable privacy onion
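Because the onion’s layers form progressive supersets, the access rule reduces to a simple comparison of layer positions. The following sketch assumes strictly nested, user-defined categories; the layer names are hypothetical examples, not from any real service.

```python
# A privacy onion as an ordered list of layers, innermost first.
# The layer names are hypothetical, user-defined categories.
ONION = ["Just Me", "Close Friends", "All Friends", "Everyone"]

def can_view(actor_layer: str, object_layer: str) -> bool:
    # Each layer is a superset of the layers inside it, so an Actor
    # placed in a given layer can view any Object placed in that
    # layer or in a layer further out.
    return ONION.index(actor_layer) <= ONION.index(object_layer)
```

Note that this rule works only while the categories really do nest, which is exactly the assumption that breaks down in practice, as discussed below.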

Imagine a user interface that displays a diagram representing a privacy onion rather than users’ needing to infer the hierarchy from group membership. Such a diagram would take up a good deal of visual space, so you’d probably hide it a click away, perhaps with a thumbnail image on both the Friend Management page and next to each information Object. Then, you’d need to design mechanisms for three functions, preferably with drag-and-drop interactions, as follows:

  • defining the successive layers of the privacy onion
  • positioning an Actor in the appropriate layer of the onion
  • positioning an Object in the appropriate layer of the onion

It would quickly become apparent to anyone who tried to construct more realistic examples that the categories of people in a user’s network would not necessarily form such progressive supersets. While the onion metaphor could stretch to accommodate some more complex relationships—like those shown in Figure 8—even then it does not deal with cases in which, for example, one person is both a work colleague and an old school buddy. One could, perhaps, allow arbitrarily overlapping circles, but the complexity would then become difficult to manage—both in terms of visual rendering and the cognitive load on users.

Figure 8—Representing more complex relationships

The underlying problem with this approach is that it does not easily capture the arbitrarily complex relationships between people in a social network. Privacy onions could visually depict simple cases, but would not readily extend to more complex cases.

Privacy Tags: A Better Approach

How have we represented arbitrarily complex relationships in other contexts? We have handcrafted UML object diagrams, semantic models, hierarchical taxonomies, network topologies, and network database models. None of these would fulfill the requirement for simplicity in a social networking context. But we have also allowed users to generate their own content categories using tags. Let’s see how we could apply tagging to privacy.

Suppose users could augment every Object not only with a set of content tags, but also with a separate set of privacy tags. The naming of privacy tags and the assignment of those tags to Objects would both be under the control of the Objects’ owners. Further suppose that the same Object owners could assign similar privacy tags to each person in their social network. Then, the social networking software could impose the constraint that an Actor could see an Object only if both the Actor and the Object had the same privacy tag.

For instance, if Chris assigned the privacy tags BFFE and Work Colleagues to Sam and Work Colleagues to Rob, then uploaded a photo of her office Christmas party, she might assign content tags like Party and Christmas, but in this case, she could also assign the privacy tag Work Colleagues. At the same time, Chris could write a comment titled My Boss Stinks and give it the privacy tag BFFE. The result would be that both Sam and Rob could see the photo, but only Sam could see the private comment.
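The matching rule behind this example is just a set intersection. Here is a minimal sketch in Python, using the tag and Actor names from Chris’s scenario above:

```python
# Privacy tags assigned by the Object's owner (Chris, in the
# example above) to the people in her network and to her Objects.
actor_tags = {
    "Sam": {"BFFE", "Work Colleagues"},
    "Rob": {"Work Colleagues"},
}
object_tags = {
    "party-photo": {"Work Colleagues"},
    "my-boss-stinks": {"BFFE"},
}

def visible_to(obj: str, actor: str) -> bool:
    # An Actor sees an Object only if the two share a privacy tag.
    return bool(object_tags[obj] & actor_tags[actor])
```

Because visibility is just intersection, adding a tag to a person or an Object can only widen access, never narrow it.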

Note the additive nature of these privacy tags. The more tags a user assigns to an Actor, the more likely that Actor will have access to many different Objects. A simple extension of this model would be to create separate Allow and Deny privacy tags. Thus, an Object’s owner could tag the Object with Allow tags such as BFFE and Work Colleagues, as well as a Deny tag such as People I Don’t Trust. An Actor would have permission to access an Object only if all of the following conditions were met:

  • The user who owns the Object has assigned privacy tags—either explicitly or by the use of default privacy settings—to both the Object and Actor.
  • At least one of the privacy tags the user has assigned to the Actor appears among the Object’s Allow tags.
  • None of the privacy tags the user has assigned to the Actor appears among the Object’s Deny tags.
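Those three conditions translate directly into code. This is an illustrative sketch only; the tag names come from the examples earlier in this article.

```python
def can_access(actor_tags: set, allow_tags: set, deny_tags: set) -> bool:
    # Condition 1: both the Actor and the Object must carry privacy tags.
    if not actor_tags or not allow_tags:
        return False
    # Condition 2: at least one of the Actor's tags is an Allow tag.
    if not actor_tags & allow_tags:
        return False
    # Condition 3: none of the Actor's tags is a Deny tag.
    if actor_tags & deny_tags:
        return False
    return True

# Example: an Object allowing best friends and work colleagues,
# but denying anyone on the distrusted list.
allow = {"BFFE", "Work Colleagues"}
deny = {"People I Don't Trust"}
```

With this rule, a work colleague who is also on the distrusted list is denied access, even though one of their tags matches an Allow tag: Deny always wins.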

The Value of Privacy Tags

Anything you could do with privacy onions you could also achieve with privacy tags. With tags, however, it would be far easier to include an Actor or an Object in multiple categories, regardless of whether those categories form a hierarchy.

Privacy tags are not functionally different from Facebook’s use of friend lists and Picasa’s use of Google Contact Groups, but they would reveal this functionality’s full value through increased usability. One immediate effect on usability would be that privacy tags would unobtrusively bring the control of privacy closer to center stage. Rather than requiring users to work their way through nested dialog boxes, we can list privacy tags and let users assign them on the Object pages themselves, just like content tags. The tags’ increased visibility would decrease the proportion of times when people unwittingly publish Objects to a broader audience than they intended. It would also be easy to set up default privacy tags for each Object type, reducing users’ need to manually assign tags to each Object.

The concept of content, or semantic, tagging has gained favor in recent years because of the way it combines expressive power with simplicity. Extending the application of tagging to privacy would piggyback on users’ familiarity with the tagging metaphor in social networking applications and, consequently, would add very little learning overhead.

For users who favor a visual mode of communication, it would still be possible to generate an onion-style diagram of privacy tags. You could imagine, for instance, users navigating to the Friend Management page and having the option to view their whole set of friends displayed as small dots in a diagram like that shown in Figure 8. The application would autogenerate the diagram from relevant privacy tags rather than having users construct it manually. When users pointed to any of the small dots representing friends, a bubble would appear, showing that person’s details.

What’s Next?

In the future, I hope we’ll see more social networking sites picking up on the importance of giving their users privacy controls that are both comprehensive and simple. To achieve that, they could start by using the methods I’ve described in this article to give users finer control over the Objects and Actors in their access models.

I encourage leaders in social networking to start thinking now about the third aspect of privacy control—that is, control over the Actions users can perform on published Objects. Social networking applications already support Actions other than viewing in some contexts—for instance, allowing users to tag others’ Objects, comment on others’ Objects, and distribute others’ Objects to third parties.

How can we give control over who can perform such Actions to the owners of Objects without reimplementing a behemoth like the Windows file security model? Although I do not have a well-developed answer to that question, one option could be to extend privacy tags to Tag-Action pairs. So, for instance, assigning the Allow privacy tags BFFE/Edit, Friends/Distribute, and Work Colleagues/View might mean my best friend can make changes to a photo I publish, all friends can pass the photo on to others, and work colleagues can view the photo, but do nothing else with it. A feature like this would give users power not only over who could access what, but also over what others could do with their published information—all without sacrificing ease of use.
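To make the Tag-Action idea concrete, here is a minimal sketch. The tag and Action names follow the hypothetical example above; for simplicity, the sketch deliberately leaves out any implication between Actions, such as Edit implying View.

```python
# Allow privacy tags extended to (tag, action) pairs, following
# the hypothetical example above.
object_allow = {
    ("BFFE", "edit"),
    ("Friends", "distribute"),
    ("Work Colleagues", "view"),
}

def may(actor_tags: set, action: str) -> bool:
    # An Actor may perform an Action on the Object only if one of
    # the Actor's privacy tags is paired with that Action.
    return any((tag, action) in object_allow for tag in actor_tags)
```

The same Allow/Deny distinction described earlier could be layered on top by keeping a second set of Deny pairs and checking it first.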


1. Although I’ve limited my historical synopsis to the World Wide Web, it is interesting to note that many forms of user-generated content were in use prior to the Web. The early Internet fostered a rich communal life, in the form of email, IRC (Internet relay chat), and various forums such as Usenet. Furthermore, wikis and blogs have been around since the mid-1990s. (See Wikipedia’s timeline of Internet services.) Although the Web brought the Internet out of geekdom into the larger world, the initial functionality of the Web was very limited. It has taken 15 years for those early social technologies to gain their current prominence.

2. Chris Conley has raised concerns about Facebook applications’ access to users’ personal information. FaceCloak is a third-party Firefox plugin that lets users enhance privacy on Facebook.

3. Flickr. “Help / FAQ / Public / Private.” Yahoo! Retrieved December 20, 2009.

4. Picasa. “Album Privacy: Album visibility.” Google. Retrieved December 20, 2009.

5. Picasa. “Album Privacy: Manage album access.” Google. Retrieved December 20, 2009.

6. These suggestions regarding the enhancement of user control over content distribution are the intellectual property of the author, Matthew C. Clarke. Please discuss any application of these suggestions with the author prior to their implementation.

Independent UX Consultant

Sydney, New South Wales, Australia

Matthew C. Clarke’s 26 years in information technology include time in the trenches as a programmer, 10 years as a university lecturer in Australia and South Africa, technical writing, user interface design, and product management. During the past decade, Matthew worked with CorVu Corporation—later acquired by Rocket Software—as VP of Information Architecture, GM of Asia-Pacific, and Global Head of CorVu Product Development.
