In her previous post Libby asks whether you might see NoTube’s Beancounter, which lets you “discover what you watch and listen to – and the overall categories of things that you like”, as an invasion of your privacy.
During the course of the NoTube project we will be investigating ways of supporting users in understanding and managing privacy issues when data about them from various sources is merged. Many people have become used to being quite open with their data on sites such as Facebook, but might not realise the implications of this openness when these “walled gardens” are opened up. Expressing this to people without scaring them is very hard.
With the user’s permission Beancounter collects, stores and analyses attention data from various sources (such as “Libby watched BBC Question Time on BBC1”), enhancing this data by linking it to other data. It then produces a machine-readable profile of your interests and contacts, which can be used to generate personalised content recommendations. The idea is to combine fragmented data around the Web to allow the user to make use of it.
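To make the idea concrete, here is a minimal sketch of how attention data might be aggregated into an interest profile. The event structure, field names and categories are invented for illustration; they are not the actual Beancounter data model.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class AttentionEvent:
    # One piece of attention data, e.g. "Libby watched BBC Question Time on BBC1".
    # All fields here are hypothetical, not the real Beancounter schema.
    user: str
    action: str        # e.g. "watched", "listened to"
    item: str
    source: str        # originating service, e.g. "bbc", "lastfm"
    categories: tuple  # categories linked in from other data sources

def build_profile(events):
    """Aggregate attention events from different sources into a simple
    interest profile: a count of how often each linked category appears."""
    profile = Counter()
    for event in events:
        for category in event.categories:
            profile[category] += 1
    return profile

events = [
    AttentionEvent("libby", "watched", "BBC Question Time", "bbc",
                   ("politics", "current affairs")),
    AttentionEvent("libby", "watched", "Newsnight", "bbc",
                   ("politics", "news")),
]
print(build_profile(events).most_common(1))  # → [('politics', 2)]
```

Even this toy version shows the point: the profile is new data that exists in none of the original sources, which is what makes the privacy questions below interesting.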
So far we have identified four key aspects of the Beancounter that pose interesting new research questions relating to privacy.
- Aggregation of users’ previously unconnected data: Can existing privacy policies from the original sources of the attention data (applications such as Twitter, for example) be carried over to the new aggregated data set? Can new privacy policies be derived from the existing ones, or must the originals be altered to fit the aggregated data?
- Enrichment of users’ data by linking it to other publicly available data: How can we best maintain user awareness of the potential privacy risks when creating new links between data? How should we guide the user in defining privacy strategies for such enriched data?
- Statistics: How do we help users manage the sharing of their personal data analytics, given that the statistics might reveal things they would prefer to keep confidential? How do we best present the statistics so that users understand the added value for future recommendations?
- Control: How can the user maintain control over the use of their data, restricting how it can be reused, in commercial and non-commercial scenarios?
These questions are not just about NoTube Beancounters but about any application that merges or connects public data – or, in the more difficult case, merges private and public data and re-broadcasts it. Data aggregation can impact directly on the user. Consider a calendar-sharing application in which you can see your friends’ private calendars and which merges them into a composite social calendar for you. If you were then to make your composite calendar public, you would probably be broadcasting personal data about your friends’ locations and activities that they wouldn’t wish to be publicly known; and worse, by linking the data you might derive a new, unwelcome and private piece of knowledge. Here’s a jokey but thought-provoking example from Twitter a couple of days ago:
“Both @jonronson @AIannucci claiming to be in Oslo at the same time. Hope this extra-marital affair goes well, boys.” link
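The kind of inference joked about above can be sketched as a tiny merge over calendar data. The names, dates and places below are invented for illustration; the point is that the co-location fact appears in no single source, only in the merged view.

```python
from collections import defaultdict

def find_colocations(calendars):
    """calendars maps each person to a list of (date, place) entries.
    Returns (date, place, people) triples where two or more people's
    entries coincide -- new knowledge created purely by linking data."""
    together = defaultdict(set)
    for person, entries in calendars.items():
        for date, place in entries:
            together[(date, place)].add(person)
    return {key + (frozenset(people),)
            for key, people in together.items() if len(people) > 1}

# Hypothetical private calendars; neither entry is sensitive on its own.
calendars = {
    "alice": [("2009-10-12", "Oslo")],
    "bob":   [("2009-10-12", "Oslo"), ("2009-10-13", "London")],
}
# The merged view reveals that both were in Oslo on the same day.
print(find_colocations(calendars))
```

A composite calendar application effectively runs this merge for every viewer, which is why re-publishing its output can leak facts no individual source chose to disclose.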
Or consider a case directly relevant to TV: someone connects their TV to their private stash of illegally downloaded movies and also to Twitter, and Twitter then broadcasts the fact that they watched something they could not legally have watched, with precise time and device information. A new link has been created between the person, the movie and the time, which provides evidence that they have done something illegal.
Assessing and managing privacy in everyday life is an intuitive process. As the sociologist Erving Goffman has described, the front you present is not only tailored to the pertinent audience but also to the context (for example, whether you are at work or at home) and it determines the amount of information you are willing to disclose to the audience. Many of our intuitions are not applicable to new cases where our data is combined and re-broadcast, even if we have control over it, and particularly when we do not have a good understanding of the capabilities of software and the potential consequences of using it. If we are to venture into these kinds of areas, we need to help people develop good intuitions about their privacy when using these services. This is difficult:
- People find it very difficult to think about privacy in the abstract, and perceptions of privacy vary across nations and cultures. For example, people in India are much more comfortable giving out personal details on social networks than people in America. (Source: Synovate 2008, Social Network Users)
- People systematically underestimate privacy risks online: many users never customise their privacy settings at all but just stick with the defaults, as confirmed in a recent UK Office of Communications survey.
- Reassuring people about online privacy tends to make them more, not less, concerned. A series of experiments conducted at Carnegie Mellon University showed that people who were reminded about privacy were less likely to reveal personal information than those who were not.
- Privacy settings can have numerous and complex consequences, and it is difficult to predict all of them. MIT’s Gaydar project is a recent example of how personal information can be shared inadvertently.
As we test and develop user experience solutions to these challenges over the coming months, we’ll report back on our findings here.