Classifying Privacy Concerns

Privacy has been in the news a lot recently but, for once, not because of some egregious breach. Rather, the coverage has been driven by Apple executives’ repeated statements of the company’s commitment to user privacy and the way in which it seeks to set itself apart from its competitors on this point. I thought it would be useful, by way of background, to walk through a classification of the major privacy concerns we as consumers seem to have and how each of these is (or isn’t) relevant to the different companies that compete in this industry. I’ve done quite a bit of research into the major privacy stories covered in the news over the last few years, and most of them fall into one of these categories.

1. Sensitive personal information being exposed to other people

Description: One of the greatest fears people have is that information they consider particularly personal or sensitive will be shared with people they don’t want to see it.

Examples:

  • I’m a school teacher who also has an active personal life. But I don’t want pictures of me drinking or partying exposed to the students, their parents, or perhaps even the other staff at the school where I teach
  • I’m gay but, for the time being, have chosen only to share this information with certain people and definitely do not want this information shared with others – whether family members, colleagues at work, or neighbors
  • I’m divorced and have recently started dating again and I don’t want my ex to know anything about my new life

The list could go on, but you get the picture – this fear is about personal information being shared with other individuals (not corporations or advertisers) beyond those I’ve chosen to share it with, especially where I’ve chosen to share some of this information with specific groups or individuals but not with others.

Companies most likely to cause this concern: In general, the companies most likely to breach this particular facet of privacy are those through which users proactively share information with specific groups of people, which for the most part limits it to social networks such as Facebook, Google+, and the like. Facebook has certainly had several periods when its users were exposed in this way, often because default privacy settings were too open or because policies or settings changed without due notice to users.

The vast majority of the privacy stories concerning Facebook over the last several years have been in this category, with relatively few other companies affected in quite the same way, at least not frequently. However, Google has occasionally been guilty too, as when its Buzz service first launched a few years back.

2. Personal information being “read” by computers

Description: We fear our personal information is being “seen” or “read”, not by other human beings, but by computers used by companies to personalize services, to serve advertising, and so on.

Examples:

  • My email provider has computers which view the contents of my emails to filter them into appropriate categories
  • My search provider sees all the searches I enter, and which results I click on, and slowly builds a profile of which search results are likely to be most relevant to me
  • My photo service performs machine analysis of my pictures to make them searchable.

In this case, the fear isn’t that human beings are seeing the personal information we’re sharing (though sometimes misunderstandings do occur on this point, or there may be skepticism that human beings really can’t see this information if they want to), but a vague sense of creepiness that machines are delving into some very personal information.
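
To make this concrete, here is a deliberately toy sketch of the kind of automated categorization described in the first example above. The categories, keywords, and function names are invented for illustration – real providers use far more sophisticated machine learning – but the essential point is the same: the “reading” is done by code against rules or models, and no human ever looks at the message.

    # Toy sketch (illustrative only): keyword-based email categorization.
    # The categories and keywords here are made up for the example.
    CATEGORY_KEYWORDS = {
        "Promotions": ["sale", "discount", "coupon", "% off"],
        "Travel": ["itinerary", "boarding pass", "reservation", "check-in"],
        "Finance": ["statement", "invoice", "payment due", "receipt"],
    }

    def categorize(email_body: str) -> str:
        """Assign the message to the first category whose keywords it contains."""
        body = email_body.lower()
        for category, keywords in CATEGORY_KEYWORDS.items():
            if any(keyword in body for keyword in keywords):
                return category
        return "Primary"  # default bucket when nothing matches

    if __name__ == "__main__":
        # The machine "reads" the message; no person ever sees it.
        print(categorize("Your boarding pass and itinerary are attached."))  # -> Travel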

Companies most likely to cause this concern: On this point, it’s hard even to come up with examples that don’t sound like they’re talking about Google, which feels like the ultimate symbol of this kind of computer snooping. There’s no true breach of privacy here from a human perspective, but these types of services can create a vague sense of unease among at least some users.

3. Fear of one company knowing too much about us

Description: We fear that, even though many services may collect personal information about us, more and more of this information seems to be consolidating with just one or two companies, which are coming to “know” an awful lot about us.

Examples:

  • My email, calendar, contacts, photos, search history, and so on are all hosted by a single online service provider
  • My call records, email, calendar, contacts, phone search history, text messages, music, and books are all on my phone
  • The vast majority of my news and video consumption, most of my social connections, my interests, and my political views are all known by the social network I use.

In this case, some users may be genuinely uncomfortable about this enormous amount of knowledge held by a single company — a worry in its own right — which fits to some extent in the same category of vague unease as the previous concern on this list. However, in other cases, it may be a factor in other worries listed below.

Companies most likely to cause this concern: As a broad concern, this issue could affect any one of a number of companies, from Google to Apple to Facebook to Microsoft to Samsung. Any company which either provides a very broad range of services or provides smartphones and other devices is at least potentially in a position to “know” an enormous amount about its users. However, much depends on how data is collected, stored, and used. This is one area where Apple seeks to set itself apart from competitors, by focusing on its tendency to keep personal information on the device itself rather than on cloud servers. However, even then, there is potential for some exposure of this data, as discussed below. Companies which gather and store this data for the explicit purpose of building profiles of their users for purposes other than personalizing their services may also foster some of the other concerns listed. Google, in particular, has seen a number of stories about this aspect of its business, and especially about its decision a couple of years ago to unify its logins and data across all its services, over which several European jurisdictions are still pursuing legal action.

4. Fear of data being sold to advertisers

Description: We fear that the companies whose services we use not only collect lots of data about us (see 2 and 3 above), but also sell this data in some form to advertisers.

Examples:

  • My search provider uses information from previous searches to allow advertisers to reach me when I make future searches
  • My smartphone vendor uses broad profile information about me to provide targeted advertising from companies who want to reach people like me
  • My social network uses information about my interests which I have provided explicitly and information gathered through my other actions on the service to serve up ads which seek to reach people with my demographics and interests

The reality is that few of the companies we’re talking about here really do “sell” data to advertisers. What they sell to advertisers is the ability to target their advertising at users based on their interests (whether explicit or implicit) and/or their demographics. The data itself is not shared with the advertisers, except perhaps in aggregated form, for example as an indication of the size of a target market. There are companies that do sell this kind of information, but they exist outside the world of consumer technology providers.
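
As a purely hypothetical sketch of that distinction, the snippet below imagines an ad platform matching campaigns to interest profiles internally. The class names, fields, and matching logic are all invented for illustration and do not describe any particular company’s systems; the point is simply that the matching happens inside the platform, while the advertiser only ever receives an aggregate count.

    # Hypothetical sketch: interest-based targeting without handing data to advertisers.
    # Every name and structure below is invented purely for illustration.
    from dataclasses import dataclass, field
    from typing import List, Optional, Set

    @dataclass
    class UserProfile:
        user_id: str
        age: int
        interests: Set[str] = field(default_factory=set)

    @dataclass
    class Campaign:
        advertiser: str
        target_interests: Set[str]
        min_age: int = 18

    def select_ad(profile: UserProfile, campaigns: List[Campaign]) -> Optional[Campaign]:
        """Runs inside the platform: pick a campaign whose criteria this profile satisfies."""
        for campaign in campaigns:
            if profile.age >= campaign.min_age and campaign.target_interests & profile.interests:
                return campaign
        return None

    def audience_size(target_interests: Set[str], profiles: List[UserProfile]) -> int:
        """All the advertiser gets back: an aggregate count, never the profiles themselves."""
        return sum(1 for p in profiles if target_interests & p.interests)

In this invented arrangement, the profiles select_ad works with never leave the platform, and audience_size returns nothing more granular than a number – which is the sense in which data is “used” for targeting rather than “sold”.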

Companies most likely to cause this concern: This is a tricky one to define, because these companies don’t technically sell the information to advertisers. However, the very act of allowing advertisers to target users causes the same unease among some users as some of the other items I’ve described. There’s no breach of personal information per se, just as with 2 and 3, and unlike number 1 on our list. But there’s a sense our privacy is being invaded because advertisers are being allowed to reach us based on the profiles our providers have built up about us. This is obviously particularly true for companies which are heavily dependent on advertising business models, such as Google and Facebook, but it also applies, in a narrower way, to companies like Apple which have advertising products like iAd that allow for targeted advertising.

5. Fear of an accidental breach of security

Description: We fear that, because service providers and device vendors collect the information described in the various points above, there is always the potential that this information will be shared with third parties through no deliberate action on our part or on the part of the provider or vendor.

Examples:

  • My social network provider is hacked, exposing my personal information
  • There is a bug in the privacy settings on the online service I use which allows people I have no connection with to see personal information I store in the service
  • My device collects information about me which should be private but can be exposed through a loophole in the security settings

In none of these cases did the provider deliberately share information with anyone else, but in some cases the argument can be made that the provider should have done more to protect sensitive data, whether by ensuring its software was free of bugs in the most important security areas or by protecting it against malicious attacks.

Companies most likely to cause this concern: All companies are to some extent vulnerable to these issues, but those that collect the most data (even if for entirely legitimate purposes) have the most at risk if there is a breach. Google, Facebook, Apple, and others have all been the subject of stories along these lines over the last few years, whether as a result of bugs, hacking or other factors (such as rogue employees). These stories often say more about the desire of malefactors to access valued information than they do about security policies but, in some cases, they reveal shortcomings in company security that can build into a narrative over time (Apple has seemed at risk of this outcome at various times).

We’re not all the same

There are undoubtedly other facets of privacy concerns that aren’t completely captured here, but the vast majority of the concerns we have, and of the headlines about privacy issues, tend to revolve around one or more of those outlined above. The reality is we’re all different – each of us has a different tolerance for these different categories of privacy risk. Some of us care deeply about all of them, and are inherently distrustful of many service providers and device vendors for this reason. Others care only about some – likely 1 and 5 (or possibly just 1) but not the rest. And many more are in the middle, perhaps most concerned about a couple but also somewhat uneasy about the others. There’s likely a segmentation in any given population that could apportion users among these different groups, with the size of the various segments differing by country and culture. For example, users in China and other oppressive regimes, and users with particularly sensitive personal information, are likely to be more cautious about privacy than those who live in relatively free societies and those who are able to show their full personalities openly without fear.

As Apple and other companies seek to stake out privacy positions, they need to be aware of these different classifications and the various segments that exist. Apple’s remarks about privacy are likely to resonate strongly with certain segments and bounce right off others, while Google’s stance is likely to turn off some users while attracting others. Each of these companies needs to bear this in mind to ensure it doesn’t risk alienating users who might otherwise be attracted to its products and services.

Published by

Jan Dawson

Jan Dawson is Founder and Chief Analyst at Jackdaw Research, a technology research and consulting firm focused on consumer technology. During his sixteen years as a technology analyst, Jan has covered everything from DSL to LTE, and from policy and regulation to smartphones and tablets. As such, he brings a unique perspective to the consumer technology space, pulling together insights on communications and content services, device hardware and software, and online services to provide big-picture market analysis and strategic advice to his clients. Jan has worked with many of the world’s largest operators, device and infrastructure vendors, online service providers and others to shape their strategies and help them understand the market. Prior to founding Jackdaw, Jan worked at Ovum for a number of years, most recently as Chief Telecoms Analyst, responsible for Ovum’s telecoms research agenda globally.

28 thoughts on “Classifying Privacy Concerns”

  1. This classification you have done is a great idea, and indeed, since the word “analysis” itself means to break up and understand, I’m surprised that more experts are not doing the same.

    Having said that though, I think there are some important points that have been missed. In particular, in item 1), you mention that a huge fear is that your private information is shared with people you don’t want it shared with. However, you seem to imply that in items 2) to 4), that will not happen. That is, the information will remain in the hands of companies, perhaps in anonymized form, and will not be shown to anybody around you.

    This is not true. In fact, it was only ever true because traditional advertising was dumb. With the huge abuse of retargeting ads, and some Facebook ads, the ads on your computer screen can tell any casual bystander some very private stuff about you.

    Facebook ads, for example, can give away your education, your exact age, etc. I have seen recruiting ads in Japan that are targeted to alumni of specific universities, and anybody who sees those ads can know which one I graduated from. I have also seen ads for drinks that are supposed to invigorate middle-aged men, and they have exact ages in the ad copy.

    Retargeting ads are significantly worse. I’ve seen ads that display the exact lodges I was looking at when I was researching a holiday online. What if I was researching a surprise present for my wife on our family iPad? What if she used it the next day and suddenly found all these retargeting ads for the exact presents I was thinking of? What if I was researching the right diet and exercise regimen for a diabetic condition, and my co-worker, from whom I was hiding my health issues, discovered some retargeting ads on my office computer?

    The reality is that, as more and more people realise that ads on the web can reveal very private information about them, items 2) to 4) will become a much larger issue, potentially larger than 1).

    1. I’m not sure I agree with this – you seem to assume that my seeing an ad somehow means someone knows something specifically about me, but that isn’t the case. An advertiser merely knows that its ad is being seen by someone who fits the criteria the advertiser set with the host of the advertisement. The closest things could come to the scenario you describe is when the criteria are so narrow that you’re effectively targeting one individual, but unless I actually engage with the advertiser in some way, they never get any direct visibility over me in the process.

      I think the retargeting stuff you mention is much more plausible, however.

      1. As you mentioned in item 1), the worst case is not that the advertiser knows something specifically about me. The worst case is when somebody I directly interact with gets to know something that I would rather hide from them – for example, when my co-workers know something bad about my health, etc. (of course, in many cases the company should be notified, but not necessarily your co-workers).

        That can happen if they accidentally see the ads that were targeted at you. And the criteria are, in the case of Facebook at least and of course for retargeting, often too narrow for my comfort.

      2. I’ll try to go a bit more into specific examples. Unfortunately, I purge my cookies often because of the exact concerns that I describe so I can’t give you screenshots etc.

        1. I was recently researching some job opportunities (after clicking an ad on Facebook), and now Facebook always puts ads from this company on my timeline. Anybody who might see my timeline can easily speculate that I might be looking for a job.

        2. I often get Google retargeting ads from a company that helps to manage our company servers. Now if I ever browse the web in front of my clients, I risk exposing our infrastructure information. I would not be surprised if companies sensitive about this kind of thing started to make the use of adblockers mandatory for sales reps who do demos.

        Some other examples I touched on in my previous comment, which expose my age and the university I graduated from, are also among those I would prefer to block.

      3. Sorry about posting so many comments, but I’d like to point out one thing that might be central to our disagreement.

        In your article and in your comment, you emphasise that an advertiser does not know who you are exactly, but only knows about you as an anonymous profile against which they can advertise. I agree that is generally true. In this sense, neither are advertisers intruding on your privacy per se, nor is Google selling your private information.

        However, when ads that might be viewed by people who know you personally are displayed on your computer screen, those people *can* make the final association. They *can* know which websites you visited, your age, and anything in your anonymous profile that advertisers can target against.

        Therefore, although there is not a direct intrusion of your privacy per se by the advertisers or Google, these companies can indirectly expose your private information (or your company’s information) to people around you, and damage you as a result.

        This is the point that I want to stress. Whether or not there is a direct violation of privacy is not necessarily the issue. Significant damage can be done indirectly, and that is why items 2) to 4) are more problematic than you suggest.

  2. I think there are several dimensions to the issue. So that doesn’t make for pretty graphs… this will be an uphill battle ^^
    0- As a preamble, online is really only a facilitator. Credit card companies, magazine publishers, shops… have been sharing/selling info on their customers forever, offline
    1- Voluntary/explicit sharing vs involuntary/implicit. Opening the Uber app and pushing the “Have someone pick me up here” button, vs being tracked all day long in the background by Facebook Messenger. Opens the door to confusion too: whenever I buy a one-off thing for a friend, I’m bombarded with related ads…
    2- Public vs sensitive. I don’t mind anyone reading my Disqus posts (well, most of them ^^). I do mind anyone reading not only my banking password, but my banking statements.
    3- Stuff I’m not even sharing, just storing online. Maybe sharing with specific private individuals.
    4- Accidental/negligent sharing. Whether on my end or on a provider’s end. I once typed my World of Warcraft password in the WoW chat. ebay, Amazon, and a handful of others have sent me notices over the years that info/passwords got leaked…

    Plus there’s the compounding issue that info I specifically share w/ someone can then be shared on. I’m OK with my bank having the detail of my credit card purchases (that falls into the “necessary evil” category I guess). I’m more uneasy with them selling that info on to whomever.

  3. SQL causes the problem with privacy! It’s the real cause!

    SQL structures data externally, through queries.

    For the past 70 years SQL (Structured Query Language, a generic name for whatever IBM has done, because everybody used the same ideology under different names) has dominated the search for electronic information. It is external to the data technology, and it helps to distill patterns and doubtful statistics based on external queries.
    SQL technology emanates from the External Relations theory of Analytic Philosophy: students of Moore, Russell and Wittgenstein established IBM and everybody else followed their path.

    However, there is the Internal Relations theory, which is based on Bradley, Poincare and my ideas. In this theory, patterns and statistics are found within structured data. I discovered and patented how to structure any data: language has its own Internal parsing, indexing and statistics. For instance, there are two sentences:

    a) ‘Sam!’
    b) ‘A loud ringing of one of the bells was followed by the appearance of a smart chambermaid in the upper sleeping gallery, who, after tapping at one of the doors, and receiving a request from within, called over the balustrades -‘Sam!’.’

    Evidently, ‘Sam’ has a different importance in the two sentences, given the extra information in each. This distinction is reflected in the weights of the phrases which contain ‘Sam’: the first has 1, the second 0.08; the greater weight signifies stronger emotional ‘acuteness’.
    First you parse, obtaining phrases from clauses and restoring omitted words, for sentences and paragraphs.
    Next, you calculate the Internal statistics, the weights, where a weight refers to the frequency with which a phrase occurs in relation to other phrases.
    After that the data is indexed against a common dictionary, like Webster’s, and annotated with subtexts.
    This is a small sample of the structured data:
    this – signify – : 333333
    both – are – once : 333333
    confusion – signify – : 333321
    speaking – done – once : 333112
    speaking – was – both : 333109
    place – is – in : 250000
    To see the validity of the technology, pick any sentence.

    Do you have a pencil? The numbers on the right are the internal weights, for which Microsoft, Google, the NSA and everybody on the Internet spy!

    My technology makes the spying unnecessary: with structured information, search works for people who remain passive and invisible on the Internet – 101% privacy!

    Personal profiles of structured data have no value for the NSA or anybody else: they cannot be read or understood in any way – see the sample above.

