In the past few weeks, we were again reminded of the privacy paradox. Privacy, as a concept, sounds good, and people will always say it matters. Yet their behavior often contradicts their statements because, for many, privacy is not a core principle or decision-guiding conviction. There is a subset of people for whom it is, but for the majority of consumers, it is not.
FaceApp and the Privacy Paradox
I wrote about FaceApp when it went viral a few years ago, putting it into the bucket of augmented reality. The app seems to have gone even more viral as of yesterday, with people all over social media showing pictures from the app's feature that uses machine learning to estimate how you will look when you are old. I won't go into why what they are doing is more like a parlor trick, but perhaps that is for a different article.
By the end of the day yesterday, I had personally seen the vast majority of friends on Instagram and Facebook post pictures of themselves as older people using this FaceApp feature. This happened fast and went viral, and a lot of people put their privacy at risk without even thinking about it. This is the privacy paradox.
The vast majority of the market will so easily compromise their privacy for social show, that is, to join in on the trend and show it off on social media. Yes, the feature was fun and got a few laughs, but I'm sure the question of privacy was never raised by many. I wonder how many people would have at least paused before using FaceApp if they had read the following excerpt from FaceApp's privacy policy.
You grant FaceApp a perpetual, irrevocable, nonexclusive, royalty-free, worldwide, fully-paid, transferable sub-licensable license to use, reproduce, modify, adapt, publish, translate, create derivative works from, distribute, publicly perform and display your User Content and any name, username or likeness provided in connection with your User Content in all media formats and channels now known or later developed, without compensation to you. When you post or otherwise share User Content on or through our Services, you understand that your User Content and any associated information (such as your [username], location or profile photo) will be visible to the public.
You grant FaceApp consent to use the User Content, regardless of whether it includes an individual’s name, likeness, voice, or persona, sufficient to indicate the individual’s identity. By using the Services, you agree that the User Content may be used for commercial purposes. You further acknowledge that FaceApp’s use of the User Content for commercial purposes will not result in any injury to you or to any person you authorized to act on its behalf. You acknowledge that some of the Services are supported by advertising revenue and may display advertisements and promotions, and you hereby agree that FaceApp may place such advertising and promotions on the Services or on, about, or in conjunction with your User Content. The manner, mode, and extent of such advertising and promotions are subject to change without specific notice to you. You acknowledge that we may not always identify paid services, sponsored content, or commercial communications as such.
It's actually not a bad privacy policy because it is so clear on what you agree to. I've seen so many that are quite vague because they don't really want you to know what they are up to. FaceApp's is pretty clear: they are taking your data and your images and doing whatever they want with them.
As I said, had every person who got the app just to try the age filter (and then likely never use it again, though not delete it) read this part of the privacy terms, I wonder how many would have still proceeded. I had the opportunity to ask a few friends and their families who were with me the last few days, and when they looked at the privacy policy they had several responses. One of my well-intentioned friends read the policy and concluded that it was ok to use the app if he did not post the image to social media. Several others seemed to agree, while two of my other friends, both lawyers, said they would not use the app. Once I explained that the image they took of themselves was being sent to FaceApp's servers to process the filter, and that the image falls under the "they do what they want with it" part of the privacy policy, they all agreed that, with that information, they would not use the app.
So, a small sample, but one that demonstrates how even a clearly written privacy policy can still be misunderstood and misinterpreted.
Platform Owners Can Do More
The question then turns to what software platform owners can do to continue to give consumers all the information they need to make a decision. I understand this is a fine balance: how much information is too much to handle, and do you run the risk that no one downloads apps anymore if this gets too complicated? However, Apple, in particular, has been architecting their platform with security in mind, and even notifies consumers when an app wants to use their location, informing them of what that means so they can make an informed decision. I wonder if something like this also needs to apply to apps that use our photos. In particular, an app like FaceApp asks for access to my camera roll, a reasonable request in case I want to use a previously taken picture.
Matt Panzarino from TechCrunch brings this up in a recent post.
One thing that FaceApp does do, however, is it uploads your photo to the cloud for processing. It does not do on-device processing like Apple’s first-party app does, and like it enables for third parties through its ML libraries and routines. This is not made clear to the user.
I have asked FaceApp why they don’t alert the user that the photo is processed in the cloud. I’ve also asked them whether they retain the photos.
Given how many screenshots people take of sensitive information like banking and whatnot, photo access is a bigger security risk than ever these days. With a scraper and optical character recognition tech, you could automatically turn up a huge amount of info way beyond ‘photos of people.’
So, overall, I think it is important that we think carefully about the safeguards put in place to protect photo archives and the motives and methods of the apps we give access to.
I'm not sure if it's possible within iOS for an app that has access to my camera roll to upload all of my photos to a server in the background, but Matt's point about using ML to scrape for banking information, or anything else I may have taken a screenshot or picture of, is a great one.
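To make that risk concrete, here is a minimal Python sketch of what such a scraper could do once OCR (not shown here) has turned a batch of screenshots into plain text: scan for 13-16 digit sequences and keep only those that pass the standard Luhn checksum used by payment cards. The pattern and function names are my own illustration, not any real scraper's code.

```python
import re

def luhn_valid(number: str) -> bool:
    """Standard Luhn checksum used to validate payment card numbers."""
    total = 0
    # Walk the digits from the right; double every second one,
    # subtracting 9 when the doubled value exceeds 9.
    for i, ch in enumerate(reversed(number)):
        d = int(ch)
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# Loose pattern for 13-16 digit runs, allowing spaces or dashes between groups.
CARD_PATTERN = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def find_card_numbers(ocr_text: str) -> list:
    """Return digit strings found in OCR'd text that pass the Luhn check."""
    hits = []
    for match in CARD_PATTERN.finditer(ocr_text):
        digits = re.sub(r"[ -]", "", match.group())
        if 13 <= len(digits) <= 16 and luhn_valid(digits):
            hits.append(digits)
    return hits
```

Run over thousands of screenshots, a filter this crude would already separate likely card numbers from ordinary digit strings, which is exactly why broad photo access deserves more scrutiny than it gets.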
Honestly, the camera roll should be as sacred as location and treated as such by the operating system. I agree 100% with Matt that more safeguards need to be in place around the camera roll, and I'll look to Apple to lead here and hopefully start to address how to better inform their customers about the privacy risks to their photos when trying an app like FaceApp.