About a week ago, news broke that Google had found inappropriate photos of a child in a man's Gmail account and turned the information over to the police. This week we found out Microsoft had done something similar when they found two images of child abuse on the OneDrive of a Pennsylvania man.
At a personal level, I am glad they nailed these detestable human beings but, as you can imagine, these moves have brought into the public consciousness the fact that, to do this, Google and Microsoft had to invade the privacy of these child abusers and access their private email and storage. While I am sure nobody would support the activities of these horrible people, the power these companies have to invade privacy this way has to be very troubling to the average user.
In a column I recently did about Apple's Wearable Strategy, I mentioned a vacation trip to Disney World where I used a wearable wristband with an RFID radio inside that was tied to my personal credit card. I used this wristband to open the lock to my room on the Disney property, enter the Disney parks, buy souvenirs from the Disney stores and pay for meals in Disney restaurants. The band was extremely convenient and I suggested Apple might go to school on Disney's wristbands and make personal ID a key pillar of any future wearable they bring to market.
The reason the Disney wristband worked so well for me and millions of other Disney customers is that Disney has become a highly trusted brand, one that delights those who go to their parks, see their movies and buy their goods. I did not even give a second thought to tying my wristband to my credit card and taking full advantage of the flexibility it gave me while at Disney World.
In my Apple wearable article, I pointed out that, while I expected Apple to make personal ID a pillar of their wearable strategy, I did not think this feature would be in the first generation of any wearable they bring to market. The reason is I felt Apple needed perhaps a year or two of people using a wearable from them for health and home automation in order to develop a strong level of trust in Apple before adding an ID feature to any Apple-branded wearable.
Here lies the biggest problem for the tech industry in this new era of intelligent connections. While everyone can apply security and encryption to their apps and services, I really believe the biggest battle for the hearts and minds of the user of the future will revolve around trust. And trust is earned, not appropriated.
When ATMs first came to market, the average user was actually afraid to use them. There were issues of security, privacy and trust tied to ATMs, and it took a while for the banks to earn people's trust. For the banks, ATMs were a boon for their staid business of the past — everyone had to go to a teller to get money, and lines at banks were a source of pain for banks and customers alike. Of course, once people got past the privacy and security issues, the handiness of a 24-hour cash machine eventually trumped any fears they had about ATMs. But the banks had to work extra hard to create a level of trust and protection for their customers before those customers actually embraced ATMs, which, in the end, saved the banks hundreds of millions of dollars in related costs.
The tech industry had to go through something similar when it created online transaction services. Banks were again at the heart of the first generation of these services, since they had to instill trust in customers before those customers were willing to do their banking online. Then Amazon and Apple, along with many others, helped build a new level of trust and confidence in online transactions.
But we are now entering a new era of intelligent connections: IoT, wearables and the transition of all data to the cloud. It will bring the next major challenge to the tech world in how companies secure these new connections, as well as how they create a very important level of trust between themselves and their customers. The banking industry, and even the tech companies who have garnered the trust of their customers to date, will have to work even harder once a wearable device sends data back to the company that tells it where users are, what they are doing, how healthy they are, what they like and don't like, and allows for transactions.
Stories like the ones above, where Google and Microsoft show they have access to private email and our stored data, do not help instill trust in users. These particular incidents will spur much debate about ethics and, as I stated, I am glad they caught these predators. But it does give me and many others pause that these companies have access to our most personal information and can do with it what they want. Yes, these instances involved isolated content that violated these companies' rules as well as federal laws. But they opened a can of worms in consumers' minds about what these companies know about us and will make even casual consumers more leery of what they post.
I see the issue of building trust as the biggest problem facing the many companies and industries who will be rolling out all types of IoT, wearables, data storage, new transactions, personal ID, etc. in an age of connected intelligence. If they don't nail this issue, I fear we may never fully realize the potential of this connected era, where all tech is supposed to be smart and interconnected, delivering new levels of services and efficiencies.
7 thoughts on “The Biggest Problem the Tech Industry Must Solve”
For me trust is built upon three key features: you do not deceive or mislead me up front; you are consistent over time; you do not change the relationship without telling me first.
I do not think your point applies to this particular case when it comes to a violation of privacy.
Every email service provider, whether it is Google, Microsoft, Yahoo, or Apple, scans your email to prevent spam and detect viruses. They are also required to report this kind of photo, matched against a database, to law enforcement as a way to fight child pornography, the same way they report known spam. Hence, unlike many others, I do not consider this kind of scanning activity to be a great violation of users' privacy.
There are two big differences:
The first is that scanning for spam and viruses is done on behalf of the user, plus the user knows about it beforehand and actually seeks it out as a feature. This is also something that can usually be turned off.
The second is this opens the door or paves the way for easy state monitoring and reporting of anything a government may find objectionable. While in a “benign” democracy this may only be things that the vast majority would agree should be illegal, in an oppressive regime this would include freedom of speech, dissent, and other simply “undesirable” communication. (See the Great Firewall of China, plus: North Korea, fundamentalist nations in the Middle East/etc., and now Russia’s increasingly oppressive stance against gay rights.)
There’s also a problem with false positives. Should innocent photos of infants in a bubble bath being sent to relatives get flagged? What about selfies taken by teenagers? Note that the age of consent varies widely across the world, and while a teenager of a certain age sending a selfie may be a really dumb idea, it may be legal content in one country while it’s illegal in another. See also: Facebook and their actions against photos of women breastfeeding. In fundamentalist nations even these photos could get people thrown in jail.
Similarly, should someone be arrested simply for receiving illegal content in email or via messaging services? If so, it’d be easy for malicious individuals to get a lot of people arrested simply by sending them some nasty stuff that they know will get scanned and flagged for attention by law enforcement.
Then the major media corporations can get involved, suing people for sending each other copyrighted works, or for even just sending links that facilitate the illegal downloading of copyrighted works!
Besides all this, even the scanning of email in order to target advertising is something that I personally consider a significant trust and privacy violation, which is why I will never ever use Gmail.
IIRC, the email services look at emails for JPEGs and ZIPs whose hashes match known child porn hashes. The FBI or another government organization maintains a list that email providers can voluntarily subscribe to (which almost all of them do)…
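The hash-matching the commenter describes can be sketched roughly as follows. This is a minimal illustration using a plain cryptographic hash; real services use robust perceptual hashes (such as Microsoft's PhotoDNA) that survive resizing and re-encoding, and the blocklist entry below is just the well-known SHA-256 of an empty file, used here purely for demonstration:

```python
import hashlib

# Hypothetical blocklist of known-bad file hashes. In a real system this
# would come from a law-enforcement-maintained database; this single entry
# is the SHA-256 digest of an empty byte string, used only so the example
# is self-contained and testable.
KNOWN_BAD_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def sha256_of_bytes(data: bytes) -> str:
    """Compute the hex SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def is_flagged(attachment: bytes) -> bool:
    """Return True if the attachment's hash matches the blocklist."""
    return sha256_of_bytes(attachment) in KNOWN_BAD_HASHES
```

Note the design trade-off this implies: a provider never has to "look at" the image content itself, only compare digests, but a plain cryptographic hash like the one above misses any file that has been altered by even one byte, which is why production systems rely on perceptual hashing instead.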
While the examples are one thing, the article in general is a good example of the need for trust in all the new gadgets and services we use.
Does my gym, Fitbit, or another app sell my usage data to my health insurance company?
Hey Sam,
We saw you haven't been to the gym in two weeks, but you bought steaks and a lot of beer.
Next month, we'll raise your health insurance rates.
Thanks for signing up with our insurance.
Don't forget your credit card company profiting off your shopping habits.
Our receipts only show the date and $19.97.
The retailer tells our credit card company that it was a twelve-pack, a pack of cigarettes and a honeybun.
Who can we trust?
Microsoft's Service Agreement includes a Code of Conduct that is pretty clear on this. Specifically, it states, "…we also deploy automated technologies to detect child pornography…" I know most people don't read these documents, but that is no excuse. In this case, Microsoft is clear and the information is not buried in pages of legal mumbo-jumbo.
As far as I know, Google’s terms of service does not cover this — Google seems more concerned about copyright than anything else.
Yeah, they told us, kind of. We did not opt in to this service and they definitely did not advertise it. They just started doing it one day with nothing but a small disclaimer in a 20-page user agreement. I have to agree with Mr. Bajarin that the way they are doing this does not instill trust.