The Flaw in Tech Companies’ AI Strategy

There is a lot of talk about artificial intelligence; sadly, not a lot of substance. We are in such early days of AI that I prefer to talk about what is happening in machine learning, since that is the stage of the AI future we are actually in. We are currently trying to teach computers to see, hear, learn, and more. Right now, that is the focus of every major effort that will someday be considered AI.

When I think about how tech companies will progress in this area, I think about it from the standpoint of what data they have access to. Data is the foundation of machine learning and thus the foundation for the future of AI. I fully expect many companies to turn data into intelligence and use what they have collected to teach machines. There may very well be a plethora of specialized artificial intelligence engines for things like commerce, banking, oil and gas, astrology, science, etc., but the real question in my mind is who is in the best position to develop a specialized AI assistant tuned to me.

While several of the tech companies I’m going to mention may not be focused on personal AI, I’m going to make some points through the lens of the goal of personal AI vs. a general purpose AI. The question is, who is developing Tony Stark’s version of Jarvis for individual consumers: the ultimate computing assistant, designed to learn, adapt, and augment all of our weaknesses as humans and bring new levels of computational capability to its user?

With the assumption that Facebook, Amazon, Google, Microsoft, and Apple are trying to build highly personalized agents, I want to look at the flaws and challenges each of them faces in the present day.

Facebook no doubt wants to be the personal assistant at all levels for consumers. However, like all the companies I’m going to mention, they have a data problem. This problem does not mean they don’t have a lot of data — quite the contrary. Facebook has a tremendous amount of data. However, they have a lot of the wrong kind of data to deliver a highly personalized artificial assistant for every area of your life.

Facebook’s dilemma is they see only the person the consumer wants them to see. The data shared on Facebook by a user is often not the full picture of that person. It is often a facade or a highly curated version of one’s self. You present yourself on Facebook the way you want to be perceived and do not share all the deep personal issues, preferences, problems or truly intimate aspects of your life. Facebook sees and is learning about the facade and not the true person behind the highly curated image presented on Facebook.

We share with Facebook only what we want others to see and that means Facebook is only seeing part of us and not the whole picture. Certainly not the kind of data that helps create a truly personalized AI agent.

I remain convinced Amazon is one of the more serious players in the AI field and potentially in a strong position to compete for the job of being my personal assistant. Amazon’s challenge is that it is commonly a shared service. More often than not, people share an Amazon Prime account, or an Amazon account in general, across their family. So Amazon sees a great deal of my family’s commerce data. However, it has no idea if it is me or my wife or my kids making the transaction. This is often made blatantly clear to me as I’m surfing Facebook or some other site that is an Amazon affiliate and I see all the personal hygiene and cosmetic ads for items my wife has searched for on Amazon. Nothing like killing time on Facebook and seeing ads for Snail and Bee facial masks presented to me in every way possible.

While Amazon, with its Alexa assistant, is competing to be the AI agent in my life, it has no way to distinguish me from the other people who share my Amazon account. That makes it very hard for Amazon to build a personalized agent just for me: it observes and learns from the vast data set of my shopping experience but does not know what I’m shopping for versus what my family is shopping for. The shared nature of the data Amazon collects makes it hard for them to truly compete for the personal AI. However, it does put them in a good position to compete for the family or group AI rather than the individual one.

Google is an interesting one. Billions of people use Google’s search engine every day, but the key question remains: how much can you learn about a person from their search queries? You can certainly get a glimpse into a person’s context and interests at any given moment from a query and, if you keep building a profile of that person from their searches, then over time it is certainly possible to get a surface-level understanding. But I’m not sure you can know a person intimately from their searches.

No doubt, Google is building a knowledge profile of its users from more than just their search queries as they use more of Google’s services: places you go if you use Maps, conversations you have if you use their messaging apps and email, etc. The more Google services you use, the more Google can know and learn about you. The challenge is that many consumers do not fully and extensively use all of Google’s services. So Google, too, is seeing only a partial portrait of a person and not the entirety necessary to develop a truly personal and intimate AI agent.

Microsoft is in an interesting position because they, like Google and Apple, own an operating system hundreds of millions of people use on a daily basis for hours on end. However, I would argue the position Microsoft is in is to learn about your work self, not so much your personal self. Because they are only relevant, from an OS and machine learning standpoint, on the desktop and laptop, they are stuck learning mostly (and in many cases only) about your work self. Indeed, this is incredibly valuable in itself, and Microsoft is in a position to develop an AI designed to help you be productive and get more work done in an efficient manner. The challenge for Microsoft is to learn more about the personal side of one’s life when all they will see and learn from is the work side.

Lastly, we turn to Apple. On paper, Apple is in one of the best positions to develop an agent like Siri that fully knows all the intimate dynamics of those who use Apple devices. Unlike Google, it is more common for consumers to use the full suite of Apple’s services, from Maps, to email, to cloud storage and sync, to music, to photos, etc. However, Apple’s decision to champion consumer privacy has put them in a position of willingly and purposely collecting less data rather than more.

If data is the backbone of a useful AI agent designed to know you and help you in all circumstances of your life, then the more it knows about you the better. Yet, in order to err on the side of privacy, Apple seems to want to collect as little data as possible, and to anonymize the data it does collect so it never truly knows it’s you.

I have no problem with these goals but I am worried Apple’s stance puts them in a compromised position to truly get the data they need to make better products and services.

In each of these cases, the main tech companies have flaws in their grand AI strategy. We certainly have many years until AI becomes a reality, but the way I’m analyzing the potential winners and losers today is on the basis of the data they have on their customers with which to build a true personal assistant that adds value in every corner of your life. While many companies are well positioned, there remain significant holes in their strategies.

Published by

Ben Bajarin

Ben Bajarin is a Principal Analyst and the head of primary research at Creative Strategies, Inc., an industry analysis, market intelligence and research firm located in Silicon Valley. His primary focus is consumer technology and market trend research, and he is responsible for studying over 30 countries.

21 thoughts on “The Flaw in Tech Companies’ AI Strategy”

  1. My questions and doubts are a bit upstream from that.

    You say “The ultimate computing assistant designed to learn, adapt, and augment all of our weaknesses as humans and bring new levels of computational capabilities to the forefront for its user.” I find that concept *very* fuzzy. I might be lacking creativity/imagination, but shouldn’t AIs start with solving specific problems? Killer apps for AIs?

    Also, should an AI be reactive or proactive? Do I want Jarvis to be always available, or to interrupt me with reminders, suggestions, quips, info… “Hey, it’s been 3 days since you called your mom!”; “Madeleine Peyroux has a new album, should I order it?”

    Plus I can’t help but think AIs are being unnecessarily sneaky. Once their usefulness is established, I’d be glad to directly state what I want help with. I’m waiting on a sale to buy a Lenovo Yoga Tab 3 Plus, Civilization VI; I’m waiting on a 650/652 version of the Xiaomi Redmi Note 4, on either the LG G5 or GS7 to hit $350; on an A72 Android desktop from a sufficiently serious OEM… just let me know where I put that info for “some-i” to take care of it? Currently I’ve got 5 pinned browser tabs just to track that.

    1. I agree.

      In the article, Ben says:

      There may very well be a plethora of specialized artificial intelligence engines for things like commerce, banking, oil and gas, astrology, science, etc., but the real question in my mind is who is in the best position to develop a specialized AI assistant tuned to me.

      I would actually turn this around and ask which kind of problems people would actually want to use AI for? Personally, I would very much want an AI that took care of my finances, but I’m not sure that I would want a Jarvis, at least not everyday.

      In fact, the way I view Siri, for example, is not so much as a personal assistant but as a convenient voice-activated macro system for sending commands to my phone. I don’t need it to be really intelligent or to know a lot of things about me. It just has to understand my commands better.

      1. Indeed, is “voice as a UI” really an assistant? I mean, it uses AI to understand both what we say (words, context, meaning) and how to handle it, but it’s utterly reactive and non-creative. It’s intelligence in the etymologically original “it understands” sense, but not in the new “it does stuff on its own” sense.

        Edit: there’s probably a continuum from “it transcribes the words I enunciate” to “as useful and proactive as a human personal assistant”, with your macro case in-between but towards the “simple”.

  2. Interesting how the whole question of why anyone would want to relinquish their critical thinking skills to a machine never comes up….

    1. You’ve exposed a MAJOR assumption. Sadly though, too many have given up their critical thinking skills to slogans, celebrities, and swamp drainers.

        You sound as if the surrender of one’s critical faculties is an information age phenomenon. I think the proportion of people who don’t think critically has remained basically the same since human history began. What has increased with the advent of mass media and the information age are the opportunities (via tweets, blogs, videos, comments, etc.) for a person to reveal his or her capacity for critical thought.

        1. Of course you’re right. We’re both right…
          Enabling technologies make for more effective people. Problem is, idiots become more effective too! 😉

  3. “[Microsoft] is in a position to develop an AI designed to help you be productive and get more work done in an efficient manner.”

    And when they have it, they will personify it as an annoying paperclip and no one will ever use it.

    “So Google is also seeing only a partial portrait of a person and not the entirety which is necessary to develop a truly personal and intimate AI agent.”

    So, you want to hand a megacorporation your entire life in order that they may make you a personal Jarvis. Not satisfied with the degree of privacy invasion that we all put up with every day already, you think it would be a good thing for these companies to know even more about us so they can be even more creepy, more invasive and more annoying, and so their advertising customers can productize you even more than they do already. You’ve bought into the “privacy is obsolete” line of cow dung peddled by the sociopaths who run Facebook and Google, hook, line, and sinker.

    “but I am worried Apple’s stance puts them in a compromised position to truly get the data they need to make better products and services.”

    Well, I hope that Apple never gets the bizarro world religion you are peddling. Because I would much rather not have a Jarvis in my life than put up with the wholesale destruction of all privacy that you wish for.

    1. They are ALL sociopaths. Hook, line, and sinker!
      “Corporatocracy” rules the day, some even are fans of their chosen corporate team.

  4. Well put overall. Several assumptions are being made by companies that are going to skew conclusions all over the place. It will be done in baby steps, but the concept will be spun and overmarketed until it loses credibility as the real tech evolves.

  5. I think this may come down to: How much of yourself do you really want to give up to a machine? At least, that’s how I look at it.

    1. That’s information, and information is not transferred from one party to another; it is shared, and both have it. A slight distinction, but still.

  6. The psychological interaction of how we want to be perceived versus what we may really be (and whether they are really all that extricable) aside, how does AI or ML _learn_ what data is fake and what data is real, never mind simply what data is relevant? What values have to be programmed in to parse such things out? Is it even programmable? Will a human always have to be part of the process, such as the humans behind Facebook news feeds?


  7. I agree by and large with the conclusions of this article. I think, though, that as soon as home voice appliances learn how to distinguish the individual voices of the members of a household, they will become valuable contenders for the role of personal assistant. I also think personal assistants are best utilized through smart cooperation between the phone’s assistant and a voice appliance at home.

  8. I don’t know if it’s the tech companies’ fault in the way they talk about their AI goals, or how journalists cover the topic, or maybe just most people’s sci-fi influenced preconceived notions of what an AI-powered personal assistant is, but I think the expectation is that something like HAL, the Star Trek computer, or even a C3PO is just within grasp. In truth nobody has any non-fantastical idea how to build a computer that “understands” the way those fictional computers do.

    We want AI to be able to adapt on the fly to human modes of thinking and communicating. In actuality, if anything, the opposite is happening: humans are (mostly unconsciously) adapting to computer modes. Language growth and creativity in general will be stifled if we are governed in our speech, thoughts and actions by the mental habit of “I need to say, think and do this in a way that the computer can ‘understand’.”

    1. Really interesting thought. Humans have always had to adapt to computers. Usually it impacted how humans type and how they present a problem to a computer. That is due to computers’ deficiencies, not humans’.

      Now that so-called more natural interfaces are present, are we going to be influenced so that we talk funny? So that we need to correct our questions more often? Even 95% recognition is far too low. Like the commercial says, will we be speaking in “ALL CAPS”?

      1. I think the great, mostly unnoticed benefit of computers is that they forced people, at least the people who have to use or program them, to be more logical thinkers. In order to use a computer to solve a problem, you need to understand and lay out your problem clearly, systematically, and hopefully elegantly. No fuzzy thinking allowed because, well, garbage in, garbage out, right? So yes, in a way you have to adapt to the way a computer ‘thinks’; you cannot bring in anything outside of a computer’s sphere of ‘knowledge’, if I may speak metaphorically. But when computers weren’t ubiquitous, this was a compartmentalized mode of thinking and functioning that you turned on only when you were working with/on a computer.

        But if computers are everywhere, I fear we might find that we will be limiting our everyday thoughts, functions and actions to the subset that computers can ‘understand’ or process, i.e. something that they already have in their data banks. It would just be the path of least resistance. Fewer glitches occur, less correction and clarification is needed (we all know the pleasures of a voice interface that doesn’t understand you), less time wasted. A mode of thought, though, that limits itself only to the already known, what is already in some data bank somewhere, means the (gradual) death of imagination and creativity. Admittedly, I’m looking at the worst-case scenario, but even if the extreme case doesn’t occur, we still don’t want to be headed in that direction.

        1. My mother, rest her soul, was a very wise woman (as I’m sure most everyone else’s is too). She would not allow me to get a calculator until I could get consistent “A”s in arithmetic that included long division, and even then only to check my work.

          However, I did find myself playing with numbers and acquiring an intuition of how they fit, just by playing with that calculator.

          Finding answers is important, but what is more important… setting up the right equation or solving the wrong one? 😉

          In other words, I agree with you. It’s what you know, and how you use your tools, that matters.

  9. Throw in Moore’s law, and Apple’s own GPUs. The result is local learning on the device, without privacy restrictions.
