The more I think about the recent breakthroughs in machine learning and deep learning, the more convinced I am that we are finally heading toward a smarter software future.
For years I have been writing about the need for better predictive intelligence in our software. It seems ridiculous that my smartphone does not know more about my context and take relevant actions on my behalf. If I’m in a meeting, it should send all calls to voicemail or reply with a text message. If I’m running late to a meeting, it should offer to email or text the people I’m meeting with to let them know I’m running late, along with an ETA (since it knows where I am on the road, the traffic situation, and my time to destination). Our smartphones are really not that smart when it comes to the intelligence equation. That is about to change.
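The kind of context-driven behavior described above could be sketched as a simple rule engine. This is purely a hypothetical illustration (the `Context` fields and action strings are invented for the example, not any real platform API):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Context:
    """Signals the phone can auto-detect: calendar state, navigation ETA."""
    in_meeting: bool
    running_late: bool
    eta_minutes: int  # minutes to destination, from navigation

def suggest_actions(ctx: Context) -> List[str]:
    """Map detected context to proposed actions, without asking the user."""
    actions = []
    if ctx.in_meeting:
        actions.append("route calls to voicemail")
    if ctx.running_late:
        actions.append(f"text attendees: running late, ETA {ctx.eta_minutes} min")
    return actions
```

The point of the sketch is that every input is something the device already knows; the user is only asked to confirm an action, never to supply the facts.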
Most modern software platforms like iOS and Android are adding elements of machine learning and beginning to extend more of the core platform intelligence to developers. The software community can now tap into platform APIs and apply many of these advancements in deep learning to deliver new and more valuable customer experiences.
For a while now, my friend Benedict Evans has been referencing a saying from Eric Raymond: “A computer should never ask the user for any information that it can auto-detect, copy, or deduce”. Yet, for the longest time, that is exactly what computers have been doing when it comes to machine learning and AI. My smartphone need never ask where I am. It has that data via the built-in GPS. My smartphone also has my full calendar, so it should be able to deduce what I am doing and where I am. The key point is that, with the advancements in machine/deep learning, we are on the cusp of computers asking us significantly fewer questions, as they will increasingly be able to auto-detect, copy, and deduce relevant information on their own. This is what has changed, and it will open the door to a much more intelligence-based era of computing.
Communal vs. Personal Intelligence
One area of this discussion I’m very interested in is the difference between “communal machine learning” and “personal machine learning”. Most people discussing AI/machine/deep learning have not made a division between the two. Admittedly, communal machine learning (big data collected from a massive number of users for things like maps, visual recognition of general image knowledge, etc.) has been the primary way machine learning has taken place. We have yet to crack the AI/machine/deep learning that will take place as a computer begins to study its user in more intimate ways.
The companies that will focus on communal vs. personal AI/machine/deep learning seem clear. Google wants to be the deepest domain expert in communal knowledge. It seems less focused on going deep on the specific and intimate details of the user, aiming instead to gather just enough information to relevantly answer any query. This is one reason I believe Google is not building something with a name like Siri/Cortana/Alexa, which would encourage a user to build a relationship. Rather, Google seems content to let others focus on the personal AI and simply let its big communal data sets feed whichever personal AI the consumer chooses/hires as their personal assistant.
On the other hand, Apple, Amazon, and Microsoft seem to be looking to go deeper with the individual user and build tools that let them have deeper relationships and, thus, reveal more intimate details about themselves to their personal agents. Trust about privacy is the key here, and the companies I mentioned seem to have a better “trust-centered relationship” with users than Google does. Again, perhaps Google knows this, and that is why its focus is elsewhere, at least for the moment.
As I talk with those in the industry thinking about and building products in this area, I encourage people to understand this is likely not a one-size-fits-all solution. Nor is it a winner-take-all market. Just because I may hire Siri as my personal assistant and allow Apple’s AI to learn more intimate details of my life, it does not mean I can’t use Google’s, Amazon’s, Microsoft’s, or a host of other “bots” or “agents”. In fact, it seems unwise for any company to wall off its agent from the rest. Ideally, some kind of generic standard will be created so all these agents can talk to each other. But at the end of the day, I do believe consumers will cling to one as their primary agent. That is a battle only a few companies can realistically engage in.
I’m currently of the mind that companies in the AI/machine/deep learning space need to focus on being domain experts. For example, Amazon may ultimately be the commerce domain expert, Google for queries, Microsoft for business, and Apple/Siri the expert for my personal life. Similarly, in China, Baidu would be the search domain expert, Alibaba the commerce expert, etc. All these AI engines need to work together to be able to ask the user fewer questions and return more value as a result.
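The domain-expert model above amounts to a routing layer: the primary agent hands each request to whichever engine owns that domain. A minimal sketch, with the domain-to-company mapping taken from the examples in the paragraph (the routing function itself is hypothetical):

```python
# Domain-expert assignments as suggested in the text above.
DOMAIN_EXPERTS = {
    "commerce": "Amazon",
    "queries": "Google",
    "business": "Microsoft",
    "personal": "Apple/Siri",
}

def route(domain: str) -> str:
    """Hand a request to the domain's expert agent;
    anything unrecognized stays with the user's primary agent."""
    return DOMAIN_EXPERTS.get(domain, "primary agent")
```

The interesting design question is the fallback: when no domain expert matches, the primary agent keeps the request, which is why the battle over who holds that default position matters.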
We don’t want to spend increasingly longer portions of our day with machines. The less we have to tell them what we want, the better. This allows us to have more time to be better at the things humans will be better at than machines. That’s where we are headed.
24 thoughts on “Toward a Smarter Software Future”
Privacy is but the beginning. How voluntary is the information collected? Is it only public domain information? If not, then the relationship between each user, anonymous or not, must be established.
I’ve said this before regarding Google: can I choose my actions so as to NOT make Google money? That is, can I choose whether to purchase goods and services? With their tentacles as wide and as deep as they are, merely using the internet makes money for Google. I say that makes Google a public utility, and it should be regulated as such. The same would hold for “communal intelligence”. If it is public domain information, then fine; otherwise the user has the right to stay off the grid. That’s what Apple did, for now, with differential privacy. It’s opt-in.
I would think the key to this whole privacy issue is opt-in, possibly opt-in for each piece of information the computer might ask for.
If certain companies had always done this (I won’t mention names) people might not be feeling their privacy had been invaded. I guess those companies either weren’t smart enough to know not to step over the lines, or they were just too eager for profits.
We do, or should, have a say over who profits from us as individuals either as product or as consumers of product.
I like your identifying us as “product,” because it reminds me of “If you’re not paying for it, YOU are the product being sold.” I’m really tempted to name the company that quote refers to, but I won’t.
Okay, I’m not. It starts with a G and rhymes with frugal!
No that’s not the one. I’m thinking of a company that has well over a billion users. That’s all I’ll say.
The content from those billion users does not seem very deep or broad.
I would guess Facebook. People are not only the product but are also the content creators. Facebook can be quite useful, ask any mom running a family, but a good rule of thumb is to never put anything on Facebook that you wouldn’t mind the whole world seeing.
Separate post for separate point.
“We don’t want to spend increasingly longer portions of our day with machines. The less we have to tell them what we want, the better.”
So you mean the actual interactive part, not the control part. Skynet anyone?
[[“A computer should never ask the user for any information that it can auto-detect, copy, or deduce”. Yet, for the longest time, that is exactly what computers have been doing]]
On the one hand, blame crap programmers, and/or bad programming habits acquired from the previous 20 or 30 or 40 years of computing, when computers were too primitive to behave in an intelligent manner. Most bad software UI exists today simply because it was necessary long ago when computers were ludicrously primitive, and continues to exist simply because it’s what everyone has become used to, to the point of becoming invisible.
On the other hand:
[[If I’m in a meeting, send all calls to VM or send a text message. If I’m running late to a meeting, offer to send an email or text to those I’m meeting with to let them know I’m running late]]
The problem is that almost every past attempt to implement conveniences like this has resulted in something that either acts bone-headed or acts creepy. I think the problem with implementing features like this in a way most people will accept rather than reject lies in the culture of programming: the mindset of most programmers, which has elevated interpersonal ineptitude and cultural blind spots and denigrated being a normal person with normal social skills.
As long as software firms continue to make nerds into CEOs, I think it is going to be very hard for them to implement conveniences like the ones you are wishing for that work unobtrusively rather than creepily.
Agree completely. You’ve described what I think is the basic flaw of the whole AI program that the major tech companies are pursuing.
The more you try to make machines think like humans, the more they will make the same mental mistakes (outside of just plain carelessness) that humans make. That is, errors that are the inevitable result of using intuition, errors that result from misjudging the relative importance of factors that go into a decision, errors that result from misuse or misapplication of analogy, etc.
No matter how well you program a machine, no matter how much information you cram into it, it will work well until it doesn’t. That is, it will seem to be working flawlessly, so you keep asking it to do more and more until it reaches the limits of its ‘intelligence’ and makes a mistake that hopefully isn’t harmful. Then programmers will fix the program, make it ‘more intelligent’, until people hit the next limit. And so on and so forth.
Predicting the next unexpected mistake (what it will be, when it will occur) is a logical impossibility. AI programmers have implicitly assumed they have surmounted this obstacle.
I would be happy if my iPhone were at least smart enough to switch to “vibrate only” mode when I am on a call.
“Nor is it a winner take all market.”
This is a key point that should inform the discussion and analysis. The dominance of Microsoft/Windows was the exception, not the rule. Consumer markets normally have many winners.
It depends on what kind of job you let AI do. Can AI be a doctor? Yes. A salesman? Yes. An artist? Probably not.
AI can be a rudimentary doctor or salesman. Not a good one. All fields involve creativity akin to that of an artist, with intuition being an important part.
Google presented its DeepMind computer outplaying a human at Go as the computer having “an intuition”. Basically, AI just crunches a bunch of numbers and makes predictions. That Google’s AI is better than a human at guessing does not mean it has “an intuition”.
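That “crunch numbers and make predictions” view can be made concrete with a toy minimax search over a tiny game tree. (This is a deliberate simplification for illustration; AlphaGo itself combines Monte Carlo tree search with learned policy and value networks rather than exhaustive minimax.)

```python
def minimax(node, maximizing=True):
    """Exhaustively score a game tree: numbers in, a prediction out.
    Leaves are position values; inner lists are choice points."""
    if isinstance(node, (int, float)):
        return node  # leaf: the position's value
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# Two moves for us, then two replies for the opponent.
tree = [[3, 5], [2, 9]]
print(minimax(tree))  # -> 3: the opponent minimizes, so the 9 is never reachable
```

Nothing in the procedure resembles intuition; the output is just arithmetic over the tree.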
I think we agree. I disagree with Google using Go as a means of demonstrating intuitiveness.
Actually Go is based on maxima/minima and finite element analysis. Would I call it “intuitive”? Not really.
Anyway, any AI scheme that uses a human database for learning is not really AI; it’s data mining. A real AI dreams! 🙂
I thought Google Go was the programming language created by Robert Griesemer(sp?), a statically typed language modeled after C?
Is it? Vadim Dumin said outplaying a human at Go; Go is indeed a game.
Vadim made it sound like the game Go, but when you said Google was using Go to demonstrate intuitiveness, I thought you were referring to the language.
There has been some movement toward AI making art. I hold, though, that until an AI burns out and dies at 27 years old, it isn’t a real artist yet!
The spirit is strong and the flesh is weak… Until you fall down and it is not that bleak 😉
When AI can convey to me the existential purpose of the Man from Nantucket, I will believe….