Algorithms Aren’t Always The Answer

On November 17 in his weekly Monday Note, Jean-Louis Gassée wrote: “App Store Curation: An Open Letter To Tim Cook”. He summed up his own letter best when he said:

With one million titles and no human guides, the Apple App Store has become incomprehensible for mere mortals. A simple solution exists: curation by humans instead of algorithms.

Is he right?

Where Have We Heard This Before?

When I read Monsieur Gassée’s article, I was immediately reminded of Beats. When Apple acquired Beats, Jimmy Iovine also opined on the importance of human curation, this time in regard to music.

There is a sea of music, an ocean of music and absolutely no curation for it. Your friends can’t curate for you.

(P)eople need navigation through all this music and somebody to help curate what song comes next.

Right now, somebody’s giving you 12 million songs, and you give them your credit card, and they tell you ‘good luck.’ … I’m going to offer you a guide … it’s going to be a trusted voice, and it’s going to be really good. ~ Jimmy Iovine

Algorithms Aren’t Always The Answer

What’s going on here? I thought this was the age of algorithms. Google was going to allow us to search the world’s information and give us driverless cars. Pandora was going to use the Music Genome Project to give us the music we loved. And eHarmony was going to match us with our soul mate. Yet now we’re retreating to human curation? What’s gone wrong?

Stereotypes And Subjectivity

A cowboy and a biker are on death row and are to be executed on the same day. The day comes, and they are brought to the gas chamber. The warden asks the cowboy if he has a last request, to which the cowboy replies:

“Ah shore do, warden. Ah’d be mighty grateful if ’n yoo’d play ‘Achy Breaky Heart’ fur me bahfore ah hafta go.”

“Sure enough, cowboy, we can do that,” says the warden. He turns to the biker, “And you, biker, what’s your last request?”

“Kill me first.”

Funny, right? Only it’s a stereotype, not a reliable rule. In reality, the biker may have liked ‘Achy Breaky Heart’ and the cowboy may have preferred the gas chamber to hearing that song even one more time. Machine Learning is great at learning rules. But human beings don’t use algorithms. We use common sense. And there’s nothing harder to replicate than common sense.


Machine Learning

Turns out we need to distinguish between Machine Learning and Common Sense. In his book “Everything Is Obvious,” Duncan J. Watts explains why computers use Machine Learning instead of common sense:

(Machine learning) is a statistical model of data rather than thought processes. This approach…was far less intuitive than the original cognitive approach, but it has proved to be much more productive, leading to all kinds of impressive breakthroughs, from the almost magical ability of search engines to complete queries as you type them to building autonomous robot cars, and even a computer that can play Jeopardy. ((Excerpt From: Duncan J. Watts. “Everything Is Obvious.” iBooks. https://itun.es/us/jw4lz.l))
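Watts’ “statistical model of data rather than thought processes” is easy to see in miniature. Here is a minimal sketch of how query completion can work purely as counting: suggest whichever past queries most often begin with what the user has typed so far. The query log is invented for illustration, and real search engines are vastly more sophisticated, but the principle is the same — no understanding, just statistics over data.

```python
from collections import Counter

# Invented query log, for illustration only.
query_log = [
    "driverless cars", "driverless cars news", "driverless cars",
    "driverless trucks", "drive to work playlist", "drive in movies near me",
]

def complete(prefix, log, k=3):
    """Suggest the k most frequent past queries that start with `prefix`."""
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(complete("driverless", query_log))
# e.g. ['driverless cars', 'driverless cars news', 'driverless trucks']
```

The program has no idea what a driverless car is. It is simply a statistical model of what people have typed before, which is exactly Watts’ point.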

Common Sense

Machine Learning is great. It makes search engines like Google work and it may someday give us driverless cars. But Machine Learning can’t curate App Stores, Music Stores and dating sites because it measures things differently than we do. Which reminds me of another joke:

An attorney, an accountant and a statistician went deer hunting. The attorney loosed his arrow at the deer but it landed five feet beyond the deer. The accountant loosed his arrow at the deer but it landed five feet short. The statistician then began to wildly celebrate yelling: “We hit it! We hit it!”

The point? The statistician was using the wrong method to determine what counted as hitting the deer. Computer algorithms use the wrong method too, not because they’re stupid but because they’re smart, and the rules we use to guide our preferences don’t reduce to smart, logical constructs.
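To see the statistician’s mistake in the smallest possible terms, here is a toy calculation (the numbers are invented, following the joke): one arrow lands five feet long, the other five feet short, so the average error is zero even though nothing was hit. An algorithm that optimizes the wrong measure can declare success while missing what we actually care about.

```python
# Signed miss distances in feet: +5 is five feet past the deer, -5 is five feet short.
shots = [+5, -5]

mean_error = sum(shots) / len(shots)        # 0.0 feet: "We hit it!"
hits = sum(1 for s in shots if abs(s) < 1)  # 0: nobody actually hit anything

print(f"average error: {mean_error} ft, hits: {hits}")
```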

(V)irtually every everyday task is difficult for essentially the same reason—that the list of potentially relevant facts and rules is staggeringly long. Nor does it help that most of this list can be safely ignored most of the time—because it’s generally impossible to know in advance which things can be ignored and which cannot. So in practice, the researchers found that they had to wildly overprogram their creations in order to perform even the most trivial tasks.

(C)ommonsense knowledge has proven so hard to replicate in computers—because, in contrast with theoretical knowledge, it requires a relatively large number of rules to deal with even a small number of special cases.


Attempts to formalize common sense knowledge have all encountered versions of this problem—that, in order to teach a robot to imitate even a limited range of human behavior, you would have to, in a sense, teach it everything about the world.

Excerpts From: Duncan J. Watts. “Everything Is Obvious.” iBooks. https://itun.es/us/jw4lz.l
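Here is a toy sketch of why those hand-written rules balloon the way Watts describes. The rule below, and the people it judges, are invented for illustration; the point is that every special case (the biker who secretly loves the song, the cowboy who never wants to hear it again) needs its own exception, and each exception is one more fact about the world the program has to be told.

```python
def likes_achy_breaky_heart(person):
    """A naive hand-coded curation rule, plus the exceptions it immediately needs."""
    if person.get("style") == "cowboy":
        # ...unless this particular cowboy would rather face the gas chamber
        # than hear the song one more time.
        if person.get("sick_of_the_song"):
            return False
        return True
    if person.get("style") == "biker":
        # ...unless this particular biker is a secret Billy Ray Cyrus fan.
        if person.get("secret_fan"):
            return True
        return False
    # And so on, for every other kind of person, song, and circumstance.
    return False

print(likes_achy_breaky_heart({"style": "biker", "secret_fan": True}))  # True
```

Multiply that by a million apps or twelve million songs and the list of rules becomes, in Watts’ phrase, staggeringly long.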

Conclusion

Robert A. Heinlein once said:

Don’t explain computers to laymen. Simpler to explain sex to a virgin.

It turns out, trying to explain humans to a computer is even more difficult. And not half as much fun.

For a computer to understand my music, my dating, or even my app preferences, it would need to know almost everything there is to know about me. Even then it wouldn’t be able to apply the same mishmash of rules to the problem as I would.

Human curation seems like a step back to me. But when it comes to providing humans with what they prefer, that step back may end up being a huge leap forward.

Published by

John Kirk

John R. Kirk is a recovering attorney. He has also worked as a financial advisor and a business coach. His love affair with computing started with his purchase of the original Mac in 1985. His primary interest is the field of personal computing (which includes phones, tablets, notebooks and desktops) and his primary focus is on long-term business strategies: What makes a company unique; How do those unique qualities aid or inhibit the success of the company; and why don’t (or can’t) other companies adopt the successful attributes of their competitors?

17 thoughts on “Algorithms Aren’t Always The Answer”

  1. Beautifully said. And funny!

    Meanwhile, from the great beyond, Mark Twain was hanging out, watching those clowns that were hunting. Socrates walked by and asked him what he was doing. Twain said that he was observing physical proof of one of his theories…the behavior of a liar (accountant), a damn liar (attorney), and a statistician!

    The eternally wise Socrates said “oh, I don’t know…”.

    The “random element”, be it entropy in the physical world or “free will” in philosophy, is the “giver of freedom”. That’s the problem with algorithms: they can’t randomize right.

  2. While I am a firm believer in accurate data and evidence, there really is no such thing as “just the facts”. Data and facts always require interpretation. This is why, as your foil in another article, Brian Hall, accurately points out, online ads are so poor. Big data is only capable of so much, especially in the hands of an unthinking algorithm incapable of empathy.

    This is not just a technology industry problem. Everywhere we have to move past the notion of data for data’s sake, http://createquity.com/2014/11/is-the-cultural-sector-ready-to-move-beyond-data-for-datas-sake/ , and use common sense and empathy (currently not an algorithm capacity) to figure out what is relevant. No small feat that.

    Joe

    [eta: this is one reason I enjoy Techpinions over other tech sites. There is a cultural, humanistic significance to the numbers and data that most other sites ignore.]

    1. While I appreciate the sentiment of your post, the scientific method fully covers this. Observe, hypothesize, experiment (data and facts), theory, law.
      All it takes is one ugly fact to kill a beautiful theory.

      1. I certainly did not mean to imply that the scientific method did otherwise, which is why I didn’t mention the scientific method.

        Joe

        1. Oh, and I didn’t mean to put words in your mouth. Let me re-state: having facts IS sufficient. The hard part is having the right facts. This is because “facts combine” to give an astronomical number of nuances. So the fault isn’t with the “facts” but with having the judgment to pursue the correct ones.

          1. I don’t think I placed the fault on “facts” either. I am pretty sure the error is someone thinking there is _just_ the facts. Scientists, at least the good ones, are interpreters. This is, in my estimation, a large part of what Einstein meant when he said “After a certain high level of technical skill is achieved, science and art tend to coalesce in esthetics, plasticity, and form. The greatest scientists are always artists as well.”

            In the context of this article, passing off the duty of curating onto an algorithm, no matter how well crafted, will never replace the insight a human can provide.

            Joe

          2. Completely agree on all points. In fact, there’s a definition of “art” running around technical circles… “If you can’t program a computer to do it, it’s art”. This of course does not mean using a computer to replicate what was born in a human mind, but rather programming a computer to generate it on its own.

            When IBM’s Watson (or was it Deep Blue?) beat Kasparov at chess, did it really? Or was Kasparov playing chess against the scores of programmers and data enterers behind it? Did the computer show any genuine creativity, or was it regurgitating?

          3. I’ve seen that before. It is interesting that it takes an economic tack on the idea. What it avoids is the whole quid pro quo of commerce. There is no point making something to sell if no one has a way to pay for it. Forty-five percent unemployment means almost half of the once employable population can no longer pay for what the other half is making.

            Even if you think “Well, the rich will always have money,” there are two problems with that: 1) The rich are only capable of eating so much and buying so many sweaters, and 2) A lot of those rich got there by selling to that 45%. Making them unemployable by furthering automation to the point the video portends undermines their own employment. I suppose the 55% can continue to only do business with themselves. But then the 45% will create their own economy out of necessity.

            The problem with the horse analogy is that the horse wasn’t an active part in its employment. It was a slave to others’ actions. The horse ultimately did not care if it was employable. Nor did it make money for itself.

            Maybe this is the way we get to a post-monetary economy? I would be OK with that.

            Joe

  3. Perhaps the main theme of the original Star Trek series was that making decisions based only on logic (which is all algorithms are) is inferior to the way humans make decisions. Humans have perceptive abilities that incorporate all the senses to enhance logic and arrive at the best decision, often when logic alone would not. It was always Spock who learned from Kirk, not the other way around. If we end up with a world ruled by algorithms in which humans must conform, we will lose our humanity.

    1. I think the dilemma Spock, and later Data, illustrates is that to logically consider human nature without including the emotional in the equation is illogical.

      Joe
