Algorithms Aren’t Always The Answer

On November 17, in his weekly Monday Note, Jean-Louis Gassée wrote “App Store Curation: An Open Letter To Tim Cook”. He summed up his own letter best when he said:

With one million titles and no human guides, the Apple App Store has become incomprehensible for mere mortals. A simple solution exists: curation by humans instead of algorithms.

Is he right?

Where Have We Heard This Before?

When I read Monsieur Gassée’s article, I was immediately reminded of Beats. When Apple acquired Beats, Jimmy Iovine also opined on the importance of human curation, this time with regard to music.

There is a sea of music, an ocean of music and absolutely no curation for it. Your friends can’t curate for you.

(P)eople need navigation through all this music and somebody to help curate what song comes next.

Right now, somebody’s giving you 12 million songs, and you give them your credit card, and they tell you ‘good luck.’ … I’m going to offer you a guide … it’s going to be a trusted voice, and it’s going to be really good. ~ Jimmy Iovine

Algorithms Aren’t Always The Answer

What’s going on here? I thought this was the age of algorithms. Google was going to allow us to search the world’s information and give us driverless cars. Pandora was going to use the Music Genome Project to give us the music we loved. And eHarmony was going to match us with our soul mate. Yet now we’re retreating to human curation? What’s gone wrong?

Stereotypes And Subjectivity

A cowboy and a biker are on death row and are to be executed on the same day. The day comes, and they are brought to the gas chamber. The warden asks the cowboy if he has a last request, to which the cowboy replies:

“Ah shore do, warden. Ah’d be mighty grateful if ’n yoo’d play ‘Achy Breaky Heart’ fur me bahfore ah hafta go.”

“Sure enough, cowboy, we can do that,” says the warden. He turns to the biker, “And you, biker, what’s your last request?”

“Kill me first.”

Funny, right? Only it’s a stereotype, not a reliable rule. In reality, the biker may have liked ‘Achy Breaky Heart’ and the cowboy may have preferred the gas chamber to hearing that song even one more time. Machine Learning is great at learning rules. But human beings don’t use algorithms. We use common sense. And there’s nothing harder to replicate than common sense.


Machine Learning

Turns out we need to distinguish between Machine Learning and Common Sense. In his book “Everything Is Obvious,” Duncan J. Watts explains why computers use Machine Learning instead of common sense:

(Machine learning) is a statistical model of data rather than thought processes. This approach…was far less intuitive than the original cognitive approach, but it has proved to be much more productive, leading to all kinds of impressive breakthroughs, from the almost magical ability of search engines to complete queries as you type them to building autonomous robot cars, and even a computer that can play Jeopardy. (Excerpt from: Duncan J. Watts, “Everything Is Obvious,” iBooks.)
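Watts’s point about query completion is easy to make concrete. Here is a minimal, purely illustrative sketch (the query log and function names are invented, and real search engines are vastly more sophisticated): a statistical model that suggests completions by counting past queries, with no understanding of what any query means.

```python
from collections import Counter

# A toy query log -- the "data" a statistical model learns from.
# (Invented for illustration; real search logs are vastly larger.)
query_log = [
    "weather today", "weather tomorrow", "weather today",
    "weather radar", "web mail", "web mail",
]

def complete(prefix, log, k=2):
    """Suggest the k most common past queries starting with prefix.

    No semantics, no thought process -- just counts over data,
    which is exactly Watts's "statistical model of data" point.
    """
    counts = Counter(q for q in log if q.startswith(prefix))
    return [q for q, _ in counts.most_common(k)]

print(complete("we", query_log))
```

The model has never “understood” a single word; it simply mirrors what the data contains, which is why it excels at autocomplete and struggles at common sense.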

Common Sense

Machine Learning is great. It makes search engines like Google work and it may someday give us driverless cars. But Machine Learning can’t curate App Stores, Music Stores and dating sites because it measures things differently than we do. Which reminds me of another joke:

An attorney, an accountant and a statistician went deer hunting. The attorney loosed his arrow at the deer but it landed five feet beyond the deer. The accountant loosed his arrow at the deer but it landed five feet short. The statistician then began to wildly celebrate yelling: “We hit it! We hit it!”

The point? The statistician was obviously using the wrong method to determine what constituted hitting the deer. Computer algorithms use the wrong method too — not because they’re stupid but because they’re smart, and because the rules we use to guide our preferences do not reduce to tidy, logical constructs.

(V)irtually every everyday task is difficult for essentially the same reason—that the list of potentially relevant facts and rules is staggeringly long. Nor does it help that most of this list can be safely ignored most of the time—because it’s generally impossible to know in advance which things can be ignored and which cannot. So in practice, the researchers found that they had to wildly overprogram their creations in order to perform even the most trivial tasks.

(C)ommonsense knowledge has proven so hard to replicate in computers—because, in contrast with theoretical knowledge, it requires a relatively large number of rules to deal with even a small number of special cases.

[pullquote]For a computer to understand us, you would have to teach it everything about the world.[/pullquote]

Attempts to formalize common sense knowledge have all encountered versions of this problem—that, in order to teach a robot to imitate even a limited range of human behavior, you would have to, in a sense, teach it everything about the world.

Excerpts From: Duncan J. Watts. “Everything Is Obvious.” iBooks.


Robert A. Heinlein once said:

Don’t explain computers to laymen. Simpler to explain sex to a virgin.

It turns out, trying to explain humans to a computer is even more difficult. And not half as much fun.

For a computer to understand my music, my dating, or even my app preferences, it would need to know almost everything there is to know about me. Even then it wouldn’t be able to apply the same mishmash of rules to the problem as I would.

Human curation seems like a step back to me. But when it comes to providing humans with what they prefer, that step back may end up being a huge leap forward.

Why Apple Had To Release Siri Half-Baked

Siri has been having a bad week. Gizmodo’s Mat Honan called Apple’s voice-response service “a lie.” Daring Fireball’s John Gruber, who rarely has bad things to say about Apple efforts, said it “isn’t up to Apple’s usual level of fit and finish, not by a long shot.” And my colleague Patrick Moorhead tweeted that inconsistency was leading him to reduce his use of the service.

Hang in there, Pat, Siri needs you. I share the frustrations and annoyances of Siri users, but continued use is the only way she’s going to get better.

Here’s what I think is going on, with the usual caveat that Apple only shares its thinking with people it can legally keep from talking about it, leaving the rest of us free to speculate. Apple doesn’t much like public beta testing. Before a major release, Microsoft will typically make a new version of Windows or Office available to tens of thousands of users for months, allowing developers to find and fix most of the bugs. Apple limits beta testing mostly to internal users and selected developers. It can get away with this because the real-world combinations of Mac or iOS hardware and software are orders of magnitude simpler than in the Windows world.

Siri is very different. The artificial intelligence engine behind the service lacks any inherent understanding of language. It has to be trained to make connections, to extract meaning from a semantic jumble. To even get to the databases and search tools Siri uses to answer questions, it first must construct a query from the free-form natural language that humans begin mastering long before they can talk, but which machines find daunting. (See Danny Sullivan’s Search Engine Land post for an excellent analysis of Siri’s struggles with queries about abortion clinics.)

The secret to machine learning is feedback. I expect that Siri carefully logs every failed query, along with what the user does next. And algorithmic analysis of those logs, combined perhaps with some human intervention, means that every mistake contributes to the process of correction. In other words, Siri learns from its errors and the more people use it, the faster it will get better. Benoit Maison has a good explanation of how this works on his blog.
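The feedback loop described above can be sketched in a few lines. This is a hypothetical toy, not Apple’s actual pipeline: it assumes a failed query is paired with the user’s successful retry, and that pairing becomes a correction applied to future queries.

```python
# Toy sketch of learning from failed queries (hypothetical; not
# Apple's actual system). Each failure plus the user's follow-up
# becomes a correction the service can apply next time.

corrections = {}  # learned mapping: misheard phrase -> intended phrase

def log_failure(heard, user_retry):
    """Record what the system heard and what the user actually wanted."""
    corrections[heard] = user_retry

def interpret(heard):
    """Apply any learned correction before answering the query."""
    return corrections.get(heard, heard)

# A first attempt fails; the user rephrases, and the failure is logged.
log_failure("call a cap", "call a cab")

# The next user who is misheard the same way gets the corrected query.
print(interpret("call a cap"))
```

The key property is exactly the one in the paragraph above: every logged mistake makes the next interpretation better, so the service improves with use.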

The server-based learning creates a very different situation from the troubled handwriting recognition that helped doom Apple’s Newton 15 years ago (and to which some critics have compared Siri’s troubles). Newtons were products of a preconnected age, so there was no way for the community of MessagePads to learn from each other’s mistakes. And the extremely limited processing power and memory of the Newton itself made the claim that it would learn from its errors an empty promise. The Newton could never get past “egg freckles.”

Now, all of this said, Apple’s approach to Siri is a distinct departure from its usual practice of under-promising and over-delivering. It properly labeled Siri as a “beta” product. But, at the same time, it is using the half-baked feature as a major selling point for the iPhone 4S, hitting it hard in commercials. This is a disservice to customers, who have learned to expect a high polish on Apple products, and has saddled Siri with unreasonably high expectations that now are inspiring a backlash. Apple had to release Siri prematurely to let the learning process go forward. Let’s hope that Apple did not do the service permanent damage with its hype.