Siri has been having a bad week. Gizmodo’s Mat Honan called Apple’s voice-response service “a lie.” Daring Fireball’s John Gruber, who rarely has bad things to say about Apple efforts, said it “isn’t up to Apple’s usual level of fit and finish, not by a long shot.” And my colleague Patrick Moorhead tweeted that inconsistency was leading him to reduce his use of the service.
Hang in there, Pat, Siri needs you. I share the frustrations and annoyances of Siri users, but continued use is the only way she’s going to get better.
Here’s what I think is going on, with the usual caveat that Apple only shares its thinking with people it can legally keep from talking about it, leaving the rest of us free to speculate. Apple doesn’t much like public beta testing. Before a major release, Microsoft will typically make a new version of Windows or Office available to tens of thousands of users for months, allowing developers to find and fix most of the bugs. Apple limits beta testing mostly to internal users and selected developers. It can get away with this because the real-world combinations of Mac or iOS hardware and software are orders of magnitude simpler than in the Windows world.
Siri is very different. The artificial intelligence engine behind the service lacks any inherent understanding of language. It has to be trained to make connections, to extract meaning from a semantic jumble. To even get to the databases and search tools Siri uses to answer questions, it first must construct a query from the free-form natural language that humans begin mastering long before they can talk, but which machines find daunting. (See Danny Sullivan’s Search Engine Land post for an excellent analysis of Siri’s struggles with queries about abortion clinics.)
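To see why query construction is the hard step, consider a toy sketch. This is not Apple’s actual pipeline, just a minimal illustration of the idea: free-form speech must be mapped to a structured query (an intent plus parameters) before any database or search tool can be consulted, and anything that fails to match is exactly the kind of miss a learning system would need to log.

```python
import re

# Hypothetical pattern table: each entry maps a phrasing to a structured query.
# A real assistant would use statistical models, not a handful of regexes.
PATTERNS = [
    (re.compile(r"(?:what's|what is) the weather in (?P<city>.+)", re.I),
     lambda m: {"intent": "weather", "city": m.group("city")}),
    (re.compile(r"remind me to (?P<task>.+) at (?P<time>.+)", re.I),
     lambda m: {"intent": "reminder", "task": m.group("task"), "time": m.group("time")}),
]

def construct_query(utterance):
    """Map a free-form utterance to a structured query, or None on failure."""
    for pattern, build in PATTERNS:
        match = pattern.match(utterance.strip())
        if match:
            return build(match)
    return None  # a miss: the kind of failure a learning service would log
```

The brittleness is the point: “What is the weather in Paris” matches, but “Is it raining in Paris?” falls straight through, which is why a fixed grammar can never keep up with how people actually talk.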
The secret to machine learning is feedback. I expect that Siri carefully logs every failed query, along with what the user does next. And algorithmic analysis of those logs, combined perhaps with some human intervention, means that every mistake contributes to the process of correction. In other words, Siri learns from its errors and the more people use it, the faster it will get better. Benoit Maison has a good explanation of how this works on his blog.
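The feedback loop described above can be sketched in a few lines. Everything here is an assumption for illustration, not a description of Apple’s system: the idea is simply that each failed query is logged along with the user’s corrective follow-up, and once the same follow-up has been observed often enough, it becomes the learned answer.

```python
from collections import defaultdict, Counter

class QueryFeedbackLog:
    """Toy sketch of server-side learning from failed queries."""

    def __init__(self, threshold=3):
        self.threshold = threshold
        self.followups = defaultdict(Counter)  # failed query -> follow-up action counts
        self.learned = {}                      # query -> promoted answer

    def record_failure(self, query, user_followup):
        """Log a failed query and what the user did next."""
        self.followups[query][user_followup] += 1
        action, count = self.followups[query].most_common(1)[0]
        if count >= self.threshold:
            # Enough users made the same correction: promote it.
            self.learned[query] = action

    def answer(self, query):
        """Return a learned answer, or None if the query is still unknown."""
        return self.learned.get(query)
```

Under this sketch, a query that stumps the system today starts returning the right answer once a handful of users have, in effect, corrected it, which is why more usage means faster improvement.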
The server-based learning creates a very different situation from the troubled handwriting recognition that helped doom Apple’s Newton 15 years ago (and to which some critics have compared Siri’s troubles). Newtons were products of a preconnected age, so there was no way for the community of MessagePads to learn from each other’s mistakes. And the extremely limited processing power and memory on the Newton itself made the claim that it would learn from its errors an empty promise. The Newton could never get past “egg freckles.”
Now, all of this said, Apple’s approach to Siri is a distinct departure from its usual practice of under-promising and over-delivering. It properly labeled Siri as a “beta” product. But, at the same time, it is using the half-baked feature as a major selling point for the iPhone 4S, hitting it hard in commercials. This is a disservice to customers, who have learned to expect a high polish on Apple products, and has saddled Siri with unreasonably high expectations that now are inspiring a backlash. Apple had to release Siri prematurely to let the learning process go forward. Let’s hope that Apple did not do the service permanent damage with its hype.