Should Intelligent Agents Teach Us How to Use Them?

Not long after Apple introduced the Newton, its first personal digital assistant (PDA), in 1992, it became clear that the product’s life would be a short one. While the concept of the Newton attracted real attention, its design and functionality were weak and ultimately did not work as Apple promised. Its most significant problem was its deeply flawed handwriting recognition. It failed for many reasons, the key one being that the mobile processors available at the time could not handle the task with any accuracy or precision. The recognition software Apple used was also poorly executed.

I remember flying to Chicago for the launch of the Newton at the request of then-Apple CEO John Sculley, who drove the project from the beginning. He introduced the concept of the PDA to a very broad audience and tried to make the Newton Apple’s next big hit.

But at that event, when the Newton was demoed, the handwriting recognition failed repeatedly, and even though we were told it was an early version of the software, I had a strong feeling that this was a product Apple had overpromised and would, unfortunately, underdeliver.

The Newton had a short life, but during its early years Jeff Hawkins, whom I had met when he was at Grid Computers, began working on his own version of a PDA that he called the Palm Pilot. In the early development stages of his PDA, Jeff invited me to his office to see his mockup, a wooden block sculpted to look like what eventually became the first Palm Pilot.

During that visit we talked about Apple’s Newton, and I asked him why he thought it had failed. He said that while he was at Grid Computers, which introduced the first real pen-computing laptop, the GridPad, in 1988, he learned that when it came to pen input and character recognition, users needed to follow an exact formula and write characters exactly as specified in the manual.

At the time the GridPad came out, it too had a low-powered CPU and could not handle accurate character recognition. He said Apple was overly optimistic about the Newton’s ability to manage true handwriting recognition, and with so many variables in people’s handwriting, it was doomed to fail.

That is why, when he introduced the Palm Pilot, he also introduced the Graffiti writing system, which taught users how to write numbers, letters of the alphabet, and specific characters such as # and $ in strokes the Palm Pilot’s technology could recognize.

I was one of the first to test a Palm Pilot. I found Graffiti very intuitive, and within a couple of days I had mastered its characters, as long as I used the prescribed strokes for each letter, number, or symbol. When I did, my writing was translated into digital text on the screen in real time. One could call this a form of reverse programming: in this case, the machine was teaching me how to use it in the language it understood.

Fast forward to today, and I believe we have a similar thing going on with digital assistants, whether they are delivered on a PC, smartphone, smart speaker, or smart TV. One big difference this time around is that greater processing power, along with AI and machine learning, is making these digital assistants much smarter, though not always accurate.

In what I think of as a Graffiti-like move, Amazon sends me weekly emails that include over a dozen new questions Alexa can answer. This, too, is an example of reverse programming: by teaching me to ask Alexa questions the proper way, Amazon helps ensure I get highly accurate answers.

Here are some of the new things Alexa can respond to, taken from an email Amazon sent me last week:

* “Alexa, what’s on your mind?”
* “Alexa, what are you doing for Black History Month?”
* “Alexa, give me a presidential speech.”
In honor of President’s Day, listen to 2-minute speeches from past US presidents—search by decade or president and let the inspiration begin.
* “Alexa, what’s another word for ‘happy’?”
* “Alexa, who is hosting the Oscars?”
* “Alexa, play the Long Weekend Indie playlist from Amazon Music.”
* “Alexa, give me a quiz for Black History Month.”
* “Alexa, what can I make with chicken and spinach?”
* “Alexa, tell me a President’s Day fact.”
* “Alexa, call Mom.”
Try a new way of connecting with the people you love. Learn more about Alexa calling and messaging.
* “Alexa, test my spelling skills.”
* “Alexa, wake me up in the morning.”
* “Alexa, how long is the movie Black Panther?”
* “Alexa, speak in iambic pentameter.”
* “Alexa, how many days until Memorial Day?”

These weekly prompts help me, or any user, begin to understand the proper way to ask Alexa a question, with the added bonus of suggesting questions of interest that will return an accurate answer. That builds up a user’s confidence in using the smart assistant, in this case Alexa.

I have no doubt that as faster processors, machine learning, and AI are applied to digital assistants, they will get smarter. I suspect that more and more companies that create digital assistants will start using Amazon’s model of teaching people to ask questions in line with how their assistants expect a query to be stated. That approach will also give users more ideas of things to ask, teaching them how to use these assistants and helping them get more accurate answers in the future.

Published by

Tim Bajarin

Tim Bajarin is the President of Creative Strategies, Inc. He is recognized as one of the leading industry consultants, analysts and futurists covering the field of personal computers and consumer technology. Mr. Bajarin has been with Creative Strategies since 1981 and has served as a consultant to most of the leading hardware and software vendors in the industry including IBM, Apple, Xerox, Compaq, Dell, AT&T, Microsoft, Polaroid, Lotus, Epson, Toshiba and numerous others.
