Android and iOS: Two Very Different Philosophies

In this column, I in no way intend to say one of these platforms is superior to the other. I simply want to explore how they both represent completely different approaches to software and user experiences.

We have to start with a fundamental agreement that we live in a free world and support a free market. In this world, consumer choice is the most powerful market driver. Competition brings choice, and choice is very good.

Therefore, consumers are free to choose whatever products in hardware, software, and services they desire. Companies compete to create features that appeal to consumer segments, interests, and preferences. Certain features in hardware, software, and services will appeal differently to different people. There is nothing wrong with that; as I said, it is very good.

The Android Philosophy
At this point we must point out that Google is a services company, and it is for this reason that we should expect a different hardware and software philosophy. As I continually point out in our analysis of Android for clients, all hardware and software, to Google, is simply a front end for accessing its services.

Android was created primarily to help consumers access Google’s services on non-PC devices. Hardware, for Google, is just the physical object needed to run the software that is designed to access Google’s services.

Google starts with a services mindset and philosophy, then works backward to figure out how best to make those services as broadly accessible as possible.

Google is also an engineering company, and engineering companies historically struggle with making innovations accessible to tech laypeople.

With all of that context, what Google has done with Android is impressive. Those who get excited about technology for technology’s sake get very excited about Android. Google and Android engineers regularly show some very visionary and perhaps ahead-of-their-time technologies.

This is not to say that tech laypeople can’t use Android. Many do. However, I would argue that those who have a tendency to tinker, customize, and tweak their hardware themselves get the most excited about Android.

Android’s challenge is to make many of these forward-thinking features, like face recognition, the fully customizable UI, flexible widgets, and Android Beam (features found in Ice Cream Sandwich), easier and more compelling for everyday people to use.

The iOS Philosophy
Apple, on the other hand, is a software company that also cares deeply about making its own hardware. Apple is on the cusp of adding robust services to its ecosystem, but unlike Google, it approaches everything as a hardware and software company, not a services company. Services to Apple are a means, whereas to Google services are the end.

To Apple, making innovations accessible to the masses is the underlying theme of all their hardware, software, and now services philosophies. This is why they may not always be first with certain features, but it is clear that if they don’t offer something the market wants out of the gate, they will certainly add it and make it simple to use.

Apple’s target with their products is those to whom technology is mostly foreign, meaning it is not a core and central part of their everyday lives. This is why, when they release new products, they focus on only certain features. The features they focus on solve tangible, everyday needs and strike emotional chords with consumers.

For example, when they launched the iPhone 4, they could have touted any number of features; instead, they just demoed FaceTime, and that was enough. It spoke for itself and showed consumers the value of the latest feature.

Apple’s goal is to make technologists out of people who never cared about technology before. Their desire is to provide these consumers with sophisticated solutions that are extremely simple to use. I can’t stress enough how difficult this is, but it is something Apple does extremely well.

As I stated in the beginning, these two approaches represent just that: two different approaches. To each his own is the critical point I want to make.

I am in the privileged position of getting to provide opinion and analysis on all the platforms on the market. For some consumers whose buying decisions I influence, like friends and family, I am comfortable recommending Android devices; to others, I recommend iOS.

Where this really gets interesting is with the generations who grow up with technology, some call them “Digital Natives.” I watch my kids, for instance, who are perfectly comfortable jumping back and forth between my iOS and my Android devices.

This next generation will grow up incredibly technical and tech savvy. Because of that, their demands and expectations of next-generation personal computers will far exceed anything we can imagine today.


How Apple Won the Mobile War

The HP 95 LX

I have been following handheld computing products for about as long as they have existed, going back to such forgotten products as the Hewlett-Packard 95 LX and the Psion Series 5. In 20 years of effort, only three products truly caught the popular imagination: the Palm PDA, the BlackBerry, and the iPhone. And of these, only the iPhone became a true mass-market success.

Why? In the early days, especially, these products faced impossible technological hurdles. Miniaturization was still in a fairly primitive state, so the devices were saddled with seriously inadequate processing power. Displays were awful: low-resolution, low-contrast LCD screens. And wireless connectivity was nonexistent.

But designers managed to make a bad situation worse by trying to make devices do too much. The HP 95 LX and its successors were actually tiny MS-DOS computers; their ability to run Lotus 1-2-3 was a key selling point. But only a relative handful of people, mostly engineers, had any desire for such a product and it attracted an enthusiastic, but tiny, market. Numerous other devices came along in the mid- to late-1990s in an assortment of sizes and form-factors: the Apple Newton MessagePad, the Casio Zoomer, the IBM/BellSouth Simon (perhaps the first smartphone), the AT&T EO, the Motorola Envoy. All tried to do too much with too little, and all failed miserably.

The original Palm Pilot

The first device to break the paradigm was the original Palm Pilot of 1996. Its designer, Jeff Hawkins, had a Jobsian focus on the user experience; during development, he dropped any functions he felt were too complicated, and he swore that Palm users would never see an error message on their screens.

The Palm didn’t try to do much; essentially, it kept contacts and calendar in sync with your computer and took input through a modified handwriting system called Graffiti. But it worked vastly better than anything else at the time and was a hit. It was also, by way of the Handspring Treo, the direct ancestor of the modern smartphone, though its only means of communication was to a PC over a cable. (My review of the original Palm Pilot.)

The first BlackBerry, in 1999, was also a very specific solution to a specific problem: mobile email. Early BlackBerries had no voice capability. They were built on pager technology, and the first model was called the RIM Inter@ctive Pager 950; the BlackBerry name came along a bit later. The name is something of a giveaway: RIM came out of the pager industry, and the 950 was conceived as a vastly improved pager.

Instead of having to know a special pager number, send a page, and wait for the recipient to call back, the BlackBerry let you send an ordinary email and reach the recipient anywhere, any time. A tiny but surprisingly functional keyboard, much better than those on the primitive “two-way pagers” of the time, allowed replies. And like the Palm, it also synchronized contacts and calendar with a computer. (Read my review of the original 950.)

The BlackBerry 950

The BlackBerry was not an instant success. It started to catch on in a big way once RIM created the service that provided a secure link to corporate mail systems and enterprises started deploying the devices in large numbers. And, of course, the popularity grew once it gained voice capability. Like the Palm, the BlackBerry caught on because it served a real need and concentrated on doing one thing really well.

Throughout the late ’90s and early ’00s, there was a continuing effort to build handheld computers. Microsoft and partners such as Compaq, Hewlett-Packard, and Toshiba struggled mightily to cram something resembling Windows into a handheld product, but its Pocket PCs, with their miniaturized Windows desktops, left users cold. It was only when Windows Mobile imitated the much simpler design of the Palm Treo that it achieved some modest success.

Apple, after the failure of the Newton, avoided handheld computing in favor of creating a new market for the iPod. In typical Apple fashion, it let others get beat up and learned from their mistakes. By the time Apple came out with the iPhone in 2007, the world had changed again. The amount of processing power you could cram into a small device had grown tremendously. Big, high-resolution, touch-screen displays were economical. And wireless networks were ubiquitous.

But like its few successful predecessors, the iPhone didn’t try to do too much, at least not all at once. The original iPhone was a limited device. There was no app store and no apps other than the ones Apple provided. Despite the widespread availability of 3G wireless networks, the phone was limited to 2G. And the battery struggled to get through a day of normal use. But it was an instant hit because it did what it did well, without compromise, and in a way that delighted users. A year later, the iPhone 3G remedied the most glaring defects of the original, the App Store let a million apps bloom, and people finally had a full-fledged computer that fit in a pocket.

Strangely enough, the rest of the industry was pathetically slow to respond to the iPhone. Microsoft stuck by Windows Mobile, not seeming to realize that the iPhone’s design had rendered its Windows-derived user interface as obsolete as punch cards. RIM, too, saw no need for fundamental change even as the iPhone began to steal away its core corporate market. Only Google, with no history in the business, rose to the challenge with Android. Android is good enough, and has an attractive enough business model, to make it the only real remaining challenger to Apple. But even it has yet to prove that it can do any better than remain a beat behind the iPhone.

The Era of Personal Computing

I have adopted a philosophy in my analysis over the past few years where I distinguish between personal computing and personalized computing.

In a post a few months ago, I wrote about these differences and pointed out that, because of the differences between personal and personalized computing, the Post-PC Era will happen in two stages.

The first stage is personalized computing. In this era, the one we are currently in, all of our personal computing devices are personalized by us. What I mean by this is that we take the time to personalize our devices with our personal content, apps, preferences, interests, etc. In reality, however, how personal are these devices? They don’t actually know anything about us; we simply use them to get jobs done. We customize them, and they contain our personal content, but they really aren’t that personal.

However, in this next phase, the era of personal computing, things may actually get very interesting. In this era, our devices will actually start to learn things about us and, in the process, become truly personal. Our most personal devices will learn our interests, schedule, preferences, habits, personality, etc. I know it sounds a bit scary, but that is where we will inevitably end up.

I believe Apple’s latest feature, Siri, demonstrates this future reality of personal computing. As Tim pointed out in his article yesterday, Siri and its underlying artificial intelligence engine will learn key things about our unique tastes, interests, and more, and over time become even more useful as a personal assistant.

What is absolutely central for this personal computing era to become reality is that we have to allow our devices to get to know us. Perhaps more specifically, we have to trust our devices, or rather the underlying company providing us the personal computing experience.

John Gruber makes this very point in a post with some comments from Ed Wrenbeck, former lead developer of Siri.

In an interview with VectorForm Labs, Wrenbeck states:

“For Siri to be really effective, it has to learn a great deal about the user. If it knows where you work and where you live and what kind of places you like to go, it can really start to tailor itself as it becomes an expert on you. This requires a great deal of trust in the institution collecting this data. Siri didn’t have this, but Apple has earned a very high level of trust from its customers.”

In the era of personal computing, we will move beyond personalizing our devices; they will truly become personal to us because of their ability to know, learn, and be trained about who we are and our unique interests and needs.

There are many great examples of this in sci-fi movies and novels, but perhaps my favorite, because it is fresh, is how Tony Stark interacted with Jarvis in the Iron Man movies. Jarvis is what Tony Stark named his personal computer, and as you can tell from their interactions in the movie, Jarvis knew quite a few of the intimate details of Tony Stark’s life.

Jarvis was a personal computer, one that took on an entirely new way to be useful because of the artificial intelligence that was built on top of incredible computing power.

Of course, this all sounds extremely futuristic but it will be the basis of what takes us from having to manually personalize our devices, to a future where our devices truly become personal and indispensable parts of our lives.

Why Siri is Strategic for Apple

Now that I have had some time to work with the new iPhone, and especially the new Siri voice technology, I have been able to form a couple of opinions about this product’s market impact.

As I mentioned in a previous post, from a big-picture standpoint, Apple’s use of voice and speech as a form of input marks the third time Apple has influenced the market when it comes to UI design and navigation. They did it first with the mouse and its integration into the Mac, and then with touch by making it the key input for the iPhone. Now comes voice, which I believe will usher in the era of voice input and will start to dramatically impact the future of the man-machine interface.

While voice input is a significant part of Siri’s feature set within the new iPhone 4S, it is its AI and speech-comprehension technology that really makes it unique. More importantly, the more I use it, the more it gets to know who I am, where I live, what I like, and who I am related to, and the more info it gets on me, the better it gets as well. For example, within a few searches for Italian restaurants, it learned that this is a type of ethnic cuisine I like and remembered that. So, the next time I ask it to find me an Italian restaurant, it becomes more accurate in its recommendations. It now knows my home address and office address, and I can give it commands that play off these locations. For example, I can say, “Remind me to call my wife when I get to the office,” and as I walk in the door of my office complex, it reminds me to call her.

There are hundreds of ways that, once it begins to learn more about me, it can be quite useful and helpful. And as Apple has said, they will continue to link it to more powerful databases over time, giving it even greater reach into the information I might need in my daily life. That, linked with its continuing ability to learn about me, makes Siri perhaps the stickiest application I have ever used. In the short time I have used it, it has become almost indispensable in a couple of areas.

First, I now mostly speak my tweets and messages instead of typing them in. Second, I use it to input short emails as well. Having the Siri microphone integrated into the keyboard makes it so simple to use and this is now my first line for data entry.

But the third way I use it is related to my business. As a market researcher, I have to do a lot of percentage comparisons when I look at various numbers. Over the years I have become pretty good at working out this math in my mind, but this method is not very precise. I normally come within one to three points of the correct answer, and in a lot of cases that may be all I need for our predictions, since these are based on known data and are informed projections. In the past, if I wanted precise percentages, I would bring out the old calculator. But now when I want this number I just ask Siri, and she does not guess. Her answers are always exact, and fast.

The other thing it does extremely well is deal with appointments. I just tell it to schedule an appointment and it is done. And if there is a conflict it tells me that as well. Think of it as a smart personal assistant.

BTW, this is not Apple’s first stab at this voice and speech AI concept. In fact, they highlighted it prominently in the Knowledge Navigator multimedia video they did in 1987. The video shows a professor interacting with a computer, asking it questions and getting direct answers in ways that Siri does now. Ironically, this video and futuristic thinking were the brainchild of former CEO John Sculley and former Apple Fellow Alan Kay, one of the most futuristic thinkers we have in the world today. But at the time, the technology was not there to do what was projected in the Knowledge Navigator. Even more impressive is the fact that while the Knowledge Navigator was apparently connected to a very large computer, Siri runs on a pocket computer.

Now, as Siri develops a strong database about me and my likes and dislikes, it is quickly becoming indispensable as a mobile assistant. I suspect that as Siri gets to know me better, I will be highly unlikely to use something similar on another platform. Thus the stickiness: something that makes it very likely I will stay within the Apple ecosystem as long as Apple continues to innovate and make Siri smarter and even more useful.

iOS Morphing Into a Desktop OS?

During the Apple WWDC, I was really struck by just how many features were added to iOS 5 and just how few new features had been added to Lion. Don’t get me wrong here, I like Lion a lot, but after using many of the 250 new features, few altered how or what someone can do with a computer that they couldn’t already do on a tablet. The one exception was AirDrop, which makes peer-to-peer sharing easier. Also, many of the iOS features seemed like desktop features, and the new Lion features appeared to make it look more like iOS. Let’s take a look.

New Desktop-Like Features in iOS 5

  • Tabbed Browsing: I remember some apologists explaining away the lack of tabbed browsing with the iPad 1. Now Safari has tabs… on its 9.7″ display.
  • Basic Photo Editing: No longer an add-on app like my favorite, Photogene, photo enhancements are available right inside the Photos app. Users can use auto-enhance, remove red eye and even crop photos.
  • Reading List: Previously available on the Mac, the iOS Safari browser now has the Reading List, a place to save articles you wish to read later.
  • Mail Features: Users can now edit email text, add or delete email folders, and even search all of the email text, not just the subject line. All of this in the new Mail.
  • Calendar Features: Like on Lion, users can drag time bars to set meeting time, can view attachments inside the calendar app and even share calendars.
  • Mirroring: Via a cable or wirelessly through an Apple TV 2, see on a monitor or TV exactly what is on the iPad 2 or iPhone 4S.
  • Improved Task Switching: With new “multitasking” gestures, users no longer need to click the home button to return to the home screen or switch between apps. They use a four-finger left-to-right gesture to switch tasks and what I call the “claw” to go to the home screen.

New iOS-Like Features in Lion

  • New Gestures: Every iOS user is familiar with finger scrolling, tap to zoom, pinch to zoom and swipe to navigate. Now this is available on a Lion Mac.
  • Full Screen Apps: By design, every iOS app is full screen. Now Lion has this capability.
  • App Store: A fixture of iOS since 2008, the App Store now ships with Lion.
  • Launchpad: This is Lion’s fancy name for iOS’s Home Screen. A bunch of app icons.
  • Mail Improvements: Yes, even desktop Mail is getting more like iOS. In this case, adding full height message panes.


So What? Why Should We Care?

So what does this mean, if anything? It is too early to tell, but it could signal a few alternative scenarios:

  • Unity of UI? By uniting many of the UI elements across phone, tablet and computer, quite possibly it could make switching between iPhone, iPad and Mac easier. Also, as advanced HCI techniques like voice and air gesture emerge, do input techniques get even closer? Can one metaphor work across three different sized devices?
  • Easier Switch to Mac from Windows? The logic here says, even if you were brought up on a Windows PC, if you can use an iPhone or iPad, you can use a Mac.
  • Modularity? I’ve always believed that a modular approach could work well in certain regions and consumer segments, but only if the OS and apps morphed with it. For example, a tablet with a desktop metaphor makes no more sense than a desktop with a tablet metaphor. What if they could morph based on their state but keep some unifying elements? For instance, my tablet is a tablet when it’s not docked. When docked, it acts more like you would expect with a keyboard and mouse. The two experiences would be unified visually and with gestures so that they didn’t look like two different planets, but two different neighborhoods in the same city.
  • Desktop OS Dead or Changing Dramatically? What is a desktop OS now? If a desktop OS is a slow-booting, energy-consuming, keyboard-and-mouse-only, complex system, then Microsoft is killing it with Windows 8 next year anyway, so no impact.
  • Simplicity Dead? If phone and tablet OSs are becoming more like desktop OSs, is that good for simplicity? Or are desktop operating systems getting more like phone and tablet operating systems? How do you mask the complexity and still be able to do a lot?

Where We Go From Here

We will all get a front-row seat next year to see how users react to one interface on three platforms. Windows 8 will test this, with the Metro UI on phones, tablets and PCs. The only caveat here is the Windows 8 desktop app for traditional desktops, which will serve as a release valve for angst and a bridge to the future. Whatever the future holds, it will be interesting.

Passings: Of Steve Jobs and Dennis Ritchie

The death of Steve Jobs was a major world event, accompanied by an odd but affecting outpouring of grief from people who did not know him but whose lives he had somehow touched. So I was a little saddened, but hardly surprised, when the death of Dennis M. Ritchie attracted hardly any notice outside the world of computer science. Ritchie’s work touched at least as many people as Jobs’s; they just never knew it.

Unlike the showman Jobs, Ritchie toiled quietly in the vineyards of AT&T (later Lucent) Bell Labs where, with Ken Thompson, he revolutionized computing by making software independent of the hardware it ran on. Prior to the 1970s, software was bound tightly to its hardware. An IBM computer ran a proprietary IBM operating system, and programs written in higher-level languages such as Fortran or COBOL were translated into code the computer could run using a proprietary IBM compiler.

Ritchie’s biggest contribution was the C programming language. C was a new sort of language: high-level enough that code could be written relatively quickly and without knowing much about the architecture of the computer it would run on, but with enough low-level control over things such as memory allocation that it could be used to write the operating system itself.

And the first great accomplishment of C was the writing of the UNIX operating system, on which Ritchie collaborated with Thompson. It was the first machine-independent operating system. Originally written for Digital Equipment minicomputers, it was quickly ported to run on a vast variety of hardware. Today, UNIX and its descendants, which include Linux, Mac OS X, and Android, run on everything from telephones and TV set-top boxes to the world’s largest supercomputers.

Though little known by the public at large, Ritchie was extensively honored by his fellow computer scientists. He received the Turing Award from the Association for Computing Machinery, the Hamming Medal from the IEEE, the National Medal of Technology, and the Japan Prize. His memorial sits on the bookshelf of just about anyone serious about programming: a slender white volume he wrote with Brian W. Kernighan, called simply The C Programming Language and known universally as K&R.

 

iTunes and Consumer Share of Wallet

I recently read an interesting article in the Harvard Business Review which proposed a theory that consumers give more share of their wallet (money) to brands they rank highly.

The premise of the article was that companies need to focus more on their brand identity in the minds of consumers if they want to command more share of consumers’ wallets.

I’ve had a similar theory, but it wasn’t related to brand loyalty, although that makes sense; it was more directly tied to a brand’s ability to be sticky.

Granted, I am looking at this as it relates to the technology industry, whereas the HBR article focused more broadly.

From a technology industry perspective, companies with stickier solutions have a higher chance of maintaining or growing their share of consumers’ wallets.

To test my theory, I researched and then plotted my own annual spending in iTunes. I figured I was as good a test as any, since I have used iTunes since its beginning in 2003. And I believe Apple has created one of the stickier ecosystems on the market.

Take a look at the chart below, which we will call exhibit A.

Notice that my annual spending in iTunes either held steady or grew every year. As Apple introduced more products into their ecosystem, in terms of hardware, new forms of media, and then apps, my iTunes spending went up significantly.

Once I was committed to the Apple ecosystem, and as Apple provided me with more value as part of that ecosystem, they continued to get a steady share of my wallet.

There are some essential points to understand as part of this theory. First of all, I may very well spend more than most people in iTunes, but I would still argue that annual iTunes spending would stay steady or grow the longer a consumer is in the Apple ecosystem.

Second, the more products or “touch points” in that ecosystem owned by a consumer, or by a family, the greater the ecosystem loyalty, as well as the overall opportunities to spend money.

Of course brand is important and plays a role, but perhaps not quite as much as the HBR article suggests, or at least not as much in the realm of tech.

For example, if brand were directly tied to share of wallet, then Google, or Microsoft for that matter, would have a larger share of wallet. I use those brands as an example because they are both ranked in the top 10 list of brands, both ahead of Apple, according to Interbrand.

I would argue that, when it comes to share of wallet, brand trust is more important than brand rank in the minds of consumers, especially in tech.

The most important observation from my test of this theory of brand loyalty equaling share of wallet is that the obvious first step is to get consumers into the brand’s ecosystem, so that the brand can compete for share of wallet.

In retail, for example, the common saying is “the first step is to get the consumer in the door.”

Apple got consumers in the door with the iPod, then the iPhone, iPad, etc. This strategy continues as they offer more products at attractive price points, which continue to get consumers into Apple’s door and, more importantly, into Apple’s ecosystem.

Amazon has a similar strategy with the Kindle and now the Kindle Fire. These products, or screens, are the things that get consumers into the door and into the Amazon ecosystem. Amazon wants to provide as many touch points as possible for consumers to utilize their retail services.

Similar to my iTunes spending history, I would be willing to bet that folks who examine their Amazon history find a similar pattern, namely that the longer you are committed to that service, the more your annual spending goes up.

In both my examples, Amazon and Apple have a strong share of consumer wallet. Companies like Google and Facebook, and others who want to drive commerce, are having a harder time, even though they have strong brand rank in the minds of consumers. This is because they lack consumer trust.

Companies who want to own a larger share of wallet need to create compelling products that get consumers in their door. Continuing to create a trusted brand experience with their products, and offering a vast array of products or services, is a sound strategy to keep consumers loyal to their ecosystems.

Nook Color Users Like Apps–And Pay for Them

Owners of the Barnes & Noble Nook Color e-reader/tablet don’t just buy books. They also consume apps, quite a few of them, it seems.

The buying habits of Nook Color owners are a bit surprising, and that could have interesting implications for Amazon.com’s forthcoming Kindle Fire. Both the Nook Color and the Fire are highly modified Android tablets that identify themselves primarily with their retailer sponsors, not Google and Android. And both are connected to their own dedicated app stores, not Google’s Android Market.

“Our customers are used to buying content,” says Claudia Romanini Backus, a tech industry veteran who serves as director of developer relations for Barnes & Noble. That is a contrast to other Android products, whose owners have developed a reputation for having a fierce appetite for apps, as long as they don’t actually have to pay for them.

I had a chat with Romanini at the CTIA Enterprise & Applications show, where B&N was appearing for the first time. Compared to the typical Android customer, the typical Nook Color buyer is far more likely to be female (women buy about 75% of the units) and older. The tablets are bought primarily as book readers, and users begin to download apps when they realize that the Nook can do more.

About 9 of every 10 apps downloaded are paid, with the typical price at $2.99. However, a surprise best-seller is the most expensive item in the catalog, the $14.99 QuickOffice, which allows both viewing and editing of Microsoft Office documents. Other big sellers are games, including the several variants of Angry Birds; apps aimed at children, including iStory Time from DreamWorks; and educational apps. Productivity apps are surprisingly popular, with the free Taptu news reader being a popular download.

“We’re doing something unique and different from mobile,” says Romanini. “It’s not about the apps. It’s an additional way of consuming content. What differentiates us is that we come at it as reading first.”

When Markets Are No Longer Price Sensitive

There will always be a customer who wants only the lowest-cost products. That truth, however, does not represent the whole market. Price, for the majority of consumers, is not the only force driving purchases.

If the lowest-cost product were all consumers wanted in every market, then we would all be driving Toyota Corollas.

The fact of the matter is, in markets where consumers are mature, low cost is attractive only to a segment of the market, not the market as a whole.

Keep in mind, I am making a distinction between mature markets and mature consumers. Mature markets are ones where a category or product is no longer new and is well understood. Mature consumers are ones who have been shopping long enough to have pre-determined needs, wants, and desires for a variety of goods.

Developed markets for the most part have mature consumers. Because of that fact, new product categories will mature faster than in emerging markets. Smart phones, for example, are still largely an immature market; many consumers still do not own one. This market, however, is maturing rapidly because we have mature consumers. Interestingly, they are not just buying the cheapest smart phones on the market.

Emerging markets consist of consumers who are maturing, still developing their needs, wants, and desires for a variety of goods. This is because the big trend in emerging markets is the rising middle class. The rising middle class has not historically had much disposable income prior to their “rising”; therefore, they were not generally consumers of a large variety of goods.

Since they have attained more disposable income, they have begun to consume more goods and are therefore maturing as consumers, learning what their needs, wants, and desires are for a variety of goods.

I believe that in a market where consumers themselves are maturing, price is more important. You need to consume a good for the first time before you refine your tastes and begin to appreciate differentiation. Low-priced products therefore help these consumers by lowering the barrier to consuming said goods.

PCs, smart phones, and tablets are a good example of this in emerging markets. Lower costs will help these consumers first experience these products and learn what they like and don’t like. As they flesh out their needs, wants, and desires for these products, they will begin to shop with a keener eye. When that happens, differentiation, or products designed for a market segment, becomes the strategy rather than low cost.

In a number of books I’ve read on the subject the observation is continually made that when a market matures it fragments. The below slide shows how this happened within the automobiles market.

 
Consumers first owned a car that was of lower cost. As they continued to own more cars, they matured as consumers of automobiles and eventually decided they wanted a minivan, a truck, a sports car, or an economy car. They made this decision based on their needs, wants, and desires and then chose the appropriate product. To re-emphasize my point, this decision was not based on price alone but on needs, wants, and/or desires.

All of this has a profound impact on how consumer technology companies orient themselves going forward. The reality is some markets are price sensitive and some are not. Companies need to be wise to understand which markets to enter and have an appropriate strategy.

The bottom line is that developing a product to fill consumers’ needs, wants, and desires is a better strategy than trying to be the low-cost leader.

“PC Free” in iOS 5 Doesn’t Mean “Free from PCs” (or Macs)

There’s a new feature in iOS 5 called “PC Free”. While the definition is very specific, it conjures up a lot of images, I would guess, specifically getting rid of the PC and Mac. So exactly what parts of the PC and Mac is it removing?

“PC Free” is about removing the PC for a few tasks that are frankly awful parts of the iOS experience and primarily administrative. Here is how it’s described on the iOS 5 landing page:

 

“Independence for all iOS devices. With iOS 5, you no longer need a computer to own an iPad, iPhone, or iPod touch. Activate and set up your device wirelessly, right out of the box. Download free iOS software updates directly on your device. Do more with your apps — like editing your photos or adding new email folders — on your device, without the need for a Mac or PC. And back up and restore your device automatically using iCloud.”

It dangles the promise of getting rid of that nasty, horrible PC or Mac. :-) But can you really dump your Mac or Windows PC?

I asked a few people in my family and at work what they liked doing on their PC that they didn’t do on their tablet. Here’s why they said they couldn’t ditch their PC or Mac (UPDATED):

  • Text chat with someone on Google Chat at the same time as you are looking at Facebook.
  • Quickly create a somewhat complex spreadsheet or presentation. You really need a mouse to do this productively, and iOS doesn’t support mice with Keynote or Numbers.
  • Download files from multiple web sites in the background while you do something in the foreground. There are a few exceptions with some apps, but it certainly cannot be done in the iOS browser.
  • Compress a big file and email it. Zipping or RARing a file, attaching it, then emailing it.
  • Watch 1080p video. The iPad has a “768p” display, for lack of a better term. Yes, a user can watch 1080p on the iPad 2 on an external display like an HDTV.
  • Import HD video into the iPad that wasn’t taken on an iPhone or another iPad. I am not aware of HD source video that’s shot to iOS specs. I’ve had to reconvert gobs of videos on my PC to play on the iPhone or iPad.
  • Store all your pictures. I am talking about the multiple gigabytes of years and years of pictures. Alternatively, you can rent iCloud space.
  • Store your entire music collection beyond the iPad’s storage.
  • Store lots of personal videos.
  • “Perfect” personal video you’ve downloaded or shot with a camcorder that’s shaky, dark, etc.  Things that software like VReveal can do.
  • Face tagging. You’ll need iPhoto, Picasa, or Windows Live Photo Gallery for this.
  • Display different content on one display and different content on another. There are a few exceptions, very few.
  • Any web site that uses Flash for navigation, like my local Mexican restaurant.
  • Print. I know, iOS says it can print. Have you gotten it to print reliably? I didn’t think so. You think people don’t need printers anymore? Tell my teenagers’ science and English teachers that.

OK, so you get the point here. PC Free means you don’t need a PC to do some very basic and fundamental things. If you need to do anything beyond the very basics, you will still need a PC or Mac.
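To make one item on the list above concrete, the compress-and-email task is a couple of lines of standard-library scripting on a PC or Mac but has no built-in iOS 5 equivalent. A minimal Python sketch of the desktop side; the file names are hypothetical examples:

```python
# Minimal sketch of the "compress a big file and email it" desktop task.
# File names here are hypothetical, not from the article.
import zipfile

def compress(src_path: str, zip_path: str) -> None:
    """Write src_path into a new DEFLATE-compressed zip archive at zip_path."""
    with zipfile.ZipFile(zip_path, "w", compression=zipfile.ZIP_DEFLATED) as zf:
        zf.write(src_path)

# e.g. compress("vacation.mov", "vacation.zip"), then attach vacation.zip
```

On a desktop this is trivial (or a right-click away); the point is that iOS 5’s Mail and Safari offer no comparable workflow.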

iCloud is Awesome Yet Incomplete

After its release to developers at Apple’s WWDC, iCloud is available to all consumers today with iOS 5 and an updated iTunes. In many ways, it is incredible that millions will have access to the consumer power of the cloud. It’s very integrated into the experience, but then again, it’s not as complete or comprehensive as the best-in-breed cloud apps and services available today. Will that make a difference in consumer acceptance? Let’s see.


What Makes a Great Cloud Experience?

A few applications define by example what a great cloud app or service can provide. To a consumer, this will change over time and will also depend on their comfort and knowledge. Some services that are ahead of the cloud game are Evernote, Amazon Kindle, and Netflix. What makes these great examples of a consumer cloud offering? While very different in terms of usage, they share similar variables that in aggregate make them awesome:

  • Cross Platform: Windows, OS X, iOS, Android, and the web. Kindle and Netflix are even available on special-purpose devices like the Kindle and Roku. Consumers can buy into the service and not worry about the platform going away.
  • Continuous Computing: Continuous computing means a few different things. On content consumption, the next device picks up exactly where the last device left off. On Netflix, if I am halfway through a movie on my iPad, I can pick up at the same spot on my Roku. When I pick up another Kindle device, it asks me whether I want to go to the latest bookmark.
  • Sync: While a step back from continuous computing, it does assure that the same files are on every system. With Evernote, every change I make is in sync when I open the next device.
  • Continuous Improvement: Monthly and even weekly updates to add features and functionality.
  • Compatibility and Data Integrity: Even with all these updates, the data keeps its integrity. If the service has a question about which version is the master, it asks me. Evernote will tell me that I have a duplicate entry and let me pick the version or content I want.
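The continuous-computing behavior described above, where the next device resumes exactly where the last one stopped, reduces to a small piece of shared cloud state. A hypothetical sketch, not any real Netflix or Kindle API; all names here are my own:

```python
# Hypothetical sketch of "continuous computing": devices report playback
# progress to a shared store, and the next device resumes from the
# furthest point seen. Not a real Netflix/Kindle/iCloud API.
class ResumeStore:
    def __init__(self):
        self._positions = {}  # (user, title) -> furthest seconds watched

    def report(self, user: str, title: str, seconds: int) -> None:
        """Record progress from any device; keep the furthest point seen."""
        key = (user, title)
        self._positions[key] = max(seconds, self._positions.get(key, 0))

    def resume_point(self, user: str, title: str) -> int:
        """Where the next device should pick up (0 if never watched)."""
        return self._positions.get((user, title), 0)
```

The design choice that matters is that the store, not any one device, is the source of truth, which is exactly what Evernote and Kindle get right and what iCloud only partially delivers below.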

iCloud: Cross Platform

As we all know, Apple by design works in its own “walled garden,” but that doesn’t mean it’s completely closed off. You cannot get iCloud-enabled apps like Pages, Numbers, Keynote, or iBooks for Windows or Android. Even worse, you cannot get to your photos and Photo Stream on any mobile device other than iOS. To be fair, users can get access to Photo Stream on a Windows PC, but users should at least be allowed access to their own photos over the web if they want. Users can access iWork-compatible documents on all “modern” browsers by going to iCloud.com and downloading files. Windows users then need to drag and drop the updated file inside the web-based iCloud.com to update the file. – Grade D

iCloud: Sync

iCloud will automatically “sync” photos (Photo Stream), purchased music and TV shows (iTunes), apps, documents (Pages), spreadsheets (Numbers), presentations (Keynote), Reading Lists and bookmarks (Safari), reminders (Reminders), calendars (Calendar), email (Mail), notes (Notes), and contacts (Contacts).

There are some major exceptions. iWork documents will not auto-sync with the Windows “Documents” folder, as I think users would expect. SugarSync and Dropbox will automatically sync documents, and any other file type, with Windows. Also, personal videos and commercial movies do not sync on any iCloud platform, which I don’t fully understand. Maybe it’s a concern about storage on iOS devices, or storage and throughput in the iCloud. – Grade B

iCloud: Continuous Computing

Within iOS phones and tablets, users can start right where they left off for TV shows (Videos), games (Game Center), and book bookmarks (iBooks). These are really awesome capabilities, especially where it’s hard to know where you left off.

iCloud will not save the “state” for playing music (Music), playing movies (Videos), or web pages (Safari). Add the PC and Mac into the continuous computing arena, and the iCloud experience starts to degrade for almost all use cases for a variety of reasons. iOS games don’t run or sync on a Mac or PC, and on Windows platforms iWork isn’t available. Consumers over time will expect continuous computing in every usage model on every platform, the way Evernote does it today. – Grade C

iCloud: Continuous Improvement

I cannot definitively answer this question, as it will emerge over time, but I can extrapolate from what I have seen of previous drops of Apple software. Apple’s app drops, with iOS in particular, have been consistent, frequent, and very solid code. – Grade A

iCloud: Compatibility and Data Integrity

So far so good, even on difficult-to-manage applications like word processing, spreadsheets, and presentations. I made a one-line change to a document, and without going back to “Documents” inside iOS or web-based Pages, the one line changed on every other system. – Grade A

What, Not Straight A’s, and Does It Matter?

Apple has never needed to achieve a 4.0 in everything to be successful. Getting all A’s with its core segment of users and building useful solutions that just work has been the Apple hallmark. The first iPhone proved this, and the iPhone 4S will prove it again: everyone else offers 4G, but Apple doesn’t have to. Good Sync is a reasonable fallback for Continuous Computing, and I believe that as long as Apple allows other services with better cloud capabilities into its walled garden, it won’t be an issue for now. Over time, I believe Apple will fill in the gaps in iCloud; they have fully thought through where they could add the most value, and that’s what they hit first. Your move, Google, Amazon, and Microsoft.

The 30 Percent Solution

 

Blogs are unforgiving. The entire world can see that I expected the highlight of Apple’s iPhone 5 introduction last week to be a kumbaya love fest between Apple and Facebook. Facebook is number one in mobile social media apps. Apple is number one in smartphones and tablets. Yet even after 18 months, there was no official Facebook app for the iPad.

So, other than the fact that there was no iPhone 5, and that no one at the Apple event even mentioned Facebook, let alone invited Mark Zuckerberg up on stage, my blog post was … well, pretty much in English.

But today, Facebook finally announced its Facebook app for iPhone and iPad.

According to the blogosphere, the hangup was caused by “negotiations” over who would be allowed to make money from apps sold by developers through the Facebook platform on the iPad. (You know, the developers who sell apps like Angry Birds, Hipstamatic, FarmVille,  weather, etc.)

Apple’s standard deal is: We take 30 percent, bitch.

Thirty percent on all paid apps, on all in-app purchases, and on subscriptions.

(Google has a variant on this revenue model for its own developer ecosystem: We take 30 percent but we’re not evil, bitch.)

Facebook sees that Apple and Google are rolling in clover, so it tells Apple it wants to create a separate revenue platform for mobile apps called Facebook Credits, and it wants to build it into Facebook apps that run on the iPhone and the iPad, which would bypass the Apple 30 Percent Bitch system and send that money directly to Facebook.  Negotiations must have gone something like this:

Apple: “You know what we think of Facebook? We own the operating system and you don’t, and just to make that point absolutely clear we’re choosing Twitter over Facebook as the social layer of the iOS 5 operating system. We have more than 200 million registered users on the iTunes store and we have their credit card numbers, and we make buying apps completely smooth and painless and frictionless, and Facebook doesn’t.”

Facebook: “Oh yeah? Well, in that case, we’ll develop our app for HP’s new TouchPad. How do you like them apples?”

Apple: [30-day pause] “And how did that work out for you?”

Facebook: “Never mind. But we still have the most popular social platform in the universe, with 750 million users, and the only third-party Facebook apps you’ve got now on the iPhone and iPad basically suck. Even so, more than 250 million people use Facebook on a mobile device today.”


Apple: “True, but don’t forget, we have ‘Ping.’”

So it appears they have reached a compromise: Apple gets 30 percent, bitch, on all Facebook for iPhone and Facebook for iPad apps developed for the iOS operating system. Developers who want to create apps for Facebook on the Web, in HTML5, must use Facebook’s virtual currency, Credits.

Folks, what we have here are the latest salvos of a global war to create a new type of currency for online transactions, one that will rival cash and credit cards. Microsoft wants to be a player. PayPal (a division of eBay) wants to be a player. All of the mobile phone companies want to be players. When I walk down University Avenue in Palo Alto to get a cup of coffee, I pass what seems like 50 online payment startups that want to be players. The market for mobile apps and virtual goods sold through social networks and app stores is huge, and hundreds of millions of people are already opening their virtual wallets.

The Apple-Google-Facebook mobile app revenue models appear to be evolving along the lines of the Mastercard, Visa, and American Express models. Developers who want to sell apps or virtual goods online can choose whom they pay, except that the bites are larger by an order of magnitude. Apple wants 30 percent, Google wants 30 percent, and now Facebook wants 30 percent.
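The platform cut described above is simple arithmetic, but it is worth making explicit just how identical the three models are. A quick sketch of the 30 percent fee; the function name is my own, not any store’s API:

```python
# The 30 percent platform cut, applied to paid apps, in-app purchases,
# and subscriptions alike. Function name is illustrative, not an API.
PLATFORM_CUT = 0.30

def developer_take(gross: float) -> float:
    """Developer revenue after the store's 30 percent share, in dollars."""
    return round(gross * (1 - PLATFORM_CUT), 2)
```

At a typical $2.99 app price, the developer keeps about $2.09 per sale, whichever of the three platforms collects the toll.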

Aside: Will you trust Facebook Credits as your medium for buying things online? Google Wallet? A couple of months ago the marketing and advertising firm Ogilvy & Mather commissioned a survey on which brands consumers would trust with their money. It looked like this:

 


 

Anyway, I’m so glad that Apple and Facebook have decided, at last, to “friend” one another. I wonder how long they’ll stay friends?

I think this is going to be a very interesting battleground. But hey, I’ve been wrong before.

 

Nuance Exec on iPhone 4S, Siri, and the Future of Speech

Though the iPhone 4S appears nearly identical to the current iPhone 4, it is, as my colleague Tim Bajarin points out, a revolutionary device because of its voice-based Siri interface. For the past 20 years, we humans have learned to point and click, but this has never been a natural way to interact with our environment. Touch and speech, on the other hand, have been around since we were living in caves.

Nuance CTO Vlad Sejnoha

“Speech is no longer an add-on,” says Vladimir Sejnoha, chief technical officer of Nuance, probably the world’s leading speech technology company. “It is a fundamental building block when designing the next generation of user interfaces.”

Sejnoha is faithful to the code of omerta that Apple imposes on its vendors. Although Nuance has supplied technology both to Apple and to Siri before its 2010 acquisition by Apple, he declined to discuss Nuance’s role in the iPhone 4S: “We have a great relationship with Apple. We license technology to them for a number of products. I am not able to go into greater detail. But we are very excited by what they have done. It’s a huge validation of the maturity of the speech market.”

But Sejnoha made no effort to hide his enthusiasm for the Siri approach. “It allows you to find functionality or content that is not even visible,” he says. “It provides a new dimension to smartphone interfaces, which have been sophisticated but shrunken-down desktop metaphors.”

It has been a long, hard slog for speech to become a core user interface technology. It took a good thirty years, from the late ’60s to the late ’90s, for speech recognition—the ability to turn spoken words into text—to become practical. “Speech recognition is not completely solved,” says Sejnoha. “We have made great strides over the generations and the environment has changed in our favor. We now have connected systems that can send data through the cloud and update the speech models on devices.”

Recognition alone is a necessary but hardly sufficient tool for building a speech interface. For years, speech input systems have let users do little—sometimes nothing—more than speak menu commands. This made speech very useful in situations where hands-free operation was desirable or necessary, but left speech as a poor second choice where point-and-click or touch controls were available.

The big change embodied by Siri is the marriage of speech recognition with advanced natural language processing. The artificial intelligence, which required both advances in the underlying algorithms and leaps in processing power both on mobile devices and the servers that share the workload, allows software to understand not just words but the intentions behind them. “Set up an appointment with Scott Forstall for 3 pm next Wednesday” requires a program to integrate calendar, contact list, and email apps, create and send an invitation, and come back with an appropriate spoken response.

Sejnoha sees Siri in the iPhone as just a beginning. “Lots of handset OEMs are working on it,” he says. “There is a deep need for differentiation in Android, and Apple will only light a fire under that. Our model is to work closely with customers and build unique systems tailored to their visions.” And while a speech interface can drive search, it can also become an alternative to it: “One consequence of using natural language in the user interface is direct access to information. We can figure out what you are looking for and take you directly there. You don’t always have to go through a traditional search portal. It will change some business models.”

Nor do the opportunities stop at handsets. “Speech is a big theme for in-car apps because that is a hands busy, eyes busy environment,” Sejnoha says. “All the automotive OEMs are working on next-generation connected systems. The industry is undergoing revolutionary change.”

The health care market is another hot spot.  “Natural language is taking center stage in health care,” Sejnoha says. “We are mining data and using the results to populate electronic health records.” Nuance recently signed a deal with IBM to provide technology for a speech front-end to the health care implementation of its Watson question-answering system.

The key to the next breakthroughs in speech technology, Sejnoha says, is making effective use of the vast amount of speech data that now exists, a challenge that has also attracted Nuance competitors Google and Microsoft. “Most algorithms use machine learning and are very data-hungry,” he says. “No one knows yet what to do with tens of thousands of hours of speech data. The race to do that is on. We are doing fundamental research and have a relationship with IBM Research as well. It requires a broad array of techniques to model speech in a robust way, to learn the long tail statistically, and to build techniques that can benefit from large amounts of data. It’s a very exciting time.”

 

 

Why We Witnessed History at the iPhone 4S Launch

While some people were disappointed that Apple did not introduce the iPhone 5, most pretty much missed the significance of the event and the fact that they were witnessing history.

In 1984, when Steve Jobs introduced the Mac, he did something quite historic. He introduced the Mac’s graphical user interface. But he actually topped himself with the introduction of another technology, the mouse. In essence, he introduced the user input device that has been at the heart of personal computing for nearly three decades.

What’s interesting about this is that he did not invent the GUI. That came from Xerox PARC. And he did not invent the mouse; Douglas Engelbart did. But by marrying them to his OS, he reinvented the GUI and OS and gave us a completely new way to deliver the man-machine interface through the mouse. Until that time, all computer input was done by typing text.

Then, in 2007, with the introduction of the iPhone, Jobs and team did it again. He created the touch user interface, this time marrying it to iOS. He did not invent touch computing; that technology had been around for 20 years via pen input and, minimally, in desktop touch UIs such as those used in HP’s TouchSmart desktops. But he integrated it into iOS and gave the world a completely new way to interact with small, handheld computers. With the new touch gestures that are part of its laptop trackpad designs, Apple has even extended it to its core Mac portable computing platform as well. In essence, Jobs’s second UI act was to bring touch UIs to mainstream computing.

Now, with the introduction of Siri, integrated into iOS and a core part of the new iPhone OS, he and the Apple team have given the world what we will look back on and realize is the next major user input technology: voice and speech. As reader Hari Seldon points out, the real breakthrough we will come to realize is in Siri’s “applied artificial intelligence.” It is its speech comprehension that will be its greatest advancement.

Again, he did not invent this technology. But Apple’s genius is in continually trying to make the man-machine interface easier to use, and with each form, be it the mouse, touch, or voice, Apple has been the main company to popularize these new inputs and thus help advance the overall way man communicates with machines.

I have personally witnessed all three of these historical technology introductions. When the Mac was introduced in 1984, I was sitting third row center at the Foothill Community College auditorium. Then in 2007, I was at Moscone West, fourth row center, when Jobs and team introduced the iPhone with its touch UI. And most recently, I was at their campus auditorium, Building 4 of Infinite Loop, fifth row center, when Tim Cook and his team introduced the iPhone 4S and the new Siri voice and speech interface, making this their third major contribution to the advancement of computer input. (I make a habit of remembering exactly where I am when I watch history being made.)

Now here is another interesting point. Although Apple has had this touch UI in place and integrated into iOS since 2007, and into Mac OS X since last year, only now is the Windows world starting to get serious about integrating touch into its phone and computer operating systems. Although Apple will continue to advance its various touch UIs, it can rightfully say: been there, done that.

It is time to take it up a notch, and the next user-input mountain for Apple to scale will be the use of voice and speech as part of its future man-machine interface. It may start with iOS, but like touch, I expect this UI to be on the Mac in short order as well.

Yes folks, those of us at the iPhone 4S launch witnessed history being made. Unfortunately, a lot of people at that event missed it.


Can Smart Radios Save Us from Spectrum Stew?

I’ve been hearing about smart, also known as agile or software-controlled, radios for what seems like 20 years now. The idea is to use software rather than hardware to control transmit and receive frequencies so that a single radio could operate on a broad swath of spectrum instead of a few narrow bands, and perhaps also use software to control multiple radio protocols. Given the proliferation of frequencies and technologies being used for wireless data, it’s an idea whose time should be now.

Sprint’s wireless broadband announcement today added to an already complex picture. Sprint operates its basic CDMA/EV-DO network nationwide at 1900 MHz and offers WiMAX from Clearwire in selected markets at 2500 MHz. Today it announced that it will begin deploying 4G LTE on its 1900 MHz network and add 800 MHz service as it retires the Nextel network that currently uses that band.

Meanwhile, Verizon Wireless runs CDMA/EV-DO at 800 and 1900 MHz and LTE at 700 MHz. AT&T offers GSM/HSPA at 850 and 1900 MHz and is deploying LTE in the 700 MHz band. Just to be different, T-Mobile runs GSM/EDGE at 1900 MHz and HSPA at 1700 and 2100 MHz. In case you lost count, that’s four carriers, seven frequency bands, and four fundamentally different radio technologies.
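The spectrum stew above is easier to take in laid out as data. The figures below are simply the ones listed in the text (bands in MHz); the structure and function names are my own illustration of what a do-everything software radio would have to cover:

```python
# The US carrier landscape described in the text: carrier -> technology
# -> frequency bands in MHz. Figures are as listed in the article.
US_CARRIERS = {
    "Sprint":   {"CDMA/EV-DO": [1900], "WiMAX": [2500], "LTE": [1900, 800]},
    "Verizon":  {"CDMA/EV-DO": [800, 1900], "LTE": [700]},
    "AT&T":     {"GSM/HSPA": [850, 1900], "LTE": [700]},
    "T-Mobile": {"GSM/EDGE": [1900], "HSPA": [1700, 2100]},
}

def bands_required(carriers: dict) -> set:
    """Distinct frequency bands a cover-everything radio would need."""
    return {band for techs in carriers.values()
            for bands in techs.values() for band in bands}
```

Running `bands_required(US_CARRIERS)` yields the seven distinct bands the text counts, which is exactly the coverage problem a software-controlled radio is meant to solve in one chip.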

In most of the rest of the world, things are a lot simpler. Most carriers provide GSM and EDGE at 900 and 1800 MHz and HSPA at 2100. 4G plans, however, are literally all over the place.

I’m not sure it’s possible to build a phone that covers all bases with today’s technology, especially given the pressure for ever-thinner handsets. The iPhone 4S comes close: its Qualcomm dual-mode radio provides CDMA/EV-DO at 800 and 1900 MHz, HSPA at 850, 900, 1900, and 2100 MHz, and GSM/EDGE at 850, 900, 1800, and 1900 MHz. No wonder they left LTE out of this edition.

Unfortunately, smart radios seem to be one of those technologies that always remain a couple of years away from prime time. Given the proliferation of frequencies and technologies, they can’t come too soon.

 

 

Does Google Need a New Strategy with Android?

 
I believe Google is coming to a crossroads with Android. The reality is that we live in a software world. Hardware design is nice, but software is what makes our devices useful.

Steve Wildstrom wrote an article asking whether Android was a mistake for Google. I don’t believe Android is or was a mistake for Google; however, I do believe they need a more hardware-centric strategy.

Several things have happened, and are continuing to happen, around Android that lead me to believe a better strategy can be employed.

The first is that companies, namely Amazon, have taken what Google created as a base OS in Android and fully customized it, stripping all benefit to Google.

Originally, Google encouraged this idea of customizing Android for vendor differentiation. However, things changed as Android became more popular and engineers realized scaling a truly open platform would be difficult.

At the turning point for Android, which I believe was 2.2, or Froyo, Google began to attempt to control Android more tightly, thus making it harder for hardware partners to customize Android and differentiate their products. Google began to promote and encourage a non-customized version of Android to their hardware partners. Their Nexus line of devices is evidence of what Google wants to see happen with Android hardware: namely, that the hardware is good but the software all looks the same.

Those who make Android hardware, whether tablets or smart phones, are longing for Google to help them differentiate their hardware. Because of the many restrictions Google is putting on Android, devices get lost in a sea of sameness.

Because of that, vendors like Amazon, or entire countries like China, have taken the basic Android code and made it their own, completely separate from Google’s version of Android.

This is important because Google created Android as a software front end to their services. When a company takes the basic code but strips it of Google’s services, the custom implementation loses all benefit to Google.

The other market development that could impact Android is vendors seeking their own software solutions. An example is Tizen, which Intel and Samsung are backing. I am skeptical of Tizen; however, the fact that a key Android partner like Samsung is putting resources into a solution other than Android is not a good sign.

So What Should Google Do?
What Google needs to do, and I think they need some serious help to do this, is figure out how they can work with their hardware partners to differentiate their Android solutions while still utilizing Google services.

Now, in the case of Amazon, even if Google had an Android differentiation strategy, I don’t think Amazon would have used it. Amazon is also a services company.

The rest of the market, however, would benefit from an Android strategy that allowed for differentiation but still tightly integrated Google’s services. I don’t believe we will see this kind of solution, where Google allows for heavy customization, in Ice Cream Sandwich. This is a real issue that Google needs to address in coming versions.

I’ve written extensively about product differentiation and will continue to, but what we have right now with Android is a sea of sameness. That needs to change if companies want to stay in business.

This same problem exists for Microsoft but that is for another article.

Recommended Reading:
Dear Industry: Dare to Differentiate

Applesauce

 

The Wall Street Journal: “More fizzle than pop.”

The Los Angeles Times: “An evolution, not a revolution.”

The Washington Post: “It wasn’t exactly blowing my mind.”

FoxNews.com: “Lunch-bag letdown.”

Business Insider: “A huge disappointment, or just a regular sized disappointment?”

Analyst Roger Kay: “Underwhelming.”

 

People, please. There’s nothing wrong with evolution. Without evolution, we would still be apes. (Insert your own snide comment here.) Apple obviously thinks the new iPhone 4S is evolutionary. Otherwise Apple would have given it a new name, like, say, Shebang, or Razzmatazz, or maybe even Five.


But Apple’s new iPhone 4S is the same old iPhone 4 in the same way that a new Tesla Roadster is the same old Lotus Elise. Physically, they’re both sleek and sexy. Under the hood, though, the new model is revolutionary.

Not because of the dual-core processor. Other smartphones already have dual-core chips. Dual-band world phone? Faster upload and download speeds? Fancy camera and high-def video? Others have been there, done that.

No, the iPhone 4S is revolutionary because of Apple’s software, specifically iOS 5, iCloud, and Siri.

Disclaimer: I have not reviewed the iPhone 4S and have no idea if it works as advertised beyond the boundaries of Building 4 on Apple’s Cupertino campus. Apple stresses that the Siri personal assistant software is still in beta mode, even now, a week before the iPhone 4S goes on sale. But if the software does work in the real world, it’s a change as profound as replacing gasoline with electricity.

What is the future of the personal computer interface? Voice and gestures, not keyboards and mice.

Apple patented the capacitive multi-touch interface it introduced with the original iPhone. It included a gyroscope in the iPhone 4, transforming gameplay but also opening the way for new gesture controls. And now, with Siri (and backed by the new A5 and digital signal processors), Apple has added natural language voice control to the computer in your pocket.

Hello, Siri?

Remember the scene in one of the Star Trek movies where a bemused Scotty tries to control a 20th century computer by talking into a mouse? Seriously, does anyone doubt that our grandchildren will operate computers by voice?

Yes, Android phones introduced voice commands a while back. But from the day when Steve Jobs first walked through Xerox’s Palo Alto Research Center (PARC), Apple’s true genius has been to seize nascent technologies and make them so simple and elegant that they catch fire. Did Apple invent the MP3 player? No. Did Apple invent the mobile phone? No. Did it invent the portable game system? Negative. Did it invent the tablet computer? Nope. The music store? Uh-uh.

So, what are the best-selling MP3 players and game players and mobile phones and tablets and music stores in the world today? (SPOILER ALERT: iPod, iPod Touch, iPhone, iPad, iTunes Store.)

Did Apple invent the television? Wait, that’s likely to be the subject of a future column.

In my view, we’ve just seen a revolutionary shift from mobile phones to mobile personal assistants.

What’s on my calendar today? What’s the weather? What’s traffic like? How many calories in this bagel? Remind me to stop to buy coffee on the way home. Read me my mail. Send a message to Ben and Tim telling them I’ll be late to the office. Play this morning’s National Public Radio podcast. What’s the stock market doing now? Call my wife. Let me know when Steve gets to the office. Schedule a lunch with Dave and Kelley for tomorrow. When is Laura’s birthday?

The Siri software “understands” conversational language. It “understands” context. I am unaware of any other voice command system on any other smartphone that reaches this level of competence.

Add this to the intelligent ecosystem of iOS 5 and iCloud – comprising hundreds of new features, all of which make their debuts with the iPhone 4S – and it’s difficult to understand the griping and grousing that followed Apple’s announcement yesterday.

Was it also disappointing that Apple dropped the price of the original 8GB iPhone 4 to $99 (with the usual two-year mobile carrier contract)? Or that it dropped the price of the iPhone 3GS to free? Or that it priced the iPhone 4S at $199 and up? Those were evolutionary changes, too, but if I am Nokia, and my cheapest dumb phone is now the same price as Apple’s cheapest smartphone, my business plan just got sent back to the drawing board. Ditto Google-Motorola, now that the price bar for state-of-the-art smartphones has been set at $199.

The iPhone 4S is still the thinnest and snazziest smartphone in the world. Okay, so it doesn’t have a four-inch screen, and it’s not shaped like a tear-drop. (Darn, I was hoping I’d have to go buy all-new iPhone accessories.) It does not have built-in near-field radio communications, which prevents me from using it to pay my toll when I board the subway in Seoul, since that’s about the only place I’ve seen that accepts NFC payments. Has anyone seen NFC payment terminals here in the States?

And speaking of NFC, is your mobile phone carrier so trustworthy and transparent that you would trust it handling your daily purchases? Would you trust AT&T as your bank?

Which leaves me to conclude that the biggest cause for pundit, analyst and fanboy disappointment with the new phone is that your friends and co-workers won’t be able to tell that you have the new iPhone 4S just by looking at it, obviating its value as a status symbol. Here’s an idea for a cheap upgrade: Paint a big number “5” on your iPhone case, and they’ll never know the difference.

Dear Industry: We Owe Steve Jobs a Standing Ovation

Last night we learned of the passing of Steve Jobs, one of the most visionary and innovative leaders this industry has ever seen, and perhaps will ever see. Because of all that Steve Jobs has meant to this industry, we thought it appropriate to have our second installment in the Dear Industry series take a quick look at how much this industry owes to this master innovator.

It would be hard to imagine what this industry would have looked like had Steve Jobs and Steve Wozniak never dreamed up their vision for personal computing.

Steve was the only tech executive who both had an eye for design and understood technology, and he married the two to create some of the most iconic products we have today.

Steve Jobs understood that innovation isn’t always about pure invention. Whether he invented a particular technology or took what existed and made it better, more useful, and more valuable, he was constantly innovating.

He was the ultimate super user, or super consumer. He had an unmatched, discerning sense of what people wanted from technology before they knew they wanted it. I call this the forward-thinking experience, and Steve Jobs was an expert at it.

With his vision and leadership, Apple never reacted to trends; it set them.

He was the chief visionary, not just of Apple but of the entire technology industry. His products have challenged and inspired others to be better. He put massive pressure on any and all competitors and challenged them to raise the bar.

He helped create this industry and as a result created value with nearly everything he touched. His innovations made new industries, companies, jobs and more possible.

A great many people employed in the technology industry owe their careers to Steve Jobs.

If we had a technology hall of fame, he would be a first-ballot inductee, and he would of course receive a well-deserved standing ovation from the entire technology industry.

Image Credit: Jonathan Mak

My Thoughts on the Passing of Steve Jobs

I have been asked by many in the media for my thoughts on Steve Jobs. I felt that I needed to write them out so that I could be succinct at this time. Please feel free to use them as quotes directly from me.

Steve Jobs will always be remembered as a pioneer and tech icon. While he will always be known for the great products he created, perhaps his greatest contribution was the creation of a new Apple that is one of the most valued companies on the planet.

Many tech executives would be thrilled to have one major hit in their lives. Steve gave us the Apple II, the Mac, the iPod, the iPhone, the iPad and Pixar.

In a sense, Steve Jobs was part Thomas Edison, part Walt Disney, and part P.T. Barnum: a modern technology visionary focused on delivering products that are useful and entertaining, and a masterful showman who knew how to keep people on the edge of their seats wanting more.

His impact on the world of technology and American business cannot be overestimated. His simple vision of creating products that he himself would want, ones that were elegant and easy to use, is what drove him and Apple to spectacular success.

Under his leadership, Apple became one of the most recognized brands in the world. He created products that people around the world line up for.

Over 30 years of covering Steve Jobs as an analyst, I saw him at his highs and lows. But even in his lows he never took his eye off the vision of creating products that were stylish and simple to use.

When he came back to Apple in 1997, I met with him on his second day on the job. At the time, Apple was $1 billion in the red and in serious trouble, so I asked him how he planned to save the company. He said he would go back and meet the needs of Apple’s core customers, and then he said something that puzzled me at the time: he would pay close attention to industrial design when creating products. Not long after that he gave us the candy-colored Macs that broke the mold of what a PC should look like. And as they say, the rest is history. From that point on, all of Apple’s products have borne the imprint of his eye for style and ease of use.

I am confident that Apple can move forward under Tim Cook and his executive team and that Apple will continue to be one of the most important technology companies in the world. Tim and his team fully understand Steve’s vision for Apple and will carry it forward and continue his legacy of creating products that will be elegantly designed, easy to use and that people will want as part of their digital lifestyles.

He was one of a kind and I doubt I will ever see anyone like him in my lifetime again.

Our thoughts and prayers go out to his family and close friends. So many of us in this industry owe much of our careers to Steve Jobs. He will truly be missed.

Why Apple Didn’t Release the iPhone 5

I have been fascinated by the various comments from people, Wall Street analysts included, who were disappointed with the new iPhone 4S. These folks have been having delusional dreams, trying to coax Apple into making each new iPhone conform to their imaginations. When I polled a few of them to see what they expected, they mostly tripped over themselves trying to explain their vision for a new iPhone. Common points were that it should have been thinner and lighter, with a tapered design to make it sleeker and more unique.

I am pretty sure the folks who want this design don’t live in the world of engineering or manufacturing, or even have a working understanding of physics. If you look at the iPhone 4 from an engineering standpoint, it is already packed with chips, batteries, antennas, radios, and more in order to deliver the features and functions it has today.

Now imagine that Apple decided these folks were right and made the phone slimmer, lighter, and tapered at the bottom. That would mean using a smaller battery, reducing total battery life. It would mean smaller or subpar antennas, and chips on a smaller die with less functionality. And Apple might have to change the radios it uses to fit the new design, affecting the quality of wireless voice and data signals.

Now, give consumers a choice: a slimmer, sleeker iPhone with less battery life, less power, and less functionality, versus a similar physical design with a CPU that is 50% faster than the one in the iPhone 4, a graphics chip with 7X the power of the last phone’s, better antennas and radios so that voice and data connections are solid, and the same size battery now tweaked with new software to deliver even more talk time and music listening time. Which iPhone do you think they will choose?

While this is a key reason for Apple to stay with the “if it ain’t broke, don’t fix it” strategy for the iPhone 4S, there is another, even more practical reason for keeping this design.

You may have noticed reports from the channel over the last two weeks that Apple was selling every iPhone 4 it could make, even though people were fully aware that a new iPhone would be coming out this fall. And we know that Apple can’t make them fast enough to meet current market demand.

One thing Apple appears to have concluded is that, after 16 months of making the iPhone 4, it finally has the manufacturing of this phone down and is in fact starting to ramp up even more production lines to meet demand. With this in mind, it made perfect sense to redesign the phone from the inside out while keeping all of the manufacturing tooling and processes in place, so that the new iPhone 4S could also be made in the volume needed to meet market demand.

The manufacturing experts I know tell me that had Apple actually done a radical new design for this phone, they would have had to retool a lot of the production lines and that this would have been very disruptive, in a negative way. What people don’t realize is that this phone is not that easy to manufacture and Apple, in some cases, has to actually invent the manufacturing tools and machines just to make them in the first place.

Now, this does not mean that they could not have a new, or even radically redesigned, iPhone in the future. But ramping up a completely new manufacturing system takes time and is very difficult to do even on an annual basis. So while they are maximizing the current manufacturing lines for the iPhone 4’s physical design, I am certain they are working behind the scenes to create a new form factor that can still deliver this level of functionality, and designing the manufacturing procedures and machinery for it even now. I suspect the next iPhone will be specifically designed to support LTE, a technology that is not yet ready for prime time because of modest US coverage, but that by late next year should be available to about 85% of the US.

I am also certain that once consumers really understand that this is a completely new phone even though it is in the same design package, they will flock to it in huge numbers. And Apple will not skip a beat.

Related:
Consumers Will Be Delighted by the iPhone 4S

But when people want to project their visions and ideas onto Apple and hope that Apple responds, they need to look at the practical side of creating something as sophisticated as the iPhone. In the end, they need to realize that Apple actually does know what it is doing when it comes to designing the best and most powerful smartphone it can, and delivering what customers really want and need in an iPhone, instead of the design pipe dreams of overactive imaginations.

Apple’s iPhone and Intel’s Tick-Tock

Intel has long followed a two-year product cycle it calls tick-tock. In a “tick” year, Intel moves to a new manufacturing process; in a “tock” year it introduces a refined microarchitecture on that existing process, such as this year’s Sandy Bridge processors.

This pattern is driven both by the pace of technology innovation and by the realities of manufacturing. Semiconductor technology evolves fast, but not so fast that major disruptive change is required every year. And a two-year cycle gives Intel the time it needs to perfect fabrication and reap the benefits of its investment in process change.

It looks like Apple is falling into a similar pattern with the iPhone. The 4S, announced Oct. 4, was a tock to last year’s iPhone 4 tick. A similar tick-tock pattern marked the release of the iPhone 3G in 2008 and the 3GS in 2009.

There are still major changes in the 4S hardware, most notably the move to the A5 processor, the new camera system, and the use of a dual-mode GSM/CDMA radio. But the basic design is unchanged, allowing the new models to be slipstreamed smoothly into Apple’s (or Foxconn’s) production process.

A change in the industrial design of a handset may not be as disruptive as a new semiconductor process, but it never happens without difficulty. Apple had problems ramping up production of the iPhone 4, manufacturing difficulties delayed the release of the white version by many months, and then there were the notorious problems with the antenna.

Keeping the basic design the same gives Apple more time to perfect both the design and the manufacturing processes for what will almost certainly be next year’s tick, the iPhone 5, while maintaining smooth, high-volume production of the 4S.

 

Siri Could Be Reason Enough to Buy the iPhone 4S

Folks who found Apple’s iPhone announcement disappointing, and there were plenty of them, weren’t really paying attention. My colleagues Tim Bajarin and Ben Bajarin have outlined the reasons consumers should be excited about the new phone, despite the fact that it looks identical to its predecessor. I’m going to focus on just one of them, the Siri personal assistant.

It’s a huge mistake to regard Siri as a speech recognition component. Speech recognition has become highly developed, but by itself, it doesn’t do very much. Anyone who has used voice control on an Android phone knows it is very good at letting you dictate messages, but not much else.

Siri cracks a much tougher nut. For it to work, the software, which runs partly on the iPhone 4S and partly on Apple’s servers, must understand not just your words but your meaning. If you ask “should I wear a raincoat today?” and Siri responds with a weather forecast, we are looking at a very significant advance in machine intelligence.

At this point, a couple of very important caveats are in order. Siri looked spectacular in Apple marketing chief Phil Schiller’s demo. But it was a demo, and the people who create demos carefully limit their choices to commands and functions they are confident will work. Apple didn’t give attendees at the announcement any hands-on time with the phone, so until users have a chance to try out Siri in the wild, we’ll have to reserve judgment on how good it really is. In a move that seems more Googley than Apple-like, Siri is being released with the iPhone 4S on Oct. 14, but it is officially designated a beta product, perhaps in an effort to temper expectations.

A second question is just how good it has to be for people to find it useful. If it doesn’t truly make the iPhone easier to use, people will abandon it quickly. But if it works anywhere near as well as it did in the demo, I suspect it will revolutionize the way we interact with devices.

While science fiction computers have been able to carry on intelligent conversations for decades, it has taken real-world computers about that long just to learn to recognize words reliably. Speech recognition, which companies such as IBM and AT&T began working on seriously in the 1960s, was based primarily on signal processing and statistical analysis. Natural language understanding seemed hopelessly beyond reach, whether the input was spoken or typed.

Siri was developed by a company of the same name that Apple acquired. The original research was funded by the Defense Advanced Research Projects Agency, but Apple may have thrown more engineering and computer science muscle into the project than even the Pentagon can afford these days. It also had to wait for a dramatic increase in the processing power of mobile devices, one reason that Siri will not be available with iOS 5 on older phones, and for more seamless communications that allow the work to be split between the phone and the server.

As smart as smartphones have become, simple tasks can require annoyingly many steps. Setting up a meeting requires checking a calendar for the proposed time, finding attendees in a contact list, and sending out invitations. If all that can be replaced by pushing a button and saying, “Set up a meeting with Tim Cook for 10 am on Friday,” ease of use will have taken a great leap forward.

One secret to any successful attempt at natural language understanding is restricting the range of commands, known as the domain, that the system must make sense of. If you tell Siri, “Write Mr. Smith a script for simvastatin,” your iPhone will probably stare at you blankly (unless, of course, someone uses the Siri application programming interface to create a prescription-writing program). The range of things you can reasonably ask a smartphone to do is still pretty limited.
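To see why restricting the domain makes the problem tractable, here is a deliberately simple sketch of the idea in Python. It is entirely hypothetical and bears no relation to how Siri actually works (Siri does far more than pattern matching, and splits its work with Apple’s servers); the intent names and patterns are invented for illustration.

```python
import re

# A toy, purely illustrative intent matcher. Each pattern covers a narrow
# family of utterances inside the assistant's domain; anything else is
# simply out of domain and gets no answer.
INTENTS = [
    (re.compile(r"set up a meeting with (?P<person>[\w ]+) for (?P<time>[\w: ]+)", re.I),
     "schedule_meeting"),
    (re.compile(r"what'?s the weather", re.I), "weather_forecast"),
    (re.compile(r"remind me to (?P<task>.+)", re.I), "create_reminder"),
]

def parse(utterance):
    """Return (intent, slots) if the request falls inside the domain, else None."""
    for pattern, intent in INTENTS:
        match = pattern.search(utterance)
        if match:
            return intent, match.groupdict()
    return None  # out of domain: the assistant "stares at you blankly"

print(parse("Set up a meeting with Tim Cook for 10 am on Friday"))
print(parse("Write Mr. Smith a script for simvastatin"))
```

The prescription request returns nothing because no pattern in the domain covers it, which is the whole point: the smaller the set of things a system promises to understand, the better it can understand them.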

The critical question is how much of that repertoire of requests Siri will handle well. If it covers a reasonable fraction, Siri alone will provide ample reason for the iPhone 4S’s success.