Should Apple Create an iWatch?

Last fall, I wrote an article on our site suggesting four industries Apple could disrupt, and one of them was the watch industry. When Steve Jobs introduced the iPod Nano, he mentioned that one of his board members said he was going to find a way to strap it to his wrist and use it as a watch. Shortly after the Nano was released, a cottage industry of Nano watchband makers emerged, and today you can find over 100 colorful watchbands that turn the Nano into a watch.

But in that article I argued that for the Nano to be disruptive, it would need a Bluetooth connection to the iPhone so it could serve as a kind of visual companion, showing who might be calling, iMessages, news alerts and other key bits of information coming through the iPhone, so that I would not have to take the phone out of my pocket unless I needed to act on them.

To date, Apple has not added any of these features to the Nano, but even if it does, it will no longer be the first to take this concept and run with it. Last week there were two announcements of watches that work as companions to smartphones, and both show a lot of promise. One is coming from Sony and the other from a small start-up called Pebble Technology.

Sony’s watch is basically like the Nano but with Android inside, and it works in a similar manner. It uses a color OLED display and connects to Android smartphones as a companion screen. It sells for $149 and is on the market now. But the Pebble smartwatch is the one that is most interesting to me, since it can be used with Android or the iPhone. And while its screen is not a color OLED like Sony’s, its eInk screen gives the product a very long battery life and allows it to be even thinner than Sony’s version. Both connect to smartphones and deliver information on incoming calls from the phone’s caller ID, messages, and other alerts that can be programmed into the watch and tied to a smartphone.

While Sony’s smartwatch is coming from a major company, Pebble Technology’s approach to creating its smartwatch is quite unique. Instead of raising funds from friends and family as many start-ups do, they appealed to the public for pledges that will be turned into actual shipping orders when the watch ships in September. They are not the first to try this approach, but their pitch seems to have struck a nerve, especially with early adopters. They had hoped to raise $100,000 from these pledges, but in their first week they actually got order pledges worth over $2.6 million.

With all of this interest in the Nano being used as a pseudo smartwatch, and with these new entries from Sony and Pebble as well as earlier models from Motorola and others, is it time for Apple to create its own iWatch with iOS on it, tied directly to the iPhone? Given this competitive pressure, you would think the answer should be yes. But if history is our guide, doing something just to counter the competition at this early stage of smartwatch interest is not Apple’s style.

We have solid examples of how Apple actually looks at these market opportunities and eventually responds. For example, Apple did not invent the MP3 player. But once these devices had become established as a product with major potential, Apple brought out the iPod, and today it owns 75% of the MP3 player market. Now consider the iPhone. Apple did not invent the smartphone. But once smartphones took off, Apple brought out the iPhone, which today holds a major position in that market. And Apple did not invent the tablet. Its model with the tablet, as with the iPod and iPhone, is to look at the fundamentals surrounding each of these products and then apply its genius for design, ecosystem, and marketing to any category of device it feels it can make better.

While Apple may not be first in a new product category, its approach of making products better and then using its design and marketing prowess to take very strong positions in these markets is at the heart of the way the company works. Today, smartwatches tied to smartphones are in their very early stages and show promise. But don’t expect Apple to respond in kind just because the competition in this space is heating up. Instead, look for Apple to learn from these early smartwatch trailblazers, and once it believes it can create something very sleek, elegant, and innovative, then, and only then, will it bring out an iWatch.

One side note: watches as a whole have been in decline since cell phones came out. This is especially true of Gen Y users, who rely mostly on their cell phone or smartphone to find out what time it is. But if the watch is tied to their smartphone, that could actually reverse some of this decline, and even Gen Y and those younger than them just might start wearing watches again.

Why I Wouldn’t Invest in Facebook

First let me clarify that due to my line of work I do not invest personal money into any public tech stock. That being said, even if I did, I would not put my own money into Facebook when they go public.

This has been an interesting week for Facebook. The company acquired an extremely popular piece of software for iOS and Android called Instagram for roughly $1 billion. Instagram has an extremely loyal, engaged, and opinionated customer base (judging by its harsh reaction to the acquisition). In fact, fellow Tech.pinions columnist Patrick Moorhead wrote in his Tuesday column about the text-message conversation he had with his daughter after the deal was announced. If you haven’t read it already I encourage you to do so; she very deliberately called Facebook stupid and for old people, which is in fact the title of his column.

Column: Facebook is For Old People

Along those same lines I wrote a column for TIME in December entitled “The Beginning of the End of Facebook?” This column led to a raging debate in the comment section as some folks disagreed with me and others felt that Facebook may not be the king of social networks forever.

Facebook Becomes Routine

My premise was simple. I interviewed approximately 100 high school students in Silicon Valley, all of whom had been on Facebook for at least two years; many had been on for four years or more. In every interview the results were the same: they found themselves using Facebook less and less and were generally using it simply to get quick updates on friends and family. Since that column ran (and if you read the comments), I have gathered over 100 more responses, all indicating the same thing: those who have been on Facebook for a significant period of time see their time spent with the service decline.

Yet comScore tells an opposite story. The year-over-year increase in average monthly minutes spent on Facebook is almost 50%. But comScore is looking at the total number of minutes, not the number of minutes a specific individual averages. I am not sure it is possible to track this, but I would be 99% certain that the average number of minutes per month spent on Facebook actually declines the longer you have been on the service.

In my own experience, and for many of the high schoolers I spoke with, the first year or so on Facebook was the most intense: discovering and keeping up with new or old friends, the lure of sharing socially and showing off all the things you are doing, eating, and so on. All of it becomes very addicting, but after a while that drive goes away. This, I think, is where the heart of Facebook’s problem lies. The service commands a large share of a person’s time for a short while, but then it becomes more of a routine than a passion.

The numbers comScore is observing reflect the fact that much of the globe is still in its first year on Facebook. There is also no doubt in my mind that even those for whom Facebook has become routine go through seasons of more intense usage, like when a friend or loved one goes on a trip or moves to a different location.
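
To see how both observations can be true at once, here is a toy calculation; every figure below is invented purely for illustration and is not comScore data:

```python
# Toy cohort arithmetic (all numbers invented for illustration, not comScore data).
# Each veteran's own Facebook minutes fall, yet the aggregate numbers still rise
# because a large wave of first-year users is in its most intense period.

veterans_y1 = {"users": 100, "minutes": 400}   # everyone is in their intense first year
veterans_y2 = {"users": 100, "minutes": 250}   # same people a year later, usage declining
newcomers_y2 = {"users": 150, "minutes": 520}  # brand-new users in their intense first year

total_y1 = veterans_y1["users"] * veterans_y1["minutes"]
total_y2 = (veterans_y2["users"] * veterans_y2["minutes"]
            + newcomers_y2["users"] * newcomers_y2["minutes"])
avg_y1 = total_y1 / veterans_y1["users"]
avg_y2 = total_y2 / (veterans_y2["users"] + newcomers_y2["users"])

print(f"Total minutes:   year 1 = {total_y1:,}, year 2 = {total_y2:,}")  # 40,000 -> 103,000
print(f"Average minutes: year 1 = {avg_y1:.0f}, year 2 = {avg_y2:.0f}")  # 400 -> 412
# The totals and the overall average both grow, even though every individual
# veteran's usage dropped from 400 to 250 minutes per month.
```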

However, even if my theory (that average time spent declines the longer a consumer has used the service) is correct, it is only one small part of why I wouldn’t put money into Facebook. Ultimately, I don’t believe the averages for time spent on Facebook that we are seeing today are sustainable in the long term.

The Economics of Pleasing Everyone

The problem I see with Facebook’s business in the long term is that it is trying to be all things to all people. In the same vein, it is also trying to develop an advertising and revenue strategy that is all things to all people. Generally speaking, when a service tries to be all things to all people, it ends up not being particularly good for anything.

In 2008 I was on a panel at the OnMedia summit in New York talking about this very thing. I highlighted a few networks of interest at the time. One was Dogster.com, which still exists today and is simply a social network for people passionate about dogs. It links people up by region or by type of dog in order to connect people with a like-minded interest in dogs, or even in certain types of dogs.

Similarly, a few years prior I spent quite a bit of time with the folks at LaLa.com and its founder Bill Nguyen. As you may know, Apple has since acquired LaLa and used its technology to build Ping into iTunes. The premise of LaLa.com was simple in the beginning: link together people who had similar tastes in music and let them help each other discover new music. The results were as expected; for quite a length of time, LaLa.com’s average time on site per customer exceeded two hours per day. The allure in LaLa.com’s case was like-minded people and the discovery of new music. LaLa’s customer base continued to spend significant time on the service until LaLa changed its strategy.

This is the power of vertical social networks, and it is where I believe the best advertising strategies will lie in the future. In the case of Dogster.com, the community is extremely interested in all things related to dogs. If I were the head of marketing at Purina, would it make more sense for me to advertise on Facebook or on Dogster? Similarly, how about social networks for car lovers and aficionados, mothers, fitness fanatics, artist lovers, and so on? The total size of one of these vertical networks may not be nearly as big as Facebook’s, but the audience would be more engaged, more targeted, and more passionate about the interest, and thus, in my opinion, more valuable to advertise to.

We are already seeing these vertical networks crop up, and I assume we will see many more in the future. This reality is at the root of my concern over Facebook’s long-term sustainability as a profitable business. My concern is that these vertical social networks will become more valuable to brands and advertisers and command more of their ad spend than Facebook does.

The bottom line is that if I am a brand looking to advertise to a certain type of consumer, I am going to go where those consumers are in big numbers. My belief is that brands will find more success marketing with vertical social networks oriented around special interests than with a network like Facebook, which is trying to be all things to all people and not doing a good job of it.

iCloud: The Center of the Universe

Over the past year Apple has given us a glimpse of what iCloud can do, but it’s the service’s potential that has me excited. Even with the small changes we’ve seen, there can be little doubt that the service will be at the center of future Apple products.

iCloud took over from MobileMe, doing the mundane but important task of syncing our calendars, contacts, bookmarks and other personal data between devices. This means that all changes will be synced between our Macs, iPhones and iPads, instantly and seamlessly.

It was with the release of a new version of iOS that we first saw how useful iCloud could be beyond a syncing service: the ability to see all of the apps we have already purchased and downloaded in one place and to re-download them. I’ve used this quite often as I’ve switched devices, deleted apps, and wanted to re-install them. Easy and convenient — it’s what Apple is all about.

With the addition of iTunes in the Cloud allowing users to access songs in the cloud from any device, and an Apple TV update that stored purchased movies and TV shows in the cloud, we started to see how iCloud could be used in the home.

The home entertainment system is an area that companies have tried, and failed, to control for years. Like most things these days, it’s about the content. We fully expect devices like TVs to have a sleek, modern design, and look good in our entertainment center, but without the content they are just TVs.

Some TVs come with services like Netflix built-in, giving us easy access to that content. I love the Netflix service and use it all the time, however, it’s limited in a lot of ways. The most important being that it’s not the place where I get most of my digital content — that is iTunes.

In the future, if I’m going to pay for a device or television, I want to know that I have access to all of my content. That means movies, TV shows, music, podcasts, and anything else I’ve purchased. I also want the ability to seamlessly purchase new content and have that available on any other device that I want to consume it.

Apple is the only company in the industry that could provide this at the moment. Clearly we don’t have a television from them yet, but they do have the infrastructure to deliver the content. In that respect, Apple is a decade ahead of its competition.

With iCloud’s ability to deliver content to connected devices, it’s not unreasonable to envision a time when Apple could deliver all of your purchased content, as well as subscription-based content from television networks and other specialty media companies, to any or all of your devices.

iCloud is not just a syncing service — it’s a content delivery mechanism that will play an increasingly important role in future products.

Where in the App Store is Carmen Sandiego?

One of the goals we have in my household is to develop and maintain an inquisitive culture and the desire to learn. Being immersed in the technology industry as I am, I naturally add technology as a part of that process. One of my favorite examples of how we have done this was with an app called iBird Explorer Western.

My family and I live just outside San Jose in an agricultural / rural part of the area and because of that we see quite a wide variety of birds we never encountered in the city. My oldest daughter (age 9) and I both have the app on our iDevices, mine on my iPhone and hers on the iPod Touch. It has been remarkable to see how quickly she can spot a new bird in the wild and quickly use the app to identify the bird and learn interesting facts.

Even more recently she has begun playing a game called Stack the States fairly regularly. This game teaches her facts about US states as well as how to identify them and place them on a map. It does so in a way that makes learning fun, and technology at its best should accomplish that goal when it comes to education.

Because of my desire to integrate technology into the learning process and my kids’ inquisitive nature, I began thinking of games I appreciated as a kid that did the same. The first one that came to mind, for both my wife and me, was Where in the World is Carmen Sandiego?

This game did a great job, in my opinion, of integrating game play with lessons on geography and other facts in a way that was fun and educational. I have been watching for a while, so far to no avail, for this game to arrive in the iTunes App Store. It seems like an ideal game for iOS devices, and I am still surprised it is not there. The company that owns the rights, The Learning Company, also owns the rights to The Oregon Trail, a game that is available for iOS and quite popular.

Game developers are smart to be using legacy franchises to bring games into the touch computing era. As devices like the iPad get integrated more into the learning process at different age levels, these games can provide a solid base to build upon and bring to tablets and more.

Apple’s re-invigoration of the software community is creating new possibilities with game software on computing devices and especially those that are touch based.

Where in the World is Carmen Sandiego is one of many legacy franchises that I hope make it to touch devices. Such software, and the software development community’s focus on creating games that are fun and educational, are positive trends that I would like to see continue.

Why Siri Won’t Go Beyond the iPhone–For Now

Since Apple launched the Siri app on the iPhone 4S last fall, there has been a widespread assumption that Siri’s voice-driven semantic search might soon find its way to other Apple products. At the top of everyone’s list was the still-notional Apple television, bolstered by the belief that Steve Jobs’s deathbed claim to have “cracked” TV was based on the development of a voice interface.

Don’t get too excited. I think Siri will continue to improve on the iPhone and might well migrate to the iPad, but it’s not likely to go anywhere beyond these handheld devices for some time to come. Both the technology and the psychology have to be right, and both are far from ready.

Siri on the iPhone is a big step forward, but it is very far from perfect. Mostly it understands me; sometimes it doesn’t. Sometimes it has a useful answer to a question; sometimes it doesn’t. It’s a lot better than any previous voice and natural language effort, but I still rely on the keyboard or other touch interface elements most of the time. Actually, the iPhone makes a natural Siri development platform for Apple, because even iPhone users are inured to mobile phones that fall well short of perfection: calls drop, voice quality is often awful, messages arrive hours after they were sent. So we’re prepared to put up with a personal assistant who doesn’t always understand us. Apple, with its sharp focus on user experience, will be reluctant to push Siri into territory where customers may be disappointed by the performance.

Our expectations for televisions and cars, the logical targets for voice control, are much higher than for mobile phones. At the same time, making voice control work in those settings is much harder for engineering reasons. Cars are actually the easier challenge. Apple has avoided the automotive market, but others are in the game, and Microsoft is the clear leader, especially through its partnership with Ford.

Natural language understanding is a big computer science challenge for voice systems, but there are also considerable audio engineering issues to solve. Speech recognition requires a high-quality audio signal, and the more free-form the speech, the better the audio has to be. An airline reservation system can understand me over a poor cellphone connection (most of the time) largely because the vocabulary and syntax of airline reservations are very constrained. But a Siri-like system is supposed to understand anything.

Siri on the iPhone works as well as it does because the phone starts with a decent microphone system that is close to the speaker and filters out extraneous noise. Cars are a pretty good environment as well. Voice systems usually are activated by pressing a button on the steering wheel that can also mute the audio system. There are lots of good places to put microphone arrays close to the driver. And while the sounds of driving create a lot of ambient noise, it is of the predictable sort that noise-cancellation systems handle well. I expect to see car systems get a lot better, but I don’t see Apple becoming a player. Apple likes to be top dog, and that would not be the case in a relationship with auto makers, who are quite insistent that car buyers are their customers, not those of third-party vendors. (Microsoft may do the software and Nuance the speech recognition, but Sync is a Ford product through and through.)

The living room is far tougher, but here too Microsoft may well have the edge, this time because of its Kinect sensor technology. Pure voice control of a television is extremely difficult. Unlike in a car, you don’t know where the speaker is going to be, so you need a sophisticated microphone array that can find and focus on a speaker who might be 10 feet away. Such systems exist, but they are mostly still in the lab and, at least initially, are likely to be quite expensive.

You also need the equivalent of a push-to-talk button, or the voice recognition system is going to be saddled with the near-impossible task of hearing anything over the sound of its own audio. Here’s where Kinect might come in very handy. Its ability to recognize gestures and to combine gestures with speech might yield a much better interface, much faster than voice alone. This, plus an enormous research investment in speech and natural language understanding (which admittedly has yet to yield much in the way of products), might give Microsoft a considerable edge in the battle for the living room.

Of course, the big TV challenge for Apple, Microsoft, or anyone else is striking the deals needed with content owners to permit a viewing experience that unifies internet video with cable and broadcast TV. Difficult as the technical issues are, this business challenge may prove tougher to crack.


Facebook is for Old People

“FACEBOOK IS STUPID AND FOR OLD PEOPLE,” my 12-year-old daughter texted me yesterday after Facebook offered to purchase Instagram. If you have teenage or pre-teen girls or boys, this demonstrative behavior isn’t anything new. What I didn’t fully understand at the time is what a firestorm the acquisition would set off in the community. Of deeper and longer-term significance, however, was the spotlight my daughter’s text shined on the newest and most natural trend in social media: verticalization, or specialization, which will reshape social media as we know it today.

As I probed to better understand what my daughter meant by her text and how she felt, she explained that Facebook owning Instagram would ruin its entire purpose. Probing further, she feared that Facebook, because it’s for “old people,” would “change Instagram.” Taking this offline, she explained a few fears. For her, Instagram is a world for her and her friends in her grade, one that was protected from Facebook gawkers and lurkers. Her thinking was that with Facebook owning Instagram, those gawkers and lurkers would invade her and her young friends’ world. Mark Zuckerberg promises a standalone Instagram, but he will, of course, import all the pictures, with their context and metadata, to be monetized like everything else in the Facebook network. My daughter wasn’t alone in her fears.

As Mathew Ingram reported in GigaOM, many other people, including grown adults, were airing their grievances. Many even retweeted my daughter’s text as a sign of protest. As of right now, the text has been viewed on Twitpic over 71,000 times and has been retweeted over 3,200 times. While the protesters probably represent a small but vocal minority, they are certainly a passionate and diverse group. All of this passion highlights a theme I’ve been researching for a few months: the verticalization of social media.

Over an extended period of time, all markets go vertical or specialize, all the way to the point where the market cannot support any more divisions. Sometimes the segmentation is too gray and not demonstrable enough to support the business model. Look at TV channels, cars, toothpaste, and shampoo. They have all segmented beyond belief if you have been alive long enough to see it from the beginning. Cars are a good example. At one point, there were very few types of cars that consumers wanted to buy and that manufacturers offered. Now it seems that every brand has sedans, coupes, mini-vans, station wagons, SUVs, “minis,” sports cars, trucks, hybrids, and so on. TV sports are another good example of specialization. When I grew up, I could only watch sports on one of four network TV stations at very regimented times of the day. Now, on Austin’s Time Warner Cable, I can get access to over 50 different sports channels whenever I want, 24 hours a day. I see the same situation playing out with social media.

Social media is now starting to mature, fragment, and specialize. Facebook, for now, is many people’s “home base,” but as in life, everyone has to leave home sometimes. That’s exactly what people have started doing with sites like these:

  • Pinterest – Lifestyle social interaction around the “beautiful things you find on the web.”
  • GetGlue – Entertainment social interaction around what people are watching on TV, at the movies, playing, reading, or listening to.
  • Foodspotting – Food social interaction between people who like to eat out and show off what they’re eating.
  • Goba – Face-to-face social interaction, bringing people together in the real world.

There are hundreds more services like these that cater to narrower slices of social interaction, but it’s not all rosy in the specialized social media world.

There are gating factors all markets need to overcome in order to specialize. For cars, it was a market large enough to warrant specialization, plus the “sharing” of key parts like engines and chassis. For social media to specialize, it needed a home base, like Facebook, to provide login, authentication, and open APIs for cross-posting content and opinions.

Facebook has enabled the growth of these specialized social media sites. It’s a good thing it did, or Google would have done it, and Facebook might not have had nearly the lead it has today. This doesn’t mean Facebook will leave its door open forever, though. Another potential growth-inhibiting factor is obvious: the number of active users and friends. There have to be at least enough users and friends to warrant going there in the first place. This is killer #1, though clearing it is made easier by Facebook’s APIs. With today’s UIs and interaction models, I believe consumers can really only tolerate one major social media “hub” like Facebook plus one, maybe two, specialized social media sites that are somewhat connected back to the home base. This could change over time thanks to aggregation work like Microsoft’s “People” apps, but for now there are only so many sites we can handle. The final growth inhibitor is linked to the first: if you cannot gain scale, you won’t be large enough to make enough money to stay in business. Most social media experiences that pass gate #1 fail gate #2.

Can we learn anything from a pre-teen girl’s reaction to a $100B company purchasing a tiny company with fewer than 15 employees for $1B? I hope so. I know I did. Consumers are very picky, and if we offer them thin enough social media slices with enough mass to be considered a community, they like it. We only have to look at Instagram’s and Pinterest’s fast-growing bases of active users as evidence that this is only the beginning of the social media specialization revolution. What does this mean for Facebook? Facebook needs to be the best “home base” it can be, integrating and facilitating traffic between smaller, specialized social media services. While Facebook has trumped Google many times in the last few years, it should borrow the YouTube playbook from Google to see how to do a branded integration the right way.

Is there a future for dedicated eReaders?

When Amazon introduced its first Kindle eReader, a lot of articles suggested that this device represented the future of books. Many wrote that thanks to the Kindle, eBooks would go mainstream and become the most popular way people read books. To some degree, there was a lot of logic and truth in this idea. eBooks can be downloaded instantly, and in that sense they are much more convenient than going to the local bookstore to pick up a book or ordering it online and waiting days for it to arrive.

The Kindle also had another thing going for it: an extremely long battery life and a screen you could read in direct sunlight. Not long after the Kindle was released, other eBook readers came out from Kobo, Barnes and Noble, and many more, and publishers started to jump on the eReader bandwagon, releasing thousands of books in eReader formats. Over the last two years, prices have also come down, so that you can get some eBook readers for as low as $79 today.

While eBook readers have had solid sales up to now, the entry of Apple’s iPad and other tablets challenges their very need to exist. Amazon, Barnes and Noble, and Kobo all realize this and have apps for their eBook stores on almost all tablet platforms today. Amazon and Barnes and Noble are also embracing tablets in a big way and, in a sense, are starting to downplay their dedicated eReaders and push customers to their tablet versions instead.

Do Amazon and Barnes and Noble think that demand for dedicated eReaders will completely disappear? Not necessarily. But they do know that something big is going on with tablets and that these devices will soon become the major platform for reading eBooks. In fact, Amazon is leading the way with its Kindle Fire, adding a key ingredient to the mix that has the potential to really shake up the entire tablet market and potentially doom the eReader.

That key ingredient is subsidization. The Kindle Fire sells for $199, but sources tell us that its bill of materials (BOM) is at least $215. Amazon is willing to sell it at this price because it expects a Kindle Fire buyer to purchase perhaps at least 10 ebooks, rent at least 5 movies, and buy various products from the Amazon store through the device, purchases it can amortize against the actual cost of the Kindle Fire so that it actually makes a profit on it.
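
As a rough sketch of that math: the $199 price and roughly $215 BOM come from the paragraph above, while the per-item content margins below are my own assumptions for illustration only.

```python
# Back-of-the-envelope Kindle Fire subsidy math.
# Price and BOM are from the article; the content margins are assumed figures.

price = 199.00
bom = 215.00
hardware_loss = bom - price            # money lost on each unit sold

ebook_margin = 3.00    # assumed gross margin per ebook sold
movie_margin = 1.50    # assumed gross margin per movie rental

content_margin = 10 * ebook_margin + 5 * movie_margin   # the article's example basket
net_per_device = content_margin - hardware_loss

print(f"Hardware loss per unit: ${hardware_loss:.2f}")                       # $16.00
print(f"Content margin on 10 ebooks + 5 rentals: ${content_margin:.2f}")     # $37.50
print(f"Net per device (before other store purchases): ${net_per_device:.2f}")  # $21.50
```

Under these assumed margins, a fairly small basket of content more than covers the hardware loss, which is the whole point of the subsidy model.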

But Amazon does not have a patent on this idea. Indeed, Walmart has all of the pieces needed to do something similar, including an eCommerce store for all of its products and a real interest in renting eMovies, selling eMusic, and more online. And in Walmart’s case, it also has the storefronts to back this up. Walmart could count on its users buying even more products if they owned a Walmart tablet that made it really easy to buy eBooks and eMusic, download eMovies, and shop its online store. It could also use the tablet as an advertising vehicle for special Walmart offers. And maybe Walmart sells its subsidized tablet for $99 or, in some cases, even gives it away with special promotions. Although Walmart has shown no interest in doing this, it is the one major retailer that could mirror Amazon’s model and do something very interesting in the tablet space if it wanted to.

Take Procter & Gamble as another example. P&G has over a hundred products it would like to sell you through its retail partners. What if it had a reasonably priced P&G tablet built and branded for it, and then used the device to drive promotions to users while subsidizing part of the tablet’s cost for its customers? From the user’s standpoint, Web apps handle their broad content and app needs. But P&G now has a captive audience willing to accept its ads in return for paying a very low price for the tablet.

If you add subsidization to the tablet equation, you can see a future where families have four or five tablets scattered around the house at their disposal, some subsidized by various vendors, so that owning more than one is the norm. While the OS may still be important for handling local apps for some, the most used feature will be the Web browser and Web apps, tied to the cloud where most of your personal digital life will reside. And since the Kindle app could be on all of them, your entire library could stay in sync; you just pick up the tablet closest to you at the time and start reading where you left off.

The bottom line is that today’s tablets are great, and thanks to Apple, the role of tablets in our lives is being fleshed out now. But I believe that the tablets of the future will just be screens that use a browser to connect us to everything we need from the cloud, and they will be cheap enough, thanks to subsidization, that each room in our homes might have one; when you need one, you just pick up the one that is closest to you. Although some people may still want to buy a dedicated eReader, it seems to me that subsidized tablets could become so ubiquitous within the home that they become the eReader of choice and the need for dedicated eReaders disappears.

We Have Personal Clouds, Now We Need Family Clouds

Prior to the launch of iCloud last year, I wrote a column looking at ways iCloud might work well for families, not just individuals. I have a houseful of Macs and other iOS devices, and I like to keep them in sync. The problem is they aren’t all mine. Some are my kids’ and some are my wife’s. Some of the digital assets we own are communal and shared, and some are personal. I had hoped that iCloud would address these issues more fully than it currently does, but unfortunately iCloud is designed to be a personal cloud more than a communal one. It is the communal, or family, cloud that I think needs to be addressed.

Synchronization is the foundation of any good personal cloud. If I have a multitude of connected devices that I use regularly, I want them all to stay in sync. The power of this lies in software that contains what we might call a change-and-detect engine: when a change is made on one device, the same change is made across all devices. Take a photo on one device and it is already on the others. Buy a song on one device and it is already on the others. Edit a document on one device and it is already on the others. This kind of solution has been in the marketplace for quite some time, but only recently has it been any good. Personal clouds are evolving nicely, but we need hardware and software makers to start thinking more communally as well.
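
A minimal sketch of that idea, using hypothetical names and a toy in-memory hub standing in for what a real service would do over the network, might look like this:

```python
# Toy "change and detect" sync engine: when one device records a change,
# every other registered device receives it. Hypothetical names; a real
# service would propagate changes over the network and handle conflicts.

class Device:
    def __init__(self, name):
        self.name = name
        self.library = {}          # item_id -> content (photos, songs, documents)

    def apply_change(self, item_id, content):
        self.library[item_id] = content

class SyncHub:
    def __init__(self):
        self.devices = []

    def register(self, device):
        self.devices.append(device)

    def record_change(self, source, item_id, content):
        # The change is made on the source device, then pushed to all others.
        source.apply_change(item_id, content)
        for device in self.devices:
            if device is not source:
                device.apply_change(item_id, content)

hub = SyncHub()
iphone, ipad, mac = Device("iPhone"), Device("iPad"), Device("Mac")
for d in (iphone, ipad, mac):
    hub.register(d)

hub.record_change(iphone, "IMG_0042", "photo taken on iPhone")
print(mac.library["IMG_0042"])     # already on the Mac: "photo taken on iPhone"
```

In a real system the hub would also timestamp changes and resolve conflicts, but the core notion is the same: the change is detected once and propagated everywhere.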

Communal Clouds

One thing that needs to be pointed out about personal clouds is that they only matter when you have more than one connected device that you use on a regular basis. If I used only one personal computing product, I wouldn’t really need to keep it in sync with other devices. But once you have a desktop or notebook, a smartphone, and/or a tablet, cloud data synchronization becomes important. This is also true of communal clouds.

When only one member of the family has multiple computing products, the notion of the personal cloud works. But once several members of a family start getting connected devices, the problem grows. Add the reality that not all family members share the same roof and you can see how a communal cloud could be of value.

There is certain data that is communal and of value to a larger group and there is certain data that is valuable to just the person. A solution in the market needs to exist that makes communal data sync as easy as personal data sync.

Apple has built iCloud mostly with the personal cloud in mind. There are, of course, ways to sync libraries of photos or other digital data, but they are mostly manual processes. iTunes library sync works to some degree, and Home Sharing is a good start, but what about photos, for example? Photos are perhaps the most communal content in any family ecosystem, and currently keeping photo libraries in sync across a number of devices and iCloud accounts in the family is a pain. My wife constantly complains that none of our photos are ever on her computer because I download them all to mine. My iCloud account helps me to a degree, but she has her own iCloud account, and the two act and sync independently of each other.

Other areas where shared sync could be of use are things like a family calendar, chores or to-do lists, and family documents or spreadsheets that can be worked on collaboratively, just to name a few.

Interestingly, this is a concept Microsoft has actually marketed to a small degree. There was a line in a Windows commercial I saw earlier this year that said, “It’s good to be a family again.” In the commercial, the father was using a Word document on his Windows Phone as a shopping list. As he was shopping, new items kept appearing on the list, things like candy and other junk food. He quickly realized what was going on, and the commercial ended showing his kids adding to his shopping list from their Windows PC at home. Changes they made to the document appeared instantly, in real time, on his phone. This idea of how a family uses the cloud in a more holistic way is one that I think needs further development in this new era of computing.

This extends outside the home as well. It would be great if new photos I took were not just synced across my and my wife’s iCloud accounts but also with my parents, her parents, and her grandparents. I am constantly putting photos on thumb drives and moving them around, or uploading chunks of them to the cloud or to DropBox, to get them from one place to another. There are solutions in the market, but I want the manual processes removed and key communal data to simply stay in sync with those for whom it is relevant.

The bottom line is that personal clouds are great, but if they only work for me personally, then they are useless at a communal level. People don’t use technology in a vacuum, and we need hardware and software manufacturers to solve problems not only for the personal computing ecosystem but for the family computing ecosystem as well.

Why Tech Should Care About the Fate of Obamacare

You probably haven’t heard of Section 10330 of the Patient Protection and Affordable Care Act. It takes up just one of the law’s 2,700 pages, buried deep in a list of miscellaneous provisions.

But Section 10330, one of the many provisions far removed from controversies over individual mandates that could be lost if the Supreme Court rules the entire law unconstitutional, is important to the future of both health care and effective use of information technology by government. The substance of the section is so brief that it is worth reading in its entirety:

 

SEC. 10330. MODERNIZING COMPUTER AND DATA SYSTEMS OF THE CENTERS FOR MEDICARE & MEDICAID SERVICES TO SUPPORT IMPROVEMENTS IN CARE DELIVERY.

(a) IN GENERAL.—The Secretary of Health and Human Services (in this section referred to as the ‘‘Secretary’’) shall develop a plan (and detailed budget for the resources needed to implement such plan) to modernize the computer and data systems of the Centers for Medicare & Medicaid Services (in this section referred to as ‘‘CMS’’).

(b) CONSIDERATIONS.—In developing the plan, the Secretary shall consider how such modernized computer system could—

(1) in accordance with the regulations promulgated under section 264(c) of the Health Insurance Portability and Accountability Act of 1996, make available data in a reliable and timely manner to providers of services and suppliers to support their efforts to better manage and coordinate care furnished to beneficiaries of CMS programs; and

(2) support consistent evaluations of payment and delivery system reforms under CMS programs.

The problem is simple. The Centers for Medicare & Medicaid Services, like many government agencies, is sitting on a treasure trove of data. Applying modern data analytics to this “big data” information could yield tremendous benefits to the public, from detecting fraud in medical payments to understanding regional disparities in pricing of services to–most important of all–learning which treatments provide the best outcomes for patients.

CMS, which administers Medicare, Medicaid, and the State Children’s Health Insurance Program (SCHIP), would be an analytics paradise if only it could figure out how to use its data. But according to the white paper “Modernizing CMS Computer and Data Systems to Support Improvements in Care Delivery,” Medicare payment information is stored in “at least 25 different databases used for different program purposes.” These systems mostly have no way to communicate with each other. States maintain their Medicaid data in 50 separate state databases, generally without even common data definitions.

Similar problems plague information systems throughout the federal government. The big difference is that in health care, Congress has at least tried to do something about it. The Health Information Technology for Economic and Clinical Health (HITECH–they obviously come up with the acronyms first) Act, part of the 2009 stimulus bill, created incentives for a switch to electronic medical records and started the process of CMS tech modernization. But the Affordable Care Act takes on the heavy lifting, authorizing a top-to-bottom overhaul of a technology infrastructure that, among other things, still depends on mainframes communicating over IBM’s 1970s-vintage Systems Network Architecture.

It’s tempting to argue that the overdue and uncontroversial IT modernization mandated by Section 10330 would survive even if the Affordable Care Act as a whole is struck down. It’s also almost certainly wrong. In today’s Washington, nothing is uncontroversial and almost nothing gets done. More likely, any attempt to sweep up and resurrect the many technical provisions of the ACA would end up being held hostage to the broader health care, economic, and political agenda of one side or the other.

I have no illusions that I can influence the Supreme Court, especially at this late date. But I would hope that in the general interest of progress, if the justices decide that the individual mandate is unconstitutional they would at least let the rest of the law go forward.


Google Created the Mess and Now Must Fix Android Tablets

Android for phones has by any measure been a success, while Android for "premium" tablets has by every measure been a disaster. According to IDC, the iPad held 55% market share of all tablets in Q4 2011. When you remove lower-end tablets like the Fire and the Nook and count only "premium" tablets at $399 and up, Android has at best approximately 13% market share, leaving Apple with 87%. That includes sales of some very nice Android tablets from Samsung and ASUS. This is beginning to look like the iPod market, where Apple is squeezing every ounce of life out of the premium competition. So who is to blame for the fiasco, and who needs to fix it? The responsibility lies squarely on the back of Google, which in turn needs to fix the problem.

I was very excited about Android from the first day I learned about it in 2005. The market needed another strong choice in client operating systems to ensure the highest growth, as Linux just wasn’t making headway. I bought the T-Mobile G1 Android phone in October 2008, the Google Nexus One in January 2010, and many more Android phones after that, including the HTC EVO 4G and the Motorola Atrix. The phone apps were there, and more importantly, the popular ones were there. While the experience wasn’t as fluid as the iPhone’s, I and many others appreciated the openness, the notifications, and the live screens. But while the market was very excited about Android phones, it was a completely different story for tablets.
 
The first looks at Android for tablets, aka "Honeycomb," were amazing. Honeycomb, on paper and in demos, did almost everything better than the iPad. The interface was incredible, three-dimensional and “Tron”-like. Multitasking, notifications, Flash video support, SD storage, and live screens all looked great. The Motorola XOOM won many awards at CES 2011, including CES’s "Best of Show" award. The anticipation mounted and the ecosystem was excited... until it actually shipped.
 
As I explored here, the XOOM was slow, buggy, short on apps, without Flash, without SD card support, and sold at a $300 premium to the iPad at $799. New models and lower prices, starting at $379, were introduced seven months later. Needless to say, it was a complete disaster. This was followed by Samsung’s Galaxy Tab 10.1 in June 2011, starting at $499. That tablet suffered a similar fate to the XOOM’s, though less pronounced because it moved more quickly to Android 3.2. The best premium Android tablet out there was, and still is, the ASUS Transformer Prime with its optional keyboard, but it too struggled because of Google’s operating system. Google then released Android 4.0, aka "Ice Cream Sandwich," which didn’t add meaningful features for tablets but instead aligned the application development environment across phone, tablet, and TV. Android 4.0 tablets missed the holiday selling season and sold very little compared to the iPad.
 
In summary, these are the characteristics of what Google allowed to be introduced into the premium Android tablet marketplace:
  • buggy, with crashes
  • a slow interface
  • few tablet-optimized applications
  • few services at launch for music, books, and movies
  • unfinished features
  • price points at or above market leader Apple’s, with a lesser experience
  • missed key consumer retail time frames
So why do I place this primarily on the shoulders of Google and not the brands, retailers, or component suppliers? It’s about leadership. If Google had fully understood what it was walking into, it would have:
  • waited to release Android 3.0 until it was feature complete
  • waited to release Android 3.0 until there were at least 100 optimized, popular applications
  • waited to release Android 3.0 until it had full support for movie, music, and book services
  • waited to release Android 3.0 until the application compatibility issues that resulted in crashes had been resolved
  • instituted tighter marketing management of hero SKUs to ensure their experience was flawless

The result of Google allowing Android tablets out the door before the software was fully baked is that the operating system is now viewed by most as a liability rather than an asset. Every major tablet maker I’ve talked to loses money on premium Android tablets in a big way. Any brand associated with Android tablets has been marked as well: Motorola and Samsung both had premier brands, but I believe both have been sullied by their association with Android for tablets.

Google’s reaction to all of this was to buy a hardware company (Motorola) rather than work even more closely with partners like ASUS and Samsung. Additionally, it’s rumored that Google will introduce its own Google-branded tablet, which will alienate its partners all that much more. Does the Google brand lend cachet to the equation? Absolutely not.

All of these issues and all of this confusion benefit Microsoft right now. What was previously considered a free ride from Google, with its "free" operating system, has now driven OEMs straight into the arms of Microsoft and Windows 8 for tablets. What a turn of events over the last 18 months. And the pandemonium isn’t over yet. With more information undoubtedly coming at this year’s Google I/O, Google is planning Android 5.0, which I am sure will be positioned as the savior of Android for tablets.
 
The problem is that there’s no savior in sight for Android on premium tablets. We all know Android sells at $199 without much, if any, hardware profit, but how about at $499, where the entire ecosystem can make money? Google needs to seriously reconsider everything it is doing with Android for tablets, starting now, because nothing else is working. The new plan needs to fully account for the needs of the silicon partners, ODMs, OEMs, channel partners, and application developers, and most importantly, the end user. It needs an entirely new name, too, because the Android name has been thoroughly destroyed in the high-end tablet market.
 
It’s time to stop treating Android for tablets like a hobby and start treating it more like a business.

How RIM Changed My Life

During the summer of 1997, I was contacted by the folks at RIM and asked if I would like to be a beta tester of their first Blackberry pager/email device. Up to then, pagers were the darlings of the mobile world, but their key function was to send the user a phone number, and the user would then have to find a phone to call back. Although cellular phones were already on the market, they were still pretty pricey back then, and most people who got paged had to find a landline to make any call backs.

However, there was one technology that had already gained a major foothold in business by then, and that was email. But the only way people could get their email was to go to their desktop or laptop and log on to see what messages they had. And while consumers were also discovering email via AOL, CompuServe, MCI, and a few other consumer services, email had started to become the lingua franca of business. In fact, as an analyst working strictly within the tech industry, all of our clients were heavy email users, and by then we were doing most of our communication via email instead of the plain old telephone system (POTS).

However, for a lot of business professionals, email was a two-edged sword. While it was a very productive tool when people were at their desktops or laptops and plugged into the corporate network, or connected to their email via dial-up services, it was worthless to anyone who was away from the office or home. And even if they had their laptops with them on the road, back then we could not be sure of getting a connection to our email from our hotel phones, given the sorry state of hotel phone systems at the time. In fact, I carried an acoustic coupler with alligator clips, and in more than one hotel around the world I would have to take the cap off the telephone’s wall connection and use the clips to tie into the hotel’s phone system to get a dial tone, since most hotel phones did not have an RJ11 jack in those days.

So when RIM showed me their first Blackberry and told me that I could have a wireless connection to my email, I jumped at the chance to be a very early tester of their service. From that day on, my personal world of communications took a major leap forward, and to be very honest, my life changed significantly. No longer was I tied to my desktop or laptop to get or respond to email, and that was a very liberating experience. More important to me was the fact that email had become the lifeline to my clients, and it now meant that I could get their messages anytime and respond in real time very quickly. And from an economic standpoint, this one thing helped my business grow, as I became known for my personalized service and for being extremely responsive to clients’ needs.

Opening up the world to mobile email was what put RIM on the map, and their forethought and innovative thinking has had a dramatic impact on our business world. They pioneered wireless email for broad commercial use, and it literally became one of the most important tools any business professional had in their bag of tricks. And because their back-end servers were so secure, the Blackberry became the standard wireless mobile email device for most government agencies and for those in the financial markets, and as a result, the fortunes of RIM skyrocketed. Apple and all of the smartphone players today should be very grateful to RIM for this major contribution, for blazing the trail for what are now smartphones and the many advanced services that have their roots in things RIM did with the original Blackberry.

But those roots should have been their guide in trying to drive the company and the Blackberry devices forward. While they owned the corporate market, their decision to try to make the Blackberry all things to all people is what has really put them in such a difficult position today. That, and not keeping up with the technology consumers really wanted in a smartphone. By branching out and trying to bring the same features to consumers in the basic form factor that business users loved, they missed the major move to touch-based smartphones. Now they are playing catch-up with more consumer-focused vendors like Apple and Samsung, who understand the consumer mentality and designed their products with it as their primary goal.

Even worse for RIM is the fact that while expanding their consumer range of products and putting so much emphasis on marketing to this segment, they took their eyes off the corporate market they owned. As a result, thanks to the major bring-your-own-device (BYOD) programs being implemented by their corporate customers, consumer-centric smartphones like Apple’s iPhone and Samsung’s Android smartphones have encroached dramatically on their business territory, and in the end RIM is now losing in both market segments.

RIM’s new CEO pretty much admitted this mistake when he announced last week that the company would shift its focus from the consumer market and concentrate on its corporate business. Of course, this is the right thing to do, but it should have been done years ago, and at this point I am not really sure it will bring the company back to health. It is a real shame to sit and watch such an important company decline and struggle to even stay afloat given the competitive landscape today.

But for me, RIM will always be one of the most important companies in my own personal technology history. For over a decade, my Blackberry and I were attached at the hip, so to speak, and it was my lifeline to my family, friends, and clients. And I did not give it up easily. It took a radical new design and approach to make me give up my Blackberry; had the iPhone not come along and completely revolutionized the smartphone market, it would probably still be my sidekick today.

I am not sure what will happen to RIM in the long run, but for many of us techies, RIM will always represent innovation and foresight and the one that introduced us to a new age of mobile and wireless technology. And for that, everyone working in the mobile and wireless world owes them a great deal of gratitude.

Apple Turns Technology Into Art

As I was reflecting on my first experience with the new iPad and its retina display, I was intrigued by a thought. There has always been something about the iPhone’s retina display, and now the iPad’s display, that has me mesmerized. When I first saw the new iPad and its screen at Apple’s event I couldn’t stop looking at it. Even today I sometimes just turn it on to look at it and shake my head in disbelief.

The thought that intrigued me is how the visual appeal of Apple’s devices, and in this case of the screen, causes us to be so emotionally attached to them. Even this NY Times article from September of last year points out that consumers do actually love their iPhones. I believe this effect, however, has everything to do with the visually appealing experience of Apple products.

In a TIME column I wrote last year, I pointed out that Apple’s desire to create products at the intersection of liberal arts and technology drives them to create technology products that are, in essence, art. Apple turns technology into art we can use. Apple exhibits an unparalleled focus within the technology industry on designing some of the most visually appealing hardware in all of computing. This focus on creating objects of desire is one of the many parts that make up the Apple experience. That visual and emotional experience tied to Apple products creates as much passion around a brand as I have ever seen.

The Most Passionate Community

I would challenge you to find a more passionate community anywhere in computing. I have attended many industry conferences and trade shows, and the Macworlds where Steve Jobs spoke had a level of energy that I have yet to encounter anywhere else in this industry.

The experience around Apple products is something I think many of Apple’s competitors take for granted and simply don’t understand. I’ve often said in industry talks that consumers don’t buy products, they buy experiences, and that is what Apple delivers.

Consumers are discovering what the hard-core, long-time Apple community has known since the beginning and are converting in droves, buying iPads, iPhones, and even Macs. It all starts with the visual experience and beautiful, attractive hardware. Believe it or not, beautifully designed things are also easier to use.

What is Beautiful is Usable

In 2000, an Israeli scientist named Noam Tractinsky wrote a paper titled “What is Beautiful is Usable.” He started with a theory and built the scientific evidence to back it up. To quote the report on the subject:

two Japanese researchers, Masaaki Kurosu and Kaori Kashimura, claimed just that. They developed two forms of automated teller machines, the ATM machines that allow us to get money and do simple banking tasks any time of the day or night. Both forms were identical in function, the number of buttons, and how they worked, but one had the buttons and screens arranged attractively, the other unattractively. Surprise! The Japanese found that the attractive ones were easier to use.

Noam himself then wanted to test this theory with Israeli culture, so he duplicated the experiment. He thought that aesthetic preferences might be culturally dependent; his observation was that Israeli culture is more action oriented, caring less about beauty and more about function. However, when he duplicated the experiment with an Israeli group, the conclusion was the same. In fact, the sentiment was even stronger with the Israeli sample, so much so that in his report he remarked that beauty and function “were not expected to correlate.” He was so surprised that he put the phrase “were not expected” in italics.

It appears that Apple has been on to something from the beginning. Perhaps Steve Jobs' absolute resolve to make technology products beautiful carried with it inherent user experience paradigms that simply made products easier to use, and that theme continues today throughout Apple. This, in my opinion, is truly what is setting Apple apart in the marketplace. They create objects of desire, and out of that focus comes a visually appealing, easy-to-use experience that drives emotional responses in consumers of their products.

We know humans are visual beings, especially men, and interestingly enough a great deal of science exists today linking beautiful things to ease of use. There are companies that can design objects of desire and easy-to-use products, and there are those that can't. Apple's advantage in this area is that they create both the hardware and the software with this technology-as-art philosophy. We see this in their hardware and their software, and we will eventually see it more in their services.

Noam Tractinsky is right, and his paper's title highlights a profound truth: what is beautiful is usable, and this truth carries over into computing and human interaction with computing.

Right now there is only one company that I think truly understands this.

References:
– Don Norman, Why We Love (or Hate) everyday things, Feb 4th 2003
– Tractinsky, N., Adi, S.-K., & Ikar, D. (2000). What is Beautiful is Usable. Interacting with Computers, 13 (2), 127-145.
– Tractinsky, N. (1997). Aesthetics and Apparent Usability: Empirically Assessing Cultural and Methodological Issues. CHI 97 Electronic Publications: Papers

Why Best Buy Is Struggling: A Personal Tale

Best Buy, which has been struggling of late, announced today that it was closing 50 big box stores as part of a restructuring. Some commentators viewed this as a sign that the big box retail model has outlived its usefulness, but I think a lot of Best Buy’s problems result from the chain’s flawed approach to retailing, especially customer service. This is just one personal anecdote, but I think it’s telling.

Last week, our aged dishwasher began leaking, so we went looking for a replacement. I find it hard to get excited about dishwashers, but I did a little bit of research on the web before we headed for our local Best Buy, with a newspaper flyer promising free installation in hand. As we looked at dishwashers in the store, an “associate” asked if he could help. My wife explained what we were looking for and asked about the free installation. He informed us it didn’t start until the next day. At that point, he seemed to lose interest and wandered off.

We were in the store, ready to buy and a salesperson with a little incentive and a little authority would have found a way to close the deal. This associate apparently had neither, so we were out of there. Best Buy also announced today that it would begin basing sales staff’s compensation on customer service. That doesn’t seem to be part of the current equation.

We headed to our second choice, H.H. Gregg, which recently moved into a former Circuit City store about a mile down the pike. The salesperson there also told us the deal we had seen advertised didn’t start for another day, but he led us to an alternative available right away. He also noticed that the counter opening I had measured was a little shorter than normal (we had a ceramic tile floor installed after the old dishwasher was in place) and found us a model–actually less expensive than our original choice–that would fit more easily. Needless to say, he made the sale.

Admittedly, this is only one data point, but the indifferent attitude we encountered at Best Buy is a familiar one. About the only time I have seen Best Buy staff really engaged with customers is when they are trying to push overpriced and generally unnecessary service contracts. My friend Harry McCracken tweeted that what Best Buy really needs is Ron Johnson, the Apple Store chief who recently departed to run J.C. Penney, or someone like him.

We can’t expect Best Buy stores to turn magically into Apple Stores, which consistently offer the world’s greatest retail experience. But a recent piece in The New Yorker found evidence that retailers that pay their workers more than the competition tend to be more profitable, presumably because they can attract better employees. That’s something Best Buy ought to think about.

An iPad Firestorm About Nothing

Apple’s newest iPad hit the market three weeks ago and already there have been a number of controversies surrounding the device. As expected, all of the issues fizzled out because there was really nothing there in the first place.

The first issue, brought up by Consumer Reports, was that the iPad ran much hotter than its predecessor. This, coupled with the organization’s appearance on CNBC saying the iPad is “hot enough to be uncomfortable at least,” sent the media scrambling for their computer keyboards to write a story.

The interesting thing about Consumer Reports that very few people picked up on is that the organization contradicted itself. In a blog post on its own Web site, it said the iPad “felt very warm but not especially uncomfortable.”

So, which is it? Is it hot and uncomfortable or just warm?

From a news cycle standpoint, it doesn’t really matter. Consumer Reports got its moment of glory, and every blog and news story written for the next 24 hours quoted them.

In what turned out to be a reality check for many, heat tests conducted by a number of media organizations revealed that the iPad heat problem could not be replicated.

“Though the new iPad did run hotter than the iPad 2, the difference wasn’t great,” wrote PC World’s Melissa J. Perenson. “And in repeated lab tests of the new iPad, we could not replicate the disturbingly high temperatures that some sources have reported. More important, the new iPad was not dramatically warmer than either the Asus Eee Pad Transformer Prime or the Samsung Galaxy Tab 10.1 LTE.”

There goes one controversy.

The next iPad issue people latched onto was the battery. The battery supposedly showed a false reading when charging and it took significantly longer to charge than the iPad 2.

According to AllThingsD, which spoke with Apple about this issue, all iOS devices show 100 percent when the battery is completely charged. The device will then discharge a bit and charge itself back up until it is unplugged.

The issue of taking significantly longer to charge is simply because Apple put a larger battery in the iPad. The company needed to do that to ensure similar battery life to the iPad 2, while still adding new features like improved graphics to the new iPad.

Another controversy gone.

The last major firestorm for the iPad came when people noticed they were running out of data on their plans very quickly.

The Wall Street Journal noted that a user who watched hours of video used up all of his data. USA Today’s Ed Baig wrote that he used up his entire 2GB data plan downloading apps on the iPad.

This is not an iPad problem, it’s a user problem. If you have a 2GB data plan and you download 2GB of data, whether that’s watching video or downloading apps, you will have no data left. Simple math.

Third controversy gone.

Each Apple product launch is similar in a number of ways. One of the most disturbing is that people look for ways to knock Apple and its products down. More often than not, the so-called problems turn out to be untrue, but in many ways the damage has already been done.

All the average consumer hears is that the iPad has heat problems, the battery is messed up, and you can’t use the data connection because it uses too much.

Luckily consumers are educating themselves more each day about the products they buy, and it shows in the numbers. Apple sold 3 million iPads in the first weekend alone, making it the most successful iPad launch yet.

As consumers and journalists, it’s important to make sure all companies produce the best products they can, but making up controversies is not the way to do it.

Android is Losing Momentum

I wrote a column earlier this year titled “2012: The Year Google Fixes Android or Loses the War.” In that column I laid out a number of issues facing Android as well as the business reasons why many of those problems exist. When we think about Android we need to remember that Google is an advertising company, and that is how they think. With that in mind, Google’s platform decisions will be made with that agenda. This point needs to be clear: Google is an advertising company, Apple is an experience company.

ZDNet writer Jason Perlow also recently wrote an interesting article on why he is “sick to death of Android.” In fact, if you survey media sentiment toward Android over the past six months, you will see that much of the excitement is gone and has turned to frustration. With these observations in mind, it comes as no surprise that recent Nielsen data gives evidence of Android’s momentum slowdown and what I believe will be an inevitable market share decline.

Over the past six months iOS has closed the gap in smart phone platform share. Look at this data from Nielsen on smart phone acquirers for the October 2011-December 2011 time frame. Then look at the data Nielsen released this morning, and what you will see is that iOS closed the gap on Android platform share with recent purchasers over the past six months. In fact, if you look back over the past nine months you will see the momentum change. Since I was interested in this data, I created a graph here using Nielsen's data on smart phone buyers over the past nine months.

Prior to June of 2011 Android was on an upswing; as you can see, in the months since, momentum with recent smart phone purchasers has shifted. I anticipate that this trend of Android's decline and momentum loss will continue unless Google shows me something to convince me otherwise. If developer interest, OEM support, market interest, etc., all continue to decline as they are right now, it would not surprise me if by the end of 2013 Android is no longer the dominant OS platform in smart phones, at least in mature markets. This is of course contrary to much of the data and forecasts put out by my analyst colleagues, but I believe a momentum shift is happening, just not in the direction they expect.

This actually opens the door for Windows Phone, in my opinion. AT&T has been very vocal about being aggressive with the Nokia Lumia 900. Sascha Segan wrote a great article yesterday titled “Windows Phone Smokes Android, But Can’t Sell.” He highlights Windows Phone and how high it ranks in net promoter scores. We track net promoter scores closely because they represent user sentiment, specifically a user's likelihood to recommend a product. Interestingly, net promoter scores for the Nokia Lumia devices are very high.
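For readers unfamiliar with the metric, a net promoter score is simply the percentage of promoters (ratings of 9 or 10 on a 0-10 "how likely are you to recommend this?" survey) minus the percentage of detractors (0 through 6). The sketch below shows the standard calculation; the sample ratings are hypothetical placeholders, not actual Nokia or Windows Phone survey data.

```python
# Minimal sketch of the standard net promoter score (NPS) calculation.
# The ratings are hypothetical placeholders, not data from any real survey.

def net_promoter_score(ratings):
    promoters = sum(1 for r in ratings if r >= 9)    # 9-10 on the 0-10 scale
    detractors = sum(1 for r in ratings if r <= 6)   # 0-6 on the 0-10 scale
    return 100.0 * (promoters - detractors) / len(ratings)

# 60% promoters, 20% passives, 20% detractors -> NPS of 40
sample = [10] * 6 + [8] * 2 + [5] * 2
print(net_promoter_score(sample))  # 40.0
```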

The momentum downswing of Android, and the inevitable decline in inventory as more OEMs also support Windows Phone, is why I agree with my colleagues at IDC that there is a platform shift taking place. However, I believe iOS for sure, and potentially Windows Phone, are the longer term winners, unless Google can make some market moves to convince me otherwise.

5 Million Galaxy Note Shipments Proves One Thing

I am actually not surprised at the news that Samsung has shipped (not sold) 5 million Galaxy Note smart phones. Given that it is doubtful Samsung will reveal actual sales figures or regional breakdowns, my educated guess is that most of these devices were shipped and sold in Korea, as also pointed out by this CNET article.

If they shipped or sold nearly that many in the US, that would be impressive, but just like with the first Galaxy Tab, I have a strong hunch the device's appeal was strongest in Korea. What this proves is how potentially different each region may be in terms of consumer preferences. This is key to understanding these markets, because the way different regions mature with regard to smart phones and tablets has huge impacts on regional device strategies.

My firm has a history of researching and studying consumer adoption cycles of technology products. What is becoming clear to us is that each regional consumer base may not have universally similar traits. This is reminiscent of the miniaturization efforts around notebooks and other gadgets from Toshiba and Sony and their wild success in Japan alone. Both of those companies were based in Japan, so of course Japanese consumers would be loyal. But Japanese consumers also had a desire for smaller gadgets, which is why many of those products were successful there and almost nowhere else in the world.

This again is what I think is happening with the Galaxy Note and there is of course nothing wrong with devices having regional success. In fact strategically designing for not only market segments but also regions is a smart global strategy.

I actually applaud Samsung’s efforts to differentiate and experiment with new form factors and use cases. The Note is positioned uniquely in the market both in terms of size and with the pen accessory and that alone is enough to get some consumers to at least check it out. This is one of the better examples of late of an Android device being designed to stand out in the sea of sameness that is Android devices.

Whether or not these devices are mass market or even global successes is largely irrelevant for the time being. What matters is that consumers are exposed to a variety of different choices and options with regards to their technology.

How Wi-Fi Can Save Data-Guzzling Mobile Devices

It didn’t take buyers of the new iPad long to discover an unpleasant reality about their new toys. The iPad, with its fast LTE data connection and high resolution display, can devour data faster than wireless carriers want to supply it. If you watch a lot of video, your monthly allocation of 2 to 4 gigabytes of data will be used up in days, and you’ll be running up bills at the rate of $10 a gig (on AT&T or Verizon in the U.S.).
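To put rough numbers on how quickly video can exhaust a cap, here is a back-of-the-envelope sketch. The 2 Mbps stream rate is an assumed, illustrative bitrate rather than a measured iPad figure; only the $10-per-gigabyte overage rate comes from the paragraph above.

```python
# Back-of-the-envelope estimate of how fast streaming video eats an LTE data cap.
# STREAM_MBPS is an assumed illustrative bitrate, not a measured iPad value.

STREAM_MBPS = 2.0        # assumed average video bitrate, megabits per second
CAP_GB = 2.0             # monthly data allowance in gigabytes
OVERAGE_PER_GB = 10.0    # overage rate cited above for AT&T and Verizon

gb_per_hour = STREAM_MBPS * 3600 / 8 / 1000      # megabits -> gigabytes per hour
hours_to_cap = CAP_GB / gb_per_hour
print(f"~{gb_per_hour:.2f} GB per hour of video; a 2GB cap lasts ~{hours_to_cap:.1f} hours")

# One extra two-hour movie per week beyond the cap, billed at $10 per gigabyte:
extra_gb = 4 * 2 * gb_per_hour
print(f"Roughly ${extra_gb * OVERAGE_PER_GB:.0f} per month in overage charges")
```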

Wi-Fi and cellular-type services are, of course, both wireless, but there are some critical differences between them. Cellular services operate on specific frequencies assigned to carriers by the government; Wi-Fi is a free-for-all in a couple of chunks of spectrum left free for unlicensed use. Cell services use expensive base stations and antennas on towers that cover anywhere from several city blocks to many square miles, depending largely on antenna height; Wi-Fi uses inexpensive access points placed most anywhere, with a normal top range of 100 meters. Handsets and other devices automatically authenticate themselves to cellular networks, which grant access to subscribers; Wi-Fi either requires external software authentication or is completely open to all comers.

The trick to greatly expanding capacity, especially in areas of maximum use, is to turn Wi-Fi's weaknesses into strengths. Physically small, low-cost, short-range access points, for example, mean that the same spectrum can be used over and over again even in a relatively confined space (and reuse can be increased with careful placement of access points, turning down the transmit power, and using directional antennas).
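To illustrate the reuse argument with some arithmetic, the sketch below compares the footprint of a single macro cell with that of a 100-meter Wi-Fi cell. The one-mile tower radius is an assumption chosen only to make the point; real coverage varies widely.

```python
# Illustrative sketch: the same Wi-Fi channel can be reused in every
# non-overlapping cell. The tower radius is an assumption for the arithmetic,
# not engineering data; the 100 m Wi-Fi range is the figure cited above.

import math

TOWER_RADIUS_M = 1600.0   # assume a macro cell covering roughly a one-mile radius
AP_RADIUS_M = 100.0       # normal top range of a Wi-Fi access point

tower_area = math.pi * TOWER_RADIUS_M ** 2
ap_area = math.pi * AP_RADIUS_M ** 2

# Upper bound on how many non-overlapping Wi-Fi cells fit inside the tower's
# footprint, each reusing the same unlicensed spectrum.
print(f"Roughly {tower_area / ap_area:.0f} reuse opportunities per macro cell")  # ~256
```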

Devices equipped to use both Wi-Fi and cellular networks are designed to favor Wi-Fi when it is available. But switching to public Wi-Fi networks is far from seamless and often requires some level of human intervention.

Carriers are increasingly interested in moving traffic from their cellular networks to Wi-Fi, but they want to limit access to subscribers. Wi-Fi is relatively cheap, but it’s not free, with one of the biggest costs being the “backhaul” connecting access points to the network via landlines. There currently are two ways to authenticate users. Wi-Fi Protected Access requires a password. It’s simple and lets devices connect automatically after the first time, but distributing the network identifiers (SSIDs) and passwords is difficult, and there is nothing to stop a subscriber from sharing the password with non-subscribers. 802.1x authentication requires a user name and a password, which typically must be entered in a login screen. (802.1x, which is sometimes also used to provide encryption on otherwise open networks, requires a login even if that merely means opening a web page and agreeing to the terms and conditions.)

Enter 802.11u, the latest item in the IEEE's 802 alphabet soup of standards. 802.11u, also known as Hotspot 2.0 or Passpoint, is a standard for linking Wi-Fi hotspots to cell networks. Your mobile device will sense the presence of a Hotspot 2.0 access point and connect to it automatically. The access point will verify the device's subscription status, most likely using the SIM card present in many phones, and will exchange cryptographic keys so that all traffic between the device and the access point is encrypted (a major weakness in today's open networks).
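The sketch below is a highly simplified, hypothetical illustration of that flow; the class and function names are mine, not a real 802.11u API. The actual mechanism relies on ANQP queries, EAP-SIM/AKA authentication against the carrier, and a WPA2 key exchange handled inside the device's Wi-Fi stack.

```python
# Hypothetical sketch of the Hotspot 2.0 / Passpoint association flow described
# above. All names and stub functions are illustrative, not a real 802.11u API.

from dataclasses import dataclass
import secrets

@dataclass
class AccessPoint:
    ssid: str
    supports_passpoint: bool
    carrier: str

def verify_subscription(ap, sim):
    # Stub: stands in for the AP asking the carrier whether this SIM is a subscriber.
    return sim["carrier"] == ap.carrier

def exchange_session_key(ap):
    # Stub: stands in for the cryptographic key exchange that encrypts all traffic.
    return secrets.token_hex(16)

def auto_connect(sim, access_points):
    for ap in access_points:
        if ap.supports_passpoint and verify_subscription(ap, sim):
            return ap.ssid, exchange_session_key(ap)   # connected, traffic encrypted
    return None, None   # fall back to cellular or a manual hotspot login

sim = {"carrier": "ExampleCell"}
aps = [AccessPoint("CoffeeShopFree", False, ""),
       AccessPoint("Stadium-Passpoint", True, "ExampleCell")]
print(auto_connect(sim, aps))
```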

Niels Jonker, chief technology officer for Wi-Fi hotspot operator and aggregator Boingo, says he expects to see Hotspot 2.0 access points begin to make their appearance early next year. Jonker says sports stadiums are logical places for early deployment. Cellular data networks sag under the strain of tens of thousands of fans texting, tweeting, and, especially, sending pictures all at once. And the reinforced concrete construction of stadiums makes it relatively easy to create small hotspots, allowing for high reuse of spectrum. Device-dense urban centers, such as the Times Square area of Manhattan, are a likely next step, along with airports.

In effect, Hotspot 2.0 will turn these hotspots into extensions of carrier networks. That can be a win for both the carriers, who ease the pressure on their cellular networks, and customers, who get fast wireless service outside of (unless the carriers really screw this up) increasingly restrictive data caps.

How Apple is Cornering the Market in Mobile Devices

I have been speaking with various vendors of tablets lately, and more than once the topic of Apple “iPodding” them has come up. iPodding basically refers to the fact that although Apple has had the iPod on the market for over 10 years now, they still have over 70% of the MP3 portable digital music player market. This fact is giving many of the tablet vendors nightmares. Although they see this tablet market as a very large one and believe there is room for multiple tablet vendors given the potential market size and worldwide demand, they know very well that Apple has done a great job of cornering the MP3 player market with iPods and are afraid that Apple could do the same with tablets.

And even though Apple has not cornered the market in smartphones, all are amazed that Apple had record iPhone sales last quarter and realize that Apple has just started selling iPhones in the Chinese market and could be expanding to other BRIC (Brazil, Russia, India, China) countries too. And many of the smartphone vendors are certain that Apple will bring out a lower cost iPhone at some point and get very aggressive in emerging markets within the next two years. An even harder fact for them to swallow is that when it comes to smartphone profits, Apple takes about 75% of all profits made in cell phones.

While all of them think that they can compete with Apple when it comes to hardware, and maybe even software, what they all pretty much know is that the secret to Apple's success is that they have built their hardware and software around an integrated ecosystem based on a very powerful platform. It is here where their confidence level lags and the “iPodding” fear raises its head. And to be honest, this should really concern them.

Apple is in a unique position in which they own the hardware, software and services and have built all of these around their ecosystem platform. That means that when Apple engineers start designing a product, the center of its design is the platform. For most of Apple's competitors, it is the reverse: the center of their design is the device itself, and then they look for apps and services that work with their device in hopes that this combination will attract new customers. In the end, this is Apple's major advantage over their competitors, and they can ride this platform in all kinds of directions.

For example, when they were working on the iPad, they already had the iTunes content store in place, and since everything was based on the iOS platform, it was pretty straightforward for them to build the iPad apps environment that easily sat on top of this already existing software platform. Of course, the iOS app platform already existed for the iPhone, so all they had to do was create an apps toolkit to take advantage of the new screen size they now had with the iPad.

We will see this same concept repeated when they eventually release anything for the TV. The current Apple TV product is a good first step and is also based on this iOS platform and ecosystem. But let's say they design an actual TV; the platform is already in place for them to tap into, and indeed, the center of design for any future TV would be the platform itself.

For a lot of vendors, the hope was that Google's Android would deliver a similar platform to build on, but to date that has not been the case. The various versions of Android only complicate things for the vendors and the software community, and in essence they really don't have a solid, unified platform on which to build anything as powerful as Apple's iOS architecture. As a result there is a lot of fragmentation in the Android marketplace. This is more than problematic and has been at the heart of Android's failures in tablets thus far.

And I am not sure Microsoft's new Windows 8 platform will deliver what they need either. The key reason is that Windows 8 is still based on a PC-centric OS that is being extended downward to tablets. At the same time, they have a Windows OS for their smartphones that shares no code and no app base. In the end, it delivers at best splintered apps and a non-unified ecosystem, even if all the devices have the same Metro UI. I believe this OS has more of a chance to challenge Apple than Google's Android does, especially in tablets. But the lack of a powerful, unified platform that the vendors can really design around and support, along with the vendors' own quests to differentiate, could cause this approach to have a hard time competing with Apple too.

The bottom line is that when it comes to competing with Apple, it really is all about the platform. And at the moment, I don't see anybody creating a unified and powerful enough platform that comes close to or equals what Apple already has in the market. That is why Apple is cornering the market in mobile devices today and why it could continue to grow its user base worldwide at the expense of its competitors. Based on marketing material on Apple's own website, I would say they understand this as well.

Software Updates: Another Reason iPhone Keeps Winning

When Apple introduced the iPhone in 2007 with AT&T as its exclusive partner, it made two revolutionary changes in how mobile phones were sold and managed. First, the phone was to be sold without a carrier subsidy. Second, Apple would control both the initial software load and all updates.

(Tango is the newest version of the Windows Phone software.)

The first change didn’t last long. Faced with customer resistance, the price was cut from $600 to $400 two months after introduction. And when the iPhone 3G came out a year later, it was priced at $200 with a traditional carrier subsidy.

But Apple held firm on software and insisted on similar terms with all other carriers offering the iPhone. The result has been a tremendous advantage for Apple. When it pushes out an update, it is immediately available to every iOS device that can run it. Typically, a large percentage of users upgrade within days, especially now that the updates are distributed over the air. This means that the great mass of iPhone users are all running the same software at any given time. The users always have the latest and greatest and developers have a single OS version to target.

When it was introducing Windows Phone 7, Microsoft perceived the Apple advantage and was determined to follow the same path. Microsoft officials declared that it, not carriers or handset makers, would determine when upgrades went out. Unfortunately, however, they apparently failed to get that in writing. Trouble started with the very first “NoDo” update, and every software change since has staggered out on the carriers’ own schedules.

In a recent post on his Supersite for Windows, Paul Thurrott accused AT&T of hurting Windows Phone users, and Windows Phone itself, by holding off on distribution of an important bug fix that Microsoft made available in early January. Thurrott writes:

AT&T, I’d like my on-screen keyboard to stop disappearing when I’m typing. Microsoft fixed this bug in January, after putting that update through a wringer of tests that, get this, were partially designed by AT&T. There is no good reason for me and other Windows Phone users not to have this update already. No good reason at all.

The best that can be said about the Windows Phone situation is that it isn’t as bad as the  horrific version fragmentation in the Android world, where handset makers are shipping new phones running the 2010 Gingerbread version of the operating system rather than the Ice Cream Sandwich version released last fall.

Apple has enjoyed phenomenal success with iPhone because the product is very good. But it has also been blessed by the monumental incompetence of the competition. Apple was able to force a major, pro-consumer change on carriers at a time when it had 0% of the market. And its sorry competition is still unable to match it.

Microsoft Needs to Get its Apps Together

Last week in my Friday column I outlined a few of the challenges that I think Microsoft has in front of them with Windows 8. I cited a lack of Windows momentum in the market, along with changing software and app economics that are going to challenge Microsoft in ways they have never had to deal with. That being said, I am rooting for Microsoft on this one, as I have followed every major release since Windows 95, although I am not sure I have ever analyzed a release where I personally have had so much uncertainty about its chances of success.

Why I am Excited About Windows 8

Before I get to the larger point behind this column, I want to make a few points about why I am excited and optimistic. What has me excited about Windows 8 is the kind of hardware innovation we are going to see because of it. Intel is helping this hardware innovation around Windows 8 with their UltraBook initiative, and many of the products that will hit the market later this year and next are very interesting. Tim wrote earlier in the week about a category we are looking at heavily called “hybrids,” which are tablet-first hardware designs paired with a keyboard for when a consumer may need or want it. This is just one of many hardware designs that I think are very interesting, and I am anxious to see how the market responds to them.

We write frequently about how the technology industry moves in cycles, where a clear and obvious value shift moves from hardware to software and then to services. This is clear in Apple's ecosystem, where hardware remains relatively constant and the major value has moved to software and is now creeping into services.

Windows 8, because it is new and blends two unique experiences together, will ignite a short term value trend where we will see new and innovative hardware built around the operating system. Inevitably, however, many of the designs we will see in hardware may not stick, and the market will dictate which Windows 8 form factors are the winners. Because of the speed of this market and how mature the Apple, and to a degree the Android, ecosystems are, Microsoft and its partners cannot simply rely on the hardware and the Windows brand alone to give them momentum in this market. Rather, for Microsoft to have a shot when they launch, they need to get their apps together.

More than Hardware

To my point above about how the value chain evolves, it is as if Microsoft, with this release, needs to come to market with as mature an ecosystem as Apple and Google in terms of apps and a software developer community. This, in my mind, is one of the most important factors in evaluating and forming an opinion of how successful the Windows 8 launch may be.

Microsoft needs to learn from Google on this one, as the utter failure of Android tablets to gain any real traction is due to the lackluster apps built for tablets. Microsoft is in a similar position with Windows 8 Metro apps. Of course Microsoft has legacy apps to fall back on, but I still question how valid that really is in pure consumer markets.

My Techpinions colleagues Steve Wildstrom and Patrick Moorhead have already covered some of the potential legacy hardware issues with Windows 8 and some of the challenges Windows 8 faces on non-touch notebooks or desktops, which will still be a healthy portion of the market. I agree with and share their concerns in those areas, but I am mostly concerned about what new and exciting software will be waiting for consumers when they purchase these new Windows 8 devices.

Related Columns:
Windows 8 CP Tablet Experience: Distinctive yet Risky for Holiday 2012
Windows 8 and Mountain Lion: Same Problem, Different Answers

To use a video gaming industry analogy, Microsoft needs a title franchise to drive the hardware. They had this with the Xbox and Halo, where many consumers bought the Xbox simply for this title; it was that valuable. There has to be something that grabs consumers' attention and appeals to them in a way that no other platform can.

This is clearly one of the strengths of Apple, as they continually put products on the market, both hardware and software, that drive demand. Microsoft and others lag in this category, and it needs to change fast or they will face an even tougher uphill challenge than they already do.

As I stated earlier, we are rooting for Microsoft. We need healthy competition in this industry. However, our experience as industry and market analysts gives us insight into the challenging road ahead. To be fair, this is one of the riskiest things Microsoft has done in a while. Taking risks can bring great reward or fail miserably. Let's just hope Windows 8 is more like the Windows 95 launch in terms of success and less like Vista, or even worse, Bob.

NVIDIA Solved the Ultrabook Discrete Graphics Problem with Kepler

When Intel released their first Ultrabook specification, one of the first component implications I thought of was the impact on discrete graphics. My thought process was simple: based on the Intel specifications for battery life, weight and thickness, designing in discrete graphics that were additive to Intel's own graphics would be difficult, but not impossible. By additive, I mean making a demonstrable difference to the experience versus just a spec bump. While I respect OEMs' need to add discrete graphics for line logic and perception, sometimes it doesn't make an experiential difference. This is why I was so surprised and pleased to see NVIDIA's latest discrete graphics solutions inside Ultrabooks. NVIDIA's new GPUs based on the “Kepler” architecture not only provide an OEM differentiator, but they also provide a demonstrable, experiential bump to games and video.

Today’s Ultrabooks share similar specs

Today's field of Ultrabooks is impressive but lacks a sense of differentiated hardware specifications and usage models. I deeply respect differentiation in design, as I point out in my assessment of the Dell XPS 13, but on the whole, I can do very similar things and run very similar apps with the current top crop of Ultrabooks.

As an example, let's take a look at the offerings at Best Buy. Of the 13 Ultrabooks, all offer roughly the same or similar specifications: processor (Intel Core i-Series), graphics (Intel HD), operating system (Windows 7 64-bit), display size (13-14″), display resolution (1,366×768), memory (4GB RAM), and storage (128GB).


Of these specifications, the level of the Intel Core CPU primarily determines the differential in what a user can actually do with their Ultrabook.  As Ultrabooks have matured a full cycle, differentiating with graphics makes a lot of sense, particularly in the consumer space.

NVIDIA’s Kepler-based GeForce GT 640M Mobile Graphics

Today, NVIDIA launched the first of their latest and greatest GeForce 600M graphics family, the GeForce GT 640M. This GPU features a new architecture code-named “Kepler” which is destined for desktops, notebooks, Ultrabooks, and workstations. Designed to be incredibly powerful and efficient and built on TSMC's lower-power HP 28nm process, test results I've seen show these new GPUs deliver twice the performance per watt of the prior generation. Anandtech has thoroughly reviewed the desktop variant, the NVIDIA GeForce GTX 680, and has given NVIDIA the single-card graphics performance crown.

NVIDIA's Kepler differentiates the Acer Timeline Ultra M3

With the NVIDIA GeForce GT 640M, users can now get the new graphics along with the Ultrabook benefits of thin, light, responsive designs and great battery life. Consumers can actually buy this capability today in Acer's new Timeline Ultra M3. The M3 can play all the greatest game titles like Battlefield 3 at Ultra settings, is only 20mm thin, and gets 8 hours of battery life. NVIDIA suggests that this new combination of Ultrabook and Kepler-based graphics equates to the “World's First True Ultrabook.” I need to test this for myself, but they have a point here, given that it provides between a 2X and 10X bump over Intel HD graphics in the most demanding gaming titles.

How NVIDIA’s Kepler-based GPU fits in an Ultrabook

As I said earlier, when I saw the Ultrabook specification, I thought it would be very difficult to get decent discrete graphics into an Ultrabook. My concerns were around the power draw needed to meet the minimum battery requirements and the chassis height needed in 13″ and 14″ form factors to include a proper cooling solution.

Between NVIDIA and their OEMs, many different factors played into enabling this capability:

  • NVIDIA's Kepler architecture is twice as efficient as the prior SM architecture. Put another way, at half the power it can provide the same performance; for instance, the GT 640M reportedly provides the same performance as the previous GTX 460M enthusiast-class GPU at around half the power consumption (a minimal sketch of this arithmetic follows this list).
  • NVIDIA Optimus technology automatically shifts between the lower power/performance of the Intel HD graphics and the higher power/performance NVIDIA discrete graphics. When the user is doing email, the Intel graphics are operating and the GeForce GPU is consuming zero power.  When the consumer is playing Battlefield 3, Optimus automatically turns on GeForce GPU to provide the best possible performance.
  • New and better power management allows GeForce GPUs to maximize performance by intelligently utilizing the full potential of the notebook's power and thermal budget. For example, if the notebook's heat sink assembly has spare thermal headroom, the GeForce GPU can dynamically increase frequency to provide the best possible performance without adversely affecting operating temperature or stability.
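As promised above, here is a minimal sketch of the performance-per-watt arithmetic. The 50-watt baseline is an illustrative assumption, not an NVIDIA specification.

```python
# Minimal sketch of the "twice the performance per watt" claim: doubling
# efficiency means either twice the performance at the same power, or the same
# performance at roughly half the power. The 50 W baseline is an assumption.

prior_gen = {"performance": 1.0, "watts": 50.0}           # normalized baseline
perf_per_watt_prior = prior_gen["performance"] / prior_gen["watts"]
perf_per_watt_kepler = 2 * perf_per_watt_prior            # "twice as efficient"

# Option A: hold power constant -> twice the performance
print(perf_per_watt_kepler * prior_gen["watts"])          # 2.0 (i.e. 2x baseline)

# Option B: hold performance constant -> roughly half the power (the Ultrabook case)
print(prior_gen["performance"] / perf_per_watt_kepler)    # 25.0 watts
```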

I was correct that this was very challenging, but between NVIDIA and its OEMs, it's clear they stepped up and made it happen.

Ecosystem and NVIDIA Implications

Having NVIDIA’s new high performance graphics inside Ultrabooks is good for the entire ecosystem of consumers, channel partners, OEMs, ODMs, game ISVs and of course, NVIDIA:

  • Consumers get between 2-10X the gaming performance plus all the other Ultrabook attributes.
  • Channel partners, OEMs, and ODMs can now offer a much more differentiated and profitable line of Ultrabooks.
  • Game ISVs and their distribution partners can now participate more fully in the Ultrabook ecosystem.

And, of course, NVIDIA has a big potential win here, too. According to GfK, over the past two quarters NVIDIA has picked up nearly 10 points of market share inside Intel-based notebooks. NVIDIA's Kepler only enables them to further increase this share, particularly with Intel Ivy Bridge-based Ultrabooks. AMD hasn't played their full mobile hand yet, but given AMD's known GCN architecture and TSMC's 28nm process, they have limited weapons to pull out of their 2012 arsenal in Ultrabooks. The graphics world is a very dynamic market, so you never can be certain what each player is holding back. AMD held the discrete graphics leadership position for a while, but 2012 looks very good for NVIDIA.

Windows 8, Metro, and Desktop: The ISV App Challenge

Windows 8, at least in its current Consumer Preview form, presents a confusing picture to folks trying it out on a conventional, non-touch PC. It’s one operating system with two user interfaces–the traditional Desktop and the new tabletized Metro–and you find yourself jumping back and forth between them a lot.

Windows 8 screen shotBut users aren’t the only ones who will be jumping. The split personality of  Windows 8 creates some big challenges for the independent software vendors whose efforts will play a big role in the new operating system’s success. And these choices will not be easy.

Sticking with Desktop requires minimal changes. Existing programs will run fine, but ISVs looking to update will probably want to include a “touch mode,” similar to that used in the forthcoming Office 15, which enlarges icons and other UI elements to make them easier to use on tablets or touchscreen PCs. But it remains to be seen how usable these touch-mode Desktop applications will be on mouseless, keyboardless tablets. Perhaps more significantly, sticking with Desktop closes an ISV out of the Windows on ARM tablet market, because ARM tablets will support only Metro apps.

Apps rewritten (or, to really work well, reconceptualized) for Metro will run on all platforms. But just as Desktop apps are awkward to use on tablets, Metro apps are not very comfortable on traditional PCs. The requirement that they run full screen or, at best, as a second app in a sidebar won't make many computer users happy. (As I write this, I have 10 windows open in 10 different applications on my 27″ iMac. And by my standards, that's an empty desktop.)

Microsoft itself is splitting the difference with Office 15. Based on information that has leaked out from tests of a technical preview edition, it is creating Desktop applications with a Metro look and feel. But Microsoft has a unique advantage: Office applications and the Windows Explorer file manager will be the only Desktop apps allowed to run on ARM tablets.

Most heavyweight Windows productivity applications are likely to stay with Desktop. Filemaker Pro, for example, has no plans for a Metro version of its flagship product, though I wouldn't be surprised to see a Metro edition of something like the iOS Filemaker Go for database field entry and lookup on Windows tablets. (Filemaker is owned by Apple, but over half of its installed base is Windows.)

Related post: Windows 8 and Mountain Lion: Same Problem, Different Answers

Adobe has invested a great deal over the years in creating an Adobe UI that achieves a high level of consistency between the Windows and Mac versions of its Creative Suite products, and I can't see them giving it up for Metro. But Adobe has a real opportunity in creating lightweight, distinct versions for Metro. Windows tablets, for example, will desperately need an app to compete with the new iPhoto for iPad, an app that rips the heart out of Adobe's consumer-oriented Photoshop Elements.

In a world of unconstrained resources, ISVs would develop touch mode Desktop apps that retain the full capability of current versions as well as lighter weight Metro editions. But in the real world, the constraints are tight and getting tighter as the reluctance of consumers to pay as much as $10 for tablet apps puts relentless downward pressure on software prices and margins.

I suspect the overwhelming majority of ISVs will stick with Desktop. That's where the installed base is and where pricing still gives them a chance to make some money. And that could be bad news for Metro and for Windows tablets, because if the iPad vs. Android battle has proved anything, it's that apps are the key to tablet success.

Why The iPad Will Change How We Work

What is becoming clearer every day is the way in which tablets are changing paradigms of computing that have existed for decades. The entire way we think about computers, and computing in general, is undergoing significant change. In the days of the desktop and notebook, computing hardware and software were functionally the same and remained relatively unchanged, specifically in how we used a mouse and keyboard as the main way to interact, work, play, produce, and create.

The iPad launched a new day in computing, one where the paradigm of mouse and keyboard computing gave way to touch based computing. In the early days, it was programs like VisiCalc that paved the way for computers to move from hobby to office tool. Today, new apps are being created for the iPad every day that prove the iPad is more than a consumption and entertainment device; it is a powerful tool with which genuine creation and productive work can be accomplished.

I have thought about this for a while, and we have written extensively about many of the ways touch computing opens the door to new opportunities. However, it wasn't until recently, with the launch of iPhoto on the iPad, that I came to a deeper realization of how profound this change may be. That is why I chose to title this column the way I did. I truly believe the iPad, and more specifically touch based computing, will entirely change the way we work, create, produce, and more.

Tough Tasks Become Easier

While going through and analyzing the slew of information in the help tips for iPhoto for iPad, I came to a profound realization. When it comes to content creation, touch and software optimized for touch allow us to do with ease tasks that were either very difficult or extremely time consuming with mouse and keyboard computing. This may not apply to all tasks or all software, but there are certainly tasks that shine on touch platforms. iPhoto for iPad is one of the clearest cases of this.

I have been into photography since high school, when I studied it for three years, and I have stayed active since, always trying to make perfect photographs. I would also call myself an advanced user of Photoshop. As I have used iPhoto for iPad more and more, it has become clear how powerful a photo editing tool it is. What's more, as touch-optimized software, iPhoto actually makes extremely complex tasks much easier and more enjoyable than they are with a mouse and keyboard.

A key example of this is adjusting colors in a photo. If I took a photo and wanted to adjust the color in just the sky, for example, I would need to isolate the sky and then tweak the color elements independently. With iPhoto for iPad, you simply touch the sky and slide to the left, and the software adjusts just the blue of the sky to your liking. With one single touch, iPhoto on the iPad accomplishes a task that would take a minimum of five clicks with a keyboard and mouse, and probably five minutes or so of precision mouse work. This is just one example of the many ways that touch computing will change how we work today and in the future.

Mainstream Consumers Can Now Participate

Using the Photoshop example again, another realization struck me. If I sat my kids or my wife down in front of the desktop or notebook, opened Photoshop and an image, and had them try to edit it, there would be mass confusion. I would have to spend quite a bit of time teaching them several basic things just to get them started.

Mastering a program like Photoshop is no easy task for the non-techie; think of all the seminars that exist for software and computer literacy. All of this changes with the iPad and touch based computing. I gave my kids the iPad, opened iPhoto and an image, and let them go. Watching them for five minutes, they figured out how to adjust colors, lighten areas of an image, and add effects (they are 6 and 9).

They nearly mastered a program in under 10 minutes and began doing professional level tasks in that short time frame. This would be nearly impossible without extensive time and training using a mouse, keyboard, menus, icon palettes, etc.

Touch based computing opens the door to bringing true computing to the masses. Think about how many consumers out there have notebooks or desktops running software capable of creating amazing things, and they never use it, or when they do, they don't take advantage of its full potential. Touch computing changes all of this and is the foundation that will bring more consumers to create and produce things they never would have using a mouse and keyboard.

A quote I am fond of is “simple solutions require sophisticated technology.” The iPad and its touch computing software ecosystem is one of the most sophisticated technologies on the market today. It is no wonder that the iPad is enabling simple solutions and inviting more and more consumers to participate in computing in ways they never have before.

Although I focused this column on how the iPad and touch based computing will change how we work, produce, and create, we ultimately believe that this platform will also change the way we play, learn, be entertained, and much more.

It all boils down to the fact that the iPad is changing everything.

Intel and Microsoft’s Secret Weapon Against Apple

Intel and their partners are about to launch the biggest promotion in a decade for a new product category called UltraBooks. Microsoft is also about to launch a major update to Windows, called Windows 8, that introduces a new touch-based user interface with their Metro UI. Both are critical products for the future of each company.

Form Factor Evolution

In the case of UltraBooks, I actually see them as the natural evolution of laptops and not as revolutionary as Intel would like us to think. Rather, they take advantage of the industry's constant push to make things smaller, lighter and thinner, with better battery life. Mainstream consumers, who have had to lug around rather bulky laptops for the last 5 years, would be justified in asking Intel and the other Wintel vendors “what took you so long?”, given that Apple has had the MacBook Air on the market for 5 years and it has defined what an Ultrabook should be.

With Windows 8 and Metro, Microsoft is also following an evolutionary path toward touch interfaces, from their Metro based smart phones to their soon to be Metro based tablets and PCs. Again, consumers could ask Microsoft “what took you so long?” since Apple has had their touch UI on the iPhone for 5 years and on the iPad for 2 years.

But both products have some interesting challenges attached to them when they launch later this year. In the case of UltraBooks, they most likely will have starting prices of at least $799-$899, although I hear there could be at least one pretty stripped-down model coming out at around $699. At these prices, they completely miss the mainstream laptop market, which represents the bulk of laptops sold and is priced from $299 to $599.

In the case of Windows 8 and Metro, while Metro is great on Microsoft's phones and works very well on the tablets I have tested it on, it does not translate well to the laptop or PC, since virtually all existing PCs lack touch screens. And most of the PC vendors are not putting touch screens on the majority of their new laptops, because doing so adds at least another $100-$150 in cost to the customer. If you have tested the Consumer Preview of Windows 8 and Metro on an existing laptop, you know how frustrating it is to use with existing trackpads. I consider this an Achilles' heel for Windows 8 and one that could really hurt its short-term prospects.

To be fair, Microsoft recently (three weeks ago) released recommended guidelines for next generation trackpads, and a new design I have seen from Synaptics could make laptops work well with Metro once it gets into new laptops. But this should have been something Microsoft focused on a year ago so that all of the new laptops were “Metro” enabled at launch. My sense is that Microsoft should have launched Metro only on tablets this year and gradually moved Windows 8 Metro to the consumer PC market once laptops were optimized for it.

Instead, I see a lot of consumer confusion on the horizon when people try to use Metro with existing trackpads and other non-touch input devices, as the experience will be confusing at first and frustrating afterwards. Also, notice that Apple has not put touch screens on their laptops and desktops; instead, they worked extra hard to create built-in and external trackpads that map to the touch experience on the iPhone and iPad.

I consider the initial pricing of UltraBooks, and putting Metro on laptops and desktops, issues that could slow down early adoption of these products this year and perhaps lead to a more gradual adoption in the future. But the two companies do have a secret weapon in the works that could earn them a lot of kudos from the marketplace and be a key component in getting users really interested in Intel and Microsoft again.

A New Category

The secret weapon comes in the form of a new form factor often referred to as “hybrids.” These are either tablets that can be docked into a keyboard, turning them into a laptop, or a laptop with a detachable keyboard. You might think they are one and the same, but they are very different in terms of design goals. In the first case, the design is specifically around the tablet, and the keyboard dock is modular. We already have a lot of examples of this with the iPad, where the tablet is the central device and the attachable Bluetooth keyboards are more of an afterthought. In this case the keyboard just supports the input functions of the tablet. The same is true of the Asus Transformer line of devices.

But in the latter case, the design is around a slim laptop chassis, and the screen (tablet) can be taken off and used as a tablet. I believe this latter design is the secret weapon that Microsoft and Intel can use against Apple; at least on paper, it gives Apple a run for the money, especially in business and the enterprise. To a lesser extent it could be hot in some consumer segments where the keyboard is critical to what people do with a tablet and they want a laptop-centered experience as well.

This is where Apple's current strategy can be challenged, as they are offering this market two distinct products. There is the iPad, which stands by itself, and then the MacBook Air, their UltraBook, which like the iPad also stands by itself as a separate product. The key reason is that each has its own operating system, and although Mountain Lion, Apple's new version of OS X, brings a lot of iPad-like iOS features to OS X, they are still separate and distinct operating systems.

But with the introduction of Windows 8, used especially on a laptop-centered hybrid in which the screen (tablet) can be detached and used as a true tablet that takes full advantage of Metro, Microsoft and Intel can give their customers the best of both worlds in a single device. When in “UltraBook” laptop mode, users can work in the traditional Windows desktop UI they are used to and have the hundreds of thousands of existing Windows apps available to them as is. But when the screen detaches, it automatically defaults to the Metro UI, and the touch experience is now central to the device. Now apps designed for Metro can give users a rich tablet experience out of the box. Sure, they could fall back to old Windows programs if needed, but running those on a tablet is clunky at best.

If done right, the user would end up with a Windows 8 UltraBook with a detachable screen (tablet) and have to buy only one device instead of two. Our research shows that IT buyers and even some consumers would have no trouble paying $999 and above for this combo product. At that price it would be a bargain. Most IT-purchased laptops are in the $699-$999 range now, and those who bought iPads to augment their users' work experience paid at least $599, so a combo device at even $1299-$1399 is more than reasonable for them. Intel knows this and believes that as much as 50% of all Windows tablets will be hybrids. And Microsoft will push these types of designs, especially if the uptake of Windows 8 on laptops doesn't take off as planned.
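A quick bit of arithmetic, using only the price ranges mentioned above, shows why the combo pricing is defensible; the figures are those cited in the paragraph, not vendor or survey data.

```python
# Quick comparison of buying a separate laptop and iPad versus one hybrid,
# using the price ranges cited in the paragraph above (illustrative only).

laptop_range = (699, 999)      # typical IT-purchased laptop
ipad_price = 599               # iPad bought to augment that laptop
hybrid_range = (1299, 1399)    # hypothetical Windows 8 hybrid price

two_device_low = laptop_range[0] + ipad_price     # 1298
two_device_high = laptop_range[1] + ipad_price    # 1598

print(f"Two devices: ${two_device_low}-${two_device_high}")
print(f"One hybrid:  ${hybrid_range[0]}-${hybrid_range[1]}")
```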

Could anything potentially derail Intel and Microsoft's “hybrid” strategy? Well, if Apple applied its great innovative design knowledge to creating a hybrid that blends the iPad and the MacBook Air into a single device, it could have an impact on Intel and Microsoft's ability to dominate this market. On the other hand, it would validate Intel and Microsoft's strategy as well. If they beat Apple to market with their version, which is highly likely since at least four hybrids are set to come out by October, it could be the “hero” product of the launch that shows users the value of the x86 ecosystem and highlights to Windows users the need for Ultrabooks, tablets and Windows 8.

Why the Latest iPad Forecasts Are Wrong

The recent 2012 worldwide forecast for tablets from IDC, which projects sales of 106MM units in 2012 with Apple's iPad numbers at a little under 60MM, has been widely picked up and republished across the internet. The report also predicted that Apple could lose dominant market share to the Android platform by 2015. Windows tablets do not figure in the IDC forecasts, as IDC currently defines them as PCs.

While I have a lot of respect for IDC's ability to identify key market trends, especially in the enterprise IT market, I'm not convinced they have their finger on the pulse of the Apple iPad market or Apple's iOS strategy.

It appears that IDC has consistently underestimated the iPad market since its launch, and their recent forecasts seem to follow that pattern. Just to verify my suspicions, I looked back at the IDC forecasts since the launch of the iPad on April 3rd, 2010, when Apple sold 300,000 iPads on the first day and 3 million in the next 80 days.

I recall IDC analysts noting at the iPad launch that the iPad would do remarkably well if it sold 5MM units by the end of 2010. IDC subsequently estimated the total number of all tablets to be sold in 2010 at 7.6MM units. But when the final numbers were reported, Apple alone had sold nearly 15MM units. IDC's forecast for 2011 was set at 44.6MM units (final sales for 2011 came in at nearly 69MM units, with Apple selling 40MM units). IDC first predicted 2012 sales of 70.8MM units; this forecast was increased to 88MM units and now stands at the 106MM number announced by IDC a few days ago.

Based on the historical sales growth and the launch of the new iPad, it is hard to believe that Apple will sell less than 60MM units in 2012.

With 40MM units sold in 2011, at least a doubling of that number is to be expected. Unlike the original iPad, which initially launched in the US only, the new iPad will be sold in 36 countries by March 23rd. The combination of the price reduction on the iPad 2, the new (third-generation) iPad and the highly likely launch of a 7.85″ iPad Mini for $299 should drive Apple iPad sales well over IDC’s forecast number, and I suggest that a figure well above 80MM units is achievable for 2012, with an annual run rate of over 100MM units.

IDC is hardly the only analyst firm underestimating Apple’s potential, but it is one of the most conservative.

The research company also predicts that Android tablets will have a higher market share than the iPad by 2015. Many have predicted that the growth of Android tablets will follow the success of Android smartphones, but the markets are very different, and the predicted success of Android tablets has not happened so far. The only Android tablet to get any traction is Amazon’s Kindle Fire, and the Kindle Fire is a gateway to Amazon’s retail store: it’s not really a tablet strategy, it’s a commerce strategy. To boost their m-commerce platform, Amazon is likely to drive hardware sales by pricing aggressively, to near zero (perhaps even bundling the Kindle Fire with the Amazon Prime free shipping service).

However, if Amazon could negotiate terms with Apple (which is a tall order), they could be better off having a Prime app on the iPad rather than being in the hardware business.

At the low end, Android tablets may see some traction where they will be used as low-cost mobile web browsers and simple readers, especially in emerging countries where low pricing is essential to drive sales. But will customers want a product that has limited functionality, a sub-optimal experience and does not come with a massive ecosystem of applications designed specifically for the device?

When Apple launched the iPad, many questioned its role as a “tweener” device between the smartphone and the PC. Apple was nevertheless able to define the category due to the quality of the product, the user interface and experience and, more importantly, the totality of their ecosystem: hardware, software, an apps development platform and a massive distribution system via iTunes.

Related Column: iPad: It’s More Than Just The Hardware

No other company comes close, so today we don’t really have a tablet market; we have an iPad market. Note that Apple never refers to their product as a tablet, as they associate the term with Microsoft’s earlier failures.

Apple’s dominance of the tablet market has significant implications for media companies. Most have assumed that some equilibrium would eventually come to the tablet market, so delivering content across multiple devices looked like a safe distribution strategy, even with the challenge of optimizing for many different devices. The publishers’ consortium Next Issue Media (made up of Condé Nast, Meredith, Hearst, News Corp and Time) decided, after negotiating difficulties, to eschew the Apple platform and support Android, a decision they are probably regretting. The success of the iPad platform lured each of the consortium members to find a way to eventually work with Apple, so the value of the consortium is unclear if it remains solely focused on Android. The question media companies now have to answer is whether the platforms competing with Apple’s iPad can do justice to their digital publications. Can these platforms meet reader expectations or provide a large enough digital distribution channel to drive user and advertiser revenues?

Apple will never compete for the low-revenue, low-margin, low-quality “budget” end of the market. Apple will always prefer a lower marketshare position so long as they maintain a high revenue and margin share of the segment. It’s possible that eventually the sheer numbers of very low cost tablets could outsell Apple’s premium products, but I doubt it. Customers won’t be satisfied with an underpowered tablet any more than they were satisfied with the concept of the netbook. It’s much more likely that as smartphones increase in capabilities and significantly drop in price, they will be the mobile devices of choice in emerging markets. The functionality of even low-cost smartphones will be superior in virtually every case to low-cost tablets (other than display size, and even then, foldable displays, 3D and projection could step in to solve that issue).

The tablet is expected to take share from the PC market, overtaking it in unit sales by the 2015/16 timeframe. There will be a few specific cases where the PC will remain superior to tablets in input and processing power, but that gap will narrow over the next few years and customers will flock to the convenience of the “tablet”. However, the size of the tablet market, while significant, is never going to get close to the volume of the smartphone market, which will be measured in billions.

We’re living in a world of digital mobility. It’s a multi-screen world: currently the dominant displays are smartphones, ultra-portable laptops, tablets and the TV, but as the Corning concept video suggests, that will evolve. Apple’s goal is to take significant market share in each of these segments and bind them all together with an iOS platform that attracts the world’s best developers.

I don’t have IDC’s resources, contacts or detailed knowledge of the industry, but I’ve been around the Apple marketplace for 25 years. My predictions are based on gut and industry instincts and are far from scientific, but I’m willing to wager that my 2012 iPad estimates will be closer to the mark than the IDC forecasts. For the record, I predict Apple will sell over 1MM units on March 16th, close to 10MM new iPad units within the first 30 days of launch as it rolls out in 37 countries, and in excess of 90MM iPads in total for 2012. My colleagues at IDC are willing to bet a nice bottle of wine that they will end up being more accurate than I am. Sounds good to me, and no matter who wins I look forward to sharing it while we develop our own 2013 predictions for this exciting, emerging market.

Apple’s stock price is currently around $600, valuing the company at over $500 billion. Already analysts are upping their target range to $700, and if Apple continues to execute as well as it has, even that may be conservative. I only wish I had the foresight to hold the Apple stock I purchased back in 1997.