Looking Forward to the Next Round of Innovation

I was surprised by a number of conversations I had while at this year's CES. More than once the conversation turned to the staleness of the innovation shown at the show. It is true there wasn't too much to get excited about this year, but the remarks I heard seemed to indicate a belief that we may be headed for a period of stagnant innovation. I have to say that I disagree.

On Monday I wrote in my column about why I believe the PC landscape is about to change. I pointed out that the barrier to entry for creating consumer electronics has dropped to an all-time low, making it feasible for any company with enough cash and a market strategy to start creating electronics of all shapes and sizes. My overall point was that consumer electronics is ripe for new entrants. More specifically, new entrants with fresh ideas.

That being said, we have to look at innovation as pillars. There is hardware innovation, software innovation, and services innovation. One could also throw in experience innovation as a pillar, but it is intertwined with hardware, software, and services. Each of these pillars feeds off the others and spurs parallel innovations.

There are countless examples of how this chain of events works. We could look at examples from the first landline phones, to the PC, to the smart phone and more. However, I am going to use the iPad as an example.

The iPad was a hardware innovation (not a conceptual innovation) that integrated all the right pieces of hardware into a touch computing package. The iPad then set in motion the opportunity for software innovation, and eventually we will see more innovation in services as well. This leads us to what we can expect in this next round of innovation: namely, that it will come more from the software and services pillars.

This is not to say there will be zero hardware innovation. I simply believe we will see more innovation come from software and services, which will take advantage of the hardware platforms that gain mass-market traction, namely devices like the PC, tablet, smart phone, and TV. All of those devices represent the platforms of the future. So although we will see some hardware advancements in those devices, I don't believe they will be monumental so much as incremental. Screens will get better, semiconductors will get faster, devices will go through design evolution, etc.

All those hardware platform innovations will continue to lead to new software, services, and experience innovation. Take yesterday's news from Apple about iBooks 2 and the new interactive e-book experience. Tim stated that Apple just re-invented the book, and he is right. The point that needs to be made, however, is that without the iPad and the platform innovation of tablets, it would never have been possible to even think about re-inventing the book. The hardware innovation created this possibility. Tim also rightly pointed out that if publishers are not careful they could be disrupted quite easily. The hardware platform innovation leads not just to the re-birth of something like the book but to the re-birth of the publishing industry. This can also be said of the music industry, motion pictures, network TV, magazines, and perhaps even government and politics. All of these industries have the opportunity to re-invent themselves in light of new and innovative hardware.

The opportunities will be endless, and again, I am not saying that hardware innovation is dead, perhaps only that it is cyclical. The next cycle of innovation will be more focused on software and services rather than groundbreaking new hardware. We could discuss new computing hardware like the smart watch, the automobile, and more, but perhaps those are more extensions of existing platforms than platforms themselves. I will leave that topic for another column.

Apple Just Re-Invented Books

This morning’s announcement from Apple about creating tools for interactive textbooks is actually a landmark announcement for four major reasons.

The first is how these tools can impact education. Ben wrote a good piece on this, so I won't elaborate too much here, other than to say that these tools will completely re-define how textbooks can be created and distributed. They are ideal for higher-education textbooks, but Apple and their major publishing partners are even doing high-school-level interactive books that should push iPads into education circles even faster.

Related Columns: Why the iPad is an Investment in your Child’s Future

The second thing iBooks Author does is lay the groundwork for non-education publishers to create interactive eBooks as well. But, as Phil Schiller pointed out at the iBooks 2 announcement event in NYC today, this tool can be used to create any book of any kind, not just interactive books. This free authoring tool is a major step toward making Apple not only a publisher in their own right but also a distributor, on top of delivering the hardware platform optimized for enhanced eBooks in general.

While the first push with these tools will be to educational authors, it won't be long until mainstream authors start using them and adopt the iBookstore as their preferred distribution medium. And since these tools are so easy to use, authors who only write text-based content will begin playing with the integration of color drawings, illustrations, and other media to enhance their story lines, which will only work properly on an iPad.

The third thing these tools do is give Apple a serious competitive advantage over other tablet vendors. The iPad is already the leading tablet, but by developing these rich authoring tools for creating interactive and enhanced eBooks for the iPad, Apple makes the iPad even more interesting to consumers and eBook readers from all angles. To date, Apple has sold about 70+ million iPads and we expect them to sell at least that many in 2012. This means that they are rapidly increasing their user base, which in turn becomes more attractive as an eBook publishing and distribution platform for all types of authors. This move really distances them from any other tablets on the market.

But the fourth thing these tools could do is quite interesting. They have the potential of doing to the publishing industry what Apple did to the music industry. Although Apple did not invent the MP3 player, they re-invented it and then created the iTunes store, which, with the iPod, became the #1 vehicle for digital music distribution. Today, Apple owns 75-80% of the MP3 player market even though many others have tried to duplicate their success. But they created the iPod, the tools, and the distribution medium for digital music, and that helped Apple own that market. Yes, music is now available on smartphones, but it took Apple's competitors almost a decade to replicate their success, and even then, it had to come on a completely different digital device.

Now Apple has a chance to re-invent eBooks by delivering a complete ecosystem: hardware, software development tools for creating next generation interactive eBooks, a publishing and distribution medium, and a powerful hardware device for delivering this optimized content. On the surface this looks like a major move to get Apple more entrenched in the education market. But I see it as Apple's first move to disrupt the entire publishing industry. If Apple does this properly, they could become the largest publisher and distributor of eBooks and, in many ways, change the economics and overall distribution of eBooks in the future.

One more thing. If Apple was concerned about Amazon’s Kindle Fire and even Amazon’s role as a publisher and distributor of eBooks, they aren’t anymore. In fact, this is Apple’s response to the Kindle Fire and Amazon’s overall position as an eBook distributor. The key reason is that with these tools, Apple will completely raise the expectations of what should be in an eBook in the future by pushing the idea that all eBooks should have some type of rich interactive format that delivers an enhanced reading experience.

Of course, the Android or even Windows 8 tablet crowd could respond in kind, but at the very least, Apple has a two-year head start on them, and given their competitors' track record in trying to catch Apple, that lead in this area could be even longer.

I also think that this probably signals that a lower-cost iPad is on the way. For Apple to really get iPads into education and leverage this new interactive eBook development platform, they will need to have some models with lower prices. Given the tight budgets of schools and of families who could really use something like this to help their kids' education, iPads will need to be much more affordable if Apple is going to "own" this segment of the tablet market.

Why the iPad is an Investment in Your Child’s Future

Whether or not Apple uses this positioning, it is perhaps one of the best angles for the iPad. When friends, family, colleagues, or anyone else asks for my recommendation on the iPad, I always add the benefit to kids – if they have them.

From the first iPad, and ever since, I have marveled at how my kids have taken to the iPad and, more importantly, how I have been able to use very helpful apps to assist in building critical skills. My kids both used the digital version of the popular "Bob Books" to help them prepare for reading in kindergarten. I have been able to find apps at nearly every level of their education to let them engage more with relevant, age-based subject matter.

I can say with conviction that the iPad has helped my kids learn to identify objects and colors, learn to read, build observational and critical thinking skills, and more. This is not to say they could not have built these skills without the iPad, of course they could, only that the iPad has made the process more engaging, fun, and natural.

Touch Computing is the Future
When I was young, everyone was pushing to teach kids how to type, as well as overall computer literacy. If you think about it, touch computing and devices like the iPad in general make computer literacy instant. My kids didn't need to sit through computer literacy classes to start using an iPad and begin computing. They picked it up and from day one used it to its full potential – for them. I would argue this is the case with any age group.

I have written extensively on the subject of touch computing, constantly highlighting its importance to our computing future. I believe touch represents the most natural computing paradigm, along with speech computing (which has not fully come to fruition). Touch breaks down the traditional barriers to computing that the mouse and keyboard created. Mouse and keyboard computing paradigms are still relevant, but they have been relegated to task-specific usage.

Although touch computing is natural, exposing children to it at a young age will set their expectations for computing higher and potentially help create the next generation of leaders. Growing up with touch computing as the driving computing paradigm will lay an important base for our children's future.

Related Columns Mentioned:
Why Tablets Represent the Future of Computing – at TIME.com
From Click to Touch – iPad and the Era of Touch Computing – At SlashGear.com

Re-Inventing The Book
Today Apple took that truth one step further with their announcement of iBooks 2 and the iBooks Author toolset. On the surface, today's announcement is about re-inventing the textbook and providing next generation publishing tool kits. It is, however, quite a bit more. This announcement lays the foundation for the complete and total re-invention of books in general.

Up to this point, I have been disappointed with the publishing industry's strategy of simply re-purposing books in e-reading form. Last year I wrote about the need to re-invent the book, and to date it still hasn't happened.

Hopefully with the toolkits Apple has developed and will continue to develop, publishers will get savvy and start being more creative with how they package content, which is essentially all a book is—the packaging of content. This packaging of content was limited to static words on a page, but with the iPad the packaging of content is taken to a new level.

Publishers will get disrupted if they do not embrace this wholly and quickly. What is to stop smart people with a great idea from creating the next era of interactive books? If the publishing industry is not careful, they could face the same fate as the music industry, but perhaps to an even greater extent.

Interactive books are the future and the iPad is the perfect platform for them to thrive. Hopefully we will soon have not only next generation textbooks, but next generation children's books, novels, graphic novels, biographies, and more.

For now, I intend to purchase these new interactive books for my kids and get them engaging with educational content, since I truly do believe that having them use the iPad and integrating it into their educational routine is an investment in their future.

Related Columns Mentioned:
Re-Inventing the Book in the Digital Age – at SlashGear

Maybe Apple Can Fix Television; Someone Has To

Not long before his death, Steve Jobs famously told biographer Walter Isaacson that he had "finally cracked" the problem of television. No one knows quite what he meant, and Apple has shed no light on the subject, but for the sake of the future of TV, let's hope Steve left something important behind.

The LG booth at CES 2012

At the International Consumer Electronics Show, the overwhelming feeling I got about television is stasis. My colleague Patrick Moorhead has a solid piece on TV makers’ experiments with new user interfaces. But those remain experiments, with no commitment to when, or if, we will see them on TVs you can actually buy. And the user interface, while desperately in need of improvement, is only one piece of a much bigger puzzle.

 
Related Column: How Sony can beat Samsung and LG on Smart TV Interfaces
 
The sad truth is that if you had told me the TV displays in the Panasonic, Samsung, and Sony booths were actually left over from the 2011 show, I wouldn't have argued with you. The main difference was much less emphasis on 3D, which the makers now realize is just a feature, not a revolutionary product. Only LG's booth showed real commitment to 3D, and not necessarily in a good way. Its booth was a jarring riot of gimmicky 3D images coming at you from all sides, an effect allowed by LG's move to passive, battery-free glasses that don't need to sync to a particular set. Both LG and Samsung showed 55″ OLED displays, each claiming the world's largest, but to my eyes OLED remains oversaturated, garish, and a dubious improvement on LED-backlit LCD or plasma.

Even the internet-connected TVs, which the makers promoted as this year's big thing, seemed tired. Basically, they build the capability of a Roku box or other internet-connected device directly into the set. It's an improvement in convenience, mainly through getting rid of one remote, but hardly enough to send anyone out to buy a new TV.

The fix TV desperately needs is an integrated solution. I want to get all of my TV–the stuff I get over cable as well as the content streamed over the internet–in a single box that seamlessly combines all the sources. I don't much care whether this is built into the set or done in a separate box–the box would have the advantage of allowing ample local storage, while a TV solution would probably have to rely on the cloud to save recorded programs. The difference in convenience is not very significant.

Such a solution would require a new user interface, something much better than Google managed for Google TV. But much more important, and much harder, it requires an entirely new business model for content distribution. As I have written many times, the biggest impediment to this breakthrough is not technology, since the technology needed to make it happen is available today, but breaking the iron triangle of content owners, networks, and cable and satellite distributors who are prospering under the status quo. Can Apple succeed where everyone else has failed? I rather doubt it. But I'm cheering for them anyway.

How Sony can beat Samsung and LG on Smart TV Interfaces

As I wrote last week, Samsung and LG are following Microsoft's lead in future interfaces for the living room. Both Samsung and LG showed off future voice control and, in Samsung's case, far-field air gestures. Given what Samsung and LG showed at CES, I believe that Sony could actually beat both of them on ease of interaction and satisfaction.

HCI Matters
I have been researching HCI in one way or another for over 20 years as an OEM, technologist, and now analyst. I've conducted in-context, in-home testing and have sat behind the glass watching consumers struggle with, and in many cases breeze through, intuitive tasks. Human Computer Interface (HCI) is just the fancy trade name for how humans interact with electronic devices. Don't be confused by the word "computer," as it is also used for TVs, set-top boxes, and even remote controls.

Microsoft recently started using the term "natural user interface," and many in the industry have been using this term a lot lately. Whether it's HCI or NUI doesn't matter. What does matter is its fundamental, game-changing impact on markets, brands, and products. Look no further than the iPhone with its direct touch model and Microsoft Kinect with its far-field air gestures and voice control. I have been very critical of Siri's quality but am confident Apple will wring out those issues over time.

At CES 2012 last week, Samsung, Sony, and LG showed three different approaches to advanced TV user interfaces, or HCI.

Samsung
Samsung took the riskiest approach, integrating a camera and microphone array into each Smart TV. Samsung Smart Interaction can do far-field air gestures and voice control. The CES demo I saw did not go well at all; speech had to be repeated multiple times and the system performed incorrect functions. The air gestures performed even more poorly, in that they were slow and misfired often. The demoer kept repeating that this feature was optional and consumers could fall back to a standard remote. While I expect Smart Interaction to improve before shipment, there's only so much that can be done.

LG
LG used their Magic Motion Remote for voice commands and search, and as a virtual mouse pointer. The mouse pointer worked well for icons, but using it for keyboard functions didn't go well at all. Imagine clicking, button by button, "r-e-v-e-n-g-e". Yes, it's that hard. Voice command search worked better than Samsung's, but not as well as Siri, which has its own issues. It was smart to place the mic on the remote for now, as it is closer to the user and the system knows whom to listen to.

Sony
Sony, ironically, took the safe route, pairing smart TVs with a remote that reminded me of the Boxee Box remote, which has a full keypad on one side. Sony implemented a QWERTY keyboard on one side and a trackpad on the other, which can be used with a thumb, similar to a smartphone. This approach was reliable in the demo, and consumers will still be using it well after they stop using the Samsung and LG approaches. The Sony remote has a microphone, too, which I believe will be enabled for smart TV once reliability improves. Today the microphone works with a Blu-ray player using a limited command dictionary, a positive for speech control. This is similar to Microsoft Kinect, where you "say what you see".

I believe that Sony will win the 2012 smart TV interface battle due to simplicity. Consumers will be much happier with this more straightforward and reliable approach. I expect Sony to add voice control and far-field gestures once the technology works the way it should. Sony hopes that consumers will thank them, too, as they have thanked Apple, for shipping fully completed products. Samsung and LG's latest interaction models as demonstrated at CES are not ready to be unleashed on consumers, as they are clearly at the alpha or beta stage. I want to stress that winning the interface battle doesn't mean winning the war. Apple, your move.

The PC Landscape is About to Change – Here’s Why

One of my favorite quotes about change is:
“Life is a journey, and on a journey the scenery changes.”

The technology industry is also on a journey, and on that journey the scenery will change. Whether many industry insiders recognize it or not, the scenery is changing, and it's happening quickly.

The line is blurring between what is a PC and what isn’t. Devices like smart phones and tablets are proving to many that computing can take place on a number of different form factors. It is important for those who watch the personal computing industry closely to realize that the landscape as we know it is about to change drastically.

Tablets Take the Computing Challenge
It all began with the iPad. Once again, Apple released a product that challenged the industry and forced many companies to turn introspective and re-think their product strategies.

The iPad has done quite a bit more than just challenge the industry; it has also challenged consumers to re-consider what exactly a personal computer is and what they need from one. What I mean is that our research indicates many consumers bought an iPad as a partial PC replacement: they were in the market for a new PC but instead bought an iPad, relegating their old PC to backup duty for when they need a mouse and keyboard experience for certain tasks. What is interesting about that last point is that once consumers integrate an iPad, they realize they need the PC less and less for many tasks, especially when the iPad is paired with a keyboard. There are, however, a few tasks, like writing long emails or using certain software, for which these consumers still want a traditional mouse and keyboard experience; their observation is simply that those use cases do not occupy the majority of their regular computing time. For everything else, they remark, the iPad suffices the majority of the time.

As those in the industry who make PCs are already figuring out, tablets are a viable computing platform and having a tablet strategy is essential for anyone currently competing for PC market share.

We expect quite a bit of innovation in hardware, software, and services in the category over the next few years. Tablet/PC hybrids, which are tablets with detachable keyboards, could be one of the most interesting form factors we will see over the next few years. This product, if done right, will give consumers a two-in-one experience where they can have a tablet when they want it and a traditional mouse and keyboard experience when they want it, all in the same product. The big key – if done right.

Anyone Can Make PCs
Tim made the observation last week in his column that a fundamental issue within the technology industry is that the bulk of consumer product companies are simply chasing Apple rather than emerging as leaders themselves.

As companies look to duplicate the iPad and the MacBook Air this point becomes increasingly clear. What this creates is the opportunity for new entrants to create new and disruptive computing products by bringing fresh thinking to the computing landscape.

Perhaps a glimpse of this reality is Vizio's announcement that they are getting into the personal computer game. With much of the hardware design for electronics moving into the hands of the ODMs, it is now possible for anyone with a brand, a channel, and cash to start making any number of personal electronics.

This is perhaps the biggest evidence of the change we are about to see in the PC landscape: the traditional companies, historically the leaders in this category, may get displaced by new and emerging entrants.

Simply put, those who we expected to lead the PC industry may not be those who lead it in the future. The truth is innovation does not stand still, and if the traditional companies don't want to do it, someone else will.

The Simple Reason for Apple’s Success

Back in 1984, one of the major PC companies, which was spectacularly successful with their business PCs, decided that they could be just as successful if they created PCs for consumers. But they wanted them to be different from their business PCs, since they knew a consumer model would have to be priced much lower than their business models.

So they created a consumer PC that, for all intents and purposes, was a "wounded" version of their business models, with a lousy keyboard, a very weak processor, and the cheapest monitor they could dig up. To say that it was a failure would be an understatement. To make things worse, the only OS they had at the time was MS-DOS, which meant they were giving consumers an OS that was hard to use and difficult to learn from scratch. But they reasoned that since so many business users had a PC with DOS at work, they would gladly buy a similar model for their home, and since they knew DOS from the office, it only made sense that they could use it on their home PC.

Interestingly, when it failed, they were dumbfounded. They were certain that they had a winner on their hands, and some of the top management kept pushing to re-design it and take a new model back to consumers the following year. But to their credit, some of the people in the group questioned its potential and turned to outside experts for a third-party opinion on the potential of a PC for consumers at that time.

I was lucky to be one of the few outside people asked to weigh in, so I went to their HQ on the East Coast twice to give my thoughts on the subject. In the presentation and documentation I gave them, I pointed out that the major difference between business and consumer users was that business users had a serious motivation to go through the hassle of learning a text-based OS, while the mainstream consumer did not. At the time, PCs pretty much only had software for business use. I argued that for PCs to take off, there would have to be a major reason for consumers to buy them, and emphasized areas like using PCs for educational purposes and possibly entertainment. I also told them they needed to be cheap.

I drew them a picture of the traditional marketing pyramid and showed that at the top we would find the truly early adopters, who at that time were quite IT-driven. I then told them the second layer would possibly come from the worker bees whose IT leaders would push them to learn DOS and harness the PC to make their work more productive. But I told them the third layer would come from what today we call prosumers, and, even at that time, I felt it would take at least 3-5 years to get these folks excited about PCs and to get PCs to a price point they could afford.

And at the bottom layer of the pyramid, which is always the largest audience, I said they would find the mainstream consumer, but pointed out that I felt it would take at least 10 years before this crowd would finally buy into the PC vision.

I never found out how much my outside work on this project impacted their decisions but I do know that a week after I made this presentation, their consumer PC was killed off for good.

But there was another key point that I emphasized in this document. I said that the OS had to be easy to use and the PCs had to be simple enough that consumers did not need a degree in engineering to run them. And if you know the history of the PC business, you know that consumer interest in PCs for the home did not kick in until Windows 95 hit the market, exactly 10 years after this company killed their consumer PC.

Ironically, even though our PCs have gotten spiffy new user interfaces and are clearly easier to use, to the point that PCs have penetrated pretty much every home in the US in some way or another, the fact remains that owning one is actually more complicated than ever. Consumers not only have to deal with the plethora of desktops and laptops to choose from, they now also have to deal with Internet connections to the home, wireless connectivity, security, identity theft, multiple passwords, personal data in numerous non-connected files, and most recently, this new thing called the cloud.

But in the end, consumers want things simple, with some handholding when things go awry. I am convinced that this is really at the heart of Apple's success. They have one phone–the iPhone. They have one tablet–the iPad. They have two laptops, but except for sizes and the optical drives in the Pro models, they are actually all the same. And they have one major desktop–the iMac. Even in the iPod line, they have streamlined it to the iPod Touch and the Nano. If a person needs help, they have their Genius Bars and 24-hour hotlines where the people on the other end actually know how to fix your problem.

By comparison, there are now over 80 Android phones to choose from, as well as at least 5 versions of the Android OS to deal with. And in the PC space, if something goes wrong, people don't know who to go to for help. While some of the mainstream PC vendors do have 24-hour hotlines, my experience with them has been only marginally successful. And I have even stumped Best Buy's Geek Squad a few times over the last year with problems with Windows laptops.

While we can point to Apple's powerful OS, industrial designs, and ecosystems of products and services as key to their success, I actually think that, at its heart, the real reason for their amazing success is Jobs' own mantra to his team: keep things as simple and intuitive as possible. And he was smart enough to know that even then, given the nature of technology and the fact that things get more powerful and complex over time, you still have to provide a place for people to get help that is easy to access, and stock it with people who can actually help when a problem arises.

As I walked the floor of CES recently, I saw over a dozen phones at one vendor, nine new PCs from another, and five tablets from yet another, all with different versions of Android on them. While choice is great, I really think that keeping things simple and easy to understand–and buy–is even more important than choice. While Apple has powerful products in many categories, the real reason for Apple's success is that they just keep things simple.

The Most Interesting Things I Saw at CES 2012

CES is certainly the technology lover's candy store. It is nearly impossible for any one person to see everything of interest at CES. So my approach is to look for the hidden gems, or something that exposes me to a concept or an idea that could have lasting industry impact.

So in this, my Friday column, I figured I would highlight a few of the most interesting things I saw at this year's CES.

Recon Instruments GPS Goggles
The first was a fascinating product made by a company called Recon Instruments in partnership with a number of ski/snowboard goggle companies. What makes it unique and interesting is that the goggles carry Recon Instruments' modular technology, which features an LCD screen built into the goggles.

The Recon Instruments module is packed with features useful while on the slopes: things like speed, location of friends, temperature, altitude, current GPS location, vertical stats on jumps, and much more.

Think of this as your heads-up display while skiing or snowboarding. The module can also connect wirelessly to your Android phone, allowing you to see caller ID and use audio/music controls.

GoPro Hero2 + Wi-Fi Backpack
In the same sort of extreme sports technology category, I was interested in the newest GoPro, the Hero2, and its Wi-Fi backpack accessory. I wrote about the GoPro HD back in December and mentioned it as one of my favorite pieces of technology at the moment. The Hero2 and Wi-Fi backpack make it possible to use the GoPro in conjunction with a smart phone and a companion app, so you can see what you are recording or have recorded on your smart phone's display. This is useful in so many ways, but what makes it interesting is that I believe it represents a trend where hardware companies develop companion software or apps that create a compelling extension of the hardware experience. I am excited to see more companies take this approach and use software and apps to extend the hardware they create.

In this case the companion app acts as an accessory to the GoPro Hero2 hardware and provides a useful and compelling experience. Another compelling feature is that you can use your smart phone and the live link to the GoPro Hero2 to stream live video of what you are recording to the web in real-time. This would make it possible for friends, family, and loved ones to see memories being created in real-time.

Dell XPS 13 Ultrabook
Dell came out strong in the Ultrabook category and delivered possibly the best notebook they have made in some time. The XPS 13 Ultrabook's coolest features are the near edge-to-edge Gorilla Glass display, which needs to be seen to be appreciated, and the unique carbon fiber bottom, which keeps the underside cool.

The 13.3-inch display looks amazing with the Gorilla Glass, and the ultra-slim bezel packs it into a footprint closer to that of an 11-inch notebook. It surprises me to say that if I were to use a notebook other than my Air, this would be the one.

Samsung 55-inch OLED TV
A sight to behold was the Samsung 55-inch OLED TV. I had a similar experience seeing this TV as I did when I first saw an HDTV running HD content. The vivid picture quality and rich, deep color are hard to put into words. Samsung is leading the charge toward near edge-to-edge glass on TVs, and this one is even closer. The bezel and edge virtually disappear into the background, leaving just the amazing picture to enjoy.

We have been waiting for OLED displays to make it to market, if only because once they do, in five years they may actually be affordable. OLED represents one of the most exciting display technologies in a while, and it is important that the industry embrace this technology so we can get OLED on all devices with a display as fast as possible.

Samsung didn’t mention any pricing yet but said it would be available toward the end of the year. It will most likely cost an arm and a leg.

Intel's x86 Smart Phone Reference Design
Intel made a huge leap forward this CES by finally showing the world its latest 32nm "Medfield" SoC running in a smart phone reference design. I spent a few minutes with the design, which was running Android 2.3, and I was impressed with how snappy it was, including web page pinch and zoom, as well as its graphics capabilities.

Battery life is still a concern of mine, but Intel's expertise in hyper-threading and core management could help here. The most amazing thing about the smart phone reference design is that it didn't need a fan.

Motorola announced that they would bring Intel-based smart phones to market in 2012. This is one of the things I am very excited about, as it could mark a new era for Intel, and the level of competition we will see in the upcoming ARM vs. x86 battle is going to be fun to watch and great for the industry and consumers.

Motorola Droid Razr Maxx
Last but not least, the Motorola Razr Maxx has my vote for most interesting smart phone. It was a toss-up between the Razr Maxx and the Nokia Lumia 900. I simply chose the Razr Maxx due to the feature I think made it most interesting: the 3300 mAh (12.54 Whr) battery that Moto packed into the form factor of the Razr – it's just slightly thicker than the Droid Razr. Motorola is claiming that the Razr Maxx can get up to 21 hours of talk time. I talked to several Motorola executives who had been using the phone at the show, and they remarked that with normal usage during the show they were able to go several days without charging. By contrast, every day while at CES my iPhone was dead by 3 pm.
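
As a quick sanity check on that battery spec, the watt-hour figure follows directly from the capacity if you assume a nominal cell voltage of roughly 3.8 V (the voltage is my inference from the two numbers Motorola quotes, not something the company stated):

$$3300\ \text{mAh} \times 3.8\ \text{V} = 12{,}540\ \text{mWh} \approx 12.54\ \text{Whr}$$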

Image Credit - AnandTech

Making our mobile batteries last is of the utmost importance going forward. I applaud Motorola for their engineering work and creating a product that is sleek, powerful, and has superior battery life.

Going Nuclear to stop SOPA


The online news site reddit said it will invoke the "nuclear option" on Jan. 18 – next Wednesday – against two pieces of federal legislation, the House's Stop Online Piracy Act (SOPA) and its Senate cousin, the Protect Intellectual Property Act (PIPA).

For 12 hours on Wednesday, reddit's normally busy "front page of the Internet" will be blacked out and replaced by a live video feed of hearings by the House Committee on Oversight and Government Reform, which is debating proposed legislation to give the government the ability to shut down foreign websites that infringe on copyrighted material, and to penalize domestic companies that "facilitate" alleged infringement.

It remains unclear if Google, Amazon, Facebook, Twitter, Wikipedia, Craigslist, eBay, PayPal, Yahoo, and other Internet titans will join in a simultaneous blackout to protest the legislation, although the trade association that represents them says it is a possibility. "There have been some serious discussions about that," Markham C. Erickson, Executive Director and General Counsel of NetCoalition, told CNET's Declan McCullagh. NetCoalition is not involved with reddit's action next week, a spokeswoman said.

A coordinated systemwide blackout, proponents say, would demonstrate to millions of Americans what could happen to any website that carries user-generated content, if SOPA or PIPA were enacted.

In current forms, the bills would require online service providers, Internet search engines, payment providers and Internet advertising services to police their customers and banish offenders. Companies that did not comply with the government’s order to prevent their customers from connecting with foreign rogue sites would be punished.

Let's say a company like YouTube, which sees an average of 48 hours of video uploaded every minute, fails to stop one of its 490 million monthly users from uploading a chunk of video that is copyrighted by a Hollywood studio. Let's say further that one of the 400 tweets per minute on Twitter that link to YouTube videos contains a link to that copyrighted material. And maybe one of Facebook's 800 million users reposts the link. YouTube says Facebook users watch 150 years' worth of YouTube videos every day. And let's say you hear about the video and enter a search for it on Google.

Under the proposed legislation, YouTube, Twitter, Facebook and Google are responsible for keeping their users within the law. SOPA grants those companies immunity from punishment if they shut down or block suspected wrongdoers. But if they don’t shut down or block the miscreants, they could be punished themselves.

Both the House and Senate bills are strongly backed by Old Media companies, and equally opposed by New Media companies, along with an astonishing confederation of civil libertarians, venture capitalists, entrepreneurs, journalists and academics.

Both sides cast the legislation as a battle of life and death for the future of the Internet.

Opponents contend that SOPA would shut down the free flow of information and prevent Americans from fully exercising their First Amendment rights. Venture capitalists say it will kill innovation in Silicon Valley by setting up impossible burdens for the social media companies that now drive the area’s economic engine. Some critics say SOPA will hand Big Business a “kill switch” on the Internet similar to the shutoff valves used by China, Egypt and other repressive countries to stifle dissent.

Supporters of the legislation, meanwhile, say new laws are needed to fight online trafficking in copyrighted materials and counterfeit goods. No one can deny that the Internet is awash in fake Viagra and bootlegged MP3 files. Lamar Smith, the Texas Republican who sponsored SOPA, says it will stop foreign online criminals from stealing and selling America's intellectual property and keeping the profits for themselves. Unless copyright holders are given the new protections under SOPA, Mr. Smith argues, American innovation will stop, American jobs will be lost, and the American economy will continue to lose $100 billion a year to online pirates. And people will die, Mr. Smith says, if we fail to stop foreign villains from selling dangerous counterfeit drugs, fake automobile parts, and tainted baby food.

“The criticism of this bill is completely hypothetical; none of it is based in reality,” Mr. Smith told Roll Call recently. “Not one of the critics was able to point to any language in the bill that would in any way harm the Internet. Their accusations are simply not supported by any facts.”

“It's a vocal minority,” Mr. Smith told Roll Call. “Because they're strident doesn't mean they're either legitimate or large in number. One, they need to read the language. Show me the language. There's nothing they can point to that does what they say it does do.”

Who are these clueless critics who don’t know anything about the Internet?

Vint Cerf, Steven Bellovin, Esther Dyson, Dan Kaminsky and dozens of other Internet innovators and engineers wrote an open letter that said: “If enacted, either of these bills will create an environment of tremendous fear and uncertainty for technological innovation, and seriously harm the credibility of the United States in its role as a steward of key Internet infrastructure.”

AOL, LinkedIn, Mozilla, Zynga, and other Internet companies joined in an open letter of their own: "We are very concerned that the bills as written would seriously undermine the effective mechanism Congress enacted in the Digital Millennium Copyright Act (DMCA) to provide a safe harbor for Internet companies that act in good faith to remove infringing content from their sites."

Marc Andreessen, Craig Newmark, Jerry Yang, Reid Hoffman, Caterina Fake, Pierre Omidyar, Biz Stone, Jack Dorsey, Jimmy Wales and other Internet entrepreneurs contend that the bills would:

  • “Require web services to monitor what users link to, or upload. This would have a chilling effect on innovation.
  • “Deny website owners the right to due process of law.
  • “Give the U.S. government the power to censor the web using techniques similar to those used by China, Malaysia and Iran; and
  • “Undermine security online by changing the basic structure of the Internet.”

A couple of guys named Sergey Brin and Larry Page have been particularly vocal in opposing the legislation.

Well of course, Mr. Smith argues. “Companies like Google have made billions by working with and promoting foreign rogue websites, so they have a vested interest in preventing Congress from stopping rogue sites,” he said at a news conference last month. “Their opposition to this legislation is self-serving since they profit from doing business with rogue sites that steal and sell America’s intellectual property.”

I think everyone agrees that something must be done to combat rampant online piracy and the sale of bogus goods and services by foreign rogue websites. But Old Media is once again asking for heavy-handed remedies that resist rather than adapt to technological change. It tried to outlaw videocassette recorders, it tried to throw students and grandmothers into prison for downloading MP3 files, and now it wants kill-switches on the Internet. Perhaps reddit's nuclear option will be the kind of heavy-handed rebuttal we need to prompt discussions about a smarter, mutually agreeable solution.

Do Nokia and Windows Phone Have Any Hope for 2012?

There were a number of priorities for me at this year's CES. One of my top priorities was to better understand Nokia's strategy for Windows Phone and the US market. Secondary to Nokia's US strategy was Microsoft in general, and whether Windows Phone can grow its market share in the US in 2012.

As I have written before, Nokia has again entered the conversation at large, but more importantly, they have become relevant in the US smart phone market. I have expressed my belief that they have some fundamental strengths, like brand, quality design, and marketing smarts, that let them at least compete in the US.

For Nokia, this year's CES brought two important and timely US events. The first was that their US presence was solidified when their Lumia 710 officially went on sale at T-Mobile this week. The second was the announcement at the show of the Lumia 900, which will come to market on AT&T.

Both products are well designed and the Windows Phone experience is impressive.  That being said, Nokia’s and Microsoft’s challenge is primarily convincing consumers that Windows Phone is an OS worth investing in.

I use that terminology because that is exactly what an OS platform is asking consumers to do: not only invest, but allow this most personal of devices to become a part of their life.

Currently, only a small fraction of consumers are convinced that they should buy into Windows Phone 7, and it will take quite a bit more convincing for most. Nokia and Windows Phone face stiff competition from the army of Android devices and from the industry leader, Apple. If anything, Nokia and Windows Phone have a small window of opportunity to rise above what is the Android sea of sameness – but it is only a small window. This is because many more of Android's core and (on the surface) loyal partners will continue to invest resources in Windows Phone over the next few years. If Microsoft and Nokia are successful, the result should be a market that contains not only a sea of Android devices but a sea of Windows Phone devices as well.

This is why the battle will again turn to differentiation across the board on both the Android and Windows Phone platforms. I have previously dared the industry to differentiate, and this will need to be the focus going forward.

As I look at where we are right now, it appears that Nokia is faced with an unfortunate dilemma. Nokia now bears the difficult task of not only spending money to develop their brand in the US but also helping Microsoft convince consumers that Windows Phone is the right platform for them.

Microsoft, unfortunately, is not building or investing in Windows Phone consumer marketing as aggressively as they should on their own. So rather than being able to simply focus on their brand, Nokia must also invest in marketing Windows Phone. This will inevitably help Nokia, but also their competitors, in the long term.

All of this, however, presents Microsoft with what is the chance of a lifetime, and it all relates to Windows 8. The importance of Windows 8 to Microsoft seems to be widely shrugged off by many. But I believe that if Microsoft does not succeed in creating consumer demand with Windows 8, they will begin to lose OS market share even faster than they are right now.

Windows Phone's success in 2012 can pave the way for Windows 8. If Microsoft can, at the very least, create some level of interest and ultimately generate demand for Windows Phone, it will almost certainly do the same for Windows 8. This is because once you have gotten used to the user experience of Windows Phone, the transition to the Windows 8 experience is seamless.

If Microsoft can generate some level of success for Windows Phone in 2012, it will build a needed level of momentum for Windows 8, primarily because the Windows Phone and Windows 8 Metro UIs are very similar. All of these steps are necessary for Microsoft to not only create demand for their OS platforms but also create demand for their ecosystem. I have emphasized the importance of the ecosystem in past columns, and Microsoft must leverage their assets to create loyal consumers.

So what is my conclusion for 2012? Simply put, and to use a sports analogy, it is a rebuilding year for Microsoft and Nokia. Both companies need to view 2012 as a "laying-a-foundation-for-the-future" year. I do expect Windows Phone and Nokia to grow in market share in the US, but I am not sure we can count on double-digit growth. If both companies play their cards right in 2012, then 2013 will present them with the growth opportunities they both desire.

Past Columns Mentioned:
Why Nokia is Interesting
Dear Industry – Dare to Differentiate
Why It’s All About the Ecosystem

OnLive Brings Superfast Windows to the iPad

I just lost my last excuse for traveling with a laptop.

I usually find myself traveling with my MacBook Air because some tasks, such as writing this post at the Consumer Electronics Show, are just a bit more than I can manage on the iPad. But OnLive Desktop is about to change that–and could bring big changes to mobile computing for business.

OnLive is the company that did the seemingly impossible by creating a platform where high-performance games run on its servers, with just screen images transmitted to networked clients, including computers, tablets, phones, and connected TVs. By running instances of Windows on a server instead of a game, OnLive has duplicated the trick for productivity software. It works a bit like Citrix's server-based Windows, but with performance so good you think the software is running locally, and on a really fast machine at that. The key to the performance, says OnLive CEO Steve Perlman, is that it was "built against the discipline of instant-action gaming."
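
To make the thin-client idea concrete, here is a deliberately tiny sketch of the general pattern: a server loop that "renders" frames and pushes them over a socket, and a client that simply receives and displays them. This is purely illustrative and assumes nothing about OnLive's actual protocol, codecs, or infrastructure, which are proprietary; the payloads and port number below are made up for the demo.

# Illustrative sketch only: a toy, length-prefixed "frame streaming" loop over a
# local socket, standing in for rendering on a server and shipping encoded screen
# images to a thin client. None of this is OnLive's protocol or code.
import socket
import struct
import threading

HOST, PORT = "127.0.0.1", 9999   # made-up local endpoint for the demo
FRAMES = 5                       # pretend we stream five rendered frames
ready = threading.Event()


def recv_exact(conn, n):
    """Read exactly n bytes; TCP is a byte stream and may deliver partial chunks."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("stream closed early")
        buf += chunk
    return buf


def server():
    """Pretend to render frames server-side and push each one to the client."""
    with socket.create_server((HOST, PORT)) as srv:
        ready.set()                      # listening socket is up; client may connect
        conn, _ = srv.accept()
        with conn:
            for i in range(FRAMES):
                frame = f"encoded-frame-{i}".encode()   # stand-in for a compressed screen image
                conn.sendall(struct.pack("!I", len(frame)) + frame)  # 4-byte length prefix, then payload


def client():
    """Thin client: receive each frame and 'display' it (here, just print it)."""
    with socket.create_connection((HOST, PORT)) as conn:
        for _ in range(FRAMES):
            (length,) = struct.unpack("!I", recv_exact(conn, 4))
            print("displaying", recv_exact(conn, length).decode())


if __name__ == "__main__":
    threading.Thread(target=server, daemon=True).start()
    ready.wait()     # wait until the server is listening before connecting
    client()

The real service obviously swaps the toy payloads for hardware-accelerated video encoding, forwards keyboard and touch input from the client back to the server, and manages latency aggressively, which is presumably where Perlman's "instant-action gaming" discipline comes in.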

The OnLive Desktop app will be available from the iTunes Store later today. A basic version, which includes Microsoft Word, PowerPoint, and Excel and 2 gigabytes of online storage, is free.

A $10 a month premium version, which will be of more interest to serious users when it becomes available, includes the full Office suite and 50 GB of storage. It also provides for persistent user preferences in Office, superfast server-based web browsing, and the ability of users to upload applications.

Adding your own applications would add dramatically to the usefulness of the service. However, Perlman was a bit vague on exactly how it would work, especially with applications such as Adobe Creative Suite, which have complicated licensing arrangements. Autodesk applications are likely to be available pre-installed on OnLive's servers, since Autodesk is an investor in the company.

OnLive also plans to offer an enterprise version. This would allow companies to set up virtual Windows machines on OnLive servers using their own custom images, a service aimed at the heart of Citrix’s business.

When I first saw a demo of OnLive’s gaming service, I was deeply skeptical that it could work. Trying it when it first became available quickly made me a believer, and even though I have only seen the Desktop service in a demo, I have every reason to believe it will work as promised over any decent internet connection.

Actually using Office on an iPad is a bit clumsy, for reasons that have more to do with Office than with either OnLive or the iPad. Office is notoriously unfriendly to touch, even when installed on a touchscreen PC or Windows slate. When a keyboard is needed, the user has a choice between the Microsoft on-screen keyboard (the iPad keyboards lack keys that Windows needs for full functionality) or the standard Windows Text Input Panel, which can be used with any iPad-compatible pen. I think most users will be much happier with an external physical keyboard.

On the other hand, OnLive Desktop will let you display even the most complex PowerPoint slide show, including Flash video, without a hitch. (This works because the Flash is being executed on the server, with only the frames sent down to the notoriously Flash-less iPad.)

OnLive Desktop could really come into its own with Windows 8 and the expected, though as yet unannounced, touch-friendly version of Office.

The Day A Smart Phone Changed an Industry

Five years ago today, Apple introduced the iPhone and opened our eyes to the reality that the devices we considered "smart" were not really smart at all. They re-invented the smart phone and made the industry re-evaluate what we knew a smart phone to be, changing the landscape entirely.

I remember the day vividly because our team had split up and one person from Creative Strategies (not me) got to attend history in the making at the iPhone launch event, while I was stuck at CES doing my analyst duties.

I have never seen the buzz around CES so focused on something not present at the show. That year the iPhone completely overshadowed CES in a way I may never see again.

The industry leading up to the launch of the iPhone was a mess. Handset innovation was at an all-time low and purely focused on business users. Carriers controlled nearly every aspect of the device. Developers knew mobile apps were the big opportunity but had to fight for "on deck" promotion through carriers' walled gardens if they hoped to make any money. To sum it up, there was no unity, no vision, and almost zero innovation as it related to smart phones. Apple changed all that with the iPhone.

So now here we are five years later and how is the iPhone doing? If ChangeWave’s recent data is any indication, the iPhone is not only continuing to thrive five years later, but it is dominating at an unprecedented level.

Today ChangeWave released findings of a survey gauging consumers' smart phone buying intent. The results of this survey of 4,000 US-based consumers showed that among respondents planning to buy a new smart phone in the next 90 days, better than one in two, or 54%, say they'll get an iPhone. Perhaps a quote in the ChangeWave press release says it best.

“Apple has never dominated smart phone planned buying to this extent more than two months after a major new release.”

I have made this observation time and time again, but the volume that Apple ships of a single model device is unprecedented in this industry. There is no arguing that Android vendors as a whole are moving volumes. But the point has to be made that it takes an army of Android devices to compete with one single model of the iPhone. One could argue quite strongly that, five years later, the competition is just now catching up — or not, depending on your perspective.

I’m not sure any of us could have predicted that the iPhone would not only be thriving, but dominating, and expected to continue to dominate, the smart phone landscape. I truly hope the next five years bring even more excitement and innovation to this industry, and it’s probably a safe bet that Apple will continue to lead this charge.

I'll close with an anecdote that highlights my memory of the day the iPhone launched. As I mentioned, I didn't attend the iPhone launch because I decided to stay back and cover CES for our firm. After the launch, a senior executive at Apple, along with my father, called my cell from a working iPhone. That iPhone was then shown on TV and photographed with my cell phone number clearly displayed on the dial pad. For about the next month I received, on average, 2,000+ calls a day from strangers asking if I was Steve Jobs or if they could talk to Steve Jobs.

People are strange and no I didn’t change my number. My cell phone number is, however, forever engraved into some of the first media images used the day the device launched.

I guess that counts for something.

Catching up with Apple – This Year's CES Theme

CES hasn’t even started, but after sitting through various pre-show press conferences and meetings, one thing is clear: Apple is casting a very long shadow on this show. And many of the products I have seen have been various implementations of something Apple has already brought to market.

This is especially true in two categories.

First is the iPad. Pretty much every tablet vendor here hopes they can develop a tablet that is at least competitive with Apple. Some are going for cheap and basic as differentiators, while others are trying to bring out models with a unique design, tied to Android, that are still cheaper than Apple's.

The recent success of Amazon's Kindle Fire has given them another target to go after, but even this is colored by Apple's iPad and its strong success in the market. And when talking to all of these "clone" vendors, they don't even pretend they are doing something new or unique. Rather, many point out that they hope to tag along on Apple's success and tap into new users Apple may not get because of their higher prices. But make no mistake; all of these are iPad wannabes.

The second product they are all chasing is Apple’s MacBook Air. If you look at Intel’s Ultrabook program, you can see that this is a blatant attempt by the Windows crowd to ride Apple’s successful coattails in design and give their audience something that Apple has had on the market for their customers for five years. Now that is not necessarily a bad thing…it just amazes me that it has taken the WinTel world that long to even catch up with Apple.

But when talking to these vendors, who remain bullish about their offerings in either of these categories, I sense something else. While they know what Apple already has, the fact that they don’t know what Apple will have in the future weighs heavily on them. In other words, they keep waiting for the other shoe to drop.

While they rush to market with versions 1 or 2 of their tablets, they know that Apple has the iPad 3 and iPad 4 just around the corner. And while they feel Apple’s prices for the iPads are too high for most people today, they all fear that Apple could drop prices and seriously impact their chances for success. In fact, to many it is a foregone conclusion that Apple could take as much as $100 off its base entry model as soon as this year. And given Apple’s history of maximizing its supply chain and pre-purchasing components in huge quantities to get the best prices on parts, that is a real possibility.

The other thing I picked up is that many of the Ultrabook vendors are working on what are called hybrids. These are laptops where the screen pops off and turns into a tablet. The first generation of these “hybrids” sported Windows on the laptop and Android on the tablet and the two did not mix well. But the Windows world is counting on Microsoft’s Windows 8 to be the magic bullet that lets Windows 8 with its Metro UI work on the laptop and the tablet and provide a unified experience. And some of the models I have seen are quite innovative.

But this depends on Windows 8, which means that none of these can get to market until at least mid-October. And some of the vendors have a sinking feeling that Apple is working on a hybrid as well and could beat them to market. What’s worse for them is that if Apple does its hybrid as elegantly and innovatively as it normally does, some vendors I spoke with feel they would immediately be behind, even though on paper they seem to be way ahead of Apple with their hybrids.

You can even see copied elements of Apple TV in the new Google TV being shown. In fact, all of the smart TV vendors know full well that Jobs told his biographer that he “nailed” smart TV, so these vendors also know that no matter what they offer now, once Apple finally releases a TV solution, they will have to go back to their labs and make big changes just to stay competitive.

One of Apple’s core strategies is to keep ahead of the competition by at least two years. And their competitors have finally realized this truth.

That is why no matter how happy they are about their new offerings at CES this year, they are looking over their shoulders because they know with 100% certainty that Apple could do something significant at any time and send them all back to the drawing board to play catch up.

The ARM Wrestle Match

I have an unhealthy fascination with semiconductors. I am not an engineer, nor do I know much about quantum physics, but I still love semiconductors. Perhaps because I started my career drawing chip diagrams at Cypress Semiconductor.

I genuinely enjoy digging into architecture differences and exploring how different semiconductor companies look to innovate and tackle our computing problems of the future.

This is probably why I am so deeply interested in the coming processor architecture war between X86 and ARM. For the time being, however, there is a battle among several ARM vendors that I find interesting.

Qualcomm and Nvidia, at this point in time, have two of the leading solutions powering most of the cutting-edge non-Apple smartphones and tablets.

Both companies are keeping a healthy pace of innovation looking to bring next generation computing processors to the mass market.

What is interesting to me is how both of these companies are looking to bring maximum performance to their designs without sacrificing low-power efficiency, using two completely different approaches.

One problem in particular I want to explore is how each chipset tackles tasks that require computationally complex functions (like playing a game or transcoding a video) alongside ones that require less complex functions (like using Twitter or Facebook). Performing computationally complex functions generally requires a great deal of processing power and drains battery life quickly.

Not all computing tasks are computationally complex, however. Therefore the chipset that will win is one that has a great deal of performance but can also deliver that performance with very low power draw. Both Nvidia and Qualcomm license the ARM architecture, which for the time being is the high-performance, low-power leader.

Nvidia’s Tegra 3
With its next chipset, Tegra 3, Nvidia will be the first to market with a quad-core design. Tegra 3 actually has five cores, but the primary four cores will be used for computationally complex functions while the fifth core handles tasks that do not require a tremendous amount of processing power.

Nvidia calls this solution Variable SMP (symmetric multiprocessing). What makes it interesting is that it provides a strategic, task-based approach to utilizing all four cores. For example, when playing a multimedia-rich game or running other multimedia apps, all four cores can be utilized as needed, yet when loading a media-rich web page, two cores may be sufficient rather than all four. Tegra 3 manages core usage, based on the task and the amount of computing power needed, to deliver the appropriate amount of performance for the job at hand.

Tegra 3’s four cores are clocked at 1.4GHz in “single core mode” and 1.3GHz when more than one core is active. The fifth core runs at 0.5GHz and is used for things like background tasks, active standby, and playing video or music, all things that do not require much performance. Because it runs at only 0.5GHz, this fifth core requires very little power and will cover many of the “normal” usage tasks of many consumers.

This strategic management of cores is what makes Tegra 3 interesting. The cores that run at 1.4GHz can all turn off completely when not needed, so Tegra 3 delivers performance when you need it and reserves the four cores for computationally complex tasks, which in essence saves battery life. Nvidia’s approach is clever and basically gives you a low-power single-core computer and a quad-core performance computer at the same time.
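
To make the idea a bit more concrete, here is a rough sketch in Python of how a Variable SMP style policy might route work between the low-power companion core and the four performance cores. This is purely illustrative on my part; the thresholds and function names are made up and this is not Nvidia’s actual scheduler logic.

    # Illustrative sketch of a Variable SMP style core-allocation policy.
    # Not Nvidia's implementation; the threshold and names are assumptions.
    COMPANION_MAX_LOAD = 0.15   # light work stays on the low-power fifth core
    PERF_CORES = 4              # Tegra 3's four performance cores

    def allocate_cores(estimated_load):
        """Return (core description, active core count) for a 0.0-1.0 load estimate."""
        if estimated_load <= COMPANION_MAX_LOAD:
            # Background tasks, active standby, music or video playback
            return ("companion core @ 0.5GHz", 1)
        # Scale the number of performance cores with demand; unused cores
        # are power-gated (turned off completely) to save battery.
        needed = max(1, min(PERF_CORES, round(estimated_load * PERF_CORES)))
        freq = "1.4GHz" if needed == 1 else "1.3GHz"
        return ("performance cores @ " + freq, needed)

    print(allocate_cores(0.05))   # light task (e.g. Twitter) -> companion core
    print(allocate_cores(0.45))   # media-rich web page -> two performance cores
    print(allocate_cores(0.95))   # multimedia-rich game -> all four performance cores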

Qualcomm’s S4 Chipset
Qualcomm, with their Snapdragon chipset, takes a different approach to the high-performance yet low-power goal. There are two parts of Qualcomm’s S4 Snapdragon chipset that interest me.

The first is that the S4 chipset from Qualcomm will be the first out the door on the latest ARM design, the Cortex A15. There are many advantages to this new architecture, namely that it is built on the new 28nm process technology, which provides inherent advantages in frequency scaling, power consumption, and chipset size reduction.

The second is that Qualcomm uses a proprietary technique in their chipsets called asynchronous symmetric multiprocessing, or aSMP. The advantage of aSMP is that each core can support a range of frequencies rather than being static at just one. In the case of the S4, each core has a range of 1.5GHz to 2.5GHz and can scale up and down the frequency ladder based on the task at hand.

Qualcomm’s approach to frequency scaling, built into each core, allows each core to operate at a different frequency, giving a wide range of performance and power efficiency. For tasks that do not require much performance, like opening a document or playing a simple video, the core runs at the minimum performance level and is thus power efficient. When running a task like playing a game, by contrast, the core can run at a higher frequency, delivering maximum performance.

Intelligently managing each core and scaling its frequency based on the task, independent of the other cores, is an innovative way to deliver performance while consuming less power.
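
As a thought experiment, here is a small Python sketch of per-core asynchronous frequency scaling, where each core picks its own clock from a range rather than sharing one fixed frequency. Again, this is my own illustration of the concept, not Qualcomm’s firmware; the simple linear policy and the core labels are assumptions.

    # Illustrative sketch of per-core asynchronous frequency scaling (aSMP-style).
    # The 1.5-2.5GHz range mirrors the figures cited above; the policy is made up.
    MIN_GHZ, MAX_GHZ = 1.5, 2.5

    def core_frequency(core_load):
        """Each core independently scales its clock to match its own load (0.0-1.0)."""
        return MIN_GHZ + (MAX_GHZ - MIN_GHZ) * core_load

    # Four cores doing very different work at the same instant:
    loads = {
        "core 0 (game engine)": 1.00,
        "core 1 (audio)": 0.20,
        "core 2 (idle)": 0.00,
        "core 3 (network)": 0.35,
    }

    for core, load in loads.items():
        print(core + ": " + format(core_frequency(load), ".2f") + "GHz")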

I chose to highlight Nvidia and Qualcomm in this analysis not to suggest that other silicon vendors are not doing interesting things as well. Quite the contrary: TI, Apple, Marvell, Broadcom, Samsung, and others are certainly innovating too. I chose Qualcomm and Nvidia simply because I am hearing that they are getting the majority of vendor design wins.

The Role of Software in Battery Management
Although the processor plays a key role in managing the overall power and performance of a piece of hardware, the software also plays a critical role.

Software, like the processor, needs to be tuned and optimized for maximum efficiency. If the software is not optimized as well, it can lead to significant power drains and less-than-stellar battery life.

This is the opportunity and the challenge staring everyone who makes mobile devices in the face. Choosing the right silicon and effectively optimizing the software, both the OS and the apps, is central going forward.
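
To illustrate what “optimized” means in practice, here is a tiny Python sketch of one common culprit: a tight polling loop that wakes the processor constantly versus a batched version that lets the silicon sleep between bursts of work. The function names and the 60-second interval are my own illustrative assumptions, not any particular OS’s API.

    # Illustrative sketch: why software choices matter for battery life.
    import time

    def poll_for_updates_wastefully(check_for_updates):
        # Wakes the CPU roughly 100 times per second, even when nothing has
        # changed, which keeps the chip out of its deepest low-power states.
        while True:
            check_for_updates()
            time.sleep(0.01)

    def poll_for_updates_efficiently(check_for_updates, interval_seconds=60):
        # Coalesces the same work into infrequent bursts so the processor can
        # sleep (or hand the work to a low-power core) in between.
        while True:
            check_for_updates()
            time.sleep(interval_seconds)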

I am hoping that when it comes to software both Google and Microsoft are diligently working on making their next generation operating systems intelligent enough to take advantage of the ARM multi-core innovations from companies like Qualcomm and Nvidia.

These new ARM chipset designs, combined with software that can intelligently take advantage of them, are a key element in solving our problem with battery life. For too long we consumers have had an unhealthy addiction to power cords. I hope this changes in the years to come.

Why PayPal Is a Bigger Challenge Than Yahoo

 

A month ago The Wall Street Journal had a big story headlined “War Over the Digital Wallet.” The subhead: “Google, Verizon Wireless Spar in Race to Build Mobile Payment Services.”

The article mentioned AT&T, T-Mobile, MasterCard, Visa, Citigroup, Sprint, and Apple, among others. The word “PayPal” was never mentioned, which is curious because eBay’s PayPal division is by far the global leader in electronic payments.

But not all of the media were ignoring PayPal. TechCrunch the next day carried a story that began, “Hey PayPal, do you realize people no longer trust you?” It continued: “The public’s perception is that there’s a risk in keeping money with PayPal. If something doesn’t change, startups, causes, and merchants will start processing donations and payments elsewhere.”

Something changed. PayPal’s president, Scott Thompson, quit to take over the CEO job at Yahoo!, a media company. When top executives quit, it’s usually because they want a shot at running a bigger or more interesting company. Yahoo is interesting, in the same way that train wrecks are interesting. He will be the fourth CEO of Yahoo in the past five years, not counting those who held the job on an interim basis. None of the previous CEOs, including Carol Bartz, who was fired unceremoniously in September, were able to reverse Yahoo’s seemingly inexorable slide into oblivion.

It’s hard not to chuckle at the highly respected Thompson’s statement that he was leaving PayPal to seek new challenges. “I like doing complicated, very difficult, very challenging things,” he told Reuters. There are challenges galore right under his nose at PayPal’s headquarters in San Jose.

Being ignored completely by the nation’s leading business newspaper in a major story about digital payments, when you are by far the market leader, suggests a nontrivial problem of public perception.

When a major tech blog (itself criticized recently for potential conflicts of interest) scolds that “people no longer trust you,” that stings. Do people really think that AT&T and Google are more trustworthy than PayPal to handle their electronic banking? When I look at my monthly AT&T wireless statement and ponder AT&T’s craven and almost enthusiastic cooperation with the government’s warrantless eavesdropping on American citizens, I can’t imagine ever trusting my digital wallet to a phone company.

PayPal grew impressively under Thompson’s watch at PayPal, doubling its user base to more than 100 million. PayPal in the third quarter of 2011 processed $29 billion in payments. It operates in 190 countries and 24 currencies and has 15,000 bank partners. Revenue was expected to top $4 billion in 2011, and margins were solid at close to 20 percent. PayPal has grown to the point that it now accounts for more than a third of eBay’s operating profits; I would not be surprised to see the tail wagging the dog before too long. John Donahoe, eBay’s CEO, said last year that he expected PayPal to be bigger than eBay two years from now.

Thompson, who is quite savvy about technology and commerce (“e” and otherwise), is credited with the idea to push PayPal out of the cloud and into retail stores. But Google beat him to it, in part by poaching a couple of Thompson’s top lieutenants. (PayPal’s parent, eBay, is suing Google, alleging that PayPal and Google spent two years developing a partnership, then hired PayPal’s point man, who departed with a laptop full of trade secrets; Google denies the charges.) Google then launched its own “Google Wallet” application, beating PayPal to the punch. PayPal still hasn’t articulated its “wallet” strategy.

PayPal’s push into brick-and-mortar retail stores does not appear to be going well. On a visit to PayPal headquarters a few months ago I tried to buy a cup of coffee from the café that operates in its lobby. Sorry, cash or credit cards only. PayPal was not accepted in PayPal’s own headquarters.

Ouch.

Naturally, everyone wonders what Thompson will be able to do in the Augean stables of Yahoo. It is astonishingly hard to revive a declining Internet company, and the task is made more challenging because Yahoo is a media and advertising company very different from PayPal. Both companies recognize, however, that the future belongs to the company that can harvest and sift and parse data, and that’s an area where Thompson has strong chops.

eBay’s Donahoe said he was shocked by Thompson’s sudden departure; Thompson resigned Tuesday and starts his new job at Yahoo on Monday. Donahoe himself will act as PayPal’s interim president, and he promised a “seamless transition.” The person who eventually takes the big chair at PayPal has huge challenges ahead, starting with getting PayPal accepted in its own building.

Mamas (and Dads), Help Your Babies Grow Up To Be Coders

My kids were lucky. They were born at about the same time as the Apple ][ and they grew up during the all-too-brief period when learning to program a computer was considered part of a normal elementary school education. That window only lasted from around 1980 to the early 90s, when the complexities of graphical user interfaces began to kill amateur programming.

It’s time to bring back coding as part of kids’ education. Not because knowing how to program a computer is important to using one, any more than understanding how engines work is important to driving a car. The virtue of learning programming is that it develops some very useful habits, especially clear, precise, and careful thinking.

Unlike so much else in life and education, there’s no such thing as a good-enough piece of code. It either runs or it doesn’t and it either produces a correct result or not. But coding does provide instant gratification for doing the job right. Coding problems are inherently fair and objective, giving them all the characteristics of great pedagogical tools.

I don’t have any illusions about programming returning to elementary school curricula any time soon. There’s too much competition for classroom time, and way too few qualified teachers. There’s no one lobbying for it, and no studies showing that learning programming improves scores on standardized tests (though I wouldn’t be surprised if it did.)

Fortunately, excellent free tools exist that will let kids learn programming at home. For younger children, Kodu, a project of Microsoft Research, offers a graphical, drag-and-drop approach. Kids can use it to design simple games while learning principles of programming.

[Image: A Kodu programming screen]

[Image: Interactive instruction at Codeacademy]

Lots of folks in the tech world (venture capitalist Fred Wilson, for example) responded to a campaign by Codeacademy.com by offering new year’s resolutions to revive or improve their coding skills. But I think it is even more important for kids. Codeacademy offers interactive lessons in convenient small bytes designed to teach the basics of programming in JavaScript.

(One note on learning programming: the choice of a language is largely irrelevant. The principles of programming are the same regardless of language, and the mainstream languages used today all derive their syntax from C++ and in most ways are more alike than different.)

For a deeper dive into coding, the estimable Khan Academy’s computer science section provides more formal training in coding techniques. There’s more of a do-it-yourself element to the Khan approach: to actually work the examples and do the problem sets, you’ll have to set up a Python development environment on your computer. Fortunately, that’s about a five-minute job.
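
To give a flavor of what those first exercises feel like, here is a short beginner-style example in Python. It is my own, not taken from Khan Academy’s actual problem sets, but it captures the instant, objective feedback I described above.

    def is_prime(n):
        """Return True if n is a prime number, False otherwise."""
        if n < 2:
            return False
        for divisor in range(2, int(n ** 0.5) + 1):
            if n % divisor == 0:
                return False
        return True

    # Either this prints the primes up to 30 or it doesn't -- there is no
    # partial credit, which is exactly the point.
    print([n for n in range(2, 31) if is_prime(n)])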

I learned coding in completely haphazard fashion back in the mainframe era. In those days, the only way to do anything with a computer was to program it yourself and the data processing I needed to do for an undergraduate research project forced me to learn Fortran—and debug code by reading a printout of a core dump. In truth, I never became more than a marginally adequate programmer, but I believe the experience made me a better, more analytical thinker.

My kids made better use of their opportunities. One is now a mathematician working at the boundary of math, computer science, and operations research. The other is a down-to-the-silicon operating system developer for IBM Research. They might have gotten there without their experience as young boys banging away at an Apple ][ (and later, in high school, a MicroVAX), but I think those formative experiences were critical.

So take the resolution yourself and make this the year your kids (and please, don’t forget the girls) learn to code. Some day, they’ll thank you.

Why Amazon is Not the New Apple

Over the last few months I have heard and read many comments suggesting that Amazon is the new Apple. In fact, in a very good piece in Forbes, E.D. Cain asks directly if Amazon is the new Apple. He makes some good points to suggest that Amazon is very much following in Apple’s footsteps and has even created some innovations of its own with its price check mobile app and its Kindle book purchasing process.

But I believe the answer to this question is no, Amazon is not the new Apple. The reasons for this are many. Now, don’t get me wrong. I have great respect for Amazon and Jeff Bezos. However, Jeff Bezos is not the second coming of Steve Jobs, and he would be the first one to tell you that. Amazon’s business is very different from Apple’s, and though they have some similar goals, such as creating an ecosystem of products and services for their customers, their approaches differ greatly.

Perhaps the most glaring difference is Apple’s total approach to the market. Most importantly, all of their hardware, software and services originate inside Apple. They write the OS so they can customize hardware to be maximized around their proprietary OS platform. Amazon, as well as all of the other vendors competing with Apple, must rely on Google or Microsoft for their code and are always at a disadvantage to Apple in this area.

Also, Apple has all of its design in-house, led by the recently knighted Sir Jonathan Ive. Most of the vendors have to rely on ODMs for their products, and this too is a disadvantage when it comes to industrial design and its integration with their software offerings.

While Amazon is the world’s greatest retailer, Apple’s stores are re-writing the rules of technology retailing around the world. You buy from Amazon if you know exactly what you want since you can’t touch or feel the products online. But the reason that Apple is driving millions of people into their stores around the world is that Apple knows full well that the majority of potential users are not tech literate and need help buying exactly what they need.

But one thing that really distinguishes Apple from Amazon and its competitors is that Apple is a leader and all of the others are followers. This started when Apple introduced the Mac and decided to use the 3.5-inch floppy, forcing competitors to kill the 5.25-inch floppy back in 1985-1986. Apple was the first to put a CD-ROM drive in Macs, in 1989, and ushered in the era of multimedia computing. By 1992, all PCs had CD-ROMs inside. In 1999 Apple added color to PC cases and created the all-in-one PC. Now all PCs have color and all-in-ones rule the desktop market. Even more recently, Apple created the first real “ultrabook” with the MacBook Air and now everyone is chasing them again.

Apple’s genius is also in re-inventing products, which continues to reinforce their leadership position. They did not invent the MP3 player. They did however re-invent it. They did not invent the Smartphone. They re-invented it. They did not invent the tablet. They re-invented it. And in each of these product categories, they force their competitors to play catch up.

What’s more, Apple casts a long shadow in this area. A more current example is hybrid computing. In my 2012 predictions, I stated that we should see many hybrids (laptops with screens that come off and double as tablets) this year. But many of the vendors I talk to who have hybrids in the works have told me that their biggest fear is that Apple will do a hybrid and do it so well that it will force them back to the drawing board and put them behind, even though they thought they would be ahead. Nobody even knows if Apple is doing a hybrid, but just the threat of Apple doing one strikes fear in its competitors.

But in the end, the fact that Apple continues to play a major leadership role in the industry is the real reason Amazon and other companies, who might like to think that they could become the next Apple, are still only followers. It is unclear to me if Apple will ever give up that role given their complete control of their ecosystem and the rich talent they have inside the company. But until another company can create this same dynamic, I suspect that there will not be another company that can lay claim to being the next Apple.

New Year’s Resolutions for the Tech Industry in 2012

We thought we would recommend some New Year’s resolutions for the tech industry at large for 2012. Some of these are company specific and some are general.

From Patrick Moorhead

Tablet OEMs: Invest what it takes to create and market something dramatically valuable, demonstrable, and, most of all, differentiated. The answer lies with the usage models. The solution should solve a non-obvious problem or open up a new way of having fun. Don’t immediately dismiss ideas just because they didn’t work before or because the resources don’t appear to be there. Take some risks and partner on the gaps you cannot afford. The other option is a money-losing price war with the iPad or the Kindle Fire.

Consumer PC OEMs: Start adding incremental value over and above a convertible tablet or docked smartphone, or there may be a much smaller PC market in the future. Leverage the larger design (versus tablets) to house better hardware components which, when paired with the right software, create new experiences. Think effortless and accurate personal video face tagging, 99% accurate speech command and dictation, the highest possible quality video communications, in-home PC game streaming to phones and tablets, etc. Forget about which past usage models sold and which didn’t, because those solutions were half-baked.

Social media companies: Two different social models exist, “broadcast” and “personal”. Services like Path, while more intimate, are still broadcasting somewhat randomly to an audience that may or may not see something and for whom it may not even be relevant. In real life, there are an infinite number of “micro-circles” with varying levels of context. Companies need to grasp this concept of “personal” and build tools to leverage it.

From Steve Wildstrom

For RIM: work to salvage your enterprise customers’ investment in BlackBerry Enterprise Server infrastructure even if you can’t save their investment in BlackBerrys.

For PC OEMs: Stop trying to imitate the MacBook Air. Ultrabooks can’t win that game on price or design. Show some creativity of your own.

For tech bloggers: Stop chasing page views by running uncritically with every Apple rumor, no matter how silly, unlikely, or old.

From Ben Bajarin

For PC OEMs: Stop innovating in the rear-view mirror. Simply trying to make MacBook Air clones is not a strategy that will yield much fruit. A friend and colleague, Rob Enderle, once told me that when Toyota was grabbing market share from GM in the late 70s, GM simply tried to reverse engineer Toyota’s cars. Which meant that GM was making great 1970s cars in the 1980s; while Toyota was focused on the future, GM was focused on the present. Create value, experiment, try things that are new, and most importantly create a vision for your products’ future.

Create Features of Value: Focus on finding and creating features that your target customer base finds valuable. It is important to know what your customers want, or what kind of technology innovations you can create that solve real-world problems for consumers or make everyday tasks easier and simpler to accomplish using your technology.

From Peter Lewis

Resolved: Stay Hungry, Stay Foolish. And, as Scott McNealy says, Stay Nervous.

Resolved: In 2012, the tech industry must make computer and data security its No. 1 priority. Accelerate the use of biometric log-ins for computers and mobile devices.

Resolved: Vote against any Congressperson who votes for the House’s Stop Online Piracy Act (SOPA) or the Senate’s even-more-evil Protect IP Act (PIPA).

Resolved: We don’t say we’re e-writing someone, or e-calling, or e-reading. So let’s stop calling it e-mail and e-books and e-commerce, et cetera.

The Top Tech.pinions Columns of 2011

As we bring 2011 to a close we thought we would share the top five most popular columns of 2011. Even though our technology opinion website is only six months old, many of our columns made it around the webosphere. So here are the top five Tech.pinions columns of 2011.

1. Why Google Should Buy Motorola
At the time, we simply wrote a theoretical analysis of all the reasons why Google should buy Motorola and the benefit such an acquisition would bring to both companies. Turns out, five days later, Google actually did purchase Motorola.

2. Why Google and Microsoft Hate Siri
Siri’s potential impact on search is the subject of this column. The potential impact on Google and Microsoft in terms of search is analyzed as well. This was also the most commented-on article of the year.

3. Why We Witnessed History at the iPhone 4S Launch
History isn’t made every day, even if the past few years have felt like history in the making. A look at some of the ways Siri, as an inflection point today, could impact the future.

4. Nuance Exec on iPhone, 4S, Siri, and the Future of Speech
A great interview with Vladimir Sejnoha, chief technical officer of Nuance, as well as some analysis and commentary around the subject of voice and artificial intelligence.

5. Apple Will Re-Invent TV
A deeper look at how the television’s transformation into a platform for delivering rich software and services will lead to its re-invention.

There they are, the top five most read columns of 2011. Other than our very timely Google and Motorola acquisition suggestion, it seems like Apple was yet again a hot topic in 2011. Looking forward to seeing what 2012 will bring!

Why Microsoft should buy RIM

Three years ago, in my annual prediction list, I said that Microsoft would buy RIM. However, I also stated that this was a very wild prediction that I doubted would happen.

Last week, All Things D wrote a piece that said Microsoft and Nokia had discussed jointly buying RIM but that the talks did not go anywhere.

But if you think about it, Microsoft owning RIM, especially its customer base, makes a great deal of sense. At the moment, Microsoft’s Windows Phone is basically designed for the consumer market and has little traction in corporate offices. In fact, Apple’s iPhone is eating Windows Phone’s and RIM’s lunch in enterprise smartphone deployments. And while Google and its partners with Android smartphones are taking aim at the enterprise, their acceptance in this market has been weak up to now.

But RIM’s assets still carry significant value in the enterprise. From their secure servers to their BBM messaging service, RIM still has serious technology that draws great interest from the corporate set. But, RIM is at a major junction in their history. If they are to have any chance of growing their business, they must move their customer base from its existing Blackberry OS to one that is much more powerful and will meet the needs of their business users as smartphones get smarter. To that end, they bought QNX and are planning to migrate to this smartphone OS by sometime in 2012. But here is the rub for them. Besides being very late to the market and having only a minor ecosystem of apps and services to work with now, the investment needed to get software developers to write apps for QNX will be very steep. And given the fact that developers are already backing iOS, Android and Windows Phone 7, it will be a tough sell as well.

In the meantime, the iOS and Android ecosystems that target the enterprise are rising fast. And even though most of the apps written for Microsoft’s Windows Phone are consumer based, Microsoft too has its eyes on the corporate market.

In my view, the chance that RIM can be successful with this strategy, given its lateness in providing a powerful smartphone OS for its business users and what it would take to get software developers to back it, is marginal at best. And although RIM’s market value has taken a big hit over the last three quarters, I doubt it will recover given the difficult position the company is in considering the current competitive smartphone climate.

Consequently, this is a perfect time for Microsoft to make a serious attempt at buying RIM and use this to jumpstart their enterprise smartphone business. Interestingly, the idea of Microsoft using RIM to counter Apple’s iPhone move into the business market was at the heart of my wild prediction 3 years ago.

While RIM has been trying to move QNX into its business smartphones and getting software developers to support it with minimal success to date, Microsoft could instead move very quickly to replace RIM’s QNX with its Windows Phone 7 architecture. Then it could tell its current Windows Phone 7 software developers that it is now time to begin writing powerful business apps for this smartphone platform. I say quickly, but I realize it would take some serious software engineering to make this happen. However, Microsoft’s smartphone OS is very stable and already has strong developer support, and a move like this could make Microsoft a serious player in enterprise smartphones almost overnight.

So, will this happen? Probably not. RIM’s management seems determined to try to save the company with QNX and is hoping to get developers to support them. Good luck to them, but in my view, that ship has sailed.

But it sure would be interesting if Microsoft did buy RIM, tapped into its loyal customer base, and over time moved all of those customers to Windows Phone 7. In fact, it may be Microsoft’s only hope of gaining any ground on Apple in the enterprise and keeping Android at bay in business as well. And while it would be risky, the upside of owning RIM’s customer base and transitioning it worldwide to Windows Phone could be huge. I am sure that is what Microsoft and Nokia were thinking about when they discussed this idea recently.

But given RIM management’s current position, it seems likely that this will never happen, even though it would be the best thing for both of them.

My Favorite Piece of Tech Gear Right Now

I have nearly every gadget and gizmo imaginable. Luckily for me, analysts get great gear to review as well. A friend at a party, who knows all too well about all the great tech gear I get to play with, asked me what my favorite was at the moment. I didn’t even hesitate and said my GoPro HD.

Before I go further you have to understand that I make a lot of home movies and take a lot of pictures. For me preserving memories is a very high priority. So I’m that dad that takes pictures and video in a simple attempt to preserve as many memories as possible and is always looking for a great moment to capture.

The GoPro HD was designed primarily with extreme adrenaline junkies in mind (which I used to be) and not necessarily for dads who like to take video walking around Disneyland but that is exactly how I used it.

Convenient Hands Free Video Recording
One of the problems with taking a lot of video to capture memories and moments is that you often miss the actual moment. You are so focused on holding the video camera or camera phone and making sure the moment is in focus and captured accordingly that you are staring at the moment through the phone or video camera lens.

I constantly see others trying to record a moment on video, while simultaneously trying to look over the camera so they can see the moment first hand. All the while looking back and forth between the video camera and watching what they are trying to record.

This is what a wearable recording device solves. It gives the wearer the ability to record from a first-person point of view while also being able to focus on the moment.

One particular operating challenge they solved was how to operate the camera with only one hand and without looking at it. This is needed because more often than not the camera and casing are either on your head with the head strap or mounted to your helmet. Understanding this, the team at GoPro made the device dead simple to operate. One button turns it on, and the other starts and stops the recording. The on/off button also allows you some menu customization, but I rarely use it for that.

The GoPro was specifically designed for extreme sports enthusiasts, and of course it works brilliantly for this use case. I use it frequently when I ride ATVs. I, however, found it interesting how useful it is for non-extreme sporting events and everyday life: swimming in the ocean or a pool with my kids, riding roller coasters and other rides at Disneyland, or biking with the family in Tahoe. Although these were not the primary use cases marketed for the GoPro, I am convinced that even for the non-extreme-sports junkie this wearable recording device is an easy and convenient way to capture great and unique video.

The only dilemma you have to overcome to use it outside of extreme sports is the odd looks people give you when you go out in public with a camera mounted to your head. Here is a slightly embarrassing picture of me on the Tea Cups at Disneyland sporting the GoPro.

The key is to not take yourself too seriously.

The camera, while in its case, is very durable. It is waterproof, sand proof, dust proof, tree branch proof (since I whacked it on a low-hanging tree branch while riding my ATV on a trail), and a whole lot more.

So why is the GoPro HD my favorite piece of tech right now? The answer is simple. Most of what I have in the way of tablets, notebooks, smart phones–and more–are personal electronics enjoyed mostly by me. The GoPro, however, although used by me, produces things that can be enjoyed by everyone. It enables a memorable and shared experience that is fun and entertaining. This is what makes it great. It is fun to use, I am having fun when I use it, and it produces content that can be shared, consumed, and valued by my family and friends. Therefore everyone wins–not just me.

The Tech.pinions Predictions For 2012

It’s fun to make predictions. Luckily none of us is in the predictions business, but it’s fun to analyze, speculate, and simply hope for interesting things to come prior to each new year. This year, rather than have each of our columnists write a number of predictions, we decided to have each submit two. So below, for your reading pleasure, are our bold proclamations for the technology industry in 2012.

Peter Lewis

1) The existence of the Higgs Boson, also known as “the God particle,” finally will be confirmed in 2012 as the Large Hadron Collider (LHC) at CERN in Geneva ramps up to full power. Not to be confused with the Higgs Boston, which confers Mass. to Beantown – I’d love to take credit for that line, but The Onion beat me to it – the Higgs Boson is a theoretical subatomic particle whose existence would take humankind a step or two closer to understanding the very nature of matter, the mysteries of space and time, and the future of the universe, which could come in handy in case you’re trying to decide whether to buy or rent. This very tiny particle will be the biggest science story of the coming year. At the very least, it will justify the estimated $4.4 billion cost of one of the largest and most complex pieces of technology ever built, not counting Windows Vista.

2) This was the year of Big Data and Cloud Computing. Next year will be the year of trying to actually move Big Data through the Cloud at useful speeds. Scientists in 2012 will achieve a breakthrough in sustained data transfer speeds on wide-area networks, paving the way for government and academic transfer rates approaching 100 gigabits per second. Unfortunately, you’ll be very old, or perhaps even up in the clouds yourself, by the time such speeds are available to personal computer and mobile device users. In theory, you’ll be able to download the entire Library of Netflix in 14.4 seconds, but. In practice. Your movie. Will. Download. And download and. (Go get a cup of coffee.) Download. Like. This. On the bright side: I predict that the average broadband speed in the United States in 2012 will finally catch up to the average broadband speed in South Korea in 2002.

Tim Bajarin

1) Netbooks will make a comeback.
In 2011, netbooks fell out of favor with consumers as tablets became the hot mobile product. The education market is still interested, though. If vendors bring out netbooks that look more like Ultrabooks but are priced between $299 and $350, these types of products could strike a chord with consumers again. Of course, they would have lower-end processors, a shortage of memory, Android as the OS, and could even just ship with the Chrome browser on them.
Although they may only be a small part of the PC shipment mix, I believe there is still real interest in a lightweight, very low-cost laptop. While Ultrabooks will fit the bill for those with more cash on hand, a fresh generation of netbooks could find new life at the very low-end of the laptop market.

2) Ultrabook-tablet combo devices will become a big hit.
Ultrabooks with detachable screens that turn into tablets could be the sleeper hit of 2012. Also known as hybrids, the early models of this concept used an illogical mix of operating systems: Windows when in PC mode and Android when in tablet mode. But by the year’s end, both Windows 8 for tablets and Windows 8 for laptops will be out, and these hybrids will be completely compatible. I expect to see solid models of this type of hybrid by the fourth quarter.

Patrick Moorhead

1) Smartphones and Tablets erode PCs even more than expected
Smartphones and tablets will disrupt consumer PC sales even more than anyone predicted. The “modularity effect” will start to take hold, where smartphones and tablets, when wirelessly connected to large displays and full-sized input devices, can replace a PC for basic usage models. That segment of consumers will be willing to pay even more for their smartphones and even less for their PCs.

2) Auto check-in subsidized phone or service launched
The first phones with private “auto check-ins” for stores, restaurants, bars, coffee shops, malls, and gas stations will be launched in exchange for an additional $49-$99 subsidy. Competitive deals and loyalty benefits will be presented to the consumer based upon where they are checking in. The auto check-in will only automatically be shared with the company providing the subsidy and not be public, unless the consumer decides so. The phone will be marketed to middle-income, younger consumers who are willing to trade privacy and advertising for cash.

Steve Wildstrom

1) A major professional sports league will do a deal with Microsoft for over-the-top streaming of live games via Xbox. This will be a major step in breaking the iron triangle of content owners, networks, and cable/satellite distributors and will increase Microsoft’s lead over Apple and Google in streaming content.

2) The U.S. government will conclude its antitrust investigation of Google without bringing any charges. The EU, however, may take a harder line, so Google won’t be out of the woods.

Ben Bajarin

1) Google will sell the Motorola hardware division. When I wrote back in August about why Google should buy Motorola, I didn’t intend it to be a prediction. Even though a week later they actually did buy Motorola. For me it was more of a theoretical analysis of what I thought Google should do and what would be best long-term for Motorola. Given that the patents are what Google is claiming is most valuable to them, once the acquisition is complete and the active lawsuits are settled, Google can legally sell the hardware division and still keep the patents for future protection. If Google truly wants to maintain good relations with their customers, it behooves them to get rid of the Motorola hardware business.

Although I wouldn’t sell this business until 2013 if I were Google, just in case current partners like HTC and Samsung begin to shift their loyalty to Windows Phone or even perhaps webOS. That would inevitably hurt Google’s market share and could lead them to go the vertical route, which they would need the Motorola hardware division to do.

2) Google will launch a Chrome based tablet, probably called the Chromepad. It will be priced at $99 and only be used for browsing the web and web services through Google’s Chrome OS. It will be highly disruptive and usher in the era of low-priced, web and web app only connected tablets.

BONUS Far Out Prediction

I’d like to throw in a bonus wild prediction. I think it would be great and completely re-shape the broadcast and over-the-top TV landscape. Microsoft will buy DirecTV and integrate it with the XBOX 360 and all future US-based XBOX’s going forward.

From all of us at Tech.pinions, Happy Holidays and have a great New Year’s.

The NTSB’s Cluelessness Could Actually Hurt Car Safety

In Maryland, where I live, it is illegal to have a phone in your hand to talk or text while driving. But it seems that maybe one in four drivers I see on the road have a phone to an ear–and often, they are driving really badly. I fully support the notion that people should not phone and drive. But I think the recent call by the National Transportation Safety Board to ban the use of portable electronics by drivers is seriously misguided.

It’s a little hard to tell what the NTSB, whose powers are purely advisory, not regulatory, wants states to do, since its recommendations seem to consist of a vague press release. Its justification for the recommendation consists of a string of scary anecdotes, the primary one being a horrifying tale of a driver who caused a multiple-fatality accident after sending 11 text messages in 11 minutes. Quoting the press release:

The safety recommendation specifically calls for the 50 states and the District of Columbia to ban the nonemergency use of portable electronic devices (other than those designed to support the driving task) for all drivers. The safety recommendation also urges use of the NHTSA model of high-visibility enforcement to support these bans and implementation of targeted communication campaigns to inform motorists of the new law and heightened enforcement.

This is a truly bad idea for a number of reasons. First, the NTSB doesn’t tell us what a “portable electronic device” is or what it means for it to be “designed to support the driving task.” Are navigation devices, which seem to me to support driving, acceptable? Is it OK to type in your destination on some navigation device’s horrible keyboard while tooling down the road? What about speaking your destination to Google Maps on an  Android phone?

Second, the ban is unenforceable. Judging by what I see every day, the police cannot or will not enforce the laws already on the books.  Broadening the law, especially if it is complicated by making fine distinctions about what devices are permissible, will only make things worse. In my Acura TL, I can make or receive a call without taking even one hand off the wheel through a combination of buttons built into the steering wheel and voice control. Presumably, the NTSB recommendation would make using it illegal. But no law enforcement officer could ever say with certainty that I was talking on the phone while driving, and I can destroy the evidence at the push of a button. (Cathy Gellis discusses the legal and enforcement issues in more depth here.)

But finally, and most important, the NTSB seems to have no sense whatever of the growing use of mobile phones as the data link in telematics systems for everything from entertainment to safety. The craziest idea, not included in the NTSB recommendation but discussed by the U.S. Department of Transportation, is technology that would somehow block phone transmissions from inside of moving cars.  Never mind the technical difficulties in doing this or the protests that are sure to arise from the FCC, it’s a terrible idea.

If OnStar is acceptable, even welcome, as a safety enhancement, what is wrong with a system that performs similar functions through a phone rather than a radio embedded in the car? SYNC, a collaboration between Ford and Microsoft, uses a phone to link to everything from in-car entertainment to real-time car diagnostics–and even monitoring the health of the driver.

The truth is that cars are becoming connected devices and for a whole lot of reasons, it makes more sense to use a phone for the link than building it into the car. The NTSB seems to be perfectly oblivious to this trend and in the long run, the board’s recommendation is more likely to hurt than help safety.

Distracted driving is a real menace, and phone use is a big part of the problem. Education and common sense might go a long way toward alleviating it: use a hands-free system, never text and drive, and if you must use the phone while driving, keep the conversations short and simple. The NTSB ban is the wrong way to go, and by denying the many benefits of electronics in cars, it could actually make things worse.

 

Voice Control Will Disrupt Living Room Electronics

It seems to be routine in high-tech journalism and social media now to speculate on what Apple will do next. The latest and greatest rumor is that Apple will develop an HDTV set. I wrote back in September that Apple should build a TV, given the lousy experience and Apple’s ability to fix big user challenges. What hasn’t been talked about a lot is why voice command and control makes so much sense in home electronics and why it will dominate the living room. It’s all about the content.

History of U.S. TV Content


For many growing up in the U.S., there were 4-5 stations on TV: ABC, NBC, CBS, PBS, and an independent UHF channel. If you ever wanted to know what was on, you just looked in the daily newspaper that was dropped off every morning on the front porch. Then around the early 80s cable started rolling out, and TV moved to around 10-20 channels, including ESPN, MTV, CNN, and HBO. The next step was an explosion in channels brought by analog cable, digital cable, and satellite. My cable company, Time Warner, offers 512 different channels. Add to that the unlimited number of over-the-top “channels” or titles available on Netflix, Boxee, and more, and you can easily see the challenge.

The Consumer Problem

With an unlimited amount of things to watch, record, and interact with, finding what you want to watch becomes a huge issue. Paper guides are worthless and integrated TV guides from the cable or satellite boxes are slow and cumbersome. Given the flat and long tail characteristic of choices, multi-variate and unstructured “search” is the answer to find the right content. That is, directories aren’t the answer. The question then becomes, what’s the best way to search.

The Right Kind of Search

If search is the answer, what kind of search? The answer lies in how people would want to find something. Consumers have many ways they look for things.

Some like to do surgical searching, where they know exactly what they want. They ask for “The Matrix Revolutions.” Others have a concept or idea of what they are looking for but not the exact title: “find the car movie with Will Ferrell and John Reilly,” and back come a few movies like Step Brothers and Talladega Nights. Others may search by an unlimited number of “mental genres,” categories created by the user. They may ask for “all Emmy Award winning movies between 2005 and 2010.” You get the point; the consumer is best served with answers to natural language search, and then the call to action is to get that person to the content immediately.
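
To see why the distinction matters, here is a toy Python sketch of the two search styles: an exact title lookup next to an attribute-based query like the Will Ferrell example. The tiny catalog and the matching logic are my own illustrations, not any vendor’s actual natural language engine.

    # Illustrative sketch of exact versus attribute-based content search.
    CATALOG = [
        {"title": "The Matrix Revolutions", "cast": ["Keanu Reeves"], "year": 2003},
        {"title": "Step Brothers", "cast": ["Will Ferrell", "John C. Reilly"], "year": 2008},
        {"title": "Talladega Nights", "cast": ["Will Ferrell", "John C. Reilly"], "year": 2006},
    ]

    def exact_search(title):
        # Surgical search: the viewer knows exactly what they want.
        return [m["title"] for m in CATALOG if m["title"].lower() == title.lower()]

    def attribute_search(cast=None, year_range=None):
        # Conceptual search: narrow down by attributes instead of an exact title.
        results = CATALOG
        if cast:
            results = [m for m in results if all(actor in m["cast"] for actor in cast)]
        if year_range:
            low, high = year_range
            results = [m for m in results if low <= m["year"] <= high]
        return [m["title"] for m in results]

    print(exact_search("the matrix revolutions"))
    print(attribute_search(cast=["Will Ferrell", "John C. Reilly"]))
    print(attribute_search(year_range=(2005, 2010)))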

Natural Language Voice Search and Control

The answer to the content search challenge is natural language voice search and control. That’s a mouthful, but basically, tell the TV what you want to watch and it guides you there from thousands of entry points. Two popular implementations exist today for voice search. There are others, like Dragon Naturally Speaking, but those are niche commercial plays.

Microsoft Kinect

Microsoft has done more to enhance the living room than any other company, including Apple, Roku, Boxee, and Sony. Microsoft is a leader in IPTV and the innovation leader in entertainment game consoles. With Kinect, a user can use Bing to search and find content. It works well in specific circumstances and at certain points in the experience, but it needs a lot of improvement. Bing needs to find content anywhere in the menu structure, not just at the top level. It also needs to improve its ability to work well in a living room full of viewers. Its beam-forming is awesome but needs to get better to the point that it serves as a virtual remote.

Finally, it needs to support natural language search and the ability to narrow down the choices. I have full confidence that they will add these features, but a big question is the hardware. The hardware is seven years old. Software gymnastics and offloading some processing to the Kinect module has been brilliant, but at some point, hardware runs out of gas.

Apple Siri

While certainly not the first to bring voice command and dictation to phones, Apple was the first to bring natural language to the phone. The problem with the current Siri is that it’s not connected to an entertainment database, its logic isn’t there to narrow down choices, and it isn’t connected to a TV, so once you find what you are looking for you can’t immediately switch the TV to it.

As I wrote in September (before the iPhone 4S and Siri), Apple “could master controlling the TV’s content via voice primarily.” If Apple were to build a TV, it could hypothetically leverage iPhones, iPads, and iPods to improve the voice results. While Kinect has a full microphone array and operates best at 6-8 feet, an iPhone microphone could be six inches away, which would certainly help with the “who owns the remote” problem and with voice recognition. Even better would be if multiple iOS devices could leverage each other’s sensors. That would be powerful.

While I am skeptical about driving voice control and cognition from the cloud, Apple, if it built a TV, could do more local processing and increase the speed of results. Anyone who has ever used Siri extensively knows what I am talking about here. The first few times Siri for TV fails to bring back results or says “system unavailable,” it gets shelved and never gets used again by many in the household. Part of the entertainment database needs to be local until the cloud can be 99% accurate.

What about Sony, Samsung, LG, and Toshiba?

I believe that all major CE manufacturers are working on advanced HCI techniques to control CE devices with voice and air gestures. The big question is, do they have the IP and time to “perfect” the interface before Apple and Microsoft dominate the space? There are two parts to natural language control: the “what did they say” and the “what did they mean.” Apple licenses the first part from Nuance, but the back end is Siri. Competitors could license the Nuance front end, but they would need to buy or build the “what did they mean” part.

Now that HDTV sales are slowing down, it is even harder to differentiate between HDTVs. Consumers haven’t been willing to spend more for 3D but have been willing to spend more for LED and Smart TV. Once every HDTV is LED, 3D, and “smart,” the key differentiator could become voice and air gestures. If Sony, Samsung, LG, and Toshiba aren’t prepared, their world could change dramatically, and Microsoft and Apple could have the edge.

Why The Android Update Alliance Was Doomed From the Start

When Google announced the Android Update Alliance, an initiative to bring each new Android OS release to all devices in a timely manner, it was well-intentioned but doomed from the start.

Jamie Lendino over at PC Magazine had a great column called “Google’s Android Update Alliance is Already Dead.” I recommend a read of this column in order to get some more context from the handset OEM and carrier quotes on the subject. The reality is that this alliance was flawed at a fundamental level from the beginning and was destined to fail.

There is an important element to understand about this industry and it comes down to two types of strategies to bring devices to market. The first strategy is a direct to consumer product development approach. This is the strategy most closely followed by Apple, due to the fact that they have their own retail stores and control their own retail presence. Both of those strategic points in Apple’s favor are strengths at a competitive level. In this strategy the end consumer is your customer, they are the ones you are attempting to sell directly to. When a more direct to consumer strategy is employed, a more limited product mix is possible.

The second strategy is a channel strategy. This is the strategy that many take by order of necessity. In this strategy, although devices are made for consumers, the customer is actually the channel, or the retailer and carrier. Device manufacturers actually create products specifically for the channel in the hopes that the channel can sell them to consumers. Device manufacturers are not guaranteed that the channel will sell their device or give them favorable margins on devices sold. Because of this fact, device OEMs must create a device menu in order to give many different channels the opportunity to sell different devices. The other key point in a channel strategy, is that the channel (whether it be a retailer or a carrier) is not interested in selling two products that are too similar to each other or target the same market segment. This is why we see such a heavy device mix in carrier retail for example. I empathize with companies who have to employ a channel strategy because it is very hard and very frustrating–and also very political. However, employing a channel strategy engrains in a device OEM what I call a “ship-and-forget” mentality. This is at a fundamental level why the Android Update Alliance was destined to fail.

This mindset is unfortunate but necessary to employ a successful channel strategy. Companies that make a menu of devices to sell to the channel need to move quickly to the next batch of devices and commit existing development resources to that new batch. This makes supporting legacy devices more difficult, since most of the engineering always has to move to new product development. There are fewer resources, and frankly a lower priority, for legacy devices because almost all the focus is on the future, not the past. This again is fed by the business model of those who are selling to the channel, which yields low margins but requires high volume.

It is also partially Google’s fault because they put updating and supporting devices in the hands of the OEMs. Often this is because the OEMs have changed Android slightly in order to differentiate their handsets, therefore said OEM is responsible for the engineering to get their legacy devices up to speed. It is hard to side with one or the other on this issue. Of course if no one changed Android and left it stock, it would be easier to update quickly. The only problem with that is that there is VERY little differentiation in that world and any differentiation is limited to hardware. This is the sea of sameness I talk frequently about and in the past it led to spec battles and very little innovation.

If you want to see the sea of sameness in action, go to a big box retailer who sells PCs and look at the wall of Windows machines, all running identical software thus the only difference is in hardware. Hardware differentiation alone would be a boring future.

The channel strategy that is employed by many in the industry is a simple truth about how this industry works. It has its pluses, but it also has its minuses. Vendors must differentiate, but they also have to cater to the channel. The channel, and horizontal operating system solutions, create this sea of sameness due to the nature of the business model.

Everyone from the OEMs to the channel (retailers and carriers), as well as the software platform (Google), has to align for the good of the ecosystem if this is to get any better. The only problem is, from what I see so far, they are still more misaligned than aligned.

So although it was well-intentioned, the channel strategy, combined with Google’s failure to commit more of its own resources to assist OEMs, is what keeps Android OS unity a pipe dream.