The Most Interesting Things I Saw at CES 2012

CES is certainly the technology lover's candy store. It is nearly impossible for any one person to see everything of interest at CES, so my approach is to look for the hidden gems, the things that expose me to a concept or an idea that could have lasting industry impact.

So in this, my Friday column, I figured I would highlight a few of the most interesting things I saw at this year's CES.

Recon Instruments GPS Goggles
The first was a fascinating product made by a company called Recon Instruments in partnership with a number of ski and snowboard goggle companies. What makes these goggles unique and interesting is Recon Instruments' modular technology, which builds an LCD screen right into the goggles.

The Recon Instruments module is packed with features useful on the slopes: speed, friends' locations, temperature, altitude, current GPS position, vertical stats on jumps, and much more.

Think of this as your heads-up display while skiing or snowboarding. The module can also connect wirelessly to your Android phone, letting you see caller ID and control audio and music.

GoPro Hero2 + Wi-Fi Backpack
In the same extreme-sports technology category, I was interested in the newest GoPro, the Hero2, and its Wi-Fi backpack accessory. I wrote about the GoPro HD back in December and mentioned it as one of my favorite pieces of technology at the moment. The Hero2 and Wi-Fi backpack make it possible to use the GoPro in conjunction with a smart phone and companion app, so you can see what you are recording or have recorded on your smart phone's display. This is useful in many ways, but what makes it interesting is that I believe it represents a trend: hardware companies developing companion software or apps that create a compelling extension of the hardware experience. I am excited to see more companies take this approach and use software and apps to extend the hardware they create.

In this case the companion app acts as an accessory to the GoPro Hero2 hardware and provides a useful and compelling experience. Another compelling feature is that you can use your smart phone and the live link to the Hero2 to stream what you are recording to the web in real time, making it possible for friends, family, and loved ones to watch memories as they are being created.

Dell XPS 13 Ultrabook
Dell came out strong in the Ultrabook category and created possibly the best notebook it has made in some time. The XPS 13 Ultrabook's coolest features are the near edge-to-edge Gorilla Glass display, which needs to be seen to be appreciated, and the unique carbon fiber bottom, which keeps the underside cool.

The 13.3-inch Gorilla Glass display looks amazing, and the ultra-slim bezel packs it into a chassis closer to that of an 11-inch machine. It surprises me to say that if I were to use a notebook other than my Air, this would be the one.

Samsung 55-inch OLED TV
A sight to behold was the Samsung 55-inch OLED TV. Seeing it gave me the same feeling I had when I first saw an HDTV running HD content. The vivid picture quality and rich, deep color are hard to put into words. Samsung is leading the charge toward edge-to-edge glass on TVs, and this set comes closer than ever. The bezel and edge virtually disappear into the background, leaving just the amazing picture to enjoy.

We have been waiting for OLED displays to make it to market, if only because in five years they may finally be affordable. OLED is one of the most exciting display technologies in a long while, and it is important that the industry embrace it so we can get OLED onto every device with a display as fast as possible.

Samsung didn’t mention any pricing yet but said it would be available toward the end of the year. It will most likely cost an arm and a leg.

Intel’s X86 Smart Phone Reference Design
Intel made a huge leap forward at this CES by finally showing the world its latest 32nm "Medfield" SoC running in a smart phone reference design. I spent a few minutes with the design, which was running Android 2.3, and I was impressed with how snappy it was, from pinch-and-zoom web browsing to its graphics capabilities.

Battery life is still a concern of mine, but Intel's expertise in hyper-threading and core management could help here. The most amazing thing about the smart phone reference design is that it didn't need a fan.

Motorola announced that it will bring Intel-based smart phones to market in 2012. This is one of the things I am most excited about, as it could mark a new era for Intel. The level of competition we will see in the upcoming ARM vs. X86 battle is going to be fun to watch and great for the industry and consumers.

Motorola Droid Razr Maxx
Last but not least, the Motorola Razr Maxx has my vote for most interesting smart phone. It was a toss-up between the Razr Maxx and the Nokia Lumia 900. I chose the Razr Maxx for the feature I think makes it most interesting: the 3300 mAh (12.54 Wh) battery that Motorola packed into the Razr form factor – it's just slightly thicker than the Droid Razr. Motorola claims the Razr Maxx can get up to 21 hours of talk time. I talked to several Motorola executives who had been using the phone at the show, and they remarked that with normal usage they were able to go several days without charging. By contrast, every day at CES my iPhone was dead by 3 pm.

Image Credit - AnandTech

Making our mobile batteries last is of the utmost importance going forward. I applaud Motorola's engineering work in creating a product that is sleek and powerful and has superior battery life.

Going Nuclear to stop SOPA

I'm sure this violates someone's copyright

The online news site reddit said it will invoke the "nuclear option" on Jan. 18, next Wednesday, against two pieces of federal legislation: the House's Stop Online Piracy Act (SOPA) and its Senate cousin, the Protect IP Act (PIPA).

For 12 hours on Wednesday, reddit's normally busy "front page of the Internet" will be blacked out and replaced by a live video feed of hearings by the House Committee on Oversight and Government Reform, which is debating proposed legislation to give the government the ability to shut down foreign websites that infringe copyrights, and to penalize domestic companies that "facilitate" alleged infringement.

It remains unclear if Google, Amazon, Facebook, Twitter, Wikipedia, Craigslist, eBay, PayPal, Yahoo and other Internet titans will join in a simultaneous blackout to protest the legislation, although the trade association that represents them all says it is a possibility. "There have been some serious discussions about that," Markham C. Erickson, Executive Director and General Counsel of the NetCoalition, told CNET's Declan McCullagh. The NetCoalition is not involved with reddit's action next week, a spokeswoman said.

A coordinated systemwide blackout, proponents say, would demonstrate to millions of Americans what could happen to any website that carries user-generated content, if SOPA or PIPA were enacted.

In their current forms, the bills would require online service providers, Internet search engines, payment providers and Internet advertising services to police their customers and banish offenders. Companies that did not comply with the government's order to prevent their customers from connecting with foreign rogue sites would be punished.

Let's say a company like YouTube, which takes in an average of 48 hours of new video every minute, fails to stop one of its 490 million monthly users from uploading a chunk of video that is copyrighted by a Hollywood studio. Let's say further that one of the 400 tweets per minute that link to YouTube videos contains a link to that copyrighted material. And maybe one of Facebook's 800 million users reposts the link. YouTube says Facebook users watch 150 years' worth of YouTube videos every day. And let's say you hear about the video and enter a search for it on Google.

Under the proposed legislation, YouTube, Twitter, Facebook and Google are responsible for keeping their users within the law. SOPA grants those companies immunity from punishment if they shut down or block suspected wrongdoers. But if they don’t shut down or block the miscreants, they could be punished themselves.

Both the House and Senate bills are strongly backed by Old Media companies, and equally opposed by New Media companies, along with an astonishing confederation of civil libertarians, venture capitalists, entrepreneurs, journalists and academics.

Both sides cast the legislation as a battle of life and death for the future of the Internet.

Opponents contend that SOPA would shut down the free flow of information and prevent Americans from fully exercising their First Amendment rights. Venture capitalists say it will kill innovation in Silicon Valley by setting up impossible burdens for the social media companies that now drive the area’s economic engine. Some critics say SOPA will hand Big Business a “kill switch” on the Internet similar to the shutoff valves used by China, Egypt and other repressive countries to stifle dissent.

Supporters of the legislation, meanwhile, say new laws are needed to fight online trafficking in copyrighted materials and counterfeit goods. No one can deny that the Internet is awash in fake Viagra and bootlegged MP3 files. Lamar Smith, the Texas Republican who sponsored SOPA, says it will stop foreign online criminals from stealing and selling America's intellectual property and keeping the profits for themselves. Unless copyright holders are given the new protections under SOPA, Mr. Smith argues, American innovation will stop, American jobs will be lost, and the American economy will continue to lose $100 billion a year to online pirates. And people will die, Mr. Smith says, if we fail to stop foreign villains from selling dangerous counterfeit drugs, fake automobile parts and tainted baby food.

“The criticism of this bill is completely hypothetical; none of it is based in reality,” Mr. Smith told Roll Call recently. “Not one of the critics was able to point to any language in the bill that would in any way harm the Internet. Their accusations are simply not supported by any facts.”

"It's a vocal minority," Mr. Smith told Roll Call. "Because they're strident doesn't mean they're either legitimate or large in number. One, they need to read the language. Show me the language. There's nothing they can point to that does what they say it does do."

Who are these clueless critics who don’t know anything about the Internet?

Vint Cerf, Steven Bellovin, Esther Dyson, Dan Kaminsky and dozens of other Internet innovators and engineers wrote an open letter that said: “If enacted, either of these bills will create an environment of tremendous fear and uncertainty for technological innovation, and seriously harm the credibility of the United States in its role as a steward of key Internet infrastructure.”

AOL, LinkedIn, Mozilla, Zynga and other Internet companies joined in an open letter to write, "We are very concerned that the bills as written would seriously undermine the effective mechanism Congress enacted in the Digital Millennium Copyright Act (DMCA) to provide a safe harbor for Internet companies that act in good faith to remove infringing content from their sites."

Marc Andreessen, Craig Newmark, Jerry Yang, Reid Hoffman, Caterina Fake, Pierre Omidyar, Biz Stone, Jack Dorsey, Jimmy Wales and other Internet entrepreneurs contend that the bills would:

  • "Require web services to monitor what users link to, or upload. This would have a chilling effect on innovation.
  • "Deny website owners the right to due process of law.
  • "Give the U.S. government the power to censor the web using techniques similar to those used by China, Malaysia and Iran; and
  • "Undermine security online by changing the basic structure of the Internet."

A couple of guys named Sergey Brin and Larry Page have been particularly vocal in opposing the legislation.

Well of course, Mr. Smith argues. “Companies like Google have made billions by working with and promoting foreign rogue websites, so they have a vested interest in preventing Congress from stopping rogue sites,” he said at a news conference last month. “Their opposition to this legislation is self-serving since they profit from doing business with rogue sites that steal and sell America’s intellectual property.”

I think everyone agrees that something must be done to combat rampant online piracy and the sale of bogus goods and services by foreign rogue websites. But Old Media is once again asking for heavy-handed remedies that resist rather than adapt to technological change. It tried to outlaw videocassette recorders, it tried to throw students and grandmothers into prison for downloading MP3 files, and now it wants kill switches on the Internet. Perhaps reddit's nuclear option will be the kind of heavy-handed rebuttal we need to prompt discussions about a smarter, mutually agreeable solution.

Do Nokia and Windows Phone Have Any Hope for 2012?

There were a number of priorities for me at this year's CES. One of my top priorities was to better understand Nokia's strategy for Windows Phone and the US market. Second to Nokia's US strategy was Microsoft in general, and whether Windows Phone can grow its US market share in 2012.

As I have written before, Nokia has again entered the conversation at large, but more importantly, it has become relevant in the US smart phone market. I have expressed my belief that Nokia has some fundamental strengths, like brand, quality design, and marketing smarts, that allow it to at least compete in the US.

For Nokia, this year's CES brought two important and timely US developments. The first was that its US presence was solidified when the Lumia 710 officially went on sale at T-Mobile this week. The second was the announcement at the show of the Lumia 900, which will come to market on AT&T.

Both products are well designed and the Windows Phone experience is impressive.  That being said, Nokia’s and Microsoft’s challenge is primarily convincing consumers that Windows Phone is an OS worth investing in.

I use that terminology because that is exactly what an OS platform is asking consumers to do: not only to invest, but to let this most personal of devices become a part of their lives.

Currently, only a small fraction of consumers are convinced that they should buy into Windows Phone 7, and it will take quite a bit more convincing for most. Nokia and Windows Phone face stiff competition from the army of Android devices and from industry leader Apple. If anything, Nokia and Windows Phone have a small window of opportunity to rise above the Android sea of sameness – but it is only a small window, because many more of Android's core and (on the surface) loyal partners will continue to invest resources in Windows Phone over the next few years. If Microsoft and Nokia are successful, the result should be a market that contains not only a sea of Android devices but a sea of Windows Phone devices as well.

This is why the battle will again turn to differentiation across the board on both the Android and Windows Phone platforms. I have previously dared the industry to differentiate, and this will need to be the focus going forward.

As I look at where we are right now, Nokia appears to face an unfortunate dilemma. It now bears the difficult task of not only spending money to develop its brand in the US but also helping Microsoft convince consumers that Windows Phone is the right platform for them.

Microsoft is unfortunately not investing in Windows Phone consumer marketing as aggressively as it should on its own. So rather than being able to simply focus on its own brand, Nokia must also invest in marketing Windows Phone. This will inevitably help Nokia, but it will help Nokia's competitors in the long term as well.

All of this, however, presents Microsoft with the chance of a lifetime, and it all relates to Windows 8. The importance of Windows 8 to Microsoft seems to be wildly shrugged off by many. But I believe that if Microsoft does not succeed in creating consumer demand with Windows 8, it will begin to lose OS market share even faster than it is right now.

Windows Phone’s success in 2012 can pave the way for Windows 8.  If Microsoft can, at the very least, create some level of interest and ultimately generate demand for Windows Phone, it will almost certainly do the same for Windows 8.  This is because once you have gotten used to the user experience of Windows Phone, it creates a seamless transition to the Windows 8 experience.   

If Microsoft can generate some level of success for Windows Phone in 2012, it will build needed momentum for Windows 8, primarily because the Windows Phone and Windows 8 Metro UIs are very similar. All of these steps are necessary for Microsoft to create demand not only for its OS platforms but also for its ecosystem. I have emphasized the importance of the ecosystem in past columns, and Microsoft must leverage its assets to create loyal consumers.

So what is my conclusion for 2012?  Simply put, and to use a sports analogy, it is a rebuilding year for Microsoft and Nokia.  Both companies need to view 2012 as a “laying-a-foundation-for-the-future” year.  I do expect Windows Phone and Nokia to grow in market share in the US but I am not sure if we can count on double digit growth. If both companies play their cards right in 2012, then 2013 will present them with the growth opportunities they both desire.

Past Columns Mentioned:
Why Nokia is Interesting
Dear Industry – Dare to Differentiate
Why It’s All About the Ecosystem

OnLive Brings Superfast Windows to the iPad

I just lost my last excuse for traveling with a laptop.

I usually find myself traveling with my MacBook Air because some tasks, such as writing this post at the Consumer Electronics Show, are just a bit more than I can manage on the iPad. But OnLive Desktop is about to change that, and it could bring big changes to mobile computing for business.

OnLive is the company that did the seemingly impossible by creating a platform where high-performance games run on its servers, with just screen images transmitted to networked clients including computers, tablets, phones, and connected TVs. By running instances of Windows on a server instead of a game, OnLive has duplicated the trick for productivity software. It works a bit like Citrix's server-based Windows, but with performance so good you think the software is running locally, and on a really fast machine at that. The key to the performance, says OnLive CEO Steve Perlman, is that it was "built against the discipline of instant-action gaming."

The OnLive Desktop app will be available from the iTunes Store later today. A basic version, which includes Microsoft Word, PowerPoint, and Excel and 2 gigabytes of online storage, is free.

A $10 a month premium version, which will be of more interest to serious users when it becomes available, includes the full Office suite and 50 GB of storage. It also provides for persistent user preferences in Office, superfast server-based web browsing, and the ability of users to upload applications.

Adding your own applications would add dramatically to the usefulness of the service. However, Perlman was a bit vague on exactly how it would work, especially with applications such as Adobe Creative Suite, which have complicated licensing arrangements. Autodesk applications are likely to be available pre-installed on OnLive's servers, since Autodesk is an investor in the company.

OnLive also plans to offer an enterprise version. This would allow companies to set up virtual Windows machines on OnLive servers using their own custom images, a service aimed at the heart of Citrix’s business.

When I first saw a demo of OnLive’s gaming service, I was deeply skeptical that it could work. Trying it when it first became available quickly made me a believer, and even though I have only seen the Desktop service in a demo, I have every reason to believe it will work as promised over any decent internet connection.

Actually using Office on an iPad is a bit clumsy, for reasons that have more to do with Office than with either OnLive or the iPad. Office is notoriously unfriendly to touch, even when installed on a touchscreen PC or Windows slate. When a keyboard is needed, the user has a choice between the Microsoft on-screen keyboard (the iPad keyboards lack keys that Windows needs for full functionality) and the standard Windows Text Input Panel, which can be used with any iPad-compatible pen. I think most users will be much happier with an external physical keyboard.

On the other hand, OnLive Desktop will let you display even the most complex PowerPoint slide show, including Flash video, without a hitch. (This works because the Flash is being executed on the server, with only the frames sent down to the notoriously Flash-less iPad.)

OnLive Desktop could really come into its own with Windows 8 and the expected, though as yet unannounced, touch friendly version of Office.

Samsung & LG Validate Microsoft’s Living Room Interaction Model

Microsoft launched Kinect back in November 2010 in a move to change the man-to-machine interface between the consumer and their living room content. While incredibly risky, the gamble paid off: Kinect became the fastest-selling consumer device ever. I saw the potential after analyzing the usage models and technology for a few months after the Kinect launch, and predicted that, at the least, all DMAs would have the capability.

The Kinect launch sent shock waves through the industry because the titans of the living room, like Sony, Samsung, and Toshiba, hadn't even gotten close to duplicating or leading with voice and air-gesture techniques. With Samsung and LG announcing future TVs with this capability at CES, Microsoft's living room interaction strategy has officially been affirmed at CES and, most importantly, by the CE industry.

Samsung "Smart Interaction"

Samsung launched what it calls "Smart Interaction," which allows users to control and interact with their HDTVs. Smart Interaction lets the user control the TV with voice, air gestures, and passively with their face. The voice and air gestures operate in a manner similar to Microsoft's, in that pre-defined gestures exist for different interactions. For instance, users can select an item by grabbing it, the equivalent of clicking an icon with a remote. Facial recognition essentially "logs you in" to your profile the way a PC would, giving you your personal TV settings and the virtual remote.

A Step Further Than Microsoft?

Samsung has one-upped Microsoft on one indicator, at least publicly, with its application development model. Samsung has broadly opened its APIs via an SDK, which could pull in tens of thousands of developers. If this gains traction, we could see a future challenge arise where platforms fight over the number of apps, in the same way Apple initially trumped everyone in smartphones. The initial iPhone lure was its design, but also the hundreds of thousands of apps that were developed for it. That made Google Android look very weak until it caught up, still makes BlackBerry and Windows Phone appear weaker, and arguably dealt the death blow to HP's webOS. I believe that Microsoft is gearing up for a major "opening" of the Kinect ecosystem in the Windows 8 timeframe, where Windows 8 Metro apps can run inside the Kinect environment.

Challenges for Samsung and LG

Advanced HCI like voice and air-gesture control is a monumental undertaking and risk. Changing anything that stands between a CE user and the content is risky in that if it's not perfect, and I mean perfect, users will stop using it. Look at version 1 of Apple's Siri. Everyone who bought the phone tried it, and most stopped using it because it wasn't reliable or consistent. Microsoft Kinect has many, many contingencies that must be met for it to work well, including standing in a specific "zone" to get air gestures to register correctly. Voice control only works in certain modes, not all interactions.

The fallback Apple has is that users don't have to use Siri; it's an option, and it can be very personal in that most people use Siri when others aren't looking or listening. The Kinect fallback is a painful one, in that you wasted that cool-looking $149 peripheral. Similarly, Samsung "Smart Interaction" users can fall back to the remote, and most will initially, until it's perfected.

There are meaningful differences among the consumer audiences of Siri, Kinect, and Samsung "Smart Interaction." I argue that Siri and Kinect users are "pathfinders" and "explorers" in that they enjoy the challenge of trying new things. The traditional HDTV buyer doesn't want any pathfinding or exploring; they want to watch content, and if they're feeling adventurous, they'll go out on a limb and check sports scores. This means that Samsung's customers won't tolerate anything that just doesn't work, and they won't admire a "good try" or a Siri-style beta product.

One often-overlooked challenge in this space is content, or rather the amount of content you can actually control with voice and air gestures. Over-the-top services like Netflix and Hulu are fine if the app is resident in the TV, but what if you have a cable or satellite box, as most households do? What if you want to record something on your PVR or play specific content saved on it? This is solvable if the TV has a perfect channel guide for the STB and service provider, plus IR-blasting capabilities to talk to the box. That didn't work out too well for Google TV V1, its end users, or its partners.

This is the Future, Embrace It

The CE industry won't get this right with a broad base of consumers initially, but that won't kill the interaction model. Hardware and software developers will keep improving it until they do get it right and it truly becomes natural, consistent, and reliable. At some point in the very near future, most consumers will be able to control their HDTVs with voice and air gestures. Many won't want to, particularly those who are tech-phobic or late adopters.

In terms of industry investment, the positive part is that other devices like phones, tablets, PCs and even washing machines leverage the same interactions and technologies, so there is a lot of shared investment and shared risk. The biggest question is, will any company other than Microsoft lead the future of the living room? Your move, Apple.

The Day A Smart Phone Changed an Industry

Five years ago today Apple introduced the iPhone. On this day five years ago, Apple opened our eyes to the reality that the devices we considered “smart” were not really smart at all. They re-invented the smart phone and made the industry re-evaluate what we knew a smart phone to be, changing the landscape entirely.

I remember the day vividly because our team had split up and one person from Creative Strategies (not me) got to attend history in the making at the iPhone launch event, while I was stuck at CES doing my analyst duties.

I have never seen the buzz around CES be so focused on something not present at the show. That year the iPhone completely overshadowed CES in a way I have never seen and may never see again.

The industry leading up to the launch of the iPhone was a mess. Handset innovation was at an all-time low and purely focused on business users. Carriers controlled nearly every aspect of the device. Developers knew mobile apps were the big opportunity but had to fight for "on deck" promotion through carriers' walled gardens if they hoped to make any money. To sum it up, there was no unity, no vision, and almost zero innovation as it related to smart phones. Apple changed all that with the iPhone.

So now here we are five years later and how is the iPhone doing? If ChangeWave’s recent data is any indication, the iPhone is not only continuing to thrive five years later, but it is dominating at an unprecedented level.

Today ChangeWave released findings from a survey intended to gauge consumer smartphone buying intent. The results of this survey of 4,000 US-based consumers showed that among respondents planning to buy a new smart phone in the next 90 days, better than one in two, or 54%, say they'll get an iPhone. Perhaps a quote from the ChangeWave press release says it best.

“Apple has never dominated smart phone planned buying to this extent more than two months after a major new release.”

I have made this observation time and time again, but the volume Apple ships of a single device model is unprecedented in this industry. There is no arguing that Android vendors as a whole are moving volume. But the point has to be made that it takes an army of Android devices to compete with one single model of the iPhone. One could argue quite strongly that, five years later, the competition is just now catching up, or not, depending on your perspective.

I’m not sure any of us could have predicted that the iPhone would not only be thriving, but dominating, and expected to continue to dominate, the smart phone landscape. I truly hope the next five years bring even more excitement and innovation to this industry, and it’s probably a safe bet that Apple will continue to lead this charge.

I'll close with an anecdote that captures my memory of the day the iPhone launched. As I mentioned, I didn't attend the iPhone launch because I decided to stay back and cover CES for our firm. After the launch, a senior executive at Apple, along with my father, called my cell from a working iPhone. That iPhone was then shown on TV and photographed with my cell phone number clearly displayed on the dial pad. For about the next month I received, on average, 2,000+ calls a day from strangers asking if I was Steve Jobs or if they could talk to Steve Jobs.

People are strange and no I didn’t change my number. My cell phone number is, however, forever engraved into some of the first media images used the day the device launched.

I guess that counts for something.

Catching up with Apple – This Year's CES Theme

CES hasn’t even started, but after sitting through various pre-show press conferences and meetings, one thing is clear: Apple is casting a very long shadow on this show. And many of the products I have seen have been various implementations of something Apple has already brought to market.

This is especially true in two categories.

First is the iPad. Pretty much every tablet vendor here hopes it can develop a tablet that is at least competitive with Apple's. Some are going for cheap and basic as differentiators, while others are trying to bring out models with a unique design, tied to Android, that are still cheaper than Apple's.

The recent success of Amazon's Kindle Fire has given them another target to go after, but even that target is colored by the iPad's strong success in the market. And when talking to all of these "clone" vendors, they don't even pretend they are doing something new or unique. Rather, many point out that they hope to tag along on Apple's success and tap into new users Apple may not get because of its higher prices. But make no mistake: all of these are iPad wannabes.

The second product they are all chasing is Apple's MacBook Air. If you look at Intel's Ultrabook program, you can see that this is a blatant attempt by the Windows crowd to ride Apple's coattails in design and give their audience something Apple has had on the market for its customers for five years. Now that is not necessarily a bad thing; it just amazes me that it has taken the WinTel world that long to even catch up with Apple.

But when talking to these vendors, who are hopeful and bullish about their offerings in either of these categories, I sense something else. While they know what Apple already has, the fact that they don't know what Apple will have in the future weighs heavily on them. In other words, they keep waiting for the other shoe to drop.

While they rush to market with versions 1 or 2 of their tablets, they know that Apple has the iPad 3 and iPad 4 just around the corner. And while they feel Apple's iPad prices are too high for most people today, they all fear that Apple could drop prices and seriously impact their chances for success. In fact, to many it is a foregone conclusion that Apple could take as much as $100 off its base entry model as soon as this year. And given Apple's history of maximizing its supply chain and pre-purchasing components in huge quantities to get the best prices on parts, that is a real possibility.

The other thing I picked up is that many of the Ultrabook vendors are working on what are called hybrids. These are laptops where the screen pops off and turns into a tablet. The first generation of these “hybrids” sported Windows on the laptop and Android on the tablet and the two did not mix well. But the Windows world is counting on Microsoft’s Windows 8 to be the magic bullet that lets Windows 8 with its Metro UI work on the laptop and the tablet and provide a unified experience. And some of the models I have seen are quite innovative.

But this depends on Windows 8, which means that none of these can get to market until at least mid-October. And some of the vendors have a sinking feeling that Apple is working on a hybrid as well and could beat them to market. What's worse for them is that if Apple does its hybrid as elegantly and innovatively as it normally does, some vendors I spoke with feel they would immediately be behind, even though on paper they seem to be way ahead of Apple with their hybrids.

You can even see copied elements of Apple TV in the new Google TV being shown. In fact, all of the smart TV vendors know full well that Jobs told his biographer he had finally "cracked" the smart TV, so they also know that no matter what they offer now, once Apple finally releases a TV solution, they will have to go back to their labs and make big changes just to stay competitive.

One of Apple’s core strategies is to keep ahead of the competition by at least two years. And their competitors have finally realized this truth.

That is why no matter how happy they are about their new offerings at CES this year, they are looking over their shoulders because they know with 100% certainty that Apple could do something significant at any time and send them all back to the drawing board to play catch up.

Few Users Gobble Most of the Data: Why Are We Surprised?

Arieso, a wireless network management company, reported this week that a very small number of users consume the bulk of mobile wireless data. Specifically, Arieso found that the top 1% of users account for half of all data. For some reason, the tech world reacted with surprise.

Despite an abundance of data that threatens to drown us, most of us rely on preconceptions rather than hard numbers to form our notion of how the world works. This leads to bad business decisions and, in the public sphere, terrible policy.

Normal distribution (Wikipedia)

One of the big mistakes we keep making is assuming that the distribution of most things follows the familiar bell-shaped curve, what statisticians call a normal or Gaussian distribution. Statisticians love the normal distribution because it has wonderfully convenient mathematical properties. And while it does describe the distribution of a number of natural phenomena, the heights of people, for example, the classic bell curve is actually fairly rare in the real world.

Power law distribution (Wikipedia)

What’s much more common is what’s known informally as the 80/20 distribution, from the ancient observation that 20% of the people drink 80% of the beer. This is more properly called a power law (or Pareto) distribution. The key to a power law distribution is that the bulk of what is being measured is found at the far left of the distribution, with a long tail off to the right.

We should realize that everything from wealth to wireless data consumption follows a power law distribution and stop being surprised by the fact. In the case of wireless data, this distribution actually has important business and public policy implications. If a very small number of consumers are responsible for a very large share of data usage, it becomes fairly simple to manage any shortages through pricing policies that affect the tiny minority of mega-users without affecting most of the public.
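
To make that concrete, here is a minimal sketch of my own that samples hypothetical per-user usage from a Pareto distribution and measures how much of the total the top 1% consume. It is purely illustrative, not Arieso's data or methodology, and the shape parameter of roughly 1.2 is chosen simply so the concentration lands near the figure quoted above.

```python
# Illustrative only: a Pareto-distributed toy dataset, not Arieso's measurements.
import numpy as np

rng = np.random.default_rng(42)

alpha = 1.18         # illustrative shape; near this value the top 1% holds roughly half the total
n_users = 1_000_000  # hypothetical subscriber base

# Per-user monthly data usage in arbitrary units (classical Pareto: shift Lomax samples by 1)
usage = rng.pareto(alpha, n_users) + 1.0

usage.sort()
top_share = usage[-n_users // 100:].sum() / usage.sum()
print(f"Top 1% of users account for roughly {top_share:.0%} of all data")
```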

But neither government regulators nor wireless carriers are acting as though they understand this. We have vastly more data available to support decisions than ever in the past. But it does us no good unless we pay attention.

Amazon & the End of the Book

With the end of the road in sight for Barnes & Noble's Nook, plus doom and gloom over expected losses and lowered guidance for fiscal 2012, the company's stock fell 18 percent. The Nook was the poster child of Barnes & Noble's in-store growth strategy.

Credit: Geek Sugar

Its nemesis, Amazon, is doling out cash to authors who make their e-books available exclusively on Kindle for 90 days. Kindle Direct Publishing (or KDP, for those in the know) has set aside at least $6 million for 2012. Books can be "borrowed" for free, and authors receive royalty payments based on the popularity of their titles. This may be one more step toward the end of the bookshelf as we know it.

While Amazon erodes the viability of the physical store, the Amazon storefront is fast becoming confusing to navigate, and it is a slippery slope for authors. If we let the age-old publishing process, which allows a book to percolate (sometimes arduously) from manuscript to agent to editor to published work, fade away, who will curate our content? Can the publisher and bookstore forge a new role in the value chain?

No more rejection letters
There is the age-old tale of the rejected writer: years of shipping manila envelopes to agents, years of returned manuscripts and polite decline letters from editors. J. K. Rowling's agent submitted her wizard's tale to twelve publishing houses, and it was rejected twelve times before it finally found a home. She is in good company, as Stephen King and George Orwell were also rejected. One of Orwell's critics wrote on the back of the Animal Farm manuscript, "It is impossible to sell animal stories in the USA."
As the blog “Literary Rejections on Display” writes: “Remember this: Someone out there will always say no.” This is no longer the case.

Now there are aspirational tales such as that of Karen McQuestion who, after giving up on getting her book A Scattered Life published, managed to self-publish it and sell over 35,000 copies. There are stories of writers like Jim Kukral, who went to the web to raise funds for his next book (a remarkable $16,000 in a week). Kukral, author of This Book Will Make You Money, says, "The walls are crumbling down, and aggressive and smart entrepreneurs are running through the gates to grab their share of self-publishing gold."

But is this new business model sustainable? Is this the inevitable revolution of the masses against the traditional publishers? (Publishers who, many feel, are removed from the new realities of digital publishing.)

The answer is no.
According to R.R. Bowker, a publishing industry analyst, self-published titles in the U.S. nearly tripled to 133,036 in 2010 and will continue to grow. As with the flood of self-published apps in the iTunes Store, there comes a point where the author can no longer be found amid the huge number of books being published. Finding a publisher becomes the easy part. Selling and driving profit becomes impossible.

Self-publishing your first novel and hoping it reaches a mass audience is effectively the same as being the delusional garage app developer who, inspired by the success of Rovio's Angry Birds, decides to build a game and post it to iTunes. While Peter Vesterbacka, Rovio's Mighty Eagle, is touting his line of Red Bird sweatshirts, the developer's app will be buried deep beneath a million other assorted apps waiting for success.

The book is lost, and the digital bookstore is becoming increasingly crowded with vanity press. With triple-digit growth in self-publishing, writers are flummoxed about where to go to find an audience; with the surge of self-published books on Amazon's storefront, readers are flummoxed about where to go for quality content.

So how does the writer reach an audience? Amazon offers new reach and readers. But who is curating the explosive proliferation of content? What we collectively do not seem to understand is how the industry’s shifting roles are undermining the value chain for both the writer and the reader.

After years of battling the demons of book store conglomerates and then cloud commerce and eBook business models, the industry is teetering on reinvention.

We all know that what Amazon calls "pro-consumer" has been a major business disruptor for bookstores, and now for shoe, apparel and electronics stores. Could Amazon simply be using the book to build its m-commerce empire? Is the book industry a necessary sacrifice: mobile commerce roadkill?

Book Countdown
Here are the modified Cliff’s Notes on how the book industry turned on its ear:

1. Bad-Boy Barnes & Noble: In the '90s, Barnes & Noble opened superstore after superstore across America. It became the Wal-Mart of books, with the same vendor-facing attitude. Publishers were forced to grin and bear Barnes & Noble's harsh business terms: punishing discounts, slingshot merchandise return policies, and more.

2. Amazon Cloud: Ten years later, Amazon reinvented book browsing and shopping, and Barnes & Noble opened coffee shops and began selling household furniture. Smaller publishers and independent bookstores began to vanish.

3. The eBook: In 2007, we saw the first Kindle, the harbinger of a new power game and, more importantly, of a new relationship with the mobile consumer. The Kindle became the new storefront, further threatening the first market disruptor, Barnes & Noble. In order to promote the Kindle device, Amazon sold electronic books below wholesale prices. A tactical loss: owning the commerce platform was the ultimate reward for Amazon.

4. Macmillan's Counter-Attack: This revenue model is understandably sub-optimal for the publisher. Led by New York-based Macmillan, the industry challenged Amazon's hostile business model. Amazon pulled Macmillan content from its site. Macmillan held its ground. Amazon caved. Round one.

5. Vanity Press: In the traditional publishing relationship, the writer can expect approximately a 7.5% royalty on paperback books and, for digital, 25% of net receipts (net receipts being the roughly 70% of the retail price that the publisher receives from the retailer). Amazon offers a "publish direct" capability for writers on a 70/30 royalty share across the Kindle, the Amazon Cloud, and the free Kindle apps. Direct is an attractive option; a rough back-of-the-envelope comparison appears after this list.

The creative → agent → publisher → distributor relationship becomes disintermediated. Much-touted thriller writer John Locke joined the Kindle Million Club (authors who have sold over a million books). And then there is the tenacious Amanda Hocking, who became a successful self-published author after receiving multiple rejections from traditional houses. (However, the Million Club is an elite club, and I would hazard a guess that many other would-be writers will never go beyond vanity press.)

6. Slippery Slope: Bookstores (Barnes & Noble) and publishers (the Perseus Books Group) launch self-publishing eBook services (PubIt! and Argo Navis, respectively) with similarly flattering revenue shares. With all stakeholders playing all the roles, the value chain is breaking up.

7. The Kindle Fire: Combining commerce with the immersive Kindle experience is the final frontier. Layering cha-ching into the armchair reading experience is a natural and powerful evolution of the bookstore. Amazon is so confident that it is selling the unit at a loss ($199 for a unit that costs about $210 to build).

8. Kindle Owners' Lending Library: Amazon Prime members who own a Kindle can "borrow" one title per month from this expanding library for free. Presently, a limited number of books are available; Amazon has not received consent from many publishers to include their titles. In some cases, Amazon is simply paying the wholesale price for the book each time somebody borrows it.
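
As promised in item 5, here is a rough, hypothetical comparison of what an author takes home on a single e-book sale under the two models. The percentages come from the list above; everything else (the $9.99 price point, and the omission of advances, agent fees, delivery charges and price-band rules) is a simplifying assumption of mine.

```python
# Hypothetical back-of-the-envelope math; percentages from item 5, price chosen for illustration.
list_price = 9.99

# Traditional digital deal: the publisher receives ~70% of list from the retailer,
# and the author receives 25% of those net receipts.
traditional_author_cut = list_price * 0.70 * 0.25   # ~$1.75

# "Publish direct" (KDP-style) 70/30 split: the author keeps ~70% of list.
direct_author_cut = list_price * 0.70               # ~$6.99

print(f"traditional: ${traditional_author_cut:.2f} per copy, direct: ${direct_author_cut:.2f} per copy")
```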

The Lending Library is Napster, the original peer-to-peer music file sharing service, but legal and underwritten by Amazon. The Authors Guild, naturally, has harshly criticized this business model.

Is this an eight-bullet epitaph for the book publisher? John Biggs blogs nostalgically, “While I will miss the creak of the Village Bookshop’s old church floor, the calm of Crescent City books, and the crankiness of the Provincetown Bookshop, the time has come to move on.”

Move On? The question is where to.

What is the beleaguered publisher's new role? (Guaranteed, the solution is not for the publisher to go digital simply by offering multimedia extras such as video and audio commentary with their eBooks.) The publisher can:

1. Manage taste: The publishing industry can retain its credibility as the purveyor of content. The publisher provides rich content and is in the best position to build a long-term relationship with the customer, selling targeted stuff to this person, not once but many times.

2. Drive Subscription: Learn from mobile commerce. The mobile content aggregator never sold one ringtone (too much work for the publisher and for the buyer). The mobile content aggregator sold a subscription. The mobile consumer paid for music curation. (And a pretty penny at that.)

Perhaps we need to reconsider the idea of buying a book. Perhaps we should be buying a content subscription to chapters instead of books. Or see the book as a modern Dickensian novel serialized in mobile monthly installments.

3. Sell non-traditionally: Fight Amazon in the cloud, not the store. Publishers need to find ways to sell digital content into competing storefronts, and they need to work closely with the remaining terrestrial booksellers to help them sell through their digital storefronts.

Publishers need to be aware that the 2010s are eerily reminiscent of the music industry in the 2000s. Books have changed. Reading and commerce behavior has changed. Publishers need to reaffirm their value proposition and find a way to reintroduce their mission-critical role into the digital mall.

The ARM Wrestle Match

I have an unhealthy fascination with semiconductors. I am not an engineer, nor do I know much about quantum physics, but I still love semiconductors. Perhaps it is because I started my career drawing chip diagrams at Cypress Semiconductor.

I genuinely enjoy digging into architecture differences and exploring how different semiconductor companies look to innovate and tackle our computing problems of the future.

This is probably why I am so deeply interested in the coming processor architecture war between X86 and ARM. For the time being, however, there is a current battle among several ARM vendors that I find interesting.

Qualcomm and Nvidia, at this point in time, have two of the leading solutions powering most of the cutting-edge non-Apple smart phones and tablets.

Both companies are keeping a healthy pace of innovation, looking to bring next-generation computing processors to the mass market.

What is interesting to me is how both companies are looking to bring maximum performance to their designs without sacrificing low-power efficiency, using two completely different approaches.

One problem in particular I want to explore is how each chipset handles workloads that mix computationally complex functions (like playing a game or transcoding a video) with less complex ones (like using Twitter or Facebook). Computationally complex functions generally require a great deal of processing power and drain the battery quickly.

Not all computing tasks are computationally complex, however. Therefore the chipset that wins will be the one that offers a great deal of performance but can also deliver it with very low power draw. Both Nvidia and Qualcomm license the ARM architecture, which for the time being is the high-performance, low-power leader.

Nvidia’s Tegra 3
With its next chipset, Tegra 3, Nvidia will be first to market with a quad-core design. Tegra 3 actually has five cores: the primary four will be used for computationally complex functions, while the fifth core handles tasks that do not require a tremendous amount of processing power.

Nvidia calls this solution Variable SMP (symmetric multiprocessing). What makes it interesting is that it provides a strategic, task-based approach to utilizing the four main cores. For example, when playing a media-rich game all four cores can be engaged as needed, yet when loading a media-rich web page two cores may be sufficient. Tegra 3 manages core usage based on the task and the amount of compute power needed, delivering the appropriate amount of performance for the job at hand.

Tegra 3's four main cores are clocked at 1.4 GHz in single-core mode and 1.3 GHz when more than one core is active. The fifth core runs at 0.5 GHz and is used for things like background tasks, active standby, and playing video or music, all things that do not require much performance. Because it runs at only 0.5 GHz, this fifth core draws very little power and covers many of the "normal" usage tasks of most consumers.

The strategic management of cores is what makes Tegra 3 interesting. The cores that run at 1.4 GHz can turn off completely when not needed, so Tegra 3 delivers performance when you need it but reserves the four main cores for computationally complex tasks, which in essence saves battery life. Nvidia's approach is clever and basically gives you a low-power single-core computer and a quad-core performance computer at the same time.
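
Here is a minimal sketch of the idea, using made-up load thresholds of my own rather than Nvidia's actual scheduling logic; only the core counts and clock speeds come from the figures quoted above.

```python
# Illustrative Variable SMP-style governor (a sketch, not Nvidia's implementation).
def select_cores(load: float) -> dict:
    """Map a normalized workload (0.0-1.0) to an illustrative core configuration."""
    if load < 0.15:
        # Background tasks, standby, music or video playback: companion core only
        return {"companion_core": True, "performance_cores": 0, "freq_ghz": 0.5}
    if load < 0.40:
        # Light interactive use: one performance core at the single-core clock
        return {"companion_core": False, "performance_cores": 1, "freq_ghz": 1.4}
    if load < 0.75:
        # Media-rich web page: two cores at the multi-core clock
        return {"companion_core": False, "performance_cores": 2, "freq_ghz": 1.3}
    # Gaming or transcoding: all four performance cores
    return {"companion_core": False, "performance_cores": 4, "freq_ghz": 1.3}

for task, load in [("standby", 0.05), ("web page", 0.60), ("3D game", 0.95)]:
    print(task, select_cores(load))
```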

Qualcomm's S4 Chipset
Qualcomm, with its Snapdragon chipset, takes a different approach to the high-performance, low-power goal. Two parts of Qualcomm's S4 Snapdragon chipsets interest me.

The first is that the S4 will be the first out the door on the latest ARM architecture, the Cortex-A15 class. There are many advantages to this new generation, notably that it is built on the new 28nm process technology, which provides inherent advantages in frequency scaling, power consumption and chip size.

The second is that Qualcomm uses a proprietary technique in its chipsets called asynchronous symmetric multiprocessing, or aSMP. The advantage of aSMP is that each core's frequency can span a range of performance rather than being fixed at a single clock speed. In the case of the S4, each core can range from 1.5 GHz to 2.5 GHz and can scale up and down the frequency ladder based on the task at hand.

Qualcomm's frequency scaling is built into each core, allowing every core to operate at a different clock speed and giving a wide range of performance and power efficiency. For tasks that do not require much performance, like opening a document or playing a simple video, a core runs at the minimum level and saves power. When running a task like playing a game, a core can run at a higher frequency, delivering maximum performance.

This approach of intelligently managing each core and scaling its frequency depending on the task, independent of the other cores, is an innovative way to deliver performance while consuming less power.
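
A minimal sketch of that idea follows, with made-up per-core loads and a simple linear mapping of my own; only the 1.5 GHz to 2.5 GHz endpoints come from the description above.

```python
# Illustrative aSMP-style per-core scaling (a sketch, not Qualcomm's governor).
MIN_GHZ, MAX_GHZ = 1.5, 2.5

def core_frequency(load: float) -> float:
    """Scale one core's clock with its own load (0.0-1.0), independent of other cores."""
    load = min(max(load, 0.0), 1.0)
    return MIN_GHZ + (MAX_GHZ - MIN_GHZ) * load

# Hypothetical snapshot: one core busy with a game, the others nearly idle
for core, load in enumerate([0.95, 0.30, 0.05, 0.0]):
    print(f"core {core}: {core_frequency(load):.2f} GHz")
```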

I chose to highlight Nvidia and Qualcomm in this analysis not to suggest that other silicon vendors are not doing interesting things. Quite the contrary: TI, Apple, Marvell, Broadcom, Samsung and others are certainly innovating as well. I chose Qualcomm and Nvidia simply because I am hearing that they are getting the majority of vendor design wins.

The Role of Software in Battery Management
Although the processor plays a key role in managing the overall power and performance of a piece of hardware, the software also plays a critical role.

Software, like the processor, needs to be tuned and optimized for maximum efficiency. If the software is not optimized as well, it can cause significant power drain and result in less-than-stellar battery life.

This is the opportunity and the challenge staring everyone who makes mobile devices in the face. Choosing the right silicon and effectively optimizing the software, both the OS and the apps, is central going forward.

I am hoping that when it comes to software both Google and Microsoft are diligently working on making their next generation operating systems intelligent enough to take advantage of the ARM multi-core innovations from companies like Qualcomm and Nvidia.

These new ARM chipset designs, combined with software that can intelligently take advantage of them, are a key element in solving our battery life problem. For too long we consumers have had an unhealthy addiction to power cords. I hope this changes in the years to come.

Why PayPal Is a Bigger Challenge Than Yahoo

A month ago The Wall Street Journal had a big story headlined "War Over the Digital Wallet." The subhead: "Google, Verizon Wireless Spar in Race to Build Mobile Payment Services."

The article mentioned AT&T, T-Mobile, MasterCard, Visa, Citigroup, Sprint, and Apple, among others. The word "PayPal" never appeared, which is curious because eBay's PayPal division is by far the global leader in electronic payments.

But not all of the media were ignoring PayPal. TechCrunch the next day carried a story that began, “Hey PayPal, do you realize people no longer trust you?” It continued: “The public’s perception is that there’s a risk in keeping money with PayPal. If something doesn’t change, startups, causes, and merchants will start processing donations and payments elsewhere.”

Something changed. PayPal’s president, Scott Thompson, quit to take over the CEO job at Yahoo!, a media company. When top executives quit, it’s usually because they want a shot at running a bigger or more interesting company. Yahoo is interesting, in the same way that train wrecks are interesting. He will be the fourth CEO of Yahoo in the past five years, not counting those who held the job on an interim basis. None of the previous CEOs, including Carol Bartz, who was fired unceremoniously in September, were able to reverse Yahoo’s seemingly inexorable slide into oblivion.

It’s hard not to chuckle at the highly respected Thompson’s statement that he was leaving PayPal to seek new challenges. “I like doing complicated, very difficult, very challenging things,” he told Reuters. There are challenges galore right under his nose at PayPal’s headquarters in San Jose.

Being ignored completely by the nation’s leading business newspaper in a major story about digital payments, when you are by far the market leader, suggests a nontrivial problem of public perception.

When a major tech blog (itself criticized recently for potential conflicts of interest) scolds that "people no longer trust you," that stings. Do people really think that AT&T and Google are more trustworthy than PayPal to handle their electronic banking? When I look at my monthly AT&T wireless statement and ponder AT&T's craven and almost enthusiastic cooperation with the government's warrantless eavesdropping on American citizens, I can't imagine ever trusting my digital wallet to a phone company.

PayPal grew impressively under Thompson's watch, doubling its user base to more than 100 million. In the third quarter of 2011 it processed $29 billion in payments. It operates in 190 countries and 24 currencies and has 15,000 bank partners. Revenue was expected to top $4 billion in 2011, and margins were solid at close to 20 percent. PayPal has grown to the point that it now accounts for more than a third of eBay's operating profits; I would not be surprised to see the tail wagging the dog before too long. John Donahoe, eBay's CEO, said last year that he expected PayPal to be bigger than eBay two years from now.

Thompson, who is quite savvy about technology and commerce (“e” and otherwise), is credited with the idea to push PayPal out of the cloud and into retail stores. But Google beat him to it, in part by poaching a couple of Thompson’s top lieutenants. (PayPal’s parent, eBay, is suing Google, alleging that PayPal and Google spent two years developing a partnership, then hired PayPal’s point man, who departed with a laptop full of trade secrets; Google denies the charges.) Google then launched its own “Google Wallet” application, beating PayPal to the punch. PayPal still hasn’t articulated its “wallet” strategy.

PayPal’s push into brick-and-mortar retail stores does not appear to be going well. On a visit to PayPal headquarters a few months ago I tried to buy a cup of coffee from the café that operates in its lobby. Sorry, cash or credit cards only. PayPal was not accepted in PayPal’s own headquarters.

Ouch.

Naturally, everyone wonders what Thompson will be able to do in the Augean stables of Yahoo. It is astonishingly hard to revive a declining Internet company, and the task is made more challenging because Yahoo is a media and advertising company very different from PayPal. Both companies recognize, however, that the future belongs to the company that can harvest and sift and parse data, and that’s an area where Thompson has strong chops.

eBay’s Donahoe said he was shocked by Thompson’s sudden departure; Thompson resigned Tuesday and starts his new job at Yahoo on Monday. Donahoe himself will act as PayPal’s interim president and has promised a “seamless transition.” The person who eventually takes the big chair at PayPal has huge challenges ahead, starting with getting PayPal accepted in its own building.

Interactive TV Trends – How the TV Experience is Changing, Part III

This is the third article in a three part series discussing key trends in TV. The first article looked at how new interface technologies are enabling new ways to control our TVs. The second article focused on the multi-screen TV experience. This article focuses on how interactive TV trends are driving the need for improvements in TV image quality.

Full HD is not enough for Future TV
Some might believe our latest flat panel televisions represent the zenith of picture quality. This is not surprising given we often hear that 1080p resolution, or “Full HD,” is a “future proof” technology. The oft-cited reasoning is that for a given screen size, viewed from a normal watching distance, the acuity of the eye cannot discern resolutions beyond Full HD. Another reason Full HD is considered future proof is that only a very small percentage of video content is even broadcast at this resolution; most digital pay TV broadcasting systems transmit in lower resolution formats, so the industry is still catching up.

Certainly, no one looking to buy their next TV set should be concerned that 1080p is not good enough. Considering that people typically buy and keep a TV set for about eight years, a consumer cannot go wrong with Full HD. But for people interested in where the industry is going over the next ten years, image quality is going to see massive improvements that will make today’s TV technology look primitive.

Part of the reason we can expect big improvements in TV video quality has to do with our superior eyesight. Our capacity to see far exceeds what our TVs can display. For example, a Full HD TV displays about 2 million pixels of video information. In real life, one of our eyes processes about 250 million pixels, and since we have two eyes channeling vision to our brains, our effective vision makes use of more than 500 million pixels of video information. And while it is true that we can only discern a limited resolution from a given distance, our eyesight is also sensitive to contrast, color saturation, color accuracy and especially movement. All of these are areas where TV systems can improve.
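
To put those figures in perspective, here is a quick back-of-the-envelope check in Python. The 250-million-pixels-per-eye number is the article’s own rough claim, not a measured constant, and is used here only for comparison:

    # Rough comparison of a Full HD panel against the article's claimed visual capacity
    full_hd_pixels = 1920 * 1080                 # ~2.07 million pixels
    claimed_pixels_per_eye = 250_000_000         # figure cited in the article
    claimed_both_eyes = 2 * claimed_pixels_per_eye

    print(f"Full HD panel:        {full_hd_pixels:,} pixels")
    print(f"Claimed human vision: {claimed_both_eyes:,} pixels")
    print(f"Ratio: roughly {claimed_both_eyes / full_hd_pixels:.0f}x")   # about 240x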

Detractors may argue TVs do not have to be perfect, just a reasonable representation. Others may argue that consumers care only about TV size and price, and that image quality is not a selling point. But I argue TV image quality does matter: quality has always had to keep pace with the growing size of TV screens. TVs will continue to get larger, requiring improvements in resolution as our room sizes start to limit viewing distance. Also, the nature of interactive TV and future 3D systems will make us want to sit closer to the TV set, again mandating video quality improvements.

Interactive TVs Make You Sit Closer
Interactive TVs will bring games, virtual worlds and new video applications, drawing us physically closer to the TV screen. Gaming is a huge industry, with almost $50B spent on gaming consoles, software and accessories. Virtual world games are increasingly popular. “World of Warcraft” is a massively multiplayer online role-playing game with over 10 million subscribers. All kinds of social virtual worlds such as The Sims, Second Life, IMVU and Club Penguin are attracting millions of players. IMVU, with over 50 million registered users, is a social game where people develop personal avatars and spend time in virtual worlds chatting and interacting. While many of these games are still played on PCs, migration to the living room TV is inevitable. Console games have already shown the way; the size and immersive nature of the large screen TV will draw others into the living room as well.

3D displays will also drive the need for improved resolution and image quality. Sure, everyone hates 3D glasses, but the technology will continue to evolve, and glasses-free 3D displays will continue to improve and come down in price. There will be applications consumers demand in 3D, such as sports; people will see the advantages of watching close-up sports on a large screen display in vivid, artifact-free video.

OEMs and broadcast equipment companies are investing heavily in supplying the infrastructure to make this happen. 3D advertising will take on more importance; imagine having the option to tour a car or a house in extremely vivid 3D. On the entertainment side, movie and video directors will become much better at using 3D perspectives in ways that take advantage of image quality improvements. Today 3D effects are more of a gimmick (watch the arrow fly into the room, for example), but going forward directors will make more subtle use of 3D, adeptly drawing viewers into the film or the show. On a beautiful large screen display with ultra high resolution and image quality, viewers will practically feel like they are part of the scene.

3D, matched with the power of the internet, also opens up a world we could only dream about. For example, the evolution of virtual worlds and their capabilities becomes much more compelling with large screen displays. A simple example is virtual tourism and world exploration. Just as Google has photographed the street views of the world, there is no reason we cannot build a 3D model of the whole terrestrial experience on earth within a few years. Imagine then the capability to walk around the world as a virtual tourist and view it from the comfort of your 3D television.

As virtual worlds improve and evolve, new immersive ways to interact with large screen TVs will continue to emerge. Many social activities come to mind, as well as the concept of participating in or viewing e-sports: virtual sports games that can also be watched by others. The prospects for e-sports are boundless, limited only by imagination. Virtual bullfights, gladiator battles and racing events will be watched online the same way we watch football games today.

The display-use model will also change over time. Today our concept of a display is a TV set that sits in the living room, a piece of functional furniture. With the advent of new display materials like OLED, the display will transform from furniture into architectural material. There is no reason why the wall in your den cannot become a display. And why stop with the wall? Imagine the immersive feeling of the ceiling, floor and walls all built of displays; it is the video equivalent of surround sound. The architectural use of displays could also add interesting use cases beyond entertainment.

For example, inlaid architectural displays can appear in almost any room of the house. Touch screens in the kitchen can provide not only control but also interactive recipe applications and videos with cooking instructions. Bathroom walls can provide wallpaper backgrounds or any kind of networked information we already see on our PCs. Inlaid display technologies will appear on appliances and anywhere people need information or help with controls. The point of all this is that there will be many reasons in the future for us to get close to the screen, and all this near proximity will demand increases in display quality.

TV Development Underway
Already major TV OEMs are working on the next step up in resolution beyond Full HD, with multiple proposals in development for higher resolution TV systems. TV OEMs are already demonstrating “4Kx2K” systems that provide 4096 x 2160 pixel arrays. Beyond 4Kx2K is Ultra High Definition (UHD), which provides 7,680 x 4,320 pixels, about 33 million pixels or roughly 16 times the number used by Full HD systems. UHD was first introduced by Japan’s national TV broadcaster NHK in 2003. NHK, marketing the resolution as “Super Hi-Vision,” had to build the cameras and display technology from scratch to create a UHD demonstration system, and has since shown the system at numerous broadcasting shows. Toshiba, LG and Panasonic showed UHD systems at CES 2011, and more UHD sets will likely be shown in 2012. The UK’s BBC is also interested in this format and has announced plans to provide UHD coverage of the 2012 London games.
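
For readers who want to check the arithmetic, a small Python sketch (plain pixel counting, nothing more) reproduces the comparison between these resolution tiers:

    # Pixel counts for the resolution tiers discussed above
    formats = {
        "Full HD (1080p)":       (1920, 1080),
        "4Kx2K":                 (4096, 2160),
        "UHD / Super Hi-Vision": (7680, 4320),
    }

    full_hd = 1920 * 1080
    for name, (w, h) in formats.items():
        pixels = w * h
        print(f"{name:<24}{pixels:>12,} pixels  ({pixels / full_hd:.1f}x Full HD)")
    # UHD comes out to about 33.2 million pixels, 16 times the 2.07 million of Full HD.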

In addition to higher resolution, OEMs continue to invest in superior display technologies like organic light emitting diode (OLED) displays. OLEDs have several advantages over LCD and plasma display technologies. For example, OLEDs do not use a backlight; they emit light directly. Direct emission results in much more vivid color, better contrast and wider viewing angles than LCDs. And because there is no backlight, OLED TVs are more power efficient and lighter. OLED displays can also be made flexible, opening up new opportunities to use displays in architectural applications and even clothing.

OLEDs also respond much faster than LCDs. In fact, the relatively slow response time of LCDs forced the industry to introduce frame rate conversion techniques to compensate. OLED response times are roughly 1,000 times faster than LCD, allowing for much better motion performance.

Improvements will also need to continue on the broadcast side. Higher resolution TVs consume bits at an alarming rate. For example, uncompressed UHD would demand roughly 24Gbps, a major jump over the ~1.5Gbps required for Full HD. Any increase in resolution will demand major improvements in data compression as well as in networking, storage and broadcasting capacity.
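
As a sketch of where numbers like these come from, the calculation below assumes 24 bits per pixel and 30 frames per second with no chroma subsampling. Those assumptions are mine, chosen because they roughly reproduce the article’s figures; real broadcast chains vary in bit depth, frame rate and sampling:

    def uncompressed_bitrate_gbps(width, height, fps=30, bits_per_pixel=24):
        """Raw, uncompressed video bitrate in gigabits per second."""
        return width * height * bits_per_pixel * fps / 1e9

    print(f"Full HD: {uncompressed_bitrate_gbps(1920, 1080):.1f} Gbps")  # ~1.5 Gbps
    print(f"UHD:     {uncompressed_bitrate_gbps(7680, 4320):.1f} Gbps")  # ~23.9 Gbps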

But the march of improvements will continue. As TV screens get larger and the way we use them draws us in closer, the need for improved image quality will only grow.

Our TV experience will change dramatically over the next ten years. As this series of articles has discussed, the whole TV experience will continue to morph the way we spend our time watching large screen displays, and 2012 will bring some interesting signs of how this will play out. In 2012 we will see OEMs developing much better ways to interact with TVs; our ability to control the TV through new remote technologies, and to find and share content, will make major advances. We can expect more of our tablets and smartphones joining us in front of our TV sets. Interactive TV will bring not only more sources of content but also new tools to recommend and share the content and media we really want to see. Finally, the way we use TV will be much more immersive, demanding major improvements in video quality over what we have today.

Mamas (and Dads), Help Your Babies Grow Up To Be Coders

My kids were lucky. They were born at about the same time as the Apple ][ and they grew up during the all-too-brief period when learning to program a computer was considered part of a normal elementary school education. That window only lasted from around 1980 to the early 90s, when the complexities of graphical user interfaces began to kill amateur programming.

It’s time to bring back coding as part of kids’ education. Not because knowing how to program a computer is important to using one, any more than understanding how engines work is important to driving a car. The virtue of learning programming is that it develops some very useful habits, especially clear, precise, and careful thinking.

Unlike so much else in life and education, there’s no such thing as a good-enough piece of code. It either runs or it doesn’t, and it either produces a correct result or it doesn’t. Coding provides instant gratification for doing the job right, and coding problems are inherently fair and objective, giving them all the characteristics of great pedagogical tools.

I don’t have any illusions about programming returning to elementary school curricula any time soon. There’s too much competition for classroom time, and way too few qualified teachers. There’s no one lobbying for it, and no studies showing that learning programming improves scores on standardized tests (though I wouldn’t be surprised if it did.)

Fortunately, excellent free tools exist that will let kids learn programming at home. For younger children, Kodu, a project of Microsoft Research, offers a graphical, drag-and-drop approach. Kids can use it to design simple games while learning principles of programming.

[Image: A Kodu programming screen]
[Image: Interactive instruction at Codecademy]

Lots of folks in the tech world (venture capitalist Fred Wilson, for example) responded to a campaign by Codecademy.com by offering New Year’s resolutions to revive or improve their coding skills. But I think it is even more important for kids. Codecademy offers interactive lessons in convenient small bytes designed to teach the basics of programming in JavaScript.

(One note on learning programming: the choice of a language is largely irrelevant. The principles of programming are the same regardless of language, and the mainstream languages used today all derive their syntax from C++ and are in most ways more alike than different.)

For a deeper dive into coding, the estimable Khan Academy’s computer science section provides more formal training in coding techniques. There’s more of a do-it-yourself element to the Khan approach: to actually work the examples and do the problem sets, you’ll have to set up a Python development environment on your computer. Fortunately, that’s about a five-minute job.
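
To give a sense of what an early exercise looks like once that environment is set up (this is my own illustrative example, not something taken from Khan Academy’s or Codecademy’s materials), a few lines of Python already build the habits praised above: precise conditions, and an answer that is unambiguously right or wrong.

    # A classic beginner exercise: count from 1 to 20, but say "Fizz" for multiples
    # of 3, "Buzz" for multiples of 5, and "FizzBuzz" for multiples of both.
    for n in range(1, 21):
        if n % 15 == 0:
            print("FizzBuzz")
        elif n % 3 == 0:
            print("Fizz")
        elif n % 5 == 0:
            print("Buzz")
        else:
            print(n)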

I learned coding in completely haphazard fashion back in the mainframe era. In those days, the only way to do anything with a computer was to program it yourself and the data processing I needed to do for an undergraduate research project forced me to learn Fortran—and debug code by reading a printout of a core dump. In truth, I never became more than a marginally adequate programmer, but I believe the experience made me a better, more analytical thinker.

My kids made better use of their opportunities. One is now a mathematician working at the boundary of math, computer science, and operations research. The other is a down-to-the-silicon operating system developer for IBM Research. They might have gotten there without their experience as young boys banging away at an Apple ][ (and later, in high school, a MicroVAX), but I think those formative experiences were critical.

So take the resolution yourself and make this the year your kids (and please, don’t forget the girls) learn to code. Some day, they’ll thank you.

How Intel Could Achieve the 40% Consumer Ultrabook Target in 2012

There has been a lot of industry skepticism since Intel predicted at Computex Taipei 2011 that Ultrabooks would account for 40% of consumer portable sales by the end of 2012. That included skepticism from me as well, and I continue to have that skepticism. Rather than dive into that discussion though, I think it’s more important and productive to examine how Intel could conceivably achieve that goal.

What Intel is Actually Predicting

It’s important to understand what Intel meant when it made this prediction. First, the prediction is for the consumer market, not the slower moving SMB, government, or enterprise markets. Also, it is not for the entire year; it is for the end of December 2012. That is, 40% of consumer notebooks sold at the end of December 2012 would need to be Ultrabooks. This makes a huge difference when evaluating the probability of it actually occurring.

So what would it take for 40% of all consumer notebook sales to be Ultrabooks by the end of 2012?

Make Ultrabooks Look New, Relevant, and Sexy

Intel and their ecosystem need to make Ultrabooks perceived as new, relevant and sexy. By relevant I mean making the direct connection between what the Ultrabook delivers and what the consumer thinks they need. Sexy is, well, sexy, like the MacBook Air. The ecosystem must make a connection with:

  • Thin and light: this is easier because Apple has blazed the trail and it is evident on the retail shelf.
  • Fast startup: this is somewhat straightforward, and a well-communicated consumer pain point with Windows today.
  • Secure: this is the most difficult, in that it is always hard to market a negative. It’s like life insurance; it sounds good, people say it’s important, then they don’t buy it. I think Intel would be much more successful taking the same base technology and enabling exclusive consumer content or speeding up the online checkout or login process.
  • Performance: this is difficult to market in that performance no longer has a comparable metric, and chip makers appear to have stopped marketing why it is even important.
  • Convertibles: I am a big fan of future convertibles given the right design and OS. If OEMs can put together a classy, ~18mm design, it could very well motivate consumers to delay a tablet purchase. This will not work prior to Windows 8’s arrival, though, because you really need Metro for good touch.

Probably the biggest impediment here is the “sexy” piece. Sexy is the “X” factor. It’s cool to have an Apple MacBook Air; it isn’t cool yet to have an Ultrabook. A lot of that $300M Ultrabook investment fund must pay for Ultrabook positioning and for re-positioning anything Windows. This is a tough task, to say the least.

Steal Some Apple MacBook Air Market Share

To hit the 40% target, Intel and their ecosystem will need to steal some of Apple’s market share. There is no way around this unless they want to pull the dreaded “price lever.” Apple “owns” 90+% of the premium notebook market today, and because Windows OEMs, and Intel for that matter, aren’t motivated to trash pricing now, they will need to take some of Apple’s share. This will be a tough one, a real tough one, particularly because Intel shoots itself in the foot short-term by going aggressively after Apple given that Intel is inside every MacBook Air. So OEMs will need to take this one on their own, using Intel marketing funds as a weapon. It will be especially difficult given that Apple’s positioning isn’t going to be erased by anything short term, and Windows OEMs haven’t been able to penetrate this market for years. Remember the Dell Adamo? Sexy Windows 8 convertible designs could very well be the magic pill that helps steal share from Apple.

Lower Price Points

This is the last lever anyone wants to pull, as it destroys positioning. Depending on which data service you look at, the average consumer notebook ASP (average selling price) is between $600 and $700. This seems high, I know, when you look at what is being sold at local retailers, but remember that it includes online sales and Apple, which has a higher ASP. Ultrabooks range from around $799 to $1,299 excluding Apple. This is well above where prices would need to be to achieve the 40% goal. There are two ways to lower price: lower the cost or lower margins. I believe you will see a little bit of both.

As volumes increase, there will be immediate cost savings in expensive mechanicals like aluminum, plastic, and composites. The custom cooling solutions required to cool thin chassis between 16 and 21mm thick are very expensive, but tooling and design costs can be amortized over greater volumes to decrease the cost per unit. Intel’s Ivy Bridge, available in April 2012, will provide a shrink from 32nm to 22nm, which would theoretically allow a lower price point at the same performance, although I am sure Intel isn’t leading with that promise. Intel would much rather provide large marketing subsidies and pay NRE (non-recurring engineering) costs to retailers and OEMs to design and promote the Ultrabook category. SSDs are a tricky one to predict given spinning hard drive supply issues; spinning hard drive price increases allow SSD makers to raise prices, which doesn’t bode well for Ultrabook BOM costs in the short term.

Leverage Windows 8 Effect

The expected Windows 8 launch in the 2012 holiday season could help the Ultrabook cause on many fronts. First, it may give consumers a reason to consider buying a new notebook. I fully expect consumers to delay purchases and wait for Windows 8 to arrive, which could create a bubble in Q4 that, again, helps achieve the 40% goal.

Perceived Momentum

Finally, Ultrabooks need to get off to a solid start in 2012. Consumer influencers and the rest of the ecosystem need to perceive Ultrabooks as a success in 1H 2012 in order to “double down” for 2H 2012. CES will be one tactic to do this, where I expect to see hundreds of designs on display to demonstrate OEM acceptance to the press, analysts, and retail partners. Intel’s Ivy Bridge will give another boost in April, followed by the Windows 8 launch. Retailers cannot be stuck with excess inventory and cannot make drastic price cuts that would only de-position the category. Currently there is skepticism about the entire Ultrabook value proposition and the price points Ultrabooks can command, so there is a lot of work to be done.

Will Ultrabooks Achieve the 40% Target by End of 2012?

While this analysis is about what it would take to achieve the goal, I must weigh in on what I think will happen. I like to bucket these kinds of things into “possible” and “probable.” I believe that if the Ultrabook ecosystem could accomplish everything outlined above, Ultrabooks could hit 40% of consumer notebook sales by the end of 2012. So it is possible, BUT I don’t see it as probable, primarily due to the low price points that will need to be hit. There just isn’t enough time to reposition the Windows notebook as premium and either raise price points for the category or steal Apple market share.

Why Amazon is Not the New Apple

Over the last few months I have heard and read many comments suggesting that Amazon is the new Apple. In fact, in a very good piece in Forbes, E.D. Kain asks directly whether Amazon is the new Apple. He makes some good points to suggest that Amazon is very much following in Apple’s footsteps and has even created some innovations of its own with its price check mobile app and its Kindle book purchasing process.

But I believe the answer to this question is no, Amazon is not the new Apple, and the reasons are many. Now, don’t get me wrong: I have great respect for Amazon and Jeff Bezos. However, Jeff Bezos is not the second coming of Steve Jobs, and he would be the first one to tell you that. Amazon’s business is very different from Apple’s, and though they have some similar goals, such as creating an ecosystem of products and services for their customers, their approaches differ greatly.

Perhaps the most glaring difference is Apple’s total approach to the market. Most importantly, all of their hardware, software and services originate inside Apple. They write the OS so they can customize hardware to be maximized around their proprietary OS platform. Amazon, as well as all of the other vendors competing with Apple, must rely on Google or Microsoft for their code and are always at a disadvantage to Apple in this area.

Also, Apple has all of its design in-house, led by the recently knighted Sir Jonathan Ive. Most vendors have to rely on ODMs for their products, and this too is a disadvantage when it comes to industrial design and its integration with their software offerings.

While Amazon is the world’s greatest retailer, Apple’s stores are rewriting the rules of technology retailing around the world. You buy from Amazon if you know exactly what you want, since you can’t touch or feel the products online. But the reason Apple is drawing millions of people into its stores around the world is that Apple knows full well that the majority of potential users are not tech literate and need help buying exactly what they need.

But one thing that really distinguishes Apple from Amazon and its other competitors is that Apple is a leader and all of the others are followers. This started when Apple introduced the Mac and decided to use the 3.5-inch floppy, pushing competitors to abandon 5.25-inch floppies back in 1985-1986. Apple was the first to put a CD-ROM drive in Macs in 1989 and ushered in the era of multimedia computing; by 1992, all PCs had CD-ROMs inside. In 1998, with the iMac, Apple added color to PC cases and created the modern all-in-one PC. Now all PCs come in colors and all-in-ones rule the desktop market. Even more recently, Apple created the first real “ultrabook” with the MacBook Air, and now everyone is chasing them again.

Apple’s genius is also in re-inventing products, which continues to reinforce their leadership position. They did not invent the MP3 player. They did however re-invent it. They did not invent the Smartphone. They re-invented it. They did not invent the tablet. They re-invented it. And in each of these product categories, they force their competitors to play catch up.

What’s more, Apple casts a long shadow in this area. A more current example is hybrid computing. In my 2012 predictions, I stated that we should see many hybrids (laptops with screens that come off and double as tablets) this year. But many of the vendors I talk to who have hybrids in the works have told me that their biggest fear is that Apple will do a hybrid, and do it so well, that it will force them back to the drawing board and put them behind, even though they thought they would be ahead. Nobody even knows if Apple is doing a hybrid, but just the threat of Apple doing one strikes fear in their competitors.

But in the end, the fact that Apple continues to play a major leadership role in the industry is the real reason Amazon and other companies, who might like to think that they could become the next Apple, are still only followers. It is unclear to me if Apple will ever give up that role given their complete control of their ecosystem and the rich talent they have inside the company. But until another company can create this same dynamic, I suspect that there will not be another company that can lay claim to being the next Apple.

The Tablet Market in 2012: What it Means for Publishers

Those who planned their tablet strategies based on the predictions of key analysts and the excitement at the Consumer Electronics Show (CES) at the beginning of 2011 may want to be very wary of the wave of predictions for the 2012 tablet market, given that 2011 remained iPad dominant. Apple is now expected to sell around 39 million units worldwide, and even the top competitor in the tablet market, Samsung with its Galaxy Tab, has achieved very modest sales by comparison.

Motorola’s Xoom showed initial promise but faltered, and Motorola Mobility itself is being acquired by Google. RIM’s PlayBook stalled, with updates not expected until late 2012. HP launched and then abandoned the TouchPad, and Dell shuttered the Streak.

At the low end of the tablet market, competing against Barnes & Noble’s Nook, Amazon’s Kindle Fire is the only Android-based tablet seeing solid momentum, with the company claiming around 1 million sales a week thanks to a $199 price point. However, despite the claims, inventory is still plentiful, and the Fire’s software platform has come in for significant criticism (an update is promised soon), with a lot of customers giving it one to three stars.

A recent report from IDC, which now counts both the Kindle Fire and Barnes & Noble Nook as Android tablets, forecasts that Apple’s worldwide market share will dip to 60% in 2012, although it should be pointed out that analyst firms use shipment numbers for Apple’s competitors that don’t necessarily equate to sales. The current key players are Samsung, Amazon, B&N, and Asus, plus “others.”

CES 2012 from January 10-13 will no doubt showcase a vast range of new tablets, but if the history of the last couple of years repeats then virtually all will fail to gain any traction. Hardware alone without integrated software and an effective distribution system is doomed. Very few companies can provide the whole digital ecosystem, and Apple has a near unassailable advantage in this regard.

[Chart: Merrill Lynch forecast of tablet market growth]

New iPads on the horizon?

Looking at the year ahead, Apple is expected to roll out the iPad 3 in the first quarter of 2012. The key features are an iPhone 4-style “Retina” display and faster processors to cope with the demands of that higher resolution screen. Many expect Apple to launch a smaller iPad with a screen size of nearly 8 inches later in 2012, and this could help counter any competitive threat from Amazon. A possible pricing structure of around $250-$299 for an 8-inch tablet, $399 for the older iPad 2 and $499 (or maybe a little higher) for the iPad 3 would allow Apple to cover the market from value to premium products. According to a recent report from J.P. Morgan Chase, total worldwide tablet shipments will reach 99 million in 2012 and rise to 132.6 million in 2013. Although it will see some market-share erosion, Apple is likely to remain the dominant tablet supplier for the next few years, so it needs to be core to any publisher’s digital mobile strategy.

Publisher reset

For many publishers, 2011 was a year of expensive experimentation, and many are now reassessing their return on investment in tablet apps. Over-designed multimedia versions of consumer magazines appear to distract readers from the core content. The majority of tablet readers seem happier with simple enhanced PDF versions of their favorite brands, or layouts that emphasize the readability of the articles on their devices. Photos and videos that enhance the text are welcome; unnecessary interruptions and distractions are not. In a sea of often mediocre publisher apps, there are some real standouts, some of my favorites being the National Geographic iPad version, the travel magazine TRVL and The Economist, which deliver an excellent balance and improve on the print version.

Now, in addition to Apple’s iTunes, Amazon (and to a lesser extent Barnes & Noble) offers publishers additional digital distribution channels for paid subscriptions and digital newsstand copies. Outside these channels, the general Android marketplace is still searching for an efficient app marketplace for paid content. The industry consortium Next Issue Media (NIM), which represents Condé Nast, Hearst, Meredith, News Corp and Time Inc., has so far failed to achieve momentum, but maybe NIM will redeem itself in 2012 with an HTML5 strategy.

Publishers will continue to face a very challenging and fragmented market. They have to deal with Apple’s hardware dominance. Most mainstream publishers have somewhat reluctantly concluded they cannot ignore the iPad platform and have come to terms with Apple, although there are some notable exceptions, including Time magazine and the Financial Times. In rejecting Apple’s App Store, the FT has developed a sophisticated HTML5 approach allowing content to be viewed in a mobile Web browser.

Although there are many compromises, an HTML5 approach allows publishers to take advantage of the growth of the tablet market without restricting themselves to any operating systems. As HTML5 continues to grow in sophistication, I expect the major publishers will experiment with both an app and HTML5 strategy.

On the iOS platform, publishers can distribute directly via Apple’s Newsstand and also via Zinio’s reader app; neither approach really gives the publisher the customer information they want. On the Android platform the main app option for publishers is to go through Zinio (for tablets other than the Kindle Fire and Nook), while for the Amazon and Nook platforms publishers can deal direct or go through Zinio. The publishers’ life is further complicated by a slew of app content aggregators, with Flipboard leading a very crowded field that recently saw the entry of Google’s Currents. Managing advertising and content metrics across these multiple platforms is extremely challenging.

Further clarification: the Zinio app on the Kindle Fire is not showing up for many users, but instructions for downloading the app are available via Zinio’s site.

Where’s the breakthrough?

I believe it depends on the segments being served. The B2B market has always been more of a data, information and service business and should see strong opportunities to drive revenues via quality information services targeted at its audiences. The key media brands can continue to play a trusted role aggregating services: content (original, aggregated and peer-to-peer/social), directory and supplier services, and links to physical conferences and events. Lead generation, premium paid services and sponsorship will be more important revenue streams than CPM-based advertising. In the long term, B2B publishers should really benefit from wider distribution because they can drive revenues outside of advertising.

For consumer publishers it’s a greater challenge, but the B2B market can provide some pointers. Consumer publishers need to be more invested in the (customer) data business, and some are moving there. They have to better understand the needs of their customers and engage with them through valuable content and services, not just rely on impression-based advertising.

The rapid move to mobility

The transition to consuming content on mobile devices has been evident for several years; the technology and related services are now catching up with the vision. A mobile strategy needs to be at the center of all publishers’ long-range planning. The major challenge is that past revenue models are not transitioning easily to the new medium. Inevitably, the industry will go through a few more years of pain while new revenue models for the mobile world become clear.

But the good news is that despite all the challenges, premium brands actually become more important as quality content is consumed by a wider audience and audiences look to trusted brands to guide them through an increasingly “noisy” content world.

The road ahead will be a bit rough, but all publishers should be excited that in 2012 their content will be distributed much more widely and they will have a chance to engage with new audiences.

That’s not a bad outlook.

This article originally appeared at MinOnline