Two Sides of the Consumer Coin for Windows RT

Yesterday, Microsoft unveiled the different Windows 8 editions via a blog post, comparing their features and functionality.  There are three versions: Windows 8, Windows 8 Pro, and Windows RT.  One of the biggest changes in Windows 8 versus previous editions is support for the ARM architecture with NVIDIA, Qualcomm, and Texas Instruments silicon, and the new naming reflects it.  Windows 8 on ARM, or WOA for short, gets its own name: “Windows RT”.  I believe this naming cuts both ways, some of it positive and some of it challenging for the ARM camp, but the challenges can be mitigated with marketing spend and education.

Windows RT (ARM) versus Windows 8 (X86)

Windows RT and Windows 8 are similar in many ways and very different in others, and in some respects this reflects Windows RT’s shedding of legacy, though not completely.  The Microsoft blog had a lengthy listing of differences, but here are the ones I feel are most significant to the general, non-geeky consumer.

The following are the most relevant features Windows 8 provides over Windows RT for the typical consumer:

  • Installation of X86 desktop software
  • Windows Media Player

The following are the most relevant consumer features Windows RT provides over Windows 8:

  • Pre-installed Word, Excel, PowerPoint, OneNote
  • Device encryption

Again, this isn’t the complete list, and I urge you to check out the full listing, but these are the features most relevant to the non-geeky consumer.

What isn’t Addressed

What I would have liked to see discussed at length and in detail is support for hardware peripherals.  I will use a personal example to illustrate this.  Last week, I bought a new HP Photosmart 7510 printer, scanner, and fax machine for $149.  While I am confident I will be able to do a basic print from a Windows RT machine, will I be able to use the advanced printer features and be able to scan and fax?  We won’t know these details until closer to launch, but this needs to be addressed sooner rather than later.

Next, I would also have liked to see some specifics on battery life and any specific thickness restrictions for Windows RT tablets.  If these devices are intended to be better than an iPad, they will need some experiential consistency to give consumers confidence, unlike Android.  As I address below, this wasn’t stated overtly, but it was hinted at.

The Plusses with what Microsoft Disclosed with Windows RT

There are some positive items for the ARM camp in Microsoft’s blog post on Windows RT.  Windows RT supports the primary tablet-based needs a general consumer would desire.  This comes to light specifically when you put yourself in the shoes of the general consumer, who doesn’t need features like Group Policy, Domain Join, and Remote Desktop Host.  Also, I don’t see the absence of Storage Spaces or Windows Media Player as a major issue, for different reasons.  Storage Spaces is very geeky, and I do not believe the typical consumer would do much with it.  And I believe Windows RT will have many methods of playing video, as we see on the iPad and Android tablets, so the absence of Windows Media Player isn’t a killer, specifically for tablets.

Windows RT also includes Office, specifically Word, Excel, PowerPoint, and OneNote, a bundle that sells for $99 today. Finally, while details are sketchy, Windows RT supports complete device encryption.  I can only speculate that all data, storage, and memory operations are encrypted.  This can potentially be leveraged with consumers, but it’s not something whose absence has kept the iPad from selling.

A final, important note is the consistent experience I expect Windows RT to deliver.  By definition, all Windows RT systems will be lightweight with impressive battery life. While this doesn’t come out clearly in the blog post, I do read between the lines and see where this is headed.  I believe Microsoft wants to deliver the most consistency with Windows RT and leave the experience variability to Windows 8.

There will be challenges, though.

The Risks with what Microsoft Disclosed with Windows RT

While there are positives in what Microsoft disclosed on Windows RT, there are risks and potential downsides, too.  First and foremost is the absence of the “8”.  Regardless of how much Microsoft may attempt to downplay the “8”, consumers fixate on generational modifiers to assign value to something.  Consumers do this because it makes things easy for them.  When a consumer walks into a store and sees Windows 8 and Windows RT, I expect them to ask about the difference.  What will the answer be from the Best Buy “blue shirt”?  Without a tremendous amount of training on “RT”, I would expect them to say, “RT has MS Office, but won’t run older programs.  8 runs all your old programs but doesn’t come with Office.”  The street-price adder for Office isn’t public knowledge, but I know it adds at least $50 to the street price.  That’s a discount from the $99 retail price, but then again, I don’t miss not having Office on my iPad.

As I discussed above, Microsoft needs to disclose more on backward hardware compatibility.  Every day that passes without a more definitive statement draws in more skeptics.  What wasn’t being discussed in the industry six months ago is being discussed now.  Finally, how can the lack of X86 desktop software be turned into a positive?  The basic consumer, if offered something more in their mind for the same price, will always choose more, unless they see a corresponding reason to give something up.  Apple has done a fine job with this on the iPad.  When the iPad first launched, many focused on what it didn’t have, namely USB ports, SD cards, or the ability to print.  The iPad can now print in limited fashion, still has no USB or SD card slot, and is still selling great.  Windows RT needs a value proposition that is related to Windows 8 but distinct from it, too.

What Needs to Be Done Next

If I were in the ARM camp, I would plead with Microsoft to reconsider the naming.  Even adding an “8” to render “Windows 8 RT” would at least acknowledge it’s in the same family.  Without it, Windows RT looks like part of the Windows family, but not part of the “new Windows” table. This can be overcome by spending on a unique value proposition.  That distinct value proposition may be that all RT units are thin and lightweight and provide a consistent experience, something Windows 8 cannot guarantee.  The ecosystem would then need to fill “RT” with value and meaning, which will be expensive. Finally, the Windows RT ecosystem needs to start communicating better about peripheral compatibility; with every day that passes, the broader ecosystem gets more nervous.  With six months to go, there’s a whole lot of work to do, and a lot more in the Windows RT camp than in the Windows 8 camp.

 

Facebook is for Old People

“FACEBOOK IS STUPID AND FOR OLD PEOPLE“, my 12-year-old daughter texted me yesterday after Facebook offered to purchase Instagram. If you have teenage or pre-teen girls or boys, this demonstrative behavior isn’t anything new. What I didn’t fully understand at the time is what a firestorm the acquisition set off in the community. Of deeper and longer-term significance, however, was the spotlight my daughter’s text shined upon the newest and most natural trend in social media: verticalization, or specialization, which will reshape social media as we know it today.

As I probed to better understand what my daughter meant by her text and how she felt, she explained that with Facebook owning Instagram, it would ruin Instagram’s entire purpose. Probing further, she feared that Facebook, because it’s for “old people”, would “change Instagram.” Taking this offline, she explained a few fears. For her, Instagram is a world for her and her friends in her grade that was protected from Facebook gawkers and lurkers. Her thinking was that with Facebook owning Instagram, those gawkers and lurkers would invade her and her young friends’ world. Mark Zuckerberg promises a standalone Instagram, but will, of course, import all the pictures with their context and metadata, to be monetized like everything else in the Facebook network. My daughter wasn’t alone in her fears.

As Mathew Ingram reported in GigaOM, many other people, including grown adults, were airing their grievances. Many even retweeted my daughter’s text as a sign of protest. As of right now, the text has been viewed on Twitpic over 71,000 times and retweeted over 3,200 times. While the protesters probably represented a small but vocal minority, they certainly were a passionate and diverse group. All of this passion highlights a theme I’ve been researching for a few months: the verticalization of social media.

Over an extended period of time, all markets go vertical or specialize, up to the point where the market cannot support any more divisions. Sometimes the segmentation is too gray and not demonstrable enough to support the business model. Look at TV channels, cars, toothpaste, and shampoo. They have all segmented beyond belief if you have been alive long enough to see it from the beginning. Cars are a good example. At one point, there were very few types of cars that consumers wanted to buy and that manufacturers offered. Now it seems that every brand has sedans, coupes, minivans, station wagons, SUVs, “minis”, sports cars, trucks, hybrids, etc. TV sports is another good example of specialization. When I grew up, I could only watch sports on one of four network TV stations at very regimented times of the day. Now, from Austin’s Time Warner Cable, I can get access to over 50 different sports channels whenever I want, 24 hours a day. I see the same situation playing out with social media.

Social media is now starting to mature, fragment, and specialize. Facebook, for now, is many people’s “home base”, but as in life, all people have to leave home sometimes. That’s exactly what people started doing with a few sites like these:

  • Pinterest – Lifestyle social interaction around the “beautiful things you find on the web”.
  • GetGlue – Entertainment social interaction around what people are watching on TV, at the movies, playing, reading, or listening to.
  • Foodspotting – Food social interaction between people who like to eat out and show off what they’re eating.
  • Goba – Face-to-face social interaction, bringing people together in the real world.

There are hundreds more services like these that cater to narrower slices of social interaction, but it’s not all rosy in the specialized social media world.

There are gating factors all markets need to overcome to move to specialization. For cars, it was a market large enough to warrant specialization, plus the “sharing” of key parts like engines and chassis. For social media to specialize, it needed a home base, like Facebook, to provide login, authentication, and open APIs to cross-post content and opinions.

Facebook has enabled the growth of these specialized social media sites. It’s a good thing it did, or Google would have done it and Facebook might not have had nearly the lead it has today. This doesn’t mean Facebook will continue to leave its door open forever, though. Another potential growth-inhibiting factor is obvious: the number of active users and friends. There have to be at least enough users and friends to warrant going there in the first place. This is killer #1, but it is mitigated by Facebook’s APIs. With today’s UIs and interaction models, I believe consumers can really only tolerate one major social media “hub” like Facebook, plus one, maybe two, specialized social media sites that are somewhat connected back to the home base. This could change over time due to aggregation work like Microsoft does with its “People” apps, but for now it’s a reality that there are only so many sites we can handle. The final growth inhibitor is linked to the first: if you cannot gain scale, you won’t be large enough to make enough money to stay in business. Most social media experiences that pass gate #1 fail gate #2.

Can we learn anything from a pre-teen girl’s reaction to a $100B company purchasing a tiny company with fewer than 15 employees for $1B? I hope so. I know I did. Consumers are very picky, and if we offer them thin enough social media slices with enough mass to be considered a community, they like it. We only have to look at Instagram’s and Pinterest’s fast-growing bases of active users as evidence that this is only the beginning of the social media specialization revolution. What does this mean for Facebook? Facebook needs to be the best “home base” it can be, integrating and facilitating traffic between smaller, specialized social media services. While Facebook has trumped Google many times in the last few years, it should borrow Google’s YouTube playbook to learn how to do a branded integration the right way.

Google Created the Mess and Now Must Fix Android Tablets

Android for phones has by any measure been a success, while Android for “premium” tablets has by every measure been a disaster.  According to IDC, the iPad held 55% market share of all tablets in Q4 2011.  When you remove lower-end tablets like the Fire and Nook and leave “premium” tablets at $399+, best case, Android has approximately 13% market share, leaving Apple with 87% share.  This incorporates sales of some very nice Android tablets from Samsung and ASUS.  This is beginning to look like the iPod market, where Apple squeezed every ounce of life out of the premium competition.  So who is to blame for the fiasco, and who needs to fix it?  The responsibility lies squarely on the back of Google, who in turn needs to fix the problem.
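
To make the share math concrete, here is a quick sketch of how a premium-only share can be backed out of the overall numbers. The 55% overall iPad figure comes from the IDC data cited above; the low-end unit split is a made-up assumption purely for illustration, not IDC data.

```python
# Hypothetical illustration of the premium-only share math.
# Only the 55% overall iPad share is from the article; unit splits are assumptions.

total_units = 100.0          # normalize Q4 2011 tablet shipments to 100 units
ipad_units = 55.0            # iPad at 55% of all tablets (per IDC)
other_units = total_units - ipad_units

# Assume most non-iPad units were sub-$399 devices (Kindle Fire, Nook, etc.)
low_end_units = 37.0                                 # assumption for illustration
premium_android_units = other_units - low_end_units  # 8 remaining premium units

premium_total = ipad_units + premium_android_units   # 63 premium units
apple_premium_share = ipad_units / premium_total
android_premium_share = premium_android_units / premium_total

print(f"Apple premium share:   {apple_premium_share:.0%}")
print(f"Android premium share: {android_premium_share:.0%}")
```

Under these assumed splits, the premium-only market works out to roughly 87% Apple and 13% Android, matching the figures above.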

I was very excited about Android from the first day I learned about it in 2005.  The market needed another strong choice in client operating systems to ensure the highest growth, as Linux just wasn’t making headway. I bought the T-Mobile G1 Android phone in October 2008, the Google Nexus One in January 2010, and many more Android phones after that, including the HTC EVO 4G and the Motorola Atrix.  The phone apps were there, most importantly the popular ones.  While the experience wasn’t as fluid as the iPhone’s, I and many others appreciated the openness, notifications, and live screens.  While the market was very excited about Android phones, it was a completely different story for tablets.
 
The first looks at Android for tablets, aka “Honeycomb”, were amazing. Honeycomb, on paper and in demos, did almost everything better than the iPad. The interface was incredible, looking three-dimensional and “Tron”-like. Multitasking, notifications, Flash video support, SD storage, and Live Screens all looked great.  The Motorola XOOM at CES 2011 won many awards, including CES’s “Best of Show” award.  The anticipation mounted and the ecosystem was excited… until it actually shipped.
 
As I explored here, the XOOM was slow, buggy, without many apps, without Flash, without SD card support, and sold at a $300 premium to the iPad, at $799. New models and prices were introduced starting at $379 seven months later.  Needless to say, it was a complete disaster. This was followed by Samsung with the Galaxy Tab 10.1 in June 2011, starting at $499.  This tablet experienced a similar fate to the XOOM’s, but not as pronounced because it moved more quickly to Android 3.2.  The best premium Android tablet out there was and still is the ASUS Transformer Prime with its optional keyboard, but it also struggled because of Google’s operating system.  Google then released Android 4.0, aka “Ice Cream Sandwich”, which didn’t add meaningful features for tablets but instead aligned the application development environment between phone, tablet, and TV.  Android 4.0 tablets missed the holiday selling season and sold very few units compared to the iPad.
 
In summary, the following are the characteristics of what Google allowed to be introduced into the premium Android tablet market place:
  • buggy with crashes
  • slow interface
  • few tablet optimized applications
  • few services at launch for music, books, and movies
  • unfinished features
  • price points at or above market leader Apple’s, with a lesser experience
  • missing key consumer retail time frames
So why do I place this primarily upon the shoulders of Google and not the brands, retailers, or component suppliers?  It’s about leadership.  Had Google fully understood what they were walking into, they would have:
  • waited to release Android 3.0 until it was feature complete
  • waited to release Android 3.0 until there were at least 100 optimized, popular applications
  • waited to release Android 3.0 until it had full support for movie, music, and book services
  • waited to release Android 3.0 until the application compatibility issues that resulted in crashes were resolved
  • instituted tighter marketing management of hero SKUs to assure their experience was flawless

The result of Google allowing Android tablets out the door before the operating system was fully baked is that it is now viewed by most as a liability rather than an asset. Every major tablet maker I’ve talked to loses money on premium Android tablets in a big way.  Also, any brand associated with Android tablets has been marked as well. Motorola and Samsung both had premier brands, but I believe both have been sullied by their association with Android for tablets.

Google’s reaction to all of this was to buy a hardware company (Motorola) rather than working even more closely with partners like ASUS and Samsung. Additionally, it’s rumored that Google will introduce its own Google-branded tablet, which will alienate its partners that much more.  Does the Google brand lend cachet to the equation?  Absolutely not.

All of these issues and all this confusion benefit Microsoft right now. What was previously considered a free ride from Google with its “free” operating system has now driven OEMs directly into the arms of Microsoft and Windows 8 for tablets.  What a turn of events over the last 18 months.  And the pandemonium isn’t over yet.  With undoubtedly more information coming out at this year’s Google I/O, Google is planning Android 5.0, which I am sure will be positioned as the savior of Android for tablets.
 
The problem is that there’s no savior in sight for Android on premium tablets.  We all know Android tablets sell at $199 without much or any hardware profit, but how about at $499, where the entire ecosystem can make money?  Google needs to seriously reconsider everything it is doing with Android for tablets starting now, because nothing else is working.  The new plan needs to fully account for the needs of the silicon partners, ODMs, OEMs, channel partners, application developers, and most importantly, the end user.  It needs an entirely new name, too, because the Android name has been thoroughly destroyed in the high-end tablet market.
 
It’s time to stop treating Android for tablets like a hobby and start treating it more like a business.

What Apple Needs to do to Stay Ahead with the iPad 4

Apple once again delivered a high-quality experience with the “new” iPad, aka iPad 3. As with phones, Apple has again managed to deliver enough to stay ahead, as it did with the iPhone 4S. The new iPad didn’t deliver a knockout blow to Android, but it certainly eliminated many gaps that could have driven premium ($499+) tablet buyers away from the platform. While the new iPad will sell exceptionally well, I’d like to discuss what Apple will need to deliver in the “new-new” iPad, aka iPad 4, to keep its leadership position.

Change What Broke Moving from iPad 2

As I said, the new iPad will sell extremely well, but there were some steps taken backward that need to be addressed:

  • Weight – 51 grams, or 8%, heavier (652 versus 601 grams) doesn’t sound like a lot, but for some usage models, it is. The weight increase is noticeable primarily while reading and playing games. If you don’t believe me, play Real Racing 2 or Air Supremacy for a few hours on the new iPad and then on the iPad 2. Then read for a few hours in bed with the two tablets. You will notice the difference, small as the number is.
  • Battery Life – Even though the new iPad’s battery grew a giant 70% to power the Retina Display, real-world battery life actually stepped back, according to AnandTech. The iPad has always had good battery life, but Apple needs to reverse the 49-minute, 8% reduction and get back to 10 hours of real battery life. As software is one of the biggest influencers of battery life, Apple could potentially release a new iPad software image to help here.
  • Heat – I never saw this as a safety issue, as Consumer Reports insinuated, but in some usage models it could be an inconvenience. First is outside use, where even iPad 2s heat up and shut down. This can easily happen in the car or even when used outside on the back porch. Anyone who has an iPhone or iPad can relate to this. It’s better that the device shuts down than burns up, but it is annoying. One point I need to make here is that many consumer devices heat up when they are used; this isn’t unique to the new iPad.
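
For the curious, the weight and battery deltas in the list above can be sanity-checked with a little arithmetic. The gram and minute figures are the ones cited; the rounding is mine:

```python
# Sanity-checking the deltas cited above (figures from the article).

new_ipad_g, ipad2_g = 652, 601
weight_delta = new_ipad_g - ipad2_g              # 51 grams
weight_pct = weight_delta / ipad2_g              # ~8.5%, i.e. the ~8% cited

battery_cut_min = 49                             # reported runtime reduction
battery_cut_pct = 0.08                           # reported ~8% reduction
implied_baseline_h = battery_cut_min / battery_cut_pct / 60  # implied prior runtime

print(f"Weight increase: {weight_delta} g ({weight_pct:.1%})")
print(f"Implied prior runtime: {implied_baseline_h:.1f} h")
```

The implied prior runtime of roughly 10.2 hours is consistent with the “get back to 10 hours” goal stated above.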

Improved Scalable Graphics

Today, if consumers want to display some (not all) of their iPad content on a larger external display like a modern monitor or TV, issues exist. If they connect to an HDTV or a modern 16:9 monitor, there are huge black bars to the left and right that are not only ugly but limit the amount of content a user can see on the external display. This is something even RIM solved with the PlayBook, and it is primarily a matter of graphics drivers and taking on a bit more complexity. Microsoft has enabled this for over a decade, and Apple should too. If it’s a developer issue, then Apple needs to improve its tools to help developers do this more easily.
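
To put numbers on those black bars: scaling a 4:3 source to fit the height of a 16:9 panel leaves pillarbox bars on each side. The resolutions below are typical values assumed for illustration:

```python
# Estimating the "black bars" when mirroring a 4:3 tablet to a 16:9 display.
# Resolutions are typical values, assumed for illustration.

ipad_w, ipad_h = 1024, 768       # 4:3 iPad-class source
tv_w, tv_h = 1920, 1080          # 16:9 HDTV

scale = tv_h / ipad_h                    # fit by height: 1.40625
scaled_w = ipad_w * scale                # 1440 px of the 1920 px width used
bar_each_side = (tv_w - scaled_w) / 2    # 240 px of pillarbox per side

print(f"Scaled width: {scaled_w:.0f} px, bars: {bar_each_side:.0f} px per side")
```

A quarter of the screen width (480 of 1920 pixels) goes to bars, which is why the result looks so wasteful on an HDTV.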

Improve Wireless Display

The current Wi-Fi “n” implementation is OK for web surfing but not acceptable for wireless mirroring or displaying to an HDTV via AirPlay using an Apple TV. When playing games or displaying video, the current implementation just isn’t quick enough. I and others I respect have had issues with stuttering. This could be solved by upgrading to 60 GHz wireless and adding support for Wi-Fi Direct. This combination not only speeds up the connection significantly but also removes the latency of the wireless router.

Improve Gameplay Even More

The current crop of games for the new iPad is impressive compared to the iPad 2 but unimpressive compared to game consoles and personal computers. Neither the CPU nor the GPU has enough horsepower to deliver that kind of experience. Real Racing HD, Modern Combat 3, Air Supremacy, and Infinity Blade 2 are nice on the new iPad’s 9.7″ display, but to move up the food chain and challenge consoles and PCs in more graphically oriented games, tablets have a long way to go. I’m not saying the iPad won’t take sales away from today’s consoles, because it will, but consumers who want the highest gameplay realism, with technical graphics features like tessellation (better geometry, more realism), more textures, physics, and AA (anti-aliasing to remove “jaggies”), are better off with a PC. To move all those frames around, Apple will also need to move to an ARM A15-based solution. I expect NVIDIA to keep its leadership role in tablet gaming, and Apple needs to assess whether it continues to build its own silicon or even considers using NVIDIA’s Tegra line. Apple won’t be able to keep pace, as NVIDIA already has the intellectual property to deliver 100X the performance of what is shipping, albeit in much larger and more power-hungry designs.

Improve Safari Multitasking

Multi-tab browser multitasking is still painful and unproductive on the new iPad. Just open up 5-10 tabs and see what happens. First, when resources are maxed out, iOS flushes a tab’s data, and the user needs to reload the entire tab upon returning to it; the user comes back to a blank, white tab. Secondly, iOS has a difficult time downloading items in different tabs at the same time. This is most likely the result of low memory bandwidth and a weak CPU. The new iPad uses the A5X, which includes a dual-core ARM A9-based processor that just isn’t up to the task. I’d like to see Apple use either NVIDIA’s Tegra line or even Intel’s Medfield to help fix this. If Apple wants to roll its own silicon, it will need to go to an ARM A15-based architecture, dual or preferably quad core.

Complete iCloud

The current iCloud is incomplete, as I point out here. Apple, at least for productivity, needs to complete the solution. Today, users need to go through gymnastics to sync their docs between phone, tablet, and PC/Mac. There is a seamless link between iOS devices, but it breaks when it comes to the PC and Mac. Files do not automatically appear and update in the Documents folder as they should. Instead, users need to open the document from iCloud in the web browser, edit it on the PC/Mac, then copy it back to iCloud on the web. This is suboptimal, Apple knows it, and I expect it to be fixed at least by the new-new iPad. (Note: technically these changes need to be made not to the iPad but to the Mac and PC software.)

Convertible Configuration

As I sit and write this on my iPad with the Zagg keyboard, I really would like Apple to take the convertible configuration more seriously rather than throwing it out to the peripheral makers. While better than nothing, the Zagg implementations and others are very clunky and are, well… peripherals, not well integrated. Apple should take what ASUS has done with the Prime and the Slider and perfect it. I can imagine a $699-799, 12mm-thick iPad slider configuration. When users want a keyboard, they slide it out; when they don’t, they slide it back in and use the device as a slate. Apple, while currently the tablet leader, cannot get caught sleeping as Microsoft arrives with Windows 8 convertible designs.

This convertible configuration would benefit Apple in many ways:

  • fills the $699-799 clamshell price hole Ultrabooks occupy
  • maintains the MacBook’s premium positioning at a $999 minimum
  • given that Apple could add more battery in the keyboard, could outlast Ultrabooks by as much as 10 hours of active use
  • sets the stage for iOS and OS X operating system unification, which would position Apple to take more PC market share

Face Login that Works

Imagine how much time we waste logging into our devices. I am dumbfounded that this hasn’t been solved yet, but I understand the challenges, which are mostly technical. To get accurate results, two elements are required. First, you need to upgrade the camera(s): higher-resolution, stereoscopic cameras could capture a 3D view of the head and face, with enough skin and hair detail to keep someone from fooling the device with a photograph or a mask. Second, the new-new iPad would need enough “burst-mode” processing and memory bandwidth to do all of this in two tenths of a second. This wouldn’t impact battery life, as it only maxes out the CPU, GPU, or DSP for a very short time.

This same technology could be used to turn the iPad into a better multi-user, coffee table appliance that all the family members could share. For example, when my son grabs the iPad to play a game (which he often does), I don’t want him to get access to my client emails and accidentally delete one.

Anticipating the new-new iPad

The new iPad, aka iPad 3, will sell in droves, potentially twice as many units as its $499 predecessor in the same timeframe. The fact that it will sell well doesn’t make it perfect by any stretch. It can and must be better to continue its dominance at the $499+ price point. If the past is a guide to the future, Apple is making some of the final design and capability decisions on the new-new iPad right now, and Apple knows better than anyone that it must continue to hit at least “triples” to stay ahead of Android and Windows 8. The competition is more focused and more experienced, and I expect a much tougher fight for Apple with Windows 8.

NVIDIA Solved the Ultrabook Discrete Graphics Problem with Kepler

When Intel released its first Ultrabook specification, one of the first component implications I thought of was the impact on discrete graphics.  My thought process was simple: based on the Intel specifications for battery life, weight, and thickness, designing in discrete graphics that were additive to Intel’s own graphics would be difficult, but not impossible. By additive, I mean making a demonstrable difference to the experience versus just a spec bump.  While I respect OEMs’ need to add discrete graphics for line logic and perception, sometimes it doesn’t make an experiential difference.  This is why I was so surprised and pleased to see NVIDIA’s latest discrete graphics solutions inside Ultrabooks. NVIDIA’s new GPUs based on the “Kepler” architecture not only provide an OEM differentiator, but they also provide a demonstrable, experiential bump to games and video.

Today’s Ultrabooks share similar specs

Today’s field of Ultrabooks is impressive but lacks differentiated hardware specifications and usage models. I deeply respect differentiation in design, as I point out in my assessment of the Dell XPS 13, but on the whole, I can do very similar things and run very similar apps with the current top crop of Ultrabooks.

As an example, let’s take a look at the offerings at Best Buy.  All 13 of the Ultrabooks offered there have roughly the same or similar specifications: processor (Intel Core i-Series), graphics (Intel HD), operating system (Windows 7 64-bit), display size (13-14″), display resolution (1,366×768), memory (4GB RAM), and storage (128GB).


Of these specifications, the level of the Intel Core CPU primarily determines the difference in what a user can actually do with their Ultrabook.  As Ultrabooks have matured a full cycle, differentiating with graphics makes a lot of sense, particularly in the consumer space.

NVIDIA’s Kepler-based GeForce GT 640M Mobile Graphics

Today, NVIDIA launched the first of its latest and greatest GeForce 600M graphics family, the GeForce GT 640M. This GPU features a new architecture, code-named “Kepler”, which is destined for desktops, notebooks, Ultrabooks, and workstations.  Designed to be incredibly powerful and efficient, and built on TSMC’s 28nm HP process, these new GPUs deliver, per test results I’ve seen, twice the performance per watt of the prior generation.  AnandTech has thoroughly reviewed the desktop variant, the NVIDIA GeForce GTX 680, and has given NVIDIA the single-card graphics performance crown.

NVIDIA’s Kepler differentiates the Acer Timeline Ultra M3

With the NVIDIA GeForce GT 640M, users can now get the new graphics along with the Ultrabook benefits of thin, light, responsive, and great battery life. Consumers can actually buy this capability today in Acer’s new Timeline Ultra M3.  The M3 can play all the greatest game titles, like Battlefield 3 at Ultra settings, is only 20mm thin, and gets 8 hours of battery life.  NVIDIA suggests that this new combination of Ultrabook and Kepler-based graphics equates to the “World’s First True Ultrabook”. I need to test this for myself, but they have a point, given that it provides between a 2X and 10X bump in the most demanding gaming titles over Intel HD graphics.

How NVIDIA’s Kepler-based GPU fits in an Ultrabook

As I said earlier, when I saw the Ultrabook specification, I thought it would be very difficult to get decent discrete graphics into an Ultrabook. My concerns were around the power draw needed to achieve minimum battery requirements and the chassis height needed in 13 and 14″ form factors to include a proper cooling solution.

Between NVIDIA and their OEMs, many different factors played into enabling this capability:

  • NVIDIA’s Kepler architecture is twice as efficient as the prior SM architecture. Put another way, at half the power, it can provide the same performance. For instance, the GT 640M reportedly provides the same performance as the previous GTX 460M enthusiast-class GPU at around half the power consumption.
  • NVIDIA Optimus technology automatically shifts between the lower power/performance of the Intel HD graphics and the higher power/performance NVIDIA discrete graphics. When the user is doing email, the Intel graphics are operating and the GeForce GPU is consuming zero power.  When the consumer is playing Battlefield 3, Optimus automatically turns on GeForce GPU to provide the best possible performance.
  • New and better power management allows GeForce GPUs to maximize performance by intelligently utilizing the full potential of the notebook’s power and thermal budget. For example, if the notebook’s heat sink assembly has spare thermal headroom, the GeForce GPU can dynamically increase frequency to provide the best possible performance without adversely affecting operating temperature or stability.
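The efficiency claim in the first bullet is simple arithmetic, and it can be sketched in a few lines. Note that the wattage below is purely illustrative for the sake of the math, not an official TDP for either part:

```python
# Performance = efficiency (perf per watt) x power.
fermi_eff = 1.0               # normalized perf/W of the prior-generation architecture
kepler_eff = 2.0 * fermi_eff  # Kepler's claimed 2x perf/W

gtx_460m_power = 50.0         # illustrative watts only, not an official TDP
gtx_460m_perf = fermi_eff * gtx_460m_power

# Power a Kepler-based part would need to match that performance:
matching_power = gtx_460m_perf / kepler_eff   # exactly half of gtx_460m_power
```

Halving the GPU's power draw at equal performance is precisely what makes a discrete GPU viable inside an Ultrabook's thermal and battery budget.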

I was correct earlier that this was very challenging, but between NVIDIA and its OEMs, it’s clear they stepped up and made it happen.

Ecosystem and NVIDIA Implications

Having NVIDIA’s new high performance graphics inside Ultrabooks is good for the entire ecosystem of consumers, channel partners, OEMs, ODMs, game ISVs and of course, NVIDIA:

  • Consumers get between 2-10X the gaming performance plus all the other Ultrabook attributes.
  • Channel partners, OEMs, and ODMs can now offer a much more differentiated and profitable line of Ultrabooks.
  • Game ISVs and their distribution partners can now participate more fully in the Ultrabook ecosystem.

And, of course, NVIDIA has a big potential win here, too.  According to GfK, over the past two quarters NVIDIA has picked up nearly 10 points of market share inside Intel-based notebooks. Kepler should enable them to further increase this share, particularly with Intel Ivy Bridge-based Ultrabooks.  AMD hasn’t yet played its full mobile hand, but given AMD’s known GCN architecture and TSMC’s 28nm process, it has limited weapons to pull out of its 2012 arsenal in Ultrabooks. The graphics world is a very dynamic market, so you never can be certain what each player is holding back.  AMD held the discrete graphics leadership position for a while, but 2012 looks very good for NVIDIA.

Windows 8 CP Tablet Experience: Distinctive yet Risky for Holiday 2012

A little less than a week ago, Microsoft launched to the public the Windows 8 Consumer Preview (CP). This is a follow-on to the Developer Preview (DP) that I’ve been using on a tablet and all-in-one desktop since it was introduced last September at the Microsoft BUILD partner conference. After 6 months and reportedly 100,000 code changes, is Windows 8 ready for prime time? Based on over 20 years of working with Windows development code and launching real products, I believe that Windows 8 is very distinctive but is risky for a Holiday 2012 release.

If you haven’t actually used Windows 8, I urge you to download it here. Actually using it is the only way to truly get the “feeling” of preview software and devices. What I will do is take you through the areas where I believe Windows 8 shines, needs work, and finally, areas where there’s not enough data to make a recommendation one way or another. I want to stress that my assessment is based on “preview” or “beta” code, not the finished product. Finished code is called RTM, or “Release To Manufacturing”. One very important hurdle for a preview or beta is that it must be feature complete, which in some areas Windows 8 is and in others, not.

Tablet Experiential Plusses

  • Fast response: My tablet booted very quickly and most times, woke up very quickly from “sleep.” Like DP, Metro was very fluid and fast as well, a first for a PC platform. Even installing apps was fast.
  • Content mashups:  Unlike Apple iOS or OSX, Microsoft has attempted to deliver what people with multiple online services really want: a focus on the content and interaction, not the service. For example, for those who have LinkedIn, Facebook, Twitter, multiple address books, etc., Win8 makes it simple. Instead of having to go to multiple services or apps, consumers go to apps like “People” (Facebook, Twitter, LinkedIn, Google), “Pictures” (Local, SkyDrive, Facebook, Flickr), and “Messenger” (Facebook, Microsoft, etc.) All of this saves time and places focus on the content.
  • Metro apps visually stunning: Microsoft pulled the “essence” of the app experience from Windows Phone, Zune and the XBOX 360. This results in beautiful-looking apps like Music, USA Today, Weather, Bewise Cookbook, and iCookbook. When playing music, cover art and band photos are “silhouetted” on the display, giving the feeling of a premium experience. Photos are huge and there is always a lot of white space. App beauty matters; just ask Apple.
  • Live tiles: Microsoft took what Android started in mobility, perfected the notification system with Windows Phone and extended it to the tablet. Without even opening up an app or swiping, consumers can see latest emails, next calendar item, most important stock prices, weather, and social media updates.
  • Dual use experience: I have been a proponent of modularity for years, as it is ultimately where the future of computing is going. With Win8 mid-term, Microsoft has the unique ability to capitalize on this with tablets, unlike Apple or Google. It’s simple; when users want to use the tablet on the couch or in bed, they use Metro. When they want the full desktop, they dock it with a full-sized mouse, keyboard, and 32” display and do real work. Microsoft ultimately needs to enable a way for a Metro and Desktop app to share the same local data files, but cloud-sync is an acceptable start, particularly for the tech-aware audience.
  • XBOX integration: Like Windows Phone, Windows 8 CP integrates XBOX functionality quite well but is just a start. Using the XBOX Companion app on my tablet, I could find movies, TV shows, and music and even launch games to be watched or played on the XBOX. It is like an XBOX remote on steroids. I am still waiting for the enhanced “play to” functionality to share local content like photos and web pages to the TV via the XBOX. This functionality was discussed in-depth at the BUILD conference.
  • Search: Unlike the iPad, users can do full document and app content search. This is what consumers expect and this is what Windows 8 CP delivers.

Too Early to Tell

  • ARM experience: Microsoft and its partners have been very selective in showing the Windows on ARM experience. It has been shown on stage and behind closed doors, but unlike the X86 versions, the public cannot touch the devices. Even at January’s CES show, the public was not allowed to touch them. If it were working great, there wouldn’t be a restriction, and as I pointed out here, there are many challenges with Windows on ARM.
  • Updates: Every operating system and app has updates, and for good reasons, namely security and bug fixes. What is unknown with Windows 8 is the size and frequency of updates. We all know that the current pace and method of Microsoft updates is unacceptable in the modern world, and if it continues at its current pace, it will detract from the tablet experience. The first day after I installed Windows 8 CP and got my system ready for desktop use, I received 34 updates; 4 for Windows 8 and 32 for Office. It took over an hour, and that’s unacceptable in a modern, tablet world.
  • Tablet Games: I was very impressed with Pinball FX, but one game does not make a trend. Given that games are the most popular iOS and Android app category, I would have expected more by now.
  • Metro SkyDrive: I have used SkyDrive and Live Mesh for many years but primarily use Dropbox and SugarSync. There are two main issues I have found. First, I can see no more than 14 icons on an 11″ tablet display, and second, there isn’t a search capability. Sorry, consumers don’t like to create file folders, nor do they manage them tightly. I expect Microsoft to change this; otherwise it renders SkyDrive useless.
  • Number of relevant apps: Certainly this will grow given Microsoft’s big bet and investment in developers, but I was expecting more apps 6 months after Visual Studio was shown at the BUILD conference. 15 games and 3 social media apps 6 months after the developer preview isn’t the progress I expected.
  • Tablet OS footprint: The size of the final tablet installation is unknown, but if it’s more than a few GB, this will be a cost issue for tablets. Hard drives are “free” on desktops, but on SSD-based tablets, it’s a premium. The current download size for Win8 CP is between 2.5GB and 3.3GB, but those then get “unpacked” and increase in size. Microsoft is recommending 16GB free space for 32-bit and 20GB for 64-bit so the reality is the build will be between the download and the recommendations. Keep your eye on this one….
  • Tablet battery life: Microsoft and its partners have made a tremendous effort to improve battery life. Early indications show that by re-architecting the way drivers and the BIOS work, using Metro as the front-end user experience, and leveraging the lowest-power ARM and X86 parts, battery life will be competitive. I expect it to be competitive but still less than iOS or Android devices; then again, Windows 8 does more, and I believe it won’t become a consumer issue.

Experiential Improvements Needed

  • Too many bugs: Yes, this is a preview, but I was surprised to see, this far into the development process, the number of application “hangs” with Metro apps like Mail, SkyDrive, and Photos. I experienced many situations where the screen just sat there in one color as if it were waiting for something. I used Microsoft’s recommended hardware tablet platform, so that cannot be the issue.
  • Universal email inbox: The Metro Mail application doesn’t support a universal inbox. This is just basic, and it is surprising that a feature-complete preview launched without one.
  • MS Office file format viewers: Unlike iOS, OSX, and Android, the Win8 CP doesn’t include local viewers for MS Office documents. But it does support viewing PDF files… huh? Click on a Word doc and you get sent to online SkyDrive, where you can view and even edit the document. I see why Microsoft would want this, as it “motivates” you to buy Office, but with all of the competition providing this, it really messes up the experience. Windows 8 on ARM systems do contain Office, but it isn’t clear what will ship on X86 systems. For the user’s sake, we can only hope that OEMs install at least viewers or Office Student Edition.
  • Metro Windows Explorer: Sorry, the newly designed Explorer doesn’t cut it in a touch environment. Even on an 11” display, it’s just too easy to click on the wrong icon or accidentally delete or move a file.
  • Metro Internet Explorer bookmark folders: Even Apple fought against it but finally learned on iOS that for a browser to be usable, it needs an easy way to file bookmarks. And that means folders. 50 bookmarks strewn all over the place is just a mess and will repel users.

Conclusions

Windows 8 Consumer Preview builds upon the Developer Preview by adding application previews and cloud connectivity.  Windows 8 for consumer tablets is very distinctive in that it can effectively be used as a tablet device for “lean-back” usage models and for “lean-forward” usages when docked in desktop mode. Like Android, Windows 8 takes a content-first approach, albeit with much more beauty and style, and simplifies users’ interactions between different local and cloud-based services.

Unlike iOS, Windows 8 is “alive” and vibrant with its live tiles, white space, and over-sized imagery. When launched, it will pose a serious threat to high-end Android tablets and will help thwart competitive threats on the desktop by Android, iOS (in convertible form), and even OSX. The biggest challenge I see is Microsoft’s and its partners’ ability to hit the 2012 holiday selling season with a stable operating system for tablets to compete with the iPad. That risk is being mitigated with special image loads for specific devices, but given the state of the Windows 8 CP experience, hitting holiday 2012 with the experience Microsoft envisions and must deliver will be a tremendous challenge.  I believe it is a bridge too far and that the experience will suffer in the rush to hit the holiday selling season.

NVIDIA’S Tegra 3 Leading the Way for Smartphone Modularity

I have been an advocate of modularity since before it became popular to do so. The theory seems straightforward to me: if the capabilities of a smartphone are outpacing the usage model demands of a rich client PC, then consumers could someday use their own smartphone as a PC.  Large displays, keyboards and mice still exist in this usage model, but the primary intelligence is in the smartphone, combined with wireless peripherals.  At this year’s Mobile World Congress, NVIDIA and its partners took us one step closer to this reality with the formal announcement of Tegra 3-based smartphones.

Tegra 3 for Smartphones

Tegra 3 is NVIDIA’s latest and greatest SOC for smartphones, “superphones”, and tablets.  It has four ARM A9-based high-performance 1.5 GHz cores and one “battery saver” core that operates when the lowest power is required.  The fifth core comes in handy when the system is idling or when the phone is checking for messages.  Tegra 3 also includes a very high-performance graphics subsystem for games and watching HD video, much more powerful than Qualcomm’s current Adreno 2XX hardware and software implementation.

NVIDIA announced five major Tegra 3 designs at Mobile World Congress: the HTC One X, LG Optimus 4X HD, ZTE Era, Fujitsu’s “ultra high spec smartphone” and the K-Touch Treasure V8.  These wins were in what NVIDIA coins as “superphones”, as they have the largest screens, the highest resolutions, the best audio, etc.  For example, the HTC One X sports a 4.7″ 720P HD display, the latest Android 4.0 OS, Beats audio, NFC (Near Field Communication), and its own image processor with a 28mm lens to take great pictures in extremely low light.  You get the idea.

There is a lot of goodness in the package, but that doesn’t remove the challenge of communicating the benefits of four cores on a 5 inch screen device.

Quad Core Phone Challenge

As I wrote previously, NVIDIA needs to overcome the challenge of leveraging four cores beyond the spec on the retail tear pad.  It’s a two-part challenge: first to actually make sure there is a real benefit, then to articulately and simply communicate it.  These are similar challenges PC manufacturers had to deal with.  The difference is that PC makers had 20 years of dual-socket machines to establish an ecosystem and a messaging system.  Quad-core tablets are an easier challenge, and quad-core convertibles are even easier, in that you can readily spot places where 4 cores matter, like web browsing and multitasking. Smartphones are a different situation in that, due to screen size limitations, multitab browsing and multitasking rarely peg a phone to its limits.  One major exception is in a modular environment, where NVIDIA shines the most.

Tegra 3 Shines the Most in Modular Usage Models

Modularity, simply put, is extending the smartphone beyond the built-in limitations. Those limitations are in the display, audio, and input mechanisms.  When the smartphone breaks the barriers of itself, this is where NVIDIA Tegra 3 shines the most.  I want to be clear; Tegra 3 is a competitive and differentiated smartphone and tablet SOC without modularity, but is most differentiated when it breaks free from its limited environment.

NVIDIA has done a much better job showing the vision of modularity, but its partners could do a better job actually delivering it.  On the positive side, partners are showing some levels of modularity. HTC just announced the HTC Link for the HTC One X, a software and hardware solution that plugs into an HDTV so you can wirelessly mirror what is on the phone’s display.  It’s like Apple’s AirPlay but better in some ways, like being able to project a video on the large display while doing something different on the phone display, like surfing the web.  Details are a bit sketchy for the HTC One X and HTC Link specifically, but I am hopeful they will roll out some useful modular usage models in the future. Apple’s wireless mirroring already supports games, so in this way HTC Link is behind.

What NVIDIA Tegra 3 Should Do

What NVIDIA’s partners need to create is a game console and digital media adapter solution that eliminates the need to buy an XBOX, PlayStation, Wii, Roku, or Apple TV, and then attack that market.  All of the base software and hardware is already there, and what HTC, ZTE, or LG needs to do now is package it to make it more convenient for gaming. This Tegra 3 “phone-console” should have a simple base near the TV providing it power, wired LAN, HDMI, and USB.  This way, someone could connect a wireless game controller and play games like the recently announced Tegra 3-optimized games at great resolutions with rich audio. The user would have the ability to send phone calls to voice mail or even to a Bluetooth headset.  Notifications can be muted if desired as well.  And of course, if you want to watch Netflix, Hulu, or Amazon movies, it’s all there, too.  The alternative to this scenario is a Wi-Fi Direct implementation that doesn’t require a base, where the user can utilize the phone as a multi-axis game controller with force feedback.  The challenge here is battery life, but the user can pause the game or movie and pick up phone calls and messages. This usage model isn’t for everyone, but think for a moment about a teenager or college-bound guy who loves gaming, wants a cool phone, and doesn’t have the cash to buy everything.  You know the type.

Other types of modularity that NVIDIA’s partners must develop are around productivity, where the phone drives a laptop shell, similar to Motorola’s Lapdock implementations as I analyzed here. Neither the software, hardware, nor price made the Lapdock a good solution, but many of the technologies now exist to change that.  NVIDIA’s Tegra 3 would be a great start in that it enables real multitasking when using the Lapdock in clamshell PC mode.  Android 4.0 provides a much more modular computing environment to properly display applications on both a 5″ and an 11″ display, including scaling the fonts and reorienting windows.  The Motorola Lapdock used two environments: Android Gingerbread and a different one for PC mode.  Unsurprisingly, it was a good start but a very rough one, with room to improve.

NVIDIA, the Silicon Modularity Leader with Tegra 3

NVIDIA with its Tegra 3 solution is clearly the current silicon leader to support future modular use cases.  They are ahead of the pack with their modularity vision, patiently waiting for their partners to catch up.  This was most evident at CES, where NVIDIA showed an ASUS Transformer Prime connected to an XBOX controller and an HDTV playing high-quality games. They also demoed the Prime playing high-end PC games through remote desktop. Now that is different.

The opportunity for HTC, ZTE, LG and potentially new customers like Sony, RIM, and Nokia is there, and the only question is whether they see the future well enough to capitalize on it.  With all the complaints from handset vendors about differentiation and profitability with Android, I continue to be puzzled by their lack of aggression.  An aggressive handset maker will jump on this opportunity in the next two years and make a lot of money in the process.

The Case for Intel’s Future Smartphone Success

In my many weekly conversations with industry insiders we discuss Intel’s chances in mobility markets, specifically smartphones. Few people are betting against Qualcomm and for very good reason in that they are entrenched at handset vendors and their 2012 roadmap, at least on paper, looks solid. What few are discussing is how Intel will pick up market share. My last column on Intel’s smartphone efforts outlined what Intel needs to demonstrate quickly to start gaining share and getting people to believe they can be a player. Now I want to take a look at why I believe Intel can and will pick up relevant market share over the next three years.

Intel Finally Broke the Code with Medfield

This isn’t Intel’s first time in mobility. Intel owned XScale, an ARM-based mobile processor that was in the most popular WinCE devices like the Compaq iPaq, one of the more popular Pocket PCs. XScale products even powered Blackberrys for a time. Intel sold the entire XScale mobile application processor business to Marvell in 2006 for $600M. This move was driven by Intel’s desire to focus on X86 designs. What followed were some failed mobile attempts with Menlo and Moorestown, two low-power, Atom-branded processors that made their way into MIDs (Mobile Internet Devices). It appeared that Intel would make grand announcements with big names like LG for smartphones, then nothing would happen afterward. Things are very different with Medfield. Lenovo handsets are in testing at China Unicom, and Motorola has announced its handsets will be at carriers by summer.

Medfield is a huge step forward in design and integration for Intel. First, it combines the application processor with I/O capabilities on a single chip. This saves handset makers integration time and board space. Secondly, it is paired with the Intel XMM 6260 radio based on the Infineon Wireless Solutions (WLS) acquisition. This increases the Intel revenue BOM (Bill of Material) and also helps with handset integration. Finally, Intel has embraced the Android mobile OS in a huge way with a large developer investment and will provide optimized drivers for Medfield’s subsystems. This move is in contrast to their MeeGo OS efforts that didn’t go anywhere. Intel has even gone to the effort to emulate ARM instructions so that it can run native apps that talk directly to ARM. These apps are typically games that need to be closer to the hardware. This is a very good start for Intel, but as I tell my clients, if there are 10 steps to mobile silicon success, Intel just successfully crossed step 3.

It’s a Tough Smartphone Market

Intel made some very serious headway with Medfield, but it is a very competitive market out there. According to IDC, in Q4 2011, Apple and Samsung combined to garner almost 50% of the smartphone market. As I pointed out in my previous column, Apple already designs their A-Series processors and I don’t see that changing. I expect Samsung, with the exception of the very low end, to lean into their own Exynos silicon. Nokia, at 12% Q4 smartphone share, is tied to Windows Phone and Qualcomm, at least for the short term. Struggling RIM doesn’t need another variable to worry about with their muddled operating system strategy and is currently tied to Qualcomm. Finally, HTC is rumored to tie up with NVIDIA on its Tegra platform at the high end. Who does this leave for Intel?

For Intel in the short term, with Motorola and Lenovo on board, this leaves private-label for carriers, LG, Sony, ZTE, Huawei, Kyocera, Sanyo and a very long tail of small manufacturers. The long tail will be a challenge for Medfield until Intel waterfalls the product line to be cost-competitive with lower-end models. I expect Intel to start waterfalling products down at the end of 2012.

Why Intel Could Succeed

While I have outlined the many challenges, Intel could very well succeed in the space longer term. First, the phone marketplace is rapidly changing. Not only have there been tremendous share shifts in the last two years, but feature phones are migrating to the smartphone market, resulting in exploding growth.

Operating systems are far from shaken out. Microsoft will not go gently into the night with Windows Phone and will invest what it takes to be successful, even if it takes another Nokia-like investment to own another platform. I also believe that once Microsoft starts gaining share, they will devote resources to X86 on Windows Phone 8 or 9 platforms. They see Intel as successful with Medfield, and the WINTEL alliance could be brought back from the dead. Long-term, I do not believe Samsung will be happy licensing someone else’s operating system, particularly given Apple’s integration and experience success. I expect Samsung to do one of three things, possibly two: increase investment in Bada to a point that it can compete with Android in a closed environment, embrace webOS, or lean heavily into Tizen. Marketplaces in dynamic change are an opportunity for newcomers, even companies worth $140B like Intel.

One other important factor that hasn’t fully played out is “carrier versus handset-maker” dominance. Up until the Apple iPhone, the carriers dictated terms to the handset makers. Every carrier who has adopted the iPhone has taken a gross margin reduction. This doesn’t mean they made a bad decision; they had to carry the iPhone. That carrier margin reduction money is going to Apple and not the carriers. Carriers are strategizing how they can regain that dominance going forward, and I believe Intel will be part of those plans. Intel has the capability to partner with an extremely low-cost manufacturer or ODM on an entire solution, white-label it to a carrier and provide a competitive Android experience. I expect a few key announcements this month at this year’s Mobile World Congress.

Of course, we cannot forget about Intel’s technology. According to tests run at Anandtech, Intel’s Medfield is competitive in power at 32nm LP, so you must assume that it only gets better with Intel’s 22nm 3D Tri-Gate technology. Intel will roll Atom to 22nm in 2013 and 14nm in 2014. All the while, in 2012 TSMC is at best at 28nm, and GLOBALFOUNDRIES and Samsung are at 32nm.

I define success as the ability to reach a relevant level of profitable business that supports the desired brand goals. For Intel, this doesn’t need to be 80% like they have in the PC market, but needs to be a profitable 20%.

What this Means for Intel, Qualcomm, Texas Instruments, and NVIDIA

Over a period of three years, Intel will start to take market share from Qualcomm, Texas Instruments and NVIDIA, albeit very little in 2012. As Intel integrates wireless, moves to 14nm, and waterfalls its offerings to lower price-point smartphones, it becomes much more competitive for handset makers and carriers. I expect Huawei, ZTE, or a major carrier to go big with Intel in 2013, which will make a huge difference. One thing to remember about Intel: unlike others in the marketplace, Intel also captures the manufacturing margin TSMC and GLOBALFOUNDRIES make and the design margin ARM earns. While Intel has a long way to go in proving themselves, they have a start they have never had before, at a time when they can take advantage of the mammoth growth in smartphones. Never count Intel out of any market, no matter how many times they have tried and failed.

Future iPads Will Cannibalize TVs

As ZDNet’s Adrian Kingsley-Hughes points out, we know absolutely “nothing” about the iPad 3 right now.  While pontificating about future Apple products is a lot of fun, drives many page views and makes web site editors very happy, it’s  just a pontification factory.  At some point in the near future, the iPad will have a better display and will be lighter than its predecessors, which drives me to the conclusion that the next generations of the iPad will start to rapidly cannibalize HDTVs, particularly second or third sets. I’d like to share my thoughts on why this will happen.

Consumers Radically Changing TV Viewing Habits

I have done a lot of consumer research and have been tracking PC use in the living room for years.  Around 10 years ago, outside the very tech savvy, users started augmenting their TV viewing experience with a notebook PC.  The early majority pecked away at their notebooks as the rest of the family watched something the early majority somewhat ignored.  This model then transitioned into family members watching shows that were on the major broadcaster’s web sites.  Remember when Big Brother started providing live PC feeds?  This model quickly was augmented by Hulu and Netflix diving into the market as an intermediary.  The iPad followed which provided very simple, manicured apps that gave access to rich “TV” content from Netflix, Hulu, and even cable companies like Time Warner and Comcast.

These viewing habits drove a wedge between two distinct usages: personal and group viewing. Mobile devices like the iPad and their services enabled the growth of personal viewing, and consumers could watch finer slices of what they wanted, when and where they wanted.

Group viewing isn’t going away any time soon, but as more people spend time on personal viewing, group viewing declines.  The only exception is “crossover” viewing, where a family member wearing headphones watches another show on a mobile device while other family/dorm members watch on the HDTV.  Regardless of the viewing model, it drives the need for more personal viewing devices and fewer group devices, or certainly drives the behavior to prioritize personal over group.

Consumers are changing their viewing habits from group to personal, but will future iPads be up to the task in terms of video and weight?

Future iPad Display as Good as Watching a 75″ HDTV?

No one publicly knows the iPad 3 resolution for sure, but let’s assume that the lines are doubled horizontally and vertically to provide a “2K” (2,048×1,536) resolution, which provides 4X the pixels of the current 1,024×768 display.  I will also assume that content will come in three flavors: 1) upscaled to 2K by the iPad, 2) services providing upscaled 2K content, and 3) in special cases, iTunes providing native 2K content.  Net-net, there will be video content that can take advantage of the new and higher resolution.

Most people watch iPad video content between 12″ and 16″ from their eyes, depending on whether they’re in bed or sitting on a couch. Assuming the iPad 3 is 2K, the visual experience would be similar to watching a 75″ HDTV at 10′.  Users vary in terms of visual acuity and even neck length, but mathematically the numbers make sense.  The farther the TV is from the user, the larger it must be to compensate for the distance.  Future iPads will provide a video experience similar to that of a huge HDTV.
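The comparison above is simple trigonometry: what matters is the angle the screen subtends at the eye. As a sketch, assuming a 9.7″ 4:3 iPad panel and a 14″ viewing distance (the midpoint of the 12″–16″ range above):

```python
import math

def angular_width_deg(diag_in, aspect_w, aspect_h, distance_in):
    """Horizontal angle (degrees) a screen subtends at a given viewing distance."""
    width = diag_in * aspect_w / math.hypot(aspect_w, aspect_h)
    return 2 * math.degrees(math.atan((width / 2) / distance_in))

# 9.7" 4:3 tablet held 14" away vs. a 75" 16:9 HDTV at 10 feet (120")
ipad = angular_width_deg(9.7, 4, 3, 14)    # roughly 31 degrees
tv = angular_width_deg(75, 16, 9, 120)     # roughly 30 degrees
```

Both screens fill about a 30-degree horizontal field of view, which is why the two experiences land in the same ballpark.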

Weight is a Deal-breaker

Many people I discuss this with argue that the iPad 2 is light enough to replace much of their TV viewing.  There aren’t “standards” that dictate this, but from doing primary research with consumers, there is a weight range that enables someone to comfortably hold a device and stare at it for hours.  The Amazon Kindle DX2 at 18.9 ounces (1.18 lbs.) is closer to the right form factor and weight to be held comfortably in bed or on a couch.  If you have ever watched a show on a 1.325-lb. iPad 2 in bed, you know exactly what I am talking about.  After a while, you wish the iPad would just float so you didn’t have to touch it.  Or you find a way to rig the stand so that you can lie on your side without touching it.  To effectively replace personal TV viewing, future iPads must be significantly lighter.

5 iPads for the Price of One HDTV

Consumers have an amazing way of rationalizing cool electronics they want to buy.  Most consumers use their heart over their head when making an electronics purchase, and I can see users rationalizing buying multiple iPads instead of placing their investment into a TV. If Apple were to go after the secondary TV market with a vengeance, I could see the consumer rationalization going like this…. “Hmmmm…. I can buy 5 iPads for the price of one high-quality 60″ TV.  And everyone in the family would have them.  I will be the hero of the family in that everyone gets an iPad where they watch what they want to watch and do all the other great iPad things. And, when we want to watch the big game together, we can watch the older, but good enough, 50″ we bought 5 years ago.”  This is human rationalization at work and happens every day in consumer electronics.

Siri Will Push Consumers Over the Edge

I’m not going out on a limb when I say I believe Apple will integrate Siri into future iPads.  The stronger commitment I think they will make is to an entertainment dictionary and natural language capability.  As I wrote last September on the fabled “iTV”, instead of popping between Netflix, Hulu+, YouTube, iTunes and TWC TV, I believe Apple will aggregate and index this “channel” content into Siri to provide a one-stop, touch- and voice-enabled experience. In this way, users can say “find Revenge” and Siri will scan across all of the registered sources and look for re-runs, live airings or taped versions of “Revenge”, regardless of the source.  This is the ultimate remote control in a world where thousands of “channels” are available.

This will serve as the final consumer rationalization point they need to make the tradeoff between a new TV or iPad.

Holiday 2012 Will Provide a Directional Indicator

With 3D an unmitigated flop in TVs and flat panel saturation becoming a reality, the TV industry is banking on “Smart TV” to pull it out of the hole in 2012.  As I’ve written previously, TVs won’t be very smart in 2012 when it comes to advanced user interfaces, and they aren’t bringing anything else to the table to motivate consumers to replace their old HDTVs.  What is new is a shiny iPad with much higher resolution than their TV and the best entertainment remote control interface, at a dramatically lower price than a new TV.  I believe the future iPad will take a big chunk of the secondary TV market and even delay new primary TV purchases through consumer rationalization that it can serve as the primary “personal TV” device, and I expect to start to see the effects in the holiday 2012 selling season.

What Intel Must Demonstrate in Smartphones (and soon)

Intel made a big splash at CES 2012 with the announcement that Motorola and Lenovo committed to Intel’s Medfield smartphone solution. This came on the heels of a disappointing break-up between Intel and Nokia as well as a lack of previous traction with LG. While Intel has come farther than ever before with one of their X86 SOCs, they still have a long way to go to claim smartphone victory. Of course Intel knows this and is working diligently and sparing no expense. The biggest challenge Intel faces is attacking a market where the incumbent ARM ecosystem partners Qualcomm, NVIDIA, and Texas Instruments hold almost 100% market share. To start gaining share in smartphones, Intel must demonstrate many things in the near future.

More Design Wins with Key Players

The Motorola announcement was impressive in that Moto has a respected name in smartphones, but they won’t carry Intel far alone. Lenovo is an even smaller player; while very successful in PCs, they haven’t been able to secure much smartphone market share even in their home country, China. Intel knows they need a few more partners to start chipping away at market share, and I expect them to announce at least one at this year’s Mobile World Congress.

One of the challenges is that many of the top players are already locked in one way or another, have negative history with Intel, or are rapidly losing share. Apple already has their own A-Series SOC, Samsung has their Exynos SOC, and Nokia rebuffed Intel last year and is clearly locked into ARM and Microsoft for the time being. RIM as a partner is a shaky proposition, and HTC is an aggressive player but has recently been dropping share. That leaves lower smartphone market share holders LG, Sony, Sharp, NEC and ZTE in the short term.

Longer term, I don’t expect Apple or Samsung to get out of the SOC business because they have been successful with their own strategies. I cannot see Nokia or Microsoft motivated to drive a change or provide dual support for X86 until Windows 9. RIM is in a free-fall with no bottom in sight. Intel is forced to take the long-term approach, as they are with Lenovo, by developing smaller smartphone players into larger ones. ZTE certainly is a good long-term prospect, as is Huawei. If Intel can leverage their PC franchise with those companies, I could see them being successful.

Relevant, Differentiated, and Demonstrable Usage Models

In fighting any incumbent, the new entrant must provide something well above and beyond what the incumbent offers to incent a change in behavior. I am assuming that Intel won’t lead on low price or lowest development cost, so they must offer handset makers or the carriers a way to make more money, or get consumers to demand an Intel-based smartphone. Regardless of which variable Intel wants to push, they must devise relevant, differentiated and demonstrable usage models that ARM cannot match.

By relevant, I mean it must fix a known pain point or create a real “wow” feature consumers never asked for but is so cool it cannot be passed up. One pain point example is battery life. Battery life is simply not good enough on smartphones used many times daily; if this weren’t true, car chargers and battery packs wouldn’t be so popular. Wireless display is useful and cool but not differentiated, in that Apple can enable this via AirPlay. Demonstrable means it must be able to be demonstrated in the store, in an ad, or online. If something isn’t demonstrable, it may as well not exist.

I would like to see Intel invest heavily in modularity, or the ability to best turn the smartphone into a PC through wireless display and wireless input. Yes, this is dangerous short-term in that if Intel does a great job at it then they could eat into their PC processor franchise. But, this is the innovator’s dilemma, and a leader must sacrifice something today to get something tomorrow. I could envision an Intel-based emerging region smartphone that enables PC functionality. ARM cannot offer this well today but will be able to in the future with their A15 and beyond-based silicon. Intel should jump on the modularity opportunity while it lasts.

One other opportunity here is for Intel to leverage their end-to-end experience from the X86-based Intel smartphone to the X86-based data center. If Intel can demonstrate something incredible in the end-to-end experience with something like security or a super-fast virtualized desktop, this could be incredibly impactful. One thing that will be with us for at least another 5 years is bandwidth limitation.

Carrier Excitement

Outside of Apple, the carriers are the gatekeepers. Consumers must go through them to get the wireless plans, the phones, and most importantly, the wireless subsidy. Apple’s market entry strategy with AT&T on the iPhone was a strategic masterpiece in how to get into a market and change the rules over time. Apple drove so much consumer demand for iPhones that the carriers were begging Apple to carry the iPhone, the exact opposite of the previous decade.

Intel must get carriers excited about the new usage models, bring them a new stream of revenue they feel they are being cut out of, or lower their costs. Intel doesn’t bring them revenue from the content side, but I can imagine Intel enabling telcos to get a piece of the classic retailer’s PC action once “family data plans” become a reality. While telco-distributed PCs weren’t a big success in the past, this was due primarily to the absence of family data plans. I can also imagine Intel helping telcos lower the costs of their massive data centers with Xeon-based servers. Finally, if Intel could shift traffic off the already oversold “wire” by moving processing from the cloud onto their SOCs, this would be very good in a bandwidth-constrained environment.

Competitive Handset Power

At CES, Intel showed some very impressive battery life figures for Medfield handsets:

  • 6 hours of HD video playback
  • 5 hours of 3G browsing
  • 45 hours of audio playback
  • 8 hours of 3G talk time
  • 14 days of standby

This was measured on Intel’s own reference platform, which is only somewhat representative of how OEM handsets will perform. What will be very telling is how Medfield performs in a Tier 1 handset maker’s device when Motorola launches in Q3 2012. There is no reason to think the Moto handset won’t post similarly impressive battery life figures, but Intel could gain even more credibility by releasing those figures when available.

When Will We Know When/If Intel’s Smartphone Effort is a Success?

Intel has slowly but surely made inroads into the smartphone market. Medfield is impressive, but competing with and taking share from an incumbent with 99%+ market share is a daunting task. The easy answer is to measure Intel’s progress by market share alone, but that’s lazy. I believe Intel’s smartphone efforts should first be measured by carrier alliances, the number of handset wins, handset quality, and the new end usage models their SOCs and software can enable. Only as these efforts lead to potential share gains does it make sense to start measuring and scrutinizing share.

Why the PC Industry Cannot Ignore Smartphones

When HP abandoned their smartphone and tablet business and webOS last August, many in the industry were disappointed at the speed with which the Palm acquisition was dismantled. Some who consider themselves "business-savvy" said it was the wise approach, as it wasn’t core to HP’s corporate mission. They said that smartphones were a distraction from competing with IBM and even Dell. We won’t know for 3-5 years whether it was a good decision or not.

I believe though, that just as PC companies fought to stay away from the sub-$1,000 PC market in the 90’s, PC makers who don’t embrace smartphones could be out of the client hardware business in 5 years.

Some Context

Over the last 20 years, PC hardware and software have done this little dance where one is ahead of the other. New software came out that required better hardware, then the new hardware outpaced the old software, and the cycle continued. With the better hardware and software came new features and usage models like multimedia, desktop publishing, 3D games, DVD video, videoconferencing, digital photography, the visual internet, and video editing. Then Microsoft Vista launched, and it seemed no matter how much hardware users threw at it, issues still existed. Microsoft then spent the next few years fixing Vista and launching Windows 7 instead of developing environments for new rich client usage models. Windows 7 actually required fewer hardware resources than Vista, the first time that could be said of a Microsoft OS. Microsoft is even publicly communicating that Windows 8 will require fewer resources than Windows 7. So what happened? Did the industry run out of usage models to consume rich PC cycles? No, there are many usage models yet to be developed that need rich PC clients.

What happened was netbooks, smartphones and tablets. Netbooks threatened Microsoft and forced them to re-configure Windows XP for the small, cheap laptops. This was in response to the first netbooks, loaded with Linux, getting shipped into Best Buy and sold direct on the internet. In retrospect this wasn’t a real threat to Microsoft, as those netbooks had a reported 50%+ return rate. After netbooks came MIDs, and after MIDs failed came touch smartphones and the iPad. Once the iPhone and iPad showed strong sales, it was clear the center of design was moving to mobility, even though the needs the rich client PC could solve didn’t just go away.

Windows 8 and Rich PC Clients

Windows 8 was clearly architected to provide a tablet alternative to the iPad and stem the flow from Windows to iOS and Android. Most of the work has gone into providing a new user and development environment, Metro and WinRT, and into enabling ARM SOCs. None of these investments does a single thing to propel the traditional rich PC client forward, with the possible exception of enabling touch on an all-in-one desktop. Without Microsoft making major investments to propel the rich client forward, it won’t move forward, much to the dismay of Intel, AMD and Nvidia. I want to be clear that there are still problems the rich client PC can solve, but the software ecosystem and VC investment are enamored primarily with tablets, smartphones and the cloud. Without Microsoft’s investment in rich PC clients, thinner clients like phones and tablets will evolve at a much faster rate than rich PCs.

The Consequences of Not Investing in the Rich PC Client

With the software ecosystem driving "thin" clients at a much faster rate than "rich" clients, the consequences are starting to emerge. We see them around us every day. Users are spending more time with their tablets and smartphones than they are with their PCs. Savvy users are doing higher-order content creation like photo editing, video editing and even making music with GarageBand. That doesn’t mean they don’t need their PCs today. They do, because neither smartphones nor tablets can do everything a PC or Mac can do…. at least today. Display size, input method and the lack of software modularity are the biggest challenges.

Enter Smartphone Modularity

Today, many users in traditional regions require at least a smartphone and a PC, and a tablet is an adder. Tomorrow, if users can easily attach a keyboard to a tablet via a convertible design, they may not need a PC as we know it today. It’s not a productive discussion if we debate if we call this a PC with a removable display or a tablet with a keyboard. What’s important is that some users won’t need three devices, they’ll just need two.

What about having just one compute device, a smartphone, with the rest of the devices merely displays or shells? It sounds a bit aggressive, but let’s peel this back:

  • Apps: If you believe that the smartphone ecosystem and its apps move a lot faster than the rich client ecosystem, then it follows that thin clients will at some point be able to run the same rich apps as a PC. Then the question becomes "when".
  • OS/Dev Environment: iOS, Windows, and Android are all becoming modular, in that their goal is that you write once and deploy everywhere. Specifically, write once for a dev environment and deploy to a watch, phone, tablet, PC and TV or console.
  • Hardware: Fixed function blocks and programmable blocks on tablet and smartphone SOCs are taking over many of the laborious tasks general purpose CPUs once worked on. This is why many smartphones can display a beautiful 1080P video on an HDTV. This is true for video decode, video and photo cleanup, and natural user interfaces too. 3D graphics will continue to be an important subsystem in the SOC block.
  • Display: With WiDi, WiFi Direct, and WiFi AC on the mainstream horizon, there’s no reason to think a user cannot beautifully display apps from their 4" smartphone on a 32" high-resolution PC display. Today with my iPhone 4s I can display 1024×768 via AirPlay mirroring with a little lag, but that’s today, via a router and WiFi network. I can connect today via hardwire and it looks really good. In the future, images and fonts will scale to the display’s resolution and the lag will disappear, meaning I won’t even need to physically dock my smartphone. It will all be done wirelessly.
  • Peripherals: Already today, depending on the OS, smartphones can accept keyboard, mouse and joystick via Bluetooth, WiFi or USB. The fact that an iPad cannot use a mouse is about marketing and not capability.

Smartphone Modularity a Sure Bet?

As in life, there are no sure things, but the smartphone and cloud ecosystems will be driving toward smartphone modularity to the point where they want you to forget about PCs. Apple, Microsoft, and Google are building scalable operating systems and development environments to support this. Why Microsoft? I believe they see that the future of the client is the smartphone, and if they don’t win in smartphones, they could lose the future client. They can’t just abandon PCs today, so they are inching toward that with a scalable Metro-Desktop interface and dev environment. Metro for Windows 8 means Metro apps not just for the PC, but for the tablet and the Windows smartphone. The big question is, if Microsoft sees the decline of the PC platform in favor of the smartphone, why aren’t all the Windows PC OEMs seeing this too? One thing I am certain of: the PC industry cannot ignore the smartphone market, or they won’t be in the client computing market in the long term.

The Dell XPS 13: An Ultrabook that Could Steal Customers From Apple

If you are in the high-tech industry and haven’t heard of the term “Ultrabook”, you’ve probably been on sabbatical or living under a rock. Intel introduced an industry-wide initiative to re-think the Windows notebook PC, which they have dubbed and trademarked the “Ultrabook”. Launched at Computex 2011, Ultrabooks are designed to be very thin and light, have good battery life, have instant-on from sleep, be more secure and have good performance. If you want to see the details on what constitutes an Ultrabook, let me direct you to an article I wrote in Forbes yesterday. Does this sound a bit like a MacBook Air? This is what I thought about the entire category until Dell lent me their Ultrabook, the Dell XPS 13, for a few days. I have to say, I am very impressed and believe they have a winner here that could take some business from Apple. I don’t make that statement lightly, as my family owns three MacBooks and I like them a lot.

Dell plays hard to get
When Ultrabooks were first introduced in July, Dell was somewhat silent on their intentions. Typically Dell is locked arm in arm with Intel many steps of the way. When they didn’t introduce an Ultrabook by the back to school selling season, “industry people” started to ask questions. When Dell didn’t release one by the holiday selling season, people were asking, “what’s wrong with the Ultrabook category”, or “what is Dell cooking up”?

I thought they were waiting for Intel’s Ivy Bridge solution scheduled for early this year. Whatever Dell was waiting for doesn’t matter, because they did nothing but impress at CES. During the Intel keynote with Intel’s Paul Otellini, Dell’s vice chairman Jeff Clarke stormed on-stage with some serious Texas swagger. The video cameras at the CES event didn’t do the Dell XPS 13 justice, as it’s hard to “get” the ethos of any device on camera, but with Jeff Clarke and Paul Otellini on stage, you knew it was important to both companies. In my 20+ years as a PC OEM and technology provider to OEMs, I’ve learned the only way to really “get” a product is to live with it as your primary device for a few days. And that’s just what I did.

Industrial Design
It’s apparent to me that Dell took their combined commercial and consumer experience and put it to good use. Rather than just follow Apple, HP or Lenovo, they put together what I would call the best of both worlds. The machined aluminum frame adds the brawn and high-brow feel, while the rubberized carbon-fiber composite base serves to keep the user’s lap cool and reduce weight. The rubberized palm rest provides a slip-proof environment that adds serious precision to keystrokes and trackpad gestures. It also provides a slip-proof mechanism for carrying the unit across the house, the office, or into a coffee shop. In a nutshell, Dell solved my complaints about my MacBook Air and made it look, feel and operate premium.

Instant-On
I give Dell and Intel credit for working together to make Windows 7 PCs almost “instant on”. The XPS 13 turned on and off very quickly thanks to Intel Rapid Start and Dell’s integration. I wasn’t able to use Smart Connect, but when I can use the XPS 13 for a few weeks I want to try it out. This is essentially a feature that intermittently pulls the XPS out of its sleep state to pull in email and calendar updates. While it isn’t truly “always on, always connected”, it is a decent proxy and as close as a PC will get today.

Ingredient Branding and Certifications
Historically, the typical Windows-based PC with all its stickers looks like a cross between a NASCAR race car and the back of a microwave oven. That doesn’t exactly motivate anyone to shell out more than $599 for a Windows notebook. There are no visible stickers on the XPS 13, and the only external evidence of Intel and Microsoft is on a laser-etched silver plate on the bottom of the unit. Underneath the plate are all the things users usually ignore, like certifications.

Keyboard and Trackpad
I never quite understood how little evaluation time users spend on what ends up being one of the most important aspects of a notebook: the keyboard and trackpad. I already talked about the rubberized palm rest that gives the XPS 13 a stable palm base for the keyboard and trackpad; my palms slip all over the place on my MacBook Air. The XPS 13’s keyboard is auto-backlit, and the keys have good travel and a firm touch. The trackpad feels like coated glass and supports all of the Windows 7 gestures. Clicking works by either physically clicking the trackpad down or gently tapping it. It’s the user’s choice.

Display
The display is 13.3″ at a very bright 300 nits and 1,366×768 resolution. It’s an edge-to-edge display (or nearly), which allowed Dell to fit a 13.3″ display into roughly a 12″ chassis. I compared it to a MacBook Air, and it is in fact narrower with the same size display. That is very impressive. I would have preferred a higher-resolution display, but I don’t know that many users will make a huge deal out of this. The display is coated with Gorilla Glass, which gives some added comfort that it will stand up to my kids accidentally scratching it.

Ports
Compared to some of the other Ultrabooks, I applaud Dell for removing some of the ports that I am certain primary research said were “must-haves.” Must-haves like a VGA port, 5 USB ports, and an ethernet port. (yawn) Users get a DisplayPort, one USB 3.0 port, one powered USB 2.0 port, and a headphone jack. The only port I would have added is a mini or micro HDMI port; DisplayPort guarantees that I will need to buy a cable or an adapter I don’t have. I can live without an SD card reader, but it sure would have been nice if they could have fit one inside.

Battery Life
I am still very skeptical of most battery life figures for any battery-powered product. One exception is the Apple iPhone and iPad, where Apple goes out of their way to provide as much detail as possible for different use cases. With that caveat, I do believe the Dell XPS 13 will post very respectable battery life figures versus other Ultrabooks and the Apple MacBook Air. Dell says the XPS 13 will achieve nearly 9 hours of battery life, well above Intel’s target of between 5 and 8 hours.

One of the sexier features harkens back to the days of Dell batteries that had buttons to gauge how much power was left. Like the Dell batteries of yesteryear, press a small button on the side (not the back) of the XPS 13, and it will light up circles to show how much battery you have left. That shows a dedication to useful innovation, not penny-pinching decisions made in dark meeting rooms. This is the kind of small thing that demonstrates the attention to detail Apple has, quite frankly, dominated so far.

Consumer and Commercial Applicability
Whenever I hear that one product serves two different markets, I usually cringe and jump to the conclusion that it will be mediocre at both. I also take a very realistic approach to the “consumerization of IT”, in that I believe we are a long way off from 50% of the world’s enterprises giving their employees money to choose their own laptop. In the case of the Dell XPS 13, I believe it will provide a good value proposition to both target sets. Consumers are driven by style, aesthetics and perceived performance at a certain price point, while businesses are more interested in TCO, services, security, and custom configurability. The Dell XPS 13 provides all that. Dell may run into challenges with IT departments over the sealed battery and the lack of VGA and Ethernet ports, but then again, a few IT departments would require serial ports if you let them spec out the machine completely.

Pricing and Specs
The Dell XPS 13 starts at $999 and includes an Intel Core i5 processor, Intel HD 3000 graphics, a 128GB SSD, 4GB of memory, USB 3.0, and Windows 7 Home Premium. For a similarly configured Apple MacBook Air, buyers would pay $1,299. With the Mac, you get OS X Lion, a slightly higher-resolution display, Thunderbolt I/O, and an SD card slot. And yes, for the record, I know PCs don’t sell primarily on specs, but specs are still a factor in the decision. If they weren’t, Apple wouldn’t provide any specs anywhere, right?
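For reference, the gap between the two quoted configurations works out as follows (list prices taken from the text; the percentage framing is my own):

```python
# Price delta between the quoted Dell XPS 13 and MacBook Air configs.
dell_xps_13 = 999
macbook_air = 1299

delta = macbook_air - dell_xps_13
print(delta)                             # 300
print(round(delta / dell_xps_13 * 100))  # 30  (% premium for the Air)
```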

Possibly Taking Bites from the Apple
From everything I experienced with the Dell XPS 13 evaluation unit, I can safely say that Dell has a potential winner. Why do I say “potential”? First, I’m using an evaluation unit, not a factory unit with a factory software image. As a user or sales associate, if I start Windows and immediately get warning messages for virus protection, firewall and 3rd-party software, the coolness factor will be for naught. The first consumer impression will be bad. I hope this doesn’t happen with the factory software load.

Many factors go into successfully selling a system and creating a lasting consumer bond. Great products must align with great marketing, distribution and support. Controlling the message at retail is key. If, and I mean “if”, Dell can effectively pull their messages through retail and somewhat control merchandising, this will be a solid step in connecting the value prop with the consumer. This is very hard, especially in the U.S., where Best Buy rules brick and mortar. What will the Best Buy yellow shirt say when someone asks, “what’s the difference between the MacBook Air and the Dell XPS?” If they say “$300”, that is a fail. Retail will be important, more important than direct for Dell, because industrial design doesn’t translate well to the web. Seeing an image of the XPS 13 doesn’t impress as much as holding it does, so retail cannot be minimized.

I see the XPS 13 doing well in business and enterprise, again, given aligned messaging, channel, sales training and support. IT departments now have a design that is every bit as cool as the MacBook Air and arguably more productive plus the added benefits of TPM and Dell’s customization and support.

Net-net, I see potential consumer and business buyers of thin and very light notebooks looking at Apple’s MacBook Air and many choosing the Dell XPS 13 Ultrabook instead. This won’t be based just on price, but on all the other benefits I’ve outlined above. I also believe Apple’s MacBook Air sales will increase during 2012, but they would have sold more had it not been for Ultrabooks, especially the Dell XPS 13, the best Ultrabook I’ve used so far.

You can get more information on the Dell XPS 13 Ultrabook here on Dell’s website.

The Potential Losers if Ultrabooks Win

(Originally published on Forbes)

Ultrabooks were one of the most discussed form factors at this year’s CES 2012.  This was due not only to Intel’s CES marketing push, but by all of Intel’s ecosystem demonstrating their prowess by showing their latest and greatest designs.  OEMs like Dell, HP, Acer, Asus, Toshiba and Lenovo showed their new designs with different industrial design, color, keyboards, displays, Intel processors, storage, and proprietary software and cloud services.  One question I have received often since CES is, “who loses if Ultrabooks are successful”?  We must first start by defining an Ultrabook then move on to a complex discussion with many scenarios.

What is an Ultrabook?

Ultrabooks were introduced by Intel at last year’s Computex 2011. Intel owns the Ultrabook trademark, which means only those who license it and abide by its restrictions can use it. This becomes important as it relates to receiving Intel marketing and design funds.  If OEMs, ODMs and retailers don’t abide by the Ultrabook definition, they will not be eligible for those funds.

An Ultrabook is a notebook computer that has the following characteristics:

  • Thin: 21mm or less.  As a comparison, the 13.3″ MacBook Air is 17mm at its thickest point.
  • Battery Life: 5 to 8 hours.  The 13.3″ MacBook Air, per Apple, gets 7 hours of “wireless web” browsing.
  • Start up: Intel describes that “the system wakes up almost instantly and gives users quick access to their data and applications.”  There are storage, boot, sleep, and BIOS implications to this.
  • Secure: Intel states that “bios/firmware is enabled to expose hardware features for Intel Anti-Theft Technology (AT) and Intel Identity Protection Technology (IPT).”  This means hooks must exist in the BIOS that can talk to Intel AT and Intel IPT.
  • Processor: Intel Core Processor Family for Ultrabook.

Storage Implications

Most of today’s notebooks use spinning storage, specifically a 2.5″ hard drive.  On the spot market, you can buy a 1 TB 2.5″ hard drive for $110-145. This is very inexpensive and enough storage to hold just about everything a user may need unless they’re a videophile.  The downside is that physical hard drives are slower and consume more power than SSDs.  To achieve the battery life and, more importantly, the start-up requirements, Ultrabooks require some form of SSD.  SSDs can come in the form of a pure SSD or a hybrid drive, which combines an SSD with a physical hard drive. A 128GB SSD on the spot market is around $175-200.  A 500GB hybrid drive with 4GB of flash costs $150 at retail.
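Using midpoints of the prices quoted above, the cost-per-gigabyte gap looks roughly like this (treat these as rough 2012 spot figures, not authoritative pricing):

```python
# Cost-per-gigabyte comparison using midpoints of the quoted price ranges.
# These are rough 2012-era figures from the text, not current pricing.

hdd_per_gb = 127.50 / 1000    # 1 TB 2.5" hard drive at ~$110-145
ssd_per_gb = 187.50 / 128     # 128 GB SSD at ~$175-200
hybrid_per_gb = 150.00 / 500  # 500 GB hybrid drive at $150

print(round(hdd_per_gb, 2))     # 0.13
print(round(ssd_per_gb, 2))     # 1.46
print(round(hybrid_per_gb, 2))  # 0.3
```

The roughly 10x per-gigabyte premium of pure SSDs over hard drives is why hybrid drives exist at all.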

The potential losers here are traditional spinning 2.5″ hard drives.  Hybrid drive-based Ultrabooks are just hitting the market, and it’s too early to say whether they will dominate over the more expensive but more responsive and power-efficient SSDs.  Seagate is already in the market with their Momentus XT brand hybrid drives, but Western Digital has yet to show up with a consumer solution.

Discrete Graphics Implications

Two different kinds of PC graphics exist, discrete and integrated.  Discrete graphics are a separate chip that is either soldered onto the mainboard or, more likely, sits on a separate card inside the notebook.  Integrated graphics live inside the SOC (System on a Chip) with the CPU and memory controller, or in what’s called the “tunnel”, the companion chip to a CPU.  Intel provides integrated graphics only and is the PC graphics market share leader, pulled along by their CPU franchise.  AMD provides discrete cards and chips, formerly branded ATI, and also provides integrated solutions with their Fusion-based SOCs.  Nvidia serves the PC graphics market solely with discrete graphics cards and chips.

The potential losers here are discrete graphics.  It’s not that they are “banned”, but the Ultrabook specifications make it very challenging to integrate discrete graphics into designs.  The two challenges are height and power draw.  Adding a discrete chip while staying inside the 21mm restriction is difficult but not impossible. Two major players, Lenovo and Samsung, have already announced Ultrabooks with discrete graphics.  The announced Samsung Series 5 contains the AMD HD 7550M, and the Lenovo ThinkPad T430u will ship with the Nvidia GeForce 610M.

Discrete graphics from AMD and Nvidia will be challenged again when Intel unveils Ivy Bridge, which has Intel HD 4000 graphics supporting DirectX (DX) 11.  AMD and Nvidia managed to weather the risk through Intel’s DX 9 and DX 10 generations, and I expect a similar kind of battle here.  The ending could be different if AMD and Nvidia cannot effectively market the value of more gaming graphics or GPU-compute horsepower.

Processor Implications

By definition, Ultrabooks must contain Intel Core processors for Ultrabooks.

This means AMD, or for that matter, ARM-based processors from Nvidia, Qualcomm, or Texas Instruments cannot be inside an Ultrabook.  This requires a bit more examination, as it is regulated by the Ultrabook definition.  AMD at CES 2012 was discussing their “ultrathin” plans and will reportedly enter the market with their Trinity platform.  Press reports describe that AMD will leverage their graphics capability and also enable much lower price points than the $1,000 point many Intel-based Ultrabooks sell at.  I expect to hear more from AMD at their Financial Analyst Day next month.

ARM-based SOC suppliers Nvidia, Qualcomm and TI argue they already provide Ultrabook-like benefits with products like the Transformer Prime.  The Asus Transformer Prime is a 10.1″ convertible powered by Nvidia’s quad-core Tegra 3; it gets 18 hours of battery life, is super-thin at 16.3-18.7mm, and is instant-on, running Android 3.2 with a move to 4.0 planned.

Security Implications

Intel Anti-Theft Technology and Intel Identity Protection Technology come with the Ultrabook package.  OEMs aren’t required to support every feature, but many of the features are tied to specific solutions.  For instance, Intel Anti-Theft works with WinMagic, Computrace, and Symantec.  Don’t see your provider?  You are out of luck, and more than likely so is that company, unless they build to Intel’s spec and APIs.  Because many of the Ultrabook security features are hard-wired into the CPU and chipsets, by definition, this has potential implications for AMD.  The actual impact is yet to be seen because AMD has not played their “ultrathin” hand yet.

Marketing Implications

Intel owns all rights to the Ultrabook name.  With that, they have the right to enforce how people use it.  This, tied with the hundreds of millions of dollars that will be invested in Ultrabooks, will be very impactful to the ecosystem.  AMD cannot use the name Ultrabook without Intel’s express permission, something I doubt either party would explore.  If Intel can make Ultrabooks a household name and consumers then buy online, Ultrabooks have a built-in advantage.  BestBuy.com and even HP.com have a separate digital aisle specifically for Ultrabooks that won’t include anything from AMD.  Amazon, on the other hand, currently does not.  AMD is at the least risk at physical retail, where the “ultrathin” specifications could be evident.  Consumers will see OEM brand, design, thinness, weight and battery life.  Time will tell how powerful the Ultrabook brand will be at physical retail.

Like AMD, no one in the ARM ecosystem, like Nvidia, Qualcomm or TI, can use the Ultrabook brand for their Windows 8 clamshell designs either.  So that fancy Asus Transformer Prime?  It won’t be called an Ultrabook in ads or product reviews, nor will Asus receive any engineering or marketing funds for it.  Would Best Buy rather stock a margin-neutral, 13″ (hypothetical) $599 Asus Transformer Prime or a $699 Asus Ultrabook that comes with $50 of marketing money per unit?  You know the answer.

So Who Potentially Loses if Ultrabooks Win?

As you can hopefully see from the analysis above, many scenarios must play out before all the winners and losers can be tallied; there are no clear-cut answers.  This is a highly competitive market, and historically AMD and Nvidia know how to play the game well.  Qualcomm and Texas Instruments, by contrast, have little or no experience fighting Intel at its own game.

Spinning hard drives without flash are extinct in Ultrabooks, but adding flash to a hard drive to make a hybrid isn’t rocket science.  So even Western Digital cannot be counted out yet.

Net-net, there are no simple Ultrabook winner-loser answers but what is for certain is that Intel has shaken up a sleepy Windows PC ecosystem, and that’s a good thing for consumers and the PC industry.

How the Apple iTV is Accelerated by Samsung

(originally published at Forbes)

Back in September, I wrote an analysis on why Apple should build an HDTV.  The premise was that there are huge experiential issues Apple could solve and that they could strike a deal with the MSOs and satellite companies.  That was a big premise, but ironically, given what Samsung showed at CES, it’s apparent Samsung will accelerate the likelihood of Apple launching an “iTV”.

Samsung 2012 Smart TVs at CES

At this year’s CES, Samsung made a very impressive showing in consumer electronics.  They showed off an array of devices, from intelligent refrigerators to thin and energy-sipping OLED displays to phones to Smart TVs.  Two major themes came out of the HDTV launches: smart interfaces and apps, and cable and satellite content.

Smart Interaction, Kind Of

Samsung showed, in controlled demonstrations, their next generation of TV interfaces.  Samsung calls it Smart Interaction: the ability to control the TV through voice commands and far-field air gestures.  Voice commands and air gestures work in a similar fashion to Microsoft’s Kinect.  Get the TV’s attention with your voice and tell it to change channels, turn the volume up or down, go to apps, etc.  Air gestures allow the consumer to use their hand as a virtual mouse, clicking on an icon, or to use the hand as they would use a finger on a tablet, swiping or grabbing.

All of this is great in theory, but one of the challenges I saw at CES was that it just didn’t work well.  The demoer had a very hard time getting the system to work.  I talked to others at the show to see if this was an anomaly, but it wasn’t; Smart Interaction didn’t work well for those I talked to either.  This was a public demo in a controlled environment, so I expected a better showing, especially because everyone will compare it to Microsoft Kinect and Apple’s Siri.

To be clear, what Samsung showed was a glimpse into their 2012 product line and not shipping product, but it was still concerning because perfecting these interfaces takes years, not months.  Apple is proof of this: Siri, the voice-control mechanism on the iPhone 4S, is still in beta three months after public launch.

Samsung Cable and Satellite Content Deals

Samsung also launched an impressive amount of U.S. content deals with Comcast, DIRECTV, Verizon, and Time Warner Cable.  The vision is classic IP-TV, or removing the set top box and just plugging the Ethernet cable into the TV.  In theory, this provides the consumer with a much more integrated TV-content experience.

Comcast will provide its Xfinity services directly to a new Samsung TV without the need for an STB.  DIRECTV will give the new Samsung TVs access to live and stored content from the satellite provider.  Verizon said it will provide Samsung the Verizon FiOS TV app, which gives users access to 26 live TV channels and to VOD titles through Verizon Flex View.  Time Warner Cable and Samsung did show a demo of a user accessing stored content from a set top box in the home and said apps would be available “later this year.”  While these announcements are complex and not as simple as saying “all STB content is now available on the new 2012 Samsung TVs,” it was a step forward from last year, when cable companies weren’t all that excited about the IPTV premise in a world where they are just an icon next to Netflix and Hulu.

Samsung’s Smart Interaction Accelerates the Apple iTV

Samsung demonstrated two things at CES 2012 related to Smart TVs.  First, they showed how not to demo the next generation of TV user interface.  Messing with the TV interface is dangerous in that it is the primary pathway to content.  Users blame themselves when they lose the remote, but when they get an error with voice control or air gestures, they will blame Samsung and stop using it.  Then they will tell 10 friends about it.  Yes, it will improve over time, but from what I saw, there is a lot of improving to do.  This gives Apple the opening, with an iTV, to perfect the user interface.  Apple would undoubtedly leverage Siri for voice control and use local iOS devices to do it.  Leveraging the huge base of iPhones, iPads, and iPods allows voice control to be better, in that the microphone is 10 inches away from you, not 10 feet.  This helps block out more noise and generally could provide a much better experience.  I believe it will work much better than Siri given the “dictionary” is smaller.  The smaller the “dictionary”, which in this case will be content, the higher the likelihood it does what you want it to do.
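To make the smaller-dictionary point concrete, here is a toy sketch (not Apple’s or Samsung’s actual implementation, and the command list is made up) of why a constrained vocabulary helps: a noisy transcript only has to be snapped to the closest phrase in a small, known set, so there are far fewer candidates for it to collide with than in open dictation.

```python
# Illustrative only: snap a noisy recognizer transcript to the
# closest phrase in a small, known command "dictionary".
import difflib

def resolve(noisy_transcript, dictionary, cutoff=0.6):
    """Return the best dictionary match for a noisy transcript, or None."""
    matches = difflib.get_close_matches(noisy_transcript, dictionary,
                                        n=1, cutoff=cutoff)
    return matches[0] if matches else None

# A TV's content/command dictionary is small and known in advance.
tv_dictionary = ["play movie", "pause", "volume up", "volume down",
                 "channel guide"]

print(resolve("volume op", tv_dictionary))  # noisy input still resolves
```

With only a handful of distinct phrases, even a garbled transcript like “volume op” lands on the right command; with an open-ended dictation vocabulary, the same error rate produces far more wrong answers.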

Envision how this looks at a Best Buy.  You will have a Samsung TV on one side of the store and an Apple iTV in the Apple store-within-a-store.  The Samsung voice control may not demo well, based on what was shown at CES, and the Apple voice control will “just work.”  Net-net, by launching Smart Interaction before it’s ready, Samsung provides a clear and demonstrable pivot point for Apple to differentiate from.  This is similar to how Apple’s capacitive touch screen interface “just worked” when the iPhone first launched and other phones didn’t.

Samsung’s Content Deals Accelerate Apple iTV

The second thing Samsung demonstrated, and demonstrated well, was that they could cut deals with the cable and satellite guys.  This breakthrough is important because it shows that there is a deal to be done.  When a TV can blend cable, satellite, and OTT content, this is the “holy grail”.  Even better is when the user can have one program guide or one database to find the content they want with a precise, by-user recommendation engine like Netflix and Amazon.

By Samsung breaking some new ground with Comcast, DIRECTV, Verizon, and Time Warner Cable, this at least gives Apple the most concrete idea yet of what it would take for them to do a deal.  Yes, Apple has been trying to cut deals with them forever, but Apple certainly doesn’t want Samsung to get too entrenched, as that could dull some of the differentiation of an Apple iTV.  Just as the iPod and iTunes got credit for aligning the music industry, Apple wants to get credit for aligning the cable and satellite providers and, in turn, delivering a great experience to users.

While Apple was not at CES 2012, what Samsung showed there will help accelerate the development and launch of an Apple iTV.  Samsung has provided Apple with an experience to pivot and differentiate off of, and has helped provide a reference point for Apple’s own deals with the cable and satellite companies.  Samsung has helped accelerate Apple’s iTV.  Ironic, yes?


How Sony can beat Samsung and LG on Smart TV Interfaces

As I wrote last week, Samsung and LG are following Microsoft’s lead in future interfaces for the living room. Both Samsung and LG showed off future voice control and in Samsung’s case, far-field air gestures. Given what Samsung and LG showed at CES, I believe that Sony could actually beat both of them for ease of interaction and satisfaction.

HCI Matters
I have been researching HCI in one way or another for over 20 years as an OEM, technologist, and now analyst. I’ve conducted in-context, in-home testing and have sat behind the glass watching consumers struggle with, and in many cases breeze through, intuitive tasks. Human Computer Interface (HCI) is just the fancy trade name for how humans interact with electronic devices. Don’t be confused by the word “computer”, as it is also used for TVs, set top boxes and even remote controls.

Microsoft recently started using the term “natural user interface”, and many in the industry have been using this term a lot lately. Whether it’s HCI or NUI doesn’t matter. What does matter is its fundamental game-changing impact on markets, brands and products. Look no further than the iPhone with its direct-touch model and Microsoft Kinect with far-field air gestures and voice control. I have been very critical of Siri’s quality but am confident Apple will wring out those issues over time.

At CES 2012 last week, Samsung, Sony, and LG showed three different approaches to advanced TV user interfaces, or HCI.

Samsung
Samsung took the riskiest approach, integrating a camera and microphone array into each Smart TV. Samsung Smart Interaction can do far-field air gestures and voice control. The CES demo I saw did not go well at all; speech had to be repeated multiple times and the system performed incorrect functions. The air gestures performed even more poorly; they were slow and misfired often. The demoer kept repeating that this feature was optional and consumers could fall back to a standard remote. While I expect Smart Interaction to improve before shipment, there’s only so much that can be done.

LG
LG used their Magic Motion Remote for voice commands and search and as a virtual mouse pointer. The mouse pointer worked well for icons, but the mouse for keyboard functions didn’t do well at all. Imagine clicking, button by button, “r-e-v-e-n-g-e”. Yes, that hard. Voice command search worked better than Samsung’s, but not as well as Siri, which has its own issues. It was smart to place the mic on the remote for now, as it is closer to the user and the system knows who to listen to.

Sony
Sony, ironically, took the safe route, pairing smart TVs with a remote that reminded me of the Boxee Box remote, which has a full keypad on one side. Sony implemented a QWERTY keyboard on one side and a trackpad on the other, which can be used with a thumb, similar to a smartphone. This approach was reliable in the demo, and consumers will still be using it long after they stop using the Samsung and LG approaches. The Sony remote has a microphone too, which I believe will be enabled for smart TV control once voice recognition improves in reliability. Today the microphone works with a Blu-ray player with a limited command dictionary, a positive for speech control. This is similar to Microsoft Kinect, where you “say what you see”.


I believe that Sony will win the 2012 smart TV interface battle due to simplicity. Consumers will be much happier with this more straightforward and reliable approach. I expect Sony to add voice control and far-field gestures once the technology works the way it should. Sony hopes that consumers will thank them too, as they have thanked Apple for shipping fully completed products. Samsung and LG’s latest interaction models, as demonstrated at CES, are not ready to be unleashed on consumers, as they are clearly at alpha or beta stage. I want to stress that winning the interface battle doesn’t mean winning the war. Apple, your move.

Samsung & LG Validate Microsoft’s Living Room Interaction Model

Microsoft launched Kinect back in November 2010 in a move to change the man-to-machine interface between the consumer and their living room content.  While incredibly risky, the gamble paid off in the fastest-selling consumer device ever.  I saw the potential after analyzing the usage models and technology for a few months after the Kinect launch, and predicted that all DMAs would eventually have the capability.

The Kinect launch sent shock waves through the industry because the titans of the living room like Sony, Samsung, and Toshiba hadn’t even gotten close to duplicating, let alone leading with, voice and air-gesture techniques.  With Samsung and LG announcing future TVs with this capability at CES, Microsoft’s living room interaction strategy has officially been affirmed by the CE industry.

Samsung “Smart Interaction”

Samsung launched what it calls “Smart Interaction”, which allows users to control and interact with their HDTVs.  Smart Interaction allows the user to control the TV with their voice, with air gestures, and passively with their face.  The voice and air gestures operate in a manner similar to Microsoft’s, in that pre-defined gestures exist for different interactions.  For instance, users can select an item by grabbing at it, which signifies clicking an icon as a remote would.  Facial recognition essentially “logs you in” to your profile like a PC would, giving you your personal TV settings as well as the virtual remote.

A Step Further Than Microsoft ?

Samsung has one-upped Microsoft on one indicator, at least publicly: their application development model.  Samsung has broadly opened their APIs via an SDK, which could pull in tens of thousands of developers.  If this gains traction, we could see a future challenge arise where platforms fight over the number of apps, the same way Apple initially trumped everyone in smartphones.  The initial iPhone lure was its design, but also the apps, the hundreds of thousands of apps that were developed.  It made Google Android look very weak initially until it caught up, still makes BlackBerry and Windows Phone appear weaker, and it can be argued it was the death blow to HP’s webOS.  I believe that Microsoft is gearing up for a major “opening” of the Kinect ecosystem in the Windows 8 timeframe, where Windows 8 Metro apps can run inside the Kinect environment.

Challenges for Samsung and LG

Advanced HCI like voice and air-gesture control is a monumental undertaking and risk.  Changing anything that stands between a CE user and the content is risky in that if it’s not perfect, and I mean perfect, users will stop using it.  Look at version 1 of Apple’s Siri.  Everyone who bought the phone tried it and most stopped using it because it wasn’t reliable or consistent.  Microsoft Kinect has many, many contingencies to work well including standing in a specific “zone” to get the best air gestures to work correctly.  Voice control only works in certain modes, not all interactions.

The fallback Apple has is that users don’t have to use Siri; it’s an option, and it can be very personal in that most use Siri when others aren’t looking or listening.  The Kinect fallback is a painful one, in that you wasted that cool-looking $149 peripheral.  Similarly, Samsung “Smart Interaction” users can fall back to the remote, and most will initially, until it’s perfected.

There are meaningful differences in the consumer audiences of Siri, Kinect, and Samsung “Smart Interaction”.  I argue that Siri and Kinect users are “pathfinders” and “explorers” in that they enjoy the challenge of trying new things.  The traditional HDTV buyer doesn’t want any pathfinding or exploring; they want to watch content, and if they’re feeling adventurous, they’ll go out on a limb and check sports scores.  This means that Samsung’s customers won’t appreciate anything that just doesn’t work, and won’t admire the “good try” of a Siri-style beta product.

One often-overlooked challenge in this space is content, or the amount of content you can actually control with voice and air gestures.  Over-the-top services like Netflix and Hulu are fine if the app is resident in the TV, but what if you have a cable or satellite box, as most households do?  What if you want to PVR something, or want to play specific content that was saved on it?  This is solvable if the TV has a perfect channel guide for the STB and service provider, with IR-blasting capabilities to talk to it.  That didn’t work out too well for Google TV V1, its end users or its partners.

This is the Future, Embrace It

The CE industry won’t get this right initially with a broad base of consumers but that won’t kill the interaction model. Hardware and software developers will keep improving until it finally does, and it truly becomes natural, consistent, and reliable. At some point in the very near future, most consumers will be able to control their HDTVs with their voice and air gestures.  Many won’t want to do this, particularly those who are tech-phobic or late adopters.

In terms of industry investment, the positive is that other devices like phones, tablets, PCs and even washing machines leverage the same interactions and technologies, so there is a lot of shared investment and shared risk.  The biggest question is, will any company other than Microsoft lead the future of the living room?  Your move, Apple.

How Intel Could Achieve the 40% Consumer Ultrabook Target in 2012

There has been a lot of industry skepticism since Intel predicted at Computex Taipei 2011 that Ultrabooks would account for 40% of consumer portable sales by the end of 2012. That included skepticism from me as well, and I continue to have that skepticism. Rather than dive into that discussion though, I think it’s more important and productive to examine how Intel could conceivably achieve that goal.

What Intel is Actually Predicting

It’s important to understand what Intel meant when they made their prediction. First, the prediction is for the consumer market, not the slower-moving SMB, government, or enterprise markets. Also, the prediction is not for the entire year; it is for the end of December 2012. That is, 40% of consumer notebooks sold at the end of December 2012 would need to be Ultrabooks. This makes a huge difference when evaluating the probability of this actually occurring.

So what would it take for 40% of all consumer notebook sales to be Ultrabooks by the end of 2012?

Make Ultrabooks Look New, Relevant, and Sexy

Intel and their ecosystem need to make Ultrabooks perceived as new, relevant and sexy. By relevant I mean making the direct connection between what the Ultrabook delivers and what the consumer thinks they need. Sexy is, well, sexy, like the MacBook Air. The ecosystem must make a connection with:

  • Thin and light – this is easier because Apple has blazed the trail and it is evident on the retail shelf.
  • Fast startup – this is somewhat straightforward and a well-communicated consumer pain point with Windows today.
  • Secure – this is the most difficult in that it is always hard to market a negative. It’s like life insurance: it sounds good, people say it’s important, then they don’t buy it. I think Intel would be much more successful taking the same base technology and enabling exclusive consumer content or speeding up the online checkout or login process.
  • Performance – this is difficult to market in that performance no longer has a comparable metric, and chip makers appear to have stopped marketing why it is even important.
  • Convertibles – I am a big fan of future convertibles given the right design and OS. If OEMs can put together a classy, ~18mm design, it could very well motivate consumers to delay a tablet purchase. This will not work prior to Windows 8’s arrival, though, because you really need Metro for good touch.

Probably the biggest impediment here is the “sexy” piece. Sexy is the “X” factor here. It’s cool to have an Apple MacBook Air. It isn’t cool yet to have an Ultrabook. A lot of that $300M Ultrabook investment fund must pay for the Ultrabook positioning and the re-positioning of anything Windows. This is a tough task, to say the least.

Steal Some Apple MacBook Air Market Share

Intel and their ecosystem, to hit the 40% target, will need to steal some of Apple’s market share. There is no way around this unless they want to pull the dreaded “price lever”. Apple “owns” 90+% of the premium notebook market today, and because Windows OEMs, and Intel for that matter, aren’t motivated to trash pricing now, they will need to steal some of Apple’s share. This will be a tough one, a real tough one, particularly because Intel shoots itself in the foot short-term by going aggressively after it, given they are inside every MacBook Air. So OEMs will need to take this one on their own, using Intel marketing funds as a weapon. This will be especially difficult given that Apple’s positioning isn’t going to be erased by anything short-term, and Windows OEMs haven’t been able to penetrate this segment for years. Remember the Dell Adamo? Sexy Windows 8 convertible designs could very well be the magic pill that helps steal share from Apple.

Lower Price Points

This is the last lever anyone wants to pull, as it destroys positioning. Depending on which data service you look at, the average consumer notebook ASP (average selling price) is between $600-700. This seems high, I know, when you look at what is being sold at local retailers, but remember that this includes online sales and Apple, which has a higher ASP. Ultrabooks range from around $799 to $1,299, excluding Apple. This is well above the prices they would need to hit to achieve the 40% goal. There are two ways to lower price: lower the cost or lower margins. I believe you will see a little bit of both.

As volumes increase, there will be immediate cost savings in expensive mechanicals like aluminum, plastic, and composites. The custom cooling solutions required to cool thin 16-21mm chassis are very expensive. Tooling and design cost can be amortized over greater volumes to decrease the cost per unit. Intel Ivy Bridge, available in April 2012, will provide a shrink from 32nm to 22nm, which would theoretically allow a lower price point at the same performance point, although I am sure Intel isn’t leading with that promise. Intel would much rather provide large marketing subsidies and pay NRE (non-recurring engineering) costs to retailers and OEMs to design and promote the Ultrabook category. SSD is a tricky one to predict given spinning hard drive supply issues. Spinning hard drive price increases allow SSD makers to increase prices, which doesn’t bode well for Ultrabook BOM costs in the short term.

Leverage Windows 8 Effect

The expected Windows 8 launch for the holiday of 2012 could help the Ultrabook cause on many fronts. First, it may give consumers a reason to consider buying a new laptop or notebook. I fully expect consumers to delay purchases and wait for Windows 8 to arrive. This could create a bubble in Q4 that, again, helps achieve the 40% goal.

Perceived Momentum

Finally, Ultrabooks need to get off to a solid start in 2012. Consumer influencers and the rest of the ecosystem need to perceive Ultrabooks as a success in 1H/2012 for them to “double down” in 2H/2012. CES will be one tactic to do this, where I expect to see hundreds of designs on display to demonstrate OEM acceptance to the press, analysts, and retail partners. Intel’s Ivy Bridge will give another boost in April, followed by the Windows 8 launch. Retailers cannot be stuck with excess inventory and cannot make drastic price cuts that would only de-position the category. Currently there is skepticism about the entire Ultrabook value proposition and the price points it can command, so there is a lot of work to be done.

Will Ultrabooks Achieve the 40% Target by End of 2012?

While this analysis is about what it would take to achieve the goal, I must weigh in on what I think will happen. I like to bucket these kinds of things into “possible” and “probable”. I believe that if the Ultrabook ecosystem could accomplish everything outlined above, Ultrabooks could hit 40% of consumer notebook sales by the end of 2012. So it is possible, BUT I don’t see it as probable, primarily due to the low price points that would need to be hit. There just isn’t enough time to reposition the Windows notebook as premium and either raise price points of the Windows notebook category or steal Apple market share.

Voice Control Will Disrupt Living Room Electronics

Speculating on what Apple will do next now seems to be routine in high-tech journalism and social media. The latest and greatest rumor is that Apple will develop an HDTV set. I wrote back in September that Apple should build a TV given the lousy experience and Apple’s ability to fix big user challenges. What hasn’t been talked about much is why voice command and control makes so much sense in home electronics and why it will dominate the living room. It’s all about the content.

History of U.S. TV Content


For many growing up in the U.S., there were 4-5 stations on TV: ABC, NBC, CBS, PBS and an independent UHF channel. If you ever wanted to know what was on, you just looked in the daily newspaper that was dropped off every morning on the front porch. Then around the early 80’s cable started rolling out, and TV moved to around 10-20 channels, including ESPN, MTV, CNN, and HBO. The next step was an explosion in channels brought by analog cable, digital cable and satellite. My provider, Time Warner, offers 512 different channels. Add to that the unlimited number of over-the-top “channels” or titles available on Netflix, Boxee, and the like, and you can easily see the challenge.

The Consumer Problem

With an unlimited amount of things to watch, record, and interact with, finding what you want to watch becomes a huge issue. Paper guides are worthless, and the integrated TV guides from cable or satellite boxes are slow and cumbersome. Given the flat, long-tail character of the choices, multi-variate and unstructured “search” is the answer for finding the right content. That is, directories aren’t the answer. The question then becomes: what’s the best way to search?

The Right Kind of Search

If search is the answer, what kind of search? The answer lies in how people would want to find something. Consumers have many ways they look for things.

Some like to do surgical searching, where they know exactly what they want. They ask for “The Matrix Revolutions.” Others have a concept or idea of what they are looking for, but not exactly: “find the car movie with Will Ferrell and John C. Reilly”, and back come a few movies like Step Brothers and Talladega Nights. Others may search by an unlimited number of “mental genres”, those which are created by the user. They may ask for “all Emmy Award-winning movies between 2005 and 2010”. You get the point; the consumer is best served with answers to natural language search, and then the call to action is to get that person to the content immediately.
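The three search styles above can be sketched in code. This is purely illustrative: the three-title catalog is made up, and a real system would first have to parse the free-form spoken query into these structured fields before searching.

```python
# A toy "entertainment database" and the three query styles:
# surgical (exact title), conceptual (attributes), and mental genre
# (user-defined ranges). Hypothetical data, for illustration only.
CATALOG = [
    {"title": "The Matrix Revolutions", "actors": ["Keanu Reeves"], "year": 2003},
    {"title": "Step Brothers", "actors": ["Will Ferrell", "John C. Reilly"], "year": 2008},
    {"title": "Talladega Nights", "actors": ["Will Ferrell", "John C. Reilly"], "year": 2006},
]

def search(title=None, actors=None, year_range=None):
    """Filter the catalog by whichever fields the parsed query provides."""
    results = CATALOG
    if title:
        results = [m for m in results if m["title"].lower() == title.lower()]
    if actors:
        results = [m for m in results if all(a in m["actors"] for a in actors)]
    if year_range:
        lo, hi = year_range
        results = [m for m in results if lo <= m["year"] <= hi]
    return [m["title"] for m in results]

print(search(title="the matrix revolutions"))               # surgical
print(search(actors=["Will Ferrell", "John C. Reilly"]))    # conceptual
print(search(year_range=(2005, 2010)))                      # mental genre
```

The point of the sketch is that once the speech front end produces structured fields, the search itself is straightforward; the hard part is the natural language understanding in front of it.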

Natural Language Voice Search and Control

The answer to the content search challenge is natural language voice search and control. That’s a mouthful, but basically: tell the TV what you want to watch and it guides you there from thousands of entry points. Two popular implementations of voice search exist today. There are others, like Dragon NaturallySpeaking, but those are niche commercial plays.

Microsoft Kinect

Microsoft has done more to enhance the living room than any other company, including Apple, Roku, Boxee and Sony. Microsoft is a leader in IPTV and the innovation leader in entertainment game consoles. With Kinect, a user can use Bing to search for and find content. It works well in specific circumstances and at certain points in the experience, but it needs a lot of improvement. Bing needs to find content anywhere in the menu structure, not just at the top level. It also needs to improve its ability to work well in a living room full of viewers. Its beam-forming is awesome but needs to get better to the point that it serves as a virtual remote.
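A quick sketch of what “finding content anywhere in the menu structure, not just at the top level” means in practice: instead of matching a spoken query only against the visible top-level entries, walk the whole menu tree. The menu below is hypothetical, not Xbox’s actual structure.

```python
# Illustrative deep search over a nested menu tree. Leaf entries are
# None; submenus are dicts. Returns the path to the first match.
MENU = {
    "Apps": {"Netflix": {"Comedies": {"Step Brothers": None}}},
    "Settings": {"Audio": None},
    "Live TV": None,
}

def find(menu, query, path=()):
    """Depth-first search for a menu entry whose name contains the query."""
    for name, children in menu.items():
        here = path + (name,)
        if query.lower() in name.lower():
            return here
        if isinstance(children, dict):
            hit = find(children, query, here)
            if hit:
                return hit
    return None

print(find(MENU, "step brothers"))
```

A top-level-only search would miss “Step Brothers” entirely; the tree walk returns the full path (`Apps → Netflix → Comedies → Step Brothers`), which is also exactly what the UI needs in order to jump the user straight to the content.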

Finally, it needs to support natural language search and the ability to narrow down the choices. I have full confidence that Microsoft will add these features, but a big question is the hardware. The hardware is seven years old. Software gymnastics and offloading some processing to the Kinect module have been brilliant, but at some point, hardware runs out of gas.

Apple Siri

While certainly not the first to bring voice command and dictation to phones, Apple was the first to bring natural language to the phone. The problem with the current Siri is that it’s not connected to an entertainment database, its logic isn’t there to narrow down choices, and it isn’t connected to a TV, so that once you find what you are looking for, it can’t immediately switch the TV for you.

As I wrote in September (before the iPhone 4S and Siri), Apple “could master controlling the TV’s content via voice primarily.” If Apple were to build a TV, they could hypothetically leverage iPhones, iPads, and iPods to improve the voice results. While Kinect has a full microphone array and operates best at 6-8 feet, an iPhone microphone could be 6 inches away, which would certainly help with the “who owns the remote” problem and with voice recognition. Even better would be if multiple iOS devices could leverage each other’s sensors. That would be powerful.

While I am skeptical of driving voice control and cognition from the cloud, Apple, if they built a TV, could do more local processing and increase the speed of results. Anyone who has used Siri extensively knows what I am talking about here. The first few times Siri for TV fails to bring back results or says “system unavailable”, it gets shelved and never used again by many in the household. Part of the entertainment database needs to be local until the cloud can be 99% accurate.

What about Sony, Samsung, LG, and Toshiba?

I believe that all major CE manufacturers are working on advanced HCI techniques to control CE devices with voice and air gestures. The big question is, do they have the IP and time to “perfect” the interface before Apple and Microsoft dominate the space? There are two parts to natural language control: the “what did they say” and the “what did they mean”. Apple licenses the first part from Nuance, but the back end is Siri. Competitors could license the Nuance front end, but would need to buy or build the “what did they mean” part.

Now that HDTV sales are slowing down, it is even harder to differentiate between HDTVs. Consumers haven’t been willing to spend more for 3D but have been willing to spend more for LED and Smart TV. Once every HDTV is LED, 3D and “smart”, the key differentiator could become voice and air gestures. If Sony, Samsung, LG and Toshiba aren’t prepared, their world could change dramatically and Microsoft and Apple could have the edge.

Apple iCloud Shortcomings Provide a Competitive Opportunity

Apple iCloud launched two months ago to huge fanfare and punditry. It’s no surprise given the huge future opportunity with the cloud. Also, it was a big deal for Apple given that their past online endeavors had been so unsuccessful that even Steve Jobs issued one of the few apologies Apple had ever made. In that case, it was about MobileMe. Two months in, Apple has done an admirable job, but it’s clear that if they don’t plug some holes, the competition has the ability to swoop in and deliver a much more user-centric, comprehensive solution.

iCloud Problem #1: Lack of video sync
Unlike photos with Photostream, iCloud will not sync videos taken on an iPhone to a consumer’s iPad, PC, Mac, or Apple TV. So that last-minute winning basketball shot… you are out of luck. Lose the video? Oops. With advanced and mainstream users already embracing video, this is a huge hole that will be filled by someone. Bandwidth isn’t an excuse, because there’s certainly enough of that over WiFi at home or the office. This is a hole that Google could easily fill, since they already handle video via YouTube. And with Apple owning both ends of the pipeline, they could even develop a proprietary CODEC that shrinks and expands the files, minimizing bandwidth even over WiFi. Microsoft certainly has the capability given that they own the PC market, and with Live Mesh could provide a solution that never touches an external server.

iCloud Problem #2: Fractured productivity pipeline
Unlike photos, iCloud requires significant user intervention to sync documents, presentations, and spreadsheets between iOS devices and PCs/Macs. If a user creates a document on an iPad and wants to pull it into Pages for Mac, the user is required to download it from iCloud.com. After changes are made on the Mac, the user needs to drop it back into iCloud.com. It seems like syncing a documents folder on the Mac and PC would have been a whole lot easier. Again, an opportunity for Google Docs and Office 365 from Microsoft.

iCloud Problem #3: Lack of on-line photos
Unlike Google Picasaweb and Yahoo Flickr, iCloud provides no way to go online and view, download, and share pictures from a web browser. This is a very basic feature that is surprising in its absence. Microsoft’s Live Mesh and Windows Live services can easily fill this gap.

iCloud Problem #4: PDFs are books, not documents
For most consumers, PDFs are intended to be uneditable documents. They are so pervasive that even global governments use them as standard document formats. How does iCloud treat them? As books, of course. In Apple’s war with Adobe, they have crossed the line and sacrificed the consumer in the process. This is easily addressed by Google and Microsoft.

Filling the Gap
Many companies can fill the gap opened by iCloud’s lack of comprehensiveness and its slow timing. They fall into two categories: niche plays and comprehensive plays.

From a comprehensive standpoint, there are three options: Google, Microsoft, and an OEM. The Google and Microsoft solutions are straightforward, but the OEM play is a bit complex. Google and Microsoft can build cross-platform smartphone, tablet and desktop apps that keep everything in sync. Google already has many desktop apps, with Picasa 3 already filling the comprehensive photo sync role to Picasa Web. Microsoft already has a comprehensive solution with Live Mesh and Office 365, but needs to provide more robust smartphone and tablet integration. OEMs like HP, Sony and Dell could either build their own infrastructure or partner with companies like Box, Dropbox, or SugarSync to fulfill that need. They could also partner with Microsoft or Google, but would sacrifice some level of integration and control.

The niche players are in the market today: companies like SugarSync, Box, Dropbox and even Evernote. Essentially, a consumer looking for a specific, non-integrated solution can look to these players today to provide cloud sync. While they aren’t always integrated into an end-to-end pipeline into the apps, they provide a solution today, and maybe even tomorrow, for those who don’t want to get locked into a single ecosystem. Most sophisticated and experienced users will actually prefer this approach, as they understand the complexity and see the downside to being locked into an app environment. Probably many reading this blog, in fact.

Microsoft, Google, and Independents Fill the Gap
I believe Apple is rolling out online, integrated services as fast as it can, prioritizing what it believes consumers will want first. Services haven’t been Apple’s core competency, as Ping and MobileMe highlight. Apple is on a slow pace, which will let Microsoft and Google edge into a market-leading position, regardless of Apple’s prowess in phones and tablets. Microsoft will leverage their ~95% share in PCs, and Google will leverage their market share advantage in smartphones and search. The big question is, can Apple accelerate in an area rife with competition that isn’t its core competency?

Let’s Stop Classifying the iPad as a PC

Last month, Canalys reported that “Apple is on track to become leading global PC vendor”. That would be a tremendous accomplishment, given that no reports had Apple in the top 5 at the end of 2010. How will Apple accomplish this? Well, according to Canalys, they will do it with iPads. You know, a “PC” without physical keyboards, trackpads, or mice. This re-classification got me thinking, what is a PC and how wide does this definition go?

I must point out very early that I am not debating here whether the iPad can duplicate, replace or augment certain usage models a PC can handle. I know first-hand this is true, because I use my iPad now in circumstances where two years ago I would have only used my PC. A few examples are on airplane trips and at Starbucks. I am not alone. Respected journalist Harry McCracken wrote a piece on Technologizer entitled “How the iPad 2 Became My Favorite Computer”. That is NOT what I am asking. I am asking about the industry classification of the device.

I’d like to propose a few tests and run a few products through to see what filters out. A PC today must have or be:

  • Electronic: a PC must run off some kind of electric power, AC or DC.
  • Operating system: a PC must run something above BIOS or machine code
  • Personal: the PC is designed for one or a few people, not many. In other words, it’s not a multi-user server. (Clarification: It could serve many people, but isn’t classified as a server.)
  • Portable: a PC can be moved
  • Apps: a PC must be able to run an application above the operating system level
  • Storage: a PC must be able to store personal data, settings or content
  • Customizable: a user can change the PC’s settings
  • Input: a user can input data so that the PC will react to commands
  • Display output: the PC shows data via some visible display technology

So, this seems fair, doesn’t it? Well, what products then are “personal computers” with this definition?

(Image: example products run through these tests, including e-readers, consoles, watches and refrigerators.)

Is this fair? Some of the items above even have generally accepted industry designations like e-readers, consoles, watches and refrigerators. Well, so does the iPad. IDC, Gartner, and Forrester already designated the iPad a “tablet”, so it seems there’s precedent.

We all know the iPad isn’t a computer; it’s a tablet, so why do we all keep pretending? It is fun, I know, even I’m amused when writing this. So what is a PC?

I believe a PC has all the nine characteristics at the top of the page but with the following conditions:

  • display greater than 5″
  • physical keyboard
  • physical mouse or trackpad
  • light enough to be picked up by an average adult
  • open application environment where users can load and side-load applications without having to jail-break

While there will always be exceptions to the rule and definitions will evolve over time, I suggest this definition could help the industry to simplify and better educate.
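For fun, the two-stage definition above can be sketched as a simple filter. This is purely a toy illustration; the attribute names and example devices are my own invention, not any analyst firm's taxonomy.

```python
# Toy sketch of the two-stage "is it a PC?" test described above.
# Attribute names and example devices are hypothetical.

BASE_TESTS = ["electronic", "operating_system", "personal", "portable",
              "apps", "storage", "customizable", "input", "display_output"]

def is_pc(device: dict) -> bool:
    """Apply the nine base characteristics, then the narrowing conditions."""
    if not all(device.get(t, False) for t in BASE_TESTS):
        return False
    return (device.get("display_inches", 0) > 5
            and device.get("physical_keyboard", False)
            and device.get("physical_pointer", False)   # mouse or trackpad
            and device.get("liftable", False)            # average adult can pick it up
            and device.get("open_app_loading", False))   # side-load without jail-breaking

laptop = {t: True for t in BASE_TESTS}
laptop.update(display_inches=13, physical_keyboard=True,
              physical_pointer=True, liftable=True, open_app_loading=True)

tablet = {t: True for t in BASE_TESTS}   # passes all nine base tests...
tablet.update(display_inches=9.7, physical_keyboard=False,
              physical_pointer=False, liftable=True, open_app_loading=False)

print(is_pc(laptop))  # True
print(is_pc(tablet))  # False: fails the keyboard, pointer and open-loading conditions
```

Note how the tablet clears all nine base tests, which is exactly why the narrowing conditions are needed to keep it out of the PC bucket.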

Does any of this classification debate really matter?  While I agree with Tech.pinions colleague Ben Bajarin when he says, “Consumers don’t care nor think about it.  They just hire products to get jobs done”, I do believe it matters a lot.  Companies, investors, developers and consumers are influenced by classifications.  Classifications get used to describe market share, which then impacts financial analysts, which then could impact the stock price of the company. This is also a factor that comes into play with technology investments. “Should I develop this piece of technology for the PC or tablet market?”

My final thoughts are on the future.  The way technology is headed, calling the iPad a PC will set a precedent that will only lead to even more confusion and misinformation.  I believe there’s a scenario where the smartphone has a chance to dethrone the PC.  If people change their usage models and start adopting it widely, should we re-classify the smartphone as a PC in a few years?   If the answer is “yes”, then let’s also be prepared in 2015 to announce, “Timex could become the leading PC maker in 2016″.  Let’s stop classifying the iPad as a PC; it only serves to confuse people.

I’d love to hear your thoughts. Do you believe an iPad should be classified as a PC?

Also see: Who Really Needs a PC Anyway?


Windows 8 Desktop on ARM Decision Driven by Phones and Consoles

There has been a lot written about the possibility of Microsoft not supporting the Windows 8 Desktop environment on the ARM architecture. If true, this could impact Microsoft, ARM, and ARM’s licensees; Texas Instruments, NVIDIA, and Qualcomm are in the best position to challenge the high end of the ARM stack and are publicly supported by Microsoft.  One question that hasn’t been explored is, why would Microsoft even consider something like this? It’s actually quite simple and makes a lot of sense given the position they’re in; it’s all about risk-return and the future of phones and living room consoles.

The Threat to Microsoft

The real short- and mid-term threat isn’t Macs stealing significant Windows share from Microsoft; it’s all about the Apple iPad and iOS.  It could also be a little about Android, but so far, Android has only seen tablet success in platforms that pose little risk to the PC, like the Amazon Kindle Fire.  Market-wise, the short-term threat is about consumers, too, not business.  Businesses work in terms of years, not months. The reality is that while long term the phone could disrupt the business PC, short term it won’t impact where Microsoft makes their profits today. Businesses, short term, won’t buy three devices for their employees, and therefore tablets will most likely get squeezed there.  Business employees first need a PC, then a smartphone, and maybe, for a few, a tablet.  There could be exceptions, of course, primarily in verticals like healthcare, retail and transportation.

What About Convertibles?

One wild card is business convertibles.  Windows 8 has the best chance here, given Microsoft’s ownership of business and assuming Intel or AMD can deliver custom SOCs with low enough power envelopes, thermal solutions and proper packaging for thin designs.  The thinking here is that if business wants a convertible, they’ll also want the Windows 8 Desktop and more than likely backward compatibility, something only X86 can provide.  So net-net, Microsoft is covered here if Intel and AMD can deliver.

Focus is Consumer and Metro Apps

So the focus for Microsoft is clearly consumer tablets, and Microsoft needs a ton of developers writing high quality Metro apps to compete in the space.  Metro is clearly the primary Windows 8 tablet interface and Desktop is secondary, as it’s an app.  Developers don’t have money or time to burn, so most likely they will have to choose between writing a Metro app or rewriting or recompiling their desktop app to work on both ARM and X86 (Intel and AMD).  It’s not just about development; it’s expensive for devs to test and validate, too.  In many cases it’s more expensive to test and validate than it is to actually develop the app.  Strategically, it then could make sense for Microsoft to push development of Metro apps and, by eliminating the Desktop-on-ARM option, make the developer’s decision easier.

Strategically, It’s About Phones and the Living Room in the End

Windows 8, Windows Phone 7, and XBOX development environments are currently related but not identical.  I would expect that down the road we will see an environment where, for most apps that don’t need to touch the hardware closely, you write once and deploy onto a Microsoft phone, tablet, PC and XBOX.  The unifier here is Metro, so getting developers on Metro is vitally important.

If Microsoft needed to improve the chances that developers will swarm to Metro, and did it by taking a risk and limiting variables, say by eliminating ARM desktop support, it makes perfect sense.

A Scenario Where Smartphones Take Down the PC

If you’ve done any long-term strategic planning, you know there are few absolutes but very many scenarios. Tech history shows that even disruptive innovations take time to roll out, and many scenarios existed that could have gone both ways. Blockbuster saw digital media coming, and I will bet they had scenarios modeling varying levels of digital video acceptance, showing what would happen to them if they didn’t lead in digital media or on lowest price. What if the publishers had stuck to their earlier guns and slowed digital rollouts? That could have given Blockbuster breathing room to develop more, and they might still be around in their prior form.

There is another scenario rolling out that is very interesting in that it could disrupt a giant, 500M-unit market: the scenario that has the smartphone “taking out” the personal computer.

I’d like to take a look at a few variables that could increase the likelihood of this happening. Remember, it’s not about absolutes, but about different scenarios and their chance of happening. Also, I’m not saying absolutely it will happen, but it is a viable scenario.

The New Personal
It all starts with the end user and making choices. If posed with the question, “if you had to choose between your phone or the PC, which one would you pick?”, sure, most want both, but making them choose makes them prioritize, and most would pick the phone. Why? One reason is that it’s so personal. People take it to bed, the bathroom, their pockets, the dinner table. It knows where we are, what we’re doing, who we’re with, can communicate how we feel, etc. There are even reports that people would rather starve or refrain from sex than separate from their phone. Net-net, the phone is more personal, and that is one variable that could, scenario speaking, accelerate the erosion and “take down” of the PC.

Good Enough Computing
Setting input and output aside for a second, the smartphone is pretty good, or good enough, for most email, web, social media, and light content creation. The web has actually “dumbed down” a bit to make this possible, and apps have helped almost as much. Light content creation is writing email, editing photos, creating social media posts, and even taking notes. The big usage model exceptions are workstations and extreme PC gaming, even though these are starting to be processed in the cloud. Most everything else, scenario speaking, can be processed in the cloud.

Modular Designs
The iPhone 4S and the iPad 2 can already wirelessly mirror what is on the phone or tablet to the next best display. Most Android devices and even QNX can work with a full-size wireless keyboard and mouse. Extrapolate that ahead three to five years, with quad core general purpose processing, today’s console graphics capability, and even better wireless display technologies, and it doesn’t seem, scenario speaking, that there will be much the user cannot do.

For “desktop” use, users will connect to full-size displays at high resolutions with full-size keyboards, trackpads, and mice. Apple Siri, Microsoft Tellme and Google Voice Actions voice interfaces will be greatly enhanced in future iterations and can serve as the secondary input. Scenario speaking, laptops could be wireless “shells” that leverage the phone’s processing power, graphics, memory, storage and wireless plan. The shells would cost a lot less than a full-fledged laptop and have the convenience that the content, apps, and wireless plan are all in one place.

One potential modular wild card is flexible displays. While these have been demonstrated at every CES for over a decade, they appear to be getting very close to reality. While details are hard to come by, Samsung indicated that they will be shipping flexible displays in 2012. This could mean in phones by 2012, or shipped to OEM customers in 2012 for phones in 2013. HP has been very active as well with their flexible display technology, in alliance with ASU, the US Army, DuPont, and E-Ink. HP is positioning their technology as great not only for phones and watches, but also for larger POS displays, interactive advertising, and even the sides of buildings. As it relates to smartphone modularity, think about “unfolding” a 10″ display from your 3″ device. That changes everything.

Potential Winners and Losers in Scenario
There are obvious winners and losers in this scenario. The big winners will be those who can monetize the smartphone or thin client and the cloud. Losers will be those who are stuck in the old model of computing, scenario speaking. If you’re one of those companies, I’d be rethinking your strategy.

Protectionism Rarely Works Over Time
In any scenario where well-established, large losers exist, there will be protectionism. Over time, protecting against something with such consumer benefit and such upside for other companies very rarely works. This is especially true for this scenario, given the high levels of consumerism. Today, consumers have access to great info from the web, and it’s amplified in the social media echo chamber. It’s hard to snow consumers in any high-value scenario.

Scenario Conclusion
The “smartphone kills the PC” scenario isn’t novel or new, but it is certainly one of the most important ones of this decade, and certainly one of the most controversial as well, given the 500M-unit stakes for the winners and losers. How many of those units will really be modular smartphones, and how many will be PCs as we know them today?

The Real Reason Google Released an iOS Gmail App

Last week, Google re-released their much-maligned iOS Gmail application. It includes a few features over and above the standard iOS email app, but nothing that is really exciting a wide swath of users, if “number of stars” is any indicator. To boot, many have argued that the Gmail app actually removes features from the standard iOS email platform. So the question is, why did Google really launch the Gmail application for iOS?  There’s a lot here beneath the surface.

Gmail App for iOS

Gmail for iOS adds the following functionality over the standard iOS email client:

  • Ability to see one entire thread on one screen
  • “Important” and “everything” distinctions similar to the Gmail.com website
  • Report SPAM
  • Full message text search
  • Labels
  • Visual addressing.  See the addressee’s avatar.

Net-net, this brings it more in line with the full desktop client.  One could also say it detracts from the experience:

  • Tiny, unreadable text size depending on email source
  • Lose fast access to other email clients like Yahoo or Exchange
  • Slow initialization unless already opened

 

It’s About an Entry Point to Other Google Services

Sure, Google will improve and tweak the experience, but why did they develop the app in the first place?  First, it’s about Google services and getting users to them from an iOS environment… and advertisements. Think about this: what are the four most likely places Google could derive revenue from?  And when I ask that, I mean the first entry into the revenue stream, or the starting point:

  • Search
  • Maps
  • Email
  • Social media

 

Once a user reaches this entry point, then Google can “cross-sell” them into:

  • Shopping
  • Books
  • Videos
  • Music

 

It’s About Micro-Segment Profile Development

Most importantly, mail provides one of the richest data sets from which to build ad profiles.  Google indexes every email you send and receive, and from it builds micro-profiles about you. The better the profile, the better the ad targeting, the higher the CPM and CPC.  All of this means more money for Google.  If users get lured away from Gmail, Google loses this.
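To make the mechanism concrete, here is a toy illustration of keyword-based profile building and ad matching. This is emphatically not Google's actual system; the stopword list, scoring, emails and ads are all invented just to show the general idea of mail-derived targeting.

```python
# Toy illustration of mail-derived ad profiles -- not Google's actual
# system, just the general shape of "index mail, profile, target ads".
from collections import Counter

STOPWORDS = {"the", "a", "to", "and", "for", "is", "my", "on", "in", "new"}

def build_profile(emails):
    """Aggregate interest keywords across a user's mail into a frequency profile."""
    words = (w for mail in emails for w in mail.lower().split()
             if w not in STOPWORDS)
    return Counter(words)

def pick_ad(profile, ads):
    """Choose the ad whose keywords overlap the profile most heavily."""
    return max(ads, key=lambda name: sum(profile[k] for k in ads[name]))

emails = ["Booking the ski trip for December",
          "New ski boots and snowboard wax on sale"]
ads = {"travel_deal": {"trip", "booking"},
       "ski_gear": {"ski", "boots", "snowboard"}}

profile = build_profile(emails)
print(pick_ad(profile, ads))  # ski_gear
```

The richer the profile (here, “ski” appearing twice tips the match), the more precisely an ad can be targeted, which is exactly why losing the mail stream hurts.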

Wrestling For Inside Control

Google is already the leader in search and maps and has the preferred placement on iOS, but Gmail is just one of a line listing of mail options. This becomes a problem for Google in that Apple and iOS now become the control point for users. Furthermore, what happens if users get more comfortable with iOS mail or, even worse, iCloud email? None of this is good for Google, and Google will keep wrestling with Apple until they can get the inside edge.

Of Course Amazon Kindle Fire Cannibalizes the Apple iPad

One way I test and gauge insights is to engage in and monitor social media.  It’s certainly not the only way, but it is one of many ways.  One very interesting discussion I am monitoring is the Amazon Kindle Fire versus Apple iPad.  There are definitely two camps that exist and not a lot in-between.  So what will really happen between these two tablets?

Different Target Markets, BUT

One thing everyone needs to realize is that there are many different kinds of consumers with very different needs, wants, drivers, and checkbooks.  Sure, our friends and family kind of seem like us, but that’s because it’s human nature to surround ourselves with people similar to ourselves.  We may think that we are a lot different from our friends, but statistically, we are very similar.  Let me give you just one example: according to the U.S. Census Bureau, the median household income in 2010 was pegged at $49,445.  Do you make a lot more… a lot less?  You get the idea.

As it relates to the iPad, there are consumers who would have stretched up to buy a $499 iPad 2 who will, instead, buy the $199 Fire.

Different Needs, BUT

The Fire and the iPad are also architected to address different needs, but that doesn’t necessarily dictate exactly what a consumer will do with them.  Tech.pinions colleague Tim Bajarin nailed it when he talked about the differences in content creation and consumption on the iPad versus the Kindle.  One thing to be careful with, however, is what we mean exactly by content creation.  Is creating an email content creation?  Is cropping a photo content creation?  I happen to think it is, and I believe that those who buy a Kindle will, in fact, be creating emails and cropping photos.  Why?  Because it’s the best available device they have at that moment.

Here’s the analogy, and it’s a personal one.  My teenagers don’t own a tablet, and therefore they watch videos and read books on their iPhones.  It’s the best device they have at the moment, even though the experience would be much more enjoyable on an iPad.  Problem is, Dad (me) is too cheap to buy another one.   Those who have a Kindle will be creating light content because it’s the best device they have at that moment.

It Won’t Matter This Holiday Season

In the end, none of this discussion is relevant this holiday selling season.  Based on information from my contacts, both Apple and Amazon have been conservative in their production forecasts.  Apple doesn’t want to get stuck with inventory before their next iPad, and Amazon took a cautious tone given that it’s a new product, they barely break even on gross margin, and the video and music content upside model is untested.

Net-net, for the holidays, both will sell out and we won’t be able to see who will be the finest cannibal.  BUT after the holidays, when inventories are adjusted and there isn’t a line for either, if Apple doesn’t adjust their pricing, introduce a lite iPad, a 7″ iPad, or a new kind of subsidized business model, they will lose out in volume to the new class of 7″ tablets, not only from Amazon, but also from Barnes and Noble.


Quad Core Smartphones: What it Will Take to Become Relevant

There has been a lot of industry discussion of multi-core smartphones in the past year, and the dialog has increased with NVIDIA’s launch of Tegra 3, a quad core SOC targeted at phones and tablets. The big question lingering over all of these implementations, particularly with phones, is: what will end users do with all those general purpose compute units, and will they provide significant incremental benefit? In the end, it’s all about an improved experience that’s relevant, unique, demonstrable, and easily marketable.

Multi-Core Background

Before we talk usage models, we first have to get grounded on some of the technology basics. Whether it’s a multi-core server, PC, tablet or phone, these things must exist to fully take advantage of more than one general purpose computing core in any platform:

  • an operating system that efficiently supports multiple cores, multitasking across cores, and multi-threaded apps
  • applications that efficiently take advantage of multiple cores
  • intelligent energy efficiency tradeoffs

Once those elements are in place, you have an environment where multiple cores can be leveraged. The next step is to optimize the platform for energy efficiency. All of the hardware and software platform elements, even down to the transistors, must be optimized for low power when you need it and high performance when you need it. The Tegra 3 utilizes a fifth core, which NVIDIA says kicks in when an extremely low power state is required.

Assuming all the criteria above are met, then it comes down to what an end user can actually do with a phone with four cores.
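For readers wondering what “applications that efficiently take advantage of multiple cores” looks like in practice, here is a minimal sketch of a CPU-bound job split across cores. It is a generic desktop Python example, not vendor code and not how any particular phone app is written; the prime-counting workload is just a stand-in for parallelizable work.

```python
# Minimal sketch of splitting CPU-bound work across cores -- the kind
# of software support extra cores need before they pay off.
import os
from concurrent.futures import ProcessPoolExecutor

def count_primes(bounds):
    """Count primes in the half-open range [lo, hi) by trial division."""
    lo, hi = bounds
    total = 0
    for n in range(max(lo, 2), hi):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            total += 1
    return total

def parallel_count(limit, workers=None):
    """Divide [0, limit) into one chunk per worker and sum the results."""
    workers = workers or os.cpu_count()          # e.g. 4 on a quad core SOC
    step = limit // workers
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], limit)          # cover any remainder
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(count_primes, chunks))

if __name__ == "__main__":
    print(parallel_count(10_000))  # 1229 primes below 10,000
```

If the app is written this way and the OS schedules the workers onto separate cores, a quad core chip roughly quarters the wall-clock time; if it isn't, three of the four cores sit idle, which is the whole relevance problem in miniature.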

Modularity Could Be the Key

Quad core phones could potentially add value in “modular” usage environments. While there have been a lot of attempts at driving widespread modularity, most haven’t been a big hit. I personally participated in the Device Bay Consortium when I was at Compaq, along with Intel and Microsoft. It didn’t materialize into anything, but the concept at the time, from an end user perspective, was solid.

Today and beyond, smartphone modularity is quite different from Device Bay’s “modules”. The smartphone concept is simple: use a high-powered smartphone which can then extend to different physical environments, spanning entertainment to productivity. Here are just a few examples of modularity in use today:

These are all forms of today’s modularity with different levels of interest, penetration, and adoption.

So what could quad core potentially add to the mix? Here are some potential improved usages:

  • Modular video and photo editing. These apps have historically been multithreaded and could leverage a clamshell “dock” similar to the Lapdock or Multimedia Dock.
  • Modular multi-tab web browsing. Active browser tabs require a lot of performance and overhead; just open the Chrome browser on a PC and check your performance monitor. iOS 5 actually halts a tab when you move to another tab, forcing the user to reload it.
  • Modular games that heavily utilize a general purpose processor. Caveat here is that most of the games leverage the GPU a lot more than a general purpose CPU. It all depends on how the game is written, extent of AI use, UI complexity, where physics are done, and how the resources are programmed.
  • Modular natural user interface. While plugged in and “docked” at the desk or living room, the smartphone could power interfaces like improved voice control and “air” gestures. This may sound like science fiction, but the XBOX 360 is doing it today with Kinect.
  • Multitasking: Given enough memory and memory bandwidth, more cores typically means better multitasking.

Will It Be Relevant?

Many things need to materialize before anyone can deem a quad core smartphone a good idea rather than just a marketing play for advanced users. First, smartphones actually need to ship with quad cores and a modular-capable OS; the HTC Edge is rumored to be the first. Then the apps and usage models outlined above need to be tested by users and with benchmarks. Users will have to first “get” the modularity concept and notice an experiential difference. Moving from the standard phone to the modular experience must be seamless, something Android 4.0 has the potential to deliver. Finally, some segments of users, like enthusiasts, will need to see benchmarks to be swayed to pay more over a dual core phone.

There is a lot of proving to do on quad core smartphones before relevance can be established with any market segment beyond enthusiasts. Enthusiasts will always want the biggest and baddest spec phone on the block, but marketing to other segments, even if it provides an improved experience, will be a challenge.