HP’s Surprising Tablet Attack

I recently attended HP’s Securities Analyst Meeting (SAM) where HP made its case to Wall Street on why investors should believe in
the company and buy the stock. I will write about the conference later when I have thought through the content a bit more, as, unlike financial analysts, I need to think three to five years out. Thinking short term about HP doesn't do anyone any good. What I do want to talk about are two of the products shown at SAM: an impressive new tablet lineup.

The tablet market right now is exhibiting all the traditional signs of any young tech market. There is a lot of experimentation going on to see what consumers and business users really want. Apple, as in the early days of the consumer PC, is punishing everyone who stands in its way in the tablet market. If you have any doubt on that, just ask Motorola, RIM, Samsung, and anyone attached to webOS. As experimentation progresses, the market starts to settle into more of a predictable rhythm, and after that, massive segmentation and specialization will occur. This is classic product lifecycle behavior. HP, like Dell and Lenovo, has learned a lot over the last 18 months, and HP, in particular, put the pedal to the metal with its latest tablets.

HP segmented its lineup into consumer and large-enterprise lines. What about small business? It is too early to know, and quite frankly this will be the last market to adopt tablets, so I like this decision. Technologically, HP has opted to forgo ARM-based technology in its latest offering and instead has opted for the Intel Clover Trail solution. Only after we all see pricing and battery life will we know if this is a good decision. Let's dive into the models.

HP ENVY x2 for the Consumer
The first time I saw the ENVY x2 at the Intel Developer Forum, I was stunned, quite frankly, at how thin, light, and sexy the unit was. My latest Intel tablet experience had been a thick and heavy Samsung tablet with a loud fan used for Windows 8 app development. The HP ENVY x2 wasn't anything like this: it was thin, light, and fanless, with sexy industrial design in machined aluminum. No, this didn't feel like an iPad… it felt in some ways even better. This is a big thing for me to say given my primary tablets have been iPads… gen 1-2-3. It is very hard to describe good ID in words, but it just felt good, real good.

It was apparent to me that HP stepped up its game in design, and after talking with Stacy Wolff, HP's global VP of design, it is clear they have amped up resources a lot. While most of Intel's OEMs are focused on enterprise devices, this consumer device stands out. The only things that could potentially derail the ENVY x2 are Microsoft, with a lack of Metro applications, or too high a price tag. Net-net, I will need to see pricing and the Windows 8 Metro launch apps before I can assess what this will do to iPad and even notebook sales.

HP ElitePad for Commercial Markets
Let's face it, commercial devices run counter to what consumers want on many variables. The tablet market is no different. Enterprise IT wants security, durability, expandability, cheap and known deployment, training, software, and manageability. Consumers want sexy, cool, thin, light, and easy to use, and based on the number of cracked iPad screens (mine included), durability is not that important to them. HP has somehow managed to cross the gap between beauty and brawn in a unique way. When I first saw the ElitePad, I thought it was a consumer device. It even has beveled corners to make it easier to pick up off the conference table! Like the ENVY x2, it also feels like machined aluminum.

The ElitePad, because it has at its core an Intel Clover Trail-based design, can run the newer Metro-based Windows 8 apps as well as legacy and new Windows 8 desktop apps. IT likes to leverage its investments in software and training, and it will like that the ElitePad can run full Office with Outlook as well as any corporately developed apps without any changes. You don't want to be running Photoshop on this as it is Atom-based, but lighter apps will run just fine.

IT "sees" the ElitePad as a PC. Unlike an iPad, it is deployed, managed, and secured like a Windows PC. For expandability, durability, and expanded battery life, HP has engineered a "jacket" system that easily snaps around the ElitePad, which felt to me like the HP TouchPad. The stock jacket provides extra battery life and a full bevy of I/O, including USB and even full-sized HDMI. If HP isn't doing it already, it should be investing in special jacket designs for health care, retail, and manufacturing. Finally, there is serviceability. While I don't want to debate whether throwing away a device is better than servicing it, IT believes that servicing is better than tossing it in the trash. For large customers' serviceability needs, HP is even offering special fixtures to easily service the tablet by attaching suction cups to the surface and removing the display. Net-net, this is a very good enterprise alternative to any iPad enterprise rollout.

I am very pleased to see the care and time put into the planning and design of these devices. The three unknowns at this point are pricing, battery life, and Windows 8 Metro acceptance, including the number of tablet apps. If there is a lack of Metro apps at launch, the entire consumer category will be in jeopardy in Q4, but commercial is quite different, as ecosystems can grow into their shoe size over time. I cannot give a final assessment until I have actually used the devices, but what I see from HP in Windows 8 tablets is exceptional.

Apple Maps: Decision by Wall Street?

Just as we all start to get sick of reading and debating the Apple Maps debacle, a new interesting thread, issue, or piece of information comes up. This time, it was Apple's CEO Tim Cook apologizing for the lousy experience Apple Maps delivers. The debate on whether it's a good experience or not can be put to rest, as by Apple's own admission it is not. I applaud Apple for the admission, but Apple should never have shipped it. For a company that has defined itself by delivering the best experiences, why was the decision made to ship a suboptimal experience?
 
Apple redefined for the entire technology industry what the word "experience" means. While there may be some debate about whether Apple redefined the personal computer, no one can legitimately argue that Apple did not do this for the smartphone and even the tablet.
 
This is why it is so odd that Apple would allow this low-quality maps application to be released in the first place.
 
There are really only two different and mutually exclusive ways this can be explained:
1) Apple forgot how to deliver and evaluate a good experience.  Essentially this scenario says that Apple, after delivering great mobile experiences for years, forgot how to do that and how to measure it.
-OR-
2) Apple knew it was delivering a poor experience and decided to ship the iPhone 5 and iOS 6 anyway; it would deal with the consequences afterwards.
 
Any reasonable person who follows technology closely must conclude that it's the second option.
 
This then gets to the decision-making process. Anyone who has ever spent any time in product management knows that near the launch of a product, you have many launch readiness reviews. Apple may call its review something different, but it has something like it. It involves cross-functional teams, typically including product management, product marketing, program management, engineering, manufacturing, and operations. Each group goes through its review, one after the other. These types of meetings essentially compare the minimum launch criteria with the current state of the product. The outcome determines whether the product is ready to ship and the critical actions to close any gaps. The most senior executives rarely attend these meetings, but are sought out afterwards for escalation if there are issues.
 
You can bet that Apple Maps was reviewed and scrutinized over and over and over. You can assume some people said the experience was good enough to ship, and there were those who said Apple Maps was not ready to be shipped. The decision probably came down to Tim Cook himself, who opted to ship a suboptimal Apple Maps experience.
 
Tim Cook had a very difficult decision to make in that none of his options resulted in anything optimal. He had to choose between:
1) Ship a suboptimal experience coincident with the launch of the iPhone 5, "hitting" commit dates made to Wall Street, press, and retailers. With this decision, Apple would potentially take the heat from consumers and the press.
2) Delay shipping the iPhone 5 until Apple Maps delivered a good experience. This would raise the ire of Wall Street and investors. As we have seen over the last two weeks, even though Apple shipped millions of the new iPhone 5, it still wasn't good enough for much of Wall Street. Imagine if the iPhone 5 had been delayed by a few months. Imagine what that would've done to the stock price.
 
So how did it work out for Apple? Short-term, it worked out pretty well if you measure in terms of sales. Apple sold 5 million iPhone 5s the first weekend, and 100M users upgraded to iOS 6. Apple's stock took a massive hit this week, but it's hard to say whether Apple Maps was the culprit, or financial analyst expectations versus how many phones were sold, or fears of production issues; it was most likely a combination of all of these. Apple's reputation has surely taken a hit, but it's unknown if it will have any lasting impact. Apple's prior product issues, stemming from things like iPhone antennas, MobileMe, and Ping, barely made a brand scratch and were followed by record-selling products. I do believe, though, that people will start to question brand-new areas Apple gets into beyond its core competencies. These could be markets like search and products like TVs.
 
So why was the decision made to ship a lousy Apple Maps experience? As we've seen Apple's stock get hammered of late, it was about the stock price. Imagine if Apple had delayed shipping the iPhone 5. The stock would have been hammered even harder. Therefore, Apple's Tim Cook probably made the right short-term business and stock decision, even if it wasn't in the best interests of customers. You see, brands have half-lives, and while Apple cannot afford a string of incidents like Apple Maps, it can afford this one in isolation. Tim Cook, Wall Street thanks you. Apple Maps users, not so much.

Of Course HP Will Enter the Smartphone Market Again

Two weeks ago, the industry was abuzz with discussion about Meg Whitman's Fox Business interview on September 13. There, she said HP must ultimately offer a smartphone. This set off a chain of news stories, some aghast that HP would be considering something like this given HP's last foray into phones. Most of the ire stems from HP's exit and dismantling of Palm and webOS last year rather than from strategic analysis. Upon closer analysis, though, this makes perfect strategic sense for HP.

HP's last foray into phones didn't end well. In less than 18 months, Palm and webOS were acquired by HP and then shuttered. In less than 60 days, the HP TouchPad was launched and then discontinued. There was nothing positive about how this ended for HP, Palm, webOS, retail partners, employees, or the app ecosystem. At this point, though, none of it matters going forward, and it really is time to move on. The discussion must start with the value of the smartphone.

I have been unapologetically bullish on where I see smartphones headed. There is a credible scenario where the smartphone could take on most of our client computing roles. In this scenario, the smartphone is a modular device that "beams" data to wireless displays and peripherals. Modular operating systems with modular development environments like Android and Windows will enable developers to write once and deploy to many different kinds of form factors. Just imagine how much better this will be in five years. Even at IDF 2012, Intel showed this scenario in its WiGig video, albeit with a tablet, but there's nothing to keep this from being a phone. I want to be clear that this heavy modularity will only happen if PC usage models stagnate to the point where they don't need tremendously more compute performance or storage. If Intel is successful with its Perceptual Computing initiative, the probability of this scenario greatly decreases, as the smartphone won't be able to deliver the required performance. HP, then, must develop a smartphone if it wants to be in the future client hardware business.
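To make the write-once, deploy-everywhere point concrete, here is a minimal sketch, my illustration rather than HP's or Google's actual design, of how a modular OS lets one codebase adapt to whatever display it is driving. The layout names are hypothetical placeholders:

// One Android activity inspects the display class the OS reports and picks
// an appropriate UI, so the same APK can serve a phone screen or a large,
// "beamed" desktop-style display.
import android.app.Activity;
import android.content.res.Configuration;
import android.os.Bundle;

public class ModularClientActivity extends Activity {
    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        int size = getResources().getConfiguration().screenLayout
                & Configuration.SCREENLAYOUT_SIZE_MASK;
        if (size >= Configuration.SCREENLAYOUT_SIZE_LARGE) {
            setContentView(R.layout.desktop_style_ui); // hypothetical layout
        } else {
            setContentView(R.layout.phone_ui);         // hypothetical layout
        }
    }
}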

Meg Whitman touched on this modularity potential when she talked about emerging regions. She talked about how in some countries the smartphone will be people's first computing device and in some cases their only computing device, meaning they will never own a PC or tablet. The first point here is price. In many countries, people will only be able to afford one device, and that device will be a smartphone. Secondly, due to the modularity scenario described above, it will extend to other usage models, like desktop computing. I don't think anyone can find fault in Meg Whitman's logic. Let's now look at the enterprise.

Today, two of the biggest buzzwords are "BYOD" and the "consumerization of IT". Don't confuse this with the ability to get corporate mail on your iPhone. That's not BYOD. BYOD is getting full enterprise network, application, security, and management access. That's a lot different from mail, but many "experts" do confuse this very important point. Imagine how important this is in healthcare, finance, government, or any business that develops any kind of IP. You get the point. This is where HP could meet a need with a phone and an enterprise management system for that phone, so it is managed just like an enterprise PC. Given HP's enterprise focus, it makes perfect sense for HP to offer an enterprise-class smartphone with enterprise security, manageability, and deployment capabilities. Does this mean it will be an ugly brick? No. I'm speculating a bit, but I think it will be an attractive phone that is nonetheless durable enough to be dropped once without shattering the screen or glass backing. As it's designed for durability, it will be waterproof, too. HP has an opportunity here, the one opportunity that RIM and BlackBerry missed, and that's an enterprise phone.

There are many logical, strategic reasons for HP to offer a smartphone, given the enterprise and emerging-region needs explored above. Given HP's enterprise focus and experience in managed client devices, it has a lot of value to add, too. Add the modularity scenario, and it would essentially make HP look crazy not to get back into smartphones. I have outlined here that PC makers cannot run away from smartphones, so I am very happy to see HP getting back in. As for execution? While webOS is still fresh in the industry's mind, I think it's time for all of us to get beyond it and give HP another shot.

My Favorite Things About iOS 6

Having used every version of iOS and Android since inception, I am always very excited to jump on the latest and greatest smartphone operating system. You see, operating systems say as much about a company and about the future as they say about what's important now. While this isn't a deep analysis of OS mind reading, I wanted to share my initial thoughts on Apple's iOS 6 for the iPhone and iPad.

There are elements of both Android and iOS that I like. Neither operating system is perfect, but each has things that I really like and that are valuable to its different kinds of users. iOS 6 is no different in that there are certain things I really like about it.

  1. Do Not Disturb: Ironically, my favorite thing about iOS 6 isn't what it enables me to do, but what it enables me not to do. My phone is my alarm clock, and it was very annoying at 2am when it would start buzzing because someone in China posted on my Google+ wall or I got some other notification. Well, no more… one button means bliss.
  2. VIP Inbox: This is a special sort for important people. Like many, I get about 200 emails a day but refuse to let it run my life. The VIP mail "sort" enables me to instantly see the most important messages from the most important people, like my wife. And clients, of course.
  3. Improved Message Sync: I have two iPads and my iPhone, so iMessage synchronization is key. iOS 5 was a bit spotty, but iOS 6 has been spot on so far. Thank you, Apple.
  4. Reply with Message: Like many, my work day includes bouncing between calls, desk time, and driving. When I'm on a call and a client calls, I want them to know that I will get right back to them. With "Reply with Message", it's only two presses and I can SMS any message I like.
  5. Facebook Integration: Instead of opening the Facebook app to share something, sharing is now built into the core of the OS. This means saving time and clicks, plus contacts integration. Even though Android and webOS have had this for a long time, that doesn't diminish it as a good feature.
  6. Shared Photo Streams: This will be huge in my family, as almost everyone has an iPhone or iPod and we love sharing pictures. I will probably use this for more personal photo sharing rather than as something that pulls me away from Facebook, Twitter, or Google+.
What about Maps, Siri, Camera, and Passbook?
Apple made changes to Maps and Siri, added a panorama mode to the camera, and added a new app called Passbook.
  • Maps: I use both an Android and an iOS phone (sometimes Windows Phone) at the same time to always compare and contrast the experiences. I've always been happy with the maps on Android devices, as they had turn-by-turn directions that were very accurate. The Apple Maps function has so far worked only so-so in my little town of Austin (my kid's school is missing), and I have heard a lot of chatter about others having issues. Steve Wildstrom does a good job of covering some of the Apple Maps challenges here.
  • Panorama Mode: I've been taking panoramic pictures for a long time. Before Apple added the feature to iOS 6, I just used Microsoft's Photosynth app, which has been available in the App Store for a long time.
  • Siri: There has been a lot of research that says mainstream consumers are, on the whole, happy with Siri. In my n=1 research, I have never been thrilled with Siri's ability to determine what I am saying. I haven't yet noticed a sharp improvement in this capability, either, but others, like Tim Bajarin, have. My bar is set quite high, as I am in the car over two hours a day and want to do a lot of voice texting and dictation. Because of Siri's lack of accuracy with my voice, I am not planning on using the additional database capabilities like sports scores, movie times, and restaurant reservations. But I am sure others will love them.
  • Passbook: Think of Passbook as the one digital place for all those annoying paper items and bonus or discount cards that I always manage to misplace. Apple says you can put airline tickets, movie tickets, coupons, loyalty cards, and more in it. I am very excited about this feature, as I am paperless. Unfortunately, I cannot get it to work, and as of this writing, I keep getting error messages. I'm not the only one with this challenge, as I have seen many Twitter posts on the same thing. I have researched this and don't have a fix yet, but will update this as soon as I do.
All in all, I am happy with iOS 6 on my iPhone 4S. No, it's not "swing me around the room" amazing, but it didn't have to be for me to still like iOS. I prefer Android's open content sharing mechanisms, notifications, and live pages to what iOS has to offer, but not enough to switch my primary device off of my iPhone.

 

The Significance of Motorola’s RAZR i Announcement with Intel Inside

Today, Motorola Mobility announced the Motorola RAZR i for Europe and Latin America. The phone is very similar to North America's RAZR m, but instead of a Qualcomm ARM-based SoC, it includes an Intel Atom-based x86 processor. This announcement is significant for Intel and the industry for many reasons, including Intel's first big-brand phone in a unique industrial design and the increased use of Intel Inside, which could become an industry disruptor.

Background

Intel had its challenges in mobility over the past few years with its Menlow and Moorestown products, as there were many promises made but very few smartphone products shipped. Things changed dramatically with the delivery of Intel's Medfield SoC and with the Infineon wireless acquisition. Medfield shocked all industry watchers in that it was the first Intel mobile SoC that performed well and got good battery life. Shocking, too, was Intel's ability to use binary instruction-set translation in Android to make applications that embed ARM instructions work well on the x86 platform. So far, so good on compatibility, as I haven't heard of any meaningful compatibility challenges with Android apps.
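A hedged illustration of why most Android apps "just work" on x86: apps compiled to Dalvik bytecode are ISA-neutral, so only titles that ship native ARM libraries through the NDK lean on Intel's translation layer. This little probe is my sketch, not Intel's tooling; it simply reports which ABI a device exposes:

import android.os.Build;
import android.util.Log;

public class AbiProbe {
    public static void logPrimaryAbi() {
        // Build.CPU_ABI reports the device's primary ABI, e.g. "armeabi-v7a"
        // on a typical ARM phone or "x86" on a Medfield phone. Dalvik-only
        // apps never care; an ARM-only native .so on an "x86" device is what
        // the binary translation layer has to handle.
        Log.i("AbiProbe", "Primary ABI: " + Build.CPU_ABI);
    }
}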

Medfield pulled in the first wave of phones from Lenovo and ZTE in China, Lava in India, Orange in the U.K. and France, and MegaFon in Russia. The phones were good at their price points, but not great, and the designs looked like derivatives of the Intel-designed reference phone. Industry watchers were pleased with a good start by Intel, but wanted to see a global brand in a unique phone. This is where the Motorola RAZR i comes into play.

The Motorola RAZR i

In a discussion with Mike Bell, Intel's vice president and general manager of its Mobile and Communications Group, and with Motorola, it was apparent that both sides were very excited. I don't mean the traditional "oh, it's a launch and I have to be excited" tone. This was for real.

Like the RAZR m, the RAZR i has an edge-to-edge 4.3" Super AMOLED display, Gorilla Glass, an aluminum display frame, Kevlar backing, and a splash-proof coating. Unlike the RAZR m, the RAZR i is powered by an Intel Atom SoC clocked at 2GHz. It is a single-core SoC that is "hyperthreaded", meaning it can process two operations, or threads, at the same time. On paper, this approach should save power, as most phone operations are single-threaded and you don't have to burn a full second core to run them.
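For the software-minded, here is a quick, runnable sketch of what "hyperthreaded single core" means in practice: the OS schedules against logical CPUs, so a one-core, two-thread Atom reports two processors here, just as a true dual-core would.

public class LogicalCores {
    public static void main(String[] args) {
        // On a hyperthreaded single-core Atom this prints 2: one physical
        // core exposing two hardware threads to the scheduler.
        int logical = Runtime.getRuntime().availableProcessors();
        System.out.println("Logical CPUs visible to the scheduler: " + logical);
    }
}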

Motorola and Intel said that the 2GHz SoC will help with many different usage models, including web browsing, multitasking, games, and imaging with its 8MP camera. Without having used the RAZR i or read detailed reviews, the faster web browsing and multitasking messaging makes complete sense, as Intel performs very well here, and this can only get faster with clocks boosted to 2GHz. The improvement in games makes sense, too, given the phone includes a highly clocked Imagination Technologies graphics core, which should be clocked even higher along with the CPU boost. I will wait to use the phone and read reviews before commenting on the camera. The features look very impressive, as it can take 10 pictures in a second, but quality is key here, too, and I need to use a unit first. These functions are primarily handled by the Hive-based ISP and don't have a lot to do with a 2GHz processor anyway.

Battery Life

Motorola says that the RAZR i achieves "20 hours of combined usage time". The footnote says, "Battery life based on an average user profile that includes both usage and standby time. Actual battery performance will vary and depend on signal strength, network configuration, features selected, and voice, data and other application usage patterns." I have to say that is quite vague. What is more useful is what Apple discloses, which comprises detailed talk time, web browsing, video, music, and standby figures. I really cannot comment on the RAZR i's battery life until I see some of these figures broken out, which I am sure will happen as reviewers put the phone through its paces. As previous Intel-based phones have had very respectable battery life in their class, I'd expect better battery life here, as Motorola has packed a 2,000mAh battery inside, but we will have to wait for reviews to see the impact of the Super AMOLED display and Motorola's software load.

Intel Inside Branding

Intel has spent over $1B on marketing each year for at least the last 15 years. It has built a highly ranked brand on consumer awareness measurements and, amongst techies, very high awareness and familiarity. In the PC and server space, Intel has high degrees of preference as well with the "processor-aware" crowd. The smartphone market is different, though, in that heavy-duty ingredient branding hasn't been used… yet. One major thing I noticed at this launch was the proliferation of Intel and Intel Inside branding and sub-branding. It is evident in the web, print, and video assets shown at today's event, even down to the Intel jingle at the end of the commercial. Below are a few examples.

[Image: Motorola's event]

[Image: Motorola web site specifications, first line]

[Image: Intel logo on the Kevlar backing of the RAZR i]

[Image: Motorola RAZR i landing page]

[Image: Motorola RAZR i landing page, with Intel Inside]

[Image: RAZR i logo on the Motorola RAZR i landing page]

[Image: Ending to Motorola RAZR i U.K. TV commercial]

This is a very significant industry dynamic, as Intel is now running marketing "plays" in the smartphone market similar to those it has used in the PC market. The impact on the PC market was pronounced, as both the distribution channels and the PC makers became very dependent on Intel Inside dollars to fund their marketing operations. Imagine if this dynamic catches hold in the smartphone market. It could be very disruptive to smartphone distribution, brands, and consumers, and could quite possibly give Intel the leg up it needs to gain more handset design wins and sell even more phones. Qualcomm, NVIDIA, and TI will need to reevaluate how they combat something like this.

Some phone brands look at the PC market and want to stay as far away from that brand dynamic as possible, as they see Intel's brand as dominating theirs. Others see this as a great opportunity to leverage Intel's brand, potentially gain share, and amplify their own marketing dollars. Industry watchers need to keep their eye on this dynamic, as it is a potential disruptor in the smartphone ecosystem.

Conclusion

The Motorola RAZR i smartphone announcement with Intel is very significant on many levels. First, Intel gets a big-name brand in Motorola, which, while it has faded in share of late, has a very solid brand profile. Secondly, the RAZR i has a unique industrial design with unique features, which sets it apart from the Intel-designed skins of the previous launches. I am very, very interested to see how well the phone performs, and what battery life it gets from a highly clocked 2GHz Intel Atom combined with a 2,000mAh battery in a body only 8.3mm thin. Finally, and most significant, is Intel raising the stakes in smartphone component branding. This branding dynamic will have more of an industry impact than even the RAZR i itself and has the potential to be a major disruptor.

Without Metro Apps, Innovative Touch-based Windows 8 Consumer Hardware is Meaningless

Last week at IFA in Berlin, Germany, HP, Dell, Samsung, and Sony announced some very unique hardware designs for Windows 8. They included touch notebooks, convertibles, sliders, flippers, hybrids, and tablets that can take advantage of Microsoft's Metro touch-based UI. The hardware was very impressive, and it was obvious that a lot of thought and effort went into the designs. Will they be successful? It's impossible to say at this point, because two huge questions have yet to be answered: device price and the number of high-quality Metro applications. Device prices will be announced by the Windows 8 launch on October 26, but I want to dive into the applications question. With so few details about high-quality Metro-based applications on the horizon, it's hard to get excited about the hardware.

Let’s first look at the diversity of products.

Touch-based Hardware for Windows 8

There were many innovative devices launched at IFA to take advantage of Windows 8 touch for the Metro environment.  Here are a few that appeared innovative:

  • HP SpectreXT TouchSmart– Premium 15.6″ HD touch display Ultrabook with Intel Thunderbolt technology.
  • HP ENVY x2– Ultrathin notebook whose 11.6″ HD display can be removed and used as a tablet.
  • Dell XPS Duo 12– Premium Ultrabook with a 12″ touch screen that flips around to be used as a tablet while in Metro mode. The design includes machined aluminum, carbon fiber, and Gorilla Glass.
  • Dell XPS One 27– Premium all-in-one with a 10-point-touch-enabled, Quad HD (2560×1440) 27″ display. The all-in-one will also lie flat, enabling multiple users to use it at the same time.
  • Sony VAIO Duo 11– Slider with an 11″ display that operates as a notebook and a tablet when the HD display is slid onto the keyboard.
  • Samsung ATIV Tab– Windows RT tablet with a 10″ touch display. It's thin at 8.9mm and light at 570 grams.

As you can see, the diversity of Windows 8-based touch devices was very wide, which, given a wide variety of high-quality apps, would usually mean that something would stick. The problem is, few really know the true state of Metro-based touch apps, including most PC and chip makers.

Ecosystem Losing Confidence in Metro Application Delivery Timing

Nearly ten months ago, Microsoft held its BUILD conference for Windows 8 software and hardware ecosystem partners. Microsoft also launched its development platform for Windows 8, called Visual Studio 11. Every attendee went home with a robust Samsung developer tablet and keyboard with Visual Studio Express Beta, intended to spur development of Metro-based applications. As of April 2012, six months later, 99 applications were available. As I outlined here, this was way behind where Apple was at the same point, but far ahead of where Android was.

So where does that leave the state of Metro apps? Microsoft is now seven weeks away from launch, and virtually no one has much of a sense for how many compelling Metro apps will be available at launch. Here are some key milestones that Microsoft has announced:

  • April 18, announces developer submission locales from 5 to 38 markets, but limited to select partners; app catalog at 21 markets
  • May 31, “hundreds of preview apps in the catalog-including the first desktop app listings”; app catalog increases to 25 markets; Share contract added
  • July 20, Microsoft outlines how to monetize and get paid for apps
  • August 1, RTM Windows store opens; “qualifying businesses can submit apps”; 54 new markets and 24 app languages added

As of today, with my copy of the Windows RTM build, I can see only 844 Desktop + Metro apps in the Windows 8 store. I do not see most of my favorite apps, including Facebook, Path, LinkedIn, Pinterest, Google+, Netflix, Hulu+, Amazon Video, HBO Go, ESPN Video, Time Warner Video, CNN, Flipboard, Pulse, Nike Running, MyFitnessPal, Pandora, Flixster, or E*Trade. I am concerned, and many ecosystem executives are telling me that they are very concerned, with the state of Metro-based applications. Should we really care about how many Metro applications will be available at launch, given Windows 8 is a five-year investment?

Why Should We Care About Apps at Launch?

Having microscopically observed and participated in the launches of the Motorola XOOM, HP TouchPad, and BlackBerry PlayBook, I saw some common threads that led to their failed launches and quick retreats. These issues included:

  1. Incomplete hardware: hardware features did not work or were not available, including SD cards and LTE support.
  2. Incomplete paid content: lack of support for paid movies, music, games, and books.
  3. Sluggish and buggy: the experience was slower than expected and/or included many bugs.
  4. Lack of high-quality apps: few applications were available that consumers recognized or found compelling. In some cases, basic apps like calendar, mail, and contacts were missing.
  5. Priced at iPad levels: tablets, even with the issues above, were priced on top of the iPad, an already known and successful product with a great consumer brand.

Many of these issues were addressed shortly after launch, but it didn’t make a difference.  The damage was done and in most cases, irreparable.  What developer wants to write apps for an ecosystem or platform that just got slaughtered by the press, analysts and consumers?

What about post-launch? Some issues still exist even for 10″ Android tablets. Android 10″ tablets have come a long way with ICS, Jelly Bean, and paid content, but still suffer from a lack of high-quality apps, which still number in the hundreds. Remember, part of the success of the Google Nexus 7 is that it leverages Android phone applications, not apps designed for tablets. And let's not forget it is priced at half of the iPad. I believe Windows 8 will launch with #1, complete hardware, via the PC makers and #2, paid content, via Xbox Live, Netflix, and Kindle, and so far it looks like even #3 is covered, as Windows RT will have acceptable performance. As for #4 and #5, well, no one knows yet, and if we can learn anything from Android tablets, it is that a robust supply of touch-based applications is required for success, something that still eludes them almost two years later.

I'll say it again: innovative consumer hardware for touch-based products is meaningless without thousands of high-quality, compelling, and popular apps.

Apple vs. Samsung: What Doesn’t Compute

I'm not a lawyer, but I am an analyst who has, unfortunately, participated in some of the largest corporate legal battles, has two immediate family members who are IP lawyers, and has had to decide on industrial design for consumer electronics. None of this qualifies me to give legal advice, but I am able to spot some very interesting things in technology lawsuits. The Apple-Samsung lawsuit was no different, as it was full of opportunity and oddities, and I wanted to share just a few observations.

The first thing I want to be clear on is that, based upon the evidence and common sense, I believe Samsung infringed on at least a few of Apple's patents. Just looking at the Samsung phones before and after, and hearing about the stated need to be like Apple, was enough for anyone to arrive at the conclusion that some phones were made to look like Apple's. What I am not saying is that I agree with everything the jury came back with; I don't. I am not a lawyer, and I did not see every shred of evidence that the jurors saw.

With that off my chest, let’s dive into some of these things that I found unique or odd about the trial.

I’ve Seen That “Aligned Grid” Before

Two of the patents under scrutiny dealt with the way iOS icons are set in a grid with a bar situated at the bottom for apps. Specifically, these were patents USD604,305 and US 3,470,983. Funny enough, the first thing I thought of was my Windows desktop, where I have icons aligned in a grid with my most-used icons pinned to my taskbar. I remember old versions of Windows that would "Align to Grid", too. So really, what is so unique or special about this patent? Is it the fact that I am using it on a PC and the patent is on a phone? I find this one odd.

[Image: iPhone icons aligned to grid with dock]

[Image: Windows icons aligned to grid with dock]

I’ve Seen That “Pinch and Zoom” Before

I remember getting an early preview of Microsoft's original Surface table, now called PixelSense. It could recognize over 50 simultaneous touch points, as it was designed for more than one person and for entire hands. One of Surface's special features was the ability to pinch and zoom in on photographs… almost exactly like the iPhone. Apple's two-finger pinch and zoom is covered under US 7,844,915. I am certain that Microsoft and Apple are dealing with this one way or another behind closed doors, and I speculate that, based upon Microsoft Research's budget and the number of years they had been working on Surface, Microsoft has the upper hand. Remember, Apple was not then the juggernaut it is today, with more cash and market cap than anyone, which puts Microsoft in a better position on patenting pinch and zoom.

[Image: Microsoft Surface (2007)]
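For the curious, the core math behind two-finger pinch and zoom is simple. This is my generic sketch of the technique, not the text of Apple's US 7,844,915 claims: scale the content by the ratio of the current finger spread to the spread measured when the gesture began.

public class PinchZoom {
    // Distance between the two touch points.
    static double spread(double x1, double y1, double x2, double y2) {
        return Math.hypot(x2 - x1, y2 - y1);
    }

    // > 1.0 means the fingers moved apart (zoom in); < 1.0 means a pinch
    // (zoom out).
    static double zoomScale(double startSpread, double currentSpread) {
        return currentSpread / startSpread;
    }

    public static void main(String[] args) {
        double start = spread(100, 100, 200, 200);
        double now = spread(80, 80, 220, 220);
        System.out.printf("zoom scale: %.2f%n", zoomScale(start, now)); // 1.40
    }
}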

I’ve Seen Those Curves Before

One of the other key patents Apple was fighting over in court related to rounded corners. Apple had two design patents involving the corners. The two patents, USD504,889 and USD593,087, covered many physical elements combined, including rounded corners. Those curves are specifically 90-degree curves related to the same curvature in Apple's legacy icons, which date back over 20 years. I ask, does it make sense that someone can patent a curve? It does to the USPTO, but in other designs, like cars, you see related curves all the time, yes? I mean, really, do curves seem like something that is patentable? On the top is the Compaq TC1100 sold in 2003, and on the bottom is Apple's patent filed in 2004.

[Image: Compaq TC1100 (2003)]

[Image: Apple USD504,889 (2004)]

 

Would You Confuse an Apple and Samsung Phone?

One very prominent scene inside the courtroom came when Apple icon designer Susan Kare testified that even she confused the Galaxy with the iPhone. I'll give Mrs. Kare the benefit of the doubt, as maybe she was just looking at the icons, but I doubt anyone else on earth would confuse the two phones. Every Galaxy S has a "SAMSUNG" and an "AT&T" logo on the front of the phone, and you certainly wouldn't make the mistake of buying the wrong phone, as the carton is clearly labeled Samsung. So if consumers wouldn't confuse the two and wouldn't mistakenly buy the wrong phone, how damaging is the similarity, really? Have you ever heard even a rumor of someone mistakenly buying a Samsung phone thinking it was an iPhone? If you have, please let me know in the comments below.

 

[Image: Samsung Galaxy S II packaging and phone]

[Image: iPhone 3GS packaging and phone]
So What?

So I have brought up some possible inconsistencies, some "horse sense" that may go against what the jury said and potentially even against patent law. So what? I think that if we cannot look at ourselves in the mirror and be honest with each other about what violates a patent, or whether there is even a patent to violate, the U.S. patent system itself will lose credibility and is doomed. If reasonable, intelligent people can't even make sense of it, then what does that say about the problems we will face in a few years as companies become even more litigious, filing patent after patent just so they don't get burned down the road? I hope more good than harm comes out of this patent spat. The big picture is really about continued innovation. We should all pay heed to what Ben said so well yesterday: "The key to the future will be to seek out new opportunities with fresh thinking and innovative ideas. To those that think innovation is dead I pose this question: Have all the problems of the present and the future been solved? Until the answer is yes, there will always be room for innovation." Let's not let the patent system stifle that innovation, and let's use some common sense as we approach it.

How Significant is the Synaptics ThinTouch, ForcePad and ClearPad Technology?

As I have cited previously, human-computer interface (HCI) changes have defined the winners of the last decade in the phone, tablet, premium PC, and game console markets. I believe this will continue into the future. As important as the singular technologies is the way different kinds of controls come together as a system to deliver multi-modal input methods. This is why I am so excited about Synaptics' three new technologies: ThinTouch, ForcePad, and ClearPad. They have the chance to revolutionize the way notebooks, convertibles, and tablets will be used in the future. Over the last few months, I got an insider's view of the technologies, talked with the designers and HCI experts, and, of course, got my hands on the technologies. I'd like to share some insights I've gained.

Multi-Modal is the Future of HCI

The future of all device interaction will not be governed by a single method of interaction but by multi-modal interactions. Essentially, devices will take inputs in a myriad of ways, whether via keyboard, direct touch, voice, or even machine vision. To avoid confusing the user, these will all need to work as a cohesive "system" and therefore need extensive systems integration and software work. In the next few years, this will be especially true on notebooks, convertibles, and tablets. Synaptics has some incredible smartphone technologies with its InCell and TDDI (touch and display driver integration), but I want to focus on HCI for notebooks and tablets. Let's dive into the technologies.

ForcePad Technology

Over the last ten years, Synaptics and Apple have driven the biggest advancements in touchpads. Just look at how the touchpad has morphed from a small, three-button touchpad to the large, button-less touchpads seen in today's premium Ultrabooks and Apple MacBooks.

ForcePad technology removes all moving parts, is pressure sensitive, and, at less than 2.8mm, is thinner than a slice of cheese. ForcePad can perform all the functions users perform with a ClickPad, even without the user knowing there is pressure sensitivity, thus reducing any adjustment period or learning curve. Synaptics' usability research scientists (with whom I met) have tested this and observed that, on average, a user easily adapts to the hingeless ForcePad and quickly prefers the experience over the majority of hinged PC designs on the market today. The "gesture continuation" capability that ForcePad's pressure sensitivity offers provides a smoother and easier way to perform core functions, from pointing to scrolling and zooming.

Pressure sensing enables consistent interactions on any part of the touchpad, and with no moving parts, the pad never wears out, making it much more reliable. There is also an auto-calibration feature with several benefits. For the OEM, it enables a consistent, OEM-branded feel and behavior across models, something that doesn't exist in PC notebooks today: the hinge mechanism in a ClickPad is designed by the OEM's choice of ODM, so an OEM using several ODMs may get different feels for the ClickPad, even with the same supplier, because the ODM designs the hinge. Auto-calibration can also compensate for chassis flex; some thin and light notebooks can flex and activate a click just from being picked up with one hand, and ForcePad can detect this behavior and reduce accidental clicks. Lastly, while the OEM can pre-select the desired feel, auto-calibration lets a user personalize the response, adjusting it for however firm or light a touch they want.

Its ability to "feel the force" also opens up new usage models. By adding a third, relative dimension, a user could conceivably replace a joystick to play a game, eliminate the need to choose new brush sizes in a paint program, or even eliminate annoyingly slow scrolling on long web pages and documents.
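Here is a speculative sketch of one of those pressure-driven usage models: mapping pad force to scroll speed. The normalized sensor range and the quadratic response curve are my assumptions for illustration, not Synaptics' actual firmware behavior.

public class PressureScroll {
    static final double MAX_LINES_PER_SEC = 40.0; // rate at full press (assumed)

    // Map normalized pad force [0..1] to a scroll rate; squaring the force
    // keeps light touches precise while hard presses accelerate quickly.
    static double scrollRate(double force) {
        double f = Math.max(0.0, Math.min(1.0, force));
        return MAX_LINES_PER_SEC * f * f;
    }

    public static void main(String[] args) {
        for (double f : new double[] {0.1, 0.5, 1.0}) {
            System.out.printf("force %.1f -> %4.1f lines/sec%n", f, scrollRate(f));
        }
    }
}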

ForcePad, codenamed "Jedeye", was selected for the student software development competition at the User Interface Software and Technology (UIST) conference, where entrants are expected to explore innovative methods of HCI enabled by the addition of pressure sensitivity.

The benefits of making the touchpad 40% thinner by eliminating the hinge are straightforward. With a thinner touchpad, the PC maker can use that volume to make the notebook chassis thinner or even use the space for extra battery.

Users will need to get used to the lack of a clicking sound and feel, but as the entire industry learned, clicking is optional. As we learned from Apple and BlackBerry phones, what many thought would be an issue wasn't at all, and those who stayed in the past learned to regret it. Now on to the keyboard.

ThinTouch Technology

There hasn't been a whole lot of innovation in keyboards over the last 20 years. Even while some of the best OEMs, like Lenovo, and their customers pride themselves on ThinkPad keyboard performance, those keyboards are open to the same issues as all scissor-based mechanisms. That is to say, they are prone to breaking and to filling up with gunk. Have you ever had a key pop off and had to replace the entire keyboard because of a scissor mechanism?

ThinTouch removes the entire scissor mechanism and replaces it with a capacitive-touch-enabled mechanism that is 40% thinner and brings several benefits. First, without the scissor mechanism, the keyboard should be lighter and thinner, and backlit keyboards should be more effective, brighter, and draw less battery; Synaptics also says the design is more reliable and more manufacturable. Secondly, because the keys are capacitive, there is the potential to enable personalization and to bring pressure-based controls and even near-field "air gestures" into the mix. Imagine a gesture to "fling" an image from your laptop to the TV without even touching the keys, or an air swipe to turn the page while reading a book. Once you make the keyboard capacitive, the sky is the limit; a sketch of the idea follows.
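As a purely speculative sketch of how such an air gesture might be detected: track which key column reports the strongest proximity signal each sensor frame, and fire when the peak sweeps across the keyboard. The sensor ranges, thresholds, and API shape below are all my assumptions, not Synaptics' design.

public class AirSwipeDetector {
    private int lastPeakColumn = -1;
    private int sweepLength = 0;

    // Called once per sensor frame with a proximity reading per key column;
    // returns true when a left-to-right air swipe is recognized.
    public boolean onFrame(double[] columnProximity) {
        int peak = 0;
        for (int i = 1; i < columnProximity.length; i++) {
            if (columnProximity[i] > columnProximity[peak]) peak = i;
        }
        // Count consecutive frames in which the peak advances one column right.
        sweepLength = (peak == lastPeakColumn + 1) ? sweepLength + 1 : 0;
        lastPeakColumn = peak;
        return sweepLength >= 3; // peak swept across four or more columns
    }
}

Now on to the touchscreen.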

ClearPad Technology

Synaptics got a bit of a late start in the touch display controller market, but they are making up for it by expanding their controller line that supports Windows 8 touch specifications from 12″ up to 13.3″, 14.1″, 15″, and 17″ displays. In the future, as evidenced today in smartphones up to 5″, tablets and convertibles could move to InCell, which removes an entire discrete sensor, thinning the product, improving the optical qualities, and increasing performance. ClearPad Series 4, the TDDI smartphone technology, shrinks the number of controllers from two to one; by integrating touch with the actual display driver IC, it will significantly improve responsiveness while lowering cost.

For now, Synaptics' biggest advantage in tablet and convertible touch display controllers is that they marry up perfectly with ForcePad and ThinTouch. And that happens to be one of Synaptics' biggest advantages overall: they don't stop at the hardware. Their offering is the system and the total HCI experience, a blend of hardware, firmware, software, and the test tools to deliver this multi-modal interface solution.

Will PC Manufacturers Spend an Extra Dollar?

Unfortunately for PC makers, over the last few years Apple has run away with the premium-priced notebook and all-in-one markets. Ultrabooks could change this, but we won't know until after the holiday selling season. Ironically, some Ultrabooks still come with cheap touchpads and uninteresting keyboards.

I believe that many PC makers will quickly adopt these Synaptics technologies to differentiate themselves from Apple and from each other, and some will even drive them across mid-range product lines, too. Unfortunately, some OEMs will continue to count pennies as they lose dollars, as the price difference between a high-quality touchpad and a mediocre one is around one dollar. When Apple comes out with its next generation of HCI, they will wish they had invested that dollar.

Deciphering Microsoft’s Latest Windows Blog on Windows RT

For over 20 years, I have worked with Microsoft as a customer or a technology partner. Microsoft has a huge job in guiding its enormous Windows ecosystem down certain paths, and over those two decades I have seen many flavors of communication styles. For Windows 8, Microsoft has adopted a significantly different way of communicating with the ecosystem versus prior OS releases. For the broad ecosystem, Microsoft communicates through its main "Building Windows 8" blog, which appears to be a direct link to the engineering team. The latest blog on Windows RT truly is an interesting one. While not that significant on the surface, if you dig deep with context, it actually says a lot, providing deep insights into Windows RT. It also highlights the amount of pacification Microsoft is doing with the financial community and the OEMs.

It is helpful to put some context around Windows 8 and Windows RT. As I've written previously, Windows 8 truly is the biggest risk Microsoft has ever taken. Microsoft is risking over 50% of its operating profits by deprioritizing the Windows desktop and leading with the "UI formerly known as Metro". If Metro is a hit, it greatly increases Microsoft's probability of success in tablets and phones. If not, it has risked a huge part of the company's profits and reputation. Along the way, Microsoft has to keep multiple constituents aligned, many of whom are at odds, particularly when it comes to Windows RT, where it's truly x86 versus ARM. And this is where the latest blog gets interesting in what it says and doesn't say.

“Surface Didn’t Kill Other Consumer Windows RT Tablets, Really”

With Surface, Microsoft is competing head-to-head with its customers. Whether or not we want to sugarcoat it with phrases like "priming the pump", the end result is still the same: Microsoft will be competing for mindshare and market share with companies like Dell, HP, Lenovo, Acer, and Samsung. Microsoft has not once said that if a certain threshold were achieved, it would stand down or pull back. I don't want to open the debate yet on whether Microsoft needed to do this or not, as I will save that for a future column. One of the biggest things Microsoft is trying to say here is that even post-Surface, OEMs are still very interested in Windows RT tablets. So in the blog, Microsoft pointed out with exuberance that in addition to its own Surface, Dell, Lenovo, Samsung, and ASUS will launch ARM-based Windows RT devices. Why did Microsoft do this? Primarily to pacify investors who were concerned about a potential beginning of the crumbling of Windows as a platform.

“Acer May Be Upset, But Not That Upset” (UPDATED)

Acer's CEO JT Wang has really been coming after Microsoft lately, making some very caustic comments about Surface. They are veiled threats, in a way, almost as if it's a negotiation in a public forum. Wang is basically saying that Microsoft has no business doing hardware and should leave it to the OEMs, or risk mass defection and big hardware headaches it isn't ready to take on. He may be right long-term, but OEMs really don't have a viable option short-term other than Android. Android for 4-7″ devices may be doing well, but there are still fewer than 500 Android tablet apps available after a year and a half. Microsoft's statement in the blog about OEMs leads with ASUS, and it's not just about alphabetical order, either, as ASUS gets its own special hyperlink to its product, unlike the other OEMs. The blog says, "If you are following Windows RT, perhaps you have taken note of the Asus Tablet 600 (Windows RT) announcement or Microsoft's own Surface RT™ news." I love this part. It literally binds ASUS and Surface together as if to say, "Everything's OK with Acer, really." ASUS isn't Acer, but they compete heavily in Asia and Europe.

“Dell Doing a Work Tablet, Like Lenovo, But with ARM-based Design”

Only one OEM got a quote in Microsoft's latest Windows RT blog, and that was Dell. The rumor mill had been swirling for weeks on whether "Microsoft would allow Dell" to make a Windows RT tablet. This was a bizarre rumor in that it really wasn't Microsoft's decision who does the first tablets; it was primarily Dell's and the silicon provider's choice, which then needed to be approved by Microsoft because Microsoft is investing resources, too. Sam Burd, Dell's VP of the PC Product Group, says in the blog, "Dell's tablet for Windows RT is going to take advantage of the capabilities the new ecosystem offers to help customers do more at work and home. We're excited to be Microsoft's strategic partner, and look forward to sharing more soon." Note he leads with "work" and follows with "home". This is a direct response to Lenovo's Intel-based business tablet entry last week, which, interestingly enough, was cited in a Windows RT blog. Microsoft also intends this to counter Lenovo's slides that show Windows RT as a lousy corporate client.

“Where Are the Web Browsing Battery Life Figures?”

Like others, I was glad to see the Windows RT battery life figures. The HD playback numbers make sense, as video playback is limited to a very small part of the SoC and in some cases can run without even lighting up the CPU or GPU. The connected standby figure is also very impressive, but I am curious about the variables around it, like the type and persistence of the connection. One figure that was glaringly absent, though, was web browsing battery life. This one is really important, as it is also an indication of how well Metro apps will do, since many are based on web technologies. Its absence probably means that the numbers aren't great or are inconsistent, and that drivers are still being tweaked.

“Windows RT Does Deliver a Differentiated Experience, Really”

Microsoft starts its Windows RT blog with a tip of the hat to both Intel and AMD. The industry was surprised when Intel provided OEMs low-power silicon with acceptable performance, and I think it was surprised by how well AMD's Trinity does, too. Intel downplayed its achievements on Medfield, too, which makes sense, as it had lost mobile credibility on Menlow and Moorestown. Microsoft's tone in the blog is as if to say, "OK, Intel does have competitive silicon that works with the full-featured Windows 8, but there is value in Windows RT, too."

And what is Microsoft saying about the incremental value? Essentially, it is parroting exactly what NVIDIA's CEO Jen-Hsun Huang said months ago: that it's about consistency of experience. Microsoft talks about consistency in battery life, graphics, gestures, and even physical characteristics.

Microsoft may have a point here in that the focus and options are much more dialed in on Windows RT than on Windows 8. For example, because Windows RT cannot install Windows 7 apps, there's no way a consumer will install Battlefield 3 and have a lousy experience. Also, the touchpad experience on Windows 8 is crucial, and according to the blog, all Windows RT tablets will support the side swipes and swipe up/down. Does this mean you cannot find these features on x86-based Windows 8 tablets? No, but it doesn't mean all of them will have them, either.

Where is Toshiba?

One of the biggest missing OEMs from the blog was Toshiba. Toshiba and Texas Instruments were both showing off Windows RT tablets at Computex, but they were nowhere to be found in the Microsoft blog. Shara Tibken at the Wall Street Journal cleared up any ambiguity with her article: Toshiba will not do a Windows RT-based tablet now and will instead focus on Intel- and AMD-based tablets. This does not bode well for TI, who has significantly lagged NVIDIA and Qualcomm on drivers for the Windows RT platform. While I have personally used NVIDIA- and Qualcomm-based RT tablets, I haven't been able to try one based on TI silicon yet, so all I can do is relay what I have heard from developers. With TI's focus primarily on Android, it makes sense that it would prioritize that development over Windows, even with the direct help it is receiving from Microsoft. I believe the future of TI-based Windows RT devices is in question now, at least for launch.

Conclusion

Microsoft felt the need to pacify its OEM customers and investors and used this latest blog to do it. It has come under attack of late for launching Surface with Windows RT, which cast doubt on whether any other RT tablets could be successful. Ironically, with very compelling Intel-based Windows 8 tablets being announced by Lenovo as well, Microsoft needed to reiterate why Windows RT is incrementally valuable and different. Will the blog be enough to sway the ecosystem? I don't think so, as what people really want to know is how many well-known, high-quality Metro-based applications will be available at launch, as this will be the true decider of how well Windows RT fares, at least in the short term. We should know more tomorrow, August 15, as the doors to the paid Windows 8 store open up. As we learned from the BlackBerry PlayBook and the webOS-based TouchPad, first impressions do matter, and I hope Microsoft has heeded that history.

Why Do Products Like the Google Nexus Q Launch?

At Google’s I/O developer conference in June, Google boldly announced the Nexus Q, a $299 streaming music sphere. Last week, Google stopped selling the Nexus Q and cancelled all consumer pre-orders, saying buyers would get the unit for free. It appeared obvious to just about everyone except Google that the Nexus Q value proposition was incredibly weak. So how does one of the most powerful companies in the world allow a product so obviously not ready to get to market? It happens for many reasons, and more often than you might think. I’d like to characterize the many reasons products like this make it this far, as I experienced them through my 20 years of product management and product marketing.

The Google Nexus Q Value Proposition

While I detail out the Nexus Q value proposition here, let me provide a small sample. For $299, consumers get a cool-looking black sphere for music and video streaming that can only pull content from the Google Play cloud. To control it, you must have an Android phone or tablet, and if your friends want to add tracks to the music list, they must have an Android device. The Q also came with a high-quality, built-in amp and speaker jacks. Net-net, it was $200 more, or 3X the price of, an Apple TV or Roku device if you are looking at it as a content streamer. If you are an audiophile, you will be comparing it to a Sonos, which, while comparably priced, can also stream from multiple cloud services and be controlled by iOS and Android devices.

The press gave it the expected response.

The Press Reaction

The press reaction was brutal.   The best way to show just how bad it was is with the headlines:

Google I/O was a smashing success. Google launched a great Nexus 7 tablet with Project Butter and Google Now, some cool social and search features, and who will ever forget the Google Glass demo? Google didn’t need to launch the Q to have what I would characterize as one of the more perfect developer conferences.

So why and how exactly do products like the Nexus Q make it all the way through the ideation, planning, execution, and launch phases without someone putting on the brakes? There are many ways and reasons this happens, which I outline below, and product managers need to pay heed to all of these potential pitfalls. I am speaking generically about products here, not specifically about the Nexus Q.

Don’t Solicit Outside Opinions

Some companies don’t solicit outside opinion by design. Outside opinions can come via market research, consultants, tech analysts, etc. The project is so secret that they put the team members in another building with limited access and have them sign NDAs. This is done, obviously, for security, and there are hundreds of examples of this that the industry finds out about after the fact. This week in the Apple-Samsung trial, we all heard that the iPhone development was treated like this. This secret method obviously works well for Apple and a handful of companies, but for other companies, not so much. Some other companies don’t solicit outside input because they just don’t see value in it. Some say it’s too time-consuming or too expensive. That’s just code for “I don’t see the value.” They figure they are the experts, have all the answers, and any outside input could lead to the project being derailed.

Living in an Alternate Universe

Sometimes, companies will solicit external input on a concept or product but don’t listen to the advice or heed it. Many times you will hear the phrase, “well, that’s only one piece of input we’ll incorporate.” This usually means the outside input won’t be used or even heeded because, quite frankly, the company “knows” more than the outsider giving the input. Or so they think. These kinds of folks get outside input because the rule book says they need to get it, and once they do, it becomes a filled-in check box versus valued input. The product planner may believe the input, but just not heed the outcome. The end result is the same as not soliciting input.

Underestimating the Downstream Impact

Many times, a product will successfully make it through the gate process only for the PM to be surprised down the line by a cost overrun or an internal team missing a delivery date. This could very well have happened to the Nexus Q. I can imagine the Q had a very long list of required features and nice-to-have features, and judging by what launched, some of the “must-have” features didn’t even ship. I can think of just a few features that could have been added to make the Q the must-have party device. What happens next is where the damage is done. So what is a product leader’s reaction if their product is hit with a significant cost overrun or schedule slip? In the case of the Q, the impact was most likely minimized. Look at how many products get sent out as “Beta” at Google. Most of them do. Google News and Shopping were betas for years and cost the consumer nothing but time to use. The difference is that the Nexus Q was $299, and it was more like alpha stage if you gauge on feature completeness.

The Emperor Has No Clothes

When I have been privy to post-mortems of train-wreck projects, many times it comes down to a lack of leadership. The entire team had a decent vision, knew the customer and what they wanted, solicited external input, listened to it, and knew the downstream impacts of the issues that existed if they trudged forward. But the leader in charge felt they had to meet the date at all costs or didn’t listen to the team members. The leader made a commitment to someone, whether to their VP, SVP, or board of directors, to deliver something by a certain date, and they were going to do it come “hell or high water.” Leaders are trained to listen, be decisive, and stick to their guns, sometimes at great cost.

What’s Next for the Nexus Q?

I believe the team managing the Nexus Q had a deadline to hit with Google I/O; they got in over their heads on features and just couldn’t deliver. And no one stopped the project. The Nexus Q name and the Google brand have been damaged, but I think it’s recoverable. To maintain the Nexus Q price, Google will need to add a ton of features. I am envisioning the “ultimate party device” where everyone who comes to the party can play videos, music, and games from their own device and the cloud. I can also see a “party mode” where every picture, video clip, and social media post taken at the party is shown on a large HDTV. If something like that is a bridge too far, then Google will need to pull the price lever. The BOM cost cannot be over $100 even with the built-in 25W amp, so they have a lot to work with. Whatever Google does, they need to make some decisive moves if they hope to be successful with living room devices.

Microsoft Office 2013’s Biggest Risk Could be its Visual Design

Microsoft Office has been the staple of productivity for years, particularly for businesses. Therefore, whenever big changes happen to the product, it’s a big deal. Literally millions of IT departments and users shoulder the burden of learning every new version in hopes of squeezing out every ounce of corporate productivity. Microsoft’s latest version is in preview, out for testing by millions of current and some potential users, too. There has been a lot written about the risks around potential pricing and its cloud-first method, but I believe the biggest risk is in its visual design, which looks more like a free Google product than a rich app buyers pay $399 for.

Whether the industry accepts it or not, Apple is leading the latest design wave as measured by what looks premium. Physically, Apple design is all about minimalism with brushed aluminum, blacks, whites, and sweeping angles with as few connectors and buttons as possible. The software design language is connected to the hardware language, as that same brushed aluminum and minimalism is brought to OS X and apps like Mail and Calendar. Some apps take on the style of real-world objects, like Contacts, Notes, Pages, Find Friends, Newsstand, and Photo Booth, with elements of paper, leather, wood, and even fabric. I cannot say I am a huge fan of the real-life designs, but it hasn’t stopped me or millions from buying Apple products. Microsoft’s Metro is distinctly different.

Metro design is a sharp departure from Windows 7 and also very different from Apple. Being different is a good thing as long as it attracts who you are targeting. Metro is direct touch, air gesture, and speech control first, mouse and keyboard second. It focuses on the content by adding a ton of white space, 90-degree angles, and multiple bright colors. There are no ties to real-life metaphors in color, shape, or texture. Like many, I like Metro for phones, tablets, and even the XBOX. Now Office 2013 adopts the Metro design, a sharp departure from Office 2010. After using Office 2013 for a week, this is unfortunately where my Metro design admiration stops. The interesting part is that I thought it looked fine in screenshots, but as I used it on my 23” display for a week, it felt lifeless and drab. It was hard to even sit in front of and use for a few hours, and I believe many other users will have this challenge as well.

I must point out that the industry has lived through many Office design changes, and there has always been a lot of uproar. This is nothing new. Remember when the ribbon first came out? Many said it would be the thing that drove people to the alternatives, which didn’t happen. I think this case could very well be different, as many alternatives exist, primarily from Apple and Google, and with such a drastic design departure, users will need to relearn or become comfortable with something new. At no time have Apple’s and Google’s office tools been such viable alternatives. I do not bring bias into this conversation, as I have been a committed Office user since its inception. In fact, I bend over backwards to use it in that I pay a monthly fee to Google just so I can sync my Google contacts and calendar to Outlook. Based on the design changes and the alternatives, I am considering the switch and am looking at Apple and Google right now. While my and others’ purchase criteria incorporate more than just design, I think it is vital, as it’s what you will be staring at eight hours a day.

The Google Apps for Business design language is more similar to Office 2013 than to Office 2010. It is minimal and very blocky, with few shadows and lines. In some ways, it’s more minimal than even Office 2013, which still sports the full ribbon.

Apple’s Mail and Calendar are more like Office 2013, with depth and shadows but with a very minimal ribbon or header, and they have seamless connections to Google Mail and Calendar without a monthly fee.

So what does this mean for the success of Office 2013? I believe Microsoft’s risk in the enterprise is primarily from Google Apps for Business, but until Google can develop more robust spreadsheet scripting, increase presentation design sophistication, and implement more robust offline capabilities, it won’t make too big a dent among white-collar professionals. For employees who just need mail and calendar, though, Google is a big risk; at a $349-499 retail price, why would IT even think of buying Office for them? And look at the Google design… it looks so similar now to Office 2013. For small businesses, I believe Apple is the big risk to Microsoft Office 2013. Included with every Mac, a user gets a full-fledged and robust email and calendar program, and users can buy decent spreadsheet, presentation, and word processing apps for $19.99 apiece. Add to that, they’re already synced with iCloud and have optimized apps for the iPhone and iPad. Users with Apple can also eliminate the monthly fee that people like me pay for the Google connector for Outlook.

It is a good time for consumers and businesses, as more choices are available than ever. Now that the design has changed so much, it is the time to explore your options.

How Android Raises the Experience Bar with Nexus 7

As a technology insider who has actually planned, developed, and launched products, I have always believed it is important to spend an inordinate amount of time living with new and emerging technology products. Only this way can you get the “feel” of a product: where it is and where the category is headed. With regard to Android tablets, I have lived with every version of the operating system since its inception on 10” and 7” tablets. For every Android tablet version, I added every single personal and business account and used it as I would expect general and advanced users to use it. While I experienced some very positive things about each Android tablet version, whenever I held one up to the iPad, it just didn’t compare. Either my preferred apps weren’t available, the content I wanted was missing, or it just didn’t “feel” right. After using the Google Nexus 7 for a few days, I can say the experience is solid and a lot of fun, something I have never before said about an Android tablet.

Why Non-iPads didn’t Sell Well

We must first understand Google’s previous missteps with Android tablets to fully appreciate how far they have come with the Nexus 7.  While I penned this post a year ago outlining why Android tablets weren’t selling well, let me net it out for you.  Non-iPads haven’t sold well over the last year because:

  • tablets were sold with incomplete collections or no available movies, music, TV, books, and games
  • tablets were sold with minimal applications optimized for the platform
  • tablets were released with unusable features like LTE, SD cards, and USB ports
  • tablets didn’t “feel” good as there were stutters and sputters
  • with all the issues above, most 10” tablets were sold at the same price as the iPad

Think about the horrible stories consumers who paid full price for an HP TouchPad, Motorola Xoom, or BlackBerry PlayBook tell their friends and colleagues today. Given tablets are a new category and still a “considered” purchase, everything other than the iPad was considered risky, particularly for the non-techie consumer.

So why will the outcome for the Nexus 7 be any different? Well, it’s all about its integrated and holistic experience.

Nexus 7 is a Big Phone with Access to 600,000 Phone Apps

No one doubts that Google’s Android has been successful in smartphones. It has been so successful, in fact, that Android eclipses iOS in market share. This is why it’s so important to understand the implications of Google choosing the phone metaphor for the Nexus 7: it’s all about apps. Even today, Android tablet apps are counted in the hundreds while iPad tablet apps number in the hundreds of thousands. Apps and content are to tablets as roads are to a car, and consumers have access to at least 600,000 of these Android apps. It’s not only about leveraging the phone app ecosystem, though, as the HTC Flyer was a phone-based 7” tablet and didn’t exactly set the world on fire in sales.

Nexus 7 Uses State of the Art Hardware and Software

I liked my Kindle Fire when I first got it, but in reality, I was more impressed with its price versus the iPad than with the experience. Over time, my Kindle Fire just sat in my drawer at home while I used my iPad 2 and then the iPad 3. I stopped using it because the web and mail experiences were just so pathetically slow, and quite frankly, I got tired of staring at pixels, as I am very near-sighted. I attribute this to the cheaper hardware, a much older Android 2.3, a slow browser for complex sites, and a lower-resolution display. I must reinforce, though, that it was less than half the price of the iPad 2 when it shipped, and millions looked the other way as they were just happy to have a tablet.

The Nexus 7 uses state-of-the-art hardware and software, and at least for six months, buyers won’t have much remorse. The two main drivers of the experience are Android Jelly Bean and NVIDIA’s Tegra 3. Jelly Bean, the latest Android OS, adds a tremendous number of new features but, in short, enables:

  • Project Butter which doubles the UI speed to 60fps so Android finally feels responsive
  • fully customizable widgets at any size the user chooses
  • voice search and dictation that actually works, as Google moved much of the logic and dictionary back to the client and off of the cloud
  • fully customizable notifications, to see just what you want to see and very little of what you don’t want to see
  • Google Now, their first intelligent agent

The NVIDIA Tegra 3 SOC is just as impressive, as it has:

  • a quad core processor clocked at 1.3GHz, which speeds up tabbed browsing, background tasks, widgets, task switching, multitasking, installing apps, etc.
  • a 5th battery-saver core which takes over in idle mode to extend battery life
  • GeForce graphics with 12 cores clocked at 416MHz to play the highest-end Android games and HD video

When you add these features to the 7”, 1280×800 (216 PPI) display, you get a very solid experience that just “feels” good.
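
For those who want to check the math, both the pixel density claim and Project Butter’s frame rate target fall straight out of the numbers above; here is a quick back-of-the-envelope sketch in Python.

    import math

    # Pixel density of a 7-inch, 1280x800 panel: diagonal pixels over
    # diagonal inches.
    ppi = math.hypot(1280, 800) / 7.0
    print(f"{ppi:.0f} PPI")  # ~216 PPI, matching the spec sheet

    # Project Butter's 60fps target leaves a fixed time budget to draw
    # each frame; miss it and the UI stutters.
    frame_budget_ms = 1000.0 / 60.0
    print(f"{frame_budget_ms:.1f} ms per frame")  # ~16.7 ms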

It’s All About the Experience

As the rest of the phone and tablet industry has painfully learned from Apple, it is about delivering a holistic and integrated experience between software and hardware, not the ingredients that make it up. The Nexus 7 does deliver a good, holistic experience, and not just at a certain price point. While what counts as a “good experience” is very personal, here are many of the experience points I believe will be universally appreciated:

  • light enough to comfortably hold in one hand and small enough to put in a coat, cargo pant pocket or purse
  • the UI “feels” fluid and very fast
  • cannot see any pixels, which could distract from the visual experience, particularly when used in bed or by near-sighted users who hold the tablet near their face
  • the tabbed browsing is very fast, focuses well on desktop-sized sites, and bookmarks sync with desktop Chrome
  • the apps and content users want will be available, at least in most countries
  • email is full-featured and very fast, with no lag when deleting, creating, or linking to web sites
  • notifications are subtle, non-invasive, and speedy to resolve
  • widgets are fully customizable and save time in getting to content, in many cases even eliminating the need to open an app like email or calendar
  • with multiple apps running in the background with data feeds updating, it still feels smooth

The holistic experience is greater than just the sum of its piece parts, a first for Android tablets.

Nexus 7 Significantly Raises the Android Tablet Experience

As Ben Bajarin pointed out here, usage models will differ between 7” and 10” tablets. One thing I must add is that, like the Fire, the Nexus 7 will pull some potential sales away from the iPad if Apple does nothing. This is an element that many fail to recognize. The analogy I will use to show this is sedans and minivans: if minivans had never been introduced, sedans would have sold more. In parallel, without a Nexus 7, Apple would sell more iPads, even if they don’t share the same exact usage models or price points.

Will Apple roll over and let Google and Android slow down its march toward digital dominance? Probably not. I expect Apple to introduce a 7” tablet for many reasons, and, as Apple laid out at WWDC, iOS 6 is very compelling, especially when connected with other Apple devices. Today, the broad tech ecosystem and investors see Apple as invincible, which is understandable as they have plowed over many of the largest companies in tech. If Google and Android start to gain credibility in the tablet space, what message will that send about invincibility? Apple needs to stop Google in their tracks and remove all of the oxygen during the holidays to maintain its dominant status.

One thing for certain is that the Nexus 7 and Jelly Bean significantly raise the bar for the Android tablet experience, something that has been absent for 18 months.

Human-Computer Interface Transitions will Continue to Drive Market Changes

As a former executive and product manager for end consumer products and technologies, I have planned and conducted extensive primary research on Human-Computer Interfaces (HCI), also called Human-Machine Interfaces (HMI). A lot of this research was for the industrial design of consumer products and their mice, keyboards, and even buttons. I conducted other research for software and web properties, too. Research was one input that was mixed with gut instinct and experience to reach final decisions, and in those times the companies for which I worked were #1 in their markets. Only recently have innovations in HCI come to the forefront of the discussion, as the iPhone, iPad, and XBOX Kinect have led in HCI and in market leadership. I believe there is a connection between HCI and market leadership which needs more exploration.

For years, the keyboard and mouse dominated HCI. For the previous deskbound compute paradigm, the keyboard was the best way to input text and perform certain shortcuts. The mouse was the best way to open programs and files and to move objects around the desktop. This metaphor even impacted phones. Early texting was done on 12 keys, where users pressed a key one, two, or three times to produce a given letter, which was later improved with T9 text prediction. Thankfully, BlackBerry popularized the QWERTY phone keyboard for much improved texting and, of course, mobile email. Nokia smartphones then popularized the “joystick”, which served as a mini omni-directional pointer, once the industry shifted to an iconic smartphone metaphor.
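
To make the 12-key pain concrete, here is a toy sketch of multi-tap decoding, the scheme T9 later improved upon. The keypad mapping is the standard letter layout; the function name and input encoding are mine, purely for illustration.

    # Toy multi-tap decoder for a 12-key phone pad: pressing a key
    # n times in a row selects the nth letter mapped to that key.
    KEYPAD = {
        "2": "abc", "3": "def", "4": "ghi", "5": "jkl",
        "6": "mno", "7": "pqrs", "8": "tuv", "9": "wxyz",
    }

    def decode_multitap(presses: str) -> str:
        """Decode space-separated key runs, e.g. '44 33 555 555 666'."""
        out = []
        for run in presses.split():
            letters = KEYPAD[run[0]]
            out.append(letters[(len(run) - 1) % len(letters)])
        return "".join(out)

    print(decode_multitap("44 33 555 555 666"))  # prints: hello

T9’s advance was requiring only one press per letter and using a dictionary to disambiguate the intended word, a form of prediction today’s touch keyboards still depend on.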

Then Apple changed everything with the iPhone. They scrapped both the physical keyboard and the physical pointer and replaced them with the finger. We can debate all day long whether it was the capacitive touch screen, the app ecosystem, or something else that drove Apple to its successful heights, but we can all agree that Apple needed both to make a winner. Just use one of those $99 tablets with a resistive touch screen and you will know what I’m talking about.

The touchpad has gone through many noticeable changes as well. Remember when every notebook had a touchpad and two, sometimes three, buttons? Now look at Dell’s XPS 13, the MacBook Air, and the MacBook Pro. We are now looking at a world of giant trackpads that can recognize multitudes of gestures with minimal effort.

Then Microsoft changed the game with the XBOX Kinect. Interestingly enough, like Apple, Microsoft eliminated physical peripherals and replaced them with a body part, or multiple body parts. Nintendo reinvented gaming with the Wii and its bevy of physical controllers, and then Microsoft removed them and replaced them with the major limbs of the body. In the future, Microsoft could remove the gaming headset microphone, too, once Kinect can differentiate between and separate different players’ voices.

Voice is, of course, one of the most recent battlegrounds. Microsoft has shipped voice command, control, and dictation standard with Windows PCs for nearly a decade, but it has never become mainstream. They do provide a very good “small dictionary” experience on the XBOX Kinect, though. Apple has Siri, of course, and Google has Voice. Microsoft may look like the laggard here based upon what they’ve produced on PCs and phones, but I am not counting them out. They have mountains of IP on voice, and I wouldn’t doubt it if the industry ends up paying them a toll for many voice-controlled systems in a similar way to how OEMs pay Microsoft every time they ship Android. This is just one reason Apple licenses Nuance for the front-end of Siri.

We are far from done with physical touch innovations. Just look at Windows 8 notebooks, which are experiencing a dramatic shift with multiple gestures using multiple fingers. Just look at all the innovations Synaptics is driving for Windows 8. Their Gesture Suite for Windows 8 “modern touchpads” adds support for the eight core gesture interactions introduced with Windows 8 touch, specifically supporting the new edge swipes that navigate the fundamentals of the Windows 8 Metro experience. Interestingly, with the addition of all of the Windows 8 gestures on the trackpad, for certain usage models the external mouse actually starts to get in the way of the experience. I can see that as touch displays and advanced touchpads become commonplace, this could eliminate the need for a mouse. This would be fitting, as previous HCI shifts also resulted in the elimination of a physical device to improve the experience.

The long-term future holds many, many innovations, too. I attended this year’s annual SIGCHI Conference in Austin, TX, which I like to describe as the “SIGGRAPH for HCI”, and it is truly amazing what our future holds. Multiple companies and universities are working on virtual keyboards, near-field air touch using stereopsis (2+ cameras), improved audio beamforming for better far-field pickup, and a bevy of other HCI techniques that you have to see to believe.

What can we take away from all of this? One very important takeaway is to realize that the companies I cited who led with major HCI changes ended up leading in their associated market spaces. This was true for BlackBerry and Nokia during their heydays, and now it is Apple’s, Microsoft’s, and maybe Google’s turn. It doesn’t always hold true in commercial markets, though; just look at SAP. But in many cases it stands true for consumer companies. In the future, it is important to keep your eyes on companies investing heavily in HCI technologies: companies like Microsoft, Google, and Apple, and innovators and enablers like Synaptics, who I believe will continue to surprise us with advanced HCI techniques that will lead to market shifts.

Are Wearables the Next Wave of Computing?

Two weeks ago at Google I/O, Google thrust wearable computing into the mainstream public eye by performing one of the most incredible stunts I have ever seen on a technology stage. Wearing Google Glass and communicating via real-time voice and video, daredevils jumped out of a blimp, landed on the Moscone Center roof, rappelled down its side, and biked into Google I/O to throngs of cheering participants. Is wearable computing great for “show” but nonsensical in reality, or is this technology truly the future of computing?

We need to first define wearable computing. This may appear simple in that it’s “computing you can wear”, but it’s a bit more complicated than that, as I have seen some very confused news articles on it. Let’s start with what wearables aren’t. Wearables are not computing devices that are implanted into the body. This may appear at first glance an odd thing to even discuss, but over the next ten years many compute devices will be implanted to help people with medical ailments involving sight, hearing, and drug dispensing. Related to this, wearables are not implanted into prosthetic limbs, either. Wearables are compute devices embedded into items attached to the body. Some are hidden, some are visible. Some are purpose-built compute devices; others are embedded into something we view today as totally different, like glasses, contact lenses, clothing, watches, and even jewelry.

Now that we know what wearable computers are, let’s look at the challenges keeping them from being more than a niche today. For all wearables, input and output are the biggest issues that keep them from being useful or inexpensive. Today, keyboards and pointing devices (fingers included) are the most prevalent input methods for compute devices. Useful voice control is new and gaining popularity, but isn’t nearly as popular as keyboard, mouse, and touch. For wearable input to become popular, voice command, control, and dictation will need to be greatly improved. This has begun already with Apple’s Siri and Google Voice, but will need to improve by factors of ten to serve as a primary input method. Ironically, improvements in wearable output will help with input. Let me explain. Picture the Google Glass display that will be built into the glasses. Because it knows where you are looking by using pupil tracking, it will know even better what you are looking at, which adds context to your command. The final input method in research right now is BCI, or brain-computer interface. The medical field is investing heavily in this, primarily as a way to give quadriplegics and the brain-injured a fuller life.

Output for wearables will be primarily auditory, but of course displays will also provide the information we want and need. Consumers will be able to pick the voice and personality as easily as they pick ringtones. Music stars’ and even cartoon characters’ “agent voices” will be for sale and downloadable like an app is today. Some users will opt for today’s style of earphones, but others will opt for newer technology that fits directly into the ear canal like a mini hearing aid, virtually unnoticeable to the naked eye.

Display output is the most industry-confused part of the wearable equation. The industry is confused on this because they are not thinking beyond today’s odd-looking Google Glass form factor. Over time, this display technology will be integrated as an option into your favorite brand of designer glasses; you will not be able to tell regular glasses from ones used as wearables. Contact lenses are being reworked as well. Prototypes already exist for contact lenses that work as displays, and like all emerging tech, they will make it into the mainstream over time. The final, yet most important, aspect of the wearable display is that every display within viewing distance will become the wearable’s display. Based on advancements in wireless display technology and the prevalence of displays everywhere, your wearable will display data in your car, your office, the bathroom mirror, your TV, your refrigerator… pretty much everywhere.

So if the input-output problem is fixed, does this mean wearables will be an instant success? No, it does not. Here, we need to talk about the incremental value wearables bring versus the alternatives, primarily smartphones. Smartphones may actually morph into wearables, but for the sake of this analysis, it helps to keep them separate. On the plus side, wearables will be almost invisible, lighter, more durable, and even more accurate for input when combined with integrated visual output. New usage models will emerge, too, like driving assistance. Imagine the future of turn-by-turn directions with a heads-up display integrated into your Ray-Ban Wayfarers. On the negative side, wearables will be more expensive, have less battery life, less compute performance and storage, and be almost unusable if a display isn’t available all the time. This is a very simplified view of the variables that consumers will look at as they make trade-offs, but these same elements will be researched in depth over many, many years.

So are wearables the next wave of computing? The short answer is yes, but the more difficult questions are when and how pervasive. I believe wearables will evolve quicker than the 30 years it took cellphones to become pervasive. The new usage models for driving, sports, games, and video entertainment will dictate just how quickly this evolution happens. Given that technology has fully “appified” and Moore’s Law will still be driving silicon densities, I believe wearables will be mainstream in 7-10 years.

Why Apple Needs a 7 Inch Tablet

Last week, most of the tech industry was consumed with Google I/O, Google’s annual event to woo software and hardware developers to Google and, consequently, away from Apple and Microsoft. In addition to Google Glass-adorned daredevils jumping out of blimps and scaling down the sides of buildings, the Nexus 7, the first full-featured, no-compromise tablet at $199, was launched. What’s very clear is that the Google Nexus 7 will sell well and take business away from Apple’s $399 iPad 2. This is exactly why Apple needs a 7” tablet or else face the prospect of losing market share and profit dollars.

The Kindle Fire was released back in September 2011 to big fanfare. I was accurate in stating it would take share away from the $499 iPad 2, which was true until the iPad 3 was launched and the iPad 2 was reduced to $399 back in February. The situation has changed now, as the Fire is slogging away and losing share to the iPad 2 and even to the $199 Nook Tablet and the $169 Nook Color. It makes sense, as the Fire is a stripped-down tablet and the iPad 2 is not, and many consumers were willing to pay the extra $200 to have the full experience. The Fire used a smartphone operating system and an SD display, so users got a large-smartphone experience. It wasn’t a great experience, but it wasn’t horrible, particularly at the ground-breaking price point. The Fire also lacked access to the broad Google Play content and application environment, which, to some, was limiting.

The Google Nexus 7 is an entirely different animal. It comes with the top-of-the-line NVIDIA Tegra 3 with its 4-PLUS-1 processor, the latest Android Jelly Bean OS, NFC, an HD display, a camera, a microphone, and full access to the Google Play store. After seeing Jelly Bean in action, it is a marked improvement over the prior Android operating systems I have used, which just didn’t quite feel right and toward which I have been so critical. The Google Nexus 7 will sell well, which is good for Google, Android, ASUS, and NVIDIA, but bad for Apple, unless they act before the holidays.

Historically, Apple has been OK with taking the high road on unit market share, particularly in PCs. The situation changed with the iPod and iPhone, and the same is true for the iPad. Apple wants market share and will do what it takes to get it, as long as it’s profitable, they can deliver a great experience, and they stay true to their brand. Apple could do just this with a 7”, $299 tablet. Apple would be very profitable as well, as the most expensive piece-parts of a tablet are the display and touch screen, which are priced somewhat linearly with size. Apple may have redesigned some of the innards of the iPad 2 as they lowered its price, but not nearly enough to offset the $100 price reduction, so a mini-iPad would be additive, not dilutive like the $399 iPad 2.
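
To see why a smaller panel is so attractive on cost, here is a rough sketch; the assumption that display and touch-screen cost roughly track panel area is mine, a simplification of “priced somewhat linearly with size.”

    # Area scales with the square of the diagonal, so if panel and touch
    # cost roughly track area, a 7-inch panel costs about half of the
    # iPad's 9.7-inch panel.
    ratio = (7.0 / 9.7) ** 2
    print(f"7-inch panel area is {ratio:.0%} of a 9.7-inch panel")  # ~52%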

Would consumers pay $100 more for the Apple brand and experience? In most traditional geographies, yes, they would, as consumers have shown the willingness to pay $99 more for iPods and $199 more for iPads. This is exactly what the mini-iPad would be: a large iPod. That’s not bad; that is good in the sense that the iPod is still the most popular full-featured personal media player out there.

Will Apple productize what they undoubtedly have running in their labs? I will leave that to the numerous Apple rumor sites, but one major occurrence suggests they will not, and that was one of the great proclamations from the late Steve Jobs. According to Wired, in October of 2010, Jobs said the following during an earnings call: “7-inch tablets are tweeners: too big to compete with a smartphone and too small to compete with the iPad. These are among the reasons that the current crop of 7-inch tablets are going to be DOA — dead on arrival.” Does this say Apple would never do a 7” tablet? Actually, it does not, as it is really a statement about non-Apple products, and Jobs left Apple some wiggle room to maneuver. What I know for sure is that Apple must act in the next few months or risk tablet share degradation to the Google Nexus 7.

Google Nexus Q: A Confused Product

Wednesday, Google kicked off its annual developer conference in San Francisco. Dubbed Google I/O, the conference is targeted at developers in the Google ecosystem. It is meant to woo them so that they keep developing for the ecosystem and, if Google had its way, leave the Apple and Microsoft ecosystems. Many positive things came out of I/O, including the Nexus 7 tablet and a spectacular demonstration of emerging technology with Google Glass. Technology aside, I have never seen such an awesome demonstration, one that went from blimp to rooftop to rappelling to BMX bikes, all in about five minutes. That will be talked about in the valley for a while, and I pity the next major company that does an announcement, as it will be compared to Glass.

All was not good at Google I/O, though. Amidst all the successful rollouts of products, technologies, and advanced prototype demonstrations was one of the most confusingly positioned products I have seen in a while. The Google Nexus Q is a real head-scratcher, as it isn’t positioned well against anything that will be looked at as a competitor. Bad positioning never ends well, as it typically results in deep price cuts or discontinuation, and that is exactly where I see the Nexus Q headed.

The living room electronics market has segmented into Smart TVs, set top boxes, OTT adapters, and game consoles. Sure, there are gray areas and overlaps, but that’s how consumers segment these devices in their heads today. The Nexus Q and Apple TV are examples of OTT (over the top) adapters that take digital content from the cloud or from local devices. So if the Nexus Q is an OTT adapter, how well is it positioned?

Compared to the Apple TV, the Nexus Q does less and, at $299, is three times the price. That’s not very good positioning. Both products can take content from the cloud, but the Apple TV can play almost anything from an iOS device, including games. The Q does have an amp so you can plug speakers directly into the device. Is that worth the extra $200? No, it is not.

Compared to the $199 XBOX 360 S, the situation gets even worse. Today’s 360 does nearly everything the Q can, plus plays thousands of the top games. According to market data on media sales, the XBOX 360 is a formidable media hub, with users buying huge amounts of movies. Users can also play content from their mobile devices if they are “PlayTo” certified or support the latest DLNA standards.

Compared to Sonos, the price gets within range, but here is where Google lacks experience, a fully segmented line-up, cross-compatibility, a brand, and mass distribution. Sonos is the gold standard today for distributed digital audio in the home. Sonos supports the Android, iOS, and Amazon ecosystems, not just Android like the Q. Sonos also offers a full line of bridges, speakers, amplifiers, and OTT adapters. The Q offers one adapter and one set of speakers. But the Nexus Q does have lights that glow around the ball like disco lights…

I am not down on Google or Google I/O; I am frustrated that Google blew an incredible opportunity to have a flawless Google I/O. The industry needs a new champion, and Google had a shot at proving they were the one. Instead, Google reinforced that nagging description of not getting the end user or buyer. So for now, Apple keeps that title, and over time, as the Nexus Q gets reduced in price and discontinued, Google will get another shot next year.

Surface Changes the Microsoft, OEM Dynamic Forever

Yesterday, Microsoft announced Surface, a Microsoft-branded line of Windows tablets and convertibles. While details on battery life, pricing, and availability were not disclosed, Surface looks very impressive at first glance. The most unique feature is the thin keyboard case that converts the device into an extremely portable notebook. By competing with its own PC customers, Microsoft has changed the customer dynamic forever and will cause ripples all the way down the value chain.

Microsoft has a mixed history of making its own hardware products. On the plus side, we have the XBOX, mice, and keyboards. XBOX is the dominant gaming and entertainment living room platform with one of the most innovative input devices in Kinect, where the gamer is the controller. Microsoft has had some big failures, too. The Kin phone was on the market for only a few months, and the Zune was just recently discontinued. Both the Kin and Zune were nicely designed hardware, but both had fatal flaws. Kin consumer pricing blew it out of any reasonable price range for its target market. Zune forced users into a content acquisition model consumers just weren’t ready for and also faced intense competition from Apple’s iPod. While Surface details like pricing and availability were not shared, assuming enough high-quality Windows Metro apps are available, the tablet looks very compelling… and that’s a problem for OEMs.

Since the days of DOS, Microsoft has never crossed the line and competed in PCs with its own PC customers, the HPs, Dells, Toshibas, and Lenovos of the world. When Microsoft got into Xbox, their customers did not want to get into that market. The only major friction point was discussion around Microsoft under-investing in and deprioritizing PC gaming in favor of Xbox game investments. When Microsoft launched Zune, PC OEMs did participate in the PMP market, but the Zune took the premium spot and left some differentiation room for its OEMs.

Before Surface, many OEMs I research were planning to launch Windows 8 and RT tablets. Some would be out for the October Windows launch; others would be out in Q1. Some tablets would be focused on the consumer market, others commercial, and designs were in final prototype stages. Those designs could be in serious jeopardy now, but key details on Surface that would give a better indication of an OEM’s response do not exist. These are details like pricing, bundled software and services, target markets, and distribution. Given Microsoft did not share details, one must now play out scenarios and do what-if games.

Microsoft could price Surface $100 above their OEMs. This would be a halo product strategy where Surface was the best of the best and aspirational, but wouldn’t sell that many units. That is, unless it came pre-bundled with key services up front. This could be dangerous given consumer reaction to Zune’s all-you-can-eat music plan. It would, though, “prime the pumps” as Ballmer indicated, paving the way for other OEMs to slot in. Microsoft could also narrow the target market, like going consumer-only and not adding tools and features that would make it desirable to IT. This is an unlikely scenario given the Windows 8 and RT enterprise feature set and the popularization of BYOD. Surface will get into the enterprise on its own or get dragged in there by CIOs given the Microsoft brand and backing.

All the above scenarios are muddy and, net-net, only enable OEMs to participate in a low-price-leader position. This is similar to what the Android tablet manufacturers are experiencing today, which is ugly at best. A few companies are rising to the surface, like Asus and Samsung, but still no one I talk to likes this market, as no one is making any money in it and return rates and low levels of satisfaction run rampant. This is why OEMs were so excited by Windows 8. They saw how Android, and in HP’s case webOS, turned out for them and came back to Microsoft.

With Surface, the dynamic between Microsoft and its customers changes… forever. The announcement yesterday may be known as the day Microsoft delivered the iPad’s first real competition, but it may also be known as the day Microsoft crossed the line with OEMs. Microsoft is now competing directly with its customers. Some OEMs will contemplate exiting the PC business entirely or exiting the consumer market. Others will re-engage with Android. Some will run after Tizen or webOS, or even plan to double down on their own OS, like Samsung with bada. Regardless, the PC market as we know it will be different from here on out. In some ways, that is a good thing, but long term this could be a very dangerous game for Microsoft if the conclusion is that they have fewer customers for Windows.

HSA Foundation: for Show or for Real?

I recently spent a few days at AMD’s Fusion Developer Summit in Seattle, Washington. Among the many announcements was one introducing the HSA Foundation, an organization currently including AMD, ARM, Imagination, MediaTek, and Texas Instruments. The HSA Foundation was announced to “make it easy to program for parallel computing.” That sounds a bit like an oxymoron, as parallel programming has been the realm of “ninja programmers,” according to Adobe’s Chief Software Architect, Tom Malloy, at AMD’s event. Given today’s parallel programming challenge, lots of work needs to be done to make this happen, and in the case of the companies above, it comes in the form of a foundation. I spent over 20 years planning, developing, and marketing products, and when you first hear the word “foundation” or “consortium”, it conjures up visions of very long and bureaucratic meetings where little gets done and there is a lot of infighting. The fact is, some foundations are like that, but some are extremely effective, like the Linux Foundation. So which path will the HSA Foundation go down? Let’s drill in.

The Parallel/GPU Challenge

The first thing I must point out is that if CPUs and GPUs keep increasing compute performance at their current pace, the GPU will continue to maintain a raw compute performance advantage over the CPU, so it is very important that the theoretical performance be turned into a real advantage. The next thing we must do is distinguish between serial and parallel processing. Don’t take these as absolutes, as CPUs and GPUs can both run serially and in parallel. Generally speaking, CPUs do a better job on serial, out-of-order code, and GPUs do a better job on parallel, in-order code. I know there are hundreds of dependencies, but work with me here. This is why GPUs do so much better on games and CPUs do so well on things like pattern matching. The reality is, few tasks use just the CPU, and few use just the GPU; both are required to work together, and at the same level, to get the parallel processing gains. By working at the same level, I mean getting the same access to memory, unlike today where the CPU really dictates who gets what and when. A related problem today is that coding for the GPU is very difficult, given the state of the languages and tools. The other challenge is the number of programmers who can write GPU versus CPU code. According to IDC, over 10M CPU coders exist compared to 100K GPU coders. Adobe calls GPU coders “ninja” developers because it is just so difficult, even with tools like OpenCL and CUDA, given they are such low-level languages. That’s OK for markets like HPC (high performance computing) and workstations, but not for making tablet, phone, and PC applications that could use development environments such as the Android SDK or even Apple’s XCode. Net-net, there are many challenges for a typical programmer to code a GPU-accelerated app for a phone, tablet, or PC.
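
To make the serial-versus-parallel distinction concrete, here is a minimal Python sketch of SAXPY, a classic data-parallel kernel. The numpy expression stands in for the style a GPU kernel expresses across thousands of threads; real CUDA or OpenCL code adds explicit memory transfers and work-item indexing on top of this, which is a big part of the “ninja” difficulty.

    import numpy as np

    # Serial, CPU-style: one element at a time, in a strict order.
    def saxpy_serial(a, x, y):
        out = [0.0] * len(x)
        for i in range(len(x)):
            out[i] = a * x[i] + y[i]
        return out

    # Data-parallel style: one expression over the whole array. Every
    # element's result is independent, so the work can fan out in parallel.
    def saxpy_parallel(a, x, y):
        return a * x + y

    x = np.arange(10_000, dtype=np.float32)
    y = np.ones_like(x)
    assert np.allclose(saxpy_parallel(2.0, x, y), saxpy_serial(2.0, x, y))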

End User Problem/Opportunity

Without an end user or business problem to solve, any foundation is dead in the water. Today, NVIDIA is using CUDA (C, C++, C#), OpenCL, and OpenACC, and AMD supports OpenCL, to solve some of the most complex industrial workloads in existence. As an example, NVIDIA simulated at their GTC developer conference what the galaxy would look like 3.8B years in the future. Intel is using MIC, or Many Integrated Cores, to tackle these huge tasks. These technologies are for high-performance computing, not for phones, tablets, or PCs. The HSA Foundation is focused on solving the next generation of problems and uncovering opportunities in areas like the natural user interface with multi-modal voice, touch, and gesture inputs; biometric recognition for multi-modal security; augmented reality; and managing all of the visual content at work and at home. ARM also talked on stage and in the Q&A about the power savings they believe they could attain from a shared-memory, parallel compute architecture, which surprised me. Considering ARM powers almost 100% of today’s smartphones and tablets around the world, I want to highlight what they said. Programming these kinds of apps at low power, and enabling hundreds of thousands of programmers to create them, ultimately requires very simple tools that don’t exist today.

The HSA Foundation Solution

The HSA Foundation’s goal, as stated above, is to “make it easy to program for parallel computing.” What does this mean? The HSA Foundation will agree on hardware and software standards. That’s unique, in that most initiatives are focused on just the hardware or just the software. The goal of the foundation is to literally bend the hardware to fit the software. On the hardware side, this first means agreement on the architectural definition of the shared memory architecture between CPU and GPU. This is required for the CPU and GPU to be at the same level and not be restricted by buses like PCI Express. The second version of that memory specification can be found here. The software architecture spec and the programmer reference manual are still in the working group. Ultimately, simple development environments like the Google Android SDK, Apple’s XCode, and Microsoft’s Visual Studio would need to holistically support this to get the support of the more mainstream, non-ninja programmer. This will be a multi-year effort and will need to be measured on a quarterly basis to really see the progress the foundation is making.
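
As a toy illustration of why a shared memory architecture matters, the sketch below fakes the discrete model’s copy-in/copy-out steps with in-RAM copies. A real transfer over PCI Express is far slower than a RAM copy, so the real-world tax is worse than what this prints.

    import time
    import numpy as np

    data = np.random.rand(50_000_000).astype(np.float32)  # ~200MB working set

    # Discrete model (toy): copy to a separate "device" buffer, compute,
    # then copy the result back, as a PCI Express round trip would require.
    t0 = time.perf_counter()
    device_buf = data.copy()      # stand-in for host-to-device transfer
    device_buf *= 2.0             # the actual compute
    result = device_buf.copy()    # stand-in for device-to-host transfer
    t1 = time.perf_counter()

    # Shared-memory model (toy): CPU and GPU work on the same buffer.
    t2 = time.perf_counter()
    data *= 2.0
    t3 = time.perf_counter()

    print(f"copy-in/copy-out: {t1 - t0:.3f}s, zero-copy: {t3 - t2:.3f}s")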

Foundations are Tricky

The HSA Foundation will encounter the issues every other foundation encounters at some point in its life. First is the challenge of founding members changing their minds or becoming goal-misaligned. This happens a lot: someone who joins stops buying into the premise of the group or staunchly believes it isn’t valuable anymore. Typically that member stops contributing, but they could even become a drag on the initiative and need to be voted off. The good news is that today, AMD, ARM, TI, MediaTek, and Imagination all have a need, as they all need to accelerate parallel processing. The founding members need to make this work for their future businesses to be as successful as they would like. The second challenge is that the foundation is missing key players in GPUs. NVIDIA is the discrete PC GPU and GPU-compute market share leader, Intel is the PC integrated GPU market share leader, and Qualcomm is the smartphone GPU market share leader. How far can the HSA Foundation get without them? This will ultimately be up to guys like Microsoft, Google, and Apple with their development environments. One wild card here is SOC companies with standard ARM licenses. To get agreement on a shared memory architecture, the CPU portion of an ARM SOC would need to be HSA-compliant too, which means that every product derived from a standard ARM license would be HSA-compliant. A company with an ARM architecture license, as Qualcomm has, wouldn’t need to be HSA-compliant. The third challenge is speed. Committees are guaranteed to be slower than a partnership between two companies, and obviously slower than one company. I will be looking for quarterly updates on specifications, standards, and tools.

For Show or for Real?

The HSA Foundation is definitely for real and formed to make a real difference. The hardware is planned to be literally bent to fit the software, and that’s unique. The founding members have a business and technical need, solving the problem means solving huge end user and business problems so there is demand, and the problem will be difficult to solve without many companies agreeing on an approach. I believe that over time, the foundation will need to get partial or full support from Intel, NVIDIA, and/or Qualcomm to make this initiative as successful as it will need to be to accelerate the benefits of parallel processing on the GPU.

Ultrabooks: Progress a Year Later?

One year ago almost to the day, at Computex 2011, Intel introduced Ultrabooks to the world. The first generation of Ultrabooks was nice, but they were also homogeneous (with the exception of the Dell XPS 13) and very expensive, limiting access for many demographics. So are things any different a year later? After seeing what Intel’s Ultrabook partners have launched so far at Computex, it’s only fair to characterize the progress as impressive. Design wins and choice don’t guarantee sales, but you cannot have sales without them. OEMs at Computex launched some serious innovation with new form factors and usage models while lowering price points, which I think deserves a deeper look. It also gives us a good indication of how far Intel has come and where this is all going.

Display Size

First generation Ultrabooks came with 13” displays. Display size is one of the most important purchase criteria, and the fact is, consumers like varying display sizes to match their primary use case. Generally speaking, those who want to travel prefer smaller displays, and those who want to replace desktops or do more content creation prefer larger screens. Here are a few examples:

  • HP ENVY 6t– 15.6” display, .78” thick and up to 9 hours battery life
  • Gigabyte U2440– 14″ display, 22mm thick
  • Dell XPS 13– 13″ display in a 12″ chassis, 18mm, up to 9 hours battery life
  • Sony Vaio T– 11.6” display, .71” thick, and up to 7.5 hours battery life

Display Resolution

Most Gen 1 Ultrabooks came with a 13″ display at 1,366 x 768 resolution. That is still the case today on average, but if consumers want more, they can get full HD resolutions. As a reference, the highest supported MacBook Air resolution is 1,440 x 900, about 37% fewer pixels than full HD (1,296,000 versus 2,073,600).

Weight

Even though the first Ultrabooks were light, OEMs are now challenging each other on weight. This is a good thing, because when Windows 8 and convertibles arrive, these systems need to be a lot lighter to operate as decent tablets. Here are the lightest, or at least the claimed lightest:

  • Gigabyte X11– at 34.3 oz. with an 11.6″ display, claimed to be the lightest (10% lighter than the 11.6″ MacBook Air)
  • NEC LaVie Z- 35.2 oz., with a 13.3″ display (26% lighter than 13.3″ MacBook Air)

New Materials

Machined aluminum is nice, but it is also exactly what Apple uses. That’s not necessarily a bad thing because it looks upscale, but in some ways it is unoriginal, and it is also expensive and heavy. OEMs have gone out of their way to differentiate with chassis materials. Here are just a few that I think deserve highlighting:

Touch Interfaces and Convertibles

Touch didn’t make a lot of sense with Windows 7, but Windows 8 changes all this. As an avid tablet-plus-dock user, I am constantly trying to touch my Ultrabook display. As long as the display doesn’t push back as it is being touched, I think it will work well. Even better will be convertible designs.

  • Acer Aspire S7– the 11.6″ or 13.1″ display folds back 180 degrees
  • ASUS Transformer Book– the 11.6″, 13″, or 14″ display detaches from the keyboard to use as a tablet
  • Lenovo IdeaPad YOGA– the 13.1″ 1,600 x 900 display bends back 360 degrees to use as a tablet

While manufacturers aren’t indicating these are officially Ultrabooks, they are very exciting:

  • ASUS TAICHI– dual-sided HD display in 11.6″ and 13″ form factors.  Open the clamshell and it’s a notebook.  Close it and it’s a tablet.

Discrete Graphics

Original Ultrabooks came with Intel HD 3000 graphics. Second generation Ultrabooks come with HD 4000 graphics which, while improved, just won’t cut it for everyone. Discrete graphics attach rates in Western Europe, China, and Russia are all above 50%, so the need is there, and the new Ultrabooks deliver. With NVIDIA launching the GeForce GTX 680M yesterday, things will be getting even more interesting. Discrete graphics are not available on the Apple MacBook Air. Here’s what consumers can order today:

Target Audience

First generation Ultrabooks were 100% consumer-focused. Now that we are one year into the Ultrabook initiative, commercial and even enterprise-grade solutions are available:

  • Dell XPS 13– supports TPM security chip, custom factory integration, and worldwide Dell ProSupport
  • HP EliteBook Folio– supports TPM security chip, smart card reader, and worldwide commercial services
  • Lenovo ThinkPad X1 Carbon–  supports 3G connectivity, worldwide service and support

Price Points

Premium price points aren’t necessarily a bad thing; just look at Apple. But if you don’t have Apple’s brand, then as an OEM, in the short term you will need to live with lower volumes. Generally, the opening price point for first generation Ultrabooks was $999; for the second generation, it will be much lower. Intel and their partners invested in alternative chassis designs, hybrid hard drives, display technology, and the inclusion of Core i3 processors to lower opening price points. One important point to keep in mind is that these same, lower-priced Ultrabooks still provide the minimum acceptable bar of experience on vectors like battery life, responsiveness, fast resume, and built-in security features. Here are just a few:

  • Lenovo U310 – starting at $749 with 13.3″ display,  500GB hybrid hard drive, 3rd Gen Intel Core i3 Processors, unknown case material (42% less than MacBook Air)
  • Sony VAIO T– starting at $769 with 13.3″ display,  3rd Gen Intel Core i5 processor, 320GB hybrid hard drive, magnesium/aluminum case, (41% less than MacBook Air)
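
For the curious, the parenthetical discounts above work out against a $1,299 13″ MacBook Air reference price, which is my assumption about the comparison being made, not a figure stated here:

```python
# Discount math behind the parentheses above, assuming a $1,299
# 13-inch MacBook Air reference price (my assumption).
mba_price = 1299
for model, price in [("Lenovo U310", 749), ("Sony VAIO T", 769)]:
    discount = 1 - price / mba_price
    print(f"{model}: {discount:.0%} less than the MacBook Air")
# Prints 42% and 41%, matching the figures above.
```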

Wrapping Up

So have Intel and its partners come a long way in a year?  Absolutely. In one year everything has changed: design diversity and differentiation, user interfaces, improved usage models, and price points.  These are all good future indicators for solid sales, because in the end, this is exactly what consumers really want. To be clear, design wins don’t guarantee sales, but you cannot have sales without them.  The traditional notebook market will continue to grow, but I believe many consumers will choose to pay a bit more for an Ultrabook, as they are getting a better experience for a $100-200 increased investment. In many regions and demographics, that isn’t a lot to ask; in others, it is.  For those who need a new notebook, want a tablet, but cannot afford both, I believe many more will choose an Ultrabook than ever before.  As touch and convertibles become more pervasive and more affordable given Intel’s huge touch investments, even more consumers who want it all will choose a touch Ultrabook.  This will be an interesting 12 months, that’s for sure.


AMD’s “Trinity” APU Delivers Impressive Multimedia

AMD officially launched its “Trinity” line of second-generation AMD A-Series APUs for notebooks two weeks ago; systems will be hitting store shelves in a few weeks, and desktops are expected later this summer. Reviews are showing that AMD significantly increased performance per watt over its predecessor, “Llano,” and as a result, Trinity is competitive with Intel on battery life as well. One set of hardware and software features, which AMD collectively calls the AMD HD Media Accelerator, delivers a visibly enhanced and faster multimedia experience that I think deserves a closer look, as mainstream and techie users alike can benefit significantly from it. It is also a good indicator that chip makers are focusing even more on the end user experience and ways to improve it.

Smooth out shaky videos

All of us reading this column have taken a shaky video on our smartphone, palmcorder or camcorder. Whether it’s soccer games, track meets, or the first time our babies run, we all take shaky video. And all of us have watched a shaky video, too, and people react differently. Some have no issues, but many do and won’t even watch the video. My wife actually feels sick watching any video like this, and I’m sure she isn’t alone.

To smooth out these videos and remove the “shakes”, AMD developed AMD Steady Video technology. When run on a Trinity-APU system, AMD Steady Video significantly reduces the amount of jitter the user experiences, and it does so automatically, without user intervention, whenever video is displayed in a supported browser or media player. On the browser side, Chrome, Internet Explorer, and Firefox are supported; on the player side, Windows Media Player, VLC, and the ArcSoft and Cyberlink media players are, too. This covers an incredibly wide swath of global users.

Improve video quality on any device

One of the technologies more sophisticated users can appreciate is the AMD Accelerated Video Converter. This technology significantly accelerates the recoding and transcoding of video. Recoding means changing the format and size of a video. This helps when users capture video in a very high quality format that is very dense, but want to put the video on their phones or tablets, or even upload it to YouTube. Recoding the video makes it smaller or places it into a different format where it can be better viewed, shared or edited. Transcoding means recoding and playing back the output in real time rather than storing and sharing it. This is the area where the AMD Accelerated Video Converter significantly improves the experience, because it cleans up the video at the same time it is recoding it.
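
To make the recoding half of that concrete, here is a minimal sketch using the open-source ffmpeg tool. This is my illustration of the concept only, with hypothetical file names; AMD’s converter accelerates this kind of work in hardware rather than through ffmpeg:

```python
import subprocess

# Recode a dense 1080p camcorder clip into a smaller, 720p H.264 MP4
# that is friendlier to phones, tablets, and YouTube uploads.
subprocess.run([
    "ffmpeg",
    "-i", "soccer_game_1080p.mov",  # hypothetical high-quality source
    "-vf", "scale=1280:720",        # resize the video to 720p
    "-c:v", "libx264",              # re-encode with the H.264 codec
    "soccer_game_720p.mp4",         # smaller, widely playable output
], check=True)

# Transcoding is the same conversion performed in real time, with the
# output streamed to the playback device instead of saved to disk.
```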

Transcoding comes in handy when you have devices spread all over the house with video files on them and you want to watch those videos on a myriad of devices, from TVs to PCs to game consoles to tablets to smartphones. Transcoding optimizes the video specifically for the playback device, as every device prefers a different kind of video. A “Trinity”-based notebook using the Accelerated Video Converter with a program like ArcSoft Link+ acts as a “media server,” transcoding the video and sending it to the playback device. The source file doesn’t have to be on the “Trinity” notebook; it can be on any device in the home, as long as that device supports the latest DLNA protocol and is on the same LAN. DLNA isn’t niche anymore, as it is supported on virtually every major new consumer electronics device and will even be the basis for future set-top boxes that stream protected content around the home.

As a final benefit, AMD Perfect Picture technology is a video post-processing capability that works in concert with AMD Steady Video so that all video passing through the Trinity-based notebook is cleaned up to look sharper and have richer, more accurate colors. As a result, users can play back better-looking video on their companion devices regardless of where they are in the home. This usage model may be for more sophisticated users, but through features like Apple’s AirPlay and iTunes Share, consumers are getting much more comfortable playing back content from remote devices.

Speed up file downloads

Today on a Windows-based PC, there isn’t a QoS (quality of service) arbiter to determine which application gets bandwidth. Users can be downloading a gigantic file like a movie, game, or app, and the rest of the system becomes useless for anything else, like web browsing or video conferencing. With data density increasing at a faster pace than bandwidth, this will become a larger issue in the future. This is where AMD Quick Stream assists.

AMD Quick Stream adds the QoS feature that Windows lacks. The concept is simple: it provides equal access to each download. If four apps are downloading content, each app gets 25% of the available bandwidth. With three apps, each would get 33%. I found this feature useful while downloading a game in the background, looking up stuff on the web, and doing a Skype call. The system just felt more responsive.
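
The equal-access arithmetic is as simple as it sounds; here is a toy sketch of the policy (my illustration of the concept, not AMD’s actual implementation):

```python
def equal_share(total_mbps, downloads):
    """Give every active download the same slice of bandwidth."""
    if not downloads:
        return {}
    slice_mbps = total_mbps / len(downloads)
    return {name: slice_mbps for name in downloads}

# On a 100 Mbps link: four downloads get 25 Mbps (25%) each;
# drop to three and each gets ~33 Mbps (33%).
print(equal_share(100, ["game", "browser", "skype", "backup"]))
print(equal_share(100, ["game", "browser", "skype"]))
```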

Wrapping up

By adding the AMD HD Media Accelerator to all Trinity APUs, AMD is in many ways bucking the tech hardware industry rut of feeds and speeds that don’t demonstrably improve the end user experience. This is neither a minor investment nor a desire to load systems with bloat-ware; these are expensive technologies designed to address users’ expressed areas of pain while fitting into very small resource footprints. The multimedia features are also comprehensively released and supported across multiple web browsers, media players, and home data protocols, which ensures widespread deployment and regular updates. What AMD has done with AMD Steady Video and AMD Quick Stream is undoubtedly positive for end users, and a positive message for the ecosystem, too.

Why Apple is Wrong About Convertibles

On Apple’s last earnings call, CEO Tim Cook responded to a question on Windows 8 convertibles by saying, “You can converge a toaster and a refrigerator, but those aren’t going to be pleasing to the user.” At first glance, this makes total sense, and from the company that brought us the iPod, iPhone and iPad, this has wisdom. But as we peel back the onion and dig deeper, I do not believe Apple is correct in its assessment. As I wrote here, I have long believed that convertibles would be popular in 2013, and I still believe convertibles will be a thriving future market, albeit not as large as notebooks or tablets.

Mashups between two devices are rarely successful, particularly in mature markets like PCs. I have researched, planned and delivered hundreds of products in my career, and very rarely have I seen two purpose-built products combined to create something real good. The problem is that by combining two products, the result often becomes good for no one. The primary reason is that you have to make tradeoffs to build the combined product. By combining most products, you sub-optimize each separate product and what it uniquely delivers to its target market. Convertibles run that risk, but if designed appropriately, as I outlined previously, this won’t happen.

Cars give us a few examples to work from. As the car industry matures, we see more and more specialization. There are now sedans, coupes, mini-vans, SUVs, mini-sedans, sports cars, trucks, truck-hybrids, etc. Specialization is the sure sign of a mature market: consumers’ tastes have gotten to a point where they know exactly what they want, and the industry can profitably support the proliferation of models. Industry support is very important, as the industry must be able to afford all this proliferation. The auto industry manages this through shared common parts like chassis, engines, and electronics.

What does this have to do with convertibles? Ask yourself this question: If my SUV could perform like a Porsche Cayman, would I like it? Of course you would; it is called a Porsche Cayenne. The problem is, you could pay up to $100,000 for it. Want your sedan to drive like a Cayman? Just get a Porsche Panamera. The problem, again, is that it costs around $95,000. The expense isn’t just about the brand. Porsche invested real R&D and provides the expensive technology to make these “convertibles” perform well.

There are similarities and differences between the Porsche Panamera and Windows 8 convertibles:

  • Price: Buyers will only need to spend $100-200 more than a tablet to get a convertible. Many will make that choice to have the best of both worlds. The average U.S. car is around $33,000 while the Panamera is around $100,000, three times the average. One argument Apple could have is that if future, full-featured tablets become $299, the added price could be too much to pay for the added convertible functionality.
  • Low “Sacrifice Differential”: This is Apple’s strongest point, as in many mashups, combining two products results in something that isn’t good for any usage model. “Fixed” designs will need to be less than 13mm thick, and “flexible” designs (i.e., the Transformer Prime) need to be less than 18mm thick with the keyboard. Otherwise, a convertible thicker than 13mm makes a poor tablet, and one thicker than 18mm is bulkier than an Ultrabook.
  • Transformation capabilities: Convertible form factors like the Transformer Prime can convert into a “notebook” with an add-on peripheral, but cars cannot. I wish there were a 30-second add-on kit that could turn my Yukon into a Porsche 911, but there isn’t. Related to PC convertibles, if you have ever used the Asus Transformer Prime, you know what I am talking about. It is one of the thinnest tablets, and when paired with its keyboard, it is only 19mm thick. One of the great features of the Prime is that the keyboard provides an extra 40-50% battery life boost that adds real utility. Windows 8 for the first time supports both the lean-forward and lean-back usage models: as a tablet, the user works in Metro; in the “notebook” clamshell form factor, the user can use Metro or drop into the Windows 8 Desktop with the trackpad and keyboard. This has never existed before, and Apple doesn’t have this capability in iOS or OS X.

I do believe that convertibles will ultimately have space in the market, as for some users and usage models they eliminate the redundancy of carrying two devices. OEMs must be particularly careful about how thick they make them. The original iPad was around 10mm, and even that pushed some boundaries, particularly for reading books. The thicker the designs, the less desirable they become, as they will not make very good tablets. Flexible designs like today’s Asus Transformer Prime, when combined with Windows 8, could be a lethal market combination: a thin tablet plus a keyboard when you want it. Gauging by how much shelf space is devoted to iPad keyboards, I must conclude that consumers are snatching these up in high volume.

I believe Apple is wrong about convertibles, but on the positive side, Apple’s warning gave the entire industry pause for thought. Interestingly, it had the opposite effect of what I believe Apple intended, which was to freeze the market. Instead, it signaled that Apple was not going to build one, which motivated more OEMs to build theirs, given they wouldn’t have to worry about competing head-on with Apple. While the volumes for convertibles won’t be as large as tablets or notebooks, I do believe they have a place in the market in the mid-term.

NVIDIA GeForce Grid: Killing off Game Consoles?

Yesterday, NVIDIA launched VGX and the GeForce Grid, which, among many things, could render future game consoles obsolete.  This may sound very far-fetched right now, but as I dig into the details of what the GeForce Grid can do and map that against future consumer needs, unless future consoles can demonstrably deliver something unique and different, they will just be an unnecessary expense and a hassle to the end consumer.

Problems with Cloud Gaming Today

Services exist today for cloud gaming, like OnLive and Gaikai.  They have received a lot of press, but it’s uncertain whether their business models and experiences will still exist years from now if they stay with their current approaches and implementations.

Scalability is one issue.  Services need to directly match one cloud game session with one graphics card, so if you have 1,000 gamers, you need 1,000 graphics cards.  You can just imagine the challenges in scaling that experience out to millions of users.  You would need millions of graphics cards, which in a data center environment doesn’t make a lot of sense logistically or financially.

Latency is another issue.  Cloud game services need to maintain servers within a few hundred miles of gamers to keep game-play latency acceptable.  Latency is the lag time between when a user does something and when they get a response. Imagine if there were a one second delay between the time you pull the trigger in Battlefield 3 and the time something happens on screen.  This would render the cloud game absolutely unplayable. Latency in social media apps like Facebook is acceptable, but not in games. Having to provide “edge servers” close to end users, as the industry does today, is completely unproductive: you cannot leverage these same servers during off-times, and it’s difficult to even leverage servers across different time zones.  Therefore, servers sit around idle with nothing to do. This places another immense financial burden on the cloud game provider.  NVIDIA and its partners are attempting to solve these problems.
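
Some rough numbers illustrate why distance is so constraining. Assume light in fiber covers roughly 200 km per millisecond and that fast-action games start to feel laggy somewhere beyond about 100 ms input-to-display; both are my illustrative assumptions, not NVIDIA figures:

```python
# Round-trip propagation delay in fiber alone; real latency adds
# encoding, decoding, routing hops, and the user's access network.
KM_PER_MS = 200          # rough speed of light in optical fiber
KM_PER_MILE = 1.609

for miles in (100, 500, 2000):
    round_trip_ms = 2 * miles * KM_PER_MILE / KM_PER_MS
    print(f"Server {miles} miles away: ~{round_trip_ms:.0f} ms round trip")
```

Propagation alone is small; it is the stacked encode, decode, and network-hop overheads on top of it that force today’s services to keep servers close.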

Nvidia VGX and the GeForce Grid

NVIDIA, with VGX and the GeForce Grid, is attempting to solve the scalability and latency problems associated with today’s cloud gaming services like Gaikai and OnLive.  NVIDIA VGX is the set of technologies addressing current virtual display issues, and the GeForce Grid is the specific implementation attacking issues in cloud gaming.  NVIDIA is addressing the problems with two very distinct but related technologies: GPU virtualization and low latency remote display.

Virtualization of the GPU enables more than one user to share the resources of a graphics card.  Therefore, the one-to-one ratio between user gaming sessions and graphics cards goes away.  With NVIDIA VGX, multiple users can share a single, monster-sized graphics card.  This provides much better scalability for the cloud game data center and correspondingly reduces costs and increases flexibility.
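
The data center math explains the appeal. A quick sketch, where the sessions-per-card figure is purely my hypothetical illustration:

```python
# Graphics cards required to serve a gamer population, before and after
# GPU virtualization (the VGX sessions-per-card figure is hypothetical).
concurrent_gamers = 1_000_000
cards_today = concurrent_gamers // 1     # one session per card, as described above
cards_with_vgx = concurrent_gamers // 8  # hypothetical: 8 shared sessions per card

print(f"dedicated cards:   {cards_today:,}")     # 1,000,000
print(f"virtualized cards: {cards_with_vgx:,}")  # 125,000
```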

Low latency remote display enables a significant improvement in the speed at which the remote image is sent to the end client device.  In this cloud gaming scenario, the gaming frames are actually converted into an H.264 movie and sent to the user.  NVIDIA has improved this by eliminating many steps in the process: the frame of the game no longer needs to touch the CPU or main memory, and is instead encoded directly on the NVIDIA VGX card and sent straight over PCI Express to the network card.  Bypassing those components speeds up the process immensely.  This delivers a few benefits.  First, all things equal, it can deliver a much faster experience than gamers have ever had from the cloud; the experience just feels more like it is happening locally.  Combined with GPU virtualization, the reduced latency also enables cloud gaming data centers to be located farther away from users, which increases data center utilization and efficiency.  It also enables entire geographies to be served that could never be served before, as “edge servers” can be consolidated.
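
In schematic form, the shortened frame path looks something like this (my summary of NVIDIA’s description, not actual driver code):

```python
# The two frame paths, step by step, per the description above.
legacy_path = [
    "GPU renders frame",
    "frame copied to main memory via the CPU",
    "software H.264 encode",
    "hand-off to the OS network stack",
    "NIC sends packets",
]
grid_path = [
    "GPU renders frame",
    "on-card H.264 encode (frame never leaves the GPU)",
    "DMA over PCI Express straight to the NIC",
]
print(f"legacy: {len(legacy_path)} steps, grid: {len(grid_path)} steps")
```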

Wither Future Game Consoles?

If NVIDIA and its partners can execute on the technology and the experience, it would essentially enable any device that can play back YouTube video well today to be a virtual game device. Gamers could play any game, any time, immediately.  What kinds of devices do that today?  They are all around us: smartphones, Smart TVs, and even tablets.  There’s no loading games off a disc and no downloading 500MB onto a PC; it’s just pick the game and play.  Once the gamer is done playing on the TV, they can just take their tablet and pick up in the bedroom where they left off.

This kind of usage model is quite common when you think about it.  Many consumers consume books, movies and even music in this same way, so why not games?  For many consumers, convenience trumps quality, and that’s one of the issues I can see with future consoles.  There is no doubt that console visual detail and user interfaces will be much more sophisticated than cloud gaming’s. But look at how well the iPod did with its “inferior” music quality: consumers chose convenience over quality.  Look at Netflix on a phone or tablet.  Consumers can get much higher quality from the local cable service, but a growing number of consumers choose convenience over quality.

Device makers and service providers who see no monetization from games today will adopt this approach very aggressively.  TV makers, for instance, see no revenue from any game played on their devices.  Gaikai, as an example, is cutting deals with TV manufacturers like LG to build this service into every future Smart TV.  Telcos and cable companies are also very motivated to tap into the huge gaming revenue stream.

I believe that consoles will adopt cloud gaming capabilities in addition to physical media, or they will be viewed as lacking the features gamers want.  I also believe that cloud gaming will seriously cannibalize future game consoles.  Many who would have purchased a new game console, had cloud gaming with NVIDIA VGX and GeForce Grid not existed, will not buy one.  With that premise, it raises the question of whether game consoles have a bright future at all.  If game console makers don’t do something aggressive, their future is looking dim.

If you would like a deeper dive on NVIDIA VGX and the GeForce Grid, you can download my whitepaper here.

HTC One X International: Trading in My iPhone 4S?


HTC announced the HTC One family in Barcelona at Mobile World Congress 2012. The HTC One X was one of the bigger standouts, as it represented the best of breed of Android phones on the market. Some even said it would threaten the iPhone. Does it live up to the hype? I had the chance to use the HTC One X International version for a few days and wanted to share my first-hand experiences with you, which were very positive.

Background

I have been evaluating Android phones since well before the first T-Mobile G1 launched back in 2008. Like many, I was a BlackBerry addict for so long, until the Nexus One arrived; then I switched to Android wholesale… for a while. The iPhone 4 finally pulled me from the Android world with its consistent performance, robust app store, quality photographs, and perfected HDTV AirPlay mirroring functionality. Could the HTC One X International pull me back over to Android with its much more sophisticated ICS Android 4 operating system and higher quality app and media store? Maybe.

What I Enjoyed About the HTC One X International

Facial Login
I have been evaluating face login for about a decade, and this is one of the first implementations I have used that worked well. It’s missing a few features, like auto-adjusting the display to provide light, but it worked well in most environments. If it did misread my face, it backed off to a secondary security method like a typed password or drawing a pattern. I have not tested for false positives using photographs or videos, either.

Multitasking
Quite simply, I have never used a phone this fast or done so many tasks at the same time as I did with the One X International; installing apps, updating apps, syncing SugarSync data, and browsing in Chrome Beta at the same time were all very fast. As hard as I tried to slow the system to a crawl using real apps, not benchmarks, I failed. This is a first for me, as I had previously tried nearly every major flavor of Android phone. I attribute most of the multitasking prowess to the Nvidia Tegra 3 processor with its 4-PLUS-1 quad-core architecture. When doing heavy multitasking, all four cores were blaring. When reading email, it uses only the one battery-saver core.

Display
The One X sports a monster 4.7″ HD display at 1,280×720 resolution. In comparison to my 4S, this provides roughly 60% more viewable image area at a very comparable PPI (pixels per inch). The contrast ratio was one of the best I had ever experienced, too. The gorgeous display made web surfing, viewing photos, watching movies, and playing games a very enjoyable experience.
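
The PPI claim checks out with simple geometry (my arithmetic from the published specs):

```python
from math import hypot

def ppi(width_px, height_px, diagonal_inches):
    """Pixels per inch: diagonal pixel count over diagonal size."""
    return hypot(width_px, height_px) / diagonal_inches

print(f"HTC One X: {ppi(1280, 720, 4.7):.0f} PPI")  # ~312 PPI
print(f"iPhone 4S: {ppi(960, 640, 3.5):.0f} PPI")   # ~330 PPI
```

The two land within roughly 6% of each other, which is why the densities look comparable in person.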

Games
This is where the One X showed one of its key strengths. I like eye candy, and my preferred games are FPS (first-person shooters). I tried many of the titles in the Nvidia TegraZone to stretch the Tegra 3 as far as it would go. ShadowGun THD looked great not only on the integrated display, but also when displayed on a 60″ HDTV screen. I have an XBOX 360, and while I wouldn’t say it’s the same quality graphical experience as the latest Halo, it is close. To have this capability built into a phone, for “free”, is exceptional. I can see how tomorrow’s phones based on Tegra graphics will give future consoles a run for their money.

Camera
The camera experience overall was positive. I appreciated the fast, multi-picture taking capabilities and taking pictures in low light. I thought my iPhone 4S was fast, but the HTC One X was even faster. I also appreciated taking pictures while I was taking videos, and I can imagine making some very interesting photo-video mashups. Unlike the iPhone, I’m not limited to sharing my pictures from Photos just to Twitter. Right from Gallery, I can share to Facebook, Dropbox, SkyDrive, Flickr, Instagram, Picasa, Skype, and yes, Twitter.

Battery Life
I was pleasantly surprised with the battery life as I didn’t notice many demonstrable differences between the One X and my 4S. One area was web browsing where I was using Chrome beta on the One X, which delivered a fuller web experience than Safari, but felt like it was using more battery. Most impressive was that I could get decent battery life with a four core processor, great mobile graphics, on a display with 60% more area. I have to admit, when I first heard about Tegra 3 on phones, my head went directly to concerns on battery life. Nvidia pulled off something real big by enabling good battery life while having four processor cores and Nvidia graphics.

What I Would Like to See Changed About the HTC One X

Size
This is a personal preference, but I like to control the phone with one thumb, without two hands. The One X requires me to use two hands, which rules it out for quick stop-light usage in the car. Techpinions columnist Ben Bajarin goes into depth on this idea here.

Charging
As I said above, I like to multitask with my phone, using it more like a mini-tablet than a simple phone. As I neared the end of a battery charge, I would plug in the phone so I could keep playing or working. Often, I would get a message warning me that I was draining power faster than I could charge the unit. This will hopefully be addressed in a software update, as it is inconvenient.

Packaging
A beautiful phone deserves beautiful packaging. If you like eggs, great. The One X ships in what looks like a giant, single egg carton. The phone is beautiful and deserves to sit right next to the iPhone 4S, but the packaging should be hidden from human eyes.

Photo Skin Quality
All my shots of people outside in bright sunlight had a red or pink tinge to their skin. Either I had a defective unit or some calibration is required in the driver. I scoured the web and found a few instances of this happening to others. I cannot imagine this not getting fixed.

Trading in my iPhone 4S?

As I said previously, I prefer smaller phones I can control with one thumb. But for those who want the benefits of a larger-display phone like the HTC One X International and who don’t have access to LTE, I can recommend this phone. The multitasking and games are better than anything I have used to date, and when combined with the awesome 4.7″ display, the One X satisfies.

Immersive Social Games Bringing Families Together

The big discussion on social games recently has centered around games like Farmville and companies like Zynga, whose recent IPO generated a lot of attention. I see a much more interesting phenomenon taking place, where new, cross-generational and immersive social games are bringing families like mine closer together, a dynamic that goes back to family baseball and Monopoly. KingsIsle Entertainment, developer of the successful Wizard101, launched Pirate101 this week. It is the company’s second cross-generational social game, and it puts an exclamation point on the growth and value these kinds of games provide to families, including mine.

My Personal Experience

My son, his five cousins, his uncles and I all play KingsIsle’s Wizard101 MMO game. Most times my son plays, he initiates a conference call; sometimes he even uses Skype. Did I mention he is ten years old, too? I really enjoy Wizard101’s ability to cater to my needs as well as my son’s. The characters and situations even harken back to the 70’s comedies I grew up on. There are typically two levels to the dialogue, one for adults and one for my son. The game is deep, and according to KingsIsle, there are about 30 hours of spoken audio and hundreds of hours of gameplay in the main quest lines alone.

My son talks to me about Wizard101 at least twice a day, about new levels, characters, spells, minions, and even new houses and furniture. He is fully invested. It’s even to the point where I pay him his weekly allowance in Wizard101’s virtual currency, called crowns. Is my and my son’s story unique and different? Fanatical, maybe, but not unique when you look at the numbers.

The Country of Wizard101

My son is not alone in his fanaticism for the game. Wizard101 has 25M registered users, which, if it were a country, would be larger than the population of Australia, and it would replace Texas as the second largest state in the US. That is a large reach. Frequency is impressive, too, with 14M monthly unique visitors to Wizard101. That’s larger than Sony Online Entertainment’s Freerealms.com, Nickelodeon’s nick.com, EA.com and popcap.com. According to KingsIsle, users have racked up 22.3B minutes of gameplay and have acquired 2B items in their quests. Independent research gives us an insight into why the game is so popular.

Trinity University Research Study

Trinity University surveyed 30,000 Wizard101 players last year and came back with some very interesting results. The study hasn’t been officially released, but I wanted to share a few things I thought were most interesting:

  • 60% of responding children play Wizard101 with other family members. One-third of those children play with their parents or grandparents.
  • 68% of responding adults play Wizard101 with other family members. Approximately two-thirds of those adults play with their children or grandchildren.

KingsIsle gets feedback all the time from its players about how families are experiencing the game together. Players tell stories about grandparents playing with their grandchildren, distant relatives playing with each other, and dads playing with their kids while on business trips. There are also stories of older gamers finally finding a sense of community they had always longed for. Gaming can be more than just having fun; it can be about the core of relationships and life.

Jedi Lessons at Hogwarts

The research-based fact that kids and adults can enjoy the same kind of entertainment together makes a lot of sense when you think about it. Look at some of the biggest entertainment phenomena of our time and it starts to gel. Cross-generational movies, music, and games are big. Look at Star Wars, Harry Potter, and most of the Pixar movies. Kids, adults and families all enjoy and watch these movies together, as the content pleases different generations. KingsIsle isn’t done with Wizard101. There’s more.

Second KingsIsle Cross-Gen Game This Week

KingsIsle announced a new cross-generational MMO game this week, called Pirate101. It’s all about pirate adventures, and while similar in some ways to Wizard101, it’s quite different, too. I got early access to Pirate101 and played with my son. He took right to the controls, and I enjoyed it even more, as the combat is more mature and, quite frankly, the visual effects are better. OK, I also enjoyed flying pirate ships around space, too.

The game is more mature, but not too mature, to keep my son in the game as he gets older. Pirate101 will be a winner in the Moorhead household and I’m sure in the marketplace as it takes the winning Wizard101 formula and adds more mature themes and gameplay.

Conclusion

Social games are huge in numbers right now, but cross-generational games that are very social could be even bigger. I have personally seen it, the reach and usage stats show it, and research tells us why this is the case. The new Pirate101 will tell all of us just how big this can actually get. Between now and then, I will continue to pay my son’s allowance in crowns!

Yahoo!: Tactics Masquerading as Strategy

Last week on Yahoo!’s earnings call, CEO Scott Thompson outlined six points the company would pursue to return itself to a proper focus. When I looked at the list, they all made sense as operational principles or even action items. The big problem is that, unfortunately, operating principles or action items aren’t a strategy, and this does not bode well for employees, stockholders, advertisers and even end users. The best strategies are set in the context of a strong mission and vision, neither of which Yahoo! has communicated to anyone.

My Personal Yahoo! History

I remember telling a colleague that I invested my entire life into Yahoo!. I had Yahoo! Mail, had all my contacts, my calendar, read all my news through My Yahoo, all my notes in Notepad, and even got weather and movie times from them. I would even start at My Yahoo! for search as My Yahoo! was the first place I started in the morning. Now, I start my day at Pulse News on a tablet or phone over coffee, listen to podcasts while taking my kids to school then then go to Twitter and Facebook for “cultivated” news. I rarely go to a Yahoo! property with the exception to check out stock prices on Yahoo Finance. I’m not alone as Yahoo users have fled to Google for search and mail, Facebook and Twitter for social media, and vertical, specialized sites like Instagram, Pinterest, Foodspotting, and Goba. It all makes sense, though, the story of a big company’s downfall.

Being the 800 lb Gorilla Is Difficult

Being the largest kid on the block is great at times. I know; I worked for AT&T and Compaq at their peaks. I worked for companies who created and took markets. It was fun, as I worked in the more entrepreneurial divisions. I saw IBM in the late 80’s and early 90’s almost left for dead, before it became as untouchable as it appears, at least today. Then there’s AOL, which keeps fading farther into the background, buying content brands that are full of conflict. The jury is out on whether Microsoft and Google have already peaked or can move to where they are perceived as the leader. Net-net, being the 800 lb gorilla isn’t easy, for three primary reasons. First, successful companies, when they get large, become slower and more bureaucratic; the inventors are replaced by people who are great at process but light on vision. Second, these companies are more concerned with playing defense and protecting their ground than with winning in new markets. Third, these companies face the innovator’s dilemma, where they incrementally improve their services as opposed to investing in disruptive exploration. It’s hard being the 800 lb gorilla. So how does Yahoo! intend to deal with this?

Yahoo!’s Six Points of “Strategy”

On the Q1 2012 earnings call, Yahoo!’s CEO outlined six “essential elements of [the] plan.” This was after layoffs and after a reorganization. Most companies let strategy dictate organization, but I don’t believe that’s the case here. Here are the six strategy elements, verbatim:

  1. consolidating technology platforms and shutting down or transitioning roughly 50 properties that don’t contribute meaningfully to engagement or revenue
  2. defining our core media, connections and commerce businesses, including News, Finance, Sports, Entertainment, Mail and a handful of others: those properties that generate the majority of our engagement and revenue.
  3. moving engineers into our commerce businesses to put them closer to our users and dedicating some of our best and brightest Yahoo!s to meaningful innovation in those core businesses.
  4. accelerating the deployment of the platforms and technologies we’ve built to make each of our properties more scalable, nimble and flexible, and therefore, less costly and time consuming to run.
  5. making better use of Yahoo!’s vast data to personalize user experiences and dramatically improve advertiser ROI.
  6. refocusing our R&D on Owned and Operated properties and stopping development of a number of initiatives, including platforms for outside publishers and theoretical science that were outside of our core.

These are great tactics and action items, but they don’t provide any insight into what matters first and foremost.

Where Does Yahoo! Intend to Win?

The tactics above are great in the context of a solid mission, vision and objectives, but Yahoo! says nothing about where it wants to win. You see, getting every Yahoo! employee working in a single direction is the right thing to do, but what if it’s the wrong direction? That would be catastrophic, and based at least on what has been communicated, this is exactly where Yahoo! is headed. The first question is, “Where does Yahoo! want to win in a way that is uniquely valuable to consumers and to advertisers?” That piece is a mystery for Yahoo!. Yahoo! needs to lean into something, and it has a lot of choices among segments that are still growing fast:

  • local
  • deals
  • mobile
  • social
  • photos
  • living room
  • specialized verticals

I’m not advocating for any one of these at this moment, but Yahoo! needs to choose something, anything, to focus the remaining 12,000 employees on. I won’t be so presumptuous as to say Yahoo! doesn’t have a strategy floating around on the executive staff’s desks, but if it does, it certainly isn’t being communicated to the stakeholders who need it.

Yahoo! Next Steps

Yahoo! needs to regroup after the last few weeks and, in the next few months, decide where it wants to win, communicate this broadly, then create a supporting strategy and organize to deliver on it. The last month has been nothing but triage, and if Yahoo! needs to quickly reorganize again to support a real strategy, most of the last few weeks will have been a waste of the employees’ time. Yahoo! has two paths it can take as a former 800 lb gorilla: the Apple/IBM way or the Excite/DEC way. I’d like to see Yahoo! make a comeback for more than the nostalgia; I’d like to see a Yahoo! comeback that inspires everyone in the industry that comebacks can happen, and that employees and key leaders can make them happen. That’s good for everyone. Who doesn’t love a comeback?