Apple’s Free OS Upgrades and iWork Could Leave a Mark on Microsoft

Apple’s announcement Tuesday brought many innovations across tablets, notebooks, and workstations. Apple introduced the new iPad Air, updated the iPad mini, redesigned the MacBook Pro, and provided more information on the Mac Pro.  I attended Apple’s event, and one announcement that didn’t get much attention until Microsoft responded on its blog is that much of Apple’s key tablet and personal computer software is now “free”.  Over the long term, I believe this could have an impact not only on Microsoft, but also on its OEM partners.  Let me start with what Apple announced.

Yesterday, Apple announced that with the purchase of every new iPhone, iPad, and Mac, OS upgrades, iLife, and iWork will now be “free”, or downloadable and usable at no charge.  Think about that for a second…. Free, high-quality operating system, lifestyle, and productivity software across phone, tablet, notebook, and workstation.  Consider that it costs $120 to upgrade from Windows 7 or Vista to Windows 8.1, and that an MS Office 365 license costs $99 per year, or roughly $300 over a three-year period. I believe this will make a difference to desktop software in the long term.

From a tactical point of view, this reduces the Apple premium price for the premium experience. Let’s consider the new 13” MacBook Pro. What was once $1,299 could now effectively be $879 if you factor in three years of MS Office and one major MS operating system upgrade.  This is a TCO comparison that may be more appropriate for businesses than consumers, but it does capture the potential full costs.  I don’t believe consumers or businesses will immediately factor this into their value calculations, but over time, I think they could. Now let’s look at this strategically.
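The arithmetic above can be sketched in a few lines. This is an illustrative calculation using the article’s own round 2013 numbers, not current pricing:

```python
# Effective-price sketch using the article's figures: a $1,299 MacBook Pro
# versus a Windows buyer paying ~$300 for three years of Office 365
# ($99/yr, rounded as in the article) plus $120 for one Windows upgrade.

macbook_price = 1299
office_3yr = 300          # ~3 x $99/yr Office 365, per the article
windows_upgrade = 120     # one Windows 8.1 upgrade, per the article

bundled_software_value = office_3yr + windows_upgrade  # software Apple now bundles "free"
effective_price = macbook_price - bundled_software_value
print(effective_price)    # 879
```

On this TCO basis, the “Apple premium” shrinks by the $420 a Windows buyer would spend separately on software over three years.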

Microsoft has diversified over the last decade into enterprise software and services, but Office and Windows, including upgrades, are still cash cows.  Enterprises don’t pay list price for OS upgrades or Office, but based on MS’s profit margins, there is still a lot of “room” to work.  And it’s that “room” that Apple intends to pierce based on today’s announcements.  Consider for a minute what the MS world would look like to Microsoft’s customers and partners with the expectation of free OS upgrades and free Office.  Apple is essentially commoditizing OS upgrades and productivity software.  The PC software industry has already been impacted by the mobile world, and I don’t see this stopping anytime soon.  In fact, Apple’s announcement exacerbates the issue.  PC software and services like Windows upgrades and Office will continue to look more expensive year after year.

So what does this mean to MS’s partners?

MS OEMs like Dell, HP, and Lenovo must now consider the entire value proposition of PCs with Windows and MS Office.  Consider that OEMs do drive revenue selling Office; just look at how hard they push it in their online configurators.  OEMs recognize that Office is the business standard, but how do you compete over the long haul with a “free” cloud and client offering from a credible brand like Apple?

Free productivity alternatives to MS Office have been available for 20 years, but this time, it’s different.   PC software that costs a lot of money looks odd next to low-cost mobile and freemium models.  As I said before, I believe that over time, buyers will be less willing to pay as much as they do today for PC software and will look more closely at the alternatives.  This creates a big challenge for Microsoft.

I believe that OEMs, because of the vanishing software opportunity, now have one less reason to tie themselves to Microsoft and one more reason to expand their opportunities with alternatives like Android, Chrome OS, and Linux.  This transition has already started, as Dell’s, HP’s, and Lenovo’s product lines show, but Apple just gave OEMs another reason to invest more in the alternatives.

Net-net, Apple’s “free” software announcement will hurt Microsoft, starting with consumers and then bleeding over into education and small business.  I don’t think this will have much of an impact on medium and large enterprises, because Microsoft has money to move around (Exchange, SharePoint, Windows Server, Lync, etc.), but it will certainly come up in Microsoft price negotiations.

Motorola’s Confusing “X8 Computing System” Actually Qualcomm and TI Silicon

Two weeks ago, I penned the Forbes column, “Google’s Motorola Confuses Everyone With Its ‘X8 Computing System.’”  My intention was not to ridicule or embarrass anyone, but to point out just how important it is to be factual and precise during product launches and communications.  Trust me, I have empathy, as I have launched hundreds of products in my career.  I showed in the column that there were multiple press interpretations of what Motorola launched with the Moto X, which caused a lot of confusion.  Additionally, I believe Motorola was taking credit for work other companies had done, which I do have a bit of an issue with.  While most end consumers don’t care about the technical details, many do, particularly the influencers, and those influencers have an impact on the typical consumer. One thing that remained ambiguous when I wrote the first column was exactly who made the two special voice and contextual chips.  Thanks to AnandTech’s Brian Klug, we finally know.

Last week, Brian Klug wrote a very positive review of the Moto X.  Through his analysis, he uncovered a few things.

On the contextual processor core, Brian writes,

“This stowage and contextual awareness detection comes through fusion of the accelerometer, gyro, and ambient light sensor data on a TI MSP430 controller which enables most of the active display features from what I can tell. These then are exposed as flat down, flat up, stowed, docked, and the camera activation (flick) gesture. The MSP430 also surfaces its own temperature sensor to the rest of Android, which is nifty (the Moto X has an accelerometer, gyro, pressure sensor, compass, and the MSP430’s temp sensor).”

On the natural language processor core, he writes,

“Anyhow I spent some time tracking down what is responsible for the voice activation feature as well, and it turns out there’s a TI C55x family DSP onboard the Moto X, probably one similar to this. It’s easy to see the MSP430 references without much digging, the C55x references are referenced in an aov_adspd (activate on voice, dsp daemon?) application, and then inside the two aonvr1,2 firmware files that are loaded presumably onto the C55x at boot. The C55x runs this lower power (sub 1 mW) voice recognition service and wakes up the AP when it hears it, I believe it also does noise rejection.”

As I have worked closely with AnandTech since its inception, I can say they usually have their facts straight, so I will take this as near-fact.
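To make the sensor-hub idea concrete, here is a toy sketch of the kind of threshold-based classification a low-power controller could do by fusing accelerometer and ambient-light readings into the states Klug lists. The thresholds, function name, and logic are my own illustration, not Motorola’s or TI’s actual firmware:

```python
# Toy contextual-awareness classifier, loosely modeled on the states
# described above (flat down, flat up, stowed). All thresholds are
# invented for illustration only.

def classify_state(accel_z_g, lux):
    """Classify device posture from the accelerometer z-axis (in g)
    and the ambient light sensor (in lux)."""
    if lux < 5:                 # dark: likely in a pocket or bag
        return "stowed"
    if accel_z_g > 0.9:         # gravity along +z: screen facing up, at rest
        return "flat up"
    if accel_z_g < -0.9:        # gravity along -z: screen facing down
        return "flat down"
    return "in hand"            # anything else: being handled

print(classify_state(accel_z_g=1.0, lux=300))   # flat up
print(classify_state(accel_z_g=-1.0, lux=300))  # flat down
print(classify_state(accel_z_g=0.1, lux=2))     # stowed
```

The point of offloading this to a microcontroller like the MSP430 is that these simple comparisons can run continuously at very low power, waking the main application processor only when the state changes.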

As further confirmation, AllThingsD’s Arik Hesseldahl corroborates the TI connection in his article discussing a recent IHS tear-down report.  Arik reports,

“The main chip inside the phone is a Qualcomm Snapdragon S4, which IHS estimates costs $28. That chip has been combined with two chips from Texas Instruments that handle gestures and listen for spoken commands from the user, which cost between $4 and $5 together.”

OK, so the Motorola-branded ‘X8 Computing System’ is made up of Qualcomm and Texas Instruments silicon. I think this is a very good example of heterogeneous computing, which is a good thing, but the silicon is very much not Motorola’s.

You may be wondering, “Do consumers care about silicon details?” or “Why shouldn’t Motorola re-brand the silicon they bought and integrated?”

Most consumers couldn’t care less about the silicon in their phones, but the precision of the communication does matter to influencers.  As we have seen over the years, communication about new products spreads quickly, particularly over social networks and the web.  The social glitterati often make up their minds during the product launch they are watching, tweeting and posting away.  The general consumer ingests this either directly or through retweets and reposts on social media sites and, based on research I’ve seen at OEMs, pays attention when in a buying state of mind.  While some marketers call it “buzz” or “intrigue”, you don’t want any ambiguity in your product launches.  This is why Apple is so precise during product launches.

Apple, during their product launches, is very precise with every word that is communicated.  They know that ambiguity leads to a pause, and every pause takes away from understanding.  If there is ambiguity on the facts, Apple is quick to follow up afterwards with their specially chosen press corps.  This precision helps focus the social glitterati on the key messages Apple wants reinforced, not on side-shows debating what Apple really said or meant.

In the end, I believe Motorola’s “less than precise” explanation of their “X8 Computing System,” which is comprised of Qualcomm and TI silicon, played against them.

The Anatomy of an Iconic CEO Firing

Unless you were on vacation in Alaska last week, you know that Microsoft announced that CEO Steve Ballmer would retire in 12 months and that a special board committee had been formed to find his replacement.  The departure was positioned as voluntary, but there has been a lot of pontification and doubt about that claim.  I personally doubt Ballmer left voluntarily and believe he had some pushing from the board.  After personally watching many CEOs come and go, I thought it would be valuable to share some insights from my nearly 25 years of Fortune 500 company experience so you can arrive at your own conclusion.

The first thing I want to stress is that some CEOs do actually leave the company voluntarily.  There are a few key signs to look for:

  • Public succession planning process named the replacement 6-12 months before the old CEO leaves
  • The old CEO takes the chairman of the board seat (don’t be fooled by just a seat on the board or chairman emeritus)
  • CEO is of retirement age
  • CEO leaves the company and goes immediately to another company
  • CEO cites family or health challenges (don’t confuse with the “leaving to spend more time with the family” schtick)
  • Company’s stock performance has outperformed their peers

It’s important to get into the heads of the board of directors, because they really call the shots when it comes to firing a CEO.  When it comes to CEO hirings and firings, the board is there not only to find the right people, but also to make sure that when firing a CEO, the company looks as good as it can.  They balance throwing the CEO under the bus against the downstream implications for other stakeholders like investors, employees, suppliers, and customers. If throwing the CEO under the bus makes the company look even more screwed up, or even mean, they will opt to position the exit as an amicable departure.  If publicly and openly taking out the CEO makes the company look good, they’ll do that.

The best-case scenario for a board that wants a new CEO is to quickly drive them to quit on their own and orchestrate events to make it look amicable.  This happens every day in corporate America. Typically, the CEO will be pulled into a special meeting where the board expresses its displeasure with something the CEO or the company has done, or says that the board has been “thinking” that something should change.  The board then makes what the CEO believes is an untenable request.  That request could be to hit a specific metric in a specific time frame, or simply a dictate of a decision they’d like the CEO to make.  The CEO either quits on the spot under a specific separation package or quits after the meeting.  The board has been through this many times, and it works like clockwork.

Now let’s get into the head of the departing CEO whose board doesn’t want them anymore.  The departing CEO wants to save as much face as possible, as their friends, family, business associates and potential future employers are watching.  Their desire is to position the exit as their decision, not anyone else’s, and certainly without a board commenting poorly on past performance or decisions.  Departing CEOs don’t want to be perceived as having left a mess, because no future employer wants that carry-over drama.

So what does a firing positioned as an amicable separation look like?  Here are a few signs, which are, in some sense, the antithesis of a CEO actually leaving on their own:

  • CEO leaves well before the board’s succession plan called for
  • CEO didn’t leave immediately for another company or cite health or family problems
  • CEO leaves well before a turnaround is complete
  • Company’s stock under-performs their peers
  • CEO isn’t invited to take the chairman seat
  • CEO leaves after a major and very public company gaffe

Microsoft’s Steve Ballmer made some very telling comments in his interview with ZDNet’s Mary Jo Foley.

“Q: When did you actually decide you were going to retire? Was this a sudden decision?

Ballmer: I would say for me, yeah, I’ve thought about it for a long time, but the timing became more clear to me over the course of the last few months. You know, we worked hard. We worked hard on our strategy process, our org process. And frankly I had no time to think about it during all of that…. I would say my thinking has intensified really over the last couple, two, two and a half months, something like that.” 

“Q: So when did you finally decide?

Ballmer: Officially, a day or two ago. We had a board call. When was that, two days ago? And it was really two days ago … I would say that we really — I finalized and we finalized that this was the right path forward.”

“Q: What’s next for you now?

Ballmer: Frankly I don’t know. I haven’t spent a lot of time — I don’t have time to spend actually even thinking about what comes next. I’m not going to have time to do that until the board gets a successor in place. My whole life has been about my family and about Microsoft. And I do relish the idea that I’ll have another chapter, a chapter two, if you will, of my life where I’ll get to sort of experience other sides of life, learn more about myself, all of that, but it’s not like I leave with a specific plan in mind.”

What I don’t fully understand is this: if the departure has been in the works for years, with intense thinking over the past few months, why no ideas about next steps?

“Q: What are you guys looking for in a successor?

Thompson: Well, we’ve had a process underway for quite some time to think through what the attributes of the successor would need to be. We’ve worked with an outside firm, Heidrick and Struggles, to help us define that. But I don’t think we’re ready to declare that externally at this point in time. We are well down the path in the search, and hopefully in some reasonable amount of time we’ll have a new leader.”

Similar point here.  If this has been in play for years, with intense activity over the last few months, and if the new strategy and org are so sound, why doesn’t the board know the characteristics it wants in a new CEO?  Were they trying to save face for Ballmer here?

So, did Ballmer leave on his own, or was he motivated or pushed by the board to leave?  I’ll let you make up your own mind.  I urge you to read Microsoft’s press release and also Ballmer’s letter to the troops. I believe, based on my experience with large companies over the last 23 years, that Ballmer was pushed to leave or was put under some very aggressive directives by the board.  What do you think?

How FitBit Changed My Life

When technologists and pundits weigh in on technology trends and products, we are always quick to point out that technology needs to meld with our lives, as opposed to us changing to fit the device.  In most cases that is true… but not all.  My FitBit experience was different from many I’ve had in that I changed key parts of my lifestyle rather than FitBit melding into mine.  FitBit literally changed my life, and I think there are lessons we can derive from that.

Let me give a primer on FitBit first for those unfamiliar.  FitBit develops a line of health and fitness wearables, plus a scale, that connect to mobile, PC, and cloud services. The wearables fall into three categories: one worn on the wrist (Flex), one designed to be clipped on (Zip), and one that hides inside a pocket (One). These devices track steps, calories, distance, very active minutes, floors climbed, hours slept, and times woken up from sleep. The scale, Aria, captures weight and BMI.  All of this information is then fed to your computer or phone, where you can view it in an easy-to-read phone or web dashboard.

I have the One wearable and the Aria scale.  Every morning when I get up, I’ll read the One’s display to see how long I slept.  Then I’ll reset it by holding down the button for a few seconds to let it know I’m done sleeping.  After that I jump on the Aria scale, which, as anyone who’s trying to lose a few pounds like me knows, can be an “interesting” experience.  Aria tells me my weight and BMI and then syncs directly with the cloud.

After getting dressed, I’ll make sure I pop the One into my pocket.  I have goals set up for steps, so, like checking a smartphone for social media updates, I’m constantly checking the One’s tiny display to see where I am during the day on my number of steps and derived calories.  If I haven’t met my goals, I will literally change what I’m doing.  I’ll take the stairs more.  I’ll walk to something versus driving. I’ll take the long way instead of the short-cut.  At the airport, I’ll walk versus taking the people movers.
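The goal check I’m doing in my head all day is simple enough to sketch. This is a toy illustration of the kind of comparison the One’s display implies; the 10,000-step goal and the function name are my own assumptions, not FitBit’s:

```python
# Toy daily step-goal check, mirroring the "am I behind today?" glance
# described above. The 10,000-step default is an assumed goal value.

DAILY_STEP_GOAL = 10000

def steps_remaining(steps_so_far, goal=DAILY_STEP_GOAL):
    """Return how many steps are left to hit today's goal (0 if already met)."""
    return max(0, goal - steps_so_far)

print(steps_remaining(7200))   # 2800 -> take the stairs, skip the people mover
print(steps_remaining(11000))  # 0    -> goal met
```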

At night, I changed my behaviors, too.  I will literally sleep with the One.  It comes with a soft wrist band; you slip the One inside and wear it for the night.  I then hold the button down for a few seconds to tell it I’m “sleeping”.  When I first thought about wearing something at night, I had a visceral negative reaction.  I don’t even like to wear jewelry to bed, and I certainly didn’t expect to get comfortable with a wrist strap. After one night, it became reflex.  It helped that my wife didn’t make any comments about the “geek” in me.

So why is any of this relevant to us high-tech folk?

I believe that the more changes one is willing to make in their life to fit in a tech device, the more important, meaningful, and game-changing the device.  Typically, the underlying driver is something deeper than it appears on the surface.  Look at smartphones as an example.  We now place smartphones by our beds and take them everywhere we go, incessantly checking them (by the minute in the case of my teenage girls).  This is driven by the strong need to communicate and be part of a community.

My willingness to have the FitBit at my side or on my wrist 24×7, and to change what I do and how I do it, is driven by an even stronger need: the need to survive and thrive.  I’ve made the personal connection between the device and my ability to live a healthier and longer life, so I’m willing to do many things differently that would otherwise be considered odd or anti-social.  Let me take this one step further, onto the scary side.

If you’re like me, when you hear the word “implant” you think “artificial”, or maybe of some pain.  But if I’m so willing to make changes driven by the need to survive and thrive, what would I do if an implanted health device came along that could make me even healthier?  It’s scary to think I’d even consider it, but now I would.  Let me step back a bit and close this out.

I believe that one of the best ways to evaluate the stickiness and success of a future consumer tech product is to look at the unique need it can fulfill.  Is it about love, community, communication, health, or wealth?  Those products that uniquely fill those needs, or help you get there, will be the stickiest.  FitBit and the class of products like it are making that personal connection with a lot of people, and I’m bullish about their future.  This can also help us predict the tech future, as perilous as that is.  Want to know how Google Glass or the Xbox One will do?  Run it through the fulfillment test and see.

Is Chromecast Really Android’s Attempt at an Apple TV?

I have been connecting compute devices to my TVs for nearly 20 years, the first being a Compaq Presario hooked to a massive RCA 35” tube TV via an NTSC converter. Back then, there wasn’t online audio or video content worth streaming, but there were games like “You Don’t Know Jack” that were a lot of fun.  My, how times have changed.  I now have three Apple TVs and an Intel-based WiDi base station connected, plus a few retired Google TVs that currently sit in boxes.  I just picked up a Chromecast, and after using it for a week, I wanted to share my thoughts and impressions and see what insights came out of them.  One thing in particular I have a lot of questions about is exactly what Google is trying to do strategically.  Let’s start with the product.

My first impression after I opened the package was just how small it is.  It’s really small: thinner than my Kingston 32GB USB3 stick, but wider at the end.  I was thinking, “What a great thing to travel with”, or that I could move it from room to room between my four HDTVs. It appears at the outset that Google is trying to “one-up” Apple as it relates to size.  I do need to point out a few things, though.

What the pictures never show is that Chromecast requires USB power, either from the TV or from a charger.  I first thought it supported the MHL standard, where HDMI is powered, but it doesn’t.  Apple TV beauty shots never show power cords either, but this still bugged me, because the expectation is that Chromecast draws its power from the HDMI port.

Let’s talk setup.  It’s really easy.  You just plug the Chromecast into the HDMI port, plug its cable into your TV’s USB port, and the hardware is set up.  WiFi setup is a lot easier than on any other connected-TV device, as there’s no painful pass-code entry with a T-bar remote like on the Apple TV.  With Chromecast, you download the Chromecast Android app, connect your phone to the Chromecast over WiFi, enter the pass-code on your phone, the app hands the Chromecast off to your router, and you’re connected.

One other thing I need to point out is Chromecast’s visual style points.  There’s never a black screen when it’s connected.  What you see are stylish, full screens of nature and cityscapes.  So Apple-esque….

One potential setup issue will occur wherever there is a logon screen to connect.  Like an Xbox or Apple TV, there is no way to sign into a hotel or work WiFi screen to enter a special access code.  That’s disappointing, especially if you wanted to use it while traveling.  This limits Chromecast’s utility a bit and somewhat defeats the purpose and value of its small size.

Let me move to content.

Chromecast currently supports the Android YouTube, Google Music, and Google Play Movies & TV apps, plus PC and Mac Google Chrome browser content.  I watched four movies via Google Play on Android, and the experience, on the whole, was good.  The video and audio were high quality.  My only complaint was fast-forwarding and rewinding: if you miss something and want to go forward or back a few minutes with any degree of precision, it’s really difficult.  I attribute this to WiFi latency.  Someday, the industry needs to get with the WiFi Direct program and remove the router from this usage-model equation, but that day isn’t here yet.  Android Google Music and Android YouTube worked well, too.

The PC/Mac Google Chrome mirroring experience, albeit in beta, worked really well for me.  There is a lot of noticeable latency, much more than AirPlay, but for most music, movies, video, and even a slide presentation, it will be just fine.  There are three classes of content on the Chrome browser: 1) sites that use Google’s API and deliver a seamless, full-screen experience; 2) sites you must set to full screen manually; and 3) sites that don’t play video at all.  YouTube and Netflix use the Chromecast API. Super Pass, Hulu, and Vimeo don’t use the API, but work just fine.  Finally, Amazon and Time Warner Cable, probably because they use Silverlight, won’t play any video.

One thing I find most interesting is to think about what Google may do down the road with Chromecast.  I find it curious that they used 3D graphics from Vivante.  3D graphics sure make menuing and overlays nice, but why add the same Vivante GC1000 graphics that’s inside the Samsung Galaxy Tab 3?  Theoretically, Chromecast could run OpenGL ES 3.0 games, with the user’s smartphone as the controller.

All in all, Chromecast is Android’s impression of the Apple TV.  It follows the Android philosophy: it costs a lot less, doesn’t do as much but does enough, and the experience isn’t as smooth (in this case due to WiFi latency).  This equation has worked well for many players in the Android ecosystem, and I expect Chromecast to sell well, certainly better than the failed Google TV attempts.

What Exactly is Motorola Hiding With the X8?

Last week, Motorola announced their new line of Droid phones for Verizon.  Three of the features they stressed the most were “up to 32-48 hours battery life”, case configurability, and what they called the “Motorola X8 Computing System” for always-on functionality.  The X8 was by far the most confusing and mis-reported element of the announcement, with many outlets reporting conflicting facts about it.  So what exactly is Motorola hiding with the X8, and why weren’t they clearer with the facts?  Let’s start with what Motorola communicated at their launch event.

During the Motorola launch event, the company characterized the technology as the “Motorola X8 Computing System”.  It wasn’t described as a chip or as an SOC, but as a “system”.  This is the first time I’ve ever heard something described this way, and it is very confusing.  The picture of the X8 was confusing, too: I believe it shows a fake PCB or package with 8 pads, 4 on each side.  This makes the X8 look like one piece of silicon, or an SOC.  Most phone makers, like Apple, are very clear with their descriptions and describe exactly what is inside their phones as “chips” or “SOCs”.  The word “Qualcomm” was never used during the event, either, nor was it on the Motorola website at first, which has significance, as I will point out later.

The press then described the X8 exactly as Motorola presented it: that Motorola had created its own custom SOC, which would be great… if it were true.

Only after ArsTechnica ran an article entitled, “Motorola’s 8-core chip gives us a lesson in marketing-speak” did the details start to emerge.  Ars ran a benchmark that listed the Droid’s configuration, clearly showing that the phone was based on the Qualcomm Snapdragon S4, which is commonly found in tens of millions of smartphones around the world.  The press then jumped on that, now referring to the X8 as based on the S4 but with two custom Motorola cores for contextual computing (sensor hub) and natural language.  Also, Motorola then added the following to their own website:

“The Motorola X8 Mobile Computing System is comprised of a Qualcomm Snapdragon S4 Pro family processor (1.7GHz Dual-Core Krait CPU, Quad-Core Adreno 320 GPU), a natural language processor and a contextual computing processor.”

Why didn’t Motorola just say that it used the Qualcomm S4 at launch?  Why did they show what looks like a fake PCB/package, making it look like it was one chip with all that functionality?

Sascha Segan’s PC Magazine interview with Motorola engineering SVP Iqbal Arshad added some extra detail, but also some more confusion. Segan reported that, “The X8’s CPU is, basically, a 28nm Qualcomm S4 Pro running at 1.7GHz. Motorola has customized the chip’s firmware, though.” That custom-firmware statement is just bizarre, based on my 20 years dealing with chips. Every phone manufacturer customizes a phone’s firmware to a certain point, but no one modifies the base level of the Qualcomm IP.  Adding a little more mystery about the two specialized chips, in the article Arshad “declined to say where Motorola got them from, or who manufactured them.”  Again, odd.

So what is it with all this secrecy, the seemingly fake PCB/package shots, and the confusion?  It’s all about who gets the credit.  Having planned over 100 product launches in product and corporate marketing, I understand the pressure of “the launch”.  With Google breathing down your neck, you’d take some chances, too.  You see, Motorola wants to keep most of the credit for itself and not share it with Qualcomm, the secret vendor that developed the two specialized chips, or the secret vendor that manufactured them.

The problem with this approach is trust.  When phones or chips get launched, the tech press and analysts will bear with being marketed to, but they want the facts, too.  In fact, good marketing is appreciated by the press and analysts, but the facts need to come out alongside the spin.  If not, a lack of trust starts to build.  The way the X8 was communicated breeds distrust, not excitement.

You may be thinking, “So if you’re so smart, Pat, how would you do it?”  Well, I would have positioned the “system” as the holistic combination of the Motorola value-adds.  I like to think of these as “the special sauce” or the “magic blue crystals that make your clothes whiter”.  The Motorola “system” would have been the two special cores and the software that made it all happen, and it would not try to take credit from Qualcomm.

As soon as the Droid ships, it will undoubtedly be torn apart by iFixit, and we will most likely learn what is really inside, with real, not fake, die shots.  Going forward, my wish is that Motorola would take the credit it deserves, give credit to those who deserve it, but most of all provide the facts.



Korus: A Superior Multi-Room, Wireless Audio Solution

For the last few years, casual home speakers have seen a resurgence, preceded by the popularity of high-end headphones used with smartphones and tablets.  Sonos has run away with the wireless, multi-room consumer speaker system market, but the volume driver has been Bluetooth-based wireless speakers from brands like Bose and Jambox.  As we’ve seen in many other segments of high-tech and consumer electronics, a new company called Korus and their V-Series speakers could be on their way to disrupting Sonos, Jambox, and Bose.

Through the course of my research with Korus, I have had the distinct pleasure of testing many versions of their new line of speakers.  I can say that I’m very impressed with Korus when compared to Sonos and, of course, the litany of Bluetooth-based speakers I have used from brands like Bose.  Specifically, I got to test near-production versions of two speakers in the family, the V600 and the V400, and I wanted to share my experiences with you.  The first thing I want to go into, though, is the one core technology that really differentiates these speakers: SKAA.

SKAA Wireless Standard

Wireless speakers on the market today typically use one of two standards for their wireless functionality: Bluetooth or WiFi.  Each has its pros and cons, as I have outlined in detail here.  Bluetooth is built into every phone, but it can only support one speaker, and it is unreliable, difficult to pair, low-bandwidth, short-range, and high-latency.  WiFi, the basis for AirPlay and Sonos, is very pervasive and long-range, supports 5-10 speakers, and offers high sound quality, but it requires a network, has very high latency, drains battery life, is only moderately reliable, and takes a long time to pair.

SKAA, on the other hand, utilized in the Korus V600 and V400, takes the best features of WiFi and Bluetooth and combines them into one standard.  SKAA offers high bandwidth at 480 kbps, long range at 65 feet, low 40 ms latency, and 20 hours of battery life with an iPhone; it is very reliable, not as susceptible to interference, and easy to pair and re-pair. The only thing someone might question is that it requires a “Baton”, a wireless audio transmitter plugged into the device that has the music.  I’ve thought a lot about that, and as I survey different devices, like Logitech mice and keyboards or even a FitBit used with a PC, they all require dongles to get the highest reliability. Both Logitech and FitBit use a dongle because Bluetooth is hard to pair, unreliable, and susceptible to interference. Let’s get on to the speakers themselves.
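To keep the SKAA claims in one place, here is a small sketch tabulating the figures cited above. The numbers come straight from the post, not from my own measurements; the ~100 ms lip-sync budget is a common rule of thumb, not something Korus states:

```python
# SKAA figures as reported in the post (not independently measured).
skaa = {
    "bandwidth_kbps": 480,        # audio bandwidth
    "range_feet": 65,             # claimed range
    "latency_ms": 40,             # end-to-end audio latency
    "battery_hours_iphone": 20,   # battery life when used with an iPhone
}

# A latency under roughly 100 ms is a common rule-of-thumb budget for
# keeping picture and sound in sync, which is why 40 ms matters for
# movie and game audio, not just music.
LIP_SYNC_BUDGET_MS = 100
print(skaa["latency_ms"] < LIP_SYNC_BUDGET_MS)  # True
```

That low latency, combined with the dongle-based pairing, is what lets the batons act as a plain wireless audio port rather than a music-service-specific system.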

Korus Setup

As I outlined above, the Korus speakers use "Batons" to connect to your phone, tablet or PC.  Here's how I set up the V600 and V400 with my iPhone for the first time: 1/ play the music on the device and 2/ plug in the baton.  That's it.  While there are just as many steps as a Bluetooth setup, you never have to set that baton up again.  Also, you are never "contending" for control of the speaker as with Bluetooth; whoever has the baton has control of the music.

Korus Reliability

Once I set up my Korus speakers, the connection just stuck.  I got between 30 and 75 feet of range in my house, and of course your mileage will vary with your home's construction.  Also, when I got out of range, there wasn't gradual, AM-radio-style interference; the speakers just turned off.  That may sound like a nit, but it's very annoying at a party when the host who uses Bluetooth speakers gets out of range and the room starts crackling.

Korus Flexibility

When I walk into the homes of most of my geeky colleagues, I usually see a Sonos speaker system somewhere inside.  With a Sonos system, you get a wireless speaker system that can have up to 5 speakers playing the same song simultaneously, controlled by an iOS or Android app.  This is great, but it comes with many limitations.  The first limitation with Sonos is that you must use their application to play the audio.  That's great for supported services, but what if you want to play the new Google Play Music All Access, a game, or a movie?  You're out of luck.  Even if your favorite service is supported today, given the finicky nature of content, you may find in the future that Sonos doesn't support it.

The Korus speakers can play any audio content from any player on iPhones, iPads, PCs and Macs. They play all music, all game audio, and all movie and video audio.  You see, the batons are essentially just a wireless extension of your audio port.  Try doing all of that with your Sonos.  It can't.

The final example of Korus flexibility I really enjoyed was what I like to call "party mode".  This is when you have a bunch of friends over, you're having some drinks, and you're listening to some music.  Undoubtedly, someone will say, "Have you heard that new song?"  Someone will reply, "Yes, I have it on my phone."  With Korus, I just needed to hand the baton to the one with the song, and it just plays.  Try that with Bluetooth.

Korus Multi-Speaker Capabilities

Korus can play the same song or audio on four speakers simultaneously.  This is really nice when you are having a party or just hanging around the house.  You can adjust the volume via the Korus volume control app, the playback app, the phone's volume control, or manually on the speaker.  There isn't any lag between pressing play and the music starting, either, unlike the WiFi lag you get with Sonos or AirPlay.  Needless to say, the multi-speaker capabilities are something no Bluetooth speaker system has.

Korus Speaker Quality

I will say this right now: I'm no audiophile.  What I do know from working on so many speaker development projects in my career is that everyone is different in what they consider great audio.  I personally like a very rich, bassy sound.  Audiophiles don't like any special effects and can notice highs that my ear doesn't.  What I can do is vouch for my friends and family, who thought the Korus speakers sounded great.

The Korus V600 is a larger 11lb unit with a frequency range of 80Hz to 20kHz and side-firing tweeters, which I thought provided a lot of "width" to the audio experience.  The Korus V400 is a smaller 4.4lb unit with a frequency range of 125Hz to 20kHz.

My Bose wireless speakers were never used again after the Korus came into the house.

Korus Fine Points

The Korus speakers have some softer, fine points I wanted to share with you.  As Apple has demonstrated so many times, the softer touches really make the difference, and I think they do for Korus as well.  The first is the handles.  These speakers are designed to be moved around the house or taken with you to someone's house or even to the beach.  The V600 can also be powered for 90 hours by 6 D batteries.  Even the power cords were well thought through.  They remind me of a much softer and more flexible version of an Apple TV cord.  The power cord can be wrapped around the handle for moving or to remove cable clutter on a counter-top.  Finally, we have the buttons for power, volume, etc.  When you touch them, there isn't a lot of side-to-side travel, which says "quality" to me.  I think consumers notice these fine points, and they are more important than people think.  Just ask Apple.

Pricing and Availability

The Korus V600 and V400 aren’t cheap knock-offs; they are feature rich and have a price tag that corresponds to it.  The V600 including three batons (30-pin/Lightning/USB) are $449, and the V400 including three batons are $349.  This corresponds quite nicely versus Sonos Play 3 at $299, Sonos Play 5 at $399 plus the Sonos Bridge at $49.  My Bluetooth-based Bose Soundlink was $299.

Wrapping Up

It is always fun to see potential disruptors in markets where you think the innovation is gone.  Korus has demonstrated that there is still a lot of room left in consumer speakers to innovate and disrupt.  When the speakers become available in the fall, I highly recommend checking them out if you are considering Sonos or one of the many Bluetooth-based wireless speakers.

Are The New Nexus 7 Improvements Enough to Dethrone the iPad mini?

It's hard to believe that 13 months ago, the preferred tablet form factor was 10" and Android was literally nowhere in tablets.  Then came the first Nexus 7 at Google I/O in June 2012, the Kindle Fire 2 in September 2012, then the iPad mini in November 2012, shifting the preferred tablet form factor to 7-8".  A year later, Apple still reigns in tablets of all sizes, with IDC reporting that in 1Q13 Apple held nearly 40% market share while its nearest competitor, Samsung, registered around 18%.  Android as a whole did come in at 56% share, up 247% year over year.  With Asus and Google upping the ante, can the new Nexus 7 dethrone the iPad mini? Let's first go over what Google launched last week.

Google and Asus last week launched the new Nexus 7, improving many of the specifications.  Here are the major changes:

  • Display: 720p to 1920×1200 resolution, registering a PPI of 323
  • SOC: Nvidia Tegra 3 to Qualcomm Snapdragon S4
  • WWAN: none to LTE
  • RAM: 1GB to 2GB
  • Cameras: 1.2MP front-facing to both 1.2MP front-facing and 5MP rear-facing
  • Price: $199 to $229 for WiFi only
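That 323 PPI figure checks out against the panel specs. Pixel density is just the pixel-count diagonal divided by the physical diagonal; a quick sketch, assuming the Nexus 7's advertised 7-inch screen diagonal:

```python
import math

def pixels_per_inch(width_px: int, height_px: int, diagonal_in: float) -> float:
    """Pixel density: the diagonal in pixels divided by the diagonal in inches."""
    diagonal_px = math.hypot(width_px, height_px)  # sqrt(w^2 + h^2)
    return diagonal_px / diagonal_in

# New Nexus 7: 1920x1200 panel on a 7-inch diagonal
ppi = pixels_per_inch(1920, 1200, 7.0)
print(round(ppi))  # → 323
```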

Sure, it's a bit thinner and lighter and uses a rubberized backing versus faux leather, but outside of the additions I listed above, I didn't personally notice anything that dramatically impacts the experience.  Let me talk a bit about the experience.

On the plus side, the display was gorgeous.  I had to strain to see pixels on V1, but I cannot see any pixels on V2.  I'm extremely near-sighted and notice any video aberrations.  I watched three full-length HD movies on V2 and they looked great.  I didn't experience any arm strain, either, as the Nexus is light.  Photos really looked awesome, too.  Games were extremely fast and fluid, as well.  Finally, you can't beat the price of $229, particularly when compared to the iPad mini at $329.  Now let me get to the downsides.

It's hard to explain, but when compared to the iPad or to my HTC One phone, the Nexus 7 V2 has some kind of user interface lag.  It's not a lot, but it's perceptible, at least to me. Gmail is annoying too, and I have never gotten quite used to it, which is why on my Android devices I use whatever mail client the manufacturer, like Samsung or HTC, offers. The 5MP camera is disappointing, as it exhibits tremendous shutter lag and pictures appear grainy.  So what does this mean for the iPad mini?

Comparing the new Nexus 7 to the iPad mini is harder than you might imagine.  On one hand, the Nexus has a much cheaper base price, a superior display, and offers a great video, photo and game experience.  On the other hand, the iPad mini's interface feels quicker, and its camera generates higher-quality pictures and videos with no shutter lag.  The mini's mail and calendar experience is so much better as well.  Personally, I was a bit disappointed with the Nexus 7 V2, as I expected more.  Based on the specs, I expected no interface lag, like on my HTC One, and decent pictures.  As odd as it sounds, I still personally prefer the Nexus 7 V2 to the iPad mini because I prefer the Android ecosystem and I am a sucker for a great display.

In the end, I do think the Nexus 7 will pick up some share at the expense of the iPad mini, but not as much as you might imagine, or for the reasons you may think.  It is a much closer competition than it appears on paper.  Consumers with iPhones will most likely go with the mini, as they have bought into many iOS apps and content and are very comfortable with the experience.  Tablets are still a considered purchase and are perceived as risky, and going with the mini lowers the consumer's risk.  For a small portion of consumers, the display will be enough to pull them toward the Nexus, but the primary purchase drivers will be the cheap opening price and the great display.  Distribution will play a factor too, as V1 had limited distribution, but V2 is expected to have very wide distribution around the world.

So, everybody, calm down: I don't believe the iPad mini is dead, nor will Apple lose extensive market share based on the Nexus V2.


Time To Reboot Smartphone Benchmarks

Benchmarks are used in every market as one way to show why one company's widget is better than another company's widget.  Whether it's MPG on a car, energy ratings on appliances, or a Wine Spectator rating, it's a benchmark.  The high-tech industry loves benchmarks, too, and there is an entire industry of companies and organizations that do nothing but develop and distribute benchmarks.  With high-tech benchmarks comes controversy, because the stakes are high: all things equal, the widget with the highest benchmark scores will typically receive more money.  The recent spat about Intel versus ARM in smartphones illustrates what is wrong with the "system".  By "system", I mean the full chain from the benchmarks themselves to the reporting of them and on to the buyer.  I want to explore this and offer some suggestions to help fix what is broken.

The first thing I want to highlight is that getting benchmarks "right" is important to the buyer.  If a buyer makes a choice based on a benchmark, and either the benchmark isn't representative of a comparative user experience or the benchmark has been manipulated, the buyer has been misled.  The first case would be like buying a racing boat based on the number of cup holders; the second would be like an auto MPG test run with 100 mph tail winds.  The smartphone benchmark blow-up has accusations of both and reminds me a bit of a Big Brother episode.

It all started with ABI Research publishing a note in June entitled "Intel Apps Processor Outperforms NVIDIA, Qualcomm, Samsung". News outlets like The Register picked up on this and wrote articles like "Surprise! Intel smartphone trounces ARM in power trials."  Seeking Alpha jumped on the bandwagon, citing the ABI report in an article entitled "Intel Breaks ARM, Sends Shares Down 20%".  Hundreds of articles later, if you believed the press, Qualcomm, Samsung, and Nvidia would be driven out of mobility and Intel would pick up all their business.  Even Intel doesn't believe that, even though it has made some pretty remarkable mobile improvements with Atom from where it was three years ago.

As I said before, this isn't the first benchmark controversy, nor will it be the last.  BAPCo SYSmark and MobileMark were some of the more controversial benchmarks in the PC world over the last decade, and even 3D benchmarks have had controversy. Apple isn't immune either. Servers and systems are chock-full of examples, too.  So why would mobile smartphone benchmarks be any different?

In talking with Intel, they said that no one should ever use one benchmark to measure performance, and that if there were questions about how AnTuTu worked on their processors or compiler, I should contact AnTuTu.  I've had no luck with that yet.  Smartphone benchmarks are an issue, so what should the industry do?

I have had the pleasure of being part of multiple industry benchmarking efforts over the last 20 years, have seen some success, some failures, and learned a lot.  I think that if consumers, press, analysts, and investors stuck to a few guidelines, we wouldn’t get sideways like this.

The following are my top benchmark learnings for client devices like phones, tablets and PCs:

  1. The best benchmarks reflect real-world usage models: It all starts with what the user wants to do with their device.  For smartphones, benchmarks should reflect what is done with the content: social media, email, messaging, web, games, music, photos, video, and talking.  As an example, do smartphone benchmarks comprehend photo lag time or battery life?
  2. Never rely on one benchmark: There are no perfect benchmarks, and every one of them has a flaw somewhere. In the AnTuTu case, many in the press and analyst community relied on one benchmark, which exacerbated the issue at hand.
  3. Benchmark shipping devices: Many benchmarks are performed on test systems, not real devices.  There is typically benchmark variability between in-market phones and test systems, and it goes both ways.  Some early ODM phones and reference designs have beta-stage firmware and drivers, which can be slower or faster than production-level designs from OEMs. Many shipping phones have bloat-ware, which can slow down a benchmark. Intel was very clear with me to ignore the latest leaked Bay Trail benchmarks, which were run on reference designs.
  4. Application-based benchmarks are the most reliable: There are three types of benchmarks: synthetic, application-based, and hybrid.  Synthetic benchmarks, like the AnTuTu memory test, run an algorithm that tests one specific subsystem, not how a smartphone would run an app in the real world.  The benefit of synthetics is that they are easier to run and develop. I prefer application-based benchmarks, like the 3D benchmarks that test a specific game such as Crysis, but I am OK with hybrids as long as they reflect the performance of real-world applications.  Application-based benchmarks take a lot of development time and resources.
  5. Look for transparency: Some benchmark companies and groups tell you exactly what they test, how they test, and even offer source-code inspection. If benchmarks don't offer that, be very wary, and ask yourself, "why don't they?"
  6. Look for consistency: The best benchmarks are repeatable and can be relied upon time and time again.  Be wary of benchmarks that give you different results after running them multiple times.  You just can’t rely on benchmarks like that.
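To make guideline #2 concrete, one common way to avoid leaning on a single test is to aggregate a suite of benchmark scores with a geometric mean, so no one outlier test dominates the overall result. The phone names and scores below are purely hypothetical, just to illustrate the effect:

```python
import math

def geometric_mean(scores):
    """Geometric mean: the n-th root of the product of the scores.
    Unlike an arithmetic mean, a single outlier test can't dominate it."""
    logs = [math.log(s) for s in scores]
    return math.exp(sum(logs) / len(logs))

# Hypothetical suite scores (web, games, photos, memory).
# Phone B "wins" one synthetic memory test by a mile but trails elsewhere;
# its arithmetic mean (116.75) beats Phone A's (102.5) on that outlier alone.
phone_a = [100, 110, 105, 95]
phone_b = [70, 75, 72, 250]

print(round(geometric_mean(phone_a)))  # → 102
print(round(geometric_mean(phone_b)))  # → 99
```

The geometric mean ranks the consistent phone ahead, which is one reason many industry benchmark suites report a geometric-mean composite rather than a simple average.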

While no benchmark is perfect and all have issues, I really like Futuremark's benchmark approach, its execution, and its ability to herd many of the largest tech companies into arriving at decisions.

Mobile benchmarks are among the more challenging benchmarks to develop, for many reasons. First, mobile benchmarks must support multiple mobile operating systems, primarily iOS and Android.  Second, there are three processor instruction sets to support: ARM, MIPS, and x86.  Third, smartphones are based on somewhat custom subsystems like image signal processors, digital signal processors, and video encoders, which are often hard to support.  Finally, it is very difficult to measure the power draw of specific components of an SoC, or even a complete SoC, because it entails connecting tiny measurement probes to specific parts of the phone.

In the end, I'm glad the AnTuTu-Intel-ARM blow-up happened, because it gave the industry a chance to reflect on how well smartphones are being evaluated with benchmarks, on their pitfalls, and on the impact of the industry quickly jumping to conclusions from one benchmark.  Now it's time for the smartphone industry to come together to plug many of the holes out there.

Why Windows RT Will Survive $900M Later

Yesterday, Microsoft announced that it is writing off $900M in Surface RT inventory, based on price reductions to clear that inventory.  If we assume that Microsoft factored in $150 per unit and do some simple math, we can estimate that Microsoft is sitting on 6M Surface RTs.  This is an absolute abomination.  It isn't a surprise to many that Surface RT didn't sell well, but the magnitude of the write-down is.  Even with nearly $1B in write-downs, I don't think Microsoft will cancel Windows RT, and I want to share my thinking.
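The back-of-the-envelope math is straightforward; a quick sketch, with the caveat that the $150-per-unit figure is my assumption, not a disclosed number:

```python
# Estimate the Surface RT units behind the write-down.
# Assumption: roughly $150 of the $900M write-off is attributable to each unit.
write_down_usd = 900_000_000
write_off_per_unit_usd = 150

estimated_units = write_down_usd // write_off_per_unit_usd
print(f"{estimated_units:,}")  # → 6,000,000
```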

I would be remiss if I didn't first give my opinion on why Windows RT didn't sell well.  First, I disagree with the notion that it has to do with the dual tablet-PC nature of Windows 8 and, for that matter, RT.  Research I have conducted and research I have seen show that once users actually use a touch-Windows device, they like it.  It's the trial that is the tough part.  What doomed Surface RT, plain and simple, was the lack of premier apps and the tablet market's shift to the 7-8″ form factor.  This isn't the main topic of this post, but I needed to weigh in.

To better understand why Microsoft will keep investing in Windows RT, we have to know why it invested in the first place.  When Microsoft would have had to make the decision to support an ARM-based Windows RT, Intel did not have a competitive mobile part and had just come off some very public mobile failures, Menlow and Moorestown.  The Clover Trail schedule was risky, too, and Microsoft felt it needed lower-power ARM-based SOCs to meet the battery life bar set by the iPad and the Motorola Xoom.  The other factor is that in the minds of both Microsoft and Intel, any dollar an OEM invests in the other's products is a dollar they lose.  Microsoft is interested in cheap hardware so it can charge more for software.  Intel is interested in cheap software so it can charge more for hardware.  Makes sense, right?

The first reason Microsoft will keep investing in Windows RT is to keep Intel competitive on tablets.  Microsoft thinks that if it doesn't hold something over Intel's head, it won't see future solutions as competitive as Bay Trail, which, at least on paper, looks very competitive for holiday 2013 Windows 8-based tablets.  Microsoft is also seeking to lower prices on 7-8″ tablets, and it sees ARM-based SOCs from someone like Rockchip or Huawei providing the cost reduction necessary to let Microsoft charge more for software or lower the product street price. We also need to factor in phones.  Windows Phone 9 will most likely share the same kernel as Windows RT (9), and therefore it would make little sense to cease ARM development now only to revive it a few years later.  Finally, Microsoft is thinking about wearables and IoT devices based on this shared Windows RT (9) kernel, and so far, Intel doesn't have a roadmap that would provide the performance per watt necessary to last weeks on a single charge.

So even with nearly $1B in "losses" racked up so far, Microsoft will trudge on, because it believes it needs ARM-based silicon to cover all its product segment bases and to increase the price of its software to OEMs.

Pebble: The Nerd’s Watch

Ben Bajarin wrote a nice piece last week entitled "The Challenge of Wearable Computing", where he talks about the challenges of wearables and offers some suggestions to developers to improve their chances for success. In this column, I want to share with you my early, personal and specific Pebble watch experience and extrapolate some of my thoughts to the general consumer market.  Let's start with a little background on Pebble.

The Pebble watch started as a Kickstarter project that exceeded its $100K funding goal by over $10M, over a year ago in May 2012.  The watch is currently shipping directly from Pebble, and you can also buy it at Best Buy for $149.99.  Pebble supports basic peer-to-peer functionality with Android and iOS phones and sports a low-res, e-ink-style display. Built into the core Pebble OS is support for notifications of calls, texts, calendar events, email, Google Talk, and Facebook.  So basically, whenever your phone gets a notification, Pebble gets one.  Through third-party apps I installed, I could control my phone's music player, extend RunKeeper, see Twitter and Facebook feeds, see the weather, view photos, view my calendar, page my phone, and respond to texts.  Sounds robust and valuable, right? Well, not really.  The best way to go through the experience is to discuss the highs and lows.

Pebble Highs

  • Battery life: I have had Pebble for over a week, use every notification and the backlight, but have only had to charge it once.
  • Reliability when connected: When the watch is connected to a phone, the apps are super-consistent.  This must be partly because they do one single task, like alert you of an email or text.
  • Notifications with phone tucked away: There are times when having a phone out isn't socially acceptable or is inconvenient, like during dinner or when in the airport line.  With Pebble, I could get nearly all my notifications, and it was reassuring that I wouldn't miss something.
  • Display: When you hear of a 144×168 black-and-white resolution in a world of Retina displays, most would laugh it off as a joke.  I was very surprised just how quickly I got accustomed to the backlit, e-ink-like display.  I rarely had an issue in full sunlight, and with literally a flick of the wrist, the backlight turns on.
  • Exercise: While the RunKeeper integration is extremely limited, it does provide basic information like pace, distance, and time run, eliminating the need to look at my phone.

Pebble Lows

  • Limited utility via limited apps: Pebble is severely limited by a very low number of apps that support the platform.  Does “mobile device lacking apps therefore delivering low value” sound familiar?  It’s the issue for Windows RT, Windows Phone, BB10 and was one of the death blows for HP’s webOS. Sure it’s early, only a year into development, but getting notifications, having a second screen for a few apps, and controlling a few things on the smartphone just isn’t enough.
  • No App Store: There currently isn’t an official app store for Pebble, making finding apps a chore.  Users can either search the Android store for “Pebble” or go to the many non-Pebble supported websites via Google search.
  • Unreliable BT connection: Bluetooth is inherently unreliable, as we have all experienced at one time or another.  This is a real back-breaker because Pebble is limited without the phone connection.  To make matters worse, Pebble doesn't have a visual indicator that it is successfully connected, so you are left wondering whether you are missing notifications. To add insult to injury, my phone often said Pebble was connected when it really wasn't.
  • Nerdy: My wife nailed it when she saw me with Pebble and asked, "So is that the nerd watch?"  As I recovered from the "nerd slap", I thought about it, and the watch really isn't very stylish. In fact, it's nerdy.  It is shiny and feels cheap and plasticky, like a watch you could win as a prize from an arcade machine.  At the end of the week, I missed some of my watches.  I'm no watch collector, but I have some that are a few hundred bucks and a few that are a few thousand dollars.

Pebble right now is a classic "tweener".  Let's look at fitness devices as an example.  Pebble is not like a FitBit One, FitBit Flex or Jawbone UP, which track sleep, movement, and calories while being inexpensive, stylish, and easy to hide. Nor is it like the $249 MotoActv, which has a color display, heart rate monitor, embedded GPS tracker, and built-in music player.  Pebble is smack dab in the middle of these devices while trying to get developers to do more. A tweener is never a good place to stay for long, as it usually ends in death.

Pebble needs to be more "general purpose", like a phone, tablet, or PC, or more "focused", like a sports watch or game console.  To do this, I believe Pebble will need to change dramatically. To go more "general purpose", Pebble needs a complete overhaul of its UI that would enable a lot more input functionality via, let's say, voice.  Even with added features, it would take a lot to get over the "nerd factor".  I could see non-nerds getting comfortable with Pebble functionality if it were somehow embedded into their favorite Omega, Breitling, TAG Heuer, Citizen, or Burberry watches.

To get more "focused", Pebble needs to identify a unique problem for a unique audience that only it can solve… and then go solve it.  I could see a customized "Pebble-like" device solving some very unique living-room gaming challenges with a multi-axis (more than 3) accelerometer.  I could see specialty watches for firefighters and policemen, too.  The list goes on and on, but unfortunately, this is just not what Pebble is.

Until Pebble and other devices like Pebble are semi-concealable or get more focused on solving focused problems, it will remain the nerd’s watch.  The world needs and loves nerds, but I don’t think it’s a very large market in the near future.

The Dell-Icahn Debacle Exemplifies what’s Wrong with Wall Street

There have been a lot of industry and financial discussions lately about Dell's privatization efforts.  So far, I have stayed out of the fray, but I think it is now time for me to weigh in on what I consider a total debacle… a total lack of understanding of strategy, the technology industry, and Dell.  The current institutional investors who are mulling which way to vote on the Dell-Silver Lake offer appear clueless as they risk their current investments in search of a few pennies more from Icahn and Southeastern.  This is a good example of what's wrong with Wall Street.  Let me start with the basic Icahn-Southeastern hypothesis.

The basic premise of Icahn and his partners is that a newly appointed board can run Dell better than it is being run today, and therefore they must think that Dell's current board is mismanaging the company.  Many in the industry would like to know who those proposed board members are, their backgrounds, and exactly how their strategy would be different.  Are the proposed board members smarter and more experienced than Dell's current board?  So far, there have been no details from the Icahn camp, so it's impossible to assess.

Given the massive wealth Icahn has amassed, he obviously has some brilliant folks, but would the new board have technology backgrounds, or would they come from the areas where Icahn has amassed his wealth?  The Icahn Enterprises web site states that "Icahn Enterprises L.P. (NASDAQ: IEP), a master limited partnership, is a diversified holding company engaged in nine primary business segments: Investment, Automotive, Energy, Gaming, Railcar, Food Packaging, Metals, Real Estate and Home Fashion." Outside of investments and energy, each one of these markets is dominated by a slow rate of change in terms of market dynamics, competitive shifts and growth.

Take high-tech smartphones as an example. Five years ago, the leaders in smartphones were Nokia (40% share), RIM (16% share), and then Apple (9% share).  Android hadn't even shipped on a phone five years ago, and it is the leading smartphone operating system today.  Nokia and RIM are almost out of the smartphone market, and now Samsung (33% share) and Apple (17% share) lead the pack.  Five years ago, Facebook had 100M users and Twitter had around 1M tweets a day.  Now, Facebook has 1.1B users and Twitter has 400M tweets per day. Technology isn't railcars.

To give investors a better sense of comparing boards, Icahn and partners should divulge exactly who would be on the newly proposed board of directors.

Appointing a new board of directors would mean a new strategy, but what strategy? To offer nearly $25B to buy something, you must have some theory on what can be improved that the other guys missed or mis-executed.  There must be some low hanging fruit that no one else sees, right?  So far, there have been no specific proposed strategy changes floated by the Icahn camp.

I have been researching Dell's strategy for close to 20 years. I was their competitor at AT&T and Compaq for nearly a decade, was a supplier for over a decade at AMD, and my firm researches them and their competitors today.  As PC growth declined, Dell had to pick a direction: stay bottled up in the consumer client-computing market, or grow into an end-to-end enterprise player in data center infrastructure, software and services. Looking at what it took for Samsung and Apple to rise to a duopoly in consumer phones and tablets, Dell chose the right direction.  Dell, which was very strong in servers and business PCs, went on a $13B, five-year acquisition tear in services, software, storage, and security to pull together the pieces to become that end-to-end player.  It is a strategy that will take at least five more years to bear full fruit, as Dell needs time to pull those disparate parts into one holistic offering.

So what is the strategy of Icahn and his partners?  No one knows, but if that camp views Dell's previous acquisitions as bad choices, one must assume that divestitures are in the cards, selling off many of the businesses that Dell just acquired.  To an end-to-end enterprise player, this is like amputating a runner's foot.  You need all limbs in good condition to finish the five-year race.  I would view the selling of assets as a precursor to Dell's demise as a company, and this is what I view as a high probability with an Icahn acquisition.

To remove any ambiguity, I think it would help everyone if the Icahn camp revealed some parts of the new strategy. Imagine what happens to the company's value if, when the smoke clears with an Icahn win, there isn't a new and amazing strategy, or there is one that's rejected by the employees.

There is a scenario where Michael Dell, who could still hold around 40% of the shares, wages a proxy battle if Icahn won.  There could also be a split board of directors, a total mess. So what if Michael Dell just sells his stake and walks?  Michael Dell is worth billions, there are a lot of other interesting investments to make, and I'm sure his wife and kids would love to see him home more often.

While I think the possibility is low that Michael would walk after an Icahn takeover, investors should consider the "what-if".  At this stage in Dell's turnaround, if Michael Dell left, Dell Corp. would die.  It's as plain and simple as that.  Dell has great lieutenants, but if you've ever worked in a large, high-tech company, you know that the CEO choice drives the "operating system" of the company. The CEO drives the way decisions are made, sets the operating cadence, and is the cheerleader to the employee base.  This is particularly important when you have a founder-CEO.  You don't want your founder-CEO leaving during a time of chaos; you want years of transition to a new one.  I can just imagine what happens to the stock price if Michael left.

Investors have a big vote on July 18 to accept or reject the Dell-Silver Lake offer.  ISS, or Institutional Shareholder Services, has cleared the Dell-Silver Lake deal, which means institutional investors have "cover" to proceed.  Even with that, Icahn has vowed to keep the deal in the courts for years.

If the Icahn deal goes through, Michael Dell could wage his own proxy battle, and there could even be a split board of directors.  This sounds like a complete mess, and chaos would ensue.  What do you think that does to the turnaround momentum, employee and customer sentiment… and the stock price?  If $13.65 a share sounds low, how about the pre-buyout price of around $9 or lower?  That is what the current institutional investors are risking here, based on their lack of understanding of high tech and what it takes to position Dell for growth.  Sure, Apple's valuation is a bigger "crime", but the Dell-Icahn debacle demonstrates what's wrong with Wall Street.

Windows 8.1 Does Little to Boost Holiday 2013 Sales

Last week, I tuned into Microsoft’s Channel 9 to listen to keynotes and developer lectures for MS BUILD, Microsoft’s developer conference. BUILD attracts Microsoft devotees from its developer community for PCs, phones, servers and even XBOX.

The biggest item on everyone’s mind was Windows 8.1 and how Microsoft planned to breathe developer life into the platform. The conference was set against a backdrop of flagging PC sales and a PC ecosystem that is on edge, anxious to decide where it should be making its future investments. When BUILD concluded and the smoke cleared, my takeaway was that Windows 8.1 is a step forward, but will do little to boost holiday 2013 sales. Ironically, the hardware could make a difference. Let’s start with what 8.1 brings to the table.

Windows 8.1 was about two things: making Windows 8 more comfortable for traditional Windows desktop users and completing the base Windows tablet experience. Here is a list of the top features making it easier for desktop users:

  • Adding back the Start button: While in the desktop app, clicking the white Windows flag takes you back to the Start screen in Metro. Right-clicking the flag lets you shut down the system and access key desktop settings.
  • Boot to Desktop: Windows 8.1 lets you boot straight to the desktop app, which is essentially the Windows 7 experience.
  • Remove Charms: Lets users disable the Charms bar that appears when the cursor is placed in the top right or bottom right corner of the display.
  • Jump to All Apps: Pressing the Windows flag in desktop can take you to the All Apps page. If selected in settings, this means users never have to see a Live Tile unless they want to.

So literally, if you don’t want to see much of anything that Windows 8 brings over Windows 7, Windows 8.1 will let you do that. Let’s move to the Windows 8.1 features that signify completion of the base Windows 8 tablet experience:

  • 8″ tablets: Windows 8.1 supports 8″ tablets, the volume driver in its category.
  • System-wide search: Instead of choosing between searching apps, settings, or files, the new search searches everything. This reminds me of Windows 7 and OS X, and is arguably a better search than Windows 8’s.
  • Basic photo and video editing: Windows 8 had no photo or video editing, obviously a feature left on the cutting-room floor given that every major OS already has this, including Windows 7. Windows 8.1 brings some basic, touch-optimized tools to the table. I really like the dials in photo editing.
  • Improved app snapping: Windows 8 limited users to displaying two apps simultaneously, one occupying 75% of the display and the other 25%. This limited the amount and kind of apps users could run. 8.1 allows up to four windows of varying sizes. This is a big step, but I still find it difficult to get the windows in the right place.
  • Miracast: This enables 8.1 devices to wirelessly share their display when connected to a Miracast-certified device. This really helps plug the AirPlay hole. I have yet to test this feature extensively, but I hope it is nearly as solid as AirPlay, or it won’t be widely adopted.
  • Tile customization: Tiles come in four different sizes, and similar apps can be grouped together under header names. This isn’t as clean as folders, but it extends the platform and makes it simpler than before.

[pullquote]All of these improvements to the desktop and tablet mode are a real step forward, but unfortunately won’t make a big difference on sales in holiday 2013. [/pullquote]

Why? You first have to understand what’s holding Windows 8 back in the consumer marketplace.

As I have consistently argued, I believe that the closer the PC gets to the tablet, the more likely consumers will be to buy a new PC. It won’t be one watershed event, but a long-term evolution of the PC into a simple, always-on, always-updated, snappy, thin, light, reliable device with many apps and 10+ hours of battery life. Many users appreciate this today in the iPad, Nexus, Galaxy, Kindle Fire, etc.

The clear majority of Windows 8 PCs shipped up to this point, however, were quite different from that optimum. Most delivered three hours of battery life, were heavy, were difficult to use versus a tablet, weren’t touch-based, weren’t always-on or always-connected, were a bit lethargic, and didn’t offer the consumer app library. Either that, or they were expensive if you couldn’t use them as a “2 in 1” device (some usage models yes, but not all). What problems does Windows 8.1 help solve? Let me give 8.1 credit where it is due: 8.1 is simpler and more robust than 8. For the other consumer issues outlined above, 8.1 doesn’t improve a whole lot of anything. While I was initially excited about the prospect of an 8″ tablet, that excitement was squelched by the awful reviews of Acer’s 8″ tablet. I didn’t sense confidence after listening to BUILD that tier 1 and 2 apps will grow in numbers, even though I was excited about Facebook coming to the platform.

Does this mean the industry should pack it in for holiday 2013 and go home? Absolutely not, as hardware could help turn the tide for holiday 2013. Between Intel, AMD, Qualcomm, Nvidia and their OEMs, they have the ability to bring the required touch-based snappiness, always-on, always-connected, thin, light designs with 10+ hours of battery life to tablets and convertibles, all at a decent price. Think of the irony for a second: hardware helping save software. Sad, but true nonetheless. This isn’t to say Microsoft’s efforts won’t make a difference for the holidays, because they will. But I believe its latest retail strategy will make a much bigger impact than the improvements made in Windows 8.1.

What to be Aware of Installing Windows 8.1 Preview

It has been 24 hours since Microsoft released the Windows 8.1 Preview, and while many have successfully installed it, some have not. I want to share my experiences with you in the hope it saves you some time this week.

Immediately after the link was released, I started downloading on my primary notebook, the Dell XPS 12.  I got an error message that said, “Your Windows 8.1 Preview install couldn’t be completed. Something happened and the Windows 8.1 Preview can’t be completed.”  I was given the choice to “try again” or “cancel”.  My expectation was that there was a lot of traffic hitting the servers, so I tried again… and again, and finally connected to the store.  At the end of the 20-minute download, an error message appeared.  The message said, “Something happened and the Windows 8.1 Preview couldn’t be installed.  Please try again. Error code: 0x80240031.”  I was given the chance to “try again” or “cancel”.  I tried and tried again.  Then I searched on the error message, but no information was available.  So I figured it was a bad machine or a bad Windows 8 load.  Time to try another.

So I tried a new system-builder-constructed Intel Haswell machine.  Same problem.  Then a Lenovo Yoga 13.  Then a Microsoft Surface Pro. Same issue.  I contacted the helpful folks at Microsoft, who said they would look into it and get back to me.  They asked some clarifying questions about what hardware I had, because there are some issues they have found and shared in a blog post.  Users cannot yet upgrade Atom processor-based systems or those with Windows 8 Enterprise.

So I then decided it had to be an overwhelmed-server issue, so I waited until the next morning.  I tried all four systems again and then, because I had seen someone from Nvidia talk about Yoga 11 success, I tried it on that.  Same result.  No joy.  I am currently in the process of reimaging the Yoga 11 and I will keep you updated along the way.

My message to you: don’t waste your time like I have on five different systems… move on, do something different for a few days, and come back after they’ve sorted out the issues.



Instagram Video More a Threat to TV and Camcorders than Vine

Last week, Facebook announced Instagram Video, giving users the ability to take 15-second videos, add special effects, and share them with their friends. Instagram Video is nearly a feature-by-feature copy of Vine and has reportedly been negatively impacting Vine already. I believe, though, that Instagram Video’s biggest impact will be on TV viewing and camcorders, not Vine.

If you haven’t used Vine or Instagram Video, it’s hard to explain why it is so addictive. For producers, it’s all about creativity, having fun, capturing “news”, showing off and getting “likes”. Let me use my daughters as an example on the production side. When they go to their friend’s house, what do they do? They are making videos, 8 hours at a time. It reminds me a bit of traditional photo Instagram, but much more intense and time consuming.

The most interesting video capture observation is “gamification”. Neither Vine nor Instagram Video support importing videos, meaning you have to capture the video you want in real-time. There is only minimal video editing in Instagram where you can delete the last video segment taken. What this leads to is the aspect of “challenge”, which I believe makes taking videos all that more addictive. The most interesting features in both programs are the use of filters and image stabilization to make videos look better or artistic.

As I said in the intro, I believe the more interesting discussion isn’t Vine versus Instagram Video, but how this accelerates the demise of the consumer camcorder. Camcorders are great for taking videos of graduations, weddings, baby births, and sports events, but that’s about it. Editing consumer videos has been a total nightmare on a PC up until the last few years, which isn’t lost on the general consumer. The video quality and storage are higher on a camcorder than a phone, but then again, so is the picture quality of a discrete point-and-shoot camera. It’s the same logic here. Both Instagram and Vine give us yet another reason to ditch the camcorder. Let’s talk about viewing videos.

On the viewing side, the behavior is different than on the capture and edit side. It looks and feels more like channel surfing or wading through a Facebook stream. This audience is much, much larger and includes many who don’t enjoy making and editing the videos themselves, but would rather just watch and maybe “Like” or comment on a video. My daughters like to call them “stalkers”.

What makes viewing so addictive is that it is just so personal and has so much depth. Pictures and text are nice, but videos add motion and audio, adding a deeper layer of meaning. This is in part why you now see comedians, indie film makers and novice newscasters flocking to the new media platform. In a sense, the medium becomes real-time reality TV.

The most interesting playback feature is just how quickly videos start playing. Compared to other forms of video, it feels instant, although in reality there is a small delay. This has a huge, positive effect on the experience. Our brains multiply waiting time, meaning that milliseconds can feel like seconds. Consumer packaged-food makers know this well; it’s why consumers will reportedly pay 30% more for packaged food that can be opened one second faster.

Just as video capture and editing on Vine and Instagram Video impact camcorders, I believe viewing those videos has an impact on watching TV. The logic is simple: the more time we spend consuming videos, the less time we have the TV switched on. For super-connected homes this has already been the case, with the trade-off between all forms of social media, smartphones, and tablets. Those connected homes are spending less time on the TV and more time on their phones, tablets, and PCs. As the content improves and more people produce even more videos, more people will want to tune into the services to make sure they don’t miss anything.

What could put a damper on the viewing? Too many ads. I don’t think it’s a coincidence that Instagram Video chose the 15-second length for its videos. It makes it all the easier to slot in an ad that was produced for TV, where ads come in 15-second increments.

Net-net, while the competitive angle between Vine and Instagram Video is interesting, I believe the impact on other mediums and devices is more important. The new Instagram Video harms camcorders and TV viewing a lot more than it does Vine.

New Microsoft and Best Buy “Store Within a Store” a Big Step Forward

Last week, Microsoft and Best Buy announced that they will be doing a Microsoft “store-within-a-store”. Essentially, Microsoft will pay Best Buy a large sum of money to “own” part of the store, in a way similar to what Samsung and Apple have done.  Best Buy will still own the inventory, but Microsoft will own the merchandising, staffing levels, and training.  I believe this is a big step forward, and if executed well, it helps solve many of the issues associated with the Windows 8 PC experience.  Let me start with some perspective on Windows 8.

For anyone who has been in the industry a while, you know that a few things have defined the Windows experience over the last 20 years:

  • primary keyboard and mouse UI
  • one, windowed Desktop environment with lots of “chrome”
  • start button (introduced 18 years ago)
  • multitasking of any app
  • backwards app and peripheral compatibility
  • desktops and notebook form factors

Windows 8 changed ALL of this:

  • primary UI is touch, with mouse and keyboard secondary
  • two environments: Metro and Desktop
  • no start button
  • every Desktop app multitasks, while only select Metro apps multitask
  • x86 Desktop apps are backward compatible, ARM has no app backward compatibility, and peripheral compatibility is undetermined
  • desktops, notebooks, convertibles, detachables, and tablets

In other words, everything changed.  The problem was that Best Buy’s training and merchandising didn’t change dramatically to educate the buyer on the benefits of Windows 8 or the differences between Windows 8 and Windows RT.  After talking at length with Microsoft, here are my expectations for the new stores:

  • computers will always be turned on and internet-connected
  • security devices won’t impede ability to try convertibles and detachables
  • more knowledgeable sales associates
  • touch devices clearly merchandised
  • more, higher-priced “hero” SKUs that are the best of the best

If executed well, I believe this will go a long way toward mitigating the buying-experience issues inherent in Windows 8 and Windows RT.  The Austin store is one of the first to open, and I will be there on opening night to gauge the level of execution.  And of course, I will report back.

Why PC Modularity Could Be Successful

Modularity in electronics is defined as being able to transform or extend one device into another. In smartphones, it has been very successful: for the most part, the smartphone has displaced the MP3 player and the GPS unit, and for many, it displaced the portable game device and the camera too. Some consumers are using their phones as their stereo by plugging them into a speaker bay, and some college kids will even link their phones to their HDTVs to watch videos. Years ago, one could claim that the PC absorbed the typewriter, but it’s hard to say that recent PC modularity attempts have been a commercial success. With all the PC “gloom and doom”, though, I do see a few likely scenarios where PC modularity could be a success.

With today’s technology, anyone who wants a modular PC has to accept a few trade-offs. The Dell XPS 12 is a great notebook, but because of its size and weight, it is only secondarily a tablet. The HP ElitePad 900 is a really good tablet, but it doesn’t have enough performance, nor does its display size serve the PC function. New technology coming down the road addresses a lot of these challenges, so that a PC tablet could successfully transform into a small notebook and serve as a good desktop solution as well. Let’s talk chips first.

Intel is bringing out Bay Trail, which will maintain ten hours of battery life while doubling CPU performance and tripling GPU performance. This means 2-3X the performance of the current chip, Clovertrail, which already gets high performance marks versus the competitive set. It has also been rumored that Intel will offer a 4.5-watt Haswell, enabling PC-class performance with somewhat less battery life than Bay Trail. This passes my smell test after 20+ years working in and around the chip industry. These two potential (one rumored) choices enable a fanless tablet that can then transform with the help of peripherals. One very interesting peripheral showed up on my doorstep last week that got me thinking about modularity again.

What got me thinking more about this was actually using a new peripheral for the HP ElitePad, the ElitePad Productivity Jacket. When it was announced months ago, it looked good, but I had no idea just how well it could be used to replace a small notebook. I already use the Expansion Jacket and the Docking Station, but the Productivity Jacket pulled it all together. I want to reiterate that the current solution has Clovertrail, not Bay Trail or Haswell, so it’s a bit pokey, even on basic productivity. Let me outline how I use these devices to complete the experience:

  • ElitePad, no peripherals: use this when you want the lightest and thinnest tablet experience. I get 10 hours battery life and use primarily Metro-based apps.
  • ElitePad + Expansion Jacket: I use this when I want extended battery life at 20 hours and tablet protection. I dropped it twice on concrete with nothing more than a slight, temporary compression. It also gives me two full USB ports, full HDMI, and an SD card slot. I would take this to my kid’s volleyball, football, and basketball games.
  • ElitePad + Productivity Jacket: This jacket adds a full keyboard, a protective case, two USB and an SD card slot. I plan on taking this on business trips and to meetings. The keyboard is not full size, but large enough for me to call it my #2 productivity device.
  • ElitePad + Docking Station: This is where I get the “real work” done. I work primarily with Windows Desktop apps here. I attach the dock to a large wireless keyboard, a large wireless mouse, and a 32″ display. The dock has four USB ports, RJ45, HDMI, and an audio line out. It’s nice, too, because I can use the docked ElitePad as a second monitor, and its size is perfect for displaying a calendar or email.

So I see a day, when we have Haswell-based tablet parts, where one device can effectively be used in bed, on the couch, at the desk, in a car, on a train, and on a plane. I see this working extremely well for those who use a desktop and a 10″ tablet today. I also see it as very good for someone who has a thin 11″ notebook and a 10″ tablet. For someone who prefers a larger 13-15″ notebook, I don’t see it as optimal, nor for someone who needs workstation-class performance.

All of this discussion presumes that CPU and GPU innovation will outpace the performance needs of non-workstation personal computer applications. This is a bet I would gladly take, given that Intel is ramping at a pace I haven’t seen since the early Core days. Intel’s continued business model hinges on PC growth, defense of its PC turf, and taking mobile share. When Intel’s back is against the wall, it has performed best, so I believe it will over-serve the performance needs of tablets and hedge by pulling Haswell down into that power range. All of this translates into a lot of tablet performance that, through modularity, can effectively be used as a PC.

Why I Prefer Convertibles Over Notebooks

Ever since Microsoft unveiled Windows 8 at the BUILD event in 2011, it was apparent that the Windows PC future was touch, gestures, tablets, convertibles, and hybrids. Intel’s unveiling of a plethora of form factors at Computex 2012 solidified that future. To predict the future of PCs, one must have an opinion on a few important areas of interest. One topic that requires a lot of analysis is whether a convertible PC can replace your notebook or a 10” tablet.  I do a lot of contextual research, and I wanted to put it to the test and share my personal experience with you.  I can tell you that under many circumstances, the latest convertibles can replace a notebook and/or a 10” tablet.  Let me start with the methodology.  You can find the detailed test matrix here.

For three weeks, I stopped using my Nexus 7 and Apple iPad and used three different convertible PCs:

  • Dell XPS 12, Lenovo IdeaPad Yoga 13, and Microsoft Surface Pro

I switched between the convertible models above and performed 12 unique usage models:

  • Productivity- Calendaring, Presentations, Reading Email, Spreadsheet, Take Notes, Writing Email, Writing Report
  • Fun- Watching Movie, Play Game, Reading Book/Magazine
  • Social media- Web-based HootSuite, Facebook, Google+, LinkedIn
  • Surfing the Web- via Google Chrome and Internet Explorer

I then used each of those usage models across four different locations:

  • Desk- with a second display as a laptop
  • Couch- in my lap as a laptop and tablet
  • Bed- as a laptop and tablet
  • Airplane- domestic and international, mostly as a laptop

Finally, I ranked each experience as Poor, OK, Good, Very Good, or Great.  I found what came out of the data quite interesting, and I think you will too.

The first big takeaway is that, for me, a convertible beats a traditional notebook.  I don’t feel the experience was compromised in any way that didn’t have an easy workaround.  Every one of the convertibles was an Ultrabook, meaning they all had solid performance with a Core i5 or i7, woke up very quickly, had at least 5 hours of battery life, and had sufficient storage at 128GB of SSD (unless you’re a hard-core PC gamer).   Both the Dell and the Surface Pro had 1080p displays, too, so these were no-compromise machines.  To stay thin and light, all the convertibles had a minimal number of USB 3.0 ports, so if that’s important to you for attaching a keyboard, mouse, scanner, or printer, just get a USB 3.0 adapter.  To remove many of the cables when I sat at my desk, I used a wireless keyboard, wireless mouse, and wireless printing, and only needed one tiny Logitech Unifying USB dongle.

The second takeaway was that by making a few bi-modal software changes, a convertible can do a good job on tasks that were exclusively the realm of my 10” tablet.  Let’s take email and calendar. When I am using the convertible as a laptop, Outlook is best suited for the task.  When using the convertible in tablet mode while sitting on the couch, Windows Mail and Windows Calendar were most comfortable.  Once I got familiar with the bi-modal software, it worked well. Another good example is movies.  I expected convertibles to have a lousy movie-viewing experience, but to my surprise it was pretty good when I switched to “movie mode”.  Movie mode is when you flip the convertible so it’s screen-first, with the keyboard toward the back of the machine.  This worked well on planes, in bed, and even on the couch.  It was one of my biggest surprise finds.  Movie mode also helped on an airplane when the person in front pulled their seat all the way back: the display is oriented close to my face and the keyboard is tucked away so that I could comfortably touch the display.

The next major takeaway for me was that when I needed to do something really quickly, or was looking for the lightest and most convenient device, I reached for my smartphone. These were situations where I quickly wanted to check the weather, email, or Twitter, or even read the news with Pulse.  While my HTC One isn’t a “phablet”, its 4.7” display is a lot larger than my previous iPhones’.  So my large-display smartphone fulfilled that convenience need.

Finally, there were things that weren’t a surprise but need to be stated for comprehensiveness.  Convertibles weren’t designed to be the best book or magazine reader.  The displays are large, and the units are multiple times heavier than a real book or a Kindle.  I read books on my 7” Nexus, not the 10” iPad, anyway.   And while the nearly 5 hours of battery life worked pretty well on domestic flights between Austin and San Jose or New York, the convertibles ran out of gas on international flights if I didn’t have a plug.

Let me get back to the original question: can a convertible PC replace a notebook or a 10” tablet? For me, in many instances yes, and in a few cases no.  If you are willing to change the software you use on the device depending on whether you are in laptop or tablet mode, many usage scenarios will work very well.  And this is just the state of the art today; it only gets better from here.  I expect that Intel will soon have the ability with Haswell to produce an SoC with “Core-level” performance, so that an OEM could develop a fanless tablet as thin as the iPad with 8-10 hours of battery life. This is based upon my predictions, not Intel’s.  AMD already has its Temash SoC out, enabling fanless tablets.  This will really change the game and substantially improve all the tests I performed.

If you’d like to look at my entire data set you can find that here.

Are the Latest 10”+ Android Devices DOA?

I’ll admit it: I have a love-hate relationship with Android. I love it as a phone choice, love it on 7” tablets, but think it provides a lousy experience on anything with a 10” display or larger. I’m not alone, as Android has captured 75% of the smartphone market but hasn’t had big success in the 10”-and-above category. Companies like Acer and Asus are now venturing into some very dangerous territory, and some of their new Android products risk ending up like previous 10”+ Android devices. I’d like to begin with some Android tablet perspective.

It’s hard to believe that up until a year ago, Android had no tablet market to speak of. Android tablets had really been defined by market debacles like the Motorola Xoom. Samsung cranked out some interesting, high-res 10” tablets and Asus delivered some inspiring detachables, but none of them sold very well. Then came Google I/O 2012 and the introduction of the Nexus 7, which redefined the volume tablet market. As Apple and Amazon followed with their new 7-8” offerings, the entire tablet market swung toward smaller screens and cheaper tablets. Even though there was some excitement around the Nexus 10, on the whole, 10” Android tablets continued to sit, uninspired. What’s going on here?

The challenge with 10” Android tablets is all about apps, which goes all the way back to the first Android tablets. In fact, there are so few tablet apps that there isn’t even a way to filter the app store to get a decent count of them. That’s when you know very few apps exist. This is a bit of a chicken-and-egg problem, and Google hasn’t dug itself out of this hole yet. So why aren’t devs creating apps for the 10” Android platform?

Devs right now are confused about the Google large-display ecosystem. I say “Google” and not “Android” because some devs see what Google and its partners are doing with Chrome and feel they must first decide between Chrome and Android. They see Chrome notebooks selling well on Amazon, but they are not seeing big optimism around 10”+ Android devices. When developers are confused to the point of lock-up, they stick with the safe bet: the iPad.

In the end, it’s the consumers who suffer. You can install a 4” Android app on a 10” tablet, but many times it gets stretched to the point where the app is unusable. Imagine how that 4” app looks on a 20” display. Well, about twice as bad as on the 10” display. All kidding aside, it is the consumer who feels the pain after they get home, try it out, and expect an experience that just works. For users who stay in email, the browser, and a few optimized games, it’s probably fine, but for those who use many apps, the experience will be suboptimal. This brings us to the new Acer and Asus SKUs.

Acer has launched a 21.5” all-in-one running Android 4.0 (ICS) with 8GB of storage and a very slow OMAP 4430, the chip found in the Kindle Fire tablet and Google Glass. Given what is under the hood, I can only imagine how anemic this system will be, regardless of the lack of apps. Asus has launched the “Transformer Book Trio”, a 12” device with two operating systems (Android/Windows), dual architectures (Intel Haswell/Intel Clovertrail), a tri-modal UI (Metro/Desktop/Android), and tri-modal physical design (tablet/notebook/desktop). This is clearly not for the technology-weary, as it bundles nearly every variable that could confuse a general consumer. Aside from these variables, like the Acer AIO, it will stretch many 4”-designed apps to 12”, providing a less-than-optimal user experience. Let me close by answering the original question.

Are the latest Android 10”+ devices DOA? Yes, they are, until Google can motivate application developers to create more Android apps that work well at larger sizes, not stretched from 4” to 10”, 12”, or 21”.



Microsoft and Google’s Game of “Office Chicken” is Just Alienating Users

Google and Microsoft are battling it out on a lot of fronts, and many times there is little collateral damage to end users. Unfortunately, in a few cases, end users have been cast aside in the spirit of strategic lockouts and bickering. Two immediate examples are the Windows Phone YouTube app and the battle of the calendar. I’d like to drill down into Google dropping EAS and Microsoft not supporting CalDAV in MS Office to highlight just how much these two giants are damaging the end user, and I’ll end by suggesting a unique solution.

It all started with Google’s “winter cleaning” in December, when Google decided to stop supporting Microsoft Exchange ActiveSync (EAS). This meant that Microsoft products like Outlook and even Windows 8’s Mail and Calendar would no longer work if they were connected using EAS. It also screwed over users like me who run their businesses on Google Apps and wanted to use the calendar inside Windows 8 on their touch devices. I already used Outlook and synced with Google Apps using Google Sync free of charge. At the time, Google Sync didn’t work with Office 2013, so I was stuck with Office 2010, which is a horrible touch experience.

Microsoft could have invested some work in making its offline or online calendars work with CalDAV, but it didn’t, and I believe that was for the “Scroogled” cause, not because it’s difficult. You see, development shops of 20 people or fewer manage to support CalDAV. At a minimum, Microsoft could have been more transparent about why it wasn’t supporting CalDAV or whether it ever would.
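To illustrate how modest the engineering lift is, here is a minimal sketch of a CalDAV calendar-query as defined in RFC 4791: it is just plain HTTP with a small XML body. The server URL and credentials below are hypothetical placeholders, and the request is built but not sent:

```python
import urllib.request

# Hypothetical CalDAV endpoint -- a placeholder, not a real server.
CALENDAR_URL = "https://calendar.example.com/dav/user/calendar/"

# RFC 4791 calendar-query: request the ETag and iCalendar data for
# every event in the calendar collection.
QUERY_BODY = b"""<?xml version="1.0" encoding="utf-8"?>
<c:calendar-query xmlns:d="DAV:" xmlns:c="urn:ietf:params:xml:ns:caldav">
  <d:prop>
    <d:getetag/>
    <c:calendar-data/>
  </d:prop>
  <c:filter>
    <c:comp-filter name="VCALENDAR"/>
  </c:filter>
</c:calendar-query>"""

# Build (but don't send) the request so its shape is visible; CalDAV
# uses the WebDAV REPORT verb over ordinary HTTP.
req = urllib.request.Request(
    CALENDAR_URL,
    data=QUERY_BODY,
    method="REPORT",
    headers={
        "Depth": "1",
        "Content-Type": "application/xml; charset=utf-8",
    },
)
print(req.get_method(), req.full_url)
```

A conforming server answers with a 207 Multi-Status response containing the matching events. That the whole exchange fits in a few dozen lines is exactly the point: CalDAV support is not a heavy lift for a company of Microsoft’s size.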

I’ve tried many times to get away from Outlook. I may be in the minority, but I cannot run a small business off of web mail and calendar. Some can do this just fine, but many of us need a real app, not a web app, as it’s faster, has better offline capabilities than Google Calendar, and has many more robust features. I tried Thunderbird, eM Client, and Zimbra, but they all have fatal flaws: eM Client doesn’t support conversation email, Thunderbird requires the buggiest of calendar and address book plug-ins, and Zimbra is an odd web app that didn’t connect with my Google Apps contacts. For a Windows 8 touch experience, I just hid the Windows app and placed an Explorer link to my Google Apps Calendar on my desktop.

Last week, Google announced that Google Sync finally supports Office 2013. I was very excited, because this could enable me to have my Windows 8 desktop and Metro touch experience in one app. I installed Google Sync, but the calendar wasn’t syncing. I uninstalled Sync and reinstalled it. That reinstall failed and said I should reinstall Office. I uninstalled and reinstalled Office, and then Sync. No joy. I searched for the problem and found it here. The forum post says:

“Hi XYZ, We apologize for any inconvenience caused, we’ve identified an issue with Google Apps Sync with Microsoft Outlook 3.3.354.948 which can cause calendar events to not sync. Mail and contacts are not affected. Our engineers are aware of the issue and are currently investigating. You can also find this information on our known issues page referenced.

Regards, PDQ”

In other words: you can’t sync your calendars with Office 2013 or 2010, we don’t know the root cause, and we don’t know when it will be fixed. Thankfully, Google did provide a link that worked with Office 2010, but remember, 2010 doesn’t work well at all with touch, and that’s what I am trying to solve.

I am going through this excruciating detail to emphasize a point: It’s users who are getting caught in the cross-fire between Microsoft and Google, and when they do, it wastes a lot of time and money and causes a lot of consternation. If I could punt both Microsoft and Google for office productivity, I would.

There is a solution, though, and one that may surprise you. If you are a Mac user, you know what I am talking about here. You see, Macs work great with Google services, supporting IMAP for mail, CalDAV for calendars and tasks, and CardDAV for contacts, all built right into the native programs bundled with every Mac. Macs don’t support touch, yet, but if you are a consumer or a small business owner like me who is wedded to Google Apps and needs a good desktop experience, then you should strongly consider going to the Mac.

The moral of the story here is that by messing with users to play big-company power games, Microsoft and Google both risk alienating their bases and driving them into the arms of Apple, at least in this situation. Users don’t like being forced to do anything they don’t want to and want the freedom of choice. In a way, I hope losing users to Apple lights a fire inside Microsoft and Google not to mess with users, as this would be good in the long run.

SKAA: Better Than AirPlay and Bluetooth for Premium Wireless Audio?

Wireless audio speakers and headphones are growing as a consumer category.  Best Buy lists 192 different “wireless speakers” on its website; Amazon, 400.  The growth in wireless audio was helped by the premium music headphone phenomenon started by Beats Audio.  Of these wireless speakers, the clear majority use either Bluetooth or AirPlay to connect the device to the speaker or headphone. The problem is that both of those standards fall short on premium audio quality, openness, or ease of use. SKAA, an emerging audio standard with roots in pro wireless, could solve most of the problems inherent in today’s wireless solutions.

Let me begin with Bluetooth.  Most wireless audio products on the market today use stereo Bluetooth, A2DP. It’s on all modern smartphones and tablets and on many, but not all, computers.  Bluetooth’s primary use is very straightforward: connecting one phone to one headset or earpiece from Plantronics or Jawbone so drivers can talk and drive.  But as we have all experienced at some point, Bluetooth is an absolute nightmare to pair, and to maintain a reliable pairing.  To add to the pairing nightmare, Bluetooth-based speakers also face a contention problem.  Wireless audio contention occurs when several people, in my case family members, have paired to the same wireless speaker, allowing anyone to take control. In my house, we share a wireless Bose SoundLink II system among four people.  We have taken it everywhere inside our house, to parties, and when we travel.  If my wife is connected, even if she’s not using it, I have to ask her or my two daughters to turn off Bluetooth on their phones to let me in.  The other issue is distance: I cannot take my phone too far from my speaker or the audio starts degrading, with the speaker hissing and popping.  I personally don’t use the Bose wireless speaker anymore because it is such a hassle. The final challenge for Bluetooth is bit rate.  I interviewed a few audiophiles for this piece, and they flatly said they do not buy any wireless Bluetooth devices because of their “less than MP3 quality” nature. I wrote a more technical comparison here. Let me switch to Apple’s AirPlay.
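Some quick arithmetic puts the audiophile complaint in perspective. A2DP’s mandatory codec is SBC, whose commonly cited high-quality ceiling is around 328 kbps for 44.1 kHz joint stereo (actual devices may negotiate lower; the figure is a typical value, not a guarantee for any given product), while uncompressed CD audio runs about 1,411 kbps:

```python
# Rough bitrate math behind the "less than MP3 quality" complaint.
# SBC and MP3 figures below are typical/commonly cited values, not specs
# for any particular device.

SAMPLE_RATE_HZ = 44_100   # CD audio sample rate
BITS_PER_SAMPLE = 16
CHANNELS = 2

cd_kbps = SAMPLE_RATE_HZ * BITS_PER_SAMPLE * CHANNELS / 1000  # uncompressed PCM
sbc_kbps = 328            # typical A2DP SBC high-quality ceiling (joint stereo)
mp3_kbps = 320            # highest standard MP3 bitrate

print(f"Uncompressed CD audio: {cd_kbps:.1f} kbps")   # 1411.2 kbps
print(f"A2DP SBC ceiling:      {sbc_kbps} kbps")
print(f"MP3 maximum:           {mp3_kbps} kbps")
```

The raw bitrates of SBC and MP3 look similar, but SBC is a deliberately simple, low-complexity codec, so at comparable bitrates it delivers noticeably worse fidelity than MP3, which is the gap the audiophiles are hearing.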

Another wireless audio alternative is Apple’s AirPlay.  I think AirPlay is an awesome feature for mirroring my Mac and iPad displays and sharing photos with a group of people, but it comes with its own set of major issues for a premium audio experience. First, you need a WiFi network to use it, at least until WiFi Direct is enabled.  The network requirement eliminates the option of taking a set of AirPlay-based wireless speakers to the park, unless you’re a mega-geek and bring a router with you.  Secondly, AirPlay is limited to Apple host devices: the iPhone, iPod, iPad, and the Mac.  I recently switched from an iPhone 4S to an HTC One X and from my iPad to a Nexus 7, limiting my AirPlay investment.  Staying inside the premium walled garden of AirPlay is great if you or the family is all-Apple, but not for the other 75% of smartphone owners out there.

AirPlay also limits my ability to enjoy certain audio usage models.  First, there are no AirPlay headphones.  You can still do wireless headphones on Apple devices via Bluetooth, but AirPlay uses too much power because it is based on WiFi. Secondly, if I want to play a game or watch a movie directly on my iPad, I cannot send the audio to a wireless speaker, as it will be out of sync with the video over AirPlay or any other WiFi-based wireless speaker solution.  This is because AirPlay uses the unreliable, higher-latency home WiFi network.  If the home network is 2.4 GHz, it is susceptible to interference from Bluetooth, the neighbor’s WiFi, microwave ovens, and cordless phones.

There is a developing standard for wireless audio called SKAA, which eliminates many of the premium audio challenges inherent in Bluetooth and AirPlay.

SKAA comes from the professional and pro-sumer music world. The basis for SKAA is a standard called PAW, or Pro Audio Wireless, which powered the wireless gear for artists like Justin Bieber, Lady Gaga, Keith Urban, Kanye West, Eminem, and Justin Timberlake. These artists used PAW in concerts for wireless guitars and speakers because of its high quality, high bit rate, and long range, and because it wasn’t susceptible to interference from other 2.4 GHz devices like smartphones and WiFi. SKAA, simply put, is the consumer flavor of PAW, designed for consumer phones, tablets, computers, TVs, and game consoles.

With SKAA, users can connect up to four speakers from one device, and because it has long range and multi-point capabilities, consumers could have speakers in the kitchen, living room, dining room, and bedroom all broadcasting the same, synchronized audio. The pairing nightmare goes away because it uses small, mobile-friendly wireless transmitters that start playing music after one button press the first time you set up a speaker.  These transmitters are currently available for Apple’s 30-pin devices and over USB for all computers: Mac, PC, and even Linux. Transmitters for Apple’s Lightning devices, micro-USB Android devices, and others are coming soon. So am I saying that Bluetooth and AirPlay are going away?  Absolutely not; these are two pervasive and flexible standards that will be here for a long, long time.  For audio, particularly premium audio, I do believe that speaker and headphone companies will start adopting SKAA and challenging AirPlay in the premium audio space.

If you want a more technical dive, I have written a short note here.

Only Apple Can Play Apple

It has been a challenging few days in the press for the Microsoft camp. In a positive move, Tami Reller, CFO and head of marketing for Windows, shared more details on how many Windows 8 licenses had been sold, what the company had learned from Windows 8, and what it intended to do to improve the experience.  The overall press reaction was so negative that Microsoft’s head of communications, Frank Shaw, penned a blog defending the company and singling out two press outlets.  How did Microsoft get to this point?  Well, as I was quoted a few times this week, it’s because “only Apple can play Apple”.

Apple’s style, love it or hate it, has been built around exclusion and secrecy.  Their product, PR, and analyst programs are very exclusive and, at least in my experience, impossible to get a response from. Large-scale analyst days are non-existent, and their product launches are held in smaller venues.  This exclusionary strategy works well for Apple when it is doing great and bringing a constant flow of ground-breaking products to the table, like the iPod, iPhone, and iPad.  The Apple of the 90s was looked at with disdain by everyone outside the Mac proponents, which actually helped bind the Mac crowd together. Microsoft’s adoption of Apple’s strategy is part of Microsoft’s challenge today.

During the Windows XP and Vista time-frames, Microsoft had some of the most inclusive practices of any technology company on the planet.  During the Vista time-frame, Microsoft practically invented large-company social media interaction.  Forget about Vista the product; their social media strategy was phenomenal.  Oh yes, don’t forget, this is where Robert Scoble got his social media start. Microsoft also pioneered what I consider the “super-user”, or MVP, program, which is still in practice today. For Windows, Microsoft would give a tremendous amount of detail, very early on, about upcoming versions of Windows and had very inclusive analyst and press practices.  Then came Windows 7.

During Windows 7, communications changed dramatically.  They became a lot more like Apple’s than Microsoft’s.  Ecosystem briefings, disclosures, and two-way communications seemed to slow to a crawl.  But that didn’t seem to have an effect on Windows 7, as the operating system was wildly successful.  The problem was that, inside Microsoft, this new communications strategy became mythologized as one of the reasons Windows 7 was so successful.  The logic was: because we (Microsoft) spent less time listening to people who don’t really add value, we executed faster and introduced a superior product everyone wanted. Then came Windows 8.

At the same time Microsoft was developing Windows 8, Apple was on its meteoric rise and all eyes were on Steve Jobs and his autocratic style.  The belief became even more reinforced inside Microsoft that one of the keys to success was to basically act like Apple.  No one actually said “we need to be more like Apple”, but I was told, “Look at Apple, they do that, and it’s working great for them.” And then Windows 8 started.

Windows 8 was the first major Windows UI overhaul since DOS.  It went from keyboard, mouse, icon, and folder centricity to touch, swipes, charms, and live tiles. The application development tools changed completely.  ARM support was added.  Literally everything about Windows changed from top to bottom.  At a time like this, Microsoft needed early, widespread, inclusive communications with the entire ecosystem, not the exclusive, secretive communications of the Windows 7 era. The result is what you see today, where Microsoft can’t seem to catch a break.  The 100M licenses sold, a number many companies would kill for, gets picked apart; the admission of Windows 8 challenges gets “I told you so’s”; and people are calling Windows Blue the Windows 8 that should have been.

You see, this has very little to do with the product itself; it is the result of years of Microsoft trying to adopt the Apple characteristics I outlined above. This has unfortunately created a negative environment where the ecosystem, because it doesn’t feel heard or part of creating Windows 8, doesn’t feel ownership of Windows 8.  It’s a disconnected relationship, like that with a distant cousin. Anyone like me who was a strong part of the Windows ecosystem for years knows exactly what I am talking about.

While I will save for a future post exactly what I would do to turn this around, I will say it is all about carving out that unique positioning and acting on it.  There still exists a need for an inclusive and over-communicative tech giant, and it’s up to Microsoft and Google to determine which of them wants to take that position. I have seen some positive signs that Microsoft would like to head in this direction, but we will all need to wait and see.  Don’t forget, only Apple can play Apple.

Big-Box Retailers are Not Helping PC Sales

Last week, I wrote about the “softer” and arguably more important PC attributes. Toward the end of that column, I threw out a few examples of how U.S. big-box retail isn’t helping market those softer features. I want to dive a bit deeper into that this week, as I think U.S. big-box retail shares a large part of the blame for the decline in PC sales, and I’ll outline why I believe that. I will start with a little background.

I’ve been involved in and around electronics retailing since 1993, first with AT&T Computer Systems and then with Compaq in the late 90s when it was #1 in consumer PCs. Back in those days, U.S. retail for computers looked very different than it does today: primarily a combination of big-box electronics stores, regional chains, department stores, and office superstores. Manufacturer stores and online really didn’t exist. My, how things have changed. E-tail is huge (Amazon), manufacturer stores (Apple) are lauded, only one national electronics retailer is still alive (Best Buy), mass merchants have aggressively entered the space, and clubs are as aggressive as ever. And, of course, there’s Microsoft and their stores.

What hasn’t changed in 20 years is just how poor the PC buying experience is in the big-box retailers… and that poor experience is negatively impacting sales at a time when the industry can least afford it. Big-box retail does best when the category is established and there is minimal change. In the PC world, everything is changing. Windows 8 brought a brand-new UI to an interface that had not fundamentally changed since DOS. Gestures and PC display touch are new too. Let’s not forget about convertibles and hybrids, all new. How did big-box retail respond? The same way it has for the last 20 years.

Big-box retailers responded by selling more end-caps to manufacturers, adding more demo days with manufacturers’ reps, and running more promotions… and it’s not working. It’s not working because there exists a massive disconnect between what consumers want and need to know about the latest generation of PCs and what the in-store experience actually tells them. I’ll use Windows 8 notebooks and convertibles as an example and drill into that experience.

As we all know, Windows 8 brought with it a new, multi-mode UI and a new gesturing system used with mice, trackpads, and touch displays. Even Microsoft acknowledged recently that it takes consumers a few hours to get comfortable with Metro. Windows 8 also brought instant-on when paired with the right hardware. A consumer may want to try out instant-on and compare certain notebooks on it. New notebooks are also bringing better battery life in thinner and lighter designs. To gauge how heavy a system is, you would want to pick it up and maybe carry it around. Maybe you would want to compare how quickly an Ultrabook boots versus the $299 notebook.

How many times have you walked into a big-box retailer, gone down the notebook aisles, and seen Windows 8 notebook PCs that:

  • Were shells with no electronics inside
  • Were turned off
  • Had error messages on the screen
  • Were not connected to the internet
  • Ran a demo loop that required a retail password to interact
  • Had unsavory data visible from a prior customer
  • Had batteries removed to protect against theft
  • Were tied down with security wires and could not be lifted
  • Had touch displays and backlit keyboards that weren’t merchandised

I am asking the question rhetorically because I know the answer. We all do. All of us have experienced this. I’ve heard all the reasons big-box retail provides this kind of experience for the last 20 years: the reasons typically involve profit margins, security concerns, the challenges of managing a distributed workforce, etc. Interestingly, I never see the above examples at an Apple store. Never, ever. I can sit at the Apple store for hours and literally do a test drive like I would with a car. I’ll bet Apple would let me download apps if I asked.

The unfortunate outcome is that the big-box retail experience is actually working against Windows 8 sales. The stores just do not provide the environment that meets the needs of someone trying to buy a new Windows 8 notebook. Consumers need a way to “test drive” Windows 8, and big-box retail isn’t delivering. I believe one consumer reaction is to not buy a new Windows PC because they deem it risky. What do they buy instead? The experience helps nudge the consumer toward a $199 smartphone or tablet, which, when weighed against the big-box retail experience, carries a much lower perceived risk than that new PC.

The Soft Improvements in new PCs Could be their Biggest Draw

Last week was an ugly week for the computer world. The IDC figures that came out for Q1 showed the biggest contraction the PC market had seen since their tracking began. The best Apple could muster was a slight YOY decrease in Mac sales, which looked great compared to the rest of the industry. I spend a lot of time as an industry analyst dissecting the “whys” and thinking about what it will take for the PC market to rebound. I talk to other analysts too, like Tech.pinions’ and Creative Strategies’ Ben Bajarin, and we’ve had some recent conversations that helped clarify the PC conundrum. I’m convinced that the “softer” improvements in new PCs could be the most important, yet most under-communicated and least understood, reasons to buy a new PC. By “softer” I mean those things that don’t have hard metrics or measurements. I want to peel back the onion a bit, starting with a little background.

Intel estimates that there are 500M computers actively being used that are over four years old. Think of just how many thick, chunky, poor experiences that equates to. If that’s your PC experience and you haven’t tried a new one, I can see why a new tablet or phone would be your next purchase. Consumers don’t really know that new PCs offer a significantly improved experience, and while I’ve already hit the “hard” reasons to buy a new one, I want to hit on the softer side. Let’s start with starting up a PC.

Waking an old PC from sleep, or even worse hibernation, is like starting an old tractor. With a Vista-based PC, it can take nearly three minutes to get to the point where you can actually do something with it. Compared to a phone or tablet this is ridiculous, and it is one key reason consumers gravitate to those devices. The reality, though, is that the latest notebooks based on Intel and AMD technology wake up almost instantaneously. Intel’s Atom designs are literally instant-on like the best tablets, and notebooks based on Intel’s Core and AMD’s Trinity with SSD storage aren’t too far behind. I think many consumers would be surprised just how far the PC has come.

Similar to “boot” time are the advancements in application load time with a new PC. The human brain amplifies wait time, and before smartphones and tablets, consumers settled for the lousy experience of an old PC. SSDs and software optimizations changed expectations once consumers used their phones and tablets. This put the burden on the PC industry to improve the experience, which it has done very well: app load time is nearly instantaneous thanks to advances in SSDs, software caching, and application architecture. Let’s move to the physical UI.

Older PC notebooks have small trackpads, typically with three buttons, and don’t support gestures. With this configuration, you really need an external mouse or trackpad to get anything done. Compared to a new PC, this is archaic, but if consumers have never experienced better, how would they know? Windows 8 PCs with high-quality touchpads are a lot different. Systems like the Dell XPS 13 have a large trackpad without buttons and support all the Windows 8 gestures. We all have to thank Apple for raising the experience bar here. Touch is another great addition in new Windows PCs, at prices as low as $499. How many times have you reached up from your notebook to touch the display? While Apple has ignored this so far on Macs, I believe it’s inevitable that touch displays become standard on $499 PCs in 2014.

Fan noise is another “soft” feature of new PCs that gets overlooked. I had the first MacBook Air, and it was loud, as the fan seemed to always run. Today, even the thinnest Mac, Ultrabook, or premium ultrathin barely makes a sound. This has been driven by many factors, including a lowering of the CPU TDP from 35 watts to 17 watts and, more than that, a major decrease in idle power draw. There has literally been a 5X improvement in idle power draw, and the result is that the fan rarely kicks in. See a pattern here? Like a tablet.

Backlit keyboards are another feature that has made its way to the $499-$599 price point. The feature literally determines whether we can use a notebook in the dark or in low-light conditions on a plane or in bed. Old PC notebooks don’t have it; new-generation Windows 8 notebooks and MacBooks do. While some scoff at the importance of the feature, research I have conducted shows consumers value it and will pay dearly for it. Again, it is one of those “soft” features that can make the difference.

The final “soft” variable I want to discuss is design. Led by Apple, the entire PC industry has raised its game in the last five years. Cheap-looking, shiny ABS plastic has given way to magnesium, aluminum, textured plastic, and rubberized surfaces on the newer class of notebooks. If you are reading this on your five-year-old notebook, compare how it looks with something you can buy today for $499-$999. It’s ugly and you know it.

While these “soft” improvements in new PCs are difficult to measure, I believe they could be the strongest reasons to buy a new PC. Whether it’s the improvement in looks, the silence, or the speed at which they start programs, newer notebooks are light years ahead of their predecessors. The PC industry has been spending a lot of money on this, but challenges exist. The first is legacy: Macs had such a premium perception for so long that it’s hard for consumers to accept that a Windows PC can have the feel of a tablet or smartphone, so it could take a while for the new reality to sink in. Retailers are a big part of the problem, too. How many times have you walked into “big box” retail and found the PCs weren’t turned on, weren’t connected to the internet, had an error message on the display, or had some silly protection device that got in the way of a trial? Net-net, the PC industry needs to shift a lot of its marketing spend toward increasing awareness and motivating trial of the softer improvements in PCs, or face a very painful few years.

Facebook Home’s Uniquely Flawed Experience Examined

Facebook announced last week its new experience for Android smartphones, called Facebook Home. This is Facebook’s first major attempt to control more of the phone’s experience without actually selling a phone.  Facebook Home comes pre-installed on the HTC First, and users of select HTC and Samsung phones can install it from Google Play.  I primarily use an HTC One X+; I installed Facebook Home, and I am sorry to say the experience was everything I was afraid it would be: a thick and clunky launcher that drains your battery, slows everything down, and gets in the way of everything except Facebook.  I want to share my experiences with you and, based on those and other data points, piece together the implications.  Let me start with some background.

Facebook has faced significant investor scrutiny over mobile monetization ever since it went public, dogged every quarter by institutional investors, so the pressure was on to improve its mobile strategy and execution.  It started with improved Facebook, Messenger, and Camera (iOS) apps, and it appeared that Facebook was on a bit of a mobile roll.  Speculation grew that Facebook would, in fact, partner with HTC on a “Facebook Phone”, but it launched Facebook Home instead.  Let me talk about my experiences…

From the start, Facebook Home looked beautiful, with edge-to-edge photos of my timeline.  It looked like a combination of Xbox 360 animations and Path simplicity.  If all one wants to do is breeze through the timeline and “like” and comment on items, it’s great, but once you use the other 99 smartphone features, the experience starts to unravel.

Swiping my Chat Head up exposes my apps, which were in folders or on the top screens.  Note I say “were in folders” because my folders were removed and the apps pulled out and spread across four screens that I need to scroll through horizontally.  Heck, even Apple learned that phone app icons work better in folders.  The change is simply confusing. My weather app is now on page four, which defeats the purpose of having it on page one where I wanted it.  If you scroll all the way to the left you see “All Apps”, which, of course, you scroll through vertically.  Confused yet? What would really help here is something like Apple’s Spotlight search.  But hey, Facebook is now at war with Google, so maybe they feel they have to omit search and choose rivalry over usability… or they assume people don’t have a lot of apps.

After using a non-Facebook app like Pulse Reader, you press the phone’s home button and are dropped back into the Facebook world, which took between 5 and 10 seconds over a cellular network.  The screen would go black and a white swirl would spin counter-clockwise, Facebook Home’s version of the hourglass.  The experience was much better over a WiFi connection. Let’s talk notifications.

In Android, notifications display in the bar at the top of the phone. Facebook Home’s default installation removes that bar, so if you get Twitter, email, or low-battery notifications, you won’t see them.  Facebook, Instagram, and text notifications show up as Chat Heads.  To see non-Facebook notifications, users must swipe once to see them, then swipe again to pull down the notification menu. If you want to see the status bar, you need to go into Facebook Home settings and click “show status bar”.  Facebook Home also covers up alarms from my alarm clock app: you can hear the alarm, but you don’t see it and need to plow through menus to snooze or shut it off.  This isn’t what I want to be doing at 5AM.  Let me move to heat and battery life.

Facebook Home is on most of the time and therefore, like all active apps, uses power.  I estimate that my battery life was reduced by 30%.  This is a very unscientific test, but what was most telling was the error message that kept popping up: “Battery is low.  The charging current is not enough for device power consumption.  Please switch to AC adapter.”  The problem was that I was connected to an AC adapter, but my phone running Facebook Home was drawing more power than could be replenished.  As you would expect, Facebook Home also heated up my phone, warmer than any game had.  This is simple physics, but not something you expect from a social media app.  Let me finish off my experience with privacy.

There is no default privacy with Facebook Home, period.  Your timeline defaults to “open” for anyone to see.  It sits above the lock screen, just like a screen saver.  Literally, I could walk over to someone’s phone with Facebook Home and watch their timeline… and “like” and even comment.  As with all Facebook privacy invasions, you can shut this off in settings.

So is it just me who found major issues with Facebook Home?  Absolutely not.  47% of the people who rated Facebook Home on Google Play gave it 1 out of 5 stars.  As I dig into the comments, I definitely see most of the issues I outlined above.  This isn’t some anti-Facebook conspiracy, as Facebook’s other Android apps receive rave reviews; as a comparison, Facebook Messenger received a 5% one-star rating. Net-net, my Facebook Home experiences were shared by many others.  So I have spent a lot of time on the experience, but what about the implications?

As we have seen so many times before with apps like V1 of Apple Maps that received very polarizing ratings, large companies do act as quickly as possible to improve their apps.  I believe Facebook, which is famous for its all-night hackathons, will move quickly, but with such fundamental and major issues, I’m skeptical it can resolve them all. Removing folders was a major mistake, and Facebook will end up adding them back or adding some kind of search.  Making it nearly impossible to find your apps isn’t a good way to keep the focus on Facebook, and without that, many will just uninstall it.

I believe the lagginess, heat generation, and battery-sucking nature of Facebook Home will be extremely difficult to fix on current devices like mine without a major architectural change or major optimization of the code.  Because this most likely won’t get fixed, many people will just uninstall it and move on.

Facebook had an extremely rocky start on a strategically important endeavor.  It demonstrates just how difficult it is to “own” an experience by putting a skin or launcher on top of an operating system.  Facebook will need to go deeper, as Amazon did with its own device and experience, or sit in “no-man’s land” between itself, Google, and the handset manufacturers.  While this is a very risky move and not what investors want to hear, Facebook may need to do it to completely monetize mobility.  You will see small baby steps down this path as Facebook starts to control more of a certain brand’s experience, but ultimately Facebook will either get out of this middle ground and do its own phone or remain a collection of apps in someone else’s experience.