Google’s New Android Math Doesn’t Add Up


According to Gartner, Android sold 144,720,300 units in the fourth quarter of 2012. But let me ask you this:

Who cares?

Does Samsung care how many “Android” units were sold? No, they do not. They only care about how many of their devices they sold.

Do the various Android manufacturers in China care how many “Android” units were sold? No, they do not. They only care about how many of their respective devices they sold.

Does Amazon care how many “Android” units were sold? No, they do not. They only care about how many Amazon devices are being used to direct traffic to their web site.

Do Android developers care how many “Android” units were sold? No, they do not. They only care about those Android units that their software can address and, even more specifically, they only care about that portion of the addressable market that is interested in purchasing their product or downloading their product and consuming their advertising.

Does the Google Play store care how many “Android” units were sold? No, they do not. They only care about how much is purchased from the store.

Why Do We Count Android As A Single Entity?

“Android” is not a single entity. So why do we add all of the “Android” numbers together? We do it because we assume that higher numbers mean a stronger platform. We use it as a proxy for the strength of the platform. But it just ain’t so. Total numbers mean nothing. The only numbers that matter are those that strengthen the platform.

And do you know who agrees with me? Google.

Android’s New Math

Google reported that the share of Android devices running Android versions 4.1 to 4.2 jumped from 16 percent a month ago to 25 percent this month. Impressive, no?


No.

The reason for the big jump was that Google changed the way they count the numbers. Previously, devices were counted when they checked in to Google’s servers. But Google is now counting only user visits to the Google Play Store. Google argues that the data more accurately reflects users “who are most engaged in the Android and Google Play ecosystem.”

I agree. This is a better way to count the meaningful numbers rather than just the gross number of Android activations. However, did you notice the inconsistency in Google’s new math?

One Of These Is Not Like The Other

Google hasn’t recalculated and lowered the total number of Android activations.

In other words, when it comes to telling you how many activations they have, Google counts devices that check in to their servers. But when Google wants to tell you which versions of their operating system are in use, they only count user visits to the Google Play store.

Hmm. So we now know how the Android pie is divided into different-sized slices, but what we don’t know is just how big that pie is. Exactly how many of the 144,720,300 units sold in the fourth quarter of 2012 are actually accessing the Google Play store?

We don’t know. Because Google isn’t saying. And until they do, those total unit sales and activation numbers have little meaning in determining the overall strength of Google’s portion of the Android platform.
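To make the mismatch concrete, here is a minimal back-of-the-envelope sketch. All of the figures are hypothetical, since Google publishes neither the Play-visitor count nor the per-version breakdown of all activated devices; the point is only to show how the same device population yields two very different percentages depending on which denominator you use.

```python
# Hypothetical illustration only -- none of these figures come from Google or Gartner.
total_activated = 100_000_000   # devices that have ever checked in to Google's servers
play_visitors   = 60_000_000    # devices that visited the Play Store in the measurement window
jelly_bean      = 15_000_000    # of those Play visitors, devices running Android 4.1-4.2

# Google's new dashboard math: share of Play Store visitors
share_new = jelly_bean / play_visitors    # 0.25 -> "25% on Jelly Bean"

# The old, activation-style math: share of every checked-in device
share_old = jelly_bean / total_activated  # 0.15 -> only 15% of all activations

print(f"New math: {share_new:.0%} of Play visitors on 4.1+")
print(f"Old math: {share_old:.0%} of all activated devices on 4.1+")
print(f"Activated devices invisible to the new math: {total_activated - play_visitors:,}")
```

The gap between those two percentages is exactly the unknown the headline numbers paper over: how many activated devices ever show up in the Play-visit denominator at all.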

Android’s Total Numbers Conceal Rather Than Reveal

“Android” shouldn’t be counted as a single operating system any more than Europe should be counted as a single country. Heck, Android doesn’t even have a “common market”.

If we’re going to use numbers as a proxy for determining the strength of various operating systems, then we have to use meaningful numbers. Perhaps we should be comparing the units running the latest version of iOS with the latest version of Android. Perhaps we should be counting the Amazon, Google, and the various Chinese portions of Android as distinct and separate entities. Perhaps we should even be counting that portion of the Android phones that run Facebook Home separately too.

What we most certainly should NOT be doing is lumping all Android sales and activations together and pretending that they’re one and the same and that their total numbers are advantageous to all of Android’s separate participants, such as Samsung, HTC, Amazon, Google, developers, etc. If an activation or a unit sale doesn’t count towards the strength of the whole operating system, then it shouldn’t be totaled. Totaling Android’s numbers together doesn’t make sense because there isn’t a single, unified Android platform.

Numbers should be used to reveal, not conceal. And Android’s numbers aren’t revealing its strengths, they’re concealing its weakness.

The HTC One: Setting a New Bar for Android Phones


I’ve been using the HTC One for a few weeks now as my primary smartphone and I have to say it is an impressive device on many levels. The HTC One is undoubtedly the best Android device I have ever used.

Through the years, HTC has shown that they can create extremely well designed and unique hardware. The HTC One is the pinnacle of the company’s efforts and raises the bar for all Android devices, and Windows Phone devices for that matter, going forward. The HTC One is the first smartphone that even comes close to the iPhone in terms of hardware, and in some respects it is superior.

From my experience with the HTC One, three key things stood out to me.

Speakers and Sound

The speakers on the HTC One are incredible, hands down the best I have ever encountered on a mobile device. At first, I was impressed by the sound quality but questioned how practical the feature was. After a day or so, I quickly changed my mind and realized the feature was incredibly valuable. I started listening to music in more locations, contexts, and situations than before. Although I own the Big Jambox by Jawbone, I don’t always have it with me. Even when my family and I go to the beach or the park, we always try to pack lightly, and bringing the Big Jambox is not always an option. But I always have my phone with me, and with the HTC One it’s like having a boom box with you at all times.

HTC includes the Beats audio feature, which is a hybrid software and hardware audio processing solution. This feature worked well on the phone, but interestingly the Beats processing was also applied to audio being streamed to other devices. I frequently stream music from my phone to my car’s speakers, and I noticed the audio coming through them was benefitting from the Beats feature.

HTC positions the enhanced audio and speakers on the One under the name BoomSound. I’ve used many portable audio solutions, and the distortion at high to full volume on many devices makes them simply unusable in louder or outdoor environments. This was my primary knock on the smaller Jambox. So I decided to test the HTC One against other devices, and this is what I found.

The iPhone 5 has great speakers, but its max volume is 65 dB and at that volume there is minor distortion. My Retina MacBook Pro at full volume hits 95 dB with excellent audio clarity and no distortion. The HTC One’s max volume hit 85 dB with excellent audio clarity and no distortion. Suffice it to say, impressive for a mobile device.

Those stats aside, whenever I gave a demo of the speakers to friends and family, they simply said “wow.”

Camera

I think we would all agree that the camera on our smartphones may be one of the most valuable features. Every generation, smartphone manufacturers look to integrate better optics, sensors, software, and capabilities into the camera. The processor and the camera are the two features that get significant performance bumps annually.

HTC has always been pushing the camera envelope, mostly around megapixels, but you won’t find many megapixel claims with the HTC One, and for good reason. Megapixels no longer matter. What matters now is what you do with those megapixels. HTC has packed a number of relevant features into the One that are typically reserved for high-end point-and-shoot and mirrorless cameras. The result is the best low-light pictures of any smartphone I have used. Low-light images are the trickiest to shoot with a mobile device, and I generally travel with a DSLR for this capability alone.

Bottom line: the HTC One will rival many mid-range point-and-shoot cameras. Impressive for a smartphone.

Software

I’ve always appreciated HTC’s attempts to add value on top of Android. Their strategy with the Sense UI has been solid since the beginning. As Sense evolved, it got more refined and more polished. The hardcore tech community has generally bashed Sense, but HTC is not targeting the hardcore tech community with it. They are targeting casual smartphone users who don’t want to fuss with their device and who favor ease of use over heavy customization and software tweaking.

Many of the UI changes HTC made help Android get out of the way rather than get in the way, and for the masses that is a good thing. I have not been shy about my frustration with Android as a UI, but HTC has done much to add elements of simplicity and convenience to the platform. HTC’s much simplified app launcher is a great example of this, placing the most recent apps, a search bar, and a quick link to the Google Play store near the top of the app drawer.

HTC has easily created the best Android phone to date for the mass market. Its portable sound and image capture are best in class among smartphones. Considering how the masses use their phones, those two features alone will stand out.

The HTC One will distinguish itself from the pack with the hardware alone. The key for HTC and the carriers that sell it is to market it appropriately. If they can do this, then I think HTC could have a winner on their hands.

My personal preference is still iOS. Using the HTC One, with its larger screen size and iPhone-like design, convinces me even more that I want iOS on a smartphone screen larger than 4 inches. In fact, several times I remarked to people that I wanted iOS on the HTC hardware, specifically the speakers and the camera.

I give many technology recommendations to friends and family alike, and I recommend different devices depending on the type of consumer they are. However, if someone were to ask my advice on which Android smartphone they should get, I would tell them without hesitation: the HTC One.

The School Standards Debate: Time for Tech To Weigh In

School kids (Photo © Monkey Business/Fotolia)

 

Tech people are very fond of whining about the U.S. educational system, complaining that it is not producing the sort of workers they need. With a few notable exceptions (Bill and Melinda Gates and Dean Kamen come quickly to mind), they are much less good when it comes to doing anything about the problems of schools.

OK, here’s your chance. It won’t even cost you anything–calls for better education seem to die quickly in places like Silicon Valley when the talk turns to taxes–except some leadership.

The Common Core State Standards are the most important school reform to come along in many years. The standards for mathematics and language arts lay out what we expect students to learn, year by year, from kindergarten through high school. They are not a curriculum but a set of mileposts for what a curriculum should cover, and they inject a badly needed dose of rigor into education. If you have any interest in K-12 education, you should take the time to read them here.

Despite a studied effort by their authors and sponsors at the National Governors’ Association and the Council of Chief State School Officers to avoid political pitfalls, the standards have come under increasing attack from both the left and the right. CCSS was initially adopted by 48 states and the District of Columbia, but three states have withdrawn their support and there is pressure in many others to do the same.

On the left, opposition to CCSS is closely tied to opposition to standardized testing, based on the assumption, not necessarily warranted, that the standards will lead to increased testing. The anti-testing advocacy group FairTest argues:

More grades will be tested, with more testing per grade. [No Child Left Behind] triggered an unprecedented testing explosion (Guisbond, et al., 2012). The Common Core will compound the problem….

Lured by federal funds, states agreed to buy “pigs in a poke.” The new tests do not yet exist except for a few carefully selected sample items, so it is not possible to judge their quality. Nevertheless, states are committing large sums of taxpayer money for the equivalent of “vaporware”—much hype, little substance. New drugs must be carefully tested before release lest they do more harm than good. Yet, these new measures are being pushed through with at most one year of trials. There’s no guarantee that they will function as advertised and many reasons to believe they will not.

The argument that more study is needed is especially pernicious. CCSS has been in development for more than a decade and, unlike the radical math and science curriculum reforms of the early 1960s (remember New Math?), the new standards are mostly a compilation of best practices already in use. Then there’s the obvious paradox of demanding more evaluation while opposing the testing that could provide the data. (The National Education Association and the American Federation of Teachers, which oppose the use of standardized tests to assess teacher performance, are both on record in support of CCSS.)

But the truly fevered opposition to CCSS is coming from the right, and this is what is threatening implementation in the states, largely through interference by state legislatures. The main objection, despite evidence to the contrary, is that CCSS represents a federal takeover of local education. Then there’s the contradictory complaint that CCSS is untested and that the government is trying too hard to test it. Tiffany Gabbay, writing on the conservative site The Blaze, says:

According to the conservative think tank American Principles Project, Common Core’s technological project is “merely one part of a much broader plan by the federal government to track individuals from birth through their participation in the workforce.” As columnist and author Michelle Malkin has pointed out, the 2009 stimulus package included a “State Fiscal Stabilization Fund” to provide states incentives to construct “longitudinal data systems (LDS) to collect data on public-school students.”

With attacks, often ill-informed (or completely uninformed; many of the people attacking CCSS show no sign of any knowledge of what the standards contain), coming from all sides, CCSS could use some friends, and I think it’s time for the tech industry to step up. I am much more familiar with the math standards than the language arts ones, both because math is my area of interest and because, by the nature of the beast, the language arts standards are vaguer and harder to interpret. The math standards, if properly implemented, would represent a huge step forward. They aim at both increased computational skills, largely deprecated in the standards in use for the past 25 years, and a deeper understanding of the connectedness of critical topics in mathematics. A curriculum based on these standards should produce students better able both to do math and to think more deeply and critically.

This is exactly what tech companies are looking for in their future labor force. So instead of complaining about the deficiencies of American students, get out there and work for some constructive change.

 

HP’s New Servers Take a Page from the Smartphone Playbook

Yesterday, HP launched the Moonshot 1500 server, targeted at the scale-out datacenters that drive today’s and tomorrow’s mobile applications, internet, big data and IoT. In its first instantiation, Moonshot increases density, or the number of servers in a given space, by up to 8X. So for every rack of HP ProLiant servers today, you could put up to 8X the number of servers in its place at the same power and space. This equates to a huge cost savings in power and building space. One of the more interesting parallels is that HP is taking a play out of the smartphone playbook to accomplish this. Let me start with a brief review of smartphone technology ecosystems.
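First, though, to put the 8X density claim in concrete terms, here is a rough, purely illustrative calculation. The servers-per-rack and deployment-size figures below are assumptions for the sake of the arithmetic, not HP specifications; only the 8X multiplier comes from HP’s announcement.

```python
# Back-of-the-envelope sketch of the density claim. All inputs are assumptions
# for illustration; only the 8x multiplier comes from HP's announcement.
traditional_servers_per_rack = 40   # assumed rack of 1U ProLiant-class servers
density_multiplier = 8              # HP's "up to 8X" density claim
servers_needed = 1600               # hypothetical scale-out deployment

racks_traditional = servers_needed / traditional_servers_per_rack                       # 40 racks
racks_moonshot = servers_needed / (traditional_servers_per_rack * density_multiplier)   # 5 racks

print(f"Traditional racks required: {racks_traditional:.0f}")
print(f"Moonshot racks required:    {racks_moonshot:.0f}")
# Per HP, power and floor space per rack stay roughly the same,
# so facility cost scales with the rack count.
```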

Today’s smartphones are powered by an SOC (System on a Chip) designed and sold by a collection of companies like Qualcomm, Apple, Samsung, Nvidia, Intel, Huawei, MediaTek and TI. SOCs, unlike general-purpose processors, have all the capabilities needed to run an entire phone, enabling the wide range of functions from texting to updating social media status, playing games, watching videos, listening to music, making and editing videos, taking and editing photos, and of course, talking. The SOC that makes this happen is actually made up of multiple different accelerators that just happen to be packaged in one chip.

In fact, each smartphone has the following “accelerators” to accomplish a specific task, which I will GROSSLY simplify here:

  • CPU– boots the OS, runs serial app code well, handles multitasking, and is the orchestrator of the phone
  • GPU– accelerates games and the UI, and increasingly handles parallel computing functions, assisting in things like editing photos and videos
  • Video decode– plays back high-definition, 1080p and even 4K video
  • Video encode– converts video to another format when doing things like video editing and even AirPlay to the TV
  • Audio– plays back music at extremely low energy
  • ISP (image signal processor)– required to take pictures and make them look pretty
  • VSP (video signal processor)– required to take videos and make them look pretty; sometimes bundled with the ISP

As you can see, all of the smartphone functionality comes from many different accelerators. The alternative would be to use a general-purpose ARM, Intel, or MIPS-based processor and do everything on it. The problem is that this could use 20-100X the power and generate far more heat.

So what on earth does this have to do with HP Moonshot servers?

The new HP Moonshot servers take an approach very similar to smartphones in that they are designed as specific servers for specific workloads or applications, just as a smartphone uses an SOC. Instead of playing games or watching a video, these servers will execute specific workloads like web serving, web applications, streaming content, analytics, cloud, database and caching. Like an SOC’s accelerators, HP will be offering different server cartridges with on-board accelerators:

  • Processor– general purpose from AMD, Applied Micro, Calxeda and Intel, great on serial workloads and integer parallelism
  • GPU– used for visualization, remote access, cloud gaming, facial recognition, GPU compute
  • DSP– semi-custom, useful in many ways, but starting at video encode and decode
  • FPGA (Field Programmable Gate Array)– extremely custom, popular with government agencies and aerospace for a myriad of tasks you will probably never hear details about.

Like smartphones, HP Moonshot’s design addresses specific workloads and applications with a specific accelerator. HP packages these different accelerators in a server cartridge that can be slid in and out of the Moonshot chassis very easily. I have actually done this and it is very easy. It’s not like installing a PCI Express card; it is more like opening and closing the door of a sports car, very fluid with a “click” at the end.

One other analogy between smartphones and HP Moonshot servers I’d like to share is with regard to ecosystems. I think we can all agree that much of the success of Apple and Google comes from their app store ecosystems. Well-oiled ecosystems that create growing economic opportunity for both sides of the equation work very, very well. Without the iOS ecosystem, Apple would have a handful of apps they wrote and then pointers to web sites for other content. We all know how horrible that would be. Also, the iOS ecosystem enabled applications I am sure were never thought of in the beginning, apps like Instagram, Snapchat, Foursquare, and Pulse. This is a testament to the combined innovation, creativity, and ingenuity a broad and thriving ecosystem can provide.

HP is setting up an ecosystem called the HP Innovation Ecosystem (a mouthful). Its goal is to create a hardware and software ecosystem where, as in the case of iOS, all parties win and the combined innovation creates deliverables much larger than any party could on its own. HP has a really strong start with the hardware: yesterday I saw no fewer than 7 unannounced server cartridges and am promised there are many more on the way. HP will essentially give you the specifications for what the cartridge needs to plug into, and a company can design its own cartridge. In fact, for some government applications, HP may never see the actual server cartridge. This type of hardware ecosystem will be very interesting to watch.

On the software side, HP has enabled multiple OS, tool and app vendors, too, with an early focus on open source Linux. To make coding and testing easier, HP runs what it calls the Discovery Lab, where ISVs and customers developing code can install it on HP’s servers in Houston, eliminating the need to buy a server for development.

This is the most comprehensive ecosystem I have ever seen and I look forward to seeing some applications we would have never thought of in a closed environment.

Ironically, the smartphone and tablet phenomenon has driven the need for a radical new approach to server and datacenter designs.  It’s even more ironic that to accomplish this, HP Moonshot servers have taken a page from the smartphone playbook of executing tasks on specific accelerators and creating broad ecosystems.

Who says servers aren’t cool?  If you would like a very deep dive, I have published a white paper here.

Fox, Aereo, and the End of TV

Aereo antenna array (Aereo, Inc.)

News Corp. Chief Operating Officer Chase Carey’s threat to pull the Fox network from the airwaves if Aereo wins its legal battle to retransmit over-the-air TV signals without paying for them is probably nothing more than bluster. But the fact that he could make such a threat with a straight face, and in front of the National Association of Broadcasters, no less, is a clear indication that the end of TV as we have known it is approaching.

The broadcast networks, especially Fox and the old big three of ABC, CBS, and NBC, are still tremendously important players in the TV world. Far more people watch their content than any of the cable-only channels. They still dominate news and live sports (though ESPN and, to a lesser extent, Fox have made significant inroads in the latter).

But over-the-air is no longer how they reach most of their viewers. And while we still think of broadcast TV as ad-supported, the retransmission consent fees paid by cable carriers, and avoided by Aereo, have become a tremendously important source of revenue for local stations. In a sense, they already are pay TV stations from the point of view of most viewers, and that is why Carey’s threat is not an empty one.

What would it mean if over-the-air broadcast TV disappeared? For one thing, we could forget about the hideously complex incentive auction now being planned by the FCC to free a bit of the prime spectrum now occupied by TV stations for wireless data use and just turn the whole thing over to wireless.

Some of the more interesting consequences would be for politicians. Members of Congress depend on local stations to keep their names and faces in front of voters, especially as local newspapers fade away. Politicians are also the beneficiaries of regulations that require local stations to sell advertising to candidates for federal office at the lowest rates they charge any customer. In fact, if stations stopped broadcasting over the air, the Federal Communications Commission would lose essentially all ability to regulate their content, rates, or much of anything else.

Even the most anti-regulation Republican doesn’t really want that to happen. That’s why Carey’s real audience may have been Congress. If Aereo wins in court, as seems increasingly likely, the broadcasters are likely to turn to Congress for relief. Carey’s statement was likely a shot across the bow in that fight.

But history has shown us that depending on favorable treatment from government to save you from the forces of change can work, but only for a little while. The times they are a-changing for television.

Thinking About The Future of TV All Wrong

I’m convinced that most of the commentary from the pundits and speculators around Apple TV, and the future of TV in general, is all wrong. There are some bits that I think have merit. Thinking about channels as apps, for example, is on the right path. Letting networks and brands have more control of their viewers is also on the right path. Thinking through how we will interact in active vs. passive ways with our content is also on the right path. But at a fundamental level there is something not being emphasized enough in this whole discussion of the future of TV.

TV is a Communal Experience

Right now, for most people, the TV is a communal experience more than it is a personal one. For example, most people’s TV screen is in a communal place. It was designed from the beginning to be something that people gathered around and enjoyed together. This is not going to change. By the nature of the size of the screen and its location, if more than one person lives in a house, the large TV monitor is a shared experience.

Most of the commentary I read around the future of TV brings with it a bias toward an extremely personal revolution rather than a communal one. As I read many of the ideas put forth around the future of television, I get the sense that many assume the TV screen and the entire broadcast experience itself will become more personal. Now, while I think the TV experience will become more personal, I don’t think the large TV screen is where the revolution will take place.

The large television set is a communal computer, not a personal one. Therefore, its evolution will happen within the communal context.

Second, Third, and Fourth Screens

Using a smartphone, tablet, or traditional PC while watching TV is now commonplace among owners of such screens. These devices have something in common which the TV does not: they are highly personal. They are owned and customized, and they are portals to a very personal computing paradigm. So it is on these screens that I am expecting the coming TV revolution.

As we gather around the TV, it is these most personal screens, the ones we have customized, where it makes the most sense to bring a personalized experience to broadcast content.

Nearly every major network studio has an iPad app. Some have Android apps, but not all of them. Not only do the networks have apps, but many individual TV show brands are also beginning to have their own. One only has to look at the Colbert Report app for a shining example of the possibilities when TV shows themselves start creating software.

A Hybrid Entertainment Experience

The key to thinking about the future of TV is to understand that the TV set itself will remain a communal and shared screen. But our personal devices, like tablets and smartphones, will increasingly become the avenues by which what we watch on the big screen becomes personal and even intimate. Of course both these screens will still function as independent entertainment experiences, but the real revolution will come when you use them together.

The real shift is that content companies (like the big networks) will also need to become software companies. It is my belief that the television is the laggard in the computing paradigm. It is the screen that has yet to truly become a platform which software developers can take advantage of. When this happens, the TV revolution will begin and take us on a path no one yet envisions.

How To Beat Patent Trolls: Fight

Troll image (© DM7 - Fotolia.com)

When faced with a lawsuit that has even a slim chance of success, lawyers almost always urge businesses to settle rather than fight. Litigation is extremely expensive, and unless the suit raises an issue of principle that is important to defend, the game simply isn’t worth the candle.

Unfortunately, in the world of patents, this attitude has led to a proliferation of patent trolls: companies that buy up unused and generally vague software patents and then claim infringement against businesses, often smaller companies without big legal budgets, that actually make things. The U.S. District Court for the Eastern District of Texas, which has been remarkably friendly to trolls, is the heart of the racket.

It would be nice if the U.S. Patent and Trademark Office would revoke the thousands of ill-considered patents it granted, especially in the early days after software patents were first allowed. It would be nice if Congress changed the laws to make it harder for so-called non-practicing entities to engage in a legal shakedown. But neither of these things is likely to happen any time soon.

So it is time for businesses to stand up and fight. Patent trolling will persist as long as it is a profitable activity. By raising the cost to the trolls, admittedly at some short-term cost to themselves, businesses can destroy the economics of the shakedown.

Rackspace Hosting, an infrastructure-as-a-service company beset by trolls, is leading the way. Last month it won a signal victory by obtaining summary judgment against a company that claimed a patent on rounding off floating-point numbers. (Rackspace was supported in the case by Red Hat Software, whose Linux implementation contained the allegedly infringing code.)

Now Rackspace has gone on the offensive, filing a breach-of-contract suit against “patent assertion entities” Parallel Iron and IP Nav. The case, described in detail in this Rackspace blog post, is legally complicated. Parallel Iron is suing Rackspace for infringement of a patent it claims covers the open-source Hadoop Distributed File System. Rackspace argues the suit violates the terms of an earlier stand-off agreement it negotiated with Parallel Iron and IP Nav.

Rackspace, which says it has seen its legal bills rise 500% since 2010, explains why it has decided to fight:

Patent trolls like IP Nav are a serious threat to business and to innovation. Patent trolls brazenly use questionable tactics to force settlements from legitimate businesses that are merely using computers and software as they are intended. These defendants, including most of America’s most innovative companies, are not copying patents or stealing from the patent holders. They often have no knowledge of these patents until they are served with a lawsuit. This is unjust.

The rest of the tech industry shouldn’t leave this battle to the Rackspaces of the world. In particular, big companies with deep pockets should stop paying trolls to go away, a tactic that makes sense in the short run but is ruinous in the long run. As independent software developer Joel Spolsky argues:

In the face of organized crime, civilized people don’t pay up. When you pay up, you’re funding the criminals, which makes you complicit in their next attacks. I know, you’re just trying to write a little app for the iPhone with in-app purchases, and you didn’t ask for this fight to be yours, but if you pay the trolls, giving them money and comfort to go after the next round of indie developers, you’re not just being “pragmatic,” you have actually gone over to the dark side. Sorry. Life is a bit hard sometimes, and sometimes you have to step up and fight fights that you never signed up for.

 

Why Google Shouldn’t Be Concerned About Facebook Home

Yesterday, Facebook announced “Home”, a skin that runs on top of Android, pulling the consumer’s Facebook experience up to literally the lock screen of the phone. The demos were fast, fluid, and very different from anything Android has to offer. A lot of press coverage ensued about the big threat this could bring to Android. Tech.pinions’ own Steve Wildstrom got in on the action, too. The drama is fun, but nothing could be further from the truth of how this will play out. Facebook Home, in its current form, is nothing more than a skin like MotoBlur, Sense and TouchWiz, and it will encounter the same consumer push-back and the same carrier and handset-maker challenges.

Some of the theories used to justify the big threat to Google went like this:

  • It’s harder to get to native Google search, their bread and butter
  • Friend updates show up on the lock screen, eliminating the need to get into your phone and Google services
  • Home will lead to Android forking, causing more fragmentation and more app incompatibility

The problem is, none of these logic paths end with the destruction of Google or Android. Let’s peel back the onion.

Anything that slows down the experience of a phone will ultimately get disabled or make consumers very unhappy. Consider the skins that the major manufacturers install. There isn’t a single one that doesn’t slow down the base experience when compared to a native Nexus phone. Not a single one. I doubt that Facebook Home has found some magical way to crack the code on how to place a layer onto a layer on top of an OS and make it fast. The demos were fast and fluid, but I am highly skeptical that it will actually work this well. Only Google holds the keys to this, as it involves deep access to the kernel of Android, not the base Android APIs. You think Google gave Facebook access to that? No way. Facebook will be constantly chasing multiple versions of Android, never able to get the experience where they need it, and it will be slow and buggy.

The next issue with Facebook Home is that it doesn’t enable the total experience. Users will be abruptly moving back and forth between Home and the rest of their phone, kind of like switching between two different phones. While not as jarring as moving back and forth between Windows 8 Metro and Desktop, it is still like having two different phones. Facebook Home offers Facebook and Instagram capability, the address book, Messenger, and even repackages texts. But what about the other things you want to do with your phone? Things like searching for the nearest restaurant, driving directions, tweeting, taking pictures, or web search? Does anyone really think that if Facebook makes those critical usages more difficult to access, consumers will like it? The promise of Facebook all the time will be extinguished by the complexity of having two experiences, or two phones.

Let’s now address control on two levels: control of Android and control of the experience. Let’s start with Android control. Google controls Android and can change the terms and conditions as it sees fit. Android isn’t Linux; it’s owned by Google, and they can do what they choose with future versions. If Facebook Home should surprisingly gain popularity, Google will simply change an API or a condition of Google Play or the Android license to make life difficult for Facebook. It’s no different from what Microsoft has done for years on Windows, and I don’t see that changing if or when Tizen or Windows 8 becomes more popular. Now let’s look at control of the experience. Facebook Home has a built-in governor. The carriers and handset makers know from Apple that those who control the experience hold the keys to the kingdom. Sure, the carriers and handset makers will take Facebook’s revenue-share deal and engineering resources, but don’t think for a second they will keep doing it if Home starts to get too much traction. Therefore Facebook Home can only get limited traction, or it will get shut down by carriers and handset makers, which forces Facebook to do what they didn’t want to do: build their own phone.

In summary, the Facebook Home announcement showed some nice-looking demos of Facebook and how the Facebook experience could be improved. It doesn’t show, however, how the holistic phone experience is improved. Consumers do more than Facebook on their phones, and that’s where Home breaks down. Consumers don’t want different experiences, they want one connected experience. Didn’t Apple teach us that? Even technically, Facebook will have challenges delivering a fast and engaging experience because, like the skin makers, they are constantly chasing a moving target. They have the same access to the APIs as everyone else, and only Google holds the keys to the kernel. If Facebook Home ever does get traction, it will be fleeting, because Google can and will change something in Android or change the terms and conditions to make life difficult. Carriers and handset makers will gladly take Facebook’s money now, but if Home gains too much traction, they will be forced to drop it or else lose control. They don’t want two Googles.

Facebook Home will be a niche offering until Facebook can build out a winning set of holistic phone services and apps; given the issue of control, Facebook will ultimately need to get into the phone business, a tall and risk-laden order.

Facebook Home: The Death of Android

Facebook Home Chat-heads (Facebook)

As a core operating system, Android is thriving. As a brand, and as a user experience, it is dead. Facebook just killed it.

Android’s brand demise has been coming for a long time. Phone makers have been taking advantage of Android’s open architecture to install their own modified versions, such as Samsung’s TouchWiz. The most recent Android launches, the Samsung Galaxy S 4 and the HTC One, have barely mentioned Android. And in announcing Facebook Home, Mark Zuckerberg talked about Android only to say that Facebook was taking advantage of the openness of both Android and the Google Play Store to let anyone with a fairly recent Android phone replace the Android experience with the Facebook Home experience.

I don’t know how many people will want Facebook completely dominating their phone experience. I’m out of the target demographic by more than a generation, so I’m probably a poor judge. But I’m pretty sure Facebook’s announcement won’t be the last of its sort. Maybe we’ll see a Twitter Home, or a Microsoft Home built around a growing suite of Windows/Skype/Xbox/SkyDrive products.

All of this seems to leave Google in some difficulty. Facebook is a direct competitor to Google’s primary business of delivering customers’ eyeballs to advertisers. Google’s considerable difficulty in monetizing Android just got considerably worse, and things are likely to go downhill from here.

Of course, one thing Google could do, at the risk of being evil, is lock down future releases of Android. That, however, might well be locking the barn door too late. Open source and free (as in speech) versions of Android are out there and Google action might well be viewed as just another fork of Android.

Google never seemed to know just what it wanted to do with Android. Now it may be too late to figure it out.

3 Years Of iPad Schadenfreude and Lessons Learned


On April 3, 2013, the iPad turned three. Jay Yarrow over at Business Insider has put together a great summary of How The iPad Totally Changed The World In Just Three Years. A couple of highlights:

— Apple has sold some 140 million iPads for around $75 billion in sales.
— The iPad is one of the fastest growing consumer products ever.
— iPad-inspired tablets have virtually destroyed the netbook market, are expected to exceed notebook sales this year, and are expected to exceed notebook and desktop sales by the end of 2014
— iPad revenue alone is bigger than all Windows revenue
— Traditional software houses like Amazon, Google and Microsoft are all making their own versions of the iPad
— The iPad is popping up everywhere, including airplane cockpits, restaurants and as cash registers. I would add that almost one third of doctors in the U.K. now own tablets and tablets are rapidly spreading into education at every level.

Schadenfreude

I think it’s an understatement to say that the iPad has been an overwhelming success – the biggest technology shift of our generation – which is why it’s all the more delicious to put on our 20/20 hindsight glasses and mock those who got the iPad oh-so-very-wrong those three years ago. However, rather than dwelling on how wrong the iPad’s critics were, let’s focus instead on why they were wrong and see if we can learn from their mistakes.

Screen Size Matters

“You might want to tell me the difference between a large phone and a tablet.” ~ Eric Schmidt, Google, 10 January 2010

Turns out that size really does matter. Some things are better done on a larger screen. Further, a larger screen demands that apps be re-written to accommodate their larger size. Apple recognized this and now they have over 300,000 apps specifically optimized for the iPad. Google has been slow to recognize this fact and their tablet sales have suffered for it.

Focusing On What It Isn’t

Things the iPad can’t do:

1. No Camera, that’s right, you can’t take pics and e-mail them.
2. No Web Cam, that’s right, no iChat or Skype Video chatting.
3. No Flash, that’s right, you can’t watch NBC, CBS, ABC, FOX or HULU.
4. No External Ports, such as Volume, Mic, DVI, USB, Firewire, SD card or HDMI
5. No Multitasking, which means only one App can be running at a time. Think iPhone = Failure.
6. No Software installs except Apps. Again think iPhone = Failure.
7. No SMS, MMS or Phone.
8. Only supports iTunes movies, music and Books, meaning Money, Money, Money for Apple.
9. WAY, WAY, WAY over priced.
10. They will Accessorize you to death if you want to do anything at all with it and you can bet these Accessories will cost $29.99 for each of them.
11. No Full GPS*
12. No Native Widescreen*
13. No 1080P Playback*
14. No File Management*

What an utter disappointment and abysmal failure of an Apple product. How can Steve Jobs stand up on that stage and hype this product up and not see everything this thing is not and everything this thing is lacking?

~ Orange County Web Design Blog, 27 January 2010

There are dozens upon dozens of these lists and their particulars don’t really matter. The truth is that we often tend to focus on what a new product or service DOES NOT do instead of focusing on what it does really well. The iPad made for a terrible phone and a terrible notebook computer. And that’s as far as most people could see. The netbook did a lot of things but it didn’t do any of them well. The iPad did far fewer things than the notebook or even the netbook, but it did some of those things extraordinarily well. In most instances it’s what something does, not what it doesn’t do, that matters most.

Tradeoffs

“Why is the iPad a disappointment? Because it doesn’t allow us to do anything we couldn’t do before. Sure, it is a neat form factor, but it comes with significant trade-offs, too. No 16:9 widescreen, for example.” ~ David Coursey, PC World, 28 January 2010

“I don’t get it. It costs $500 for the basic model, when you could get a laptop with a lot more functionality for about the same price. The iPad hype machine has been in full effect this week, and I still think it’s just that—hype. If I turn out to be wrong, I’ll gladly eat my words, but I’m pretty sure that I’m not wrong ” ~ Alex Cook, Seeking Alpha, 3 April 2010

EVERYTHING has tradeoffs. The key is to get asymmetrical tradeoffs that give more than they take away. The iPad gave people mobility, simplicity, ease of use and seamless integration with a virtually endless number of applications. It gave up power, size and complexity. Turns out, for most people, that was a trade that was well worth making.

It’s Not The Consumer’s Job To Predict The Future

“Before Jan 20th, only 26 percent of people said they were not at all interested in buying an Apple tablet. That number jumped to 52 percent after the announcement. Before Jan 20th, 49 percent of people said they didn’t think they needed an Apple Tablet. That number jumped to 61 percent after the announcement. Fifty-nine percent of buyers wouldn’t pay extra for 3G coverage. Whether this device becomes a big hit is anyone’s guess but based on this study it sure looks doubtful.” ~ Retrevo, 5 February 2010

“We of course build plastic mock-ups that we show (to customers)…we had a slate form factor. The feedback was that for (our) customers it will not work because of the need to have (a physical) keyboard. These were 14-year-old kids, who, I thought, would be most willing to try a virtual keyboard but they said no, we want the physical keyboard.” ~ Mika Majapuro, Worldwide Sr. Product Marketing Manager, Lenovo, 22 February 2010

“The recent launch of Apple, the iPad tablet, has won the award for the second edition of Fiasco Awards delivered this Thursday in Barcelona. From the more than 7,000 people who voted via the website www.fiascoawards.com, 4,325 have considered it the fiasco of the year. Voters through the web have decided that they want the iPad to follow a path similar to the U.S. President Obama with his Nobel Prize, receiving an award before its career starts. However, if within a year the market’s response to the iPad is not the predicted fiasco, the organization will present the 2010 edition of the Fiasco Awards as a finalist to receive the same award next year.” ~ Fiasco Awards, 2010, 11 March 2010

Can we just stop pretending that consumer polls and questionnaires have any validity when it comes to predicting future behavior? How is the consumer supposed to evaluate a wholly hypothetical product, especially a revolutionary product, before they’ve even had a chance to use it? Heck, the brightest minds in tech got the iPad wrong even AFTER Apple showed it to them. Why then do we constantly put stock in the opinion of consumers with regard to products that do not yet exist?

Sizzle vs. Subtle

“Yet for some of us who sat in the audience watching Steve Jobs introduce the device, the whole thing felt like a letdown.” ~ Daniel Lyons, BusinessWeek, 28 January 2010

“I think this will appeal to the Apple acolytes, but this is essentially just a really big iPod Touch.” ~ Charles Golvin, Forrester Research, 27 January 2011

Turns out that being a big iPod Touch was all that it needed to be.

The tech press always wants fireworks and is immediately bored by nearly everything not new or different. However, that’s not how people in the real world respond to products. Sometimes subtle is more powerful than sizzle and sometimes subtle is more sublime as well.

Tech is no longer the province of an elite. Tech is now a mass market product that is used and mastered by the majority of humankind. We need to stop thinking about how things affect us personally and start thinking about how they affect the majority of their intended users instead.

Niche

“Thus, a reasoned analysis is that the iPad is to the iPhone & iPod Touch as the MacBook Air is to the MacBook. In other words, a cool product with a devoted base of happy customers, but in relative terms, a niche product in Apple’s arsenal of rainmakers.” ~ Mark Sigal, O’Reilly Radar, 28 January 2011

That’s pretty much how I saw it too. I thought that the iPad would be a successful niche, like the MacBook Air. I was very wrong about the iPad…and I was wrong about the MacBook Air, too. Sheesh, 20/20 hindsight is a cruel mistress.

Schadenfreude Redux

“Anyone who believes (the Ipad) is a game changer is a tool.” ~ Paul Thurrott, Paul Thurrott’s Supersite for Windows, 5 April 2010

What the heck. Not everything has to be a life lesson. A little Schadenfreude can be a good thing too.

The TV Cartel Is Starting To Crack

Aereo antenna array (Aereo, Inc.)

By any reasonable standard, Aereo is a ridiculous service. But the rules and contracts that cover the distribution of television content are anything but reasonable. And that means that Aereo, silly as it is, could be the beginning of the end for the cartel of studios, sports leagues, broadcasters, networks, and cable and satellite distributors that has a headlock on content.

Aereo, which is backed by IAC/Interactive Corp. and its wily CEO, Barry Diller, invented a new way of distributing broadcast television. If you subscribe (currently available only in New York) for $12 a month, you are assigned a tiny TV antenna in an array of antennas (pictured above) in a Brooklyn data center. The content, all over-the-air broadcast stations in the area, is converted to an internet stream and delivered to your iPhone or iPad, computer browser, Apple TV, or Roku box. The service also functions as a DVR in the cloud so you can time-shift your viewing.

The silliness is that broadcasters ought to cut out the middleman and stream broadcasts themselves. But local stations can only stream their own content, mostly local news. Networks could stream a lot more, but only content they own outright or have the streaming rights for (a restriction that excludes most sports and much else). Besides, local stations, networks, studios, sports leagues, and cable companies are locked into a system of contracts, often long-term, which no one wants to break because, in the immortal words of Milo Minderbinder*, “everyone has a share.”

It’s obvious why Aereo poses a threat to this cozy relationship. So it’s not surprising that pretty much every station in New York filed suit claiming that Aereo violated their copyrights. They argued that Aereo was essentially acting as a cable company and was required to negotiate what is called “retransmission consent,” a privilege that typically requires a hefty fee. But Aereo carefully exploited every corner and loophole in the law. Those individual antennas, technically quite unnecessary, allowed it to argue that it was merely piping over-the-air content to customers from their own antennas. And it made sure to deliver content only to subscribers within stations’ service areas, thereby honoring local exclusivity requirements.

Aereo won the first round of the legal battle when a district judge denied an injunction blocking the service. And in a potentially much more important decision, the Second Circuit Court of Appeals, in a 2-1 decision, affirmed the lower court decision. The only bright spot for the broadcasters was the dissent of Judge Denny Chin, who called the approach of individual antennas “a sham” and “a Rube Goldberg-like contrivance over-engineered in an attempt to avoid the reach of the Copyright Act and to take advantage of a perceived loophole in the law.”

It’s not clear what will happen next in the case. The TV stations could request an en banc review by the full Court of Appeals or appeal to the Supreme Court, but both are fairly long shots legally. Aereo CEO Chet Kanojia told The Verge he expects the broadcasters will turn to Congress for legislation blocking Aereo. Local broadcasters still carry considerable heft on Capitol Hill, primarily because members count on local news to provide vital free media during campaigns.

But the loss of the Aereo case is not the only ill omen for broadcasters and networks. Another major blow to the status quo was the success of House of Cards, the slick, high-budget original series on Netflix. While Netflix won’t give out viewer numbers, the company is clearly pleased with the effort and plans to expand it. Original internet programming that can compete straight-up with HBO and Showtime has to make those networks start rethinking their dependence on cable and satellite companies for distribution. For now, they make their content available online only to viewers who are already subscribers. They know full well that a lot of people are viewing pirated versions of their shows (the season premiere of HBO’s Game of Thrones set a BitTorrent volume record) and they know that subscribers are sharing their IDs and passwords with non-subscribers. For now, they are prepared to tolerate the loss (assuming that folks getting content illegally but for free would be willing to pay for it if it were available a la carte). But this is purely an economic calculation, not a conviction, and it will change when the economics tip.

The condition of the television business shouldn’t be confused with the collapse of the record industry. The music business was in trouble before it was hit with large-scale piracy and the record companies made things worse through denial, resistance, and the idiotic strategy of suing customers. The TV industry knows it has to move into a new era. But the current arrangements are highly profitable and it wants to proceed with all deliberate speed.

In the end, that may not be possible. Dish Networks CEO Charlie Ergen sees the end coming. “One of two things will happen,” he said at the D: Dive Into Media conference in February. The rising cost of content will present an incumbent distributor “with a deal they just can’t stomach” and they’ll blow the system up. “But more than likely, they’ll just die because somebody will come in underneath them on price. The likeliest candidates are Amazon or Netflix. Possible Apple. And Microsoft could do it.”

——–

*If you don’t know who this is, you should stop whatever you are doing and read Joseph Heller’s Catch-22.

 

Samsung is Stepping Into the Spotlight

Something very interesting is happening and I will be very interested to see how it plays out. Samsung is stepping into the spotlight and arguably taking it from Apple. Apple, for the past 10 years or more, has been the unparalleled focus of the mainstream media, and for good reason. In 2010, when I started helping on the business side of things at the tech blog SlashGear, I got to have great conversations with nearly all the major bloggers. Throughout my conversations with them, one common thread emerged. Every site remarked about how writing about Apple was page view gold. And in a business where page views generate more advertising dollars, over-covering Apple from every angle was, and still is, a business strategy.

As of late, many of the same conversations I have had with media influencers and editors are revealing a new thread. Writing about Samsung is now also quickly becoming page view gold. As you may have seen, there was more content than necessary leading up to the Galaxy S4 event, and then even more harsh content and scrutiny of the event itself. Maybe Samsung is getting what they want by being in the spotlight, but it comes with a price.

Being under the microscope and managing the burden that comes with it is something few companies have had to do. It is now something Samsung must do, and it will be fascinating to watch how their management handles it. The media and Wall St. can be extremely, and almost universally, unfair to companies in the spotlight and under the microscope. Being a leader almost always means you also get arrows in your back. I’m assuming Samsung was hoping to get more attention, but I’m not sure they are fully ready for the hostility that comes with the spotlight.

Are They Ready For It?

This is the real question. Executives, folks in PR (both internal and at the external firm), those in investor relations, board members, and so on will all have to learn the unique position of being in the spotlight. This may be particularly tough on the PR folks and those at the external agency. Those folks’ jobs are often judged not just on the amount of press coverage but on the quality of the press associated with the company or a product. When you are under the microscope it may often feel like everyone is out to get you, and for a company that has never dealt with what seems like media hostility, it may be hard to handle.

Samsung is also an Asian company, and, as is often the case in Asian culture, criticism is taken very personally. Not taking extremely harsh criticism from Wall St. and the media personally is going to be a challenge for them.

Passion and Personal Computing

If Samsung does their job right with both their brand and their products, they will create a sense of passion around the brand. This is something few companies in personal computing have accomplished, and it is necessary if you want to create a sustainable brand yielding loyal consumers. With it, however, comes the possibility of a polarizing effect. I can think of no more polarizing brand in computing than Apple, and as we can see it yields loyalty but also hostility. Samsung may also be heading in that direction. If they are not careful, they may create astronomical expectations that can never be satisfied in the eyes of the media.

Is it Good for Apple?

This is also a very interesting question. In my 13 years as an industry analyst I have observed how the media has covered Apple. There have been many positives, but it has led to a hype machine that got completely out of control. This led to the external reality distortion field I have referred to as of late. Even though my sense is that the Apple hype machine has lessened, and Samsung taking some of the spotlight may be part of that, it still seems as though nothing Apple does is good enough. Perhaps Samsung taking more of the spotlight will work more in Apple’s favor from a media standpoint than many think, primarily because it will give the media a target other than Apple.

I actually believe this is good for Apple, and having two companies compete for mindshare is very good for the industry. The media has an insatiable appetite, but having more story lines than just Apple to focus on may help bring some needed balance.

The spotlight can only focus on a few but the fact that it is focusing on more than just one is a good thing. From what I can gather, managing being in the spotlight can be very rough. Apple has learned to manage it marvelously and we will now see if Samsung can.

Want To Sell Used Digital Content? Not So Fast

Just two weeks after the Supreme Court stopped a publisher’s attempt to impose tight limits on the ability of purchasers to resell books, a federal judge in New York has reminded us of the limits on our resale rights when it comes to digital products. In Kirtsaeng v. John Wiley & Sons, the Supreme Court ruled 6-3 that the “first sale” doctrine applies to goods made outside the U.S. and that a purchaser has the right to resell a book no matter where it was published.

Today’s decision by Judge Richard J. Sullivan of the U.S. District Court in Manhattan appears to end the effort by ReDigi to create a market in used digital music. The judge granted Capitol Records’ motion for summary judgment and while he did not immediately issue an injunction against ReDigi’s operations, that seems likely to follow.

The decision is highly technical and turns on a distinction between what copyright law calls a “phonorecord” and a sound recording. If you own a vinyl or CD recording–a phonorecord–you are free to sell it, but not so with a digital copy. In essence, the judge said that if Congress wants to create a right to resell digital content, it may do so, but absent such action, forget about it: “[T]he Court cannot of its own accord condone the wholesale application of the first sale defense to the digital sphere, particularly when Congress itself has failed to take that step.”

Holding Apple to a Higher Standard – Solving Texting While Driving

I love my iPhone. I use it all the time. I take it with me everywhere. Yes, everywhere. I have tried and tested numerous smartphones over the years. I can confidently state that you can do no better than the iPhone. However, iPhone – Apple – can do better by us. Too many of us are texting while driving, and dying. More than nine people every day, in fact. This has to stop.

Yes, it’s easy to claim that people’s foolish behavior is in no way Apple’s fault. Probably, you are right. I don’t care. I hold Apple to a higher standard. I don’t pay a “premium” to purchase Apple products. There is no “Apple tax.” I pay Apple’s higher prices because their products are the best: the best value, the easiest to use, the most intuitive, the most functional.

Apple even promotes this idea. Witness their latest marketing campaign for iPhone. No pretty women in leather jumpsuits, no ninjas, no lasers – no need. Instead, the powerful truth: iPhone is an amazing device, simple to use, and offers a nearly unending amount of fun and function for everyone – from anywhere, as their iPhone “Discovery” ad makes plain.

iPhone ad anywhere

The iPhone doesn’t merely dominate the U.S. smartphone market; it dominates pretty much every relevant metric for smartphone use and engagement. Tragically, we remain engaged with our iPhones even while driving.

According to a recent AT&T study, nearly half of adult drivers in the U.S. admit to texting while driving. Over 40% of teens admit to texting while driving. Worse, the numbers are rising.

It’s not ignorance causing this. The texters-and-drivers are fully aware of the potentially deadly and devastating consequences of their actions. Doesn’t matter. They text anyway. No doubt they also tweet, check Facebook, choose a playlist and more, all while behind the wheel.

What’s Apple going to do about this?

Yes, I want Apple to do something. Because possibly only Apple can do something to fix this. Apple gave us the smartphone revolution. The iPhone changed everything. We now use the iPhone – and all the copycat smartphones – everywhere we go, no matter the setting, no matter who we are with. This recent IDC study, for example, noted that well over half of all Americans have a smartphone and a vast majority of us reach for our smartphones the moment we wake up and then never put them away. We use them in the movie theater, at the gym, while we are talking to other people in real life. Don’t believe that getting behind the wheel of a car suddenly changes any of that, whether it should or not.

No, I do not care if it’s unfair to place any blame for our behavior on Apple. The fact is, we text while driving. We aren’t going to stop. Apple needs to accept some responsibility for what they have wrought. As much as I want a beautiful Apple Television, as much as you may want an iWatch, and as cool as this patented wraparound display iPhone is, none of that should be a priority for Apple until the company makes using the iPhone while driving a much, much safer proposition. Or impossible. Either way, the problem needs to be fixed, soon.

Possible solutions? Honestly, I don’t know. Perhaps the iPhone will recognize when we are driving and simply stop working. Maybe Apple can require apps to mess up when we are in a moving vehicle – not autocorrect our texts, for example. Maybe Apple engineers can get Siri to work great, all the time, whether for texting, tweeting, checking our calendar, or selecting a playlist. I don’t have the answers. That I leave to Apple. And we need the best they can give us.
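For what it’s worth, the basic decision logic of a “driving lockout” is not hard to sketch. The snippet below is purely illustrative (my own hypothetical names, speed source, and thresholds, not anything Apple has announced or shipped): guess from the device’s reported speed that we are in a moving vehicle, then hold non-urgent messages until the vehicle stops.

```python
# Purely illustrative sketch of a "driving lockout." The speed threshold,
# the speed source, and the passenger override are all hypothetical
# choices for the sake of the example, not Apple's design.

DRIVING_SPEED_MPS = 9.0  # roughly 20 mph; above this, assume a moving vehicle

def should_defer(speed_mps, confirmed_passenger=False):
    """Return True if incoming texts should be held rather than shown."""
    return speed_mps >= DRIVING_SPEED_MPS and not confirmed_passenger

held_messages = []

def handle_incoming(message, speed_mps):
    if should_defer(speed_mps):
        held_messages.append(message)        # quietly queue it for later
    else:
        print("Delivering:", message)

handle_incoming("Dinner at 7?", speed_mps=13.4)  # held while driving
handle_incoming("Dinner at 7?", speed_mps=0.0)   # delivered once stopped
```

The hard part, of course, is not this logic but telling drivers from passengers, which is exactly why I leave the problem to Apple.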

Slogans, such as those from AT&T’s “It Can Wait” campaign, are unlikely to work, I suspect.

it can wait texting

It Can Wait videos admittedly offer some truly heartbreaking stories of people whose lives have been irreparably and profoundly damaged because someone was texting while driving.

Tragic, sad – but how will this help? As AT&T’s own study says, 98% of those who text while driving already know it’s bad.

It was sobering to realize that texting while driving by adults is not only high, it’s really gone up in the last three years.

That quote is from Charlene Lake, AT&T’s senior vice president for public affairs. You think more marketing is the answer? No. Showing tragic stories may shock a few into proper behavior, I don’t doubt. Realistically, however, this is that rare case where we need a technical solution for a cultural problem.

According to TechCrunch:

The Center for Disease Control says that there are an average of nine people killed in texting-related accidents each day, with 1,060 injured in texting-related crashes.

Since texting occupies your eyes, hands, and mind, it’s considered one of the most dangerous distractions on the road, and elevates the risk of a crash to 23 times worse than driving while not distracted.

Nine people killed every single day. Read that again. Nine people die every single day from texting-related accidents. Going to stop what you’re doing now that you know?

I don’t believe you.

Apple gave us the iPhone. It was like nothing ever before. But Apple’s job is not complete. The iPhone is magical and revolutionary. We mortals have not yet learned to fully control its power. We need Apple’s help.

Images taken from Apple’s iPhone “Discover” commercial and AT&T’s “It Can Wait” campaign against texting and driving.

Apple’s Cloud Conundrum

Photo of tornado (© James Thew - Fotolia.com)

 

Apple is really bad at the cloud. And while that is not hurting the company much today, it is going to become a growing problem as users rely on more and more devices and come to expect that all of their data will be available on all of their devices all of the time.

Apple’s cloudy difficulties are becoming apparent through growing unhappiness among developers about the many flaws of Apple’s iCloud synchronization service. Ars Technica has a good survey of developers’ complaints about the challenges iCloud poses for developers. This long Tumblr post by Rich Siegel of Bare Bones Software is a deeper dive into some moderately technical detail.

These developer issues matter to both Apple and its customers because iCloud is not being integrated into third-party apps, and some that have integrated it are abandoning it. This leaves users with limited and often complicated solutions for access to their data. Like most tech writers, I’m an extreme case, working regularly on a large assortment of devices in different ecosystems. I rely on a variety of tools to sync my data, an approach that can be a configuration nightmare. But even someone living entirely within the Mac-iOS ecosystem cannot count on iCloud to provide anything near a complete solution. Just try to move a PDF document from a Mac to an iPad.

The fact is that both Microsoft and Google are far ahead of Apple in cloud services. Microsoft has built on its years of experience with SharePoint and Exchange, plus such commercially unsuccessful but technically important projects as Groove and Live Mesh, to build SkyDrive and its associated services. Google has always lived in the cloud and has put its expertise behind Google Drive. Smaller vendors, such as Dropbox and SugarSync, also offer solutions far superior to Apple’s. But all of these companies have taken years to get where they are, in large part because this stuff is really, really hard. None of them offers a complete multiplatform, multidevice, multi-application solution, but they are getting there.[pullquote]The fact is that both Microsoft and Google are far ahead of Apple in cloud services. And this stuff is really, really hard.[/pullquote]

Cloud information management solutions are only going to get more important as users choose among multiple devices to pick the one best suited to the need at hand. For many, these devices will be heterogeneous, perhaps an Android phone, an iPad tablet, and a Windows PC. The winners will be service providers who make a full range of services available to all devices on all platforms. Microsoft and Google come close, working hard to look beyond Windows and Android, respectively. Apple provides only grudging iCloud support to non-Apple devices, another self-imposed handicap.

Apple has the advantage of starting in this new multidevice world with the best-integrated solutions. But it is in serious danger of blowing that lead unless it can drastically improve its cloud offerings.

And one more thing: The cloud imposes new security challenges for service providers. This is a problem no one has solved yet, but Apple has failed particularly miserably. Check out this Verge article for a good rundown on iCloud security failings.

The New Era of “Good Enough” Computing

A few weeks back I was one of the first to write about Windows Blue, and in this column I discussed how Windows Blue could be used on tablets in the 7” to 10” range as well as in clamshells up to 11.6 inches.

We are now hearing that this particular version of Windows Blue will be priced aggressively to OEMs and could go to them for about $30 compared to the $75-$125 OEMs pay for Windows 8 on mainstream PCs.

But to use this low-cost version of Windows Blue, we understand there are some important caveats that go with it. For this pricing, it can only be used on Intel’s Atom or AMD’s low-voltage processors. These chips were designed especially for use in tablets and, as I pointed out in the article I mentioned above, this would give Microsoft a real opportunity to get Windows 8 tablets into the market that could go head-to-head with Apple’s iPad Mini and most mid-level 7”-8” Android tablets as well.

Netbook 2.0?

As for clamshells, they too need to use these processors from Intel and AMD to get this pricing for Windows 8 (Blue). What’s interesting about these clamshells is that we understand they will be fully touch-based laptops with very aggressive pricing. In some ways, these clamshells with these lower-end processors could be looked at as Netbook 2.0, but for all intents and purposes, these will be full Windows 8 touch laptops, only with processors that are not as powerful as Intel’s Core i3, i5 or i7 chips or similar ones from AMD. They will also be thin and light and could easily be categorized as Ultrabooks as well.

Windows 8 Blue is one way to get Windows 8 into more products and make it the de facto Windows OS standard across all types of devices, especially the 7” to 8” tablet segment that we predict will be as much as 65% of all tablets sold by 2014. We also hear that the Windows RT version of Blue will take aim at 7”-8” tablets, which means that the ARM camp will have a play in this market as well. However, its use in an x86 clamshell could have a dramatic impact on the laptop market and have unintended consequences for OEMs and chip companies as well.

The ramifications could come from a major trend in which tablets are becoming the primary digital tool for most users. The smaller tablets are used more for consumption, but the 10” versions can handle both consumption and productivity in many cases. This translates into the fact that tablets are now handling about 80% of the tasks people used to do on a laptop or PC. That means traditional laptops or PCs now handle only 20% of these users’ needs and are mostly used for media management, handling personal finances, and writing long documents or long emails.

New Price Segments

When we ask consumers who have tablets about their future laptop or PC purchases, we are told that for many, if the laptop is only used for 20% of their digital needs, then they will either keep what they have longer or, if they do buy a new laptop or PC, it will be a relatively cheap one. Consumers who are not interested in Macs tell us that the top amount they want to spend on these products is $599. This suggests two key things for the PC industry that could be quite disruptive. The first is that there would be a bifurcation of the laptop and PC market into distinct sectors: one focused on the consumer, where all PC products have to be under $599, and the other what we call the premium market for laptops and PCs, whose buyers are willing to pay $999-$1499 for their computing tools because of more advanced computing needs. This premium segment is mostly tied to enterprise and the upper end of the SMB market. In fact, the price for PC products in this upper premium price range has proven to be quite resilient.

The second key thing is that mid-level priced laptops and PCs could end up in a no man’s land. PC products in the $699-$899 range could take a pretty big hit while demand for products $599 and under could skyrocket. We believe this trend would usher in an era we call “Good Enough” computing, a term we became intimately acquainted with during the first Netbook phase. To some degree, the robust sales of Chromebooks suggest this era has already started. But it would pick up if users could get full touch-based clamshells that look like Ultrabooks and are priced well under $599. We are actually hearing that when these come out in time for back to school they will be priced from $499-$549, and the target would be to get them to around $399 by early next year.

At Creative Strategies we are in the early stages of analyzing the potential impact of these Windows Blue low-cost clamshells, but our early take is that they could be huge hits and have a serious impact on demand for laptops or PCs in the midrange, which has been a very important segment for the OEMs and CPU companies in the past. If this happens, the OEMs would need to bulk up on their premium products since these have solid margins and actually bring them significant profit. It also means they need to be creative and innovative in products under $599 and find ways to squeeze profit out of these types of laptops as well.

This does not mean that OEMs will stop offering value notebooks that are bulkier and in some cases use processors more powerful than Atom or low-voltage chips from AMD. However, if these Netbook 2.0-like clamshells are thin, light and touch enabled, it could even cause demand for those low-end, bulkier laptops to dry up too. It will be very important to watch the development of this market over the next six months. If our assessment is correct, we could see a rather significant bifurcation of the PC market this fall, something that could have a real impact on all the players in the PC world.

Uniloc v. Rackspace: A Rare Patent Win in East Texas

Patent shingle (USPTO)

The U.S. District Court for the Eastern District of Texas has a well-earned reputation as a place where non-practicing entities, more colorfully known as patent trolls, use their dubious patents to extort money from companies that actually do things and make stuff. So it was deeply gratifying to see infrastructure-as-a-service provider Rackspace Hosting win a summary dismissal of a patent claim brought by Uniloc USA.

Uniloc claimed a patent on a general method for rounding floating-point numbers and argued that the Red Hat Linux used by Rackspace infringed upon it. Red Hat defended Rackspace as part of its program for indemnifying customers against such claims.

The Uniloc patent was silly and clearly should never have been granted; the method claimed is neither novel nor non-obvious–two of the three legs on which all patents rest. But mere silliness often fails to stop patent claims from dragging on for years at tremendous expense to all concerned. So the quick end to this case is something of a miracle, especially in a district where patent holders have a very strong chance of winning.

The case does not address the general issue of software patentability nor does it, as some reports have held, determine that mathematical algorithms are not patentable. But Judge Leonard Davis found (PDF of ruling courtesy of Groklaw.net) that the patent in dispute failed to comply with the rules of patentability of algorithms as laid out by the Supreme Court, mainly in Gottschalk v. Benson and in re Bilski. Judge Davis wrote:

 [A]ccording to the patent itself, the claims’ novelty and improvement over the standard is the rounding of the floating-point number before, rather than after, the arithmetic computation… Claim 1 merely constitutes an improvement on the known method for processing floating-point  numbers…  Claim 1, then, is merely an improvement on a mathematical formula. Even when tied to computing, since floating-point numbers are a computerized numeric format, the conversion of floating-point numbers has applications across fields as diverse as science,  math, communications, security, graphics, and games. Thus, a patent on Claim 1 would cover vast end uses, impeding the onward march of science.
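To make the distinction concrete, here is a toy contrast between rounding after a computation (the conventional approach) and rounding the operands before it, which is the gist of what the disputed claim described. This is purely illustrative; it is not a reconstruction of the patent’s actual method.

```python
# Toy illustration only: rounding after a computation versus rounding
# the operands before it. Not a reconstruction of the Uniloc claim.

def round_after(a, b):
    return round(a * b)          # compute first, then round

def round_before(a, b):
    return round(a) * round(b)   # round the operands first

print(round_after(1.6, 1.6))     # 3
print(round_before(1.6, 1.6))    # 4
```

The two orderings give different answers, which is exactly the kind of small variation on known arithmetic the court found insufficient to support a patent.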

A few more Judge Davises and the patent mess could look a whole lot less messy.

Tablet Trifurcation

Yesterday, Tech.pinions columnist Patrick Moorhead discussed the implications of the growing popularity of the 7 inch tablet form factor.

Schism

I think that Patrick’s analysis of the schism between Apple’s iOS tablets and Android tablets was spot on. While Apple encouraged their developers to create apps that were optimized for the larger 10 inch tablet form factor, Android eschewed optimization and encouraged a one-size-fits-all approach. The resulting “stretched” Android phone apps worked poorly on the larger tablet form factor. However, “stretched” phone apps seem to work well, or at least adequately, on the slightly smaller 7 inch screens.

This divide in approach between iOS and Android tablets has at least two major implications. First, Apple’s iOS tablets will most likely continue to dominate the 10 inch tablet form factor. In fact, Android has all but ceded the 10 inch form factor to Apple.

Second, because both Apple’s 10 inch iPad and their 7.9 inch iPad Mini run optimized tablet apps, the iPad will most likely become the “go to” tablet for high end users. This means that professionals, businesses, government entities and educators will gravitate towards the iPad. And as the virtuous cycle of developer/app/consumer continues to spiral upwards, the high-end iOS applications will make iOS optimized tablets even more appealing to high-end consumers and that market even less approachable for Apple’s competitors.

Trifurcation

It seems to me that the tablet market is trifurcating. Apple’s iOS is taking the larger 10 inch form factor and the up-scale markets. Google’s Android may command market share in the mid-level markets. And forked or non-Google Android tablets will take the low end of the market. All can survive, but only Apple has proven that it can profitably thrive in such a setting.

What are the Implications of Increased 7” Tablet Popularity?

Last month, DisplaySearch published an analysis entitled “Smaller Tablet PCs to Take Over in 2013?” The report essentially laid out the volume decrease of 9.7-10” displays and the increase of 7.X” displays in January of 2013. While this isn’t the freshest of data, it’s still valid and certainly makes sense, given the popularity of the iPad mini, Kindle Fire HD, Nexus 7 and the long tail of “white tablets.” If we are indeed in a volume shift from larger-display to smaller-display tablets, there are two key implications which I think are very important, as they impact the entire tech ecosystem.

Let me start by taking a brief look back at tablets a mere 9 months ago as it’s important to know where we came from to appreciate where we are going.

Until the Nexus 7 tablet was launched, Android tablets were dead in the water and the “tablet market” was really the “iPad market”. This makes sense even today, as 10” Android tablets lack the app ecosystem that Apple provides. Why would a consumer pay $399-499 for a tablet that has maybe 5,000 optimized tablet apps? They didn’t and still won’t. Google attempted to compensate by “stretching” phone apps to 10” displays, but the experience is still lacking. Stretched phone apps still look and operate horribly on a 10” tablet. 7” Android tablets like the Nexus 7 were different in that they can effectively leverage Android’s large phone app ecosystem. The rest is history as volumes rise for 7″ tablets.

Let me dive into the implications.

The first implication of 7” tablet popularity is the creation of a new ecosystem. Let me focus on hardware. The iPad hardware ecosystem is large but not diversified, as it is essentially Apple, Foxconn, and Apple’s chosen IHVs. On the OEM side, I believe companies with subsidized business models will be the most likely OEM winners in 7” tablets. These are companies like Amazon, Google, and Microsoft. They can accept lower hardware margins as they drive their corporate profits from e-tailing, advertising, operating systems and application software. Other winners will be OEMs with strong consumer brands or huge marketing budgets like Apple and Samsung. Less clear are the 7” opportunities for PC giants HP, Dell, Asus, Lenovo and Acer. With a new crop of OEMs come new ODMs like Quanta, Pegatron, and the long tail of “white tablet” manufacturers. With new OEMs and ODMs come the component manufacturers.

To get a full appreciation of just how many different component suppliers are involved, you just need to go over to iFixit’s iPad 4 teardown and see the myriad of companies represented. The challenge before was that if you weren’t in the 9.7″ iPad, you were out of luck, because Apple rarely second-sources components and they owned the tablet market. On the SOC side, now Nvidia, Qualcomm, Intel, Mediatek, Hi-Silicon (Huawei), and even relative unknowns like Rockchip (in the HP Slate 7) will have opportunities. Needless to say, all this new competition will lead to lower prices and hopefully more innovation. I say “hopefully” because with the lower prices, it’s not a foregone conclusion there’s money left over to invest in a lot of innovation.

The final implication of the increased popularity of 7” tablets has nothing to do with tablets at all, but with personal computers.

10” tablets, more than 7” tablets, had the ability to augment or replace certain PC usage models. Like many, I used the iPad for years as my primary (% time spent) productivity device when paired with a ZAGG/Logitech Bluetooth keyboard. My personal iPad usage model was probably ahead of the curve, but a 9.7” iPad worked well for email, calendar, research via the web or news apps, reviewing presentations and documents, and even writing reports and research. I would never finalize a document on the iPad, as I would do this on a notebook, but the initial research and text entry worked well.

Usage models are different on a 7” tablet versus a 10″ tablet. 7″ tablets are more appropriate as content consumption devices, driven primarily by the screen size and input methods. If you have ever had to type out a lengthy email on a 7” tablet, you know what I mean. Typing on a 7” tablet is very uncomfortable, with half the display covered by the on-screen keyboard. 7” tablets are great, however, for reading and deleting emails, watching videos, reading e-books, and browsing simple web sites.

With all of this considered, I believe those with 7” tablets have a greater need for a modern notebook than those with a 10” tablet. This is not to say that I expect the market for MacBooks and PC notebooks to explode immediately with amazing growth, but I do believe it will lead to increased notebook sales. When you consider the aging installed base of low-battery-life, thick and chunky Windows XP and Vista-based notebooks, my thesis gets stronger.

Net-net, the popularity of 7” tablets makes notebooks look more attractive and, I believe, will give a boost to MacBooks and Windows notebooks.

Summary

Until the Nexus 7 and Kindle Fire HD arrived, the “tablet” market was really the “iPad” market. While not providing the best experience, 7″ Android tablets provide a good enough consumer experience at a very low entry price. The iPad mini launch validated the 7-8″ tablet market and, based on DisplaySearch figures, the mass of volume appears to be headed to the 7” form factor. This shift brings with it two key implications.

Android will become a player in the tablet market and, with it, bring much more competition across OEMs, ODMs and even component suppliers. This increased competition will drive lower prices and hopefully more innovation. There is a case to be made that only those with subsidized business models or a stellar brand will survive, but we will have to wait and see on that. Finally, I think notebooks will get a boost from the popularity of 7” tablets, driven by the usage model differentiation between the two devices, which is absolutely the most ironic implication.

Who thought a display size change was boring?

The FCC: After Four Frustrating Years, Tough Work Ahead

Work Ahead (© iQoncept - Fotolia.com)

 

Julius Genachowski was one of President Obama’s original tech warriors, so hopes were high when he became chairman of the Federal Communications Commission in 2009. He leaves the post with some modest accomplishments, some bigger disappointments, and a general sense of stasis that has replaced the excitement of 2008.

This situation is not Genachowski’s fault and there is not much chance that his successor, no matter who it is, will be able to speed the process. Inertia is a powerful force in Washington, and few institutions are harder to get moving than the FCC. Why else would the commission still be arguing over rules prohibiting cross-ownership of newspapers and television stations–an issue likely to come to a boil again if Rupert Murdoch goes ahead with a bid for The Los Angeles Times–even as both sets of institutions fade into irrelevance?

The commission has two huge problems. First, the FCC’s actions are governed by a terrible and hopelessly obsolete law, the Telecommunications Act of 1996. Any time the commission seeks to stretch its authority, say, by trying to regulate network neutrality, it can count on being sued and probably slapped down by the courts.

Second, major industry constituencies—big telecommunications companies and wireless carriers, broadcasters, cable companies—see much to lose and little to gain from change, and the opposition of any one constituency can cause things to drag on interminably.

A good example is freeing unused or underused television broadcast spectrum for wireless data use. The fight stems from the transition to digital TV mandated in the mid 1990s and completed in 2009. TV stations ended up with more spectrum than they had good use for. The result was a plan for “incentive auctions,” in which stations would receive part of the proceeds from the sale of spectrum (which they didn’t pay for in the first place). The FCC plan was complex, and Congress, at the behest of broadcasters, made it even more baroque in legislation authorizing the sales. Broadcasters continue to throw up roadblocks and it now appears that the auction process, originally expected to start next year, is unlikely to get going until 2015. The TV fight is also holding up a plan to make some of the unused TV spectrum, the so-called TV whitespaces, available for unlicensed wireless data. Not surprisingly, the broadcasters oppose that plan too.

Unfortunately, there’s not a lot an FCC chairman can do to speed the agency’s glacial pace. Federal law creates endless possibilities for delay. Any time the commission tries to push its boundaries, it will be sued and objectors have generally found a friendly ear at the conservative D.C. Circuit Court of Appeals, which hears all challenges to FCC actions.[pullquote]Unfortunately, there’s not a lot an FCC chairman can do to speed the agency’s glacial pace. Federal law creates endless possibilities for delay.[/pullquote]

This is troubling, because the FCC has some major items on its agenda. The most urgent is finding more spectrum for wireless data. It has become clear that the traditional approach of transferring spectrum from incumbents to new users has limited potential to increase bandwidth, at least in any reasonable amount of time. What’s needed is sharing of spectrum—especially between government agencies and private users–and new technologies to use the spectrum we have more efficiently. Steps to do both, sometimes simultaneously–as in the sharing of the 3.5 gigahertz band between military radars and small-cell wireless data–are underway. Incumbent holders of spectrum don’t give it up easily, even for sharing, while established service providers will maneuver to prevent competitors from gaining any perceived advantage. Look for a long slog.

Another major issue is mundane, even boring, but very important. The nearly 150-year-old public switched telephone network has nearly reached the end of its useful life; internet technology is a far more efficient way to move voice traffic than traditional circuit switching. The prospect of this happening has been looming for some years, but AT&T has forced the issue with a formal petition to transition its land-line services to an IP network. A lot of money is at stake–there is a huge investment tied up in the existing network. The FCC has to make sure that the transition balances the interests of customers and shareholders of the carriers–this mostly affects AT&T and Verizon Communications–and guarantees a reliable and affordable landline network for the future. (Much as techies disparage it, the landline network is still tremendously important and, of course, the same IP network that will carry voice calls forms the backbone of the internet.)

That’s a big agenda for change coming up against a system strongly biased to inertia, complicated by a Congress whose passion for meddling is exceeded only by its lack of understanding of the issues.

When Calculators Had Gears

Photo of Friden D10
“Naked” Friden D10 (Photo: Jake Wildstrom)

My son, a mathematician at the University of Louisville, has a new hobby: restoring mechanical calculators. These machines were obsolete long before he was born, but a visit this past weekend brought back a wave of nostalgia. I was a student just before the advent of electronic desktop calculators (the ubiquitous personal calculators came along a few years later) and I spent a substantial part of my life doing statistical analysis on a mechanical calculator and a spreadsheet, which in those days was a ledger-sized sheet of ruled paper.

 

Mechanical calculators were conceptually simple and mechanically extremely complex. They were basically glorified adding machines which did multiplication as repeated addition and division as repeated subtraction. On the early manual machines, to multiply, say, 135 by 25, you would enter the number to be multiplied from the keyboard or by rotating pinwheels. You would then turn the crank 5 times, move the carriage one place to the right, and turn the crank twice. The answer would appear in the accumulator register. Division was a bit more complicated, but was basically the same process in reverse and could be carried out to as many decimal places as you had digits in the register. Later models replaced the crank with an electric motor and moved the carriage automatically. The last generation of Friden mechanical calculators had a square root function built in, an enormous benefit in stat work, which requires computing lots of square roots.
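For readers who have never turned one of these cranks, here is a minimal sketch of the idea in modern code. The function name and structure are my own illustration of shift-and-add multiplication, not a model of any particular machine.

```python
# Toy sketch of mechanical-calculator multiplication: repeated addition
# plus a carriage shift for each digit of the multiplier.
# 135 x 25 = add 135 five times (units place), shift, add 135 twice (tens).

def crank_multiply(multiplicand, multiplier):
    accumulator = 0
    for place, digit in enumerate(reversed(str(multiplier))):
        for _ in range(int(digit)):                    # one crank turn per count
            accumulator += multiplicand * 10 ** place  # carriage shifted `place` columns
    return accumulator

print(crank_multiply(135, 25))  # 3375
```

Seven crank turns and one carriage shift, and the accumulator reads 3,375; the machines did nothing more clever than this, just very reliably.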

Disassembled Odhner 227 (Photo: Stephen Wildstrom)
Pinwheels, setting rings, and shaft of an Odhner 227

These calculators made a marvelous noise–motors whirring, gears meshing, bells ringing. The stat labs at the University of Michigan where I worked were big rooms filled with dozens of machines, all going at once, making a glorious racket. The space at the Rackham graduate school included a plugboard-programmed IBM accounting machine (ancient even then), a counter-sorter, and a printer, and when everything was running at once, it sounded more like a stamping plant than a lab.

Curta calculator (Image: vcalc.net)

Mechanical calculators were painfully slow by electronic standards and needed lots of maintenance (those levers in the photo of the Friden, top, represent the complex mechanical logic of the machine). It was generally a blessing when they were replaced by electronic calculators and, ultimately, by software such as Mathematica, Maple, Matlab, and Sage.

But those wonderful old machines created an intimacy between analyst and data that doesn’t exist anymore. So I am happy that some in the younger generation are willing to do the work of restoring these beauties.

(Jake recently acquired a Curta, the Ferrari of crank-operated calculators. It looks a bit like a slightly oversized peppermill, if Swiss watchmakers made peppermills. Plus, there’s the amazing story of its inventor, Curt Herzstark, who designed the machine while a prisoner in the Buchenwald concentration camp. And the Curta is proudly marked “Made in Liechtenstein,” possibly the only industrial export of that postage-stamp-sized country, which, of course, issued a postage stamp to honor the device.)

The Best Innovations are Still Ahead

I enjoy technology industry history. After the dot-com bubble burst, I had a conversation with the then-CEO of National Semiconductor, Brian Halla. He’s also a tech history connoisseur and he explained to me what is called the Boom Bust Build-out Theory. The theory, in short, details how every major industry from the industrial revolution until now went through an initial boom, followed by a bust, followed by a market build-out.

The “boom” period is a period of euphoria where entrepreneurs, investors and early adopters rally around the new product or industry; followed by a relatively short “bust” where tough economic realities are faced; followed by an extraordinary “build-out.”

During the boom, an industry first gains traction and investment money floods the market. The result is that the supply outpaces the demand of the current market state. This is because the early interest is driven by early adopters, which is not a large market. The over-flooding of capital, combined with an immature market, leads to the bust. The bust, however, causes a drop in price of essential market components, which leads to innovation.

In an example from the railroad industry, the “bust” led to such cost declines in essential components that it made it possible for enterprising entrepreneurs to create the refrigerated rail car, thus spurring the meat packing industry. The two-year railroad bust, however, was followed by a global build-out that lasted a century. That build-out occurred all around the world and forever changed transportation and commerce.

A similar cycle of boom and bust in the automobile industry led to the creation of the V8 engine, power steering, electric indicators and safety glass.

Looking over the history of the personal computing industry, we can spot many similarities with mega industries of the past. The technology industry was not immune to the boom, bust, build-out cycle and if the signs are true, we are right at the beginning of the build-out stage.

Much of the innovation we are seeing today around smart phones, tablets and new PC form factors is the beginning of this build-out. Just look around at the innovative PCs, desktops, smart phones and tablets, all coming to market with things that seemed impossible just a few short years ago, and all at mass market prices.

The devices we use are going to get smarter, thinner, faster and more capable. The internet in five years will look and feel nothing like it does today. The build-out that we are at the very beginning of will drive new businesses, new markets, new technologies, and new opportunities.

One of the more interesting elements of what I am pointing out here is the hardware renaissance happening in the tech industry. Hardware startups are popping up everywhere these days, making everything from wearable health monitors to automotive intelligence, smart home solutions, computerized toys, and more. The industry bust led to such a drop in the cost of critical components that it became feasible for entrepreneurs to very easily and inexpensively start creating hardware solutions to solve everyday problems.

Where we are today is both an opportunity and a threat for non-nimble industry incumbents. New innovative startups can come out of nowhere and disrupt legacy businesses, or established companies can enter new markets effectively.

When a company’s growth slows, or begins to slow, due to market saturation of a specific category, the two key ways to drive new growth are to enter new markets or to create new markets. The cycle we are in gives innovative companies opportunities like we have never seen before technologically.

Take the iPad for example. For Apple this was a new market creation strategy. They packaged together a product in ways that had never, and perhaps could never, have been done before at mass market prices and completely created a new category in which to compete.

When I look at the cycle the technology industry is in today, it gives me great confidence that our best and most innovative years are ahead of us. Who will dominate these years ahead will be the continual storyline. But without doubt some of our most exciting innovations are still to come.

The Challenge for Smartphone Makers in 2013

I believe we are in new territory for smartphone manufacturers. Although it’s true that there are still many people on the planet who do not yet have a smartphone, the reality is that the most mature markets are reaching the saturation point where most consumers–who want and value smartphones–have one. Which means that the battle for the bulk of mature-market consumers is now for upgraders and no longer intenders. This changes things quite a bit.

This would explain the concerns over smartphone growth slowing in 2013. For several years the smartphone market was growing at over 50% a year. This year the growth is estimated to be around 30%. I think a better way to look at the growth is to look at the rate at which specific smartphone price ranges will grow. I think parts of the market may grow more than 30% this year. However, I am not convinced it will be the flagship, top-tier devices that grow at faster paces this year but rather the second-tier devices. Of course this would seem logical given the growth in emerging markets, but I think this will even be the case in markets like the US and Europe.

If true, this brings up an important observation about devices like the Galaxy S4, the next iPhone, and any other flagship device. And that observation leads me to the title of this column.

Good Enough

I think we are getting extremely close to a good enough sentiment by mass market consumers toward their current devices. The quality of most flagship phones, and even many tier-two phones, has been continually rising to the point where they are lasting longer and meeting the simple needs of the mass market.

We have been living in a good enough paradigm in the PC industry for years now and consumers are consistently holding onto PCs longer because they meet their needs and they do not feel an urgency to upgrade their notebooks. I think the smartphone market may be in a similar situation.

The Burning Question

At an absolutely fundamental level, the biggest challenge smartphone makers face in 2013 is convincing consumers they need to upgrade their smartphones this year. The biggest part of the consumer market is not the early adopters but the early and late majority. These groups think very differently about technology, and what percentage of this market will routinely upgrade every year or even every two years is a big question mark.

When we talk with consumers and gather our market insights into this specific question, we continually get a sense that consumers are happy with even their later generation devices and don’t necessarily feel the need to rush out and upgrade their devices even if they are eligible. It appears that a growing majority believes their current devices are good enough. It doesn’t mean they won’t upgrade, just that there is no sense of urgency.

This brings up interesting implications for companies like Samsung and Apple. Both companies have garnered a large installed base for their devices over the past few years. There will certainly be a significant number of customers who will be eligible in 2013 for upgrades, but will they be compelled to upgrade at all? This, I think, is an interesting question.

With regards to the S4 I have my doubts. Samsung will no doubt sell a ton of these devices, but will it sell better than the S3? I’m not convinced, and I am not convinced for one primary reason. The S4 runs the dangerous risk of over-serving the market’s needs with the key innovations and features it added.

Horace Dediu tweeted out something I thought was very interesting last week.

Market over-service is a far more dangerous mistake than under-serving it.

Overserving the market means adding features and functions the mass market does not have a perceived need for or is not ready for. Oftentimes when an offering is complex, it is hard to understand. This goes back to the point of what is good enough in today’s market and what is overkill. These are questions to wrestle with.

The S4 has some cool features. Once people get their hands on them we will see if they are just cool or actually useful. Cool and useful are often two very different things. What Samsung doesn’t need with the S4 is consumers hearing the pitch and wondering “why do I need that?” What if the S3 is good enough?

The S4’s biggest challenge will be to address the question in the minds of consumers as to “why should I upgrade?”

Of course Apple will face this question as well. We have seen Samsung’s flagship device, and we have yet to see Apple’s. I think this is an interesting year for Apple. I’m not sure Apple has found itself in a position in the past decade with such a strong competitor as Samsung, one that is willing to spend more money than Apple on marketing to convince the world to buy into the Samsung brand.

This is perhaps the first year where I think Apple needs to do more with the next iPhone than the traditional strategy, for the primary reason that the iPhone 5 in its current form is good enough for the masses. If the next iteration of the iPhone does not offer either some entirely new innovation or a feature not found on the iPhone 5 that provides an answer to the upgrade question, then I fear consumers who are in the market and eligible for the upgrade may just end up buying the iPhone 5 at a discounted price. Even if that happens it still means healthy sales for Apple, but it begs the question of the necessity of a new flagship device if it isn’t going to make a compelling upgrade case.

The question will be what features are worth a $100 premium in the good enough market that we find ourselves in. There are fascinating dynamics at play in the market right now from my viewpoint. I do believe that every smartphone maker is now entering uncharted waters due to the saturation and maturity of the smartphone market. It will be exciting to see how these new waters are navigated. I’m glad I have a seat to watch the show.

Google Keep: Bleeding from Self-inflicted Wounds

Google keep icon

 

I don’t know how much Google is saving by killing off Reader, but it is rapidly becoming clear that it wasn’t worth it.

Most people don’t know what an RSS reader is and Reader never became a popular offering on the scale of, say, Gmail. But it was heavily used by techies, especially tech writers who counted on it to provide easy access to a broad variety of industry information. I definitely count myself among them. So the decision to kill Reader after years of neglect caused widespread dismay among industry influencers.

The cost of this became clear when, a week after the Reader announcement, Google rolled out Keep, a competitor to Evernote, Microsoft OneNote, and other note-taking and syncing apps. GigaOm’s Om Malik led the charge with a post headlined “Sorry Google; you can Keep it to yourself.” His argument: “It might actually be good, or even better than Evernote. But I still won’t use Keep. You know why? Google Reader.” IDG’s Jason Snell chimed in with a tweet: “Can’t wait for Google to cancel Google Keep in four years after it’s decimated Evernote’s market.”

Google, of course, has the right to kill off any service it wants, especially where it provides the service without charge and has no contractual relationship with users. But Google wants to be something new in the world: A company that can be a trusted partner providing services at little or no cost. But gaining trust requires confidence on the part of customers that the services will be there after they have come to depend on them. The termination of Reader did grave damage to that trust. The price was a rocky launch for Keep, even though the product itself generally got good reviews.

Like Malik and Snell, I’m sticking with Evernote, which is offered in both free and premium paid versions. The company is doing well and note-taking and related services are its only business, so I have confidence it’s not going to abandon the market. But Google is going to have to work hard to convince me it can be trusted.

Android’s Penetration Vs. Apple’s Skimming Marketing Strategies

Technology pundits and press alike seem obsessed with market share. But obtaining large market share is just one of many successful business strategies. Android follows a penetration pricing strategy. Apple uses a skimming strategy. Neither is inherently superior to the other. Like any strategy, each has advantages and disadvantages and their ultimate success often depends upon both circumstances and execution.

Penetration Pricing

Penetration pricing occurs when a company launches a low-priced product with the goal of securing market share. For example, a sponge manufacturer might use a penetration pricing strategy to lure customers from current competitors and to discourage new competitors from entering the industry. If the sponge’s price is low enough, consumers will flock to the new product. Competitors who can’t produce and promote sponges for such a small profit will avoid the market, freeing the sponge company to maximize brand recognition and goodwill. ~ Stan Mack, Demand Media

Price Skimming

A price skimming strategy focuses on maximizing profits by charging a high price for early adopters of a new product, then gradually lowering the price to attract thriftier consumers. For example, a cell phone company might launch a new product with an initial high price, capitalizing on some people’s willingness to pay a premium for cutting-edge technology. When sales to that group slow or competitors emerge, the company progressively lowers its price, skimming each layer of the market until the low price wins over even frugal buyers. ~ Stan Mack, Demand Media

Apple has added a twist to the skimming strategy. Rather than introducing their products at a high price and then lowering their prices later, Apple stakes out a price and then maintains and defends that price by significantly increasing the value of their products in future iterations.

For example, over the past six years, the average sales price of the iPhone has remained remarkably stable with the subsidized price remaining at ~$200 and the unsubsidized price hovering around $650.

Advantages and Disadvantages Of Price Skimming

Price skimming offers four major advantages…. It can offer insight into what consumers are willing to pay. It can create an aura of prestige around your product. If the initial price is too high, you can lower it easily. Finally, late adopters might be pleased to get your prestigious product at a bargain price, which creates goodwill for your company. A major disadvantage, however, is that large profits attract competitors, so this price strategy only works well for businesses that have a significant competitive advantage, such as proprietary technology.

The argument against Apple’s price skimming strategy is that the competition has caught up with the iPhone and Apple is no longer able to compete unless they lower their prices. But do the facts support this argument?

First, the iPhone has received 8 (EDIT: make that 9, as of March 21, 2013) straight J.D. Power and Associates awards for customer satisfaction, and Apple reported that four times as many users switched from an Android phone to an iPhone as switched from an iPhone to an Android phone in the fourth quarter of 2012. Clearly Apple’s cachet is not on the wane, at least not in the minds of phone-buying consumers.

Second, in 2012, Apple garnered 69% of all mobile phone profits. Further, they did it with only 8% of the total market share. That means that the remaining 92% of the market provided only 31% of the sector’s total profits. That’s price skimming at its finest.
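A quick back-of-the-envelope calculation makes the point (a rough sketch using the 69%/8% figures above; it treats profit share per point of unit share as a crude proxy, which is a simplification):

```python
# Rough, illustrative math on the 2012 profit and unit-share figures above.
apple_profit_share, apple_unit_share = 0.69, 0.08
rest_profit_share, rest_unit_share = 0.31, 0.92

apple_profit_per_point = apple_profit_share / apple_unit_share  # ~8.6
rest_profit_per_point = rest_profit_share / rest_unit_share     # ~0.34

# Apple earned roughly 25x the profit per point of market share.
print(round(apple_profit_per_point / rest_profit_per_point, 1))  # 25.6
```

By this crude measure, each point of Apple’s market share was worth roughly 25 times as much profit as a point of everyone else’s.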

Conclusion

The current meme is that Apple MUST abandon their skimming strategy and pursue a price penetration strategy instead. However, the facts simply do not support this contention. Apple could, of course, “buy” more market share simply by lowering their prices, but this has two major disadvantages. First, the market share that they would be buying is worth far less than the market share that they already own. Second, a lower price would lead to lower profits as well. It is obvious – or rather it SHOULD be obvious – that this could be counter-productive.

There’s nothing wrong with market share and I’m quite certain that Apple would be more than happy to expand their market share – but not at any price. For example, Apple has some 70% market share in iPods and around 50% market share in iPads. Yet they are doing this while still maintaining their price skimming strategy.

Price skimming is neither the only strategy nor the one superior strategy. It is just one of many marketing strategies. However, Apple is executing the strategy of price skimming brilliantly…even if Wall Street and the pundits stubbornly refuse to acknowledge it.