Apple v Samsung v Huawei and others: measuring processor efficiency

Whenever consumers are polled about what they want from their smartphone, the top answer is regularly “longer battery life”. That doesn’t necessarily translate into actual purchasing decisions; the number of plug-in power packs for phones is a testament to millions of people who didn’t, or couldn’t, evaluate battery life ahead of purchase or ended up pushing it down the stack of priorities when it came to buying.

A smartphone can, in essence, be boiled down to three elements: a battery, a screen and a processor. Yes, you need lots of other things too, but those are your essential building blocks.

With the release of the Samsung Galaxy S7 with Qualcomm’s new Snapdragon 820 processor, it seems like a good time to examine how the interplay of those elements shapes up. Is Apple ahead in processor design for battery life? Has the Snapdragon 820 created a new standard for the industry?

Benchmarks, benchmarks everywhere

I chose to use the benchmarks on battery life from Anandtech because they cover a wide range of handsets and they perform their own standardised tests. There is a wrinkle (there always is with benchmarks): they recently changed their methodology, which has substantially reduced the apparent battery life of devices being tested. For instance, the iPhone 6S Plus lasted 13.1 hours on the old benchmark, but 9.05 hours on the new one. The Samsung Galaxy S6 went from 10.44 hours to 7.07 hours.

The old benchmark dated from 2013, explains Joshua Ho at Anandtech: “[it] was relatively simple in the sense that it simply loaded a web page, then started a timer to wait a certain period of time before loading the next page. And after nearly 3 years it was time for it to evolve.” (Ho also confirmed to me “13.1 hours” means 13 hours and six minutes, rather than 13 hours and 10 minutes.)

You can find a discussion of why and what they changed in the battery life section of Anandtech’s Samsung Galaxy S7 review.

In the new benchmark, Ho says, “we’ve added a major scrolling component to this battery life test. The use of scrolling should add an extra element of GPU compositing, in addition to providing a CPU workload that is not purely race to sleep from a UI perspective. Unfortunately, while it would be quite interesting to also test the power impact of touch-based CPU boost, due to issues with reproducing such a test reliably we’ve elected to avoid doing such things.” He cautions, “It’s important to emphasize that these results could change in the future as much of this data is preliminary”.

Noting that, let’s go to work. The three main elements of the phone – battery, screen, processor – all affect battery life. In theory, a bigger battery, smaller chip die, and fewer pixels on the screen will all lead to longer battery life.

I collected the recorded battery capacity and screen resolution from Phonearena for a range of phones benchmarked by Anandtech and put them into a spreadsheet.

The batteries

On the face of it, Apple’s battery capacities lag behind those of Android OEMs. The Android average, boosted in particular by Huawei, comes to 3,131mAh here against 2,297mAh for Apple.

Battery capacity for various phones

I’ve highlighted Apple, Samsung and Huawei because they’re the biggest players in the smartphone game. Also, those are the makers for which Anandtech has the most tests.

The first calculation: simple efficiency

The first obvious calculation is to divide the battery life (in minutes) by the battery capacity (in milliamp-hours) to get “minutes per milliamp-hour”. Doing that gives the following graphs for the old and new benchmarks:

Apple, Samsung, Huawei: battery life divided by capacity

And for the new benchmark:

Longer is better: battery life divided by battery capacity

In both graphs, longer is better for this particular metric.
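To make the arithmetic concrete, here’s a minimal sketch of that first calculation in Python. The benchmark hours are those quoted above, but the battery capacities are illustrative assumptions I’ve plugged in, not figures taken from the review data:

```python
# Minimal sketch of the "simple efficiency" metric: battery minutes per mAh.
# Benchmark hours are those quoted in the text; the capacities are assumed,
# illustrative values, not Anandtech's or Phonearena's published figures.
phones = {
    "iPhone 6S Plus (old benchmark)": (13.1, 2750),   # (hours, assumed mAh)
    "iPhone 6S Plus (new benchmark)": (9.05, 2750),
    "Galaxy S6 (old benchmark)":      (10.44, 2550),
    "Galaxy S6 (new benchmark)":      (7.07, 2550),
}

for name, (hours, capacity_mah) in phones.items():
    minutes = hours * 60                      # decimal hours -> minutes
    minutes_per_mah = minutes / capacity_mah  # the "simple efficiency" figure
    print(f"{name}: {minutes_per_mah:.3f} minutes per mAh")
```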

There are a couple of obvious points here. Apple does very well on “bang for buck” on the old benchmark, well ahead of any Android OEM; among the Android makers, only Samsung came close, and then only with last year’s phones.

On the new benchmark, Apple still comes out some distance ahead, with Samsung a lot closer with the Galaxy S7 using the Snapdragon 820. (The S7 wasn’t tested on the old benchmark; not all of the iPhones have yet been tested against the new benchmark.) Huawei, which uses its own Kirin processor for most of its phones, also shows respectable performance – above the average for non-Samsung Androids – except, strangely, in the Nexus 6P.

The second calculation: processor efficiency at pushing pixels

There’s only one problem with the calculation above: it doesn’t take into account how many pixels the battery has to light. The phones we’re looking at have different numbers of pixels, and lighting each pixel takes battery power. So we need a new calculation that adjusts the figure above for the number of screen pixels. In other words, if one processor gets (say) 2 minutes per mAh while lighting 100 pixels, but another gets 2 minutes per mAh while lighting 1,000 pixels, the second processor is clearly more efficient, by a factor of 10.

To get a view of processor efficiency, we multiply the above calculation by the number of pixels on the screens.
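Here is a small Python sketch of that adjustment, using the hypothetical numbers from the example above rather than any real benchmark data:

```python
# Sketch of the pixel-adjusted metric: (minutes per mAh) multiplied by pixel count.
# The inputs mirror the hypothetical 100-pixel vs 1,000-pixel example above.
def processor_efficiency(minutes_per_mah, pixels):
    """Battery minutes per mAh, scaled by the number of pixels the screen lights."""
    return minutes_per_mah * pixels

phone_a = processor_efficiency(2, 100)     # 2 min/mAh lighting 100 pixels   -> 200
phone_b = processor_efficiency(2, 1000)    # 2 min/mAh lighting 1,000 pixels -> 2000
print(phone_b / phone_a)                   # 10.0: the second phone scores ten times higher
```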

Let’s look at screen pixels:

Display pixels: big variation

As you can see, Apple is a long way behind most Android OEMs on this (the notable exception being Huawei, and even then only for the phablet-sized Plus phones). Certainly you can argue the difference in pixel count makes no real-world difference because, held at a normal distance, the individual pixels on an Apple device can’t be discerned; adding more pixels to the screen therefore makes no visible difference, except to put more load on the processor and the battery.

Now we’re ready to see how the processors perform when we break out battery life per pixel. This gives us some insight into processor efficiency.

Here are the results on the old Anandtech benchmark:

'Processor efficiency': Samsung leads

And the new:

'Processor efficiency': Samsung eroding its lead?

As above, longer is better on this metric.

(As a reminder, Anandtech hasn’t tested every device on the new benchmark that it did for the old.)

There are quite a few points to note here. The ones that stick out to me are:
• Samsung’s Exynos processor/display efficiency leads the pack
• The Snapdragon 820 is actually a slight regression from the S6’s Exynos, though apparently better than the Snapdragon 810 (which powers the Xiaomi Mi Note Pro)
• Apple’s A9 does well under the new benchmark, though it’s behind Samsung’s implementation; on the old benchmark, it’s all over the place
• Huawei’s Kirin processor lags the rest of the pack. The Nexus 6P uses the Snapdragon 810; the Kirin seems to perform about as well as a second-tier Snapdragon processor
• There are variations between manufacturers, probably down to power management and other elements. For example, the Mi Note Pro and the Nexus 6P use the same processor and have the same number of pixels (though the Nexus has a larger battery), but the Xiaomi product comes out ahead on both versions of the “efficiency” benchmark
• Samsung’s clear advantage could be due to it making the screens as well as the processors, and so having much more control of manufacturing integration.

I’m sure there are plenty more observations to be made; they’re welcomed in the comments. One thing definitely worth noting is that these calculations don’t take into account any usability or user experience measurements. They don’t tell you about frame rates, or what the phones, screens or user interfaces are like. That’s a far more complex question, and one which likely remains beyond the province of benchmarks.

The Devil is in the Details with Cable Box Reform

The US Federal Communications Commission has decided to reform the pay TV set top box system once again. This isn’t the first time it’s done so – the 1996 Telecoms Act directed the FCC to find a way to open up competition in this space, and the FCC created the CableCARD standard as its solution to the problem. But that CableCARD system never really made much difference, and few companies other than TiVo ever took advantage of it. As such, the details of how the cable box reform now contemplated by the FCC is implemented are incredibly important this time around.

Carterfone – the inspiration for reform then and now

The inspiration for cable box reform both in the 1990s and today comes, at least in part, from the landmark FCC decision in the Carterfone case in 1968. That decision allowed a new device – the Carterfone – to connect to AT&T’s network even though it wasn’t made or sold by AT&T, a significant departure from past practice. In the process, the FCC essentially created a market for third-party phones and related devices, and ended AT&T’s monopoly over the devices that could connect to its network. The 1996 Telecoms Act and the subsequent implementation of CableCARD were intended to achieve the same result for cable boxes but clearly fell short of that goal.

One major reason is that phones are, technically, much simpler than cable boxes, not least because the box has to be able to verify itself to the network as a legitimate receiver for the cable signals. The CableCARD authenticated the box into which it was inserted and decrypted the signal so it could be viewed by the consumer. Actual consumer experience with CableCARD, however, has generally been poor – installing a device that uses a card often requires calls to the cable company’s support line and even then can be hit and miss, as anyone who has visited online forums for these devices can attest.

CableCARD failed because it was an overly complex solution to what is inherently a complex problem – cable companies (and other pay TV providers) run sophisticated infrastructure to deliver signals in a highly managed way across networks which they control. Simply swapping in a third-party box isn’t straightforward. Add in the fact cable companies are highly motivated to prevent competition – some 20% of cable company TV revenues come from these boxes and the services they provide – and you have a recipe for disaster.

Two possible approaches to reform

As I see it, there are two possible approaches to the current attempts at reform. One would essentially replicate the CableCARD system but in a less clunky way, implementing physical solutions in third-party boxes that would interface with the standard cable TV infrastructure in the same way as a cable STB does today. That seems likely to run into many of the same issues as its predecessor and I generally think that’s not the way to go. The FCC has in fact previously proposed a CableCARD replacement generally referred to as AllVid, which takes a hardware-based approach, but it’s never really gone anywhere despite support from Google and others.

The other approach is to implement the solution in software, with third party boxes interacting with the head end infrastructure in much the same way as pay TV companies’ own apps on mobile devices do. This would be far less restrictive in the design of such boxes than a hardware solution, and would be more in keeping with how boxes for watching TV are evolving, in that it would be app-centric. This would still be complex but, if implemented in a smart way, would allow for far more innovation around TV boxes than any hardware solution. I would very much hope the FCC will end up going down this road rather than the hardware route – it seems far more likely to succeed and to allow makers of smaller TV boxes, not just traditional STB manufacturers, to participate.

Pay TV companies only have themselves to blame

As I mentioned earlier, pay TV companies have significant incentives to preserve the status quo – box installations and rental fees as well as DVR and other associated service fees are both sources of significant revenue and high margins for these companies. Even as these companies have slowly embraced apps for mobile devices, they have largely refused to do so on TVs or devices connected to them. So far, Time Warner Cable has a trial using Roku boxes for an app-based version of its service but, other than that, there’s very little by way of innovation in this area from the major pay TV companies. All they offer on most smart TV boxes is TV Everywhere authentication for third party apps, which makes for a cobbled-together solution that does very little to replicate the usual pay TV experience.

It’s precisely because the pay TV companies have resisted the app-based approach that the FCC now feels the need to intervene. These companies will only have themselves to blame if, as a result of their intransigence in the face of consumer demand, they find themselves forced into something they could have done much less painfully themselves.

Microsoft’s Universal Windows Platform Challenge

Last week, Epic Games co-founder Tim Sweeney wrote an opinion piece for the Guardian complaining about the Microsoft Windows Store and Universal Windows Platform (UWP). He said the company is trying to create a walled garden that’s bad for everyone except Microsoft. Executives at Microsoft were quick to dispute Sweeney’s argument. In short order, much of the tech press also sounded off. This week, Sweeney wrote a follow-up article in Venture Beat, striking a more conciliatory tone but still pushing for changes from Microsoft. All sides have valid points, but the discussion itself is fascinating as it succinctly reflects the challenges facing Microsoft. Broadly speaking, Microsoft is attempting to transition Windows from the complex and powerful operating system that dominated the world during the boom days of the PC industry to a more modern, secure, and restricted OS, in a world where PCs remain important but are no longer the majority of devices. What’s perhaps the most interesting piece of all of this is how the company’s UWP strategy will impact its big bet on the future: HoloLens.

Understanding UWP

UWP is an extension of work Microsoft began with Windows 8 to enable developers to create modern, universal Windows apps. The idea was that, by unifying the code base of the traditional version of Windows used for desktops and notebooks with that of Windows Phone, the company could incentivize developers to create apps that would, with minimal work, run across all form factors. At the time, one could argue, Microsoft’s primary driver was an attempt to close the widening app gap Windows Phone faced versus Apple’s iOS and Google’s Android. When it launched Windows 10, Microsoft further evolved the model, bringing forward a fully unified OS platform so that every device, from phones to tablets to PCs to even the Xbox, will run the same apps, with developers able to easily tailor them for the screen size and capabilities of the device in question.

In addition to enabling cross-device capabilities, UWP also brings forth a host of modern OS functions, the most notable being a new, more restrictive set of rules on what apps can and can’t do. Chief among these are security features that effectively sandbox UWP apps, limiting the impact they can have on the underlying OS as well as adjacent apps. In effect, UWP apps act more like apps running on modern mobile operating systems such as iOS. The difference, however, is that Windows 10 also still supports the traditional Win32 system, the underlying platform for Windows going back to Windows NT. This platform supports the legacy apps most of us use daily and is part of what made the PC so powerful, as it allowed programs deep access to the OS and adjacent apps. It also lets users download and install apps from anywhere. Alas, it’s also what makes the PC such fertile ground for malware, along with security and long-term performance issues.

By supporting both UWP and Win32 on Windows 10, Microsoft is trying to straddle two worlds: the traditional PC ecosystem, with its long, rich tradition of powerful, backward-compatible apps that can, at their worst, bring the OS to its knees, and the modern mobile world, where apps have a more limited impact on the underlying OS, creating a more secure, more stable experience. At present, the success of UWP is hard to gauge. I fully expect to hear more about the platform’s progress at Microsoft’s Build conference later this month.

Sweeney’s Diatribe

In his original piece, Sweeney calls UWP “a closed platform within a platform” that he posits is bad for consumers, developers, and publishers because Microsoft is launching some features exclusively in UWP it won’t enable for traditional Win32 applications. Further, he suggests the company’s long-term goal is to get everyone to develop only UWP apps they sell exclusively through the Microsoft-controlled (and monetized) Windows Store. He argues this is harmful to the industry in general and to developers who sell through their Web sites and that it effectively ends the open PC platform. The company rebutted this accusation, with Microsoft’s Kevin Gallo noting “The Universal Windows Platform is a fully open ecosystem, available to every developer that can be supported by any store.”

Sweeney acknowledges Microsoft’s points in his follow-up piece but suggests UWP is still less open than Win32 because developers on the new platform must become a Microsoft Registered Developer and must submit their apps to Microsoft for approval. If Microsoft accepts the app, it digitally signs it and returns it to the developer, who can then distribute it. He questions whether this is a truly open system, and suggests he would like to see a CEO-level commitment from Microsoft to keeping the PC and UWP open.

Frankly, I’m not sure he can expect such a commitment. As Microsoft moves Windows 10 toward the future, you have to imagine the company will endeavor to exert ever-greater control over the applications that run on its platform and the resulting experience for users. And, while it is equally hard to imagine a Windows platform that doesn’t support legacy Win32 apps, it also seems counter-intuitive to assume already ancient legacy apps will run as they do today on all future versions of Windows. Many of these apps, while powerful, contribute to a poor long-term experience for the average user. Sure, the serious PC gamers Sweeney serves want their games to have deep access to the OS to drive better gaming experiences. But the vast majority of consumer and commercial PC users would likely choose the other option — a PC experience that feels more like the one they have on their mobile phones: fewer crashes, fewer issues when installing or deleting apps, and an operating system that remains more stable over the lifetime of the product.

While Microsoft has addressed Sweeney’s earlier comments, it’s unlikely we’ll see the company specifically address his call for a long-term commitment to his version of an open UWP. But a closer look at another set of developer documentation from the company offers a glimpse of its future.

The Future with HoloLens

As noted previously, Universal Apps and the evolved UWP started life in part as an incentive to get developers creating apps to run on Windows phone, which had fallen far behind iOS and Android in mobile. Bluntly, Microsoft missed the broader transition to mobile. The company has no intention of missing the next big evolution of computing: Augmented Reality. This is evident in its announcements and pending developer launch of its impressive HoloLens product. (Actually, Microsoft calls HoloLens a Mixed Reality product, but that’s a column for another day.) A review of Microsoft’s developer information for HoloLens is telling. The very first sentence on the development overview page states: “Developing holographic apps uses the Universal Windows Platform. All holographic apps are Universal Windows Apps, and all Universal Windows apps can be made to run on Microsoft HoloLens.”

Microsoft obviously sees HoloLens as crucial future technology for the company. It clearly states holographic apps developed for HoloLens will be UWP. That’s not to say the device won’t run Win32 apps but it seems clear the company expects developers to create all new, holographic apps on the UWP. I’ll let you draw your own conclusions about Microsoft’s stance on the future of app development for Windows from there.

So does that mean Microsoft will abandon support for Win32 on traditional PCs? Certainly not. But it’s hard to imagine the company maintaining the status quo indefinitely. I suspect at some point we’ll see such support evolve into something different from what it is today. While this will undoubtedly cause a fair amount of consternation among a large subset of existing users, it seems an inevitable conclusion to Microsoft’s current course.

Where the iPhone SE Fits

In the next few weeks, Apple is likely to hold an event in which it will announce a new iPhone which, according to rumors, may be called the iPhone SE. This phone is supposed to be smaller (in line with the iPhone 5 models), made from similar materials to the iPhone 6 series, and feature some of the same components. Reporting on the phone has focused on the physical appearance, components, and specs, but there hasn’t been any solid sourcing on the reasons for the phone to exist or how it will be positioned in the lineup. As such, it’s worth thinking through why Apple might want to release such a phone and how it’s likely to fit into the overall portfolio of iPhones going forward.

This is one of two devices likely to be announced this month by Apple where the questions of pricing and positioning are at least as interesting as the devices themselves. The other is the new mid-sized iPad, which might well inherit the iPad Pro branding from its larger sibling. But I’ll leave that for another time.

Lessons from the iPhone 5c

The iPhone 5c was announced two and a half years ago and was surrounded by some of the same speculation ahead of time as the iPhone SE is today. Some speculated the C stood for China, others that it stood for cheap, though of course Apple never confirmed either. Opinions on the 5c vary, and I suspect many see it as a flop for Apple, though I think that’s wrong. The iPhone 5c did two things for Apple which were very valuable: at the very least, it served as a useful experiment, but I think it also bolstered sales in the quieter spring and summer months when iPhone sales tend to lag. By definition, the kind of people interested in a 5c were not those who needed the latest and greatest device as soon as it was available, so it was a great fit for carrier promotions and other marketing activities in the March-August period. Q2 and Q3 sales are typically off by about a third from sales in Q4 and about 25% from sales in Q1, so boosting sales in these quarters would help even out the seasonal variability.

The iPhone 5c, of course, launched in the standard fall iPhone slot alongside the iPhone 5s, but this new rumored phone is apparently to be the first in years to launch outside that window. I suspect the reason is that the 5c sold well during just this time of year, when sales of the flagships were down, and that the new phone will help to bolster sales during this off-peak period just as the 5c did before it. If that’s part of the intent, why not launch it into this window, where it can gain the most attention and feel new and different, rather than being overshadowed by brand-new top-of-the-line phones?

Bringing 4-inch iPhones back

When Apple announced the iPhone 6, I wrote about how it closed one of the last remaining competitive windows by introducing iPhones with larger screens. But in doing so, Apple also opened another window by discontinuing new 4-inch phones at the same time. I believe Apple wanted to keep its portfolio simple and was also betting no meaningful competitor would take advantage of that window, so it could safely ignore the 4-inch size without losing those customers to competitors. However, what’s happened is many of those owners of smaller iPhones have simply stuck with them, which has also dampened iPhone sales over the last year and a half. By introducing a new 4-inch phone, Apple is giving those customers a reason to upgrade. Of course, many of those holdouts have opted out of having the latest and greatest device already, so they’re a good fit for the mid-year approach I outlined above. Again, this should help to boost Q2 and Q3 iPhone sales significantly.

Pricing and positioning

If that’s the purpose of this new phone, how should Apple price it and where should it fit in the iPhone hierarchy? Here is where I think a lot of the speculation has been wrong. As I’ve already said, I suspect this is far more about boosting off-season sales than it is about introducing a new iPhone at a dramatically lower price point, for example, for emerging markets. As such, I think we need to consider where this new iPhone would fit within the existing iPhone portfolio. Take a look at that portfolio as it stands today from a pricing perspective:

iPhone pricing March 9th 2016

In what’s effectively a three-by-three matrix, Apple has several empty spots, notably in the bottom left corner, where there’s no new 4-inch device. That might suggest a launch price for the iPhone SE of $550, to slot in neatly with the other two new phones. This preserves the $100 price differential between new phones based on size, which makes some sense.

However, there are a couple of reasons to doubt that strategy. For one thing, this new device won’t have all the same top-of-the-line specs as the iPhone 6S line, which that pricing would suggest. For another, this device is launching off-cycle and likely won’t get a price discount come September. As such, Apple can likely afford to sell it for less, and doesn’t want to put it at the same price point as the year-old 6S in September. For these reasons, I wonder if Apple might bring the SE in at $450 instead, replacing the 5S in the portfolio immediately rather than waiting until September to drop that device. It would then likely stay at that price point until next March, when it would presumably be replaced by another phone similarly positioned, assuming Apple deems the experiment a success.

What about emerging markets?

The big implication of all this is Apple won’t actually extend the bottom end of the price spread at all with this new device. In all likelihood, the SE simply takes the place of a device already in the portfolio from a price perspective. So what about emerging markets and the need to bring prices down there? For those markets, I actually expect Apple to continue its existing strategy of selling older phones, but with a new wrinkle: refurbished devices.

One of the biggest problems with the old-phones strategy for lower price points is those phones are likely to stay in market for several years from the time they’re bought. As such, you could easily end up with phones that are 5-6 years old still in market. While a handful of such phones will always remain in use, the risk for Apple is these numbers rise dramatically as it pushes this strategy in emerging markets, which may constrain its ability to move iOS and the iPhone platform forward. So it makes sense for Apple to start shortening the lifecycle of these devices, which is part of the rationale for the new strategy evidenced by the SE.

However, the other part of this strategy has to be putting more used phones back on sale. With the iPhone Upgrade Program, and the less high-profile iPhone Trade-in program, Apple now has a couple of channels through which to acquire used but relatively new iPhones which it can refurbish and put back on sale in emerging markets. Apple has long sold refurbished devices such as iPods, iPads, and Macs through its website, but it hasn’t done this with iPhones until now. Most of those devices will have been returned or replaced devices for which Apple gets no revenue, and yet they’ve still been discounted by as much as a couple of hundred dollars. With the iPhone Upgrade Program, Apple will already have received around $400 or more in monthly payments after the first year from a customer, and so could potentially afford to discount these devices even more heavily when resold.

You could see refurbished year-old devices on sale for several hundred dollars less than retail price for new devices. That could easily get those phones below the $450 floor for new, year-old and two-year-old devices. A price point of $350 for a year-old device seems entirely realistic, and you could even see $250 for a two-year-old device. That suddenly allows the iPhone to hit price points it’s never been able to hit before, which in turn could make it more viable in markets like India.

All this would leave us with a pricing approach that looks roughly like this after this month’s announcements:

iPhone pricing strategy post March 2016

Going forward, I could actually see the yellow box eliminated over time, with 2-year-old devices being replaced from a price perspective by the refurbished devices and the smaller new devices. We certainly can’t be sure about any of this, but I’m very much looking forward to Apple’s event in the next few weeks and watching how all this plays out.

HP Aims to “Reinvent” Mobile

HP recently announced its new Elite X3 convertible smartphone that can become a notebook or a desktop through use of smart adapters and wireless technology. Running Windows 10, it’s targeted at enterprise users who want portability but are not able to get all their work done on a smartphone form factor. It sports some impressive technology, including the latest Qualcomm Snapdragon 820 processor with 4GB of memory, a huge 4,150mAh battery, a 5.96” edge-to-edge high-resolution Gorilla Glass 4 display, a full Cat 6 LTE modem, dual SIM capability, and MIL-STD-810G durability. To complement and extend the core device, HP has created a desk solution that allows the phone to rest in a dock and provide connectivity via DisplayPort and Ethernet to a full-size display and keyboard as well as corporate networks. HP also created a mobile extender (called the ME-Dock) that essentially converts the device into a 12.5″ laptop.

HP is going for the Swiss Army Knife approach with this device. It believes users would prefer a single device that can be configured “on the fly” to the user’s needs and circumstances. Such handheld convertible approaches have been tried before (all the way back to Palm days) with limited success. HP is betting this time is different, driven by the adoption and standardization on Windows 10. But there are a few challenges to this strategy.

First, Windows 10 on a device like this is not all that good at legacy apps. To fix this, HP includes a VDI environment it OEMs from Citrix, which it calls HP Workspace. This is more than just Microsoft Continuum, as it is a full VDI solution that can run any Windows app loaded on the virtual server. However, and this is a major issue, it only works in an online scenario. If users want to interact with a legacy app, say on an airplane with no WiFi, they cannot. This may be the kiss of death for some users wanting to work with legacy corporate apps, as those apps can’t be natively loaded on the device (Windows 10 running on a Qualcomm chip only supports the newer Windows 10 native universal app environment).

Second, HP has not announced pricing for any of this yet. Given the high performance features of the device, it appears it will be fairly expensive. And given that a user would have to buy multiple components to make it into both a smartphone and a desktop/notebook replacement, it may be cost prohibitive. HP is betting it will still cost less than buying a feature rich smartphone and a business class 2-in-1 or Ultrabook class machine.

Third, the size of the device puts it squarely into the upper end of the phablet range, not as a replacement for the popular smartphones in the 5-6 inch range. While phablet class devices are picking up in popularity, especially with business users who can utilize the bigger screen, the majority of users still purchase a smaller, more “svelte” device. Can a device, sized in the range of smaller tablets, be competitive as a smartphone communications device with users?

Fourth, many users of smartphone devices rely on a growing list of apps available from the various app stores. Running Windows 10 means this device will have access to far fewer apps, both for business and personal use. This has been a shortcoming of Windows phones for some time and it is likely many potential users would not find this an acceptable substitute for their iOS or Android-powered devices. Even if this is primarily targeted at enterprise users, the availability of personal apps is still a driving factor for device selections (hence the whole BYOD movement).

Finally, taking full advantage of the benefits of Windows 10 requires new apps to be built to run across all form factors. However, few companies have redesigned their apps for this new universal app model. Given the history of business apps, it will take many years before the majority of such enterprise apps are available, hence the need for HP Workspace. But will companies want to deploy yet another infrastructure product, even if it is relatively easy to do?

HP is taking a gamble on an approach that might have appeal to the growing number of users who are burdened with having to use several devices to get their jobs done. Clearly, this is not a device for the mass consumer market. But the price and performance of this product will have a major impact on acceptance.

Bottom line: It is encouraging that HP is trying to regain its reputation for the innovation of years past. But this phablet-sized device may just be too big for the majority of users replacing their smartphones. Further, the absence of the convenience apps so prevalent in the Android and iOS ecosystems will be a limiting factor for many mobile users. Clearly this is innovative and a major addition to HP’s product line. But acceptance (and success) is not assured.

US Cracking Down on China’s ZTE is Bad News for Silicon Valley’s China Relations

Earlier this week, we learned the US has placed major trade restrictions on China’s ZTE, making it difficult for the company to buy components and software from US suppliers. According to multiple reports, ZTE broke US rules by selling US-made technology to Iran and had planned to use a series of shell companies to illegally re-export controlled items to Iran in violation of US export control laws.

US companies are now prohibited from selling a long list of products to ZTE, including computers, software and telecom equipment, all of which strikes directly at the heart of ZTE’s business. ZTE makes mid- to upper-end smartphones as well as telecom equipment, and now it can’t get the chips from Qualcomm that are key to its smartphone program, or the processors from companies like Intel used in its telecom equipment.

I learned firsthand about this kind of trade restriction program in 1984 when I got a call from the Department of Defense asking for help on a related issue. At that time, Creative Strategies was owned by Business International, a major global econometric consulting firm. We were their high tech arm and the US Government was one of our major clients. The US had a hands-off policy with most tech companies and had actually spent very little time with them outside of their relationships through DARPA.

During the call, I was asked to broker a meeting between them and Intel. They did not know the right person inside Intel to speak with in order to confidentially share some important trade restriction information. So I set them up with a group that managed Intel’s international relationships and, a few weeks later, two government officials met with Intel in Santa Clara.

It turns out they had become aware that Intel was getting requests to ship its newest 80386 chips to Russia and China, which would have been illegal under the laws of the time. Although this would normally have come under the State Department, it was the Department of Defense that took the lead and told Intel that under no circumstances could it ship any PC with an 80386 chip in it to any “enemy” of America.

This was during the Cold War and, while our major conflict was with Russia at the time, the US was also very concerned about our highest levels of PC technology getting into the hands of the Chinese. Of course, we look back at this now and can’t imagine how a chip as basic as the 80386 could be considered too powerful to export, but it was state of the art at the time and caused serious concern in Washington.

I ran across this again when IBM sold its PC business to Lenovo and the deal went through major government scrutiny over the fact that a Chinese company would not only own a major PC maker but also have access to the highest level of technology the US PC industry had in 2005. These types of technology restrictions are still active and are always monitored closely by US officials, although these days they are more about keeping the technology out of countries on a major sanctions list, like Iran and North Korea.

But given the changing world and the advanced economies of places like Russia and China, a restriction like the one put on ZTE now has major trade and relationship ramifications. Indeed, not long after this trade restriction was put on ZTE, the Chinese government condemned it and said point blank this would have a serious impact on US and China relations. They pointed out this will hurt Chinese companies and made a not-so-veiled threat about China imposing similar restrictions on US companies doing business in China.

Various US trade groups also pointed out the potential economic harm to US companies. While they agree that limiting arms and arms-related technology to adversaries is understandable, they questioned blocking mainstream products like telecommunications equipment, arguing it could be counter-productive to US-China relations. They also pointed out that, if ZTE can’t get these products from the US, it will get them from someone else.

They also see these restrictions giving China more reasons to make the country less dependent on foreign suppliers. They fear China will only accelerate its efforts to create chips equal to those from Intel, Qualcomm and other US vendors, and eventually buy only from suppliers inside China.

As a serious China watcher, this issue of China moving to create everything they need inside their country and become less dependent on their US suppliers is a serious one. We already know China, Russia and other countries have been getting closer to US companies and I know of various times industrial espionage has happened. In fact, this has been happening in Silicon Valley since the 1940s and US companies have to be more diligent than ever over this problem.

I don’t think the US will budge on these ZTE restrictions, so this will need to be watched very closely by the US tech world. I sense China is looking for a reason to make its market more self-reliant, and this could be the case that pushes it further in that direction.

How Safe are We from Our Apps?

Like many others in the tech community, I applaud Apple’s efforts to encrypt the iPhone to protect our privacy. But there’s been noticeably little attention given to the impact of apps on that same privacy.

I’ve always been surprised at how many permissions some apps request before they can be installed. They typically request access to our contacts, location, calendar, email and sometimes even our mic and camera. Yet rarely do the apps explain why they need all of this information or what they plan to do with it. In fact, many of the items they ask to access have no bearing on the app’s functionality. I’ve yet to come across an app that allows us to selectively accept or reject these permissions item by item.

So the question is, how serious an issue is this when it comes to protecting our privacy and how do Android and iOS phones compare?

I posed this question to Amit Ashbel, a cybersecurity professional with Checkmarx.com. The Israel-based company provides services that review software code for vulnerabilities and has published a notable report on this subject, “The State of Mobile Application Security, 2014-2015”.

He pointed out mobile apps have two main attack vectors: (1) The operating system and (2) The application installed on the device.

Ashbel noted Apple does a good job in securing its operating system and significantly limits the user’s access to core OS-level controls. Google takes a different approach and enables more flexibility which, at times, might expose the OS to more risks. Neither Google nor Apple does a good job in securing the apps themselves, because neither company seems to analyze the apps for the security vulnerabilities they may expose the user to.

The task of analyzing code is obviously immense. The iOS platform alone has more than 1.5 million unique apps, downloaded over 75 billion times!

But according to Ashbel, the vulnerabilities exposed by the apps are less a result of the developers intentionally compromising our data and more the result of poor coding that allows others to attack our phones and obtain that personal data.

The Checkmarx and AppSec-Labs study identified the top seven development sins based on testing hundreds of applications of all types, from banking to games to utilities:

1. Authentication/Authorization – Acting on or accessing data without sufficient permissions, such as bypassing the security pin code and allowing access to personal information

2. Availability – Issues resulting in denial of service from the application or part of it that can result in crashes

3. Configuration Management – Incorrect or inappropriate configurations

4. Weak Cryptography – Breaches related to insecure ways of protecting data

5. Information Disclosure – Exposure of technical information such as application logs

6. Input Validation Handling – Results of mishandling data received from the user

7. Personal/Sensitive Information Leakage – Exposure of personal or other sensitive data such as passwords, documents, credit card numbers, etc.

In comparing iOS and Android, the report finds few differences:

It is a common myth that the iOS development platform is more secure than the Android equivalent for several legitimate reasons:

a) iOS has more restrictive controls over what developers can do and tight application sandboxing
b) iOS applications are fully vetted before being released to customers – preventing malware from entering the Apple App Store

Yet, in the field of pure application security where vulnerabilities are built in the code or into the application logic, the story is quite different.

Our statistics show the distribution of vulnerability exposed by severity is almost identical between iOS and Android applications with a slightly higher percentage of critical vulnerabilities in iOS applications.

40 percent of iOS vulnerabilities were critical or of high severity, compared to 36 percent of the Android vulnerabilities.

The conclusion is that the vulnerability comes more from the way apps are coded than from any malicious intention. But that poor coding is all the more reason not to grant access to information an app doesn’t need in order to function properly.

What does Ashbel do when loading apps on his Android phone? He reads the permissions carefully and, if they ask for access to information not needed, he says no.

One would think that, as part of the approval process for their stores, both Apple and Google would require that the permissions an app asks for are just those it needs. Perhaps they need to begin examining apps’ code in greater depth. After all, Apple has raised the importance of securing the personal information on our phones, and that should include all areas of vulnerability.

How AlphaGo Illustrates the “Warm Bath And Ice Bucket” View of Technology Progress

Positions from AlphaGo's first win against Fan Hui

Remember the last time you took a bath or shower and it started lukewarm but you gradually warmed it by adding more hot water, until it reached a temperature so hot you could never have got into it at the start? Isn’t it strange how we can be immune to subtle, slow changes all around us?

Then there’s the other extreme – the ice bucket experience, where you’re abruptly plunged into something so dramatically different you can’t think of anything else.

The warm bath and the ice bucket: that’s how technology progresses, too.

As an example of the warm bath, you could point to the improvements in computing power in PCs and smartphones. Every year, they’re faster. You don’t notice how much better until you have to use an old device. (Or, of course, upgrade from a years-old device to a brand new one. These days, the effect is less visible on PCs than smartphones.)

Warm enough yet?

Another, less familiar, example is the burgeoning field of artificial intelligence (also known as machine learning, deep learning, neural networks, expert systems and so on). AI has been the “next big thing” for decades; the burden of expectations was so great it could never meet them. Where, in the year 2001, was HAL, the talking sentient computer from 1968’s film 2001?

And yet, bit by bit, AI has been improving. I realised something was going on two years ago when I wrote a story about an app called Jetpac which could examine Instagram photos and determine whether the people in them were happy, grumpy, and so on.

To do that, Jetpac analysed 100 million photos and was able to determine whether those in them were wearing lipstick (and so must be “glammed up”), had moustaches, and so on.

A fun story, but it was the underlying technology, which Pete Warden, then CTO at Jetpac, explained to me, that made me realise AI was back on the agenda. He had used a neural network (which mimics, in machine form, the way neurons in the brain work: certain stimuli are reinforced, others are de-emphasised, in a learning process) to do the analysis.
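For readers curious what “reinforced” and “de-emphasised” mean in practice, here is a toy single-neuron sketch in Python. It is purely illustrative – the feature vectors and labels are invented, and it bears no relation to Jetpac’s actual model:

```python
# Toy illustration of the "reinforce / de-emphasise" idea: a single artificial
# neuron nudges its weights towards inputs that predicted the right answer and
# away from those that didn't. Invented data; not Jetpac's actual model.
examples = [
    ([1.0, 0.0], 1),   # made-up feature vector -> label 1 (e.g. "lipstick present")
    ([0.0, 1.0], 0),   # made-up feature vector -> label 0
]
weights = [0.0, 0.0]
learning_rate = 0.5

for _ in range(10):                               # repeat the learning process
    for features, label in examples:
        activation = sum(w * x for w, x in zip(weights, features))
        prediction = 1 if activation > 0 else 0
        error = label - prediction                # +1 reinforces, -1 de-emphasises
        weights = [w + learning_rate * error * x
                   for w, x in zip(weights, features)]

print(weights)   # the weight on the feature tied to the positive label has grown
```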

It wasn’t surprising when eight months later Google bought Jetpac. The fit with Google’s broader AI drive was so obvious.

Where’s that tech now? Almost certainly powering the recognition system in Google Photos. Isn’t the Photos recognition system clever? But equally, isn’t it so obviously a progression from the face recognition we’ve seen in apps for years? The temperature is rising.

In fact, the AI temperature is now so high that this week we may witness a key event: a machine winning against a human at one of the subtlest board games ever. AlphaGo, an AI program developed at Google’s DeepMind in London, learned how to play the Chinese game Go at a professional level – and then beat Europe’s best player 5-0. On Wednesday – Tuesday night in the US – AlphaGo takes on Lee Sedol, the game’s top player. If you haven’t played Go (most people in the West haven’t), let me put it like this: it makes chess look crowded and trivial. The board has more than five times as many points as a chessboard has squares, and the number of possible moves is far, far larger.

AlphaGo isn’t like Deep Blue; it isn’t programmed just to play Go. Instead, it has a “learning” system which was tuned to play Go by working through millions of games and learning what outcomes were best. It could probably learn to win at chess. The core program learned to play video games.

This is the warming bath: how did we get to the point where computers could learn to beat the best player in the world at a game where intuition and “feel” are seen as essential?

The ice bucket

By contrast, some technologies are ice buckets – so dramatically different from what has gone before that they upend our expectations. Virtual reality (VR) fits this well. Immersive VR is an utterly different experience from what has gone before, and the potential for creating new ways of interacting is what has so many people excited about it.

To people who haven’t tried it, VR tends to be “that thing where you wear the stupid helmet”. But that’s because they haven’t experienced the ice bucket. In the past, trains were a similarly disjunctive experience, able to travel at absurd speeds. There were even fears that the velocity would make passengers’ bodies fly apart.

Are there other “ice bucket” technology examples? The original iPhone was a shock to pretty much everyone, even though the technologies it contained (notably the multi-touch screen) were already known. From January 2007, Google’s Android team sidelined their work on a BlackBerry-like device and focussed instead on a multi-touch product.

Your preference doesn’t matter

Ice buckets change the game abruptly; warm baths surround us and raise the temperature so we can’t imagine life before them. There’s no way to pick which is “better” – and we don’t get to pick anyway, because they happen quite independently of our wishes or expectations. But in truth, there are more warm baths than ice buckets. The gradual improvement of smartphone screens, battery life, chip speeds, mobile reception, mobile speeds, design improvements – they’re all slow improvements which you don’t notice until you don’t have them. For dramatic change, though, the ice bucket beats the lot.

Moment of truth

There’s an instant as you first experience a splash of water when you don’t know whether it’s hot or cold. The match between Lee Sedol and AlphaGo could be like that: an odd mixture of hot and cold, a “where were you when…?” moment. Garry Kasparov’s loss to Deep Blue in 1997 was an iconic moment, remembered by many. It has taken nearly 20 years for a computer program to be able to challenge the top human in Go, which tells you about the gap in complexity between the two games. Fewer people understand Go than chess; but everyone understands winning and losing. Computing’s advance is bringing us a moment when the ice bucket comes from a warm bath.


The first match between AlphaGo and Lee Sedol starts at 1pm Seoul time on Wednesday (4am GMT Wednesday, 11pm EST Tuesday, 8pm PST Tuesday). The match will last up to four hours. It can be viewed on YouTube; there will be commentary (which might not mean much to non-Go players) at Gogame.

Predicting the Markets Apple will Disrupt Next

I’m admittedly using the headline as a bit of click-bait. But it is absolutely applicable to Apple: I believe many markets are on Apple’s short list to disrupt, for a specific reason I will explain. However, any company that takes this philosophy to heart and actually delivers on it will play a disrupting role.

One of the pieces I wrote that got quite a bit of feedback was one on why Apple is a User Experience Company. I argued in that article that user experience sits at the heart of Apple’s business model: they strive to make every decision with the user experience in mind. And, while they don’t always hit that mark, objectively they have a better batting average than many in the market. Ultimately, those companies which have user experience at the heart of their philosophy have a solid chance at disrupting new markets, particularly those where the user experience is terrible today.

With that background in mind, let’s look at a few examples of terrible user experiences: Subscription cable and automotive.

TV is ripe for disruption and we know Apple wants to do it. Everyone feels it. The way we discover, consume, and share TV shows and movies is ready for a revolution. Cable companies are the farthest thing from good user experience companies. Sometimes, as with the experience I have had with Dish and the hardware they gave us, I wonder if they actually hate us. Their whole world must collapse in the same way the carrier-controlled hardware and proprietary “on deck” content stores were entirely disrupted and went the way of the dodo once smartphones hit the scene. This is why I’m in favor of the FCC’s move to break the cable companies’ hold over us and the proprietary hardware model they require for access to their subscription content. The entire thing today feels like a giant Ponzi scheme.

Just for grins, I timed how long it took me to turn on my TV and get to a DVR show recorded last week. Starting from the sleep screen (my TV was already on), it took me 34 seconds to get to the show I wanted to watch. Inferior technology and slow set-top boxes are what we put up with today because there is no better solution. It simply feels ripe for disruption, because the user experience with this hardware is absolutely terrible. In a tie with my printer, my set-top box is the most hated piece of technology in my house.

Similarly, I was discussing with a friend his recent car shopping experience. His family is growing and they were looking at a minivan. The manufacturer in question offered a wide range of choices in the features and functions that came with the vehicle. When it came down to it, there were two features his wife really wanted: navigation and a built-in vacuum. He found it strange to realize they could get the navigation system or the vacuum, but the manufacturer had no configuration that included both.

Contrast the mainstream consumer car shopping experience with Tesla and you immediately see what I mean. Tesla is the prime example of a consumer experience company making a car: everything from the showroom experience, to the way they handle the demonstration, to the seamless process of taking a test drive, to the first feeling of walking up to the car, getting in the car, and being in the car. Any high-end automotive shopping experience has this in common. It is an entirely different experience than shopping for a Dodge minivan.

As technology invades all areas of every industry, user experience companies are the ones to bet on. Those are the companies whose products show you the category as it was meant to be. Tesla shows us this in the automotive industry. Apple’s unprecedented customer satisfaction with its products, head and shoulders above its closest competitors, shows us how consumers react when luxury experiences are made mainstream.

Healthcare is a mess and ready to be disrupted. Our user experience with our doctors is terrible. Retail is about to get disrupted. Who wants to go to a mall and spend an hour trying to park on the weekend just to be surrounded by hordes of people and sort through racks and racks of things? Banking and a plethora of financial services from lending to payments are all poised for disruption. When you look at the world today from a user experience perspective and critique it, it becomes clear there is dramatic room for improvement in many markets and product segments. This will come, and it will be done by user experience-focused companies. Currently, that is a very short list.

If we understand Apple as a user-experience company, not a computer company, not even a technology company, then any market where the user experience is terrible is a potential target for it to disrupt.

A 5G Primer

Over recent months, the wireless industry has begun to talk more and more about 5G, the next generation of wireless technology. Verizon and AT&T have announced something of a roadmap for testing the technology here in the US and it was a major theme at the recent Mobile World Congress as well. All this seems to be happening just as people in the US finally have almost ubiquitous access to 4G, the most advanced carrier wireless technology currently available, so this may seem premature. What exactly is 5G and do we really need to start learning about a new technology already?

Understanding 4G is critical to understanding 5G

To understand 5G and all the complexities surrounding it, you first need to understand 4G and, to some extent, the other generations that came before it. These “Gs” are named for generations of mobile technology: the first was the earliest analog cellular technology, 2G brought digital services, 3G ushered in basic mobile broadband, and 4G finally brought really usable mobile broadband to the masses. Each of these technologies has been represented by a set of standards which took many years to develop, formalize, and then roll out in the form of commercial networks.

Roughly ten years separated each of these generations, with 1G services rolled out commercially in the early 1980s, 2G in the early 1990s, 3G in the early 2000s, and 4G in the early 2010s. On that basis, we’re due for another generation in the early 2020s. That’s about where 5G is likely to land, but we’ll come back to that.

With regard to 4G specifically, the most important thing to understand is it’s a far more expansive set of technologies than previous generations were. What I mean is previous generations came with a set of performance specs that were pretty well defined and ultimately bounded by certain limitations. As soon as each of these generations was standardized, it was clear another generation would need to come afterwards. But that’s not as true for 4G as it was for the earlier generations. Most 4G networks today run some form of the LTE technology, but even though that technology has certain performance characteristics in its current form, there are other flavors of LTE which will deliver far higher throughput and other improved characteristics in a way that wasn’t true for the earlier generations of technology.

The need for 5G

The need for 5G, then, isn’t as obvious as it was in previous eras of the wireless industry. Because 4G has so much headroom in terms of performance, and because it solved many of the pain points that existed in the industry before its introduction, including true broadband speeds, very low latency, and greater cost and spectral efficiency, 5G can’t simply provide more of the same on an incremental basis and still qualify as a new generation. To merit the name, 5G has to bring step changes in several different characteristics, and those are harder to find. Arguably, wireless equipment vendors and carriers need 5G far more than most of their customers do.

No standards yet

As a result of all this, there isn’t yet any standard definition of 5G, and it’s far from being a formally ratified standard. That hasn’t stopped both the equipment vendors and the carriers from talking about it as if it were, but it’s important to note that what these companies are describing is a broad vision of what 5G could become rather than something more specific. However, most definitions put forward suggest some combination of the following characteristics:

  • Very high speeds – often talked about in terms of gigabits per second, versus tens or hundreds of megabits per second for 4G (a rough back-of-envelope comparison follows this list)
  • Extremely low latency – whereas 4G introduced much lower latency, 5G technology could bring single-millisecond latency
  • Better support for IoT deployments, including both very dense sensor arrays and much lower-power radios, so devices in the field can run for years without needing new batteries. Some conceptions of 5G also include the ability to dynamically assign bandwidth and other characteristics to different devices running on the same network
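
As a rough, purely illustrative sketch of what those headline numbers would mean in practice, here is the arithmetic with assumed figures (the 100 Mbps “4G” rate, the 1 Gbps “5G” rate, and the latency values below are representative assumptions, not measurements from any network):

```python
# Back-of-envelope comparison of headline 4G vs. notional 5G figures.
# All rates and latencies are illustrative assumptions, not measured values.

def download_time_seconds(file_size_gb: float, link_speed_mbps: float) -> float:
    """Time to move a file at a given sustained link speed."""
    file_size_megabits = file_size_gb * 8 * 1000  # decimal GB -> megabits
    return file_size_megabits / link_speed_mbps

FILE_GB = 2.0         # e.g. a feature-length HD movie
LTE_MBPS = 100.0      # optimistic real-world 4G LTE throughput (assumed)
FIVE_G_MBPS = 1000.0  # 1 Gbps, the kind of figure quoted for 5G (assumed)

print(f"4G: {download_time_seconds(FILE_GB, LTE_MBPS):.0f} s")    # ~160 s
print(f"5G: {download_time_seconds(FILE_GB, FIVE_G_MBPS):.0f} s")  # ~16 s

# Latency matters independently of throughput: 50 chatty round trips at
# ~50 ms (a typical 4G figure, assumed) cost ~2.5 s of pure waiting,
# versus ~0.05 s at the single-millisecond latency 5G is said to target.
for name, rtt_ms in [("4G", 50), ("5G", 1)]:
    print(f"{name}: 50 round trips ~ {50 * rtt_ms / 1000:.2f} s of waiting")
```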

Despite these shared goals, however, it’s not yet clear what the underlying wireless technology will be that delivers these results, including what spectrum might be used. Millimeter wave spectrum (higher frequency bands than those currently used for most cellular services) is considered promising for some deployments, but other versions of 5G envisage combining various spectrum bands and technologies. This is one of quite a number of areas where consensus has yet to emerge.

More marketing than reality today

What then to make of all the field trials and other activity that’s going on today? Well, ultimately it comes down to marketing, both on the part of the operators and their equipment suppliers. Each of these companies wants to get a head start on the competition in terms of shaping people’s views of what 5G will bring and demonstrating they’re at the forefront of the technology. There are significant downsides to all this. Introducing another term like 5G at a time when it doesn’t have a clear definition risks confusing customers, especially when many of them have barely begun to understand 4G. In addition, getting too far down the road with a particular version of 5G before standards are set risks either pursuing a dead end or fragmenting the industry. The reality is 5G standards won’t be set for some time to come and, even once they are, networks based on those standards won’t be commercially available for some time after that. As such, it’s realistic to think of 5G arriving in the real world in a way that will actually matter to customers sometime in the early 2020s, just in time to continue the pattern we outlined earlier.

Wearables can Drive the Digital Health Movement

The more I study the consumer landscape for wearables, the more I’m convinced the wrong narratives are circulating about their value. Because the market is so young, most of the use cases being presented skew toward a tech or fitness lifestyle. Consumers see people running or working out as the primary advertising angle, or they hear more tech/gadget-centric value propositions as the reason they need a wearable, and most don’t immediately identify with either use case. This is the challenge of being early in a market. The mainstream value is there. Consumers just don’t see it yet.

Over the past few months, I’ve had a range of discussions with agencies in the healthcare industry. Following those discussions with my own interviews of consumers who don’t own a wearable, I get the sense the health angle is the least understood but also has the most potential in helping wearables go mainstream. Most consumers (74% of our consumer panel) have no immediate plans to buy a wearable tech product and 53% say they don’t see the need for one. However, when you talk them through the health benefits specifically, you can see their attitude soften.

On this note, some research was recently conducted by Accenture. Here are the stats that stood out to me:

  1. 77% of consumers and 85% of doctors say using wearables helps patient engagement
  2. 78% of healthcare consumers wear or are willing to wear technology to track their lifestyle and/or vital signs
  3. 40% of health app users have already shared data with a doctor in some capacity
  4. When recommended by a doctor, 3 in 4 consumers followed advice to wear technology to track health

Interestingly, the study found that, when it came to whom to share this data with, consumers responded with the following:

[Chart: whom consumers are willing to share their wearable health data with]

In recent Wristly research, we discovered 61% of the panel would consider switching healthcare providers if they offered a subsidized smart watch. 49% of the Wristly panel also said they would be “likely” or “very likely” to share health information with their provider if offered a discount on their bill. (This information and more are detailed in the latest edition of Wristly Pro.)

All of this confirms the motivation behind the recent trend of corporations implementing corporate wellness strategies that offer employees a subsidy on a wearable in order to motivate them and track their progress toward staying healthy. The benefits of increased health and wellness to the consumer, the employer, and the consumer/doctor relationship are overwhelmingly positive and can’t be ignored.

We are in the midst of a transition, however. The healthcare IT world is still working out the kinks as they move to a digital world. A specific study done by Accenture with doctors found, “Nearly all US doctors (90%) say better functionality and an easy-to-use data entry system are important for improving the quality of patient care through healthcare IT. Interoperability remains an unmet need.”

When you take a step back and see the foundation being laid right now with the digital healthcare transition and the technologies around health apps and wearables, it seems clear where we are headed but not how long it will take to get there. The health story is a strong one: even consumers agree health is important to them but admit a lack of education is a main reason they feel challenged to get and stay healthy. Technology will aid in this process, and that core understanding is one reason I remain so bullish on wearable products. The sensors which capture the essential information that helps us stay healthy will remain unique to something we wear on our person. It remains one of the few functions our smartphones will not be able to perform. Whether these sensors stay on our wrists, move to our ears, get embedded into our clothing, or something else, we don’t know. But we do know we will wear quite a bit more technology in the future.

Options for Improved Wireless Coverage and Performance are Multiplying

Despite all the marketing, TV ads, and even talk about 4G LTE and now 5G, instances of poor wireless coverage remain a common frustration among users. And even where coverage is generally good, the combination of the imperfections of RF, the hundreds of variables impacting radio signals, and sheer economics means we will never have ‘perfect’ coverage. In fact, for each incremental improvement in coverage, getting to the next level becomes economically and physically more challenging.

Despite all this, there is a wave of new technology and products hitting the market in earnest, promising to help address some common coverage and capacity challenges, for both cellular and Wi-Fi. We’re not talking about rural or remote areas where, to be honest, if there’s no tower in the vicinity 30 years after the birth of cellular, there probably never will be. The focus here is on filling dead spots and “donut holes” in outdoor areas, and providing deeper and more reliable coverage inside buildings. Let’s divide these developments between cellular and Wi-Fi.

Cellular

Four important things are happening in the cellular realm that could have some meaningful impact this year. First, wireless carriers are starting to deploy small cells in significant numbers. Small cells increase network capacity and provide improved outdoor and in-building coverage. Verizon, for example, is deploying small cells in major cities in order to alleviate the congestion being experienced due to burgeoning data usage. So your four bars of LTE will actually get you the sorts of speeds four bars of LTE should deliver. Sprint, for its buildout of the 2.5 GHz band in major cities (branded LTE Plus), is relying heavily on small cells.

Second, additional spectrum acquired over the past few years is coming online this year. T-Mobile continues to roll out its A-block 700 MHz spectrum, which provides approximately 2x the coverage from a single cell site, and better in-building coverage, compared to its higher band spectrum. TMO’s A-block holdings now cover about 2/3 of the U.S. population. As another example, AT&T is slated to turn on some of its WCS spectrum this year. And 600 MHz auctions are scheduled to begin soon, although services using that spectrum will not be rolled out until at least 2018.
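
Here is a minimal sketch of why lower-band spectrum reaches farther, using a generic log-distance path loss model; the 1900 MHz comparison band and the path loss exponent of 3.5 are illustrative assumptions, not T-Mobile network parameters:

```python
def relative_cell_radius(f_low_mhz: float, f_high_mhz: float, n: float) -> float:
    """
    Log-distance model: path loss (dB) ~ 20*log10(f) + 10*n*log10(d) + const.
    Holding the maximum tolerable path loss fixed, the usable cell radius
    scales as d_low / d_high = (f_high / f_low) ** (2 / n).
    """
    return (f_high_mhz / f_low_mhz) ** (2.0 / n)

# Illustrative assumption: compare 700 MHz low-band against a 1900 MHz
# mid-band carrier, with a path loss exponent of ~3.5 (built-up outdoor areas).
radius_ratio = relative_cell_radius(700, 1900, n=3.5)
print(f"Relative cell radius: {radius_ratio:.1f}x")         # ~1.8x
print(f"Relative coverage area: {radius_ratio ** 2:.1f}x")  # ~3.1x
```

That simple model lands in the same neighborhood as the roughly 2x figure quoted above, and it doesn’t even account for the additional advantage low-band signals have in penetrating buildings.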

Third, wireless carriers are starting to employ elements of LTE Advanced, namely carrier aggregation, which combines channels across all their spectrum holdings. This “wider channel” results in capacity and throughput enhancements.
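
As a simple illustration of the “wider channel” effect: LTE’s peak rate on a single 20 MHz carrier (with 2x2 MIMO and 64-QAM) is about 150 Mbps, so aggregating carriers scales the peak roughly with total bandwidth. The component carrier widths below are hypothetical, not any particular operator’s holdings:

```python
# Illustrative carrier aggregation arithmetic. Assumes ~150 Mbps peak per
# 20 MHz LTE carrier (2x2 MIMO, 64-QAM), i.e. ~7.5 Mbps per MHz at peak.
PEAK_MBPS_PER_MHZ = 150 / 20

# Hypothetical spectrum an operator might aggregate (MHz per component carrier).
component_carriers_mhz = [20, 10, 10]
aggregate_mhz = sum(component_carriers_mhz)

print(f"Single 20 MHz carrier peak: {20 * PEAK_MBPS_PER_MHZ:.0f} Mbps")                      # 150 Mbps
print(f"Aggregated {aggregate_mhz} MHz peak: {aggregate_mhz * PEAK_MBPS_PER_MHZ:.0f} Mbps")  # 300 Mbps
```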

And finally, improved femtocell and residential small cell products are becoming available. The most public and notable example is T-Mobile’s residential 4G LTE CellSpot product, which the carrier is basically giving away to subscribers who require improved in-building coverage (read our test drive of the product here). As another example, Nextivity, a leader in the femto market, has introduced a new signal booster for in-building coverage, supporting LTE and UMTS, that provides significant increases in gain and bandwidth compared to previous products.

Wi-Fi

Wi-Fi is another important part of connectivity, of course. What’s always been amazing to me is that consumers rush out and buy the latest smartphones, or try to make the most of their cellular network, while their router hasn’t been updated in five years and sits behind a shelf collecting dust. Well, now might be the time for an upgrade. A new crop of routers is coming to market, promising significant improvements in range and performance. The most advanced 802.11ac routers, such as the D-Link AC3200 Ultra Wi-Fi Router, have one channel at 2.4 GHz and two channels at 5 GHz and up to 8 spatial streams of MIMO, providing 4x the channel bandwidth of previous versions. And just a couple of weeks ago, Eero launched its new home “Wi-Fi System” to rave reviews. Eero relies on multiple APs in the home connected via a mesh network to deliver significantly improved coverage and performance.

In addition to new and better Wi-Fi network equipment, there are significant efforts to increase capacity in the unlicensed (Wi-Fi) bands. Over the past couple of years, additional capacity has been allocated in the 5 GHz band for Wi-Fi, which has led to a new crop of multi-band routers. There’s also terrestrial low power service (TLPS), which uses a slice of the spectrum that satellite provider Globalstar owns in the 2.4 GHz band, as a supplemental channel for Wi-Fi. Globalstar has petitioned the FCC to allow it to open channel 14, which would add to channels 1, 6, & 11 in the 2.4 GHz band currently used for Wi-Fi, providing a meaningful increase in capacity. Finally, there’s progress being made on LTE-U, which would allow incumbent wireless operators to use up to 500 MHz in the 5 GHz unlicensed band, for mainly downlink LTE services. LTE-U promises about 2x the range and capacity of current Wi-Fi (see my recent piece on LTE-U).

RF will always be capacity-limited and coverage will never be perfect. But 2016 looks to be a breakout year in terms of more spectrum and innovative products and solutions in both cellular and Wi-Fi.

Is the iPhone Coke, New Coke, Pepsi or Just Sugar Water?

On January 10, 2016, long-time subscriber and frequent commentator Obarthelemy wrote:

User Experience is in the eye of the beholder. Until I see double-blind tests about it, I flatly deny that Apple’s is superior/premium…. ~ Obarthelemy

This really got me thinking. How would the Apple iPhone, phones running Android, Windows Phone, and the rest fare against one another in a double-blind test?

Like Carl Sagan, I’m a huge believer in the scientific method:

…the scientific method was the best method ever invented for arriving at the truth of things. ~ Carl Sagan

However, before we discuss whether the iPhone or Android Smartphones ((Smartphones that run the Android operating system.)) or any other smartphone would win in a double-blind test, we should first take a step back and ask ourselves whether a double-blind test is the best way to judge consumer preferences for smartphones — or consumer preferences for ANY product, for that matter.

Fortunately, we don’t have to guess. That question was asked and answered in the 1980s by the marketing campaign known as the Pepsi Challenge.

The Pepsi Challenge

The Pepsi Challenge (see 30 second commercial) marketing campaign of the 1980s was supposed to be a scientific inquiry; a double-blind experiment.

In a world overwhelmed with soda options, how could you really know which soda you liked best? It made sense to put prejudice and branding aside, wear a metaphorical blindfold and focus purely on the flavor of the various options. ~ Pepsi paradox: Why people prefer Coke even though Pepsi wins in taste tests

Here’s the thing: The Pepsi Challenge wasn’t just a marketing gimmick. It really is true that in blind taste tests people preferred the taste of Pepsi over Coke.

In fact, the Pepsi Challenge marketing campaign was so successful that Coke began a series of its own internal taste tests aimed at developing a superior product. The result was New Coke — a sweeter cola reformulated to be better than Pepsi and better than the classic formulation of Coke in blind taste tests.

The reaction to New Coke was not at all what the Coca-Cola company had expected. Regular Pepsi drinkers were underwhelmed. Regular Coke drinkers hated it.

The board of Coca-Cola then reversed itself, re-introduced the old formula under the brand name “Classic Coke”, and sold both New Coke and Classic Coke side-by-side. Over time, New Coke all but disappeared, with Classic Coke once again taking its place as the company’s flagship product.

Today, despite the double-blind taste tests that showed that Pepsi was preferred over ‘Classic’ Coke, and New Coke was preferred over both Pepsi and Classic Coke, people buy far more Coke than Pepsi, and almost no one at all is interested in buying New Coke. ((According to industry statistics compiled by Beverage Digest, Coke owns 17 percent of the American market for carbonated soft drinks. The next most popular choice is Diet Coke with 9.4 percent. Pepsi languishes in third place at 8.9 percent.))

What the heck is going on here? If New Coke beats Pepsi in taste tests, then why is it less popular than Pepsi? If Pepsi wins taste tests against Coke, then why does Coke still dominate the soda market?

Hypothesis #1: Marketing Is All That Matters

Some industry observers contended that Coke’s ultimate success over Pepsi was proof that superior marketing wins out even over a superior product. Marketing, therefore, was all that really mattered and consumer companies should invest lots of money in advertising. But that explanation doesn’t really hold water.

If marketing is all that matters, then why didn’t Pepsi — which supposedly had the superior product — just improve its marketing? For that matter, if marketing is more important than product, why doesn’t every company just improve their marketing?

Half the money I spend on advertising is wasted, and the trouble is I don’t know which half. ~ A maxim of obscure origins, put in famous mouths

And if marketing were all that mattered, then why wasn’t the Coca-Cola company able to sell New Coke? They devoted far more advertising dollars to New Coke than they had ever used to promote the original version of Coke, but New Coke — which was specifically formulated to beat both Coke and Pepsi in taste tests — went exactly nowhere.

When a man says there’s nothing that marketing can’t do, you know that the man has nothing to do with marketing.

Hypothesis #2: Sweet Sips

A second theory was that people preferred Pepsi in blind taste tests because people prefer sweeter tastes when sipping. And there is a factual foundation for this assertion. Even in blind taste tests of wine, people almost invariably prefer sweeter varieties. And no one is seriously contending that sweeter wines are always superior to other varieties of wine.

However, if people prefer sweet tastes when sipping, then why do the taste tests reverse themselves when the sodas being tested are labeled as Coke and Pepsi? If sweetness was what mattered most in taste tests, then Pepsi should win out over Coke. And New Coke — which is even sweeter than Pepsi — should win out over both Pepsi and Coke. But this is not what happens. When taste tests with labeled sodas are conducted, the verdicts are reversed. Coke beats Pepsi and both Coke and Pepsi beat New Coke.

Again, what the heck is going on here?

The Brain Overrules The Taste Buds

When Read Montague of Baylor College of Medicine performed a version of the Pepsi Challenge with subjects hooked up to an fMRI machine, he found something interesting. In blind taste tests, most people preferred Pepsi, and Pepsi was associated with a higher level of activity in an area of the brain known as the ventral putamen, which helps us evaluate different flavors. By contrast, in a non-blind test, Coke was more popular and was also associated with increased activity in the medial prefrontal cortex — the part of our brain associated with higher-thinking functions. In other words, the higher-thinking functions of the brain were overruling the decision of the taste buds.

You might be saying, “See! This is exactly why we need double-blind studies. People are letting their irrational feelings for a brand interfere with their taste buds and when it comes to choosing flavors that we like, the taste buds — not our irrational brand preferences — should win out. Double-blind tests are the answer.”

Umm…no.

Double-blind tests are the answer all right, but they’re the answer to the wrong question.

There are no right answers to wrong questions. ~ Ursula K. Le Guin

Let’s re-review Carl Sagan’s quote on the scientific method:

…the scientific method (is) the best method ever invented for arriving at the truth of things. ~ Carl Sagan

The truth of “things”, yes. But people are not “things”. Consumer preferences are not “things”.

Double-blind tests are designed to eliminate pre-existing biases and the power of suggestion. That’s ideal for scientific inquiries, but it’s totally inappropriate for studying consumer preferences. In fact, it’s worse than useless because double-blind tests eliminate the very thing we’re looking for. When determining consumer preferences, biases and the power of suggestion are not noise to be eliminated — they’re the signal we wish to identify. When studying consumer preferences, we don’t eliminate biases to get to reality. Our biases ARE reality.

The Human Rowboat

The philosopher J. S. Mill once observed:

(T)here are two kinds of wisdom in the world:

1) Scientific; and
2) The Wisdom of Ages.

The first kind of wisdom changes every day. The second kind of wisdom changes not at all. The first kind of wisdom consists in what we know about the world and how it works. The second is what we’ve collectively learned about human nature through the experience of individuals across thousands of years of history. The second kind of knowledge is unsystematic, consists in psychological rather than empirical facts, and is present in more or less equal amounts in every historical period. ((As an aside, my style of writing has been deeply influenced by this idea that there are two kinds of wisdom. I write about technology, but I pepper my articles with quotes filled with the wisdom of ages. It’s always surprising to me how relevant the thoughts of Socrates, Nietzsche, Benjamin Franklin, Groucho Marx, and others who lived yesteryear are to the technology problems of today.))

The scientific method is good for discovering facts, but personal preferences are not facts to be discovered, they are feelings to be uncovered. We are creatures of both logic and emotion. To assume that human beings are only logical is — well — it’s illogical. And very, very counterproductive.

The brain and the heart are like the oars of a rowboat. When you use only one to the exclusion of the other, you end up going around in circles. ~ Dr. Mardy

The above is just a simile, but I think it’s a great one. When it comes to understanding consumer preferences in technology, many otherwise very intelligent people go around and around in circles because they put all their weight behind technology, and neglect — or refuse to acknowledge — the human half of the equation.

The history of technology is the history of understanding things and misunderstanding people.

The Intersection

The best technology products — and this is important, because it’s so widely misunderstood — do not consist only of the best technology. The technology must also cater to the way human beings think and work.

You’ve got to start with the customer experience and work back toward the technology, not the other way around. ~ Steve Jobs

The broader one’s understanding of the human experience, the better design we will have. ~ Steve Jobs

This is why Apple goes on, and on, and on about standing at the intersection of technology and psychology.

I think really great products come from melding two points of view: the technology point of view and the customer point of view. You need both. ~ Steve Jobs

Dr. Land at Polaroid said, “I want Polaroid to stand at the intersection of art and science,” and I’ve never forgotten that. ~ Steve Jobs

Apple has the opportunity to set a new example of how great an American corporation can be, sort of an intersection between science and aesthetics.

We want to stand at the intersection of computers and humanism. ~ Steve Jobs

The reason Apple resonates with people is that there’s a deep current of humanity in our innovation. ~ Steve Jobs

People Don’t Buy A Product, They Buy An Experience

When we make purchases, we use all of our senses, along with our accumulated memories, feelings, knowledge, etc. We apply the lessons we learned yesterday to the purchases of today. We don’t buy products in a vacuum, we buy them within the context of our lives.

When people drink soda in a blind taste test, they prefer Pepsi. When people drink soda in a cup with the soda’s logo on it, they prefer Coke. Why? Because people don’t buy a product, they buy an experience. And for most, drinking Coke is a better experience than drinking Pepsi.

It was actually Pepsi, not Coke, that tricked us with their marketing by convincing us that a blind taste test represented an accurate way to measure the desirability of a soda. The Pepsi challenge wasn’t scientific at all — it was a gimmick because it measured the wrong thing. Products and services aren’t judged by how good they are, they’re judged by how good they make us feel.

We never desire strongly, what we (only) desire rationally. ~ Francois De La Rochefoucauld

New Coke succeeded in labs and Pepsi succeeded in blind taste tests because they appealed to a single sense. Classic Coke succeeded in the marketplace because it appealed to our overall sense of well-being.

People don’t ask for facts in making up their minds. They would rather have one good, soul-satisfying emotion than a dozen facts. ~ Leavitt

The Apple Experience

Coke is not successful because of their ingredients any more than a great Chef is successful because of his or her ingredients. It’s how the ingredients are put together and how they are presented that make a great meal.

— Great Chefs sell an experience.
— Coke sells an experience.
— Apple sells an experience.

You can’t use double-blind tests to determine which is the best soda and you can’t use double-blind tests to determine which is the best smartphone either.

The primary reason why so many industry analysts misunderstand Apple is confusion surrounding what Apple actually sells. Apple doesn’t sell phones, tablets, laptops, desktops, and… smart watches. Apple sells experiences.

Apple is a counterintuitive company because they are an experience company that tries to arrange technology so we don’t have to experience it.

Apple has always been, and I hope it will always be, one of the premiere bridges between mere mortals and this very difficult technology. We may have the fastest PCs, which we do, we may have the most sophisticated machines, which we do. But the most important thing is that Apple is the bridge. ~ Steve Jobs

Apple is a company of solutions wrapped in experiences. ~ Lou Miranda

The reason Apple can consistently collect between 90% and 95% of all smartphone profits is because, while their competitors are selling technology, Apple is selling an experience.

Apple has no competition who sell what their customers are buying. ~ Horace Dediu (@asymco)

Conclusion

User Experience is in the eye of the beholder. Until I see double-blind tests about it, I flatly deny that Apple’s is superior/premium…. ~ Obarthelemy

Dude, that ship has sailed.

Double-blind tests are not the standard by which you judge the taste of soda and they’re not the standard by which you judge the user experience of a smartphone either. In the marketplace, profit garnered from sales — not double-blind tests — is the only measure that matters. And Coca-Cola and Apple have almost all the profits.

What I love about the consumer market that I always hated about the enterprise market is that we come up with a product, we try to tell everybody about it, and every person votes for themselves. They go “yes” or “no.” And if enough of them say yes, we get to come to work tomorrow. ~ Steve Jobs

In the free market, you get to vote “yes” with your dollars, but you don’t get to veto the votes of others.

Indeed, a major source of objection to a free economy is precisely that it… gives people what they want instead of what a particular group thinks they ought to want. ~ Milton Friedman

The verdict of the market is final and inviolate. ((Absent the use of force, the verdict of the market is final and inviolate))

Facts do not cease to exist because they are ignored. ~ Aldous Huxley

Facebook’s Trojan Horse Commerce Strategy

If you were to ask the question, “what is Facebook?”, you might get a variety of different answers: a social network, a communication platform, a content aggregator, and so on. Yet one thing Facebook isn’t – yet – is an e-commerce player. But it’s arguable that much of what Facebook has been building over the last several years amounts to a Trojan horse strategy to become one. Rather than entering Amazon’s market directly, Facebook has been building the scaffolding around its e-commerce business and doing everything but actually introducing ways to buy things across Facebook. Given how Amazon tends to respond to direct threats to its business, this may be one of the smartest things Facebook has done in its history.

What then, is this e-commerce scaffolding Facebook has been building? Think about what you’d need to have in place to build a successful e-commerce business:

  • Goods – either goods procured directly or close connections with companies that could be suppliers of such goods
  • Potential buyers – as large a number as possible of potential customers for those goods you have to sell
  • Signals – ideally strong signals about which of the goods on offer will be of interest to your potential customers
  • Customer service infrastructure – ways for your buyers and sellers to communicate with each other before, during, and after the sale
  • Marketing tools – both above-the-line advertising and word-of-mouth tools for making potential customers aware of the goods on sale

Facebook, at this point, has all of this and more. With over a billion and a half users and masses of businesses who already run Pages and buy advertising on the site, it has well beyond a critical mass of potential buyers and sellers. Every company of any size in the vast majority of the major countries in which Facebook operates has a presence on the site and a relationship with Facebook, and a majority of individuals in many of those countries do, too.

Facebook already knows a great deal about your interests, and indeed, serves up targeted advertising for those products already, even though today the purchase flow that follows a click on one of those ads is completed outside the Facebook walled garden. In the form of Messenger, which was recently expanded to allow businesses to communicate with customers, Facebook has a customer service infrastructure and communication medium to allow buyers and sellers to interact throughout the purchase cycle. And through both paid advertising and the social connections and ability to amplify messages through social sharing, Facebook has a variety of tools to get the word out about products. With the M virtual assistant, Facebook even has a discovery mechanism for connecting potential buyers with sellers that may meet a particular current need, combining timeliness and relevance in a way only search engines have previously achieved.

The next interesting question then becomes where the Buy button shows up first. It clearly has potential in the context of advertising but, given the emphasis on Messenger as a channel for B2C interactions, it seems likely it will have a role there, too. Ultimately, Buy buttons should pop up in appropriate contexts throughout Facebook, probably even inserted automatically by AI that recognizes product names in status updates and Messenger conversations and recognizes the products themselves in pictures and videos users share. Of course, all of this will spread to both Instagram and WhatsApp (perhaps even Oculus) over time as well. At this point, it’s far less a question of whether Facebook will embrace commerce and much more a question of when.

One of the biggest determinants of Facebook’s ability to finally pull the trigger on commerce will be its ability to solve its biggest and thorniest problem — payments. For all the information users have willingly handed over to Facebook over the years, the one vital piece of information it hasn’t asked for or received is the 15 or 16 digits on the front of users’ credit cards. Yet having some kind of payment infrastructure in place is an essential piece of that commerce scaffolding. David Marcus, who now runs Messenger, obviously has a long history in this space, but so far all we’ve seen along these lines is peer to peer payments in Messenger. When that product launched, I surmised it might well be a way to break through the payments barrier but it hasn’t exactly set the world alight since. Using partners that have existing payment systems would be one option, but many of the most obvious partners have become direct or quasi-competitors to Facebook over recent years, including both Amazon and Apple. Once a user’s first transaction is completed, the rest is straightforward, but it’s that first transaction that’s going to be the hard part. Crack that and Facebook should quickly become a major player in e-commerce, adding yet another arrow to its quiver and opening up a whole new set of revenue streams.

Dual-Lens Cameras and a New High End iPhone

The rumor mill is alive and well, discussing research analyst notes around the alleged dual-camera feature for the new iPhone 7 this fall. It appears the rumors point to three variants of the iPhone: the 7, the 7 Plus, and something above that which will supposedly feature a dual-lens camera. I’ve been tracking the component landscape in optics for some time and dual-lens cameras (I’ve seen demos with a dozen camera arrays) are a question of when, not if.

The more demos I’ve had of some of the capabilities of dual-lens cameras on a smartphone, the more excited I get about the possibilities. For example, recording multiple videos at one time, perhaps one in slow motion and the other sped up, and then being able to blend or edit them. Taking a large enough picture, megapixel-wise, that you get the equivalent of 10x or more optical zoom with no image degradation. All consumers will see the benefit of zoom and of not needing to carry a DSLR just to get pictures of their kids playing sports because they aren’t close enough. With dual-lens cameras, those days are gone. The ability to take pictures in extremely low light gets even better with dual-lens cameras. Optical image stabilization sees dramatic performance increases. 3D imaging becomes possible. This could lead the way to VR and more immersive images and video. The benefits of dual-lens cameras are significant and, when consumers see what can be done with imaging on phones that sport dual-lens cameras, there will be a real “wow” factor.
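
To give a sense of the mechanism behind several of these tricks: two lenses a known distance apart let software triangulate depth for every pixel from the shift (disparity) of the same point between the two images. A minimal sketch of that standard stereo relation follows; the focal length, baseline, and disparity values are made-up numbers for illustration, not anything specific to the rumored hardware:

```python
def depth_from_disparity(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """
    Pinhole stereo relation: depth Z = f * B / d, where f is the focal length
    in pixels, B is the distance between the two lenses (baseline), and d is
    the horizontal shift (disparity) of the same point between the two images.
    """
    return focal_length_px * baseline_m / disparity_px

# Illustrative numbers only: ~3000 px focal length, lenses 1 cm apart.
f_px, baseline_m = 3000.0, 0.01
for disparity_px in (60.0, 30.0, 10.0):
    depth = depth_from_disparity(f_px, baseline_m, disparity_px)
    print(f"disparity {disparity_px:>4.0f} px -> depth ~ {depth:.1f} m")

# Nearer objects shift more between the two views. A per-pixel depth map like
# this underlies synthetic background blur and 3D capture, while fusing the
# two exposures is what helps low-light shots and zoom.
```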

Knowing this gives me confidence the technology is the kind of thing which could drive a new buying cycle and be the “must have” of the high-end devices that support it. Interestingly for Apple, many component analysts point to the LinX acquisition as playing a role with the needed IP to do unique things with optics. There is also clear capacity expansion in many of Apple’s core suppliers around optics. There is enough smoke here to take it seriously. However, the rumors are saying this dual-lens camera tech will only come to the highest end variant of the iPhone 7 Plus due to supply chain constraints. Essentially, there is going to be a very expensive high-end version of the 7 Plus which contains this stuff and it will be in short supply but likely very high demand.

Huawei was actually the first OEM to ship a dual-lens camera smartphone, in the fall of 2014. The supply chain suggests Huawei is also looking to be aggressive with this feature. Right now, these are the only two OEMs being mentioned for 2016. Samsung, it appears, won’t have anything until 2017 and, hopefully during that time frame, Apple will be able to scale the tech down the line.

From what I’ve seen and heard, it does seem this could be one of those must-have features. More importantly for Apple, the camera capabilities rank quite high among the features Android owners are considering when thinking of switching to an iPhone.

The challenge will come in the form of the rest of the market embracing these features and moving them to lower cost smartphones fairly quickly. I think 2019 is probably the time frame when you see really good dual-lens camera experiences in products under $500 so there will be a time advantage.

However, there is a software and ecosystem element to this which will favor Apple. The software to do this well will be very difficult and, more importantly, the app ecosystem to take advantage of it favors Apple’s developers, who move much faster than others. So, while the rest of the market will get these features, the surrounding ecosystem may be absent or less supportive of the value.

The long and short of it is that I’m bullish on the prospects.

The Amazingly Elusive Non-Smartphone Owner

The non-smartphone owner — you know who I’m talking about. You may even know or be related to one of these people. You may even be one yourself. We spot them every now and then in the wild using these ancient devices and we are bewildered.


In the US, roughly 15-18% of the mobile market still uses a feature phone. Personally, I find this fascinating and I’d like to share some insights we uncovered in our latest US smartphone market study.

I take nearly every opportunity to talk to a consumer who is doing something interesting whenever I spot them in public. Often these conversations happen in a line, at a gas station, while waiting for my wife outside the bathroom at a movie theater, etc. One thing I learn when talking with these non-smartphone folks is how it all boils down to them simply not wanting a smartphone. Sometimes this is out of principle, sometimes cost, sometimes they don’t want to learn something new or be bothered by technology. But I decided I’d ask questions specifically to those in our mainstream consumer research panel who say they don’t own a smartphone. Here are some of the things I uncovered.

[Chart: reasons panelists give for not owning a smartphone]

The top answer from the non-smartphone owners on our panel was “no interest in the capabilities of a smartphone.” I added the “I like my basic cell phone” option in order to capture sentiment. This mentality is exactly the one I encounter whenever I get a chance to interview someone who doesn’t own a smartphone. They simply aren’t interested. They understand the benefits, they don’t find smartphones too hard to use, they don’t want to be bothered by the costs and, when it comes right down to it, they don’t believe smartphones are worth it.

They skew older, with 50% of them saying they were in the 60+ demographic. They skew slightly more male than female. Here is the really crazy part: most non-smartphone owners in our panel have owned their current feature phone for 3-4 years and said they have no intention of replacing it for another 2-3 years. Does a Samsung or LG (the most popular brands owned by this cohort) last for 6 to 7 years? Remarkable if so.

Out of curiosity, I wanted to gauge what brand of smartphone they might lean toward should the dark day come when they can no longer get their precious feature phone. Samsung, Apple, Motorola, and LG were the top answers, with Samsung leading at just over 50% of the responses. Interestingly, this cohort leans more toward Android if they had to choose a smartphone, and toward the brand of feature phone they already own, such as Samsung or LG.

It intrigues me that price comes up as much as it does, given that US carriers seem to penalize those who don’t yet have smartphones by charging them more in various ways on their bill than consumers who do have smartphones. We see this often on family plans, where the kids with the smartphones pay less, either per line or in other ways, than the parents with feature phones. So you would think that at some point the cost issue goes away and it just becomes a principled stand against smartphones themselves.

Around the same time we did this study a few weeks ago, I also did one on the PC/tablet market to gauge where the market is currently leaning with purchase plans for 2016. These non-smartphone owners also skew toward Windows desktops from brands like Dell or HP. They purchased their current machine 5-6 years ago and paid less than $400 for it. Most don’t own a tablet of any kind, most don’t plan to, and the small percentage who do plan to buy one intend to buy an iPad. Over 60% have no plans to buy a PC or laptop of any kind this year, while 12% said they would “possibly” buy a new PC this year and only 10% have definite plans to buy a new PC in 2016. And when they do, the majority of respondents said they plan to spend in the $400 range, again.

They spend most of their PC time on social networking, a range of tasks that qualify as file management, and streaming videos. Nothing they do requires a high-priced PC and, since they don’t have a smartphone or tablet, the PC is their only device for doing such things.

After both studies, the picture is clear: who this type of customer is, what they own and don’t own, their primary use cases and behaviors, their price bands, and their sentiment toward the smartphone. While interesting, and rare, these customers are unique in many ways and represent a part of the market many of us who live and breathe tech find hard to comprehend.

I want to leave you with this key understanding as to why I bring up this customer. In many of the consumer market and device usage studies we have conducted in the past year, the same glaring evidence stands out. We can directly tie the price paid for a PC, smartphone, or tablet to usage of the product. Simply, those who pay more for their computers use them more. A very price-conscious consumer like the non-smartphone owner has no intention of using the increased capabilities and so sees no need to pay for them. Similarly, those who buy lower-end smartphones, PCs, and tablets are less engaged with the device and the surrounding ecosystem. This insight helps us understand engagement with the surrounding ecosystems at different hardware price points. Anyone in the software (apps) or services ecosystem needs to understand this dynamic as it relates to their business focus and customer priorities.

Could the FBI Court Order to Apple be Counter-Productive?

Since the court order came out ordering Apple to assist the FBI in accessing the iPhone of the San Bernardino terrorist, I have been talking to various legal authorities and Washington insiders to try to get a real-world sense of how Apple’s strong stand against this could play out. Clearly, Apple is highly committed to its position and, as Tim Cook told ABC recently, they are willing to take this to the Supreme Court if they must.

This is, of course, a highly emotional issue and, at least in the FBI’s mind, a kind of one-off situation. Since the shooter is dead and the phone itself belongs to the City of San Bernardino, they felt this was the one case on which they could really challenge Apple and, using public opinion, force Apple to comply with this specific order.

But I get a sense that, while the FBI did expect Apple to appeal, they did not anticipate Apple would also use this case to actually champion the importance of personal privacy and security and to challenge it at a level that could ultimately force it to Congress and/or the Supreme Court for resolution.

From Apple’s viewpoint, they appear to have ultimately realized that at some point this larger question of digital security and privacy had to be forced to higher authorities to get a proper ruling. To the FBI’s chagrin, Apple looked at this court order as the ideal way to force the national conversation and get the kind of legislative action needed to determine the digital privacy and security rights of US citizens. But any legislation also needs to balance the real need to protect citizens from terrorism and other national threats in which data on something like a smartphone would be critical.

This push toward making this a constitutional issue was reinforced on Monday by a New York judge who agreed this type of case must be addressed at a congressional level, since the original founders could not have imagined the concept of digital rights, encryption, and its impact on the Constitution. But think of this as the “top of the first” in this battle between the FBI, Apple, and privacy and security advocates.

My Washington contacts feel that, no matter who is elected president, the US Senate and House will still be polarized, will take various positions on this issue, and will never get around to doing anything constructive about this legal question. Instead, they seem confident it will take the Supreme Court to actually bring a ruling on this important and sensitive topic. With that in mind, Silicon Valley will probably be more active when it comes to supporting whoever is put forth to replace Justice Scalia, as it is most likely this will not get to the Court until it has all nine justices seated.

But I am getting a sense from people in the know I talk with that the FBI bit off more than they could chew with this legal maneuver.

More importantly, if the eventual outcome is to tighten cyber security and privacy laws, this move by the FBI will have been very counter-productive for them. But, at the same time, they probably did the American people a favor, since it will now force our government officials and the higher courts to, at the very least, give us more precise rules and laws on this critical issue for America and for other democracies and governments around the world.

Installment Plans and Smartphone Upgrade Rates

About six months ago, I wrote a post here about how installment plans appeared to be accelerating smartphone upgrade rates, based on the data available at the time. Across the three carriers we had data from, upgrade rates were higher in the first two quarters of 2015 than in the same periods in 2014. However, over the last two quarters, things have begun to change. It’s now clear something of a dichotomy is emerging, where some carriers are seeing faster upgrades while others are seeing slower ones as they embrace installment plans.

The new data

The charts below show the postpaid phone upgrade rate (i.e. the percentage of the postpaid phone base upgrading their handset in the quarter) for each of the four major US wireless carriers:

Q4 2015 phone upgrade rates

What you can see is, even though the first two quarters of 2015 did see higher upgrade rates for all three carriers who reported the figure for both years, in the last two quarters of 2015 the situation somewhat reversed. In Q3, both Sprint and Verizon saw lower upgrade rates in 2015, while in Q4 all four carriers saw lower upgrade rates. Now, part of that is because Q4 2015 simply saw lower smartphone sales in general, following an exceptional Q4 2014 driven by the launch of the iPhone 6 (something I wrote about previously). But part of what’s happening is a slowing of upgrades at two of the carriers in particular – Verizon and AT&T – while the picture is somewhat different at Sprint and T-Mobile.

At the same time, actual upgrade rates are now considerably lower at the larger two carriers than at the two smaller carriers:

Comparative upgrade rates

As you can see, AT&T and Verizon’s upgrade rates are the lowest, while Sprint’s and T-Mobile’s are considerably higher. The outgrowth of this is that T-Mobile in particular now sells almost as many smartphones per quarter as AT&T, despite having a much smaller base of smartphone customers. The next question then becomes, why this discrepancy?
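
The arithmetic behind that observation is straightforward: quarterly smartphone sales into the existing base are roughly the base times the quarterly upgrade rate, and the implied replacement cycle is roughly the inverse of that rate. The bases and rates below are round, hypothetical numbers chosen only to show the mechanics, not the carriers’ reported figures:

```python
# Hypothetical illustration of base x upgrade-rate arithmetic (not reported data).
carriers = {
    # name: (postpaid phone base in millions, quarterly upgrade rate)
    "Big carrier":   (75.0, 0.06),
    "Small carrier": (30.0, 0.13),
}

for name, (base_millions, upgrade_rate) in carriers.items():
    upgrades_millions = base_millions * upgrade_rate
    implied_cycle_years = (1 / upgrade_rate) / 4  # quarters -> years
    print(f"{name}: ~{upgrades_millions:.1f}M upgrades per quarter, "
          f"implied replacement cycle ~{implied_cycle_years:.1f} years")

# Big carrier:   ~4.5M upgrades per quarter, cycle ~4.2 years
# Small carrier: ~3.9M upgrades per quarter, cycle ~1.9 years
```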

Attitude toward upgrades, and the prevalence of leasing

The two biggest drivers of the difference between these groups of carriers are their overall attitudes toward upgrades and their approach to leasing in particular. It’s worth remembering that T-Mobile introduced frequent-upgrade policies with its Jump program in 2013 and the other carriers followed with their own installment plan programs over the following year. However, even though the programs share a superficial similarity, the details have been different, especially in the implementation. For one thing, though Jump offers very frequent upgrades by default, plans from the other carriers offer a range of payoff periods and, in practice, Verizon’s and AT&T’s plans are essentially intended to recreate the standard two-year-plus upgrade cycle rather than really incentivize more frequent upgrades. Of course, even T-Mobile doesn’t see all its customers upgrading as often as they’re entitled to, for a variety of reasons, but AT&T and Verizon have largely backed off encouraging customers to upgrade as soon as they’re eligible, while T-Mobile has done more to push customers in that direction.

The other issue is leasing, where both Sprint and T-Mobile now have programs, and where Sprint is essentially making leasing its default option for new handsets. T-Mobile has been a little more equivocal about its leasing plans (marketed as Jump on Demand) and in fact said on its recent earnings call it would be mostly prioritizing installment plans in Q1 after more heavily favoring leasing in Q4. However, given that T-Mobile’s installment plans closely mimic leasing in key respects, including the ability to upgrade frequently, that actually doesn’t matter much from an upgrade rate perspective.

Some carriers are punching above their weight

The long and short of all of this is AT&T and Verizon are taking their feet off the gas as far as driving smartphone upgrades, while Sprint and T-Mobile are doing all they can to accelerate them. The result is Sprint and T-Mobile are punching considerably above their weight in terms of smartphone sales, while AT&T and Verizon are selling fewer smartphones than their sizable bases would suggest:

Smartphone base vs sales

Why does all this matter? For starters, if you’re a handset vendor, it means if you allocate your resources by who has the bigger smartphone base, you’re likely to put your money in the wrong place. But even if you’re only watching the dynamics of the US smartphone market, it’s worth noting how these trends shift and change over time, because the impact of installment plans is proving much less straightforward than it first appeared.

Uber and Safety

When it comes to developing new products, I’ve always found the idea to be the easy part. It’s the execution that’s really the hard part. Execution requires implementing all of the tiny details to get everything just right. Miss one important detail and it can lead to failure.

I thought of this same issue with regard to Uber over the past several days, with it being in the news. The Uber idea is brilliant, one of the best to come out of the tech industry since perhaps Facebook. The company has gotten almost everything right, except one thing: its approach to the safety of its customers. Does that make Uber vulnerable?

Now, Uber is not going to fail, but its success can be seriously affected if customers begin to worry about their safety. Even when the odds are tiny, consider how many people are afraid to fly after a serious plane crash.

Had I not covered two incidents involving Uber customers this past summer for the San Diego Transcript, I might have felt differently when reading about how an Uber driver in Kalamazoo, MI was charged with gunning down a half-dozen people at random. I found Uber brings some of this on itself with its own indifference and stubbornness. The company is not doing everything it can to protect its customers. That was my conclusion after covering these two events and interviewing the victims.

A friend of mine, a female Silicon Valley executive, had a harrowing experience when she took an Uber on what should have been a ten-minute drive from one part of San Francisco to another. The driver took a long route onto a congested freeway, even though his GPS suggested a local route, and then went into a frenzy when she questioned him. The driver sped down the breakdown lane past stopped traffic to his left and cut across several lanes of traffic, threatening to let her out on the freeway. She was fortunate to get out when the car stopped several blocks from her destination.

A second victim called right after this first story ran. Her Uber driver took her in the opposite direction to where she was going and told her he was going to show her “a good time”. She only got the driver to relent when she started filming him and threatening to call the police.

Neither victim was able to reach Uber to report the incident. Uber has no phone number for reporting problems.

The first victim was able to reach Uber only by tweeting. The company’s response was to send a link to fill out a form and a $5 refund. The second victim called the police and, when she finally was able to reach Uber, they said the driver had a known hearing problem and blamed her for misunderstanding him. She also discovered, when she tried to give the police the driver’s information and license number from her phone, that the Uber app deletes that information once the ride begins.

In each case, the victims felt Uber showed indifference and denial.

Think about this. In each case, a life was put in danger. Yet Uber did not react as any of us would if we saw a person in harm’s way. It’s lunacy that Uber does not provide a phone number to report incidents of this type. You would think the company would want to know whenever such incidents occurred.

Any company experiencing the rapid growth Uber is going through can’t possibly prevent the hiring of problem drivers. Some bad ones will get through their screening system. And proper vetting is likely one of their biggest costs, perhaps next to legal.

But even after the killings in Kalamazoo, Uber insisted it plans no changes to its screening process, which currently requires that an applicant submit their name, birth date, social security number, vehicle registration, insurance, license, and a vehicle inspection report.

Yet, they are facing lawsuits from several cities, including San Francisco, for not doing enough to screen drivers, exaggerating how safe they are, and allowing convicted felons to become drivers. Uber does not do any face-to-face screening, nor does it do fingerprint checks, something most law enforcement agencies recommend.

Uber just settled one lawsuit for $28.5 million for, among other things, claiming it was “the safest ride on the road”. Clearly Uber can do more to protect its customers.

Maturing Tech Markets means Increasing Cross-Category Competition

Being a device vendor in 2016 is hard. Former high-growth markets are slowing (smartphones) or declining (slate tablets). The traditional PC market continues to struggle to find its new normal. And nascent categories such as wearables, virtual reality, and augmented reality are more future promise than shipment drivers today.

For all but a few, the march toward commodity status is relentless. The result: Competition among hardware vendors to find new segments and areas of growth is becoming ever-more fierce.

Such competition was on display at the Mobile World Congress this week, where we saw several vendors known for products in one category launch new products into adjacent categories. Now, it’s important to note that increasingly the lines that distinguish one device category from another—a smartphone from a tablet, a tablet from a PC, a PC from a smartphone—will continue to blur. At IDC, we track device shipments and we need clear product categories to enable accurate counting. So this makes our job harder. But consumers and business buyers don’t care about such labels. They just want the right device for the job. As a result, the fact more vendors are playing in more categories should lead to better products for end users, which is, of course, a good thing.

Huawei Launches a Windows 10 Device

While the broader tablet market has experienced steep declines (down nearly 10% worldwide, year over year, in 2015), the detachable segment—those devices with first-party detachable keyboards—has been growing at a fast clip (albeit from a small base). In 2015, worldwide detachable shipments increased to 16.6M units, up from 7.9M in 2014. IDC expects that strong growth to continue in 2016. Now, back in 2010-2012, when Apple’s iPad and later Android tablets started gaining steam in the market, there was much discussion in the industry about tablet shipments cannibalizing PC shipments. What our research showed, however, was that very few people bought a traditional tablet back then with the specific intention of using it to replace a PC. There was plenty of usage cannibalization, with people using their tablets to do more things, which ultimately resulted in them using their PCs less. Less usage led to extended PC lifetimes. But few people swapped a tablet for a PC outright.

This time, however, it’s going to be different. Today’s detachables are increasingly capable. When Microsoft launched the first Surface products, the company proudly proclaimed them to be no-compromise tablets and PCs when, in fact, they compromised on both counts, leading to devices that weren’t particularly good tablets or PCs.

Today’s Surface Pro 4—and competing products from all the major PC vendors—are very different machines, fully capable of replacing most traditional PC users’ notebooks. As a result, consumers and companies will increasingly purchase detachable products to replace traditional PCs. Real cannibalization is happening and, as detachable growth accelerates, those unit volumes will increasingly come out of traditional notebooks.

Looking to capitalize on this opportunity at MWC, Chinese smartphone giant Huawei launched its first Windows 10 product, a detachable tablet called the MateBook. By all counts, it’s a solid first offering that leverages the company’s mobile design prowess. It features a 12-inch screen, is just 6.9mm thick, and includes a battery the company claims will run for up to 13 hours. Huawei also embedded the same fingerprint sensor from its Mate 8 smartphone into the device’s volume rocker, which should make locking and unlocking the device easy and fast. It runs on Intel Core M processors and supports an optional stylus. The company didn’t announce final pricing on the device, but it will likely range from $599 to $1699.

Naysayers might suggest that traditional PC vendors, and Microsoft itself, have little to fear from Huawei entering this market. But the company is a serious smartphone powerhouse, with strength both in China and at the worldwide level. Its design capabilities, scale, and channel partnerships make it a legitimate contender out of the gate. I fully expect other major Chinese smartphone players to enter the Windows 10 market later this year, but the industry will carefully watch Huawei’s success or failure here.

HP Launches a Smartphone

While many expected to see a Windows 10 device from Huawei at this year’s show, it’s safe to say few expected PC giant HP to launch a Windows 10 Mobile-based smartphone. Fewer still would have expected it to be so notably good. As noted in Bob O’Donnell’s column earlier this week, the X3 is a robust, 6-inch, Qualcomm Snapdragon 820-based device. But what makes it interesting is that HP has launched a product it clearly hopes will eventually cannibalize some portion of the PC market, where it is currently the second-largest player (behind Lenovo). The product’s optional desktop dock turns it into a full-fledged desktop and its Mobile Extender dock turns it into a notebook (albeit one with a phone dangling from it). As a result, HP envisions enterprises using the device to replace not just a person’s existing smartphone but their notebook and desktop, too. It’s an audacious move and one that’s going to be very tricky to pull off, but you have to give the company credit for trying.

Part of the risk here for HP is that it only controls part of the X3’s destiny, as the company is entirely dependent upon Microsoft fulfilling some key Windows 10 Mobile platform promises. Specifically, the Universal Windows Platform (UWP) has to gain traction and attract enterprise app developers. And Microsoft must deliver the promised experience within Continuum, the feature within Windows 10 Mobile that transforms the interface of the device depending upon whether the user is on a 6-inch phone screen, a 12-inch notebook screen, or a 22-inch desktop. That’s a lot of things that have to go right, only a few of which HP controls. The product is still months away from shipping, and HP hasn’t announced final pricing and channels.

Complicating matters further is the presumed existence of a future Microsoft Surface smartphone. If such a product exists, it may well do all the things HP has set out to do with the X3 with the added benefit of being more tightly integrated with the OS, since Microsoft would be making both the software and the hardware.

Yep, competition is hard.

MWC Smartphone Announcements Symbolize a Changing Market

This week has seen a slew of announcements from major smartphone manufacturers in conjunction with the global mobile industry’s annual event, Mobile World Congress, held in Barcelona, Spain. All the usual suspects, except Apple, have introduced new phones or other devices at the show, including a number of the flagship phones these vendors will sell over the coming year. You’ll find reviews aplenty elsewhere, so I won’t attempt another here. But several themes emerge from these device launches that are symbolic of changes in the smartphone market and its competitive dynamics.

Android vs. Android, not iPhone

One of the biggest changes in the competitive dynamic is that the largest Android vendors are now competing against other Android vendors more than they’re competing against the iPhone. Though the Android vs iOS and Samsung vs Apple rivalries have dominated coverage of the smartphone market over the last few years, focusing on them today misses some of the broader movements in market share. Though Google’s latest Android ad campaign continues to fight the platform wars, most of the individual vendors on the Android side are more focused – rightly so – on differentiating themselves from their Android siblings.

The simple reason is that the differences between iOS and Android are clear and have been for years now. Most people with the ability to buy either have long since made a choice and, although there are always some switchers, most of the switching goes from Android to iOS, especially in mature markets. At the same time, premium Android smartphone makers are suffering much more from the encroachment into their target markets of hitherto low-end competitors also running Android. Competition from smartphones with similar specs running the same version of Android is tough and, as such, these vendors have to demonstrate how their implementation of Android is different. Hence you see Samsung focusing on things like its always-on screen, its improved camera, water and dust resistance, and the inclusion of the Vulkan graphics APIs.

Beyond the phones themselves, several major vendors also pushed accessories, including Samsung’s Gear 360 camera, LG’s swappable Friends modules, and Sony’s Xperia Ear wireless earbuds. Each of these companies is attempting to build its own ecosystem of devices independent of Android, to try to drive user loyalty to its particular brand rather than to Android in general. Although Xiaomi’s focus at MWC was another implementation of its vision of high specs at low prices, with a Snapdragon 820-based Mi 5, it has long offered a wide range of other consumer electronics products under the Xiaomi brand.

Enterprise an increasing focus

Though these companies are all trying hard to differentiate themselves, they’ve gone through several interesting cycles in this regard. Samsung focused too much on gimmicky features a few years back, turning its annual keynotes into litanies of features and UI elements no one would ever discover or use. Somewhat chastened by its slowing growth, it then dialed back these customizations but, in the process, went from meaningless differentiation to little differentiation. Over the last couple of years, we’ve seen an increasing emphasis on premium hardware paired with more meaningful innovation in software, and that’s definitely an improvement. But many other Android vendors are arguably heading in the same direction, and that’s where Samsung’s enterprise strategy comes in.

The enterprise is one area where no other Android vendor has a serious standalone strategy, despite some efforts from a few of them several years back. Samsung’s Knox has turned into a really solid offering for enterprise customers who want to offer their employees Android devices without sacrificing security. Knox allows Samsung to overcome many of the inherent weaknesses of Android from an enterprise security perspective while adding additional layers on top to give businesses better manageability and separation of consumer and enterprise data. This year at MWC, Samsung added its Enterprise Device Program and a connected vehicle solution to the mix. The EDP builds on Samsung’s work to bypass the standard Android update process to provide faster patches for security issues, as well as guaranteeing longer life cycles for enterprise buyers of Samsung smartphones. The connected vehicle offering will certainly have a consumer-facing, service-oriented side too, but it will also be part of Samsung’s expansion into enterprise services beyond the smartphone. Here, Samsung is extending its lead in the enterprise in a way no other Android vendor can match, and this is differentiation that will stick. The other benefit is that enterprises will be willing to pay a premium for these features, helping Samsung maintain margins.

Samsung wasn’t the only smartphone vendor to target the enterprise at MWC: both Vaio and HP announced business-centric smartphones running Windows around the time of the show. However, in both cases, businesses would be wise to exercise caution about jumping on board with vendors that have shown little commitment to enterprise smartphones in the past. As Windows on mobile devices becomes ever less relevant in the broader smartphone market, it may well find refuge in enterprises looking to commit to an end-to-end Microsoft solution for their devices.

Xiaomi – disrupter turned disrupted

I’ve mentioned Xiaomi briefly already, but I’ll close with this thought: it’s intriguing to look at what’s become of Xiaomi over the last couple of years as a lesson in the trends reshaping the Android market. Xiaomi broke into the market during a brief window of opportunity when it was the only Android vendor offering iPhone-like products at much lower prices. It did very well during that period, growing rapidly off the back of a mix of truly cheap handsets and competitively priced mid-range handsets, and both Xiaomi and many observers expected that growth to continue. But the window began to close as a number of other manufacturers adopted a similar strategy, minus the flashy marketing, thereby disrupting Xiaomi much as Xiaomi had intended to disrupt others. Having originally set a target of 100 million smartphone sales in 2015, it ended the year with just 70 million, and it’s not clear things will get much better in 2016. The Mi 5 is more of the same, though now with a Qualcomm processor, but even as Xiaomi seeks to undermine Samsung and other premium Android vendors, its own strategy of cheaper but serviceable alternatives is likely to keep coming back to bite it, both in China and in other markets. The Android market is essentially eating itself, pulling down the premium vendors and depressing margins, even as Apple continues to plough its own furrow without anything like the same threat of disruption.

The End of Standardized Platforms

Historians of the technology industry have observed a predictable pattern with new computing platforms. In the early days of a computing segment, like mainframes, minis, and desktops, there was a great deal of platform fragmentation. These early computing systems often ran proprietary software and operating systems with little interoperability. Eventually, a standard emerged; in the PC era, that standard was Windows. Even though Macs stuck around, their market share stayed well below 3% for much of the build-out of the PC era. Here is a visual showing how platforms started out fragmented and then standardized around Windows.

Operating system share of devices sold annually, from early fragmentation to standardization around Windows

What you are looking at is operating system share of devices sold annually. For most of the period shown, Microsoft was the standard among computing devices sold. Fast forward to today and a slightly different picture emerges. Windows runs on PCs while Android and iOS run on mobile devices, giving us three primary operating systems occupying the bulk of computing devices sold each year.

Operating system share of devices sold annually today: Windows, Android and iOS

You can see that, as the market went from several hundred million computing devices sold each year to almost 2 billion sold annually (adding up PCs, tablets, and smartphones), the pie has gotten much larger but the landscape has also changed. While Android has the largest chunk of the pie, it does not have the 97% share Microsoft once had. The size of the pie and the global diversity of the consumer market brought with them the opportunity for several computing platforms to exist simultaneously.

If we take a step back and look at the installed base, we see an even clearer picture of the diversity in computing platforms in use today as well as the size of the market.

Installed base of active computing devices by platform

There are now well over three billion active computing devices in the world, running on five primary operating systems/computing platforms: Windows, OS X, iOS, Android, and AOSP (non-Google) Android in China. The key point is that, with the scale of mobile and the inclusion of the global consumer market, there is no longer a single standard computing platform. The question then is what will happen with things like virtual/augmented reality platforms or artificial intelligence platforms. Should we expect VR/AR or artificial intelligence to consolidate around a single platform, as happened in the enterprise PC days, or will many different platforms coexist, as we see today in consumer computing?

I tend to lean toward the latter. While VR/AR will start off segmented, with Oculus, Sony, Microsoft, Google, and eventually even Apple each running its own platform, it may well stay segmented rather than consolidate.

The global consumer smartphone market has shown us it can sustain many platforms, so whatever comes next may follow the same paradigm. As I’m observing with wearables, where the market is developing into a rich segmentation, VR/AR and artificial intelligence may do the same, adding new layers of computing platforms onto the existing ones rather than consolidating into a single one.

Why the Apple/IBM Partnership is a Bigger Deal than I Imagined

I spent the last few days in Las Vegas at IBM’s annual Interconnect customer conference where they shared with 23,000 attendees the latest and greatest technology coming from Big Blue. It is one of the largest customer events I go to and I always enjoy hearing about things like Watson, BlueMix and all the various products and services they offer their clients.

My own history with IBM dates back to the early 1980s. Right after I joined Creative Strategies, one of my first major consulting projects was with their original PC team, and I helped them do the research for what became their original distribution strategy. This project allowed me to work with the father of the IBM PC, Don Estridge, and his team, and I got to see the birth of the PC industry up close and personal.

Of course, over the last 30 years IBM has evolved dramatically. They are now almost exclusively a software and services company that provides all types of products for large enterprises around the world. Although I have not dealt with IBM a lot in the last 15 years, given their new focus and our research being more consumer related, I began to take more interest in them again when they developed a major partnership with Apple and decided to support Macs and iOS in a big way.

One of IBM’s major initiatives is called Mobile First and it pertains to creating mobile solutions for their customers. As IBM began this journey into mobile, they found many of their customers had iPhones, iPads and Macs. It became clear that, if they really wanted to make headway with a mobile-first program, they would need help from Apple. Once they approached Apple and got to see Apple’s mobile roadmap and how the iPhone and iPad were getting serious traction in the enterprise, IBM made the big decision to port all of their mobile apps to iOS in order to support their customers at all levels of need.

The result is IBM now offers over 100 iOS apps for use on iPhones and iPads in the enterprise, giving its customers very powerful mobile solutions that help them integrate mobile apps into their overall IT programs. I got to see some of the apps and they are amazingly powerful. One interesting one is for airline pilots: it lets them manage fuel, does the calculations, and allows them to adjust the plane’s fuel use automatically. Another is used by a major European airline to handle priority rebooking for premium customers who miss connections; they can be booked onto new or different flights while still in the air, approaching the first airport, so their onward connections are protected.

Ultimately, all of these mobile-first apps on iOS give IBM more latitude to meet the needs of a huge customer base and position it as a trusted vendor to a worldwide base of IT professionals. But there is another part of this deal that is equally big and, while it was announced when the partnership was launched, I don’t think most people understand why it makes the relationship with IBM so significant.

As part of the deal, IBM has become a reseller of all Apple hardware. But not only a reseller: an actual sales arm for Apple that represents its hardware and walks it into big accounts as part of a full-service solution. I talked with some of the IBM mobile team and it was clear that having this broad relationship with Apple is very strategic to their overall IT solutions, since almost all of their customers have some form of mobile solution in their enterprise mix. While they also support Android mobile devices to a degree, they have not ported their mobile-first apps to Android yet and prefer offering an Apple solution whenever possible. That means Apple is more than just a partner; it is a preferred part of IBM’s mobile solutions, and this makes the relationship an even bigger deal than many of us thought at the beginning.

One other thing that was really remarkable about Interconnect: in the opening keynote, IBM had Brian Croll, VP of Worldwide Marketing for Mac and iOS at Apple, give a presentation on Apple’s Swift programming language. It let this audience know this significant programming tool from Apple can be used to create custom software much more easily as part of any solution IBM and an IT shop are working on together. He also explained it has been made an open source project so it can be used for all types of software programming and in any type of project in the future. Apple’s presence in this keynote clearly reinforced IBM’s commitment to Apple and, in a not-too-subtle way, made it clear Apple is a serious partner of IBM.

It was made clear to me that IBM has become, quite literally, Apple’s sales force into the enterprise in a highly visible, measurable and strategic way. This partnership is highly profitable for IBM and, in turn, very profitable for Apple as well. While we may think of Apple more as a consumer company, thanks to this IBM partnership Apple is just as much an enterprise company and, with IBM’s continued help, this should help Apple grow its enterprise business dramatically in the future.

Music Industry Update

Bob O’Donnell wrote last week about the challenging economics associated with online music, and, especially, streaming services. We discussed this further on last week’s podcast and I wanted to unpack a little for subscribers some of the numbers I gather regularly on the state of the music industry and specifically the various online music services.

Paid subscriber numbers remain small

Overall subscriber numbers for music streaming services remain small in the context of the global music market. The chart below shows subscriber numbers – paid and otherwise – for some of the major music services:

Music subscriber numbers

The chart includes new data points for several of the companies, based either on public reporting (in the case of Pandora and SiriusXM) or on other public statements or third-party reporting, including new numbers for paid subscribers for Spotify and Apple Music. The overall pattern hasn’t changed a great deal over the last few months, but what is clear is that Apple Music now has the second-highest number of paid subscribers of any online streaming service, behind Spotify. It passed Deezer some time ago (Deezer itself seems to be either static or shrinking, and had to abandon its IPO plans a few months back) and both Apple Music and Spotify seem to be growing their paid subscriber numbers fairly rapidly. Apple’s growth has slowed a little over recent months, but its most recent numbers suggest a run-rate of roughly 30,000 new paid subscribers per day, which is pretty healthy. If that rate continues, it should pass 20 million paid subscribers by the end of 2016.
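As a rough sanity check on that projection, here is a minimal sketch of the arithmetic. The 30,000-per-day run-rate comes from the paragraph above; the starting base of roughly 11 million paid subscribers and the early-March starting date are my own assumptions, not figures stated here:

```python
from datetime import date

# Assumptions (not from the charts above): rough starting base and starting date
start_subscribers = 11_000_000   # assumed paid Apple Music subscribers, early March 2016
start_date = date(2016, 3, 1)
end_date = date(2016, 12, 31)
net_adds_per_day = 30_000        # run-rate cited above

days_remaining = (end_date - start_date).days
projected = start_subscribers + net_adds_per_day * days_remaining
print(f"Days remaining in 2016: {days_remaining}")
print(f"Projected paid subscribers at year end: {projected:,}")
# ~305 days * 30k/day ≈ 9.2M additions, so roughly 20 million by the end of 2016
```

Even a modest slowdown in the run-rate would leave the year-end total just shy of 20 million, which is why the projection only holds if that rate continues.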

Usage is high but skews towards free services

Of course, if you take a step back and look at active users, not just paid users, you get a very different picture, because both Spotify and Pandora have vastly higher numbers there than any other service. Pandora’s subscriber numbers are dominated by these non-paying subscribers and, of course, if you were to step back even further and look at users rather than subscribers, YouTube would dominate the whole space with its massive number of regular consumers of free online music. And herein lies the rub: it’s this ad-supported, free-to-the-user version of streaming music that has such unpleasant economics for the music industry and especially for artists, because the payouts are so low. Pandora famously benefits from unusually low royalty rates (even though those were recently adjusted upwards) and yet still loses money each quarter while paying artists very little. Spotify’s payout rate for paid streaming is considerably higher than for ad-based streaming, but it’s ad-based streaming that dominates there too, so its blended rate is pretty low. Services like Apple Music and Tidal, meanwhile, which only offer paid streaming, tend to pay out at much higher rates.
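To make the blended-rate point concrete, here is a small illustration of how a payout rate averaged across free and paid listening gets pulled toward the free tier when ad-supported streams dominate. The per-stream figures and the 80/20 split are purely hypothetical placeholders, not Spotify’s actual rates:

```python
# Hypothetical per-stream payouts (illustrative only, not actual Spotify figures)
paid_rate = 0.007    # dollars paid out per stream on the subscription tier
free_rate = 0.002    # dollars paid out per stream on the ad-supported tier

# Assume ad-supported listening accounts for the large majority of streams
free_share = 0.80
paid_share = 1 - free_share

# The blended payout is simply the share-weighted average of the two rates
blended_rate = paid_share * paid_rate + free_share * free_rate
print(f"Blended payout per stream: ${blended_rate:.4f}")
# 0.2 * 0.007 + 0.8 * 0.002 = $0.0030, far closer to the free rate than the paid one
```

The weighted average lands much nearer the ad-supported rate, which is the mechanics behind the low blended payouts described above.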

It’s telling that, in a segment at the Grammy awards last week, Academy President Neil Portnow called out streaming services for the small amounts they pay artists, but his co-host for the segment, rapper Common, mentioned subscribing to a music service as a way people could better support artists. Again, it’s the ad-supported services the music industry so dislikes, which is one reason the industry was largely happy to get behind Apple Music.

Paid streaming is making an increasing contribution

Even with all this usage of free services, paid streaming is starting to make an increasing contribution to overall revenues from streaming. The most recent figures from the RIAA, the US industry body, suggest paid subscriptions now generate revenues approaching those of ad-supported streaming and Pandora-style services combined:

RIAA numbers

If the music industry wants to improve the financials associated with online music streaming, the solution isn’t to abandon online music entirely, but rather to keep incentivizing both artists and service providers to shift more and more listening to paid platforms. That means more exclusives and windowing of new music through the paid platforms, and more work with service providers to find ways to add value to the music listening experience through subscriptions. Streaming isn’t killing the music industry; ad-supported streaming is the real problem. Convincing more users to pay a monthly subscription fee is the key to improving payouts for artists.

Can You Put A Face to Big Data?

One of the most popular buzzwords in tech is Big Data. However, trying to get a straight answer as to what Big Data actually is can be difficult. In fact, as I looked into this more deeply, I found at least 20 different definitions of what people believe Big Data means.

The fundamental concept of Big Data is that all types of computing devices — computers, smartphones, cars, fitness trackers, bar code scanners and even your TV and other IoT devices — are creating data and, in most cases, sending that data to the cloud.

Once it is in the cloud, data is stored and collated. Using various analytical tools, companies or individuals can mine that data to get answers to all types of questions or learn important things through statistical analysis. Using Big Data, one could study people’s habits or look at global health information in search of patterns that could help create new drugs or treatments in the medical world. It can be used to find new fields for drilling oil and, in one example that impacts all of us, it gives advertisers a glimpse into what people are thinking and what they want in order to create better targeted ads for their clients or customers.

On the surface, however, Big Data is all about numbers and number crunching. Looked at in these terms, Big Data seems cold, calculating and highly impersonal. A friend of mine, Rick Smolan, who is considered one of the great photographers of our time, looked at this idea of Big Data about five years ago and wondered not only what it means, but how it looks in terms of people creating and using it in real life. The result of this quest was a coffee table book entitled “The Human Face of Big Data”. Rick and a team of photographers, researchers and bloggers went around the world to photograph people using technology and put a face to this idea of Big Data.

Ted Anthony of the Associated Press defines what this book is about:

“…an enormous volume… that chronicles, through a splash of photos and eye-opening essays and graphics, the rise of the information society…. a curious, wonderful beast — a solid slab that captures a virtual universe… This is one of those rare animals that captures its era in the most distinct of ways. It’s the kind of thing you’d put in a time capsule for your children today to show them, long after you’re gone, what the world was like at the beginning of their lives.”

When Rick sent me a copy of the book three years ago, it was a real eye opener for me and I suspect for anyone who reads it since it demystified the idea of Big Data and put a human face to it.

I recently found out Rick and his team were not content with covering this topic in book form alone. Last week, I was invited to the West Coast premiere of a new movie on the topic, directed by his brother Sandy Smolan and executive produced by Rick.

The movie is called “The Human Face of Big Data: The Promise and Perils of Growing a Planetary Nervous System”. The hour-long documentary premieres nationally on PBS on Wednesday, February 24, 2016, at 10:00 p.m. ET (check local listings).

Narrated by actor Joel McHale, the award-winning film features compelling human stories, captivating visuals and in-depth interviews with dozens of pioneering scientists, entrepreneurs, futurists and experts to illustrate powerful new data-driven tools, which have the potential to address some of humanity’s biggest challenges, including health, hunger, pollution, security and disaster response.

Some interesting tidbits from the pre-movie briefing I had as well as from the movie itself:

“The average person today processes more data in a single day than a person in the 1500s did in an entire lifetime.” Mick Greenwood

“Big Data is truly revolutionary because it fundamentally changes mankind’s relationship with information.” Michael S. Malone

“We’ve reached a tipping point in history: today more data is being manufactured by machines (servers, cell phones, GPS-enabled cars) than by people.” Esther Dyson

“From the Dawn of civilization until 2003, humankind generated five exabytes of data. Now we produce five exabytes every two days. And the pace is accelerating.” Eric Schmidt

“As we begin to distribute trillions of connected sensors around the planet, virtually every animate and inanimate object on earth will be generating and transmitting data, including our homes, our cars, our natural and man-made environment and, yes, even our bodies.” Anthony D. Williams

This documentary looks at how people all over the world are using technology and, in turn, creating, collecting and communicating data that, in most cases, goes to the cloud and can be used for all types of purposes. The movie itself is very positive about the potential impact of Big Data on us, but it is also realistic and shows the dark side as well, since Big Data can be used by hackers and criminals against people and society.

I see this documentary as enlightening for anyone who watches it, since it succeeds in defining Big Data: what it is and, more importantly, how it can and will impact mankind in the future. It also makes the concept of Big Data more personal, as it puts a face on it and makes us realize that, while data itself is just numbers in computing code, it is people who create that data and who are really at the heart of it.