Should Apple Build a Car?

As your mother or other caregiver likely told you as a child, just because you can do something, doesn’t mean you necessarily should.

So, given last week’s news that Apple has obtained a permit to test drive three autonomous cars on public streets and highways in California, the existential question that now faces the company’s Project Titan car effort is, should they build it?

Of course, the answer is very dependent on what “it” turns out to be. There’s been rampant speculation on what Apple’s automotive aspirations actually are, with several commentaries suggesting that those plans have morphed quite a bit over the last few years, and are now very different (and perhaps more modest) than they originally were.

While some Apple fans are still holding out hope for a fully-designed Apple car, complete with unique exterior and interior physical design, a (likely) electric drivetrain, and a complete suite of innovative software-driven capabilities—everything from autonomous and assisted driving features, the in-vehicle infotainment (IVI) system, and more—other observers are a bit less enthusiastic. In fact, the more pragmatic view of the company creating autonomous driving software for existing cars—especially given the news on their public test driving effort—has been getting much more attention recently.

Regardless of what the specific elements of the automotive project turn out to be, there remains the philosophical question of whether or not this is a good thing for Apple to do. On the one hand, there are quite a few major tech players who are trying their hands at autonomous driving and connected car-related developments. In fact, many industry participants and observers see it as a critical frontier in the overall development and evolution of the tech industry. From that perspective, it certainly makes sense for Apple to, at the very least, explore what’s possible, and to make sure that some of its key competitors can’t leapfrog them in important new consumer technologies.

In addition, this could be an important new business opportunity for the company, particularly critical now that many of its core products for the last decade have either started to slow or are on the cusp of hitting peak shipment levels. Bottom line, Apple could really use a completely different kind of hardware hit.

The prospect is particularly alluring because some research conducted by TECHnalysis Research last fall shows that there is actually some surprisingly large pent-up demand (in theory at least) for an Apple-branded car. In fact, when asked about the theoretical possibility of buying just such an automobile, 12% of the 1,000-person sample said they would “definitely” buy an Apple car. (Note that 11% said they would definitely buy a Google-branded car.) Obviously, until such a beast becomes a reality, this is a completely speculative exercise, but remember that Tesla currently has a tiny fraction of one percent of car sales in the US.

Look at the possibility of an Apple car from another perspective, however, and a number of serious questions quickly come to mind. First is the fact that it’s really hard to build and sell a complete car if you’re not in the auto industry. From component and supplier relationships, to dealer networks, through government-regulated safety requirements, completely different manufacturing processes, and significantly different business and profitability models, the car business is not an easy one to successfully enter at a reasonable scale. Sure, there’s the possibility of finding the auto equivalent of an ODM (Original Design Manufacturer) to help with many of these steps, but there’s no Foxconn equivalent for cars in terms of volume capacity. At best, production levels would have to be very modest for an ODM-built Apple car, which doesn’t seem like an Apple thing to do.

Speaking of which, the very public nature of the auto business, with its need to reveal product plans and submit products for testing well in advance of their release, also runs counter to typical Apple philosophy. Similarly, while creating software solutions for existing car makers is technically intriguing, the idea of Apple merely supplying a component on products branded by someone else seems incredibly unlikely. Plus, most car vendors are eager to maintain their brand throughout the in-car experience, and giving up the key software interfaces to a “supplier” isn’t attractive to them either.

So, then, if it doesn’t make sense or seem feasible to offer just a portion of an automotive experience and if doing a complete branded car seems out of reach, what other options are left? (And let’s be honest—in an ideal situation, autonomous driving capabilities should be completely invisible to the driver, so what’s the brand value for offering that?)

Theoretically, Apple could come up with some type of co-branded partnership arrangement with a willing major car maker, but again, does that seem like something Steve would do?

There’s no doubt Apple has the technical ability and financial wherewithal to pull off an Apple car if they really wanted to, but the practical challenges it faces suggest it’s probably not their best option. Only time will tell.

Reviewing the Tech Reviewers

When I read that Walt Mossberg would be retiring, it reminded me of how much has changed in the way consumer technology products have been reviewed over the years. I write this as one who has been on both sides: developing products that were ultimately reviewed, and writing my own product review column for twelve years for the now-defunct San Diego Transcript.

In the late eighties, as technology products began to appeal to non-technical consumers, the only sources of buying advice were the numerous technology magazines. The magazines did a great job of evaluating the technical details of computers, printers, and other complex devices, with some periodicals even creating their own test labs.

But, for the most part, the reviews were written by those who judged a product by how many features it contained. The reviewers appreciated technological wizardry above all else. So, the articles were filled with graphs and tables of checkmarks comparing the plethora of features each product had, usually awarding the editor’s choice to the product with the most checkmarks. It was assumed customers would find the products as easy to use as the reviewers did.

For the non-technical reader, getting through each article could be a challenge with the new terminology and abbreviations that were used by the industry. I remember trying to keep straight the different units of memory, data speed, and processor speeds.

As a product designer, it was frustrating to see a product reviewed and rated based on the number of features it had, even when many of those features would never be used. I saw how the magazines had influenced the design of new products. Design engineers and marketing people would tend to pile on feature after feature without much thought to usability. That made products take longer to design, harder to use, and less reliable.

In 1991, Walt Mossberg created a much different approach to product reviews that not only made it easier to assess a new product but also changed how products would be designed.

He would look at products, not based on the number of features, but on their practicality and usability. He was one of the first to understand that these products would find a much larger audience among those who might not be technically inclined, and that they needed to be assessed differently. He took a position as an advocate for the user and found a receptive audience by reminding readers not to blame themselves when a product was hard to use, because they were not alone.

When I was writing my book, “From Concept to Consumer”, I asked Walt to describe the attributes of what he considered to be an excellent product. He said, “It is a product so useful in function and clear in its operation that its user, within days or weeks, wonders how she ever got along without it. This is not the same as having long lists of features, specs, speeds and feeds. In fact, my rule is that, if a product claims to have, say, 100 features, but an average person can only locate and use 11 of them in the first hour, then it has 11 features.”

That was the basis for his judging products. Because of his ability to understand products from the position of the consumer, his observations were much more relevant and useful. From his post at the Wall Street Journal, his influence was widely felt. Companies knew his reviews could make or break a product or even a company.

Walt was also instrumental in advocating for the consumer beyond just products. He saw how cellular providers were restricting product advancements and compared them to Soviet ministries.

Walt, along with David Pogue of the New York Times, the late Steve Wildstrom of BusinessWeek (and a co-founder of Tech.pinions), and Ed Baig of USA Today, was among the first to review major new products. The four were courted by big-name companies such as Apple, Samsung, and Sony so that their reviews would appear at nearly the same time. With their columns published in each Thursday’s edition of their respective publications, the marketing people, engineers, and company executives would frantically wait for the first edition to see how their product fared, much like the cast of a Broadway show reading its reviews the morning after opening night.

On a personal note, I always found Walt, Steve, and Ed to be thoughtful, insightful, and fair-minded. While one might disagree with their product assessments, they were always respectful and considerate. If they encountered a problem with a product, they’d go back to the company for comment, but they reported their complete experiences without omissions. They took their job and the impact of what they wrote with great responsibility. And they would not waffle; they gave their opinions and backed them up with facts. David Pogue still does reviews, though with a more entertainment-oriented focus.

In recent years, as gadget blogs replaced newspapers as our source of new product news, the number of reviews has multiplied, although the quality seems to have fallen. Many are done by those with limited product experience and often reflect their authors’ own biases without considering the position of the consumer. I’m often appalled at how inaccurate they are about products and technology I know well.

There are good sites with in-depth reviews, such as Digital Photography Review, PC Week, Tom’s Hardware, The Gadgeteer, iLounge, The Verge, the Wirecutter (owned by the New York Times), and many others. Many of these sites now derive revenue from their reviews by linking the products to Amazon to receive referral fees.

So, while we have more sources, it will be hard to replace the wisdom of a few good writers who avoid parroting press releases and take a very thoughtful approach to assessing new products, based on their years of experience.

Podcast: Huawei Analyst Summit, LeEco, Chinese Vendors

In this week’s Tech.pinions podcast, Carolina Milanesi, Jan Dawson, and Bob O’Donnell discuss Huawei’s recent analyst summit event, difficulties facing LeEco, and the overall opportunities and challenges for Chinese vendors to break into the US and worldwide markets in a meaningful way.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Apple’s Mac Pro Rethink is Good, but will It Be Good Enough?

Apple’s recent acknowledgment that it is planning a design reset on its Mac Pro desktop was a much-needed shot in the arm for longtime Apple loyalists. Make no mistake, the company’s decision to go back to the drawing board on its professional desktop has less to do with the bottom line and more to do with pleasing its base, full stop. That said, there is still real money to be made in the desktop market, especially at the high end. Moreover, there are some interesting personal computing developments happening in this space. Which leads me to wonder: Is Apple’s yet-to-be-revealed Mac Pro rethink going to be ambitious enough?

World Wide Desktop Market Numbers
It’s widely understood the PC market has been in long-term decline. The combined desktop plus notebook PC market peaked in 2011 when the industry shipped 331 million units worldwide. However, desktop shipments started to decline years earlier, peaking in 2007 at 161.1 million units (versus 105.3 million notebooks). By 2016, the total PC market had slipped to 246.6 million units, with desktops at 100.7 million. While those numbers may seem bleak, the reality is the world has come to the realization it still needs PCs and the market is stabilizing (as noted in IDC’s preliminary estimates for the first quarter of 2017). IDC has forecasted very modest growth for notebooks in 2017 and beyond, while desktop contraction will slow to low single digits over the course of the next two years.

Declining shipments don’t tell the whole story, though. As the market has contracted, the biggest market players have moved aggressively to grab more share. Lenovo, Dell, and HP have been particularly vocal about this, and for many quarters Apple’s Tim Cook used the company’s quarterly earnings calls to note that Apple’s Mac shipments were growing while the industry overall declined. Since the market’s peak in 2011, market share consolidation has been swift. In that year, the top five vendors accounted for 49% of total shipments. By 2016, the top five—Lenovo, HP, Dell, ASUS, and Apple—accounted for 72.4% of total shipments. The shift has been less drastic in desktops, where smaller players can still compete, to some degree. Still, in 2011 the top five vendors accounted for 46.3% of shipments and, by 2016, the top five—Lenovo, HP Inc, Dell, Acer, and Apple—made up 59.1% of shipments.

In 2016, Apple shipped about 3.3 million desktops worldwide, for a 3.3% share and fifth place in terms of total shipments (Lenovo was number one with 19.4 million units). However, as is often the case, Apple’s average selling price was notably higher than anyone else’s in the market. Apple’s ASP was $1,384, versus $505 for Lenovo, $554 for HP Inc, $557 for Dell, and $440 for Acer. All told, Apple’s desktop business generated revenues of $4.5 billion. That’s roughly 9% of the entire desktop market’s revenues on 3.3% of shipments. Small compared to iPhone revenues, but a serious business nonetheless.
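For those who like to check the math, the revenue and share claims follow directly from the unit and ASP figures above. Here is a quick back-of-the-envelope sketch using only the numbers cited in this piece (rounded as the article rounds them):

```python
# Back-of-the-envelope check of the 2016 worldwide desktop figures
# cited above: unit shipments (in millions) and average selling price.
total_units_m = 100.7  # total 2016 desktop shipments, millions

apple_units_m = 3.3    # Apple desktop shipments, millions
apple_asp = 1384       # Apple average selling price, USD

# Revenue in billions of dollars: millions of units * dollars / 1000
apple_revenue_b = apple_units_m * apple_asp / 1000
apple_unit_share = apple_units_m / total_units_m

# ~4.57, which the article rounds to "roughly $4.5 billion"
print(f"Apple desktop revenue: ${apple_revenue_b:.2f}B")
print(f"Apple unit share: {apple_unit_share:.1%}")
```

The same arithmetic applied to the other vendors’ ASPs shows why Apple can take roughly 9% of desktop revenue on only 3.3% of units: its selling price is two and a half to three times the others’.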

Forward-Looking Designs
So we’ve established Apple is embarking on a re-do of the Mac Pro primarily to appease its most loyal customers, not for the money. But the money is pretty good, too. I’d argue there is at least one more reason Apple needs to focus its attention on the desktop. It’s an area where the company risks falling behind: design cachet.

Back in 2014, I was at the event when Apple launched the first Retina 5K iMac. It was gorgeous, expensive, and total overkill for my needs. But I wanted it just the same. At a time when the industry was shipping 5K monitors for $2,500 (despite a dearth of PCs fully capable of supporting them), Apple jumped the line by integrating the necessary technology to power that many pixels right in the box. The intention was never to sell tens of millions of them but to make a clear statement about Apple’s ability to create a desktop people aspired to own. The starting price was $2,499, shipping into a market where the average selling price of a tower desktop was $446 and the average price of an all-in-one PC was $890.

One of the tidbits of information to come out of Apple’s press meeting on the Mac Pro was that the company isn’t looking at touch technology for the Mac because it says its clients aren’t asking for it. But I have to wonder how many of Apple’s pro customers were asking for a 5K iMac back in 2014 (or how many were asking for a Touch Bar in 2016). Just as important, I would argue that, by refusing to explore touch on the desktop, Apple is automatically ceding the space to competitors such as Dell, which recently announced its long-gestating Smart Desk, and Microsoft, which is now shipping its Surface Studio (starting price: $2,999). Neither of these products will ship in large volumes, but I suspect a small but vocal group of professionals will find them indispensable, while the rest of us will find them aspirational. And neither supports touch as a mere technical gimmick; both use it to enable new usage models. I get that nobody wants to reach up and tap a vertical desktop screen with their finger. But there are plenty of interesting things you can do with a finger, a pen, or a dial on a screen that shifts to the horizontal plane.

Apple obviously takes design very seriously. So seriously, in fact, it refuses to rush the new design of its next Mac Pro. The company doesn’t expect to ship the new Mac Pro this year (although it will ship updated versions of the existing Mac Pro as well as more pro-focused iMacs). Such attention to detail is part of what draws so many Mac fans to the brand, but it also makes it very hard for the company to respond quickly to shifts in the market. If Apple really does expect one or more of its Mac products to be vital pro-level tools well into the future, the company must consider what future pro desktop owners will need before they know they need it. As such, I can’t help but think Apple shouldn’t disqualify touch from the desktop equation entirely.

What I’m Looking For at Facebook’s F8

It’s that time of year again when the big developer events start. With Twitter having ended its developer events, we’re left with Facebook, Microsoft, Google, and Apple as the big four consumer-centric developer conferences. Next week, the season kicks off with Facebook’s F8. Facebook’s past events have been a fascinating mix of short-term developer-centric announcements and bigger pronouncements on Facebook’s future roadmap, so they’re generally pretty interesting affairs. Making predictions about these things is always a risky business so I’m going to share what I’m looking for next week, rather than making firm predictions about what we’ll actually see.

An Update on Bot Strategy

One of the big themes out of both Microsoft’s and Facebook’s developer events last year was bots, which went through a hype cycle right around the time of their events. A year later, we’ve seen interesting developments on both sides, with Microsoft mostly focusing on enabling third parties but also dabbling in several of its own AI chatbots, including in emerging markets like India and China. By contrast, Facebook has largely had to walk back its bot strategy from last year, conceding (in a series of moves) that its original conception was off the mark.

In September of last year, Facebook adjusted its bot strategy for the first time, with David Marcus announcing Facebook was investing in new capabilities and doing more to help developers create successful experiences with bots. Part of that announcement was an acknowledgement that the original vision for bots as broad-based replacements for apps was misguided and it needed to be narrower. Since then, we’ve seen the M assistant (which Facebook first envisaged as a sort of assistant bot within Messenger) become something narrower: a helpful tool that pipes up within conversations, becoming less like a bot in the process. In March, we saw Facebook do more to focus on menus and other user interface elements, again making their bots less bot-like and more app-like (and actually more like the messaging-based apps so popular in Asia).

I would expect the revised bot strategy to be a theme at Facebook, with announcements around group chatbots so developers can integrate their bots into conversations between real people along the same lines as M. But we also need to see clarity on the role Facebook now sees bots playing in the broader landscape of the web, Facebook pages, interaction with humans through Messenger, and dedicated apps. We need to see more realism about what bots can really be good at and which developers should be focusing on them. That focus was missing in last year’s more grandiose vision.

More Creative Ad Products

As a user, I don’t necessarily want to see more ads in more places on Facebook but, as it faces saturating ad load in the core product, Facebook needs to be more creative about where ads can go next without destroying the user experience. One area where I expect we’ll see announcements is its Camera Effects feature, which is fairly limited for now but could easily be opened up to developers along the lines of Snapchat’s Sponsored Filters. That should be interesting and will provide new ad opportunities, but I’d love to see more creative ideas from Facebook that don’t feel like they’re just being cloned from Snapchat.

Social and VR Coming Together

When Facebook bought Oculus, it acquired a largely gaming-focused VR outfit and it has continued down that path with the products released since. But the vision it outlined at the time saw VR as not just a gaming platform but the next user interface for much more. At last year’s F8, we saw some proofs of concept around social experiences in VR, through which people could virtually visit a faraway place with a friend in virtual reality. That starts to get at a vision for VR which is more aligned with Facebook’s core value proposition of connecting people. I’d love to see them iterate on that vision and start to productize some social VR experiences this year.

Monetizing Messenger and WhatsApp

Again, given the ad load issue in the core Facebook product, the company needs to be leaning more heavily on its other apps to drive growth. While Instagram has already become a massive revenue generator (though the company continues to keep the numbers to itself), the same can’t be said for Messenger and WhatsApp. It’s increasingly clear that ads are the business model Facebook will pursue to monetize Messenger (and we’ve seen the first examples of that) but it’s less clear how it will monetize WhatsApp, whose founder has always pooh-poohed ads as a business model.

The vision Facebook outlined for monetizing these messaging platforms last year was around businesses connecting with people, which obviously aligns, to some extent, with the bot strategy. So far, the ad products have been a little bland and verging on the invasive. Messaging is a uniquely personal and private space relative to the noisy commons of Facebook’s News Feed, so it needs to be careful in how it approaches advertising and monetization on these two platforms. With both now at over 1.2 billion users (twice the last audience figure we have for Instagram), they could be generating significant revenue at this point but aren’t. It would be great to hear more about how that will change without ruining the user experience.

Hardware Innovation

The last area I’m really curious about is hardware, where Facebook now has a dedicated team under former Googler Regina Dugan. Her focus – and that of the team she leads – has always been cutting edge innovation of the kind that doesn’t necessarily make it into products. I expect that’s where we’ll continue to see the bulk of the hardware effort at Facebook heading. But I think we could easily see some examples of those projects at F8 and I’d love to see both why Facebook is pursuing some of these areas which seem disconnected from its core business and how it will make those efforts pay off over time. We could see licensing models or partnerships and, in some cases, we may even see Facebook go beyond its current small-scale dabbling in hardware through Oculus into something more mainstream and scalable.

When You need to Explain to Your Kid the Internet Is not Safe

Last week, it was time for me to explain to my child that the internet isn’t a safe place. It wasn’t pretty. My nine-year-old daughter has been going online on a parentally controlled browser and to play multiplayer Minecraft with her friends, but nothing else — or so I thought. Last week, she mentioned playing with these “friends” on an app that lets you create a family of dogs. I remained calm as I explained we had discussed this issue before and that she was not allowed to go online because people on the internet are not always who they seem to be and might ask her personal questions. With a somewhat annoyed tone, she replied that she is not naïve and that, when “this boy” asked her how old she was and where she lived, she did not reply. That is when I freaked out. I took a deep breath and started explaining.

Just because You are not Face to Face with Someone doesn’t make it Safer

Not being physically in the same room or playground might mean you do not get punched or pushed or mocked, but it does not mean the people you meet online cannot hurt you. Just because you do not see them does not mean they are not real. That was the easy part.

“But mom, they are just kids like me!” my heartbroken daughter whispered. That was when the hard part started. Explaining that people online can pretend to be kids and might be interested in her the way grownups are interested in each other was the hardest thing I ever had to explain. Much harder than explaining where babies come from. Within a couple of minutes, my daughter went from my sweet little girl to the potential victim of an online predator. I know I might be overreacting. I know there are more genuine kids online than there are predators, but there are also the numbers. According to the US Department of Justice, approximately 1 in 7 (13%) of youth internet users have received unwanted sexual solicitations. One in 25 youths received an online sexual solicitation in which the solicitor tried to make offline contact. So, forgive me, but she’s my baby and I am not taking any chances. As much as I think she is too young to fully understand what I am talking about, it is my duty as a parent not to scare her but to make her aware of the risks. This is no different from telling your children they should not talk to strangers the first time they are somewhere without you at their side.

Technology Alone is not Enough

There are many risks our children are subjected to when going online. Some involve their information and data, and others involve them as a person. In a way, I look at the former as security risks and the latter as safety risks. While tools can help with many security risks, in my view only education and awareness will help with safety risks. A key part of this education is helping them understand the internet is not just magic. There is a real human behind everything that happens online, whether that presence is direct or through software programmed by a person. Educating, not scolding, so my daughter feels she can come to me and ask questions is important and, of course, challenging.

The good old days of loading an antivirus app and restricting access are over. Phones and tablets have changed that dramatically and, although parental control tools for these devices have been improving over the past few years, they concentrate on the web rather than on apps, which makes the whole “being safe” task more complex. The small screens on these devices also offer parents less visibility compared to a console game played on the TV in the family room. This means we cannot just “fix” it with technology. We need to take an interest. Whether we monitor the apps our kids use or vet every app before they use it, it is up to us to keep up with the whole process.

I dropped the ball. My daughter knows she needs to ask permission before purchasing any app. When that happens, we go through the reviews together to evaluate how good they are and read the description to better understand what is behind the catchy name. Yet, I never thought about vetting the free apps she downloads as we have set up an age filter for the apps she can access. It goes without saying I do now. Clearly, the age filter helps with content appropriateness but not necessarily kids’ safety.

Monkey See, Monkey Do

Fortunately, I do not have to worry about social media yet. At nine, my offspring does not have a social media presence other than what I post about her. And this is, of course, a whole different problem. Because she sees me sharing what we do on Facebook and “talking” to people I do not necessarily know on Twitter, she might think it is OK for her to do the same. As in real life, kids do pick up social cues from us without necessarily having all the information to make an informed decision. So, for some behaviors, leading by example will suffice (“Don’t text and drive”). For others, we will need to educate once again.

I now ask permission before sharing something about her, including writing this article. I explain that, very much like what you say in real life, what you post has implications. I explain why I post, what I post and, more importantly, I explain why I do not post certain things — well aware not all my decisions are actually foolproof.

Learn so You can Teach

Being a parent in a digital world is not easy but one thing is certain — it will be a lot easier if we, as parents, are informed and up to date with what children do. Our kids are growing up in a world full of screens, where social media rules. As parents, we need to make sure we are a step ahead when it comes to technology. If we think today is scary, we should try to imagine what it will be like when our kids live in a VR world we do not have access to. While we can ask content providers and app store owners to be more transparent and accountable, the buck stops with us.

Little Data Analytics

For years, the mantra in the world of business software and enterprise IT has been “data is the new gold.” The idea was that companies of nearly every shape and size across every industry imaginable were essentially sitting on top of buried treasure that was just waiting to be tapped into. All they needed to do was to dig into the correct vein of their business data trove, and they would be able to unleash valuable insights that could unlock hidden business opportunities, new sources of revenue, better efficiencies, and much more.

Big software companies like IBM, Oracle, SAP, and many more all touted these visions of data grandeur and turned the concept of big data analytics, or just big data, into everyday business nomenclature.

Even now, analytics is playing an important role in the Internet of Things (IoT), on both the commercial and industrial side as well as on the consumer side. On the industrial side, companies are working to mine various data streams for insights into how to improve their processes, while consumer-focused analytics show up in things like health and fitness data linked to wearables, and will soon be a part of assisted and autonomous driving systems in our cars.

Of course, the everyday reality of these grand ideas hasn’t always lived up to the hype. While there certainly have been many great success stories of companies reducing their costs or figuring out new business models, there are probably an equal (though unreported) number of companies that tried to find the gold in their data—and spent a lot of money doing so—but came up relatively empty.

The truth is, analytics is hard, and there’s no guarantee that analyzing huge chunks of data is going to translate into meaningful insights. Challenges may arise from applying the wrong tools to a given job, not analyzing the right data, or not even really knowing exactly what to look for in the first place. Regardless, it’s becoming clear to many organizations that a decade or more into the “big data” revolution, not everyone is hitting it rich.

Part of the problem is that some of the efforts are simply too big—at several different levels. Sometimes the goals are too grandiose, sometimes the datasets are too large, and sometimes the valuable insights are buried beneath a mound of numbers or other data that just really isn’t that useful. Implicit in the phrase “big data,” as well as the concept of data as gold, is that more is better. But in the case of analytics, a legitimate question is worth considering: Is more really better?

In the world of IoT, for example, many organizations are realizing that doing what I call “little data analytics” is actually much more useful. Instead of trying to mine through large datasets, these organizations are focusing their efforts on a simple stream of sensor-based data or other straightforward data collection work. For the untold number of situations across a range of industries where these kinds of efforts haven’t been done before, the results can be surprisingly useful. In some instances, these projects create nothing more than a single insight into a given process that companies can quickly act on—a “one and done” type of effort—but ongoing monitoring of these processes can ensure the adjusted processes continue to run efficiently.

Of course, it’s easy to understand why nobody really wants to talk about little data. It’s not exactly a sexy, attention-grabbing topic, and working with it requires much less sophisticated tools—think an Excel spreadsheet (or the equivalent) on a PC, for example. The analytical insights from these “little data” efforts are also likely to be relatively simple. However, that doesn’t mean they are any less practical or valuable to an organization. In fact, building up a collection of these little data analytics could prove to be exactly what many organizations need. Plus, they’re the kind of results that can help justify the expenses necessary for companies to start investing in IoT efforts.
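To make the contrast concrete, here is a sketch of what a “little data” effort can look like in practice. Everything in it is hypothetical, invented purely for illustration (the sensor, the readings, and the thresholds); the point is that the tooling really can be this simple:

```python
# A minimal "little data analytics" sketch: scan a single sensor stream
# for readings outside an expected operating range. The sensor values and
# thresholds below are invented purely for illustration.

def find_anomalies(readings, low, high):
    """Return (index, value) pairs for readings outside [low, high]."""
    return [(i, r) for i, r in enumerate(readings) if r < low or r > high]

# Hourly temperature readings (in Celsius) from one machine sensor.
temps = [71.2, 70.8, 71.5, 78.9, 71.1, 70.9, 79.4, 71.3]

for hour, value in find_anomalies(temps, low=68.0, high=75.0):
    print(f"hour {hour}: {value} C is outside the expected range")
```

A handful of lines like these, run against a data stream nobody had ever looked at before, is often where the first actionable insight comes from.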

To be fair, not all applications are really suited for little data analytics. Monitoring the real-time performance of a jet engine or even a moving car involves a staggering amount of data that’s going to continue to require the most advanced computing and big data analytics tools available.

But to get more real-world traction for IoT-based efforts, companies may want to change their approach to data analytics efforts and start thinking small.

Computing and Eliminating Complexity

As I look at the last thirty years of the technology industry, a few important observations stand out. Allow me to share some visuals from a presentation I gave a few years ago at the Postmodern Computing Summit.

The first significant observation is that computers are getting smaller. The advancements being made in microprocessors are allowing us to take supercomputers from form factors that once fit in a closet to ones that now fit in our pockets and that, in the future, will be worn on our bodies.

The second observation is what that computational curve of processing evolution enabled. Each step function in computational power brought with it a step function in ease of use and eliminated layers of complexity which existed previously.

In software, we went from text-based user interfaces to more visual ones and, each step of the way, computers were embraced by more and more people. Each step function enabled new step functions in scale, as easier user interfaces, smaller and more personal form factors, and lower prices enabled what I call “Computing’s S-curve.” This curve, and perhaps all the step functions I mentioned, culminated with the smartphone, which has brought computing to people who have never owned computers before. Now, roughly 2.5 billion people have pocket computers.

We remain on a journey to connect the unconnected. We still have roughly three billion people on the planet who have yet to get a smartphone. Many of those people can and will benefit from the value of the internet and instant communication. We still need to make cellular plans more affordable for those making less than $5 a day. For this same group, we need to solve battery problems so they don’t have to charge their phones every day, because charging costs money they need for other things, like food. Another key development needed to connect the unconnected is language support. This is where I think we are on the cusp of another elimination of complexity in computer user interfaces as we embrace voice-based UIs.

The voice UI will play an important role in making computers even easier to use than they are today. While the screen is not going away, I do believe voice will become a key way to interact with our smart devices and get more value from them. In rural parts of the world where local languages are not yet supported (or where people may not be able to read even when they are), users will simply be able to speak to their devices. Advances in machine learning will likely help us train machines to learn new languages which, in turn, will hopefully help our smart devices support the hundreds of languages and dialects not yet supported.

Watch kids who can’t read yet use Siri or an Amazon Echo to search the internet, play videos, and more. Just as touch-based interfaces made it possible for kids to engage with computers, voice will open up even more possibilities and depth in their computing experience.

What’s interesting is that voice will open up new computing possibilities for everyone, even people with a strong grasp of computers today. I’d posit that most normal humans, as opposed to hardcore techies or early adopters, do not get the most out of the computers they have on their laps or in their pockets. It sounds crazy to say, but even our smartphones still overserve the basic needs of billions of people. The question in my mind is, how can we get these people doing more with their smart devices? I believe voice-based interfaces will play a key role in that experience.

Imagine being able to use your voice while looking at a screen to edit video, photos, create visual representations of data and, more importantly, have the computer engage back with you to help you learn, discover, and do more.

The next step function of computing, which will eliminate another layer of complexity, will come as we build computers that can see and hear. We are in the early stages of this right now as the machine learning era is taking off. That will lead us to have some of the most powerful and interactive computers with us at all times as nearly everyone on the planet will have an incredibly smart personal assistant.

Podcast: Apple Mac Pro, iPad as PC, Surface J.D. Power

In this week’s Tech.pinions podcast Carolina Milanesi, Ben Bajarin and Bob O’Donnell discuss Apple’s recent announcements about future iterations of the Mac Pro, the potential (or not) of an iPad as a PC, and J.D. Power’s satisfaction ratings putting Microsoft’s Surface above the iPad.

If you happen to use a podcast aggregator or want to add it to iTunes manually, the feed for our podcast is: techpinions.com/feed/podcast

Comcast’s Wireless Service Lacks Compelling Reason to Buy

Following years of discussion, hints, and speculation, Comcast finally announced its MVNO wireless service, branded Xfinity Mobile. In its current iteration, I cannot see it being an overwhelming success. There’s no particularly compelling reason for anyone to switch to Xfinity Mobile: it’s only slightly cheaper than the competition and there aren’t any features that offer a distinct competitive advantage. Ironically, perhaps the best reason to consider Comcast is if you want to put your family on a “wireless diet”, by taking advantage of the zero charge per access line and buying a limited amount of wireless data.

So here’s the Xfinity service in a nutshell:
• You have to be a Comcast customer (there are about 29 million Comcast households, representing about 130 million potential ‘lines’)
• You buy a phone from Comcast (no BYOD at launch), and sign up for Unlimited or by the GB
• There is no ‘per line access charge’. Voice and text are free and data is $65 per line for Unlimited or $12 per GB. For X1 customers spending more than $150/month, the Unlimited price is $45 per line
• When using the Xfinity Mobile service, the customer is signed into other Xfinity apps, such as home entertainment and Xfinity home – although that’s available to any Xfinity customer
• It’s a digital-centric service in terms of buying the phone, paying the bill, and contacting customer care

The Xfinity service is a bit less expensive than the competition but not sufficiently so to compel one to switch, especially when the purchase of a new device is required. It’s hard for me to see how a ‘family’ will spend $500-700 per device to switch to a service that saves maybe $20 per month over AT&T or Verizon. And, as long as Comcast needs Verizon for the cellular service, there’s little likelihood Comcast will further discount its service, since it is paying Verizon $6-8 per GB for data on a wholesale basis. In fact, since Comcast allows up to 20 GB of full speed data, there’s the possibility the company could be underwater in some scenarios. Although, in reality, average usage is ~4 GB per line and Comcast will be pushing subs onto Wi-Fi.
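That underwater scenario is simple arithmetic. Here is a back-of-the-envelope sketch using the figures cited above ($65 per line for Unlimited, a 20 GB full-speed allowance, $6-8 per GB wholesale, and roughly 4 GB of average usage); the wholesale rates are reported estimates, not confirmed numbers:

```python
# Rough per-line monthly economics of an Xfinity Mobile Unlimited line,
# using the figures cited in the article. Wholesale rates are estimates.

UNLIMITED_PRICE = 65.0  # dollars per line per month

def gross_margin(gb_used, wholesale_rate):
    """Comcast's gross margin on one Unlimited line for the month."""
    return UNLIMITED_PRICE - gb_used * wholesale_rate

# An average user (~4 GB) leaves room even at the high wholesale rate.
print(gross_margin(4, 8.0))   # 65 - 32 = 33.0
# A heavy user at the 20 GB full-speed cap puts the line underwater.
print(gross_margin(20, 6.0))  # 65 - 120 = -55.0
```

The spread between those two numbers is exactly why Comcast has such a strong incentive to push subscribers onto its own Wi-Fi.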

There aren’t any particularly compelling features. There isn’t any zero rating of entertainment content, unlike AT&T/DirecTV, T-Mobile BingeOn, or some of the content on Verizon.

Ironically, Xfinity Mobile could be a great budget option if you want to put your family on a wireless data ‘diet’. You could have 3-4 lines, with unlimited voice and text, and buy 4 GB a month, for under $50 total. This could be especially compelling if and when Comcast allows customers to bring their own phone or has some other option for customers to get an inexpensive device.
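The math behind that “diet” scenario follows directly from the published pricing (zero per-line access charge, $12 per GB); the four-line family sharing 4 GB is a hypothetical example:

```python
# Monthly cost of a hypothetical four-line family on Xfinity Mobile's
# by-the-GB plan: no per-line access charge, $12 per shared gigabyte.
lines = 4
access_charge_per_line = 0   # voice and text are free on every line
shared_gb = 4
price_per_gb = 12

total = lines * access_charge_per_line + shared_gb * price_per_gb
print(total)  # 48 -> under $50 for the whole family
```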

A big question is whether the Wi-Fi aspect will be an asset or a liability for Comcast or its customers. Comcast is relying on the 16 million Wi-Fi hotspots — a combination of indoor/outdoor hotspots the company has deployed, plus residential gateways broadcasting an extra SSID — to siphon a healthy percentage of data off the wireless network. This would clearly help Comcast’s economics. From the customer perspective, data speeds could be faster than cellular in some instances.

In my view, the execution on the Wi-Fi side is the big wildcard here. My personal experience using Xfinity Wi-Fi outside the home has been mixed. I have found many hotspots to be slow and unreliable, and my phone occasionally gets stuck in what I call ‘purgatory mode’: it attaches to a poor Xfinity Wi-Fi signal and can’t do anything. So a big question is whether Comcast has fixed this issue. How seamlessly the phone attaches to a hotspot, and how well it adjudicates between Wi-Fi and cellular in a given context to deliver the best connection, will be a major governor of the quality of service. This was a problematic issue when Comcast tested the service last year. It is telling that Comcast is not offering voice over Wi-Fi, a signal that the company remains nervous enough about Wi-Fi’s reliability to avoid a ‘Wi-Fi first’ experience a la Republic Wireless or Google Fi.

So, why is Comcast even doing this? Given the economics of the Verizon deal, Xfinity Mobile won’t be massively profitable on a standalone basis. Comcast is clearly banking on some increased level of stickiness with regard to its broadband and Pay-TV offerings. More likely, this is an initial foray into mobile (actually it’s Comcast’s third whack at wireless, for the historians among us). Within a week or two, we’ll learn how much spectrum Comcast won in the 600 MHz auction and its potential plans to build some form of a facilities-based wireless network, perhaps even as part of a potential acquisition of T-Mobile, Sprint, or a deal for some of DISH’s spectrum down the line. In the meantime, this is a relatively low-risk and inexpensive way to test the wireless waters (again).

Who will be the Next TV Aggregators?

Pay-TV is essentially an aggregation play: providers bundle up channels from a variety of owners into a single, simple package and sell that package to consumers, who value the ease of use of paying a single monthly bill and using a single user interface for a diverse set of content. But current trends in the TV market threaten to break these bundles apart, as both new players and traditional content owners create standalone offerings sold direct to consumers as a response to the size and inflexibility of the classic cable package. Yet, the benefits of bundling haven’t gone away and there’s some evidence consumers would prefer to get these standalone offerings re-bundled when it comes to bills and user interfaces. It creates an opportunity for a new set of aggregators. Who might those be?

Disaggregation and Fragmentation the Order of the Day

One of the single biggest trends in the TV industry at present is disaggregation and fragmentation. We’re moving from a world in which the vast majority of people consume, almost exclusively, pay-TV delivered through a traditional infrastructure and set-top box to one in which people consume a variety of video content through many different channels. Yes, pay-TV adoption remains strong, with over 90 million US households still going the traditional route, but that number declines by 1-1.5 million each year. In many cases, the traditional bundle is part of an increasingly varied video diet which also includes web-native services like Netflix and Hulu and, increasingly, standalone offerings from traditional content owners like HBO and CBS.

Among younger viewers, the traditional bundle and content owners are far less relevant and short-form video content found on YouTube, Snapchat, and other venues is likely to make up much of their daily video consumption. New standalone services are also beginning to emerge for this audience, often targeting fans of a particular genre or interest, like anime or gaming. But whether we’re talking about older or younger audiences, one theme is consistent: the attraction of these new services is they give consumers exactly what they want (and no more) on the devices and with the user interfaces they prefer and at what seems like a reasonable price (which may be zero, in the case of some of the ad-supported offerings aimed at younger viewers). The result is a fragmentation of the industry, with a move away from a single, homogeneous bundle of channels to a much more diverse set of consumption behaviors.

The Limits of Disaggregation

Disaggregation is therefore on the rise, and it is a defining feature of the modern TV era. Yet, it has its limits. It works great as long as it’s limited to one or two services, which are easily manageable even if the bills are paid separately and the content is accessed through different devices or in different apps. The problem comes when this shift in usage from the programming guide to the home screen starts to encompass more than just one or two unbundled channels or services.

Maybe you watch Netflix in addition to a traditional provider or perhaps Netflix has taken the place of your traditional provider, but you just can’t live without Game of Thrones and so subscribe to HBO as well. That’s pretty manageable. But what if you also want your Homeland or Walking Dead fix or you’re a closet NCIS fan? Then, you might find yourself getting the Showtime, AMC, or CBS standalone services. Suddenly, you could have half a dozen services to manage, each with its own app and its own monthly bill. Perhaps some might be available on some devices but not others or the interfaces might be different enough to be frustrating, with some offering a powerful search function and others taking more of a navigation approach. Perhaps you’d like recommendations for other shows to watch which transcend these services, so your obsession with Better Call Saul would lead to recommendations for new shows from HBO and not just AMC. When you sit down for an evening, you’d just like something good to watch and you don’t want to hunt through five different apps to find it.

At that point, aggregation starts to look pretty good again and not just to those traditional providers. Even younger viewers tend not to obsess over a single genre, typically watching more varied fare across several channels and apps as well. And though they may be willing to do the hard work to find content when they’re strongly motivated by saving money in their youth, as they age they’re likely to make the same cost/time tradeoffs as their forebears and start to value ease of use enough to be willing to pay for it.

The New Aggregators

At this point, we start to see new aggregators emerge in the TV industry and, of course, this is already happening. Several companies have stepped forward with their own visions of how to do this, though no one has yet made a complete success of it. On the one hand, we have Apple, which last fall introduced the appropriately, if somewhat generically, named TV app for iOS and Apple TV. That app acts as an aggregation layer for video apps available on those platforms, providing a single user interface for keeping track of favorite shows, recommendations, and so forth. It works well for the apps which have chosen to support it, but it’s missing Netflix and some other big services, which limits its utility somewhat.

On the other hand, we have Amazon, which has taken a different approach, offering various paid channels as tiers on top of its Prime subscription service. It started with one or two premium channels but has since expanded significantly (and arguably quietly) and now offers not just four big premium services (HBO, Starz, Showtime, and Cinemax) but also a long list of more specialized offerings including Acorn TV (British television), Heera (Indian content), AnimeStrike, NBC’s Seeso comedy app/channel, and more. These additional channels are billed to the same credit card as the core Prime subscription and accessed through the Amazon Video app. The recommendations layer isn’t quite there yet but the billing and user interface bundling are.

What we haven’t yet seen, but I suspect we will see soon, is a similar approach to the more youth-oriented programming offered by brands like Crunchyroll, Maker Studios, and AwesomenessTV. For now, these brands remain separate and are consumed by viewers individually, in some cases on an ad-supported basis or, in a smaller number of cases, as paid subscriptions. But over time, there’s an aggregation opportunity here, too, in bundling together content by demographic, genre, or interest in a single package, easily consumed in a single interface and billed (as applicable) once rather than in pieces.

Though Amazon and Apple will be potential aggregators here, we’ll see many others join this throng, including the traditional pay-TV providers, big content owners like Time Warner or Disney, and even startups that target this opportunity. At this point, it’s far too early to know who will win: the pay-TV providers have a strong history in the space but also lots of baggage, both in preserving old business models and in poor reputations with consumers, while the tech companies often do user interfaces and recommendations better but fall short on content rights. I suspect this battle over aggregation will be one of the most important of the next few years, because the companies that do it well will own not just the customer relationship but the primary brands in this space, with all others eventually becoming secondary and, in many cases, invisible over time. As such, all existing pay-TV providers and content owners should be thinking very carefully about where they might fit in such an ecosystem and taking steps to ensure they will be strong players there.

Even Millennials Don’t Know What to Do with Tablets

This week marks the 7th anniversary of the iPad’s availability on the market. The Verge reminds us of its initial take on the product. There were two main stances iPad reviewers took back in 2010. Some industry watchers thought the iPad could become the next computing platform — at least for some people. Others believed the iPad would mainly be successful with users with extra disposable income, as well as users who wanted a simpler computing experience and did not need much.

Seven years in, the debate remains the same: is the iPad the next computing platform or merely a superfluous device? Apple is certainly trying, with its new ads, to make us believe the former is true, but consumers do not seem convinced yet.

I am focusing on the iPad because, although many tablets followed it, none ever came close to the volumes Apple has been able to sell. Even now, when sales are in decline, the iPad remains the best-selling tablet on the market.

Perception is a Great Hindering Factor

At Creative Strategies, we recently ran an extensive study looking at Millennials’ preferences for devices and apps when it comes to collaboration. It is interesting to look at their device preferences for productivity because this is “The Touch Generation.” We focused on 18 to 24 year olds to gather their expectations as they enter the workforce, or soon after they join it. This is the age group that experienced the early stages of touch on smartphones the same way Gen Z is today experiencing many voice-first interactions. They are not only very comfortable with touch but get a lot done with their phones, which would put them in an ideal position to understand what they could do with a tablet.

We asked our panel of over 1,200 US millennials several questions about how they prefer to collaborate, what devices they use, what apps and services they use, and what communication medium they prefer. One of the questions asked them to think about which device they would take on a business trip if they knew they had a project due at work. As you can see from the chart below, the tablet was clearly not the device of preference. Only 12% of male and 16% of female millennials would take a tablet with them. The rest of the panel was pretty evenly split between taking a smartphone or a PC.

It is when you dig into why they would pick a particular device that we get some clarity on where tablets stand. Most millennials who picked a smartphone valued the communication side of the device. Being able to make calls and use messaging apps was the biggest selling point. There was also a consensus that “anything that needs doing can be done on a phone.” On the PC side, the two main drivers were screen size and range of apps. Millennials who would not leave without their PC really appreciated the larger screen real estate and believe there are certain productivity apps they would not be able to run on a phone. Communication mattered to these millennials too, but they thought that, between apps and VoIP calls, they could get the job done. Some were even prepared to go old school and use a landline if absolutely necessary. The bottom line for people choosing the PC was, if you want to do real work, there is no other option.

The smaller percentage of millennials who would take a tablet on their trip are Apple’s sweet spot. They understand they can do on a tablet everything they can do on a phone, including communication, while getting a larger screen. They referred to the tablet as a happy medium, the best of both worlds, and a device that gets the job done. Interestingly, a few called out Microsoft Office as an app that makes using a tablet as a main device easy. These are users who believe in the tablet’s ability to be the next computing platform.

Overall, this set of results very much reflects the perception the iPad, the best-in-class tablet, had back in 2010: that it was only good enough for light productivity. That perception still rings true today for many consumers, even open-minded millennials. What has also negatively affected tablet uptake is the progress in processing power and screen size that many smartphone models have undergone.

Are 2-in-1s Tablets or PCs?

While analysts and marketers love putting labels on devices, consumers seem to be a bit more pragmatic. If it quacks like a duck and it walks like a duck, chances are it’s a duck!

We wanted to test whether millennials were interested in an iPad Pro, a Surface, or another Windows 2-in-1 as their primary laptop, so we asked. Overall, only 18% of the panel was interested in a Windows 2-in-1 and only 9% was interested in an iPad Pro as a substitute for their laptop. Forty-nine percent came right out and said, “No way, I prefer a traditional laptop form factor.” Another 16% are not convinced touch is useful or needed, so Apple is not alone in thinking a touch screen is not a must-have in a laptop form factor.

Once again, when digging a little deeper and looking at the data by operating system the panelists are currently running on their computer, things get very interesting.

Current Windows 10 users are much more open to the idea of using a 2-in-1 as their main laptop than current Mac users. This highlights two main points: on the one hand, Microsoft and all the OEMs have succeeded in positioning 2-in-1s as PCs. On the other, Mac users still see their devices as superior to an iPad Pro or a 2-in-1. While the difference is less striking when it comes to the role of touch, current Mac users do have more doubts on how much it is needed in a laptop.

My hypothesis that 2-in-1s are seen more as PCs while the iPad Pro is still seen more as a tablet is backed up by the data we get if we cross these two questions. Of the millennials who would consider replacing their laptop with an iPad Pro, 22% would take a tablet on their business trip, while only 16% of those interested in using a 2-in-1 as their main PC would pick a tablet for the trip. This corroborates my hypothesis that the iPad Pro has yet to establish itself as a PC and that Apple has more work to do.

While the Windows ecosystem could convince consumers 2-in-1s were PCs mainly through advertising, it had a great advantage over Apple: an operating system most consumers already know as a PC operating system. This means Apple needs to do more than advertise, especially at the enterprise level, which is exactly what it is doing with its collaboration with SAP in particular.

If we consider millennials’ attitudes toward communication in our survey, it is clear the way they communicate has changed quite significantly. Messaging, video calling, and voice through apps rather than phone calls have taken over. Empowering the workplace with apps that take advantage of the iPad and create new workflows will have the same impact. While this might not affect the sales trajectory of the iPad any time soon, I do believe it will make a difference in the enterprise for iOS. The big question is whether the iPhone or the iPad will be the main beneficiary.

Samsung Building a Platform Without an OS

For the last 20+ years, the traditional thinking in the tech industry has been that in order to have any real power and influence, you had to have an operating system. Companies like Microsoft, Apple, and Google have turned their OS offerings into platforms, which could then be leveraged to provide additional revenue-generating services, as well as drive the direction and application agenda for other companies who wanted to access the users of a particular OS.

In an effort to follow that strategy, we’ve witnessed a number of companies try, unsuccessfully, to reach a position of power and control in the tech industry by building or buying operating systems of their own. From BlackBerry, to HP and LG (with webOS), to Samsung (with Tizen), there have been numerous efforts to replicate that OS-to-platform strategy.

Over the last year or so, however, we’ve begun to see the rise of platforms that are built to be independent from an OS. Prominent among these are Amazon, with Alexa, Facebook with, well, Facebook, and most recently, Samsung with a whole set of services that, while initially focused on their hardware, actually reflect a more holistic view of a multi-connection, multi-device world.

Interestingly, even many of the traditional OS vendors are starting to spend more time focusing on these “metaplatform” strategies, as they recognize that the value of an OS-only platform is quickly diminishing. Each of the major OS vendors, for example, is placing increased emphasis on their voice-based assistants—most of which are available across multiple traditional OS boundaries—and treating them more like the OS-based platforms of old.

Moving forward, I suspect we will see more machine learning and artificial intelligence (AI)-based services that may connect to the voice-based assistants or the traditional OSs but will actually be independent of them. From intelligent chatbots that enable automated tech support, sales, and other common services, to smart news and media-delivery applications, these AI-based services are going to open up a sea of new opportunities for these “new” platform players.

Another key new service will likely be built around authentication and digital identity capabilities. This will serve not only as a first log-in of the day, but function as an identity gateway through e-commerce, online banking, secure communications, and many other key services that require verification and authentication of one’s identity.

While some of these OS-independent platform strategies have been known for some time, the recent Samsung S8 launch event unveiled the first real glimpse of what Samsung may have in mind going forward. Because of the company’s extensive range of not only consumer tech products, such as smartphones, tablets, wearables and PCs, but also TVs and other consumer electronics, along with white goods like connected appliances, Samsung is uniquely positioned to deliver the most comprehensive connected hardware (and connected home) story of almost any company in the world. In fact, with the recent purchase of Harman—a major automotive component supplier—they can even start to extend their reach into connected cars.

To date, the company hasn’t really leveraged this potential position of power, but it looks like they’re finally starting to do so. Samsung Pass, for example, moves beyond the simple (though critical) capability of digital payments offered in Samsung Pay to a complete multi-factor, biometric-capable identity and verification solution. Best of all, it appears to be compatible with the FIDO Alliance standard for passing identity credentials between devices and across web services, which is going to be a critical capability moving forward.

On a more concrete level, the Bixby assistant on the S8, of course, provides the kind of voice-based assistant mentioned previously, but it also potentially ties in with other Samsung hardware. So, for example, you will eventually be able to tell Bixby on your Samsung phone to control other Samsung-branded devices or, through the new Samsung Connect Home or another SmartThings hub device, other non-Samsung devices. While other companies do offer similar types of smart home hubs, none has the brand reach or the installed base of branded devices that Samsung does.

As with any single-branded effort to dominate in the tech world, Samsung can’t possibly make a significant impact without reaching out proactively to other potential partners (and even competitors) on the device side in order to make its connected device platform viable. Still, because of its enormous footprint across so many aspects of households around the world, Samsung now possesses a bigger potential to become a disruptor in the platform war than its earlier OS-based efforts with Tizen might have suggested.

The Flaw in Tech Companies’ AI Strategy

There is a lot of talk about artificial intelligence and, sadly, not a lot of substance. We are in such early days of AI that I prefer to talk about what is happening in machine learning, since that is the stage of the AI future we are currently in. We are trying to teach computers to see, hear, learn, and more; right now, that is the focus of every major effort that will someday be considered AI. When I think about how tech companies will progress in this area, I think about it from the standpoint of what data they have access to. In reality, data is the foundation of machine learning and, thus, the foundation of the future of AI. I fully expect many companies to turn data into intelligence and use what they have collected to teach machines. There may very well be a plethora of specialized artificial intelligence engines for things like commerce, banking, oil and gas, astronomy, science, etc., but the real question in my mind is who is in the best position to develop a specialized AI assistant tuned to me.

While several of the tech companies I’m going to mention may not be focused on personal AI, I’m going to make my points through the lens of personal AI vs. general purpose AI. The question is, who is developing Tony Stark’s version of Jarvis for individual consumers? The ultimate computing assistant would learn, adapt, and augment all of our weaknesses as humans, bringing new levels of computational capability to the forefront for its user.

With the assumption that Facebook, Amazon, Google, Microsoft, and Apple are trying to build highly personalized agents, I want to look at the flaws and challenges each of them face in the present day.

Facebook
Facebook no doubt wants to be the personal assistant at all levels for consumers. However, like all the companies I’m going to mention, they have a data problem. This problem does not mean they don’t have a lot of data — quite the contrary. Facebook has a tremendous amount of data. However, they have a lot of the wrong kind of data to deliver a highly personalized artificial assistant for every area of your life.

Facebook’s dilemma is they see only the person the consumer wants them to see. The data shared on Facebook by a user is often not the full picture of that person; it is often a facade, a highly curated version of oneself. You present yourself on Facebook the way you want to be perceived and do not share the deeply personal issues, preferences, problems, or truly intimate aspects of your life. Facebook sees, and is learning about, the facade and not the true person behind the highly curated image presented on Facebook.

We share with Facebook only what we want others to see and that means Facebook is only seeing part of us and not the whole picture. Certainly not the kind of data that helps create a truly personalized AI agent.

Amazon
I remain convinced Amazon is one of the more serious players in the AI field and is potentially in a strong position to compete for the job of being my personal assistant. Amazon’s challenge is that it is commonly a shared service. More often than not, people share an Amazon Prime account, or an Amazon account in general, across their family. So Amazon sees a great deal of my family’s commerce data but has no idea whether it is me, my wife, or my kids making the transaction. This is made blatantly clear to me whenever I’m surfing Facebook or some other site that is an Amazon affiliate and see all the personal hygiene and cosmetic ads for items my wife has searched for on Amazon. Nothing like killing time on Facebook and seeing ads for Snail and Bee facial masks presented to me in every way possible.

While Amazon, with its Alexa assistant, is competing for the AI agent in my life, it has no way to distinguish me from the other people who share my Amazon account. That makes it very hard for Amazon to build a personalized agent just for me: it observes and learns from the vast data set of my shopping experience but does not know what I’m shopping for versus what my family is shopping for. The shared dynamic of the data Amazon is getting makes it hard for the company to truly compete for the personal AI. However, it does put Amazon in a good position to compete for the family or group AI rather than the individual one.

Google
Google is an interesting one. Billions of people use Google’s search engine every day, but the key question remains: how much can you learn about a person from their search queries? You can certainly get a glimpse into someone’s context and interests at any given moment from a query and, if you keep building a profile of that person from their searches, over time it is certainly possible to reach a surface-level understanding. But I’m not sure you can know a person intimately from their searches.

No doubt, Google is building a knowledge profile of its users on more than just their search queries as they use more of Google’s services: places you go if you use Maps, conversations you have if you use its messaging apps and email, and so on. The more Google services you use, the more Google can know and learn about you. The challenge is that many consumers do not fully and extensively use all of Google’s services. So Google, too, is seeing only a partial portrait of a person, not the entirety necessary to develop a truly personal and intimate AI agent.

Microsoft
Microsoft is in an interesting position because they, like Google and Apple, own an operating system hundreds of millions of people use on a daily basis for hours on end. However, I would argue Microsoft is positioned to learn about your work self, not so much your personal self. Because they are only relevant, from an OS and machine learning standpoint, on the desktop and laptop, they are stuck learning mostly, and in many cases only, about your work self. This is incredibly valuable in itself, and Microsoft is in a position to develop an AI designed to help you be productive and get more work done efficiently. The challenge for Microsoft is to learn more about the personal side of one’s life when all they will see and learn from is the work side.

Apple
Lastly, we turn to Apple. On paper, Apple is in one of the best positions to develop an agent like Siri that fully knows the intimate dynamics of those who use Apple devices. Unlike with Google, it is common for consumers to use the full suite of Apple’s services, from Maps, to email, to cloud storage and sync, to music, to photos. However, Apple’s stance of championing consumer privacy has put them in a position of willingly and purposely collecting less data rather than more.

If data is the backbone of creating a useful AI agent designed to know you and help you in all circumstances of your life, then the more it knows about you the better. Apple seems to want to grab as little data as possible, with the added dynamic of anonymizing that data so they don’t truly know it’s you, in order to err on the side of privacy.

I have no problem with these goals, but I am worried Apple’s stance compromises their ability to get the data they need to make better products and services.

In each of these cases, the main tech companies all have flaws in their grand AI strategies. Now, we certainly have many years until AI becomes a reality, but the way I’m analyzing the potential winners and losers today is on the basis of the data they have on their customers and whether it can support a true personal assistant that adds value at every corner of your life. While many companies are well positioned, there remain significant holes in their strategies.

Podcast: Samsung S8, Dex, Bixby, Connect Home

In this week’s Tech.pinions podcast Carolina Milanesi, Jan Dawson and Bob O’Donnell discuss the recent Samsung Unpacked 17 launch event, which included the debut of the Samsung S8 smartphone, their Dex desktop docking station, Bixby assistant and Samsung Connect Home device.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

The Appeal of Consumer-Focused PC as a Service

“PC as a Service” has started to gain traction in the corporate world, and over the next few years, an increasing number of companies will embrace this new way of procuring personal computers. The concept has the potential to appeal to a subset of consumers, too. And once they see the benefits, many will likely look to obtain additional technology products the same way.

I first wrote about PCaaS back in July 2016 and, since that time, we’ve conducted a great deal more research on the topic at IDC, including surveys of IT decision makers in multiple countries and a forecast around the uptake of such services. While each provider’s definition of PCaaS differs slightly, the main components revolve around paying a monthly fee for a service that includes the deployment, management, and eventual replacement of a PC. Companies like PCaaS because it lets them shift spending on PCs from a single, large upfront expenditure to a smaller, known recurring cost. PCaaS is different from leasing because the package typically includes IT services bundled with the hardware, which can drive some cost savings for the company over buying each separately. Firms with existing IT departments sometimes choose PCaaS to lessen the burden of PC management or to free up staff to focus on other key IT initiatives. Employees (and PC vendors) like the idea of PCaaS because contracts typically lead to more frequent PC refreshes.

I believe a certain percentage of consumer PC buyers would find PCaaS attractive as well. IDC is currently fielding a consumer survey where we asked about this topic. The US survey is still in the field but I took a sneak peek and, with more than 1,500 collected responses, about one-quarter of respondents expressed an interest in obtaining their next PC through a PCaaS program.

Like Leasing a Car
Some people hate the idea of leasing a car, preferring to pay cash or finance the purchase so that, once the car is paid off, they enjoy a stretch of time with no monthly payment. Because they expect to keep the car for a very long time, these consumers typically keep up the required maintenance, even though it costs them time or money. PCaaS is obviously not for these consumers.

However, consumers who do lease a car often do so because they’re comfortable with the idea of always having a monthly payment, they like getting a new vehicle every few years, and they appreciate that service is often part of the negotiated fee. These are the types of consumers who would likely embrace PCaaS: somebody willing to pay a monthly fee, in perpetuity, to have a PC with the cost of service updates, tech support, and even repairs factored into the monthly cost. Best of all, after a certain period of time, they can turn it in for a brand new one without having to front a big lump-sum payment.

If that sounds vaguely familiar, you’re right: Apple currently offers such a program for the iPhone. With the iPhone Upgrade Program, which I discussed here, Apple collects a monthly fee (starting at about $34) and, once per year, the consumer can trade in their iPhone for a new one. The plan also includes AppleCare+.
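To make the economics concrete, here is a rough back-of-the-envelope sketch of the trade-off these programs embody. The only figure taken from the text is the roughly $34/month iPhone Upgrade Program fee; the notebook price, coverage cost, and hypothetical "Mac as a Service" fee are illustrative assumptions, since no such consumer Mac program actually exists.

```python
# Back-of-the-envelope comparison: buying a machine outright vs. a
# hypothetical PC-as-a-Service subscription. All figures below are
# illustrative assumptions, not real vendor pricing.

def outright_cost_per_year(price, care_plan, years_kept):
    """Average yearly cost of buying hardware plus a support plan
    and keeping it for a fixed number of years."""
    return (price + care_plan) / years_kept

def service_cost_per_year(monthly_fee):
    """Yearly cost of a subscription that bundles hardware,
    support, and periodic refreshes into one monthly fee."""
    return monthly_fee * 12

# Assumed numbers: a $1,499 notebook plus $269 of extended coverage,
# kept for 4 years, versus a made-up $59/month service.
buy = outright_cost_per_year(1499, 269, 4)
subscribe = service_cost_per_year(59)

print(f"Buy outright: ~${buy:,.0f}/year")   # ~$442/year
print(f"As a service: ~${subscribe:,.0f}/year")  # ~$708/year
```

The gap is the premium a subscriber pays for frequent refreshes, bundled support, and no lump-sum outlay, which is exactly the trade-off the car-leasing analogy above describes.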

Apple’s experience with the iPhone upgrade program, the fact it has nearly 500 retail stores worldwide where people can bring hardware for service and support, and its reputation for delivering high-end notebooks and desktops, make it an excellent candidate to roll out as-a-service plans. Coupled with the company’s growing list of services, including iCloud and Apple Music, the appeal of “Mac as a Service” is undeniable. In addition to catering to high-end buyers, Apple could also reap the benefits of collecting and reselling Macs, just as it does with iPhones today. As well as offering these Macs to consumers at lower-than-original prices, the company could divert some reclaimed units to education channels and other cost-sensitive segments.

Apple isn’t the only one with stores: Microsoft has them too and the company has been testing the waters with a commercial-focused Surface as a Service program. A consumer version could be viable. Companies such as HP, Dell, and Lenovo have already launched commercial PCaaS programs and they have extensive support networks already in place. But the lack of physical stores could be an inhibitor to their ability to offer such a service for consumers who want the ability to walk in and get service.

Clearly, not every PC buyer would embrace the idea of PCaaS. But a significant number of consumers have expressed an interest and there is clear upside for the industry in terms of stable, recurring hardware revenues, improved services uptake, shortened product life cycles, and, perhaps most importantly, happier customers.

Should the idea of consumer PCaaS take off, the possible iterations are nearly endless: smartphones, tablets, wearables, AR/VR headsets, and more. Drones as a Service, anyone?

Samsung’s S8/S8+ Offer the Promise of a Bigger Picture and not just Thanks to the Infinity Screen

Samsung’s events always get a lot of attention but the scrutiny around the launch of the Galaxy S8 is somewhat unprecedented — the same can be said about the product leaks. We would do the S8 a disservice, however, if we thought such a level of attention comes only because it is the first phone released after the recall of the Note 7. Of course this matters, but the Galaxy S family has always had a much bigger impact on Samsung’s overall sales. This means that, however well the Note 7 might have done, there would still have been pressure on the S8 to be a big hit. For perspective, the Note family represents only around 12% of all Samsung phones currently in use around the world.

Similar to Apple, the pressure comes from delivering a product able to excite current Samsung owners as well as convert users of competitive brands. The S8 also hits the market a few months before the 10th anniversary iPhone is released and market watchers will be looking to see if Samsung has done enough to stay ahead of Apple. This is, of course, despite not having any concrete insight on what the next iPhone has to offer.

The Galaxy S8’s lack of bezel, or what Samsung calls the infinity screen, is definitely an attention grabber. It is when you hold the S8+ in your hand and compare it to the iPhone 7 Plus, however, that you realize how much screen real estate you get in a narrower form factor, which makes one-handed use easier. I see this as the biggest selling point of the device, both for upgraders and churners.

The rest of the hardware improvements, from a better front camera to a more powerful processor to Nougat, are nice but might not have been enough to get a Galaxy S7 user to upgrade. Had Samsung not brought the Galaxy S7 Edge to market last year, the impact of the S8 would have been even stronger. Think about the current iPhone and the reaction users will have if the bezel disappears on the next model to leave room for more screen. The S8 feels like an evolution of the Edge. That is in no way a bad thing; it just means the wow effect is a little muted.

Bixby only Playing a Supporting Role for Now

Samsung is very aware of how much is at stake with personal assistants and AI. It also understands there is no way around being compared to what consumers have already experienced with Alexa, Siri, Cortana, or Google Assistant. Setting expectations for the Bixby reveal was crucial. Samsung explained, through blogs and interviews with press, that what we see of Bixby in the Galaxy S8 family is the foundation of something bigger. For now, Bixby seems to be aimed at simplifying a user’s life by allowing him/her to cut steps in getting something done — turning on flight mode or recognizing a book through the camera and ordering it on Amazon.

Bixby is not really an assistant, not yet. Samsung has been very careful not to call it an assistant and presenters at the launch referred to it as an intelligent interface. I see Bixby as an AI engine focused on a somewhat limited range of functionalities that happens to have a voice. Not calling it an assistant is important as it helps consumers not to make direct comparisons and ultimately avoids disappointment.

Since the first information on Bixby leaked, I have been trying to explain why Samsung decided to come out with it now rather than waiting until the Viv integration is done. There are two main reasons. Samsung wanted to show it can develop AI in house, not just acquire it. Culturally, this is important, especially now that Samsung is focused on reassuring users the Note 7 issues will not hinder its innovation. Politically, it might also help to have a vision and establish a starting point before the Viv team comes on board.

From a consumer perspective, coming out with Bixby now is low risk. Consumers are certainly far from picking their phones based on what assistant comes with them. Starting with a focused set of capabilities could actually get consumers to use Bixby and be satisfied with the experience which would potentially help with future adoption of Viv.

The biggest question I have is what will happen to Bixby when Viv arrives. Will Bixby step aside for Viv? Will it take on Viv’s voice and brains? Will it coexist by narrowing its focus, empowering Samsung to deliver a cleaner user interface, not just on its phones but across its devices, including TVs, wearables, and other connected devices? Fortunately for Samsung, I doubt these questions will keep any potential Galaxy S8 buyer awake at night.

Not Just about Hardware Anymore

Since D.J. Koh took the reins of the mobile business, we have seen a consistent shift to deliver products that go well beyond hardware. Samsung has been focusing on software, especially in the enterprise, with Knox growing into a full enterprise platform. The newly announced Samsung DeX is a very good example of how hardware becomes a solution that enhances a user’s experience and extends the life of another product, in this case the Galaxy S8/S8+. Being able to dock your phone in a highly portable cradle that you can plug into a monitor at your desk or a TV screen in your hotel room could transform productivity on the go. Similar solutions have been tried in the past but DeX is coming at the perfect time: Android apps are finally able to scale better on larger screens and Microsoft Office is optimized for them as well. Mobile productivity delivered securely, with limited overhead in hardware cost and management, is certainly attractive. The big question is whether IT managers will see this as justification to upgrade to a Samsung S8/S8+.

Samsung’s ecosystem play is getting stronger, both out of need and opportunity. Even the balancing act with Google seems to be less of a concern – despite the stage shout-out – as Samsung expands into strategic partnerships, such as with Microsoft on DeX or Amazon on the Bixby camera functionality.

Samsung’s opportunity to own consumers’ homes is quite unique and I expect Samsung to continue to invest in it with products like the Samsung Connect Home mesh wifi router, which also functions as a SmartThings hub. Whether or not consumers are ready to fully invest in the brand remains to be seen. Building easy connections such as Galaxy S8 + Gear VR + Gear 360 for play, and products such as DeX for work, gets the ball rolling. Creating connections to home, fitness, and health might take longer and require more resources, but I doubt this will be a deterrent for Samsung.

The Future is Unevenly Distributed – Tech Should Fix That

William Gibson famously said, “the future is already here; it’s just not evenly distributed yet.” Nowhere is that more true than in the tech world where it’s easy to think that innovations, products, and services available to us are ubiquitous, even when their distribution is, in fact, very limited.

Square and Google Home come to the UK

Both Square and Google announced on Tuesday their products were coming to the UK. In Square’s case, this is its first entry into that market, and its fourth international market outside the US, after Canada, Australia, and Japan. In Google’s case, this is the international debut for the Google Home speaker and its Google WiFi routers. I have to confess, I was unaware Square hadn’t launched in the UK and was also unaware Google hadn’t made its new hardware products available outside the US until now. But I suspect that’s typical of those of us who follow the tech market in the US – we’re so accustomed to being the first to see new technologies, we rarely spare a thought for those who don’t have access to them yet, even in neighboring markets like Canada (as with the Google Home and Echo).

Even within the US, Haves and Have-Nots

Even within the US, though, there are often haves and have-nots when it comes to new technology, and Amazon’s footprint is a great example of this. Amazon just announced two new pickup grocery locations for its Amazon Fresh service, but they’re both in Seattle (and currently only open to employees). Its Amazon Go grocery store is also only in Seattle (and, perhaps for a bit longer than planned, limited to employees). Amazon’s brick-and-mortar bookstores? All but one of the stores it has opened or announced are in or near big coastal cities, the latest in Chicago. Its Fresh delivery service is also limited to just a few markets, as are its same-day delivery services.

But this goes well beyond just Amazon. One of Lyft’s competitive disadvantages relative to Uber is the smaller number of cities (and countries) where it operates, even in the US, something it’s trying to rectify with a rapid expansion this year. I’m in New York City this week and am finding there are a raft of options for ride-sharing services (for someone who feels increasingly uncomfortable with patronizing Uber) but that’s not the case everywhere in the US. Even something as seemingly ubiquitous as the Apple Store is still missing from several US states.

Silicon Valley’s Other Diversity Problem

Diversity is frequently in the news when it comes to the tech industry and was again this week with the release of Uber’s diversity report. When we talk about diversity, it’s typically about underrepresented gender or racial groups but there’s also another form of diversity the US tech industry is missing out on — exposure to those parts of the United States and the world where many of the services Bay Area residents take for granted are simply not available. A tech worker living in San Jose can likely commute to work using Uber, Lyft, Waze, or a number of other tech-based transportation services, order lunch through Postmates, and get groceries delivered at night from Instacart, Amazon Fresh, or Google Express. But many of those services aren’t available (or in some cases relevant) in much of the rest of the country.

Living in such an environment and among other people who are benefiting from the rise of technology alternatives to traditional services, it must be tempting to think of these innovations as unmitigated boons to mankind. Of course, it’s often in the rest of the country where the negative impacts of these changes are felt, as jobs get sucked out of rural and suburban areas, either to disappear completely or to be replaced in high-tech zones. Engineers who only ever see the tech-infused version of the world they live in can have little conception of the impact it causes elsewhere or the way the other half, or more accurately the other 99%, lives.

Going Global is Tough but Important

That’s why going global with a product or service is so important, though it may in some cases be tough. If innovations are beneficial, they should be as widely spread as possible, as quickly as possible. It’s obviously much tougher where extensive physical infrastructure such as retail stores, warehouses, or even fleets of cars and drivers are needed, but we often see even digital products and services like Amazon Echo and Google Home restricted to just a few markets, even ones that share a common language. That’s why I was so impressed by Netflix’s global launch a little over a year ago and continue to be impressed by major digital services from Apple like iTunes which span the globe, or even Siri, which supports many different languages in more countries than any of the other major virtual assistants. Doing that work is hard – it requires local language support, cultural understanding, partnerships with local players, and more — but it deserves doing because the benefits of many of these technologies are worth spreading as far and wide as possible.

It’s also important for companies to put their people into more diverse places because only then can those employees more accurately understand and represent the needs of those they’re trying to serve and create products and services designed to help a much broader swathe of the population. I’ve also been impressed recently by Steve Case’s mission to grow tech hubs outside of the big existing ones as a way to bring renewal and growth to more places across the US.

More people in the tech industry should be thinking about how to distribute the future more evenly, both within the US and across the world. That applies to their own businesses as much as to the products and services they sell.

Augmented Reality Finally Delivers on 3D Promise

The disillusionment was practically palpable. 3D—particularly in TVs—was supposed to be the saving grace of a supposedly dying category, and drive a new level of engagement and involvement with media content. Alas, it was not to be, and 3D TVs and movies remain little more than a footnote in entertainment history.

Not surprisingly, many people gave up on 3D overall as a result of this market failure, viewing the technology as little more than a gimmick.

However, we’re on the verge of a new type of 3D: one that serves as the baseline for augmented reality experiences and that, I believe, will finally deliver on the promise of what many people felt 3D could potentially offer.

The key difference is that, instead of trying to force a 3D world onto a 2D viewing plane, the next generation 3D enables the viewing of 3D objects in our naturally 3D world. Specifically, I’m referring to the combination of 3D cameras that can see and “understand” objects in the real world as being three-dimensional, along with 3D graphics that can be rendered and overlaid on this real-world view in a natural (and compelling) way. In other words, it’s a combination of 3D input and output, instead of just viewing an artificially rendered 3D output. While that difference may sound somewhat subtle in theory, in practice, it’s enormous. And, it’s at the very heart of what augmented reality is all about.

From the simple, but ongoing, popularity of Pokemon Go, through the first Google Tango-capable phone (Lenovo’s Phab 2 Pro), into notebooks equipped with 3D cameras, and ultimately leading to the hotly rumored next generation iPhone (whether that turns out to be iPhone 8 or 10 or something completely different remains to be seen), integrating 3D depth cameras with high-quality digital graphics into a seamless augmented reality experience is clearly the future of personal computing.

The ability to have objects, data and, ultimately, intelligence injected into our literal view of the world is one of the most intellectually and physically compelling examples of how computing can improve our lives that has popped up in some time. Yet, that’s exactly what this new version of augmented 3D reality can bring.

Of course, arguably, Microsoft HoloLens was the first commercially available product to deliver on this vision. To this day, for those who have been fortunate enough to experience it, the technology, capabilities and opportunities that HoloLens enables are truly awe-inspiring. If Magic Leap moves its rumored AR headset beyond vaporware/fake demoware into a real product, then perhaps it too will demonstrate the undeniably compelling technological vision that augmented reality represents.

The key point, however, is that the integration of 3D digital objects into our 3D world is an incredibly powerful combination that will bring computing overall, and smartphones in particular, to a new level of capability, usefulness, and, well, just plain coolness. It will also drive the creation of the first significant new product category that the tech world has seen in some time—augmented reality headsets.

To be fair, initial shipment numbers for these AR headsets will likely be very small due to costs, bulky sizes, and other limitations, but the degree of unique usefulness they offer will eventually make them a mainstream item.

The key technology that will enable this to happen is the depth camera. Intel was quick to recognize its importance and built a line of RealSense cameras, initially designed several years back for facial recognition on notebooks. With Tango, Google brought these types of cameras to smartphones and, as mentioned, Apple is rumored to be bringing them to the next generation iPhone in order to make its first stab at augmented reality.

The experience requires much more than just hardware, however, and that’s where the prospect of Apple doing some of their user interface (UI) software magic with depth cameras and AR could prove to be very interesting.

The concept of 3D has been an exciting one that the tech industry has arguably been chasing since the first 3D movies of the 1950s. However, only with the current and future iterations of this technology tightly woven into the enablement of augmented reality, will the industry be able to bring the kind of impact that many always hoped 3D could have.

Three Millennial Tech Myths Busted

A core thesis we have about the future of technology here at Creative Strategies centers on a younger demographic. Because of that, much of our continued research on the industry leads us to dedicated studies of the millennial demographic to help us understand the unique role technology plays for this cohort. We recently completed a study spanning hardware preferences, software behavior, collaboration techniques, communication techniques, and more, focused specifically on the 18-24-year-old millennial segment. This group is largely still in college and about to enter the workforce with an established set of collaboration and cloud-based workflows. An essential part of our study was to understand how this demographic uses the combination of hardware, software, and cloud services to be productive.

As part of our study, we discovered some interesting data which busts many myths associated with this demographic. For reference, the study was taken by 1,446 respondents within the millennial demographic, over 90% of whom are 18-24, spanning more than 40 college campuses.

Myth #1: Millennials are Done with Facebook
Perhaps one of the most popular myths is that millennials don’t use Facebook anymore or, if they do, that it is not central to their social media usage or an app that gets used daily. We asked millennials which apps they use on a daily basis. To our surprise, Facebook is still king: 89.35% of millennials still use Facebook daily, the highest percentage of all the apps we tested. Next on the list was Snapchat, with 76.36% using it daily, followed by Instagram at 73.79%.

While all current data we have suggests engagement time may indeed favor things like Instagram and Snapchat over Facebook, there is no doubt millennials still have Facebook as a daily part of their behavior. The more we study how millennials and even Gen Z use different social networks, we observe how each seems to serve a purpose. None appear to replace each other entirely but they all offer something a little different. This demographic has no problem juggling them effectively for their needs.

Myth #2: The PC is Dead to Millennials
Perhaps the most interesting hardware discovery in our study was how important the PC still is to this demographic. Through a variety of questions and behavior scenarios, we came to the realization that the PC is still the form factor this demographic uses and prefers to get “real” work done. While this demographic is certainly the most comfortable using their smartphones for things that classify as “work”, more so than older demographics, they still prefer their notebooks for a variety of productivity, creativity, and entertainment use cases.

One of our questions tested a specific scenario to understand how they weigh hardware preferences in a particular situation. We presented a scenario in which they were going on a trip and would be working on a project while traveling. On this trip, they could take only one device for all their needs: their notebook, smartphone, or tablet. We were certain it was a no-brainer and the majority would want their smartphone. To our surprise, the smartphone barely beat the notebook, with 42.92% choosing the smartphone versus 42.46% choosing the notebook.

The scenarios we tested showed the strength of the laptop form factor when any level of “work” or “school project” was involved. Based on many of the write-in comments about why they chose the device they did, it was clear that, had there not been work involved, there would have been no contest: the smartphone would have been the clear winner.

Myth #3: Face to Face Meetings are not Desirable
The strength of this myth is questionable, but I hear it frequently from senior managers at large corporations. Many Silicon Valley tech companies that employ large numbers of millennials also note how prevalent video conferencing has become with this generation. The assumption is that this demographic views face-to-face meetings with suspicion and considers them a waste of time. However, our study shows they still view face-to-face meetings as the most efficient way to collaborate.

We examined millennials’ preferred collaboration methods at different stages of a project and found face-to-face meetings were viewed as the most useful and preferred for both the planning and brainstorming stage and the check-up/review stages. Collaborating through tools like Google Docs, or a messaging client like iMessage, was sufficient to keep making progress. However, at the critical stages, nothing replaces a good old-fashioned meeting, even with millennials.

The more we study different demographics, the more we see quite distinct behavior patterns depending on life stage. Most of the “myths” I’ve heard are observations of either young millennials or Gen Z, who have much more time on their hands. The contrast is quite stark once you observe millennials in college, entering the workforce, or in their late 20s, already working and starting a family. Technology remains a constant at all stages; it is almost always the answer to many problems or challenges for this demographic. However, the ways it is implemented and used vary widely by life stage, and that may prove to be a constant as well.

Podcast: Apple iPads, Semiconductor Renaissance, Intel AI

In this week’s Tech.pinions podcast Tim Bajarin, Ben Bajarin and Bob O’Donnell discuss the new product announcements from Apple, new developments in the semiconductor market from companies like nVidia and ARM, and Intel’s recent announcement of a new AI organization.

If you happen to use a podcast aggregator or want to add it to iTunes manually the feed to our podcast is: techpinions.com/feed/podcast

Despite 5G Hype, still a Compelling Roadmap for 4G

5G was the dominant topic of conversation at this year’s Mobile World Congress. While this might be the sexiest topic in telecom right now, there’s still a compelling 4G LTE roadmap, incorporating many of the capabilities promised for 5G – especially the earlier, ‘pre-5G’ versions that have been discussed. What’s clear about 5G is it will roll out in stages, it will be messy, and there will be multiple “versions” of 5G.

Meanwhile, what about 4G, the LTE workhorse we all know and love? Well, there’s still a lot of gas left in the LTE tank. First, even though the US, Japan, South Korea, and a handful of other countries have 80% or more of their subscribers on 4G, that number is still less than a third of total subscriptions globally. So there is still substantial investment going into 4G. Second, even as 5G is deployed, LTE is going to provide the primary coverage layer for the foreseeable future – likely out to 2025 – even among the ‘early adopter’ 5G countries. That’s because 5G is likely to be deployed in islands or pockets until the business case is truly proven. And it will require a massive number of small cells which, as we’ve learned with the early stages of the small cell market, are difficult to deploy at scale.

Most importantly, there is still a compelling roadmap for LTE, promising significant improvements in speed, latency, and spectral efficiency. Much of what is promised for 5G — especially the pre-standard or early stages of 5G — can be accomplished within the LTE roadmap.

LTE Advanced, which was introduced a couple of years ago, has already delivered speeds exceeding 100 Mbps and a 2x or greater boost in capacity per MHz, using carrier aggregation, 4×4 multiple-input multiple-output (MIMO) antennas, 256 quadrature amplitude modulation (QAM), and an assorted soup of technologies with even less user-friendly acronyms.

The roadmap for the next two to three years is equally compelling. The next stage is called LTE Advanced-Pro, which some call 4.5G. Some of the capabilities include:

• An even higher number of potential ‘carrier aggregation’ channels (from 5 to 32)
• Support of much wider spectrum bands
• Peak data rates up to 3x that of LTE advanced
• Latency improvements of 50% or more
• Further MIMO enhancements, for better coverage
• Support for unlicensed spectrum, such as 5 GHz
• A host of enhancements for IoT, to support lower speed, narrow band access for low power devices

All this added together promises significant increases in throughput, improved latency, greater spectral efficiency, and other improvements. If you thought we needed to wait for 5G for gigabit services, think again: last year, Qualcomm introduced the X16, the first commercial modem to support Gigabit LTE (LTE Cat 16) at up to 1 Gbps, using four antennas to simultaneously receive 10 LTE data streams of around 100 Mbps each through advanced signal processing. In reality, operators won’t consistently deliver gigabit speeds, just as peak 5G speeds of 10 Gbps represent more of a theoretical than a practical number.
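The gigabit figure falls out of simple multiplication. Here is a back-of-the-envelope sketch of that arithmetic; the carrier split and per-layer rates are round, illustrative numbers rather than an exact reproduction of the X16 spec:

```python
# Rough sketch of how Gigabit LTE (Cat 16) reaches ~1 Gbps by stacking
# carrier aggregation, 4x4 MIMO, and 256-QAM. Real-world throughput
# depends on spectrum holdings, scheduling, and radio conditions.

BITS_64QAM = 6        # bits per symbol with 64-QAM (classic LTE)
BITS_256QAM = 8       # bits per symbol with 256-QAM
BASE_LAYER_MBPS = 75  # approx. one spatial layer on a 20 MHz carrier at 64-QAM

def layer_rate_256qam() -> float:
    """Per-stream rate once 256-QAM boosts each spatial layer."""
    return BASE_LAYER_MBPS * BITS_256QAM / BITS_64QAM  # ~100 Mbps

def cat16_peak_mbps() -> float:
    """Three aggregated carriers: 4x4 MIMO on two of them, 2x2 on the
    third, gives 4 + 4 + 2 = 10 simultaneous data streams."""
    streams = 4 + 4 + 2
    return streams * layer_rate_256qam()

if __name__ == "__main__":
    print(cat16_peak_mbps())  # 1000.0 Mbps, i.e. roughly 1 Gbps
```

As the article notes, this is a theoretical peak; the point of the sketch is only that the gigabit headline comes from multiplying stream count by per-stream rate, not from any single new radio trick.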

Another key aspect of LTE Advanced Pro will be using carrier aggregation in the unlicensed bands, such as 5 GHz (used by WiFi). Contentions around this issue were ironed out late last year and we should see services such as LTE Unlicensed (LTE-U) rolled out as soon as this summer, delivering speed boosts and capacity increases where the business case supports them. This, plus additional bands of 700 MHz, AWS, and eventually 600 MHz spectrum being deployed or becoming available soon, provides a lot more ‘real estate’ to increase channels. That translates into faster speeds and more capacity. The fact that the operators are now all offering some flavor of an ‘unlimited’ plan, even with asterisks, reflects their increased optimism with respect to the capacity picture.

The significance of all this is that many of the improvements and attributes touted for 5G, especially in its early stages, will be delivered within the LTE roadmap. Now, how exactly LTE Advanced Pro will be marketed is an interesting question. Remember when MetroPCS and Verizon were the first operators to deploy LTE in 2010? Within a few months, AT&T and T-Mobile branded their HSPA+ services, which technically were still 3G under the 3GPP framework, as ‘4G’. Some cried foul although, in reality, HSPA+ in some markets outperformed LTE (it was very situational). Further, what was originally promised for LTE wasn’t really delivered until some of the first LTE Advanced services started to be incorporated, circa 2013.

This playbook is likely to be (already is being?) replayed with respect to 5G. I would not be surprised if some operators branded aspects of their 4.5G services as 5G. They might call it ‘Pre-5G’. But ‘4G Plus’ and ‘5G Minus’ are likely to be much the same thing, from the standpoint of the user experience.

The bottom line is we don’t have to wait until 2020 or later for some of the significant improvements promised with 5G. There will be material increases in speed, latency, and capacity along the LTE path, and those capabilities are already part of the 12-to-18-month roadmaps of the major device vendors. So, while 5G might grab all the headlines, there are still lots of reasons to get excited about 4G.

Google’s YouTube Advertiser Problem has No Easy Fix

Last week, UK advertisers, including the government, the Guardian newspaper, and various others, began boycotting Google’s ad products, including YouTube, over the fact their ads were appearing next to troublesome content, ranging from videos promoting hate to those advocating terrorism. Unsurprisingly, given the exact same issues exist here in the US, the boycott this week began to spread to Google’s home turf, with several of the largest US advertisers pulling their ads from some or all of Google’s platforms. The challenge facing Google is that this problem has no easy fix: in two of the three possible scenarios, either creators or advertisers will be unhappy, while Google is probably hoping the third scenario is the one that actually pans out.

The Problem

The main focus of the complaints has been YouTube, although the same problem has, to some extent, affected Google’s ads on third party sites as well. On YouTube, the root of the problem is the site has 400 hours of video uploaded every minute, making it impossible for anything but an army of human beings to view all the new content being put onto the site continuously.

As such, Google uses a combination of algorithmic detection, user flagging, and human quality checking to find videos advertisers wouldn’t want their ads to appear in, and those systems are far from perfect. Terrorist videos, videos promoting anti-Semitism and other forms of hate, content advocating self-harm and eating disorders, and more have slipped through the cracks and ended up with what some perceive as an endorsement from major brands. Those brands, of course, aren’t happy with that. Following some investigations by the UK press, several have now pulled their ads either from YouTube specifically or from Google platforms in general until Google fixes the problem. US brands like AT&T, Verizon, Enterprise, GSK, and others are starting to follow suit.

No Perfect Fix

From Google’s perspective, the big challenge is that its existing systems aren’t working and there’s no easy way to fix that. Only one reasonable solution suggests itself and it’s far from ideal: restrict ads to only those videos that appear on channels with long histories of good behavior and lots of subscribers. That would likely weed out any unidentified terrorists, hate mongers, and scam artists without having to explicitly identify them. Problem solved! Except that, of course, the very long tail of YouTube content and creators would be effectively blacklisted even as this much smaller list of content and creators is whitelisted. That, in turn, would be unpalatable to those creators, even if advertisers might be pacified. And it would have a significant effect on YouTube’s revenue too.
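To make the trade-off concrete, here is a minimal sketch of the kind of blunt eligibility filter that scenario implies. The channel fields and thresholds are hypothetical illustrations, not YouTube’s actual policy or API:

```python
# Sketch of a "whitelist established channels" ad-eligibility rule.
# All names and cutoffs below are invented for illustration.
from dataclasses import dataclass

@dataclass
class Channel:
    name: str
    subscribers: int
    years_active: float
    strikes: int  # past policy violations on record

# Hypothetical thresholds; the point is that any hard cutoff
# de-monetizes the long tail of small, new creators along with
# the bad actors it is meant to catch.
MIN_SUBSCRIBERS = 10_000
MIN_YEARS_ACTIVE = 2.0

def ad_eligible(ch: Channel) -> bool:
    """Serve ads only on channels with scale, tenure, and a clean record."""
    return (ch.subscribers >= MIN_SUBSCRIBERS
            and ch.years_active >= MIN_YEARS_ACTIVE
            and ch.strikes == 0)

if __name__ == "__main__":
    veteran = Channel("established-vlogger", 250_000, 6.5, 0)
    newcomer = Channel("brand-new-creator", 800, 0.3, 0)
    print(ad_eligible(veteran), ad_eligible(newcomer))  # True False
```

However the thresholds are tuned, a rule of this shape cannot distinguish a promising new creator from an unidentified bad actor, which is exactly why the scenario is unpalatable to the long tail.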

Given that some creators are already unhappy with what they see as the arbitrary way YouTube already determines which videos are and aren’t appropriate for advertising, going further down that route seems dangerous and will create problems of its own. But, given the current backlash against YouTube and Google more broadly over this issue, it can’t exactly keep things as they are either, because many advertisers will continue to boycott the platform and there’s likely to be a snowball effect as no brand wants to be seen as the one that’s OK with its ads appearing next to hate speech, even if others aren’t.

So, we have two scenarios, neither of them palatable. One would be essentially unacceptable to the long tail of creators and would likely significantly impact YouTube’s revenue, while the other would continue to be unacceptable to major advertisers and also would significantly impact YouTube’s revenue. To return to a point I made at the beginning, this actually is broader than YouTube to programmatic advertising in general, including Google’s ads on third party sites. Alphabet’s management has cited programmatic advertising, where humans are taken out of the picture and computers make the decisions subject to policies set by site owners and advertisers, as a major revenue driver in at least its last four earnings calls, mentioning it in that context at least seventeen times during that period.

To the extent the programmatic method of buying is a major source of the content problem at YouTube specifically and Google broadly, that’s particularly problematic for its financial picture going forward. There was already something of a backlash over programmatic advertising towards the end of last year, when brand advertising was appearing on sites associated with racism and fake news, but this YouTube issue has taken it to the next level.

Hope of a Third Scenario

Alongside these two unappealing scenarios, there’s a third, and Google must be hoping this one is what actually pans out. This third scenario would see Google making more subtle changes to both its ad and content policies than the ones I suggested above and eventually getting advertisers back on board. That approach banks on the fact that brands generally like advertising on Google, which has massive reach and, through YouTube, a unique venue for video advertising that reaches generations increasingly disengaged from traditional TV. So I’d argue advertisers don’t actually want to shun Google entirely for any length of time and mostly want to use the current fuss to extract concessions from the company, both on this specific issue and on the broader issue of data on their ads and where they show up.

Google’s initial response to the problem, both in a quick blog post on its European site last week and a slightly longer and more detailed post on its global site this week, has been along these lines. It’s accepted responsibility for some of its past mistakes, identified some specific ways in which it plans to make changes, and announced some first steps to fixing problems. However, the fact that several big US brands pulled their advertising after these steps were announced suggests Google hasn’t yet done enough. It’s still possible advertisers will come around once they see Google roll out all of its proposed fixes (some of which were only vaguely described this week) and perhaps after some additional concessions. That would be the best case scenario here. Some of the statements from advertisers this week indicate they’re considering their options and reviewing their own policies, suggesting they may be open to reconsidering.

But these current problems still highlight broader issues with programmatic advertising in general, on which advertisers won’t be placated so easily. I could easily see the present backlash turning into a broader one against programmatic buying, which could slow its growth considerably, with impacts on both Google and the broader advertising and ad tech industries. I would think Google/Alphabet would be extremely lucky to emerge from all this with minimal financial impact; it’s far more likely it sees both a short-term dent in its revenues and profits from the spreading boycotts and possibly a longer-term impact as brands reconsider their commitments to programmatic advertising.

In the Market for a Tablet? No-Brainer to buy an iPad

I am sure you know by now that Apple has announced a new iPad model simply called iPad, aka the 5th generation. This is not quite an update to the iPad Air 2, as some of its specs, such as weight and thickness, match the original iPad Air instead.

The fact Apple did not hold an event for the announcement had more to do with not setting high expectations than with the significance of this product. The 9.7” iPad has been the most popular model for Apple. Since moving to the iPad Air line, Apple has been able to please customers who thought they wanted a smaller form factor; in reality, what they wanted was the higher portability that comes with a lighter device. The price drop of the older Mini generation helped buyers who wanted the most affordable iPad but would not necessarily have picked this product based on screen size.

Apple believes there is still a market opportunity for iPad, both as people upgrade older models and as they discover iPads for the first time. For many consumers, however, making the jump to buying a first iPad or upgrading to a new model has not been easy. Depending on where you are in the process, either cheaper Android alternatives are marketed as equals, or the iPad you are using still fulfills your needs, making it hard to justify an upgrade.

This week, Apple made buying an iPad simpler and more affordable. The new lineup is pretty clear:

    • iPad Mini is no longer the entry-level iPad, with consumers choosing this option based on form factor rather than price
    • No need to use the Air name as iPads are all lighter and slimmer than they were before the Air was introduced. I made the same argument for the MacBook Air when the new MacBook was announced and we’ll see if I am right
    • 9.7” is not only the most popular size but also where Apple sees the future of iPad, as it plays well in consumer, enterprise, and education. So the price comes down aggressively, to $329
    • iPad Pro remains the flagship for people who want the best and/or people who are ready to make the switch from a PC or a Mac and make the iPad Pro their main computing device. The two sizes offer choice depending on your mobility requirements.

Tablets remain a category of device many consumers do not see as necessary. In fact, according to a GWI report, only 5% of online Americans consider a tablet their most important device for accessing the internet, whether at home or elsewhere. This compares to 24% for smartphones and 40% for laptops. This lack of a clear role limits how much consumers are prepared to spend on them. Yet, when people use iPads, satisfaction is high.

According to J.D. Power, iPads have the highest satisfaction in the category at 830 (out of 1,000). Satisfaction is measured across performance, ease of use, features, styling and design, and cost. iPads outperform the competition on every factor aside from cost, and Apple just addressed this very point.

Apple is not going to Concede the Education Market to Chromebooks

Tablets are not just a consumer play and Apple is very well aware of that. Over the past year or so, Apple has been focusing on empowering their iPads in the enterprise through partnerships with IBM and SAP. Education is another major market for iPads but lately, Apple has been under pressure from a growing number of Chromebooks being used, especially by K-12 schools.

While $329 is an aggressive price in the consumer market, Apple pushed even further on education and will make the new iPad available through its education channel for $299. Targeted, aggressive pricing is something Apple is willing to do for certain segments, done in a way that does not negatively impact the brand.

Apple also collaborated with Logitech to make a rugged case available through the same channel, priced at around $99. Logitech will also offer an add-on keyboard for the rugged case, as well as a “Rugged Combo”.

Even at this price, Apple remains more expensive than Chromebooks. But there is more than just the iPad itself that Apple brings to the table, and once the price gap closes, the other factors carry a different weight. Apple’s app ecosystem is much larger than what Chromebooks have to offer, and the fact that Android apps will run on Chromebooks will not improve the situation much: many of the Android apps available in the store are still not optimized for tablet use, which limits how user-friendly and rich the experience can be. The accessories ecosystem is also a plus for Apple, as it lets the iPad fit in better with other tools teachers might be using in the classroom. The last point worth mentioning is security. Apple’s strong focus on privacy and security for its devices and the apps that run on them is an added benefit I am sure schools consider. Google and Apple both offer specific education tools to monitor access on the devices and limit vulnerabilities, but the cloud/browser-based nature of Chromebooks makes that more challenging. Of course, the fact that Google Docs works well on iPad is a reassurance for teachers who are invested in those tools.

The education market is certainly becoming a battleground for Google, Apple, and Microsoft. It will be interesting to see who will focus on a more holistic experience that centers on empowering teachers and students to teach and learn vs. facilitating admin and management of kids and staff.

In Other News

There were more Apple announcements on Tuesday.

There was an updated storage option for the iPhone SE, which now starts at 32GB for the same $399 price, with a small $50 premium for the 128GB version. This is a sensible move by Apple to future-proof this line for further software upgrades.

To celebrate ten years of support for Product RED and the fight against HIV, Apple released a RED iPhone. This is a first for Apple, which has done several RED products over the years (iPods, Beats headsets, iPhone and iPad cases) but never an iPhone.

Lastly, Apple announced Clips, a video editing app that will be available as a free download in April. Despite some confusion on social media, this is NOT a competitor to Snapchat or Instagram. Clips is about creating content to be shared on the social platform of choice. The iPhone is used more and more for pictures and videos, and giving users the opportunity to add features such as stickers, lenses, and filters makes a lot of sense. Apple is aware, however, that its audience is not made up only of social-savvy teenagers, so Clips comes as a separate app rather than being integrated into the main camera app. First and foremost, this approach avoids annoying users who are not interested. It also offers Apple an opportunity to further develop Clips by adding other capabilities in the future, such as AR.

It remains to be seen, however, whether avid Snapchat and Instagram users will be interested in creating content in Clips before sharing it through their social platforms. If Clips takes off, Apple will have created direct user engagement, shifting value back to the hardware and, ultimately, leaving only the delivery portion of the engagement to the social network platforms.

Clips is a great example of the kind of first party apps Apple should be focusing on to add value to hardware. While the wider ecosystem is a great strength and the balance of keeping partners and developers engaged is tricky, there is certainly room for Apple to do more.

Chip Magic

Sometimes, it just takes a challenge.

After years of predictable and, arguably, modest advances, we’re beginning to witness an explosion of exciting and important new developments in the sometimes obscure world of semiconductors—commonly known as chips.

Thanks to both a range of demanding new applications, such as Artificial Intelligence (AI), Natural Language Processing (NLP) and more, as well as a perceived threat to Moore’s Law (which has “guided” the semiconductor industry for over 50 years to a state of staggering capability and complexity), we’re starting to see an impressive range of new output from today’s silicon designers.

Entirely new chip designs, architectures and capabilities are coming from a wide array of key component players across the tech industry, including Intel, AMD, nVidia, Qualcomm, Micron and ARM, as well as internal efforts from companies like Apple, Samsung, Huawei, Google and Microsoft.

It’s a digital revival that many thought would never come. In fact, just a few years ago, there were many who were predicting the death, or at least serious weakening, of most major semiconductor players. Growth in many major hardware markets had started to slow, and there was a sense that improvements in semiconductor performance were reaching a point of diminishing returns, particularly in CPUs (central processing units), the most well-known type of chip.

The problem is, most people didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. In addition, the overall system design of devices was being re-evaluated, with a particular focus on how to address bottlenecks between different components.[pullquote]People predicting the downfall of semiconductor makers didn’t realize that hardware architectures were evolving and that many other components could take on tasks that were previously limited to CPUs. [/pullquote]

Today, the result is an entirely fresh new perspective on how to design products and tackle challenging new applications through multi-part hybrid designs. These new designs leverage a variety of different types of semiconductor computing elements, including CPUs, GPUs (graphics processing units), FPGAs (field programmable gate arrays), DSPs (digital signal processors) and other specialized “accelerators” that are optimized to do specific tasks well. Not only are these new combinations proving to be powerful, we’re also starting to see important new improvements within the elements themselves.

For example, even in the traditional CPU world, AMD’s new Ryzen line underwent significant architectural design changes, resulting in large speed improvements over the company’s previous chips. In fact, they’re now back in direct performance competition with Intel—a position AMD has not been in for over a decade. AMD started with the enthusiast-focused R7 line of desktop chips, but just announced the sub-$300 R5, which will be available for use in mainstream desktop and all-in-one PCs starting in April.

nVidia has done a very impressive job of showing how much more than graphics its GPUs can do. From work on deep neural networks in data centers to autonomous driving in cars, the unique ability of GPUs to perform enormous numbers of relatively simple calculations simultaneously is making them essential to a number of important new applications. One of nVidia’s latest developments is the Jetson TX2 board, which leverages one of their GPU cores but is focused on doing data analysis and AI in embedded devices, such as robots, medical equipment, drones and more.

Not to be outdone, Intel, in conjunction with Micron, has developed an entirely new memory/storage technology called 3D XPoint that works like a combination of DRAM—the working memory in devices—and flash storage, such as SSDs. Intel’s commercialized version of the technology, which took over 10 years to develop, is called Optane and will appear first in storage devices for data centers. What’s unique about Optane is that it addresses a performance bottleneck between memory and storage found in almost all computing devices, allowing performance advances for certain applications that go way beyond what a faster CPU could do.

Qualcomm has proven to be very adept at combining multiple elements, including CPUs, GPUs, DSPs, modems and other elements into sophisticated SOCs (system on chip), such as the new Snapdragon 835 chip. While most of its work has been focused on smartphones to date, the capabilities of its multi-element designs make them well-suited for many other devices—including autonomous cars—as well as some of the most demanding new applications, such as AI.

The in-house efforts of Apple, Samsung, Huawei—and to some degree Microsoft and Google—are also focused on these SOC designs. Each hopes to leverage the unique characteristics they build into their chips into distinct features and functions that can be incorporated into future devices.

Finally, the company that’s enabling many of these capabilities is ARM, the UK-based chip design house whose chip architectures (sold in the form of intellectual property, or IP) are at the heart of many (though not all) of the previously listed companies’ offerings. In fact, ARM just announced that over 100 billion chips based on its designs have shipped since the company’s founding, with half of those coming in the last 4 years. The company’s latest advance is a new architecture called DynamIQ that, for the first time, allows the combination of multiple different types and sizes of computing elements, or cores, inside one of its Cortex-A chip designs. The real-world results include up to a 50x boost in AI performance and a wide range of multifunction chip designs that can be specifically architected for particular applications—in other words, the right kind of chips for the right kind of devices.

The net result of all these developments is an extremely vibrant semiconductor market with a much brighter future than was commonly expected just a few years ago. Even better, this new range of chips portends an intriguing new array of devices and services that can take advantage of these key advancements in what will be exciting and unexpected ways. It’s bound to be magical.