News You might have missed: Friday, October 20th, 2017

on October 20, 2017
Reading Time: 5 minutes

Windows 10 Fall Creators Update

Windows 10 Fall Creators Update includes a number of new features: a replacement for OneDrive placeholders, support for Windows Mixed Reality, better workflow between Windows 10 PCs and iPhones and Android phones, and an improved Photos app experience. For enterprise users, the added security features in Windows Defender are probably the most compelling updates. Mixed Reality support is probably the feature that will be the focus of holiday advertising.

An Interesting Battle is Shaping Up on 5G

on October 19, 2017
Reading Time: 3 minutes

In July 2016, the FCC released its Spectrum Frontiers plan, which allocated up to four large swaths of spectrum in the millimeter wave (mmWave) bands, above 24 GHz, for 5G. This spawned a bit of a land grab, with Verizon snapping up mmWave spectrum through the acquisitions of XO and Straight Path, and AT&T acquiring FiberTower’s assets. It was a windfall for these companies, sort of the tech equivalent of having held onto a house in a lousy neighborhood that suddenly gets hot. With these acquisitions, AT&T and Verizon now own close to 60% of the licensed mmWave spectrum. The FCC still retains about a third of it, and plans 5G auctions at some point. Verizon and AT&T have been marching down the 5G road, testing fixed wireless access in the mmWave bands in several cities as one of the initial use cases.

But even though 5G had been heading in an mmWave-centric, circa-2020 direction, mid-band spectrum, characterized as that below 6 GHz, is proving to be an important contender for 5G as well. T-Mobile was among the big winners in the 600 MHz auction completed earlier this year, acquiring 31 MHz of nationwide spectrum. In August, the operator announced that it would deploy a ‘5G Ready’ network at 600 MHz, meaning that new equipment from Ericsson would be used that supports both LTE and 5G in that band. Also in August, the FCC opened an inquiry into new opportunities in the 3.7-4.2 GHz band, to be used for the “next generation of wireless services”. This effort is backed by Google and several wireless ISPs, which want to use this spectrum for fixed wireless services. At the same time, T-Mobile and the CTIA are leading an effort to make the 3.5 GHz (CBRS) band more ‘5G friendly’ by lengthening the terms of the licenses and expanding the geographic service areas.

The upshot of this is that mid-band spectrum is emerging as a viable alternative for 5G. One can see the battle shaping up, especially if Sprint and T-Mobile merge, which is looking increasingly likely. Sprint/TMO’s main 5G play would be in their 600 MHz and 2.5 GHz spectrum, plus leveraging their holdings in other bands as well (it should be noted that TMO owns mmWave spectrum serving about 1/3 of the country, through MetroPCS). It’s not clear how active they would be in a future FCC mmWave spectrum auction.

This is setting up a pretty interesting marketing battle and debate over 5G. The mmWave bands offer a huge amount of spectrum, which would deliver orders-of-magnitude improvements in network speed, capacity, and latency. The tradeoff is that mmWave spectrum generally requires line-of-sight, can be affected by weather, and offers a small coverage radius. Providing service in these high bands will also require the deployment of large numbers of small cells—and we haven’t yet found the formula to do this at scale. There is also still quite a bit of work to be done to develop the beam-forming antennas and other technology required to deliver wireless services in the mmWave bands.

So, what does this mean for the 5G rollout? We will see services marketed as 5G, even using sub-6 GHz versions of 5G New Radio, starting in 2018. AT&T has already prepped us by launching ‘5G Evolution’ in a handful of markets. In reality, these 4.5G, or Gigabit LTE, services will offer considerable improvements in download speeds and latency, which are certainly in the neighborhood of what has been envisioned for the early stages of 5G.

It’s also becoming clear that there will be different flavors of 5G. Gigabit LTE, and other services offered in the mid-bands, will look more like today’s cellular services, supporting broad coverage and mobility. Think of it as a base layer. Then, mmWave band networks will be built in denser urban areas and other targeted coverage deployments, where it makes the best economic sense and where the most subscribers can be reached. The map will look like ‘islands’ of 5G in a sea of LTE and LTE Advanced.

It will be well into the next decade before there is broad coverage of mmWave-based 5G, and there is still some question regarding the extent to which mobility can be supported in these bands or how good the coverage will be in buildings. But thinking about 5G in this way, and with this timeframe, provides a good runway for the technology to evolve. Consider that the average LTE speed is 4-5x what it was only five years ago. Apply that multiplier to 5G, as a base case, and things start getting interesting.
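To make that multiplier arithmetic concrete, here is a rough sketch in Python; the baseline speeds are illustrative assumptions, not measurements:

```python
# Back-of-envelope only: apply the historical LTE improvement multiplier
# to an assumed early-5G starting point. All figures are illustrative.
lte_then = 5                # Mbps, assumed average LTE speed five years ago
lte_now = lte_then * 4.5    # the ~4-5x improvement cited above -> ~22.5 Mbps

early_5g = 100              # Mbps, assumed average for early 5G service
for multiplier in (4, 5):
    print(f"{multiplier}x on a {early_5g} Mbps base -> {early_5g * multiplier} Mbps")
```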

In the meantime, fasten your seatbelts for the upcoming marketing war between T-Mobile/Sprint and Verizon/AT&T, over 5G. With no official body really calling the shots over the definition of 5G, it will be up to the market to decide.

Samsung Aims for Connected Thinking at Developer Conference

on October 19, 2017
Reading Time: 4 minutes

Samsung is holding its annual Developer Conference this week in San Francisco. At the day one keynote on Wednesday, it pushed a vision centered on “Connected Thinking” as its major theme for not only the conference but its strategy in relation to its software and services in the coming year. That was reflected in a range of moves designed to bring what have been disparate parts of Samsung together, but it’s apparent that this will be a tall order.

A Single Cloud Platform and Bixby as Connective Tissue

Samsung’s major announcements focused on three key topics:

  • Consolidating Samsung’s disparate Internet of Things cloud offerings
  • Iterating on Bixby, by improving the technology, extending it to new devices, and opening it up more
  • Going all-in with Google on AR through ARCore support on all of this year’s flagship phones

The Internet of Things moves are focused mostly on using the SmartThings brand (now without Samsung as an umbrella brand) as the consumer lead, while consolidating three separate cloud IoT platforms into one, also now tagged with the SmartThings brand. ARTIK survives as a separate IoT brand, but now focused mainly on modules, while its cloud platform along with the Samsung Connect Cloud announced earlier this year will be folded into the SmartThings Cloud.

The way I see this is that the SmartThings Cloud will be the invisible connective tissue on the back end, while Bixby 2.0 eventually becomes the visible connective tissue in the front end as part of a much more coherent and connected vision for Samsung’s range of devices. Samsung executives pointed out during the keynote that it has arguably the largest number and range of devices in use of any company in the world, but the reality is that it’s always been a pretty disparate range of devices, with only fairly superficial integration between them. A big reason for that is Samsung’s operational structure, which has separate CEOs for each product-centric business unit.

The vision Samsung is pushing now is one where a variety of services on these devices will all be powered by the same cloud back-end, and Bixby will become a cloud-based voice interface which works on more and more of them over time. Bixby 2.0 will shift its personalization and training from the device to the cloud, and will therefore start to build profiles of individual users which can be exposed on a variety of devices, including shared devices like TVs and fridges. In addition to its own devices, it’s going to try to extend Bixby support to a variety of third party devices through modules and dongles as part of what it called Project Ambience, which will Bixby-enable existing home devices, both smart and dumb ones, and connect them to each other.

Significant Challenges Lie Ahead

What’s interesting here is that, even though Samsung controls the operating systems on several of its devices, because it doesn’t control by far the biggest – Android on its smartphones – it is instead building the connective layer between its various devices at the interface level. That means pushing Bixby to become far more than it’s been so far, acting not only as a way to perform tasks previously done through touch on a phone, but increasingly allowing for integration with other Samsung devices like TVs and control of smart home gear through SmartThings integrations.

In reality, though, voice can’t be the only interface and therefore can’t be the only connective layer between these various devices – in time, the integration therefore either needs to grow beyond Bixby, or Bixby itself needs to evolve to the point where it’s more than just a voice interface. In the meantime, the SmartThings brand, now decoupled from the Samsung brand to foster a sense of openness, will nonetheless become the brand for Samsung’s own connected home ecosystem too (replacing Samsung Connect), which may cause some customer confusion.

But those aren’t the only barriers to making this vision work: Samsung needs to overcome both internal and external hurdles if it’s to be successful in creating a truly connected ecosystem. The biggest internal barriers continue to be structural – hearing Samsung executives talk about this week’s announcements both on stage and one-on-one, the language is still far more that of separate companies “partnering” rather than a single team working together. The integration announced this week represents progress, but there’s a long way still go go and huge cultural barriers to overcome.

Externally, Samsung needs to convince developers and hardware partners that Bixby is ready for use as a voice platform beyond its smartphones, at a time when it’s got big shortcomings even there. Deeper integration of the Viv technology will certainly help to improve its functionality, as will opening up version two earlier to developers so that the integration can be deeper when it launches to consumers. But the leap Samsung is contemplating here is a huge one, one which other platforms have approached much more gradually and incrementally than Samsung is proposing to do. Samsung would arguably be better served by tackling either third party integration or cross-device support first and then pursuing the other second, rather than trying to do it all at once. The current approach risks over-promising and under-delivering.

The last big challenge is one of adoption – unlike earlier voice assistants, Samsung can’t simply add Bixby to existing hardware, because little of it was designed with voice interfaces in mind. What that means is that it can only grow the Bixby base to the extent that it can grow the base of devices which offer it. In categories like TVs and fridges, that means waiting until next year to even start selling them, and with long refresh cycles, it’ll take many years before penetration is meaningful. Even in smartphones, where Samsung has an installed base of hundreds of millions, it has just 10 million users of Bixby, and we don’t even know how many of those use it daily or weekly. Even if the new SmartThings and Bixby ecosystems work exactly as intended, it will be quite some time before any significant number of consumers actually get to benefit from them.

Amazon’s Alexa Land Grab

on October 19, 2017
Reading Time: 4 minutes

Without a doubt, in my mind, Amazon’s Alexa has been the star of the tech industry this year, starting with CES, where the banner of “works with Alexa” was first raised with support from nearly all major appliance and smart home brands. The presence of an ambient, always-on smart speaker with a digital assistant has been the single greatest catalyst for the smart home I’ve seen since I began studying the category. From a consumer perspective, the voice interface eliminated a great deal of friction from how we interact with smart home objects. Our continued research on the category keeps confirming that households with an Amazon Echo have more connected smart home products than those without, and that those customers rapidly add smart home gear after buying and integrating an Amazon Echo into their home.

Now Everyone Wants to Design Silicon

on October 18, 2017
Reading Time: 3 minutes

The move by major technology companies to design some custom silicon components rather than buying off-the-shelf parts from suppliers has been a long time coming. One of the biggest challenges in the competitive field of consumer electronics is that competitors often use the same components and software platforms as everyone else. Companies competing for the consumer live and die by their ability to be different and stand out from the pack. When you use the same software platforms and components as your competition, you simply swim in a sea of sameness and have a hard time standing out. This is why Apple has developed a fully mature and foundational strategy of designing all of the most critical and differentiating components that give its products an edge in the market. So it comes as no surprise that Google has developed a custom SoC for the image processing part of its new Pixel 2 smartphones.

The Good, the Bad and the Ugly of Twitter

on October 18, 2017
Reading Time: 5 minutes

Last week, actress Rose McGowan’s Twitter account was suspended for 12 hours after she wrote a series of tweets accusing Hollywood producer Harvey Weinstein of raping her. When she said she was being silenced, Twitter responded that her account was suspended because one of her tweets included a private phone number, which violates the code of conduct. This explanation did not convince many, however. At a minimum, it raised questions about the timing of it all. CEO Jack Dorsey took to Twitter to admit that his platform “needs to do a better job at showing that we are not selectively applying the rules.”

It is not the first time that Twitter has come under fire, not so much for lack of clarity about what constitutes a violation of the code of conduct as for lack of consistency in how those violations are dealt with when reported. Over the past year, as harassment increased, Twitter deployed a series of measures, like the ability to mute a conversation or a user, that seemed aimed at hiding the issue rather than addressing it. Just because I no longer see the abuse and harassment does not mean it has gone away. More importantly, those users who are harassing and abusing others feel that their behavior is condoned.

Fresh off the press, there is an internal Twitter email obtained by Wired that outlines new rules Dorsey is readying for release, but I will wait for an official communication before commenting.

Social Media Engagement

Social media drivers differ from person to person and from network to network. I was a reluctant Twitter user. I started using the platform for work in 2009 but did not do so consistently until 2013, when I changed jobs. Twitter quickly became a useful tool for keeping on top of the news. My initially passive networking experience turned into an engaged one as I came to appreciate being able to share my thoughts on the tech world and actively engage with fellow tech watchers. As my engagement grew, I set some rules for how I wanted to use the platform:

– Never say anything I could not stand behind in case it was published as a quote in the press

– Keep it clean-ish

– No Religion

– No Politics

Pretty simple stuff, right? Eight years on, I am proud to say that, except for the last rule, I have been quite diligent in following them. I am sure that, given the current state of affairs in all the countries I have lived in over the years, staying silent rather than breaking my own politics rule would have been the real crime!

In a recent report published by GWI, I discovered that I was not alone in my reliance on Twitter for news. Twitter users are most engaged in reading news stories (57%), followed by liking a tweet (40%) and watching a video (34%). Direct actions such as tweeting a photo or a comment about my daily life only make up 23% and 22% of activities, respectively.

Overall engagement on Twitter has been declining since 2013 (-5%), a problem the company has been trying to address without much success. That said, engagement on Facebook over the same period has been declining even more rapidly (-16%), as consumers seem to lean more towards video- and picture-focused platforms such as YouTube and Instagram, up 2% and 14% respectively.

It would be too easy to blame the lower engagement on harassment alone, but I am sure nobody would argue with the fact that harassment is making Twitter less appealing as a platform. Quite a few celebrities have found Twitter too ugly, and either left, like Kanye West, Lindsay Lohan, Emma Stone, and Louis C.K., or took breaks and returned, like Leslie Jones, Justin Bieber, and Sam Smith. For now, the return on investment the platform is providing me is still positive. The question is, for how long?

Disasters, Emergencies, and Hashtags See the Best of Twitter

Over the past few months, we have had our share of disasters and emergencies to deal with both in the US and internationally. It is at those difficult times that I tend to see the best of Twitter. From breaking news that allows people to keep up to date with a fast-evolving situation to people coming together to help by sharing stories or ways to donate.

But even in those good moments, troll accounts creep into the conversation to dismiss, offend or sabotage the effort.

On the back of the Rose McGowan incident, two hashtags emerged bringing attention to harassment on the platform and sexual harassment across the board. On Friday the 13th, #WomenBoycottTwitter started calling on women to walk away from the social media platform for a day. Many users, including celebrities, joined in. Not everybody, though, agreed that silence was the best tactic to make a point in this particular situation. I for one decided not to be silent and went on Twitter to condemn abuse and do what I do every day: talk tech. I thought that at a time when many women are being brave in speaking up against abuse, remaining silent was not something I was comfortable with. Also, when it comes to Twitter, it only matters who is on it, not who is not. In other words, you do not notice who does not tweet. Some were also uncomfortable with the fact that the uproar against abuse was somehow limited to white women, when minorities and the LGBT community have been victims of abuse on the platform for a long time.

The original intent was clear and deserves the utmost respect, but the execution was possibly not the best. So by Sunday night, Alyssa Milano encouraged people to reply “me, too” to her tweet about being a victim of sexual harassment or assault as a way to show how pervasive the problem is. A new meme was born: #MeToo. Voices were heard from women and men, straight and gay, across countries like the US, UK, Italy and even more conservative France. The conversation was not limited to Twitter; it took over Facebook as well, engaging more than 4.7 million people.

Burst Your Bubble…Read Some Comments

Twitter succeeded in giving a voice to so many people, making it clear that sexual harassment is not just a Hollywood or tech industry issue but one that impacts individuals across the world. But even in that strong testimony, the ugliness of Twitter came through. Just take a look at some of the replies posted to comments by more famous women, like Italian actress Asia Argento, and you quickly get a feel for how ugly people can be when they can hide behind a Twitter handle.

Very often we live in our cocoon of lists of people we follow because we respect them, share their views or are interested in what they do or say. Without knowing it, we are sheltering ourselves from all those individuals who more likely than not do not share our views, our beliefs, our values. And I am not talking here about which smartphone ecosystem you prefer but big stuff like politics, religion, sexual orientation.

Sometimes that bubble bursts as we get trolled or outright attacked for our views. Other times, we are lucky, and we just never see the ugly side of Twitter. That does not mean it does not exist. As we have seen since Sunday, just because you do not have a story to share under the #MeToo meme does not mean millions of people around the world don’t have one.

Tech Inevitability Isn’t Guaranteed

on October 17, 2017
Reading Time: 3 minutes

It’s a story that would have been hard to believe a few years back.

And yet, there it was. eBook sales in the US declined 17% last year, and printed book sales were up 4.5%. What happened to the previous forecasts for electronic publishing and the inevitable decline of print? Wasn’t that widely accepted as a foregone conclusion when Amazon’s first Kindle was released about 10 years back?

Of course, there are plenty of other similar examples. Remember when iPad sales were accelerating like a rocket, and PC sales were declining? Clearly, the death of the PC was close at hand.

And yet, as the world stands five years later, iPad sales have been in continuous decline for years, and PC sales, while they did suffer some decline, have now stabilized, particularly in notebooks, which were seen as the most vulnerable category.

Then there’s the concept of virtually all computing moving to the cloud. That’s still happening, right?

Not exactly. In fact, the biggest industry buzz lately is about moving some of the cloud-based workloads out of the cloud and back down to “the edge,” where end devices and other types of computing elements live.

I could go on, but the point is clear. Many of the clearly inevitable, foregone conclusions of the past about where the tech industry should be today are either completely or mostly wrong.

Beyond the historical interest, this issue is critical to understand when we look at many of the “inevitable” trends that are currently being predicted for our future.

A world populated by nothing but completely electric, autonomous cars, anyone? Sure, we’ll see an enormous impact from these vehicles, but their exact form and the timeline for their appearance are almost certainly going to be radically different than what many in the industry are touting.

The irreproachable, long-term value of social media? Don’t even get me started. Yes, the rise of social media platforms like Facebook, Twitter, Snapchat, LinkedIn and others has had a profound impact on our society, but there are already signs of cracks in that foundation, with more likely to come.

To be clear, I’m not naïvely suggesting that many of the key trends that are driving the tech industry forward today—from autonomy to AI, AR, IoT, and more—won’t come to pass. Nor am I suggesting that the influence of these trends won’t be widespread, because they surely will be.

I am saying, however, that the tech industry as a whole seems to fall prey to “guaranteed outcomes” on a surprisingly regular basis. While there’s nothing wrong with speculating on where things could head and making forceful claims for those predictions—after all, that’s essentially what I and other industry analysts do for a living—there is something fundamentally flawed with the presumption that all those speculations will come true.

When worthwhile conversations about potential scenarios that may not match the “inevitable direction” are shut down by groupthink (sometimes from those with a vested interest at heart), there’s clearly a problem.

The truth is, predicting the future is extraordinarily difficult and arguably even impossible to really do. The people who have successfully done so in the past were likely more lucky than smart. That doesn’t mean, however, that the exercise isn’t worthwhile. It clearly is, particularly in developing forward-looking strategies and plans. Driving a conversation down only one path when there may be many different paths available, however, is not a useful effort, as it potentially cuts off what could be even better solutions or strategies.

Tech futurist Alan Kay famously and accurately said that “the best way to predict the future is to invent it.” We live and work in an incredibly exciting and fast-moving industry where that prediction comes true every single day. But it takes a lot of hard work and focus to innovate, and there are always choices made along the way. In fact, many times, it isn’t the “tech” part of an innovation that’s in question, but, rather, the impact it may have on the people who use it and/or society as a whole. Understanding those types of implications is significantly harder to do, and the challenge is only growing as more technology is integrated into our daily lives.

So, the next time you hear discussions about the “inevitable” paths the tech industry is headed down, remember that they’re hardly guaranteed.

Q3 2017 Earnings Preview

on October 16, 2017
Reading Time: 6 minutes

We’re about to kick off earnings season for Q3 2017, so I’m doing my usual quarterly preview. My focus here isn’t so much predicting what we’ll see as suggesting things to look for when these companies report. As usual, I’ll tackle the main companies I track in alphabetical order.

Virtual Reality’s Desktop Dalliance

on October 13, 2017
Reading Time: 3 minutes

The hardware landscape for virtual reality evolved dramatically in just the last few weeks, with new product announcements from Samsung, Google, and Facebook that span all the primary VR platforms. While the new hardware, and some lower pricing, should help drive consumer awareness around the technology, perhaps the most interesting development was both Microsoft and Facebook demonstrating desktop modes within their VR environments. These demonstrations showed both the promise of productivity in VR and the challenges it faces on the road to broader adoption.

Microsoft’s Mixed Reality Desktop Environment
For the last few months, Microsoft and its hardware partners have been slowly revealing more details about both the Mixed Reality headsets set to ship later this month and the upcoming Windows 10 Fall Creators Update that will roll out to users to enable support for the new hardware. At a recent event in San Francisco, Microsoft announced a new headset from Samsung that will ship in November, joining products from HP, Lenovo, Dell, and Acer that will ship in October. During that event, Microsoft Fellow Alex Kipman gave attendees a tour of the Cliff House, the VR construct inside Windows 10 where users interact with the OS and their applications.

At the time, it seemed clear to me that one of the obvious advantages Microsoft brought to the table was ownership of the OS. By having users move within the OS virtually, you decrease the number of times the user must jump between the 3D world of VR-based apps and the 2D world of today’s PC desktop environment. More importantly, the Cliff House also offered a productivity-focused room where you could set up a virtual desktop and use your real-world keyboard and mouse with traditional desktop apps. It is essentially a desktop space where your monitor is as wide and as tall as you desire to make it, providing the virtual real estate for a multitasking dream (or nightmare, depending on your perspective). Microsoft noted at the time that the number of apps running in such a scenario is limited primarily by the PC graphics card’s ability to support them. I couldn’t test the actual desktop environment at that event, but it certainly looked promising.

Facebook Announces Oculus Dash
At this week’s Oculus Connect conference Facebook offered its market response to Microsoft, announcing a permanent price cut to its existing Oculus Rift product ($399), a new standalone VR product called Oculus Go ($199), and additional details about its future Rift-caliber wireless headset code-named Santa Cruz. Just as important, though, was the company’s announcements about updates to its platform (Oculus Core 2) and its primary interface mechanism (Dash) that includes a desktop environment. With these announcements, Facebook rather effectively addressed Microsoft’s perceived advantage by introducing a VR environment that appears, at least from the on-stage demos, to bring many of the same interactive features as Microsoft’s to the Oculus Rift. I wasn’t at the Facebook event and haven’t tested its desktop environment yet, either, but it also looked promising. Whether the company will be able to drive the same level of desktop performance as Microsoft, which obviously has the advantage of controlling the underlying OS, remains to be seen.

The 2D VR Conundrum
One issue that both Microsoft and Facebook face as they push forward with their desktop environment plans is the simple-to-note but hard-to-address fact that pretty much 100% of today’s productivity apps are two-dimensional. The result is that when you drop into these fancy virtual reality desktops, you’re still going to be looking at a two-dimensional windowed application. And you’re going to enter and manipulate data and objects using your real-world keyboard and mouse. What we’re facing here is the mother of all chicken-and-egg problems: today there are very few virtual reality productivity apps because nobody is working in VR, but because nobody is working in VR, few app developers will focus on creating such apps.

One of the primary reasons I’ve been bullish on the long-term prospects of virtual reality (and augmented reality) is that I envision a future where these technologies enable new ways of working. Up until now, humans have largely adapted to the digital tools on offer, from learning to use a QWERTY keyboard and mouse to tapping on a smartphone screen filled with icons. VR and AR offer the industry the opportunity to rethink this, to define new interface modes, to create an environment where we do a better job of adapting the tool to the human, acknowledging that one size doesn’t fit all.
Facebook and Microsoft’s continued reliance on the desktop metaphor at this early stage is both completely understandable and a little frustrating. These are the first stops on what will be a long journey. Ultimately, it will be up to us as end users to help guide the platform owners and app developers toward the future we desire. I expect it to be a very interesting ride.

News You might have missed: Week of October 13, 2017

on October 13, 2017
Reading Time: 3 minutes

Google Home Mini loses Touch Feature

Not even a week after the debut of the Google Home Mini, Google was made aware of an issue with pre-production units. The touch functionality on top of the Mini, which allows users to put the device into listening mode, was behaving incorrectly. Basically, the Mini was detecting a touch when nobody was actually touching it, causing it to listen in without the user knowing. Google first released a software update to rectify the issue and later disabled the feature altogether.

Google and the Disintermediation of Search

on October 12, 2017
Reading Time: 3 minutes

This week, the growing amounts Google pays phone makers and other companies to carry its search engine have been in the news as financial analysts have expressed concern over margin pressure. The growth in those traffic acquisition costs is certainly worth watching, but I’d argue that by far the larger strategic threat to Google comes from the growing disintermediation of search, something that’s also been in the news this week.

Google’s Growing Traffic Acquisition Costs

There’s no doubt that Google’s traffic acquisition costs have been growing, not only in absolute terms but as a percentage of revenue. By far the biggest driver of that increase is the increasing cut Google has to pay to Apple, Samsung, and others who give the Google search engine prime placement in their browsers. The chart below shows the percentage of revenue from Google’s own sites which it has paid out in TAC to these partners:

As you can see, that number has risen in various phases, notably from 2011-2013, and again starting in 2015 and continuing through the first half of this year. Overall, the percentage has nearly doubled from 6% to 12% during this eight-year period, and the trajectory continues to be dramatically up and to the right. That reflects the fact that an increasing proportion of Google’s search traffic and revenue now comes through smartphones and especially the iPhone, which likely constitutes a big chunk of its overall TAC payouts.
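For clarity, this is how that ratio is computed; the dollar figures in this sketch are placeholders, not Google’s reported numbers:

```python
# TAC paid to distribution partners as a share of Google-sites revenue.
# Both dollar figures below are placeholders, not Google's reported numbers.
google_sites_revenue = 18.4e9   # hypothetical quarterly Google-sites revenue, USD
distribution_tac     = 2.2e9    # hypothetical TAC paid to distribution partners, USD

tac_share = distribution_tac / google_sites_revenue
print(f"TAC share of sites revenue: {tac_share:.1%}")   # ~12.0%, the level cited above
```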

Disintermediation May Be the Bigger Issue

However, all of this only affects the search revenue Google actually generates and the margins it can drive off the back of that. Certainly, if TAC continues to rise in this way, that should squeeze margins, but the threat of disintermediation could undermine the revenue base on which those margins are generated in the first place. What do I mean by disintermediation here? The fact that many of what would once have been Google searches are now pre-empted by other apps and services before the user ever reaches Google. Here are just a few longer-term examples:

  • Apps: whereas users once used Google as a starting point to reach a variety of websites, they’re far more likely today to visit smartphone apps associated with those sites. To the extent that there’s any searching going on, it likely takes place within the narrower confines of those apps or perhaps an on-device search engine.
  • E-commerce: for online retail specifically, past studies have shown that some 55% of product searches now originate not on Google.com but on Amazon.com, again cutting Google and its search and ad revenues out of the picture (and in the process allowing Amazon to quietly build its own search advertising business).
  • Voice: people are increasingly using voice interfaces to search for information they once used a text search for, both on mobile devices and increasingly on smart voice speakers like Amazon’s Echo and Google’s own Home products. In many of these cases, even on Google’s platform, there’s currently no ad revenue opportunity associated with that.
  • Bots: Facebook and Microsoft have now both announced integration of AI-based virtual assistants into their messaging platforms, with Microsoft finally launching Cortana in Skype this week after trailing it at last year’s Build conference. These bots will increasingly pre-empt searches because they give users the information they need when they need it in proactive ways.
  • Contextual information: even if AI-powered bots aren’t serving up this information in a messaging context, there are a variety of other ways in which information previously provided reactively is now being provided to users proactively. Snapchat’s addition of Context Cards this week is the latest example of this, offering up restaurant reviews and ride sharing services in the context of Snaps with location tags.

Google clearly recognizes all of this, which is why it’s been one of the biggest proponents of progressive web apps and other approaches that try to reassert the pre-eminence of the web, though it hasn’t had much success with that approach against the continued growth of native apps. But it’s also clearly aware that it may as well play secondary roles where it can, which explains its recent reappearance as the back end of Apple’s Siri search functions in iOS and macOS, a deal that likely involved a bigger financial incentive and will in turn drive up traffic acquisition costs further. But such concessions are going to be increasingly necessary if Google is to maintain its search and ad revenue growth in the face of these multi-faceted threats.

Our Concerns About Smart Tech might reflect our Lack of Trust in Humanity

on October 11, 2017
Reading Time: 4 minutes

Back in January, toy maker Mattel announced Aristotle, a smart hub aimed at children. Aristotle was designed to grow with your kids, starting out by helping soothe a crying baby with lights and music and eventually helping them with homework once in school.

Last week, Mattel canceled the product, saying it did not “fully align with Mattel’s new technology strategy.” Despite the company’s statement, it seems that strong concerns around data privacy and child development led to the product’s cancellation.

While a product directly marketed at kids might raise more concerns, there are plenty of products hitting the smart home market that will be used by children and should be no less of a concern.

Focus on Kids is Growing in the Quest to Win Our Homes

Over the past few weeks, both Amazon and Google refined the kids offerings of Amazon Echo and Google Home by adding specific apps and features. Amazon released a series of apps that will be labelled as children’s apps, from Spongebob Challenge to Storytime. In a similar vein, Google announced some new kid-friendly features bringing story time and gaming together with Disney names like Lightning McQueen and Star Wars. Google is also working with Warner Brothers and Sports Illustrated Kids to add content regularly. The new kids vertical will soon be open to developers, and these features will arrive on Google Home later in October. Parents will also be able to set up family-linked account settings on Google Home so that different permissions can be set through the Family Link service. Last, a new feature for families called Broadcast allows you to push voice messages and reminders across all of your Google Home devices. Although I doubt Broadcast will make my kid pay more attention than when I shout “time for school” at the top of my voice! Google also improved Google Home’s ability to understand kids younger than 13.

All of these features are collecting data at some level or another, which might expose brands to risks, risks Mattel did not think were worth running. Doing things by the book, Amazon is asking parents to give permission, as required by the Children’s Online Privacy Protection Act, but only time will tell if that is enough. This is, after all, uncharted territory.

For technology shared in the home, things get complicated quite quickly. There are two different areas we should consider: content access and data privacy. While there is a lot of attention on the latter, I am not sure we have started to think about the former as much as we should. These devices are full-fledged computers disguised as remote controls, which might lead most users to underestimate the power they have. With computers, and to a lesser extent with phones, we set up restrictions on what our kids can do. With TVs, however, we mostly tend to rely on a mix of program guidance and common sense to regulate the type of content our kids are exposed to. These new smart devices are nowhere near self-regulation. Once technology is capable of recognizing voices, we might be able to grant certain permissions so that our children will not be played the wrong version of a song or read an R-rated definition of a word. Nothing is perfect, though. Consider your TV experience: you might be OK with the movie you allow your children to watch despite the PG-13 rating, but the commercials are often not appropriate. When it comes to these smart speakers, everything from contacts to search is wide open to users. Google started supporting two voices under its Voice Match technology so that when I access calendar appointments or contacts it makes sure they are mine and not my husband’s. As the number of supported voices grows, you can see how families could use the technology not just to prevent kids from calling people and messing with your calendar but also to restrict access to certain content.

Technology is not Evil but Humans can be

Protecting privacy, especially that of the vulnerable, is a very important topic. Yet, what I find even more fascinating are the concerns some expressed about the impact Aristotle could have had on children’s development. The worry was that kids could learn to turn to technology for comfort rather than to their parents or caretakers. But how can that happen if we, as parents and caretakers, continue to do our job?

Technology impacts behavior. Nobody could successfully dispute this statement.
We already know Gen Z is growing up more slowly than previous generations. As Jean M. Twenge says in her book iGen, “today’s kids are growing up less rebellious, more tolerant, less happy and completely unprepared for adulthood”. While technology is the enabler, it is humans who empower it. This addiction to screens starts very early, out of convenience. I often tell the story that our daughter’s first words were dada, “ipone” and mama. Yes, ladies and gentlemen, I came after the magical device Steve Jobs brought to market! How did that happen? Because we discovered it was the most effective tool to keep our wiggly baby still during nappy changes. Convenience drives a lot of what we do. Before phones, it was TV, of course. Parents discovered that it was much easier to put kids in front of a screen to be entertained than to actually engage with them. Yet, I am sure that even the busiest of parents would not just let a smart hub soothe their crying baby.

Why are we so concerned about the impact on child development? How is this new tech any different from the effect that pacifiers could have had? While we might not see a pacifier as tech, it is a device, a device that was invented to substitute a mother’s breast to soothe. Have mothers stopped breastfeeding or caring for their babies because of it? Certainly not!

Smart tech is helping prevent crib and hot-car deaths, but as far as I know there is no tech that can either supplement or substitute for common sense. It seems to me that the concerns people have are rooted more in a lack of trust in us humans and how we will use the technology than in a lack of trust in what the technology can deliver. Interestingly, what will help is not better tech skills but better social skills, greater empathy, higher emotional IQ. So, as we balance the impact of tech on child development on one side and a greater focus on STEM on the other, let’s not forget to empower our kids with the emotional and social skills that will help them be tech wizards with a heart.


My Month with Apple Watch Series 3

on October 10, 2017
Reading Time: 3 minutes

I’ve spent the last month with the Apple Watch Series 3, and during that time a few key observations have stood out. One of the big value propositions of the Series 3 is its cellular connectivity, and that is the part I was most interested in trying, to see how not having the Apple Watch tethered to my iPhone changed the overall experience.

Edge Computing Could Weaken the Cloud

on October 10, 2017
Reading Time: 4 minutes

Ask anyone on the business side of the tech industry about the most important development they’ve witnessed over the last decade or so and they’ll invariably say the cloud. After all, it’s the continuously connected, intelligently managed, and nearly unlimited computing capabilities of the cloud that have enabled everything from consumer services like Netflix, to business applications like Salesforce, to social media platforms like Facebook, to online commerce giants like Amazon, to radically transform our business and personal lives. Plus, more than just the centralized storage and computing capabilities for which it’s best known, cloud computing models have also led to radical changes in how software applications are designed, built, managed, monetized and delivered. In short, the cloud has changed nearly everything in tech.

In that light, suggesting that something as powerful and omnipresent as the cloud could start to weaken may border on the naïve. And yet, there are growing signs—perhaps some “fog” on the cloud horizon?—which suggest that’s exactly what’s starting to happen. To be clear, cloud computing, and all the advancements it’s driven in products, services and processes, isn’t going away, but I do believe we’re starting to see a shift in some areas away from the cloud and towards the concept of edge computing.

In edge computing, certain tasks are done closer to the edge or end of the network on client devices, gateways, connected sensors, and other IoT (Internet of Things) gadgets, rather than on the large servers and other infrastructure elements that make up the cloud. From autonomous cars, to connected machines, to new devices like the Intel Movidius VPU (visual processing unit)-powered Google Clips smart camera, we’re seeing an enormous range of new edge computing clients start to hit the market.

While many of these devices are very different in terms of their capabilities, function and purpose, there are several characteristics that unite them. First, most of these devices are designed to take in, analyze, and react to real-time data from the environment around them. Leveraging a range of connected sensors, these edge devices ingest everything from location and temperature data to sound and images (and much more), and then compute an appropriate response, whether that be to slow a car down, provide a preventative maintenance warning, or take a picture when everyone in view is smiling.

The second shared characteristic involves the manner in which this real-time data is analyzed. While many edge computing devices have traditional computing components, such as CPUs or ARM-based microcontrollers, they also have new and different types of processing components—from GPUs, to FPGAs (field programmable gate arrays), to DSPs (digital signal processors), to neural net accelerators, and beyond. In addition, many of these applications use machine learning or artificial intelligence algorithms to analyze the results. It turns out that this hybrid combination of traditional and “newer” types of computing is the most efficient mechanism for performing the new kinds of calculations these applications require.

The third unifying characteristic of edge computing devices gets to the heart of why these kinds of applications are being built independent from, or migrated (either partially or completely) from, the cloud. They all require the kind of real-time performance, limited latency, and/or security and privacy guarantees that best come from on-device computing. Even with the promise of tremendous increases in broadband network speed and reductions in latency that 5G should bring, it’s never going to replace the kind of immediate response that an autonomous car needs when it “sees” and has to respond to an obstacle in front of it. Similarly, if we ever want our interactions with personal assistant-powered devices (i.e., those using Alexa, Google Assistant, etc.) to move beyond one-question requests and into naturally flowing, multi-part conversations, some amount of intelligence and capability is going to have to be built into edge devices.

Beyond some of the technical requirements driving growth in edge computing, there are also some larger trends at work. With the tremendously fast growth of the cloud, the pendulum of computing provenance had swung towards the side of centralized resources, much like the early era of mainframe-driven computing. With edge computing, we’re starting to see a new evolution of the client-server era that appeared after mainframes. As with that transition, the move to more distributed computing models doesn’t imply the end of centralized computing elements, but rather a broadening of possible applications. The truth is, edge computing is really about driving a hybrid computing model that combines aspects of the cloud with client-side computing to enable new kinds of applications that either aren’t well-suited or are not possible with a cloud-only approach.
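As a minimal sketch of that hybrid pattern (every function name and threshold here is hypothetical, not any vendor’s actual API):

```python
# Hybrid edge/cloud pattern: time-critical decisions stay on the device;
# the cloud is consulted only when the latency budget allows.
# All function names and thresholds here are hypothetical illustrations.

def classify_on_device(frame):
    """Stand-in for a local model running on a GPU/DSP/neural accelerator."""
    return "obstacle", 0.62          # (label, confidence)

def classify_in_cloud(frame):
    """Stand-in for a slower but more accurate cloud-hosted model."""
    return "plastic bag"

def handle(frame, latency_budget_ms):
    label, confidence = classify_on_device(frame)  # always runs locally first
    if confidence >= 0.9 or latency_budget_ms < 50:
        return label                               # real-time path: no round trip
    return classify_in_cloud(frame)                # otherwise refine in the cloud

print(handle(frame=None, latency_budget_ms=30))    # -> "obstacle"
```

The design point is that the local answer is always available immediately, and the cloud only ever improves on it when the application can afford the wait.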

Exactly what some of these new edge applications turn out to be remains to be seen, but it’s clear that we’re at the dawn of an exciting new age for computing and tech in general. Importantly, it’s an era that’s going to drive the growth of new types of products and services, as well as shift the nexus of power among tech industry leaders. For those companies that can adapt to the new realities that edge computing models will start to drive over the next several years, it will be exciting times. But for those that can’t—even if they seem nearly invincible today—the potential for becoming a footnote in history could end up being surprisingly real.

Deception on the Internet is Nothing New and it’s Getting Worse

on October 9, 2017
Reading Time: 3 minutes

We’re just digesting and analyzing the impact on the nation of being exposed to untruthful news stories. (Note: I’m following Dan Gillmor’s advice and not using “fake news” because that term has been hijacked by Donald Trump to refer to news he disagrees with.) And while this may be the most severe example of being misled by the Internet, it’s certainly not the only one. In fact, the Internet is filled with sites whose sole purpose is to trick and deceive us under the guise of offering useful information.

One pervasive example is searching for ratings of various products. There’s a vast number of sites that purport to provide objective analyses and ratings of products. The sites carry names such as www.top10antivirussoftware.com but are often created to tout one product over another, or just to provide a list of products with links to buy, in exchange for referral fees.

A search for “Best iPhone cables” finds one top choice (a paid-for position), “BestReviews.Guide,” a site that reviews numerous products. There’s no explanation of how it rates products, but in its disclaimer it writes, “BestReviews.Guide provides information for general information purposes and does not recommend particular products or services.”

But pseudo-reviews are not confined to mysterious companies. Business Insider offers reviews called “Insider Picks.” Many of these reviews are filled with words but do little to explain the basis for their ratings.

What’s motivating all of these review sites? The opportunity to monetize them by receiving kickbacks or referral fees when someone clicks through to buy, primarily from Amazon. You can examine the link that takes you to Amazon to see the code added to the normal link. Commissions range up to 10%, with an average of about 5%.
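As an illustration, the referral code usually shows up as a query parameter on the Amazon link; the URL and tag value in this sketch are made up:

```python
# Pull the affiliate tag out of an Amazon product link.
# The URL and the tag value are made-up examples, not a real referral code.
from urllib.parse import urlparse, parse_qs

url = "https://www.amazon.com/dp/B01EXAMPLE/?tag=somereviewsite-20&linkCode=xyz"
params = parse_qs(urlparse(url).query)

# Amazon Associates referral IDs travel in the "tag" query parameter;
# if it's present, the click pays the referring site a commission.
print(params.get("tag"))   # ['somereviewsite-20']
```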
And here’s another example of deception and trickery on the Web. I experienced a problem with QuickBooks on my Mac and looked for a phone number to get help. There was no phone number in the app, so I searched online. Up came an 800 number, via Google’s search, and a website titled “QuickBooks 800 Help Line”. I called it, got a seemingly helpful technician, and he readily identified the cause of my problem. He said he needed to install the QuickBooks utility software on my computer to remove some bad files. As I started to allow this, I hesitated and asked if there was any charge. He said there is a $300 charge for the utility.

That’s when I checked with my daughter on a second phone line who, coincidentally, was an Intuit manager. She confirmed, after a quick call to the head of customer support, that I was not speaking to Intuit but to an imposter. I quickly hung up and later discussed this with an executive at Intuit. Their policy, like that of many companies, had been to hide their customer service number because they were not equipped to handle the volume of calls. She said they never anticipated what I experienced: that, as a result, an imposter’s number would pop up at the top of a search.

I was reminded of this the other day when I was doing a story on Google’s customer support, which is a major consideration when buying their new phones. Searching for a support number brought up many sites purporting to be Google support, but no Google number. One prominent site is “Gmailtech.info” with the headline “Unlimited Gmail support” and a phone number, and this paragraph:

“Phone Support-one can reach the Google Technical Support service by dialing their customer service number which is completely free of cost and our customer care is available 24/7*35 days. You just need to call on the Google Support Phone Number, and you will get all the solutions to your problems.”

Of course, it takes you to a GTech number. And notice the poor grammar.

So, these misleading support sites are still rampant, taking advantage of those looking for help and information.

This is probably not a revelation to those of us in the tech community who once laughed about the Nigerian scams, but as with deceptive news stories, the players are getting more sophisticated at deception.

Context For Netflix’s Price Increase

on October 9, 2017
Reading Time: 3 minutes

The price increases Netflix announced this week come around a year after it finished implementing its last set of price increases, a process it spread over several years. Those last price increases occurred at a time when Netflix’s margins were already expanding despite its growing content spending, but this time around, the increase follows pressure on margins from ongoing growth in content spending.

Why is Google Making Phones, Anyway?

on October 6, 2017
Reading Time: 3 minutes

Google-branded phones own about 1% share of the smartphone market, have limited carrier distribution, and won’t make real money for several years. Amazon and Microsoft both tried, and failed, on phones. Even Google’s initial phone foray, with Motorola, was a bomb. As my Techpinions colleague Jan Dawson pointed out in his excellent piece yesterday, Google doesn’t even seem all that serious about selling phones.

So, why is Google making smartphones? In fact, it just doubled down, acquiring a big chunk of HTC’s smartphone engineering team for $1.1 billion (a bargain) and introducing two new Pixel models.

The historical argument was that, for Google, the more screens on which to view Google ads and do Google searches, the better. But Google already gets plenty of traffic from the iPhone, as the default search engine on iOS, as well as from Android devices. So selling a few million Pixels won’t make a material difference to Google’s mobile search business.

My view is that Google’s commitment to the phone business is part of a broader strategy, with three central elements. First, there are those who believe Pixel is an important component of Google’s commitment to hardware, along with the Home speakers, Clips camera, Daydream VR headset, Pixelbook laptop, and even the AI-powered Pixel earbuds. But I think Pixel phones are also necessary for some of the things Google wants to accomplish in the wireless and connectivity arena. No, I don’t think they’re going to buy Sprint or T-Mobile. But I think they do want to play a broader role in connectivity, which includes multiple forms of wireless. Google continues to invest in Project Fi, its hybrid Wi-Fi/cellular mobile offering that works best on Google-centric phones such as the Nexus and Pixel.

Google is also playing an increasingly pivotal role in the evolution to 5G. The company has a good chance of being selected by the FCC as one of the Spectrum Access System administrators for the shared 3.5 GHz (CBRS) spectrum. Google has also been part of an industry push to make spectrum in the 3.7-4.2 GHz band available for 5G. Alongside this effort on the mid-band front, Google is part of a group of 30 key industry players pushing the FCC to set aside more spectrum for unlicensed (AKA Wi-Fi) use in the 6 GHz band, which sits in proximity to the 5 GHz band used for Wi-Fi. Also on the 5G front, the Pixel 2 incorporates the latest LTE advances—3-way carrier aggregation, 4x4 MIMO, and 256 QAM—which will allow Google to learn more about what some advances on the road to 5G might mean.
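Those three features compound in the usual back-of-envelope way; this sketch assumes the standard 150 Mbps baseline for a 20 MHz LTE carrier with 2x2 MIMO and 64 QAM:

```python
# Rough peak-throughput math for a "Gigabit LTE" class device.
# Standard baseline assumption: one 20 MHz LTE carrier with 2x2 MIMO
# and 64 QAM tops out around 150 Mbps.
peak_mbps = 150.0
peak_mbps *= 8 / 6   # 256 QAM packs 8 bits/symbol vs. 64 QAM's 6
peak_mbps *= 2       # 4x4 MIMO doubles the spatial streams vs. 2x2
peak_mbps *= 3       # 3-way carrier aggregation bonds three such carriers

print(f"Theoretical peak: {peak_mbps:.0f} Mbps")   # ~1200 Mbps
```

Real-world throughput will sit well below that theoretical peak, but it shows why this combination is marketed as “Gigabit” class.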

Google is investing in other forms of connectivity as well, from Google Fiber in the U.S. to Project Loon, an effort to bring the Internet to unserved areas. A test this summer kept one of its Loon balloons over Peruvian airspace for fourteen weeks. Google also needs to keep its eye on Facebook, which itself is involved in some of these groups but is also leading the Telecom Infrastructure Project and doing some other super-secret stuff.

Second, I think that being more directly involved in phones is important to Google’s AI push. Mobile devices are going to be a big part of how users experience AI, in ways different than they might from PCs or other types of hardware. And even though Google search is the default on iOS and Android, Google Assistant faces much greater competition from Apple (Siri) and Samsung (Bixby), plus of course Microsoft and Amazon. So Google needs its own hardware, from PCs to home speakers to phones, to work on some of its important AI efforts, such as Assistant. One of the new/unique aspects of the Pixel 2 is the Active Edge, which allows the user to squeeze the side of the phone to launch Assistant.

Third, Pixel represents a relatively cheap way to do some public R&D around several initiatives. If you think about it, Pixel is really a big public beta. Google got HTC’s phone team for $1.1 billion – a relative pittance in Google/Silicon Valley terms – and it can now use Google-powered phones to test everything from Project Fi to AR (Google’s ARCore framework is fully active on the Pixel 2) to some nifty tricks with Assistant, in a way it can’t on other Android-powered devices. And the risk is comparatively low: if something messes up, Google upsets a few million Pixel/Nexus customers, not a few billion iPhone and Galaxy customers.

With a relatively modest investment, at least in Big Internet Player terms, Pixel phones allow Google to be masters of their own Android domain, providing a walled-off mobile testbed for new capabilities related to connectivity, AI, AR, and other concepts.

News You might have missed: October 6th, 2017

on October 6, 2017
Reading Time: 4 minutes

This week Sonos announced the Sonos One (One) with integrated Alexa. The One does not look different from the Play:1 but adds six far-field microphones to pick up your voice and lights to show when it is listening. Amazon Alexa will work out of the box, and you will be able to control all of your Sonos speakers in the home with your voice. The speaker will ship on October 24 and will cost $199. Sonos promised that other assistants, including Google Assistant, will come in 2018. At launch, the One will support about 80 streaming services, and you can use Alexa voice commands to play tunes from Amazon Music, iHeartRadio, Pandora, and a few other music services right now. Support for Spotify will be added soon, with Apple Music coming at a later date.

The Battle for the Home

on October 5, 2017
Reading Time: 3 minutes

While smart speakers – the likes of Amazon Echo, Google Home, and Apple’s HomePod – are the first main entry points, the real battleground is the platform, and the services consumers will consume as a result of integrating more technology into their homes.

Google is Clearly Serious About Hardware, But Not About Selling Phones

on October 5, 2017
Reading Time: 5 minutes

Google’s announcement of its second generation of first-party hardware made clear something that’s been increasingly apparent over the past year: the company is very serious about this business. However, it also reinforced what continues to be a strange paradox: Google doesn’t actually seem to be very serious about selling at least some of that hardware, namely phones.

Another Solid Set of Hardware

Google’s first set of hardware was announced a year ago this week and was already an impressive start. The Pixel phones were a decent pair of devices with some clever features, good cameras, and exclusive access – at least temporarily – to the Google Assistant, which was then just launching. The Google Home was a prettier and in some ways more powerful answer to the Amazon Echo, at a lower price. Daydream VR was a prettier and more usable answer to the Samsung Gear VR. And Google WiFi leaned on Google’s earlier partnerships with a couple of traditional WiFi router vendors to create the first mesh WiFi system from a big brand. All told, the hardware was solid, attractive, and competitive in its various markets.

This year’s hardware builds nicely on last year’s, with refreshed Pixel phones, two new Google Home devices on either side of the original in the lineup, and new entrants in several other categories too. The new Pixel phones look like good upgrades, offering some interesting new features while sitting out some of this year’s big trends, either partly or entirely. The new Google Home devices will help Google compete more effectively with the low end of Amazon’s lineup while also targeting a part of the market Apple had looked likely to have to itself. The Pixelbook laptop is as baffling a product as the first Chromebook Pixel was – a premium device in a category notable mostly for its low prices. Google is also entering two new categories: wireless earbuds, where its Pixel Buds serve as BeatsX competitors at an AirPods price point with a unique AI-based twist, and clip-on cameras, where Google Clips seeks to create a new category – an announcement that took a somewhat unwarranted chunk out of GoPro’s share price yesterday.

Again, all told, this looks like a solid lineup. Google’s recent HTC deal, the fact that it has updated and broadened the hardware line, and its ongoing public statements all confirm that it’s really serious about making its own hardware, offering a coherent set of products, and pursuing the same integration and optimization benefits other integrated vendors have achieved before it.

A Unique and Somewhat Puzzling Approach to Hardware

Google continues to approach the hardware categories it enters in a unique way, however. It downplays the role of hardware itself while emphasizing software, clearly playing to its strengths as first and foremost a software company, and one which until now has had little control over the details of its hardware design, in phones in particular. It’s the argument we would expect Google to make, and it’s clearly capable of achieving in software what others pursue through hardware, which is impressive.

Some of Google’s offerings also feel experimental, and the AI features in both the PixelBuds and Clips seem like good examples of that. Real-time translation and the idea of a camera that automatically takes the shots you want are both impressive demonstrations of Google’s AI chops, but neither is a product people are clamoring for, nor are the features ones people are actually likely to use regularly. How often do you need real-time language translation? And how likely is it that both you and your conversation partner will have the necessary hardware and software to make it happen? Google is honest about the fact that it doesn’t expect Google Clips to be a big seller, and that’s a good thing – creating new hardware categories is tough for anyone, but especially for companies without a significant presence or history in the space.

Given that ChromeOS has actually done pretty well in the segments where it feels relevant, while Android Wear continues to languish, some of Google’s choices about where to pursue a first-party hardware strategy seem a little puzzling. Why not show the Android OEMs how to make a decent smartwatch, rather than ceding much of the market to two platforms – Apple’s and Samsung’s – it doesn’t control? Why pursue premium Chromebooks instead? Showcasing both Android apps and Google Assistant on a Chromebook would be just as possible at $500 as at $1,000, if that’s the intention here.

Marketing Continues to be the Biggest Challenge

Above all, though, it feels like marketing continues to be Google’s biggest challenge when it comes to smartphones in particular. The products are there, but Google’s approach to the other four Ps of marketing – price, promotion, people, and placement – continues to be lacking:

  • Price – last year Google matched iPhone pricing exactly, but this year it has a $200 differential between two phones it explicitly said have no feature differences beyond size ($649 for the Pixel 2 versus $849 for the Pixel 2 XL), in a market where $100 size differentials have become the norm. That makes the Pixel 2 XL more expensive than Apple’s $799 base iPhone 8 Plus, which seems an odd decision.
  • Promotion – as with pricing, much of Google’s marketing last year seemed aimed squarely at the iPhone, which is not surprising from a company that doesn’t want to be seen to undermine its OEM partners. And yet all the evidence suggests people tend to be fairly loyal to the two big ecosystems, and there’s relatively little switching from iOS to Android. Everyone already knows that Pixels are for Google-centric people, and Google needs to embrace that in its marketing rather than making sarcastic digs at the iPhone. Its advertising should be about what its phones do uniquely well for people who love Google and its services, because that’s the niche it’s really going after.
  • People and Placement – ever since the launch of the first Android phone, Google has underestimated the role of people in selling and supporting phones, and that still doesn’t seem to have changed. Its own channel is exclusively online, and it hasn’t invested in the kind of third-party retail presence necessary to market a phone effectively. From a US perspective in particular, though, Google’s biggest stumble continues to be its exclusive carrier relationship with Verizon. Carriers are by far the biggest channel for smartphone sales in the US, and limiting itself to one carrier – and generally not the one seen as the most forward-looking – has been a huge mistake, one Google has repeated this year. (Indeed, I was told today that the Google-Verizon exclusive is a three-year deal, and if that’s true it means Google can’t extricate itself from this mess anytime soon.)

All of this adds up to a bizarre picture: a Google that is enormously serious about building the best possible hardware but doesn’t seem very serious at all about actually selling it. Given the scale of both its organic investment in the Pixel line and now its billion-dollar-plus HTC deal, Google is pouring massive resources into this project, but it will never see a reasonable return unless these devices sell. It’s unclear to me whether this is a deliberate strategy on Google’s part to limit the negative impact on its hardware partners, or the result of organizational schizophrenia, but it simply doesn’t make sense. Either Google is serious about this market or it isn’t, and if it is, it needs to bite the bullet and compete with its OEM partners more directly. If that pushes them to do better, that’s good for Google too; if it doesn’t, Google gets a greater share of the premium Android smartphone market and gets to put its own services front and center. At this point, there’s no viable alternative to Android for independent phone makers, so there’s little if any risk to the strategy.