Samsung Embraces Intel Project Athena Vision

In an era of smartphones with larger screens and more and more capabilities, some people have started to question the continued usefulness and viability of PCs. After all, the thinking goes, if I can do more on my smartphone, why do I need to carry a notebook around as well?

Theoretically, no company should know this better than Samsung. It’s pushing smartphone capabilities further than anyone else with devices, like the Galaxy Fold, that bring true computer-like capabilities to your pocket.

And yet, rather than backing away from the PC market, the company is doubling down, having introduced several new lines of laptops since the beginning of the year, including the Galaxy Book S, a Qualcomm 8cx-powered device that runs Microsoft’s Windows 10 Home. With today’s launch of the Galaxy Book Flex and Galaxy Book Ion, two Intel 10th Gen Core-powered devices, as well as the announcement of the forthcoming Intel Lakefield-based version of the Galaxy Book S, Samsung is extending its mobile PC line even further.

Both the Galaxy Book Flex and Ion—each of which is available in 13.3” and 15.6” screen sizes—are part of Intel’s new Project Athena program. Launched with great fanfare at this year’s CES show, Project Athena is designed to reinvigorate the PC market, with the ultimate goal of creating more compelling, more focused computing experiences, ideally enabled through new types of technologies and even new form factors. In the near term, however, the more practical objectives are to provide better real-world battery life, better connectivity, slim designs, and more immediate responsiveness—in other words, to make PCs a bit more smartphone-like.

On top of those usability enhancements, another critical goal with Project Athena—and likely why Samsung views it as an important extension of its product line—is to offer the kind of robust performance that only a well-equipped PC can provide. The truth is, no matter how compelling smartphone performance becomes, there are certain tasks most people do that require the kind of performance and interaction that only a PC can deliver.

Whether it’s working with large sets of numbers, laying out large documents, editing videos or composing music, years of multi-device experience have shown us that PCs still play an important role—particularly for people who push their device capabilities to the limit and expect high-quality performance while working (or digitally playing) wherever they choose to. Throw in the desire/need to connect to a wide variety of powerful peripherals, and it’s clear that PCs have a healthy outlook, even in an era of powerful, foldable smartphones.

In that light, both the Galaxy Book Ion, which starts at under 1 kg in weight, and the Galaxy Book Flex, which is based on a 2-in-1 design with a 360° hinge and integrated S-Pen, provide the kinds of key features and premium designs that are likely to appeal to these types of “mobile go-getters” (as Intel prefers to call them). Given Samsung’s heritage, it’s no surprise that the screen capabilities, in particular, look to be distinguishing characteristics. All four variants feature a full HD (1,920 x 1,080) QLED panel that offers up to 600 nits of brightness, enabling an Outdoor mode for easy viewing outside. Both 15.6” models also offer the option of discrete Nvidia GeForce MX250 graphics with 2GB of GDDR5 memory. The Galaxy Book Ion 15 can also expand beyond its 16 GB of DRAM and 1 TB of SSD storage via an empty SO-DIMM memory slot and a slot for additional SSD storage. All four are expected to be available in the US in 2020. Details on the Intel version of the now confusingly named Galaxy Book S are still to come.

Despite its growing PC ambitions, Samsung remains a niche player in the global PC market and these devices aren’t likely to dramatically change that. However, they are an important step forward for the company, and their very existence points to a bigger picture of multi-device and even ambient computing that Samsung seems to be embracing. In fact, given the growing relationship between Samsung and Microsoft, as well as the long-term existing partnership that Samsung shares with Google, the Korean giant is smartly moving itself into a unique and potentially very powerful position at the center of a diverse and growing universe of computing devices and platforms. Over time, Samsung could become the connecting thread that links together diverse computing worlds into a single unified experience and could prove to be an even stronger competitor to Apple.

Working with component providers like Intel and Qualcomm also plays right into that strategy and vision, because it provides Samsung access to some of the key components it needs to power that experience. Conversely, Samsung is a great partner for Intel to line up for Project Athena because of its capabilities in critical components (e.g., foldable displays) that could enable even more compelling computing devices.

Ultimately, all these companies need to work on making the experience of using multiple devices—which is now, and will continue to be, the day-to-day reality for the vast majority of consumers and business workers—much easier. Thanks to its uniquely broad product portfolio and range of platform and component partnerships, Samsung has the opportunity to make a big impact here. Let’s hope this is the start of more to come.

Google’s Offer for Fitbit, Spotify Earnings, AirPods Pro

Google Makes Offer for Fitbit
This is one of those synergy-type deals that should have been easy to see coming. We have long written here at Tech.pinions about the challenges of being a one-trick pony business, and how competitive threats are more difficult to ward off when you have only one main revenue stream.

Fitbit has largely been a one-trick pony, as hardware sales drive most of the company’s revenue, and its services business never quite took off. It has felt for some time that Fitbit was a company in need of being acquired, one whose expertise, or in this case the data it has amassed, could benefit someone wanting to play catch-up in wearables.

Google has always made the most sense as the acquirer. Fitbit’s teams have expertise not just in making wearable hardware and fitness and health software and services; they also have quite a database of user data and fitness-behavior trends. While many commentators panicked, thinking Fitbit owners needed to delete their data because Google can’t be trusted with it, the reality is that Google is likely interested in using this data for internal learning as it continues to evolve Android Wear and its broader wearable strategy.

Google’s broad plan with Android Wear still remains a mystery to me. I have had multiple discussions with Android Wear partners and customers, and it seems they are similarly unsure of the total vision. Buying Fitbit doesn’t necessarily solve this from a hardware standpoint, but having more data on Fitbit customers and their behaviors should certainly help Google refine its strategy further.

It’s no secret Android Wear has not sold anywhere near the numbers of Apple Watch or Fitbit, which means this is one area where Google has a data deficit, and it seems intent on solving that. We will see if this deal happens, but it makes sense from many angles.

Spotify Earnings
Spotify is also a one-trick pony, with only one business model at its disposal right now. It’s the main reason I’m not optimistic about Spotify and remain convinced it is a future acquisition target for someone like Amazon, or maybe Google.

Spotify’s business still relies heavily on freemium, which is why a good portion of its continued investor commentary focuses on its strategy to drive premium conversions. This quarter the company made an interesting point that’s worth calling out.

We continue to see exponential growth in podcast hours streamed (up approximately 39% Q/Q) and early indications that podcast engagement is driving a virtuous cycle of increased overall engagement and significantly increased conversion of free to paid users. The correlations in our data sets are clearly apparent. We are working to prove causality. Overall, the business is performing strongly.

A few points to make here. The first is that Spotify is recognizing a higher-value customer, one for whom Spotify is a source of more than just background noise (i.e., music). This is an important point should they prove causality, and I think they will, because it shows that additional content can move customers from minnows to fish and potentially whales. For example, do they start offering audiobooks? Would they ever get into syndicating video? The point here is what they can offer that moves customers beyond free, and I think they realize that for many, music is not the answer.

I’ve mentioned before that I’m very active in my daughter’s high school and spend a lot of time talking with teenagers. I’m continually fascinated by the split between teens who pay for Apple Music and those who use Spotify but don’t pay. I can’t find many teens I talk to who pay for Spotify, and obviously, if you have Apple Music, you are paying for it. I’ve also seen many surveys suggesting a similar pattern: the average Spotify customer is not paying, and several studies suggest these customers mostly came from Pandora, where they also did not pay.

Spotify has to be concerned, as its investors are, that it is growing its customer base but not converting those users to premium at significant rates. The point made about podcasting seems designed to show Spotify’s potential to use higher-value content to drive subscriptions, and how adding such content wisely could yield returns in premium conversions.

AirPods Pro
I’m looking forward to getting my hands on the new AirPods Pro, as I’m sure many readers are. I’ll write more on my experience once I have some time with them, but I want to reiterate a point I’ve made before.


Where Apple develops a lot of custom silicon for iPad, iPhone, and Mac, what it designs for wearables is a system-in-package: essentially a custom motherboard tying together a custom collection of chips, sensors, and other components. What we find in Apple’s wearables like Apple Watch and AirPods may very well be the most integrated component solution Apple creates, which is saying something.

The point is this: no company is better at miniaturizing computers than Apple. Not even close. I don’t think many realize the advantage Apple has here with wearable computing. Apple is designing extremely small, yet extremely complex computers, and to think anyone will come close to what Apple will be able to fit into a pair of glasses, whenever those are released, seems impossible.

The future of wearables is the area where all of Apple’s custom silicon and design efforts may culminate in their full potential.

Google and Partners Push Chromebooks Beyond the Education Market

There have been several Chromebook-related announcements in recent weeks, signaling that Google and its partners see new opportunities for the platform outside of its traditional stronghold in the U.S. education market. Updated enterprise-focused features, commercial and prosumer-focused hardware, and competition within the silicon space all point to a strong push for Chromebooks in 2020. The question: Is the market ready to buy?

Chromebooks for Enterprise
In late August, Google and Dell Technologies launched updates to their respective Chrome-based offerings. Google announced the Chrome Enterprise Upgrade, which lights up a wide range of capabilities required inside commercial organizations. New features include management capabilities around user policies, network management, and device reporting. The company also did a major update to the Google Admin Console, increasing the speed and flexibility of the Web interface where IT goes to manage Chromebooks. These upgrades build upon the $50-per-Chromebook subscription model Google launched back in 2017 and work with existing unified endpoint management products such as VMware’s Workspace ONE, Citrix’s Endpoint Management, and others.

Dell’s concurrent announcement: the launch of two new Latitude Chromebooks. The Latitude 5400 Chromebook Enterprise is a $699 Intel-based notebook with a 14-inch display, and the Latitude 5300 2-in-1 Chromebook Enterprise is an Intel-based 13-inch convertible starting at $819. Dell’s seriousness about this segment push was evident in the fact that the company announced it would launch the new products in 50 countries. A big part of its strategy: integrating Chrome OS into its unified workspace story.

HP Adds Chromebooks to DaaS
On October 10th, HP announced it was also launching a new lineup of Chrome Enterprise products, noting that its research showed “four out of five businesses are already exploring cloud-native clients like Chromebooks.” HP didn’t announce pricing for the new products, which will ship in late October and November. They include the Chromebook Enterprise X360, a 14-inch, Intel-based convertible, the Chromebox Enterprise G2, an Intel-based desktop for frontline workers, and the Chromebook Enterprise 14A, a 14-inch traditional notebook featuring AMD processors.

Equally notable: When HP launches its new Chromebooks, the company said they would also become available as part of its Device as a Service offering. I’ve written extensively about DaaS, and it is a testament to the slow but steady momentum of that market that HP has opted to expand its offering to include these new Chrome products. The “As a Service” model lets companies contract with a provider—such as HP—to offload device deployment, management, and lifecycle services, freeing up internal IT to focus on bigger-picture projects. (It’s worth noting that Dell is also now offering Chromebooks as part of its PC as a Service offering.)

Google Launches PixelBook Go
Last week at its big Pixel launch event, Google itself announced the PixelBook Go, a standard notebook product that starts at $649 with an Intel Celeron processor but goes all the way up to $1,399 when equipped with a Core i7 processor, 4K display, 16GB of RAM, and 256GB of storage. Google adds an extra layer of security to the device using its Titan C security chip, which validates the OS before bootup, a feature clearly aimed at commercial buyers. The Go joins Google’s existing convertible product, the PixelBook, which it launched in 2017 and sells for $999.

The Go announcement is interesting, as the range of configurations means it can hit a wider range of price points than the original PixelBook. However, it’s a rather staid form factor that seems to indicate that Google isn’t necessarily interested in being at the forefront of Chromebook design. Whether commercial companies will take a leap and buy from Google or stick with their existing providers such as Dell and HP remains to be seen.

Market Readiness
At IDC, my colleague Linn Huang has closely monitored the Chrome market, tracking its strong growth in education and its early forays into the adjacent commercial space. Earlier this year, he ran a survey asking IT decision-makers about their current and future appetite for supporting Chrome inside their organization. Among the U.S. enterprise buyers we surveyed, Chrome makes up about 4% of their total PC installed base today. That same group expects it to grow to 10% of its installed base in the next two years. Among SMB buyers, the current installed base is about 2% of the total, growing to about 9% in two years. So, there is clearly interest and desire to experiment with Chrome among IT buyers.

This interest reflects a larger trend: IT reacting to the changing needs of the workforce. The one-size-fits-all method of IT support is no longer viable in a tight labor market driven by digital natives that enter the workforce expecting more hardware choices and the ability to work wherever and whenever they want. To service these workers, we expect more IT organizations to support a wider diversity of devices and platforms going forward. Chrome is just one of the beneficiaries of this trend.

Supply-Side Rumblings
Finally, we’ve recently begun to hear that several ARM-focused silicon providers are looking closely at the Chromebook market. Those who have been following this space a while know that the very first Chromebooks shipped with ARM-based processors. Intel moved quickly to embrace the category, and since then it’s largely owned the entire market. In the last two years, we’ve seen a few vendors introduce AMD-based Chromebooks (including the HP mentioned above).

I’ve long felt that the cloud-based nature of Chromebooks makes them perfect candidates for an LTE modem. Unfortunately, until recently, most Chromebook vendors—with a few notable exceptions—focused primarily on hitting a low price point, which made the inclusion of a cost-adding modem a tough proposition. I’d very much like to see some next-generation Chromebooks that offer such a connection, regardless of whether they include an x86 processor or an ARM-based chip.

With the wide array of new product announcements this year, vendor and silicon competition heating up for next year, new procurement options such as DaaS clearing the way, and increasing pull from end users pushing IT forward, I expect Chromebooks to gain more traction in the commercial market in the coming years.

Facebook’s Good Idea, Amazon Earnings, Intel Earnings

Facebook’s Good Idea
I watched most of Mark Zuckerberg’s speech at Georgetown on free speech and how Facebook plans to deal with their role in the world. While most of the commentary was pessimistic, there were a few things he discussed that I think merit more thought.

The core of Facebook’s issue is its attempt to operate, and monetize, massive user scale as a neutral platform. A fundamental challenge is that Facebook is essentially a general-purpose neutral platform, and the Facebook we use in America is essentially the same Facebook used in other parts of the world. As a global platform, Facebook may be facing issues never seen before. We can argue that Facebook may need to be broken up, or perhaps regionalized further, with its operations in each country run in a way that works more closely with that country. That being said, I want to focus on one specific idea Zuckerberg touched on that I think is a good start.

One idea Zuckerberg brought forth, related to ad placement, was to make sure no one could create ads, especially political ones or ones that spread information, without verifying their identity. Essentially, Facebook wants to make it so that accounts created or used to place ads must be verified as belonging to an actual human. The plan is to require a government ID before an account can place ads. The goal here is accountability, and ultimately this is the critical idea put forth by Zuckerberg.

If the spread of misinformation, damaging speech, and, in general, the kind of content that is worse for society is to be managed, it starts with identity. While this alone may not completely stop the issue, at least there is a way to hold people accountable for their actions. This is certainly a step forward, and one I hope Facebook follows through on, exploring more ways to hold its users accountable for their actions. Accountability is a step in the right direction to discourage malicious intent.

Amazon Earnings
Many who study Amazon know the company can pull levers within its business at any time to turn a profit or not. Commentators seem to forget this, making a much bigger deal of lower-than-expected earnings. Amazon’s stock was down, irrationally, because the company took a margin hit due to increased investments in one-day shipping. Amazon let investors get too comfortable with its quarterly profit trend, so the reaction was mixed when it decided to turn less profit and invest in some core areas of the business.

Overall, Amazon’s business remains exceptionally strong. Amazon remains well-positioned to keep gaining advertising share, an area that contributed to its “other” business revenues, which grew 44%. Online stores grew 21%. AWS grew 35%. The only real negative was guidance, which management explained was due to continued investment in one-day shipping, an investment that should benefit Amazon’s commerce business once established.

On a quarter to quarter basis, the debate with Amazon will always be growth vs. investment. But Amazon’s upside remains too high to ignore.

Intel Earnings
Along with Amazon, Intel is a tech bellwether company. I don’t normally talk about Intel’s earnings, but there are some things brewing that could be positive for Intel, and they are worth mentioning for that reason.

Intel has just come off trying times. Its shift from 14nm to 10nm was a struggle, to put it mildly. That struggle impacted the PC ecosystem and, to a degree, the data center. During this challenging time, both Intel and those who depend on Intel technology had to prioritize their product strategies. For Intel customers, that meant prioritizing the data center, due to demand, and focusing on enterprise PCs. Now it appears the end of Intel’s product constraints is in sight, and as the company ramps 10nm platforms, we will likely see some important new steps forward in both the data center and PCs for enterprises and consumers.

Ahead for Intel are the bigger opportunities in smart edge computing devices and 5G, as well as its GP-GPU strategy, which, if successful, provides quite an upside for Intel to take share from Nvidia; that will be the key storyline to watch.

Intel still has some execution challenges ahead, but as a few of its product kinks get worked out, the timing for growth may intersect nicely with some industry trends in the 2021 timeframe.

Apple and the State of AR Glasses in 2020

Ever since financial analyst Ming-Chi Kuo wrote recently that he believed Apple would bring its first set of AR glasses to market in 2020, a lot of people in the Mac community have been wondering whether this rumor is true and, if so, what the glasses would look like.

Ming-Chi Kuo’s prediction success rate is pretty good, so when he says this could be in the hopper, I take notice and dig a bit deeper to see if there is any truth to the speculative note he sent his clients.

It is no secret that Apple has bet big on AR. That was made clear at WWDC in 2017. If I needed any convincing that Apple was really committed to AR and that it will be a huge part of their future, that was confirmed to me when Tim Cook appointed Frank Casanova to lead this group. Frank has been with Apple since the mid-1980s and is one of its most trusted project and product leads. Although he likes to work more in the shadows, giving him the lead on AR speaks volumes about how important AR is to Apple.

I understand that he has been overseeing internal AR development as well as directing third-party software and partnerships since he took over in 2018. To date, most of what Apple has delivered in AR has been software related. While AR apps and tweaks to the iPhone to support AR are a key component of Apple’s AR strategy, it is no secret that Apple has AR glasses in the works too. Multiple patents have been filed over the last four years related to AR glasses, so we know they are at least in development. Although Apple files many patents and not all come to market, this one seems to have a lot of legs.

However, after talking to key players in the supply chain and reviewing research our company has done on what consumers would perceive as acceptable, I am certain that the technology needed to garner broad mass adoption of AR glasses is still not available. The supply chain folks we speak with say that it is at least 2-3 years away.

So the idea that Apple would bring AR glasses to market in 2020 makes me a bit skeptical. I have written that I really did not think Apple would introduce any AR glasses before 2021-2022, based on the info I had from the supply chain. Add to that our research suggesting consumers don’t want headgear that looks like goggles and makes them look like a geek, and instead want something more like traditional glasses.

But there is a case to be made for Apple to introduce what I would call a “consumer experimental” first-generation model, from which it could get strong feedback from early adopters who would fork out any price to be the first to have Apple’s AR glasses.

Google tried this with Google Glass in 2013, but it failed miserably.

A key reason for its failure was that it did not have any applications or services tied to it and was more a novelty than a real product. It was also more a beta than a fully cooked product, but Google felt it was worth shipping to get feedback that would help determine whether to back the idea or not.

Should Apple introduce an early version of its AR glasses, on the other hand, it would do so having already spent two years working with software developers and many partners to create AR apps, games, and services that work on an iPhone. Taking those apps and adding AR glasses as an alternative delivery method would be the natural next move anyway.

There is actually much precedent at Apple for doing an early version of AR glasses. In my conference room tech museum, I have the original iPod and iPhone. Both are a shadow of what they became two years later, as the technology improved and the software and services expanded and made them more useful.

You could almost say that for Apple, the third time’s the charm. Apple introduces an early version of a product, gets good buy-in from early adopters, and pushes third-party software and services harder now that it has a product on the market to work with, driving it to greater success. Then, year after year, as the technology gets better and Apple applies advanced technology to new models that transform the product well beyond its first generation, the product starts to take off and drives a new market segment for Apple to exploit via hardware, software, and services.

If the current patent designs are accurate and represent the first generation of AR glasses for Apple, they could be more like a goggle form factor at first and become streamlined to look like normal eyeglasses by year 3 or 4. These first AR glasses will derive all of their intelligence from the iPhone and be more an extension of the iPhone’s AR apps, delivered via glasses.

Indeed, I believe that for the first four or five years of any glasses Apple brings to market, the iPhone will be their brains, and the AR glasses will be an additional screen that can be used to enhance Apple’s AR apps initially written for the iPhone. The glasses will add more functions and introduce new UIs such as voice, gestures, and perhaps eye tracking that can also activate an AR app on the glasses.

However, after those five years, I believe Apple will be on track to give the AR glasses their own intelligence and UI, and they could actually replace a person’s smartphone in the future.

Next week I will delve into the idea that AR or mixed reality glasses may become the only tech device you have and drive a whole new form of computing that could reshape our personal computing experience for at least the next ten years.

Esports and Education: Looking Beyond the Money

If you have a Gen Zer in the house, chances are they are a gamer of sorts. Whether they play Minecraft, Fortnite, or spend hours on Twitch watching others play, they all are deeply invested. Common drivers are the fun of gaming, as well as the social impact that these games create in building teams and relationships with real-world friends or digital ones.

Over the summer, gaming became more than just fun for many kids following the Fortnite World Cup finals. Fifteen-year-old Jaden Ashman won half of a $2.25 million prize after coming in second with his teammate. A few days later, sixteen-year-old Kyle Giersdorf went on to win the final battle, taking home a $3 million prize.

If your child is as cunning as mine, I am sure you were faced with interesting conversations that outlined how your offspring’s gaming time could lead to wealth and success. While that might not necessarily be the case, it is true that the path to eSports as a career mirrors more and more that of a traditional sport, including the role colleges play.

Esports Scholarships and Courses

Esports scholarships have been around since 2014, when Robert Morris University in Chicago became the first university in the US to offer substantial scholarships for members of its varsity eSports League of Legends team. The acceleration of this trend over the past year or so will get us close to 150 schools across the US and Canada by the end of 2019. Esports scholarships are very similar to those for other sports, with academics and merit playing a significant role in how they are allocated. Player skills, communication ability, and open spots on the team are also contributing factors in the decision.

More recently, the acknowledgment that eSports can be a full-on career drove some universities to go from offering scholarships based on gaming skills to offering eSports courses. These courses focus on preparing students to take advantage of the business opportunity presented by the growing world of eSports. The University of Staffordshire in the UK, Virginia’s Shenandoah University, Ohio State University, and Becker College in Massachusetts were the first universities to offer eSports courses at the start of the 2019 academic year, focusing on a range of subjects from marketing to business management to design and app content development.

There is a concern, of course, that these courses might be springing up to make colleges and universities look more relevant and attractive, rather than to provide skills for what might be a large job pool in the future. It is too early to say, but there are certain skills that go with eSports that other businesses could benefit from, especially as the gig economy continues to grow.

K-12 Paves the Way

With scholarships and courses growing at the college level, it is no surprise we have seen more than 800 schools in North America join the High School eSports League, reaching around 15,000 students in eSports clubs. Similar to the rise of robotics and STEM, we saw eSports after-school clubs roll out first. As the scholarship dollars grew, so did more formal elective courses that, like traditional sports, aim at preparing students to apply for some of those college scholarships.

With 70% of students identifying themselves as gamers, schools are hoping to build on this interest to offer students who might not be interested in traditional sports or might not be athletically gifted a different option to engage in campus activities. Similarly to other clubs and electives, the High School eSports League requires minimum GPA standards to participate.

Modern World Skills

Simply equating eSports to gaming, like equating First Lego Robotics to just coding, would miss the number of skills this discipline (yes, I said it, it is a discipline) requires. Many of these games are team-based and require a vast set of skills:

  • communication
  • writing for multiple purposes, and for different media formats
  • reading and comprehending information and directions
  • listening skills.

I would bet these are the skills any recruiter is looking for in both a leader and a team player. Branding, marketing, event planning, operational analysis, and strategy are all part of what eSports entails.

For schools, one of the appeals of eSports is that in most cases there is no specific hardware requirement; schools can plan to adopt multi-purpose computers and workstations. Hardware companies that cater to the education market see the opportunity eSports provides, but their involvement does not necessarily end with providing the infrastructure to run the games and hold the classes. Often marrying their education leaders with their social responsibility advocates, hardware vendors look at the opportunity to help schools and districts get started with eSports in the right way, so that it is not just a fad but a truly new opportunity for kids.

China’s Gift to the Rest of the World

The trade battle between the US and China has taken some interesting twists and turns. Front and center in the conversation are the economic issues each country is up against. Financial institutions are cautious when it comes to the public market, with fluctuations happening every time there is news of trade talks. Concerns about a US recession and overall slow GDP growth loom, and several economic reports out of China suggest its economy has slowed, with GDP continuing to be forecast downward through 2020.

With all that is happening, there is a long-term observation related to the supply chain that I find quite interesting. The trade tension is shifting manufacturing out of China due to the US tariffs. The US market is so important to many companies that their priority has become getting manufacturing out of China and into other countries. This is what I’m calling China’s gift to the rest of the world.

The Supply Chain Shift
I have close contacts at several OEMs and speak with folks in the tech component supply chain frequently. The sense of urgency around shifting the supply chain out of China has accelerated over the last year and is a primary focus at the moment. With the pressure China is putting on Taiwan, ODMs based there are also looking to move.

I have read three different private studies and reports on what is happening here, and the latest shows accelerated timelines to move core manufacturing out of China. According to the study, which includes interviews with many of the main tech manufacturers and supply chain players, 63% have already moved at least some of their production volume out of China, and 57% said they are planning to move even more production out of China and into other regions. How much manufacturing can be moved out of China is a big question for many. At the moment, the sweet spot seems to be 30-40% of production moving, or planned to move, to another region.

The biggest beneficiary has been Vietnam, but a fascinating emerging beneficiary is India. See this chart from a CFO survey of ODM and supply chain companies.

Vietnam and India rank atop the list. Interestingly, the US declined year over year in this survey: only 10% of respondents are considering moving production to the US, down from over 30% in the same survey a year prior.

That question covered planned moves; again, many companies have already moved some manufacturing out of China. Looking at the countries where manufacturers have already made investments, Vietnam, India, and Malaysia rank atop the list, in that order. As for the countries, other than China, where the same CFOs said they currently have production, Vietnam, Korea, India, Malaysia, and Japan topped the list, in that order.

It seems momentum is favoring these countries, Vietnam and India in particular. While it won’t surprise too many that countries in SE Asia are on this list, the most interesting one for me is India.

India’s Potential To Benefit from China’s Shift
Since the early 2000s, I have studied China through a few different lenses. Having been there and spoken at tech conferences and supplier-customer events, the culture and the hard-working, determined nature of the people fascinated me from the moment I arrived. China’s work ethic stood out as vastly different from that of many European countries I’ve visited. The Chinese people’s work ethic is a core reason the country has become the manufacturing hub it is, and it will be a core reason for whatever else China evolves into in the future.

China’s sheer scale was another competitive factor. With well over a billion people, a growing class of consumers entering the financial conversation, and a highly ambitious population, China’s scale of driven people was a large part of its success. The first time I was in China, one of the executives we were working with who lived there remarked: “I’ve never seen a more capitalistic people that are unfortunately held back by communism.”

I make the two points above because India shares a lot of the same dynamics China does. India has a huge population and a growing economic class of customers entering the financial conversation and driven by ambition. While I’ve never been to India, I have close contacts there who help me stay informed on economic and consumer trends. Years ago, I wrote about how a rise of successful Indian CEOs, like Satya Nadella, was helping to inspire and motivate Indian culture. Having followed the Indian market for many years as well, and watched its ambition play out and grow, I’m beginning to think India has many opportunities to compete on a global scale the way China did, but in a very different way.

Ultimately, this acceleration, not just in manufacturing but in the broader opportunity for India, has come because of the trade war. The surveys I mentioned from supply chain CFOs have been running for years, and prior to the US trade war with China, there was little sense of urgency to move anything out of China. Now that is happening at a rapid rate, and what I’m not sure the Chinese government understands is that once the dust settles, the manufacturing that was moved out of China is not going back.

India has been trying to attract more local manufacturing for years as well, making things hard for importing tech companies by placing tariffs on goods when a percentage of the hardware is not made in India. Apple, of all tech companies, has likely been hit the hardest, given the already steep cost of its hardware. Yet we are just now seeing the fruit of Apple’s labor in moving manufacturing to India, with photos of locally made iPhone XRs.

Obviously, Apple wants to be more relevant in India, and local manufacturing, as well as continued investment in the Indian economy, is essential for Apple’s long term strategic play.

Lastly, the recently elevated narrative of China, communism, and their censorship stance will only continue to fuel more global businesses to be more diverse and have core operations, including manufacturing, outside of China. Essentially, while China’s tactics are designed to lower their dependency on companies and technologies outside of China, these efforts are also causing the rest of the world to lower their dependency on China.

Nvidia EGX Brings GPU Powered AI and 5G to the Edge

The concept of putting more computing power closer to where applications are occurring, commonly referred to as “edge computing”, has been talked about for a long time. After all, it makes logical sense to put resources nearer to where they’re actually needed. Plus, as people have come to recognize that not everything can or should be run in hyperscale cloud data centers, there has been increasing interest in diversifying both the type and location of the computing capabilities necessary to run cloud-based applications and services.

However, the choices for computing engines on the edge have been somewhat limited until now. That’s why Nvidia’s announcement (well, technically, re-announcement after its official debut at Computex earlier this year) of its EGX edge computing hardware and software platform has important implications across several different industries. At a basic level, EGX essentially brings GPUs to the edge, allowing IoT, telco, and other industry-specific applications, not typically thought of as Nvidia clients, to tap into general-purpose GPU computing.

Specifically, the company’s news from the MWC LA show provides ways to run AI applications fed by IoT sensors on the edge, as well as two different capabilities important for 5G networks: software-defined radio access networks (RANs) and virtual network functions that will be at the heart of network slicing features expected in forthcoming 5G standalone networks.

Nvidia’s announced partnership with Microsoft to have the new EGX platform work with Microsoft’s Azure IoT platform is an important extension of the overall AI and IoT strategies for both companies. Nvidia, for example, has been talking about doing AI applications inside data centers for several years now, but until now it hasn’t been part of most discussions about extending AI inferencing workloads to the edge in applications like retail, manufacturing, and smart cities. Conversely, much of Microsoft’s Azure IoT work has been focused on much lower-power (and lower-performance) compute engines, limiting the range of applications for which they can be used. With this partnership, however, each company can leverage the strengths of the other to enable a wider range of distributed computing applications. In addition, it gives software developers a consistent platform from large data centers to the edge, which should ease the ongoing challenge of writing distributed applications that can smartly leverage different computing resources in different locations.
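
To make that consistency concrete, here is a minimal, hypothetical sketch (in Python, using PyTorch; the model and input are illustrative stand-ins, not anything from Nvidia’s or Microsoft’s actual SDKs) of inference code that runs unchanged whether the node it lands on is a data-center server with a large GPU, an EGX-class edge box, or a CPU-only fallback:

    # A stand-in vision model: the same script targets whatever accelerator
    # is present on the node it is deployed to (data-center GPU, edge GPU,
    # or CPU), which is the portability argument made above.
    import torch
    import torch.nn as nn

    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # Hypothetical placeholder for a retail/smart-city camera model.
    model = nn.Sequential(
        nn.Conv2d(3, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool2d(1),
        nn.Flatten(),
        nn.Linear(16, 4),  # e.g., four event classes from a camera feed
    ).to(device).eval()

    frame = torch.rand(1, 3, 224, 224, device=device)  # one synthetic frame

    with torch.no_grad():
        scores = model(frame)
    print("predicted class:", scores.argmax(dim=1).item())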

On the 5G side, Nvidia announced a new partnership with Ericsson—a key 5G infrastructure provider—which opens up a number of interesting possibilities for the future of GPUs inside critical mobile networking components. Specifically, the companies are working out how to leverage GPUs to build completely virtualized and software-defined RANs, which provide the key connectivity capabilities for 5G and other mobile networks. For most of their history, cellular network infrastructure components have primarily been specialized, closed systems typically based on custom ASICs, so the move to support GPUs potentially provides more flexibility, as well as smaller, more efficient equipment.

For the other 5G applications, Nvidia partnered with Red Hat and its OpenShift platform to create a software toolkit called Aerial. Leveraging the software components of Aerial, GPUs can be used to perform not just radio access network workloads (which should be able to run on the forthcoming Ericsson hardware), but also the virtual network functions behind 5G network slicing. The concept behind network slicing is to deliver individualized features to each person on a 5G network, including capabilities like AI and VR. Network slicing is a noble goal that’s part of the 5G standalone network standard but will require serious infrastructure horsepower to realistically deliver. In order to make the process of creating these specialized functions easier for developers, Nvidia is delivering containerized versions of GPU computing and management resources, all of which can plug into a modern, cloud-native, Kubernetes-driven software environment as part of Red Hat’s OpenShift.
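
For a sense of what “containerized GPU computing” looks like in a Kubernetes environment, here is a rough sketch using the Kubernetes Python client. The container image and names are hypothetical illustrations, not Aerial’s actual packaging, and the example assumes the cluster runs Nvidia’s GPU device plugin so GPUs appear as a schedulable resource:

    # Rough sketch: scheduling a containerized, GPU-accelerated network
    # function on Kubernetes. Image and names are made up for illustration.
    from kubernetes import client, config

    config.load_kube_config()  # authenticate using the local kubeconfig

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="gpu-vnf-demo"),
        spec=client.V1PodSpec(
            restart_policy="Never",
            containers=[
                client.V1Container(
                    name="gpu-vnf",
                    image="registry.example.com/hypothetical/gpu-vnf:latest",
                    # With the GPU device plugin installed, a network function
                    # claims a GPU the same way it would claim CPU or memory.
                    resources=client.V1ResourceRequirements(
                        limits={"nvidia.com/gpu": "1"}
                    ),
                )
            ],
        ),
    )

    client.CoreV1Api().create_namespaced_pod(namespace="default", body=pod)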

Another key part of enabling these network slicing capabilities is being able to process the data as quickly and efficiently as possible. In the real-time environment of wireless networks, that requires extremely fast connections to data on the networks and the need to keep that data in memory the whole time. That’s where Nvidia’s new Mellanox connection comes in, because another key function of the Aerial SDK is a low-latency connection between Mellanox networking cards and GPU memory. In addition, Aerial incorporates a special signal processing function that’s optimized for the real-time requirements of RAN applications.

What’s also interesting about these announcements is that they highlight how far the range of GPU capabilities has expanded. Well past the early days of faster graphics in PCs, GPUs, now included as part of the EGX offering, have the software support to be relevant in a surprisingly broad range of industries and applications.

Podcast: Made by Google Event, Poly and Zoomtopia, Sony 360 Reality Audio

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from the Made by Google hardware launch event, including the Pixel 4 smartphone, discussing new videoconferencing hardware from Poly and collaboration tools from Zoom’s Zoomtopia conference, and chatting about Sony’s new multichannel audio format release.

Facebook and Well Intentioned Failures

The business lessons being learned around Facebook on a regular basis are fascinating. The company has achieved once-in-a-generation, or longer, user scale. We have never seen a company with the reach Facebook has, and we may never see it again. I know, never say never, but the possibility that we never see this scale again is worth pointing out.

When it comes to Portal and Libra, which to me represent Facebook’s broader struggles launching new products, there are a number of interesting observations to be made. But the big picture is that the position Facebook now finds itself in will make launching new products or services extremely difficult, even when those products are well positioned on paper and well-intentioned.

Portal
I’ve said this from the start: Portal is a really great product, and had Google, Amazon, Apple, Microsoft, etc., launched this hardware, it would have done much better. Portal, despite narratives from Facebook executives, is not selling well, really at all. I know this for a variety of reasons, but mainly from providers I know who make components in Portal. Volumes are low.

But what strikes me as the most interesting part of this story is how, on paper, Facebook seems exceptionally well-positioned to deliver Portal, and the solution lines up almost perfectly with Facebook’s job to be done (the reason people use the product or service). In fact, if you were inside Facebook and did a job-to-be-done analysis of the service and used that analysis as a basis for the next product launch, you would almost certainly arrive at a product like Portal. Yet, no one seems to want it.

Factors in its failure to sell can include many things. For example, humans are still not comfortable with cameras in their homes, though that seems to be slowly changing. Most people video conference on their PCs, tablets, or smartphones and may not see the need for dedicated video conferencing hardware. Or it could just be that people don’t want such a product from Facebook.

While many points can be made about other factors, I do think the heart of the matter is that Facebook has lost people’s trust, to the point that the company may never evolve beyond what it is: an online place where people see stuff from their friends. Basically, a social wall.

Again, this is fascinating on many levels about business, product, customer behavior, and more. Facebook should, on paper, be successful with a number of products they have tried or will try, but the reality is that ship has sailed, and this gets very tricky for Facebook going forward.

Libra
Libra, like Portal, looks great on paper. It makes absolute sense in the context of Facebook’s scale, yet it may very well fail. I’m sure by now you have seen the reports that founding members of Libra are leaving the consortium. Despite the exodus, Facebook remains committed, as I would have expected, but the fate of Libra is still up in the air.

This is again an example of something that, on paper, Facebook is exceptionally well-positioned to deliver, here by the direct nature of its user scale: touching roughly two billion people on a monthly basis and well over a billion on a daily basis. You would think a company with that user scale has an opportunity to disrupt banking.

If you have followed the narrative and executive commentary on Libra, moving banking into the 21st century and helping to bring the unbanked into financial inclusion are the main goals. Facebook’s entire position was that no other solution could satisfy its scale needs, so it felt it needed to get involved.

I thought this note from investor Fred Wilson on Libra was telling.

It is fashionable to be negative about the Libra project right now. And it is equally fashionable to call it “Facebook’s crypto-currency project.” Both are understandable under the circumstances.

But yesterday was the beginning of an independent effort, one that Facebook does not control, one where Facebook is one founding member among many, and one where Facebook has one board seat out of five.

But even more important is Libra’s mission to create a stable cryptocurrency that can operate at sufficient scale such that Facebook and others can use it as a means of exchange/payment system in their applications.

No one will disagree with the promise of Libra. Yes, banking needs to change and potentially be disrupted. Yes, we want to bring the unbanked into the conversation and strive for financial inclusion. But, whether Facebook controls it or not, their platform will play a critical role in delivering the solution, and that is where I think the solution breaks down.

There are so many economic theories out there, and only if governments get involved would something like this move forward. But, to the point of the Bitcoin proponents, controlling currency is something governments want, and it is not in their best interests to adopt a neutral global standard.

Facebook has reached such a scale that it is perhaps more powerful than it realizes. That scale, power, and influence have created a significant state of skepticism among not just businesses and governments, but also its users.

I don’t expect Facebook to give up, but what concerns me more at the moment is Facebook’s management’s inability to see these patterns playing out. I’m not sure they genuinely understand the underlying systemic problem that stands as a thousand-foot wall impacting their ability to move beyond anything but a social feed for people to peruse when they are bored.

It is in this vein that I’ve named these efforts well-intentioned failures. I believe Facebook has well-meaning intentions behind these products, but there is a bigger reality facing the company that makes these well-intentioned new products a struggle from the start. Digging into the systemic issues facing Facebook, and the corner it has been backed into, seems to me the most worthwhile effort at the moment. Because if they don’t address the underlying issues, I’m not sure how they move forward.

Hotspots, Hotspots Everywhere

Most people would agree that wireless coverage and data speeds have been getting steadily better during recent years. The differences between the major operators have narrowed. Certain key problem areas have been addressed: Verizon has alleviated capacity issues in major cities as a result of an aggressive densification program; AT&T has improved coverage and speeds with the deployment of numerous bands of spectrum; and T-Mobile’s rollout of 600 MHz has helped shore up deficiencies outside of major cities.

But just when you thought it was safe to go in the water again, the next phase of improvements to the wireless experience will be more variable, ‘hit or miss’, in nature. ‘Premium’ wireless experiences, delivered by the rollout of 5G, Wi-Fi 6, CBRS, and so on are going to be much more ‘hotspot’ in nature. It will be more like ‘islands’ of premium data speeds or reduced latency, rather than broad coverage. Take 5G as an example. The deployment of mmWave is occurring primarily in cities, and only in select parts of those cities. mmWave, for the foreseeable future, will be more like a ‘super-hotspot’, like a Wi-Fi access point that works mainly outdoors over a radius of a couple of hundred feet. Even within that radius, quality will be variable, given the sensitivity of mmWave to all sorts of structures, materials, and conditions.
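
Some back-of-the-envelope physics shows why mmWave behaves this way. Free-space path loss rises with frequency, so before even accounting for walls, foliage, or rain, a 28 GHz signal is roughly 33 dB weaker than a 600 MHz signal over the same distance. A quick sketch of the arithmetic (in Python, with illustrative numbers):

    # Free-space path loss: FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
    import math

    def fspl_db(distance_m: float, freq_hz: float) -> float:
        return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

    d = 100  # meters, roughly the 'couple of hundred feet' radius noted above
    for label, f in [("600 MHz low-band", 600e6), ("28 GHz mmWave", 28e9)]:
        print(f"{label}: {fspl_db(d, f):.1f} dB over {d} m")
    # The gap is 20*log10(28e9 / 600e6) ≈ 33.4 dB regardless of distance,
    # which is why mmWave acts as a 'super-hotspot' rather than broad coverage.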

It’s not going to be all that different with the deployment of 5G in other bands, broadly known as ‘sub-6 GHz’. For the next couple of years, 5G NR rollouts are going to happen more on a market-by-market basis, and there will be significant variability from one city to the next, as well as differences in operator deployment strategies. Add the ‘marketing’ angle to this, such as what operators call 5GE, 5G+, 5G Ready, and so on, and things will be even more complicated. Suffice it to say that there will be significant variability in the 5G experience, depending not only on the city but also on where you are within that city, whether you are outdoors or indoors, and which operator you use. The spots where the 5G experience is truly revolutionary will be limited to ‘islands’ of coverage, or select venues or locations where operators have decided to showcase 5G or where there is a particular use case.

The ‘hotspot’ nature of wireless quality improvements is not limited to the rollout of 5G. We’re just now seeing the rollout of Wi-Fi 6 (802.11ax). It was encouraging that the iPhone 11 supports Wi-Fi 6. This new generation of Wi-Fi delivers significant improvements in speed and coverage, and does a much better job of supporting a large number of devices connected to a hotspot. But we’re in the early stages of device certification. And the deployment of Wi-Fi 6 requires the purchase of new Wi-Fi access points. The cycle for Wi-Fi equipment replacement/upgrades tends to be lengthy, mainly because Wi-Fi works well in most locations, most of the time. There’s no great urgency or particularly compelling use case driving the Wi-Fi 6 deployment cycle. Don’t expect your cable company to be knocking on your door offering new Wi-Fi equipment anytime soon. Rather, for the next couple of years, Wi-Fi 6 deployments will be use-case-driven, mainly in capacity-constrained locations such as airports.

The other ‘hotspot’ on the block is CBRS, the shared spectrum at the 3.5 GHz band. We are in very early days with respect to CBRS, with a handful of deployments. For the next year or so, we are likely to see CBRS deployed at particular venues, such as stadiums, shopping malls, and convention centers, mainly as a speed/capacity augmentation at high-traffic locations. Some enterprises might also deploy CBRS. As CBRS matures and the PAL auctions occur, deployments will become more widespread, and permanent in nature.

Private LTE is another example of the ‘hotspot’ theme. We’re also in early days here, but in the coming years, we will see the deployment of Private LTE solutions by enterprises. Even there, the capability will only be at specific locations, and with a limited footprint at those locations.

The bottom line is that the next phase of improvements to the wireless experience — whether delivered by some flavor of 5G, evolution of LTE-Advanced, CBRS, Private LTE, or Wi-Fi 6 — will be deployed and delivered on a piecemeal basis, rather than broad coverage at the flick of a switch.

The other aspect of this is that these deployments will be for more specific use cases – such as fixed wireless access, the need to support high-traffic locations or venues, or ‘showcase’ locations to deliver a premium wireless experience using 5G, such as for multi-player gaming, e-sports, or AR/VR. An example: Verizon’s deployment of 5G at 13 NFL Stadiums.

Given the ‘hotspot’ nature of these new wireless experiences, I’m hoping that the operators are more forthcoming and transparent about where these services are available. With 5G, for example, it’s not OK to just say ‘mobile 5G is available in X city, or in select areas of X city’. Customers should be able to easily determine 5G coverage at least at the ‘neighborhood’ level, with some information on how good that experience is compared to prevailing 4G LTE. Icons on the phone should accurately reflect what the experience is at a particular location.

The next phase of wireless will feature some pretty remarkable improvements in coverage, speed, latency, and capacity. But these enhanced experiences will be mainly in specific locations or areas, rather than broad-based, at least for the next couple of years. Customers should adjust their expectations accordingly, and consider this in their purchase decisions. They should also press their service providers for more granular information on what sort of experience can be expected, and where.

Notes from the 5G Americas Summit

Each year around late September, 5G Americas, the association that serves the telecom industry, holds its annual analyst event in Dallas. For analysts who follow telecom, as well as the overall market for communications and smartphones, it is an important forum to hear from telecom execs about what they are doing to advance the industry and prepare it for its next major growth thrust around 5G.

One of the major points discussed at this year’s event focused on the 5G rollout plans and how long it will take to get to mass adoption. If you have followed the wireless communication markets, you know that the rule of thumb is that any new wireless advance brought to market takes approximately ten years to reach total worldwide mass adoption. This was true with 2G, 3G, and, for the most part, 4G. And while it is true in principle, there are still parts of the world on 2G and 3G, 20+ years after their original release dates.

But the sense I got from the execs at the 5G Americas summit this year is that 5G is a transformational wireless technology and that, if they have their way, its adoption cycle will be well under ten years for the majority of the world market.

They base this accelerated 5G adoption view on the fact that 5G will not just be for mainstream smartphone and wireless communications use. It will become a communications protocol for use in next-generation IoT, smart cars, smart cities, smart manufacturing, etc. They see its use in broader business transformation projects and expect that it will be adopted by major businesses for a whole host of specialized uses too.

One example they shared is that companies can deploy millimeter-wave small cells inside their buildings and manufacturing facilities to serve as a backup communications system for mission-critical systems. While most business and manufacturing lines use Wi-Fi as their main communications service, a millimeter-wave small cell with short-distance towers on-premise would give them a backup communications system should Wi-Fi, for some reason, go down. While the 5G system would be a costly second communications system, in mission-critical applications the cost can be justified: losing Wi-Fi in real-time manufacturing programs or financial systems could be even more expensive if it caused downtime or impacted real-time transactions.

5G infrastructure is also rolling out ahead of 4G’s pace; it took more than four years after 4G launched to cover all of the US. Although there are a few cities in the US with pockets of 5G coverage now, these are considered more like early test sites used to help fine-tune early towers and networks.

The big thrust of 5G rollouts starts in earnest in 2020, and telecom executives at this event said that they are certain they can cover most US cities by the end of 2021 and most rural areas by 2022-2023.

This time around, 5G Americas and the wireless telecom providers know that what they have with 5G is a big deal and are working directly with cities that want to become smart cities, automakers creating smart cars, companies that want to develop smart manufacturing lines and smart offices, and more.

One of the big reasons they want to push 5G adoption faster than in the past is that the capital costs of deploying 5G networks are dramatically higher than they were for 4G networks. While they will deploy centralized 5G towers in more population-dense areas, they also need to add millimeter-wave small cells to boost network coverage in big cities like NYC, Chicago, Dallas, LA, SF, etc.

Also, because 5G can expand well beyond cell phone usage, getting it into a plethora of other uses quickly will help pay for the higher network rollout costs. That is why the carriers are moving quicker with 5G deployments and courting all types of businesses to adopt 5G as fast as possible.

Although I knew that the telecom industry and wireless network providers are waxing poetic about 5G and singing its praises right and left, this was the first time many of us analysts were able to really challenge them on their plans and their ability to execute.

Many analysts voiced skepticism about the telecom and wireless carriers’ ability to execute this fast. But the telecom and wireless executives were more than prepared to counter our criticism and laid out a very detailed plan for how they are moving forward. More importantly, they emphasized that 5G rollouts worldwide would be much faster than the 4G and 3G rollouts of the past and were adamant that they are on a plan to meet these objectives.

In fact, they even whispered that early 6G specs are already in the works, though they did not give any details beyond the fact that related committees are working on early 6G plans even now.

I probably should not have been surprised that these execs were so bullish about their 5G plans and how fast they plan to light up the US with full coverage. There is a lot at stake for them, and getting as much of the US covered as fast as possible is a big priority. They also want a piece of the action in smart cars, smart cities, smart businesses, smart manufacturing, and business and consumer IoT, so it really is in their best interest to accelerate 5G rollout and coverage as fast as possible.

Made By Google Is More Like Amazon Than Apple

This week was finally Pixel week. Over the past couple of months, we have seen teasers from the Made by Google team as well as leaks and even a Best Buy Canada early listing of what the Pixel 4 was meant to be. We also had some details on the Pixelbook Go, the Nest Wi-Fi, and the new Pixel Buds. What was missing, though, was how the Made by Google team was going to frame its story around these products.

I have said before that how a company talks about and introduces its products is as important as the products themselves when it comes to understanding the vision and the goals of the business. This week’s launch was no different. While some industry watchers criticized the presentation for coming across as choppy, I thought it followed a similar format to the Google I/O main keynote: product people come on stage to tell their story, talk about their creation, and highlight the aspects they think are differentiators. I appreciated the attempt to move away from a spec-sheet focus and provide more information on the thought process behind the devices and features, as well as to address hot areas such as sustainability and privacy.

Made by Google’s chief, Rick Osterloh, framed the context around the new devices, but also how the team thinks about the role these devices should play in users’ lives. As he talked about ambient computing and helpful technology, it was impossible not to draw parallels to how Amazon positioned its devices just a few weeks ago.

The devices are not the final product; the technology in them is. From cloud to chipsets to Google Assistant and Soli, the technology that users access is what was on stage in New York.

Helpful Technology and Ambient Computing

Rick Osterloh stressed multiple times how the hardware the team is building focuses on being helpful. The message should sound familiar, as the helpful technology tagline was used by Sundar Pichai at Google I/O. If technology is helpful, it will be pervasive in our lives, and privacy will matter more. Of course, if the technology is helpful, we come to rely on it, and because the perceived value of the device or service is higher, it creates stronger brand and customer loyalty. So far, there has not been any talk about paid services, but I find this emphasis on helpful tech very interesting. I do wonder if framing tech in such a way opens up options for Google to switch some of its services or features to a paid model. This revenue opportunity might also include the prospect of selling their Titan M chip to partners, especially those who want their products to be Android Enterprise Recommended.

Privacy will also matter when the devices we use disappear and computing powers services and experiences all around us. Google wants the technology to work in such a way that, when everything is perfect, the devices disappear. Interestingly, this is similar to how Surface lead Panos Panay talks about his devices and how they keep you in the flow. It might seem odd that a hardware brand would want its devices to disappear, but if you use any technology, you know you don’t necessarily need to touch or look at a device to get a level of benefit that makes you love it. It is even easier to understand when the device encapsulates value that is software- and services-driven and comes from the same company.

A Focused Hardware Approach

And so, as much as Pixel 4 might be the iPhone 11 Pro competitor and Pixel Buds 2 might be Made by Google’s take on AirPods, I cannot help but think that Made by Google’s goals are far more similar to Amazon’s than Apple’s. They might play in the same segments as Apple does, and avoiding the comparisons is impossible, but the measure of their success will not be market share but rather the continued adoption of and increased reliance on Google Assistant and the services that are powered by it.

One aspect where Google and Amazon might differ in approach is the number of devices they decide to bring to market, and the reasons for that difference are quite apparent.

First, investment and leverage. Google has had a somewhat tricky road to hardware. We all remember how much the negative Motorola numbers impacted earnings, so the investment is much more thoughtful now. It is clear Made by Google wants to reach consumers where it gets the highest return, whether on service engagement or cloud. It also means that Made by Google might try to leverage its devices more, as it did with the new Nest Wi-Fi and its Google Assistant and smart hub integration. The partner ecosystem can also help Made by Google find the segments where there is value and those where there isn’t. The first smart display with Google Assistant was brought to market by Lenovo; following the positive reception of the category, we saw Made by Google launch the Google Home Hub line.

The second factor that makes a difference is, of course, Pixel. The Made by Google phone allows Google Assistant to be with the user all the time. This means, for instance, that no dedicated in-car device is needed to reach users while they commute from the office to home. Amazon’s lack of a phone means that it needs to deliver compelling devices for those situations where users would turn to the phone by default.

There is no doubt in my mind that being in the hardware, software, and services businesses makes perfect sense for Google, Apple, Amazon, and Microsoft. You just need to stop looking at the hardware as a stand-alone revenue generator and consider the impact it has on driving overall business revenue.

Poly Extends Collaboration Options

As simple as it may sound, one of the hottest topics in the modern workplace is figuring out how to best collaborate with your co-workers. Given the preponderance of highly capable smartphones, the ubiquity of available webcams and other video cameras, and a host of software applications specifically designed to enhance our co-working efforts, you would think it would be a straightforward problem to solve. But, in fact, companies are expending a good amount of time, effort, and money trying to figure out how to make it all work. It’s not that the individual products have specific issues, but getting multiple pieces to work together consistently and easily in a large environment turns out to be harder and more complicated than it first appears.

Part of the challenge is that video is becoming a significantly larger part of overall inter- and intra-office communications. Thanks to several different factors, including faster, more reliable networks, a growing population of younger, video-savvy workers, and an enhanced emphasis on remote collaboration, the idea of merely talking to co-workers, customers, and work colleagues is almost starting to sound old-fashioned. Yet, despite the growth in video usage, just under 5% of conference rooms are currently video-enabled, presenting a large opportunity for companies looking to address those unmet needs. Plus, our dependence on smartphones has reached deep into the workplace, creating new demands for products that let smartphone-based video and audio calls be more easily integrated into standard office workflows.

A number of companies are working to address these issues from both a hardware and software perspective, including Poly, the combined company formed by last year’s merger of Polycom and Plantronics, Zoom, the popular videoconferencing platform, and, of course, Microsoft, among many others. At this year’s Zoomtopia conference, Poly took the wraps off a new line of low-cost dedicated videoconferencing appliances, the Poly Studio X30 and Studio X50, both of which can natively run the Zoom client software, as well as other Open SIP-compliant platforms without the need for a connected PC.

The soundbar-shaped devices are built around a Qualcomm Snapdragon 835 SoC, run a specialized version of Google’s Android, and feature a 4K-capable video camera, an integrated microphone array, and built-in speakers. In conjunction with the Zoom application, they allow organizations to easily create a Zoom Room experience in a host of physically different spaces, from huddle rooms to full-size conference rooms. Plus, because they’re standalone, they can be more easily managed from an IT perspective, offer more consistent performance, and avoid the challenges end users face when they don’t have the right versions of communication applications while connecting to USB-based video camera systems.

Leveraging the compute horsepower of the Qualcomm SoC, both devices also include several AI-driven software features, called PolyMeeting AI, all of which are designed to improve the meeting experience. Optimizations for audio include the ability to filter out unwanted background noises, while new video features offer clever ways of providing TV production-quality video tweaks, doing things such as focusing on the current speaker, showing overall meeting context, and more.

Poly is also working with Microsoft’s Teams platform on another range of products, called the Elara 60 series, that essentially turns your smartphone into a desk phone. Most versions of the Elara include an integrated speakerphone, a wireless Bluetooth headset, and an integrated Qi wireless charger that can be angled to provide an easy view of your smartphone’s display. By simply placing your smartphone on the device and pairing it via Bluetooth, you can get the equivalent of a desktop phone experience with the flexibility and mobility of a smartphone. Plus, thanks to the integration with Microsoft Teams, there’s a single dedicated Teams-logoed button that lets you easily initiate or join a Teams-driven call or meeting—a nice option for companies standardizing on Teams as their unified communications platform.

Of course, the reality is that most organizations need to support multiple UC platforms because, even if they make their own choice for internal communications, there’s no way to know or control what potential customers and partners may be using. Given the diversity and robustness of several different platform choices—including Zoom and Teams, but also BlueJeans, GoToMeeting, Webex, RingCentral, and Skype, among others—what most organizations want is a software-based solution that allows them to easily switch to whatever platform is demanded for a given call or meeting. While that may seem somewhat obvious, the reality is that most videoconferencing products came from the AV industry, which was built on decades of proprietary platforms.

Thankfully, we’re reaching the point where it’s now possible to build collaboration and videoconferencing devices based on standard operating systems, such as Android, and then simply run native applications for each of the different communications platforms that are required. We’re not quite there yet, but it’s clear based on some of these new offerings that we are getting much closer.

Apple’s Rumored 2020 AR Glasses and $399 iPhone

I have been getting a lot of questions lately about some of the latest rumors surrounding Apple, so I thought I’d address them and add my two cents.

AR Glasses in 2020
Let’s start with the most interesting rumor from a future-forward perspective. Noted analyst Ming-Chi Kuo issued a report saying he expects Apple to release their AR headset in the first half of 2020. The report claims, and many believe (I agree), that this headset will be mostly an iPhone accessory. There are a number of points I want to make about this headset that I think need to sit in the back of our minds.

To be frank, I think the first half of 2020 is a bit early for Apple to release an AR headset. Apple usually waits until there are somewhat acceptable products on the market and then delivers a complete solution that defines and sets the bar for the category. Outside of HoloLens and perhaps Magic Leap, there aren’t many players in the space, and both of those are so pricey they are outside the range of a consumer market. Focals by North are not a rich-media AR experience but simple text and notifications, so I don’t consider them an AR headset.

With the market much less mature than Apple usually likes before it enters a category, I’m quite skeptical we will see this product in 2020.

There is also the question of what problem these AR glasses would solve. Even with Apple Watch, the latest product to analyze and put under a microscope for how Apple launches new products in new categories, the company came out with specific problems it was looking to solve. AR has some potential applications, but most consumers do not yet feel that gaming, directions, notifications, and the like are large enough pain points with their current devices for a head-worn computer to solve.

At Creative Strategies, we do quite a bit of research on augmented reality, and that research continues to convince me that driving adoption of this form factor will be even harder than it was for Apple Watch. The face/eyes are going to be an extremely difficult place to convince people, en masse, to put computers.

Another element to consider is the lack of significant smoke in the supply chain about such a product. At least with the rumored $399 iPhone SE, there is a lot of supply chain smoke, enough to know something is coming. With these glasses, there is little to no supply chain chatter suggesting large orders of glasses displays, chips, or other components that would likely go into this form factor. That does not mean it is not happening, but it does mean nothing is being made in quantity. So if this product is released, it will be released in extremely small volume.

One part of this discussion that could be interesting is whether Apple would consider doing a developer launch of these glasses, like what Microsoft did with HoloLens 1 and Magic Leap has done with its first release. Those products were not designed for shipping in volume but for providing developers with working hardware so they can start developing apps. I wonder if this is not a bad idea for Apple, even though it would be unprecedented. It would get them some positive feedback from developers, maybe even media, and start to plant the seed that something is coming, even if what developers get is not final hardware. Given how early this market is and how immature the technology is, even if Apple ships something in 2020, only early adopters, media, and developers will get one, so it will function as a beta anyway for Apple to get feedback and build a market. But the hard truth is, there is no market for AR glasses yet, and anything Apple releases in 2020 will have to make a market, and that is not the route Apple usually takes when launching new products.

I remain skeptical that we will see this product in 2020, as much as I would like it to happen.

Apple’s $399 iPhone SE 2
Another rumor started by Ming-Chi Kuo concerns an updated, less expensive iPhone, possibly an iPhone SE successor, coming next spring. As I mentioned, there is more than enough smoke in the supply chain to confirm something is coming. The rumors suggest a new, more aggressively priced iPhone in the first half of 2020. It has been pegged as an update to the iPhone SE, although the report suggests it will look more like the iPhone 8 than the iPhone SE.

What is interesting about this product, if true, is what its positioning would be in Apple’s lineup. Is this lower-priced iPhone designed to go after emerging market consumers in Southeast Asia or India? Is it targeting consumers who have held off upgrading because of the price? These will be questions to address if and when we see this product released.

For emerging markets, the biggest challenge a product like this faces is whether it can compete with flagship Android devices that offer larger screens, faster specs, and more for the same price. For the laggards with aging iPhones, it’s not necessarily the price that has hindered their upgrade but rather their sense that they do not need the latest and greatest. This group will upgrade eventually, and I question whether they will go for a lower-end, less expensive iPhone when they do. Having studied this demographic quite a bit, I find they tend to buy something newer in the hope that it future-proofs their purchase and lasts the 4-5 years they expect.

Overall, I’m not convinced Apple needs to play the price game at all. I don’t think it is the price of iPhones that is normalizing sales at just shy of 200 million units a year. It is more that customers seem content with what they have, and price isn’t going to change that sentiment.

I know I’m coming off a bit more pessimistic than many of you are used to, but in my mind, I’m trying to be realistic about the state of the market and what I know about Apple’s customer base. As with anything, these are just rumors, and the in-depth analysis will happen if and when any of them come true. But, given they are hot topics, I wanted to throw my analysis into the ring.

Podcast: Arm TechCon, China Apps Controversy, Libra Meltdown

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the announcements from the Arm TechCon event, discussing the controversies around tech companies agreeing to Chinese government demands, and chatting about the quick meltdown of Facebook’s Libra cryptocurrency efforts.

Gen Z and Their Newfound Love of TikTok

Gen Z and TikTok
I consider it one of my responsibilities to keep you all informed on the latest with Gen Z. I’m in the fortunate position of having two Gen Z daughters, but also access to larger groups of teenagers I speak with often at different high schools around the Bay Area. As of late, TikTok has been one of the more interesting conversations, and it has reached the point where we need to discuss some observations.

Three months ago, when I asked a bunch of teenagers about TikTok, they all said it was stupid, and they, as well as their friends, considered it a waste of time. I asked if anyone they knew was on TikTok, and universally their answer was “no one cool.” At that time, their social media investment of personal time and effort went primarily into Instagram and Snapchat. It’s important to know the differences in how both those platforms are used, and I’ll get there in a moment.

Fast forward to about two weeks ago, when I noticed some of the girls and their friends putting a lot of effort into producing a video. I noticed this because there was choreography, multiple scenes, and music, and I knew this was not for Instagram or Snapchat. I assumed this was maybe something for school, but I thought I’d ask. When I inquired about what they were making, they all yelled “a TikTok.” To which I immediately said, “I thought you said TikTok was stupid?” To which they replied, “well, it’s not anymore.”

So what happened? Apparently, an awful lot changed for these teens in a short amount of time. The first thing was their realization that dancing and music (things they all seem to love) make up a good portion of trendy TikTok videos. Most of them were on Musical.ly and used Vine, so it seemed TikTok should have taken off sooner, but it didn’t. When you ask them what TikTok is, they quickly explain it is basically Musical.ly plus Vine. What changed was that their peer group, mainly high schoolers, got on TikTok and upped the production value of the content. One of their biggest criticisms of Musical.ly was that the production value was low because it was largely used by pre-teens. Apparently, high schoolers and college students make videos they feel are worth watching. Really, I think it is about seeing kids their age use the platform, and that has now happened in a significant way.

Over the last few weeks, I’ve noticed a dramatic shift in my kids’ time spent on TikTok vs. Snapchat and Instagram. All three platforms offer something different, but the most significant difference is the time, thought, and planning they put into a TikTok video. Snapchat has evolved into mostly a chat app for all the teens I talk to; even they largely admit that their primary use for Snapchat is messaging friends. Instagram is where they have the most friends, for now, but posting there is also a shorter production for most of them. Yes, many Instagram influencers put a lot of production time into their posts, but most teens use it for quick sharing of a life moment. TikTok interests me because it seems to be combining several things that I thought were interesting about Snapchat and Instagram.

When teens used Snapchat to post more publicly, it was what I considered more fun and whimsical content. It showed a different, more fun side of them, while Instagram showed a more refined side. TikTok appears to have a bit of both, with the addition of some clever tools that help edit videos in a way that goes beyond what Vine offered. This is why I observe their TikTok videos taking quite a bit more time and practice, as well as many takes, to get right. They want to show their fun side, but it also has to be produced in a way that makes them appear the way they want. I had to film one of these for a group of them, and it took more than a dozen takes to get it right. I know.

TikTok seems to be cementing itself as the third platform. TikTok is very meme-friendly: trends emerge around types of videos, and the kids pick up on them and duplicate them in their own creative way. I’m also noticing that, like Instagram, TikTok already has a lot of diverse users on the platform. Snapchat is largely a US phenomenon, but you already see people of all ages, races, and genders on TikTok.

TikTok already seems to be monetized in a more relevant way than Snapchat, in my opinion, or at least in quite a similar one. I’ve noticed ads for very Gen Z-specific products, from advertisers that clearly learned best practices from Snapchat and were ready to add TikTok to their ad spend quickly.

The last observation I want to make on this topic, for now, is how most videos on TikTok are either a meme or a video being used to promote someone’s Instagram account. For the teens I talk to, the goal is not to become famous or get tons of followers but simply to have a video go viral. That appears to be their primary ambition for now. As TikTok evolves as a platform, it will be interesting to see what it becomes. As of now, influencers use it to promote their other platforms, but whether TikTok can become a primary platform of influence or just a complement to Instagram is still unclear.

From the data I’ve seen, it seems TikTok is just now ramping in the US and parts of Europe. Six months to a year from now, it could be a very different place, which will make tracking it interesting. But the main point I want to make is how different it is from the other social platforms, as well as the reality that it is here to stay, so expect to keep hearing about it.

The Broader Context and Opportunity for AR and VR

One of the bigger market trends of the last 5-6 years has been a focus on VR, AR, and mixed reality devices and applications. VR has been a topic in technology circles for decades and gained more prominence after technology scientist Jaron Lanier began sharing his vision for VR starting in 1991. VR got even more attention when Oculus introduced its headset at CES in 2013, and AR got serious attention when Microsoft introduced its mixed reality HoloLens headset.

What Microsoft defined was really a mixed reality concept, although the rhetoric around it focused on AR. Their goggles had a see-through feature that would allow you to look at the room you are in and see an aquarium and fish swimming around you. With that, the concept of mixed reality and AR started to gain more attention.

At WWDC in 2017, Apple made it clear that their focus would be on AR as well and introduced the first version of ARKit. Since then, Apple developers have created hundreds of AR apps, all being delivered on the iPhone.

These advances in VR, AR, and mixed reality are quite important to the technology industry because, at their core, they represent a major revolution in the way we interact with computers.

If you are a science fiction buff, you know that beginning in the late 1800s, science fiction protagonists often had fictional devices that they talked to in order to get things done. But by the mid-1940s, when computers began to hit the scene, the only way we could talk to a computer was via keyboard input. In 1968, when 2001: A Space Odyssey had HAL speak to the characters, the concept of voice interaction with computers gained serious momentum.

While using voice to interact with a computer showed potential, the technology that actually advanced the way we worked with a computer came in the form of the mouse, a concept introduced by SRI researcher Doug Engelbart in 1964. Then, when Apple used the mouse as a new way to interact with a computer with the introduction of the Mac, computers got the next big step in the man-machine interface.

Interestingly, GRiD Systems, Palm Computing, and General Magic, along with Microsoft and others, introduced the next evolution when they brought pen computing to the computer interface around the 1989-1991 time frame. Then, in the last decade, as the technology became viable for more advanced user interfaces, voice finally became a way we could interact with a computer.

All of these advancements in computer interfaces represented huge improvements to the man-machine interface and were revolutionary in their own right.

However, I believe that AR, VR, and mixed reality, especially when delivered via some type of goggles or glasses, represent the next major evolution in the way we work with and interact with computers.

If you have had a chance to test or use any of the VR and AR goggles on the market today, you have most likely realized that something big is going on: VR and AR deliver completely different computing experiences from the ones we have had on desktops and laptops for the last 75 years.

VR brings us into alternate universes, like modern-day teleportation, and AR makes your surroundings come alive with data, images, and unique experiences superimposed on the world around you. Within the next decade, we will see more 3D and even holographic technology introduced into this next revolution of the man-machine interface.

The enterprise and consumer applications for AR/VR span training, remote work, collaboration, teleconferencing, communication, and much more. In consumer markets, while there may be productivity and utility angles, what most consumers will get excited about will be entertainment-focused things like games and media. We are so early in this paradigm shift that it is fascinating to watch from the component side, the software and platform side, and the overall hardware form factor side as companies work to figure out what is next in computing.

As one who has covered the computer industry as a professional analyst since 1981, I am most excited about what the next phase of computing will enable in the upcoming decade. In fact, I suspect that the kind of raw processing power we keep squeezing into mobile chips, along with 5G networks that by the end of the next decade will probably deliver 30-50 Gbps wireless speeds, will enable VR, AR, and mixed reality goggles and glasses to reinvent the concept of a personal computing experience. It will also revolutionize the man-machine interface in ways we could not have imagined 35 years ago when personal computers debuted and changed our world forever.

Perception: The Biggest Hurdle In Broadening Your Business

Data and Artificial Intelligence (AI) are enabling device and solution providers in the enterprise space to expand and somewhat reinvent their businesses. Some have done so out of necessity, to remain current and fend off competition from new entrants, while others have done so simply because they saw an opportunity to widen their revenue.

One area in particular where we have seen a lot of change over the past couple of years has been collaboration and communication. The change has been brought forward by new apps that entered the workplace, but mostly by new workflows that are less siloed. Finally, communication and collaboration are intertwined the way they are supposed to be.

If you think back to a time before Slack, Teams, Zoom, and BlueJeans, communication and collaboration were pretty independent. We used single-purpose apps, and most people did not want or need to collaborate in real-time as they do now. Better connectivity and increased mobility have redefined the way we work and how we see time-critical tasks. We moved from snail mail to email, and now we have been moving from email to live messaging for instant gratification, even on answers that are not time-sensitive. And so, we collaborate even when we are just communicating, because the interactions we now have are real-time. In turn, the greater importance we give to these interactions, combined with more flexible work conditions, has increased our reliance on video conferencing, smart boards, and more.

The devices, apps, and solutions we have been using have grown in capability and intelligence to be more comprehensive than they used to be. With that change, brands have had to learn how to talk about, distribute, and position their products. They also must consider how much they want to deliver on their own rather than find a partner.

Two brands come to mind as examples of how far their businesses have evolved and of the challenges they face with the perception people have of them: Citrix and Poly. Both names should be familiar to you, as they have been very visible players in the enterprise market, in digital workspaces and unified communications, respectively.

You Are on a Journey…

Despite being in different businesses, both companies have walked a similar path on which, directly and through acquisitions, they have developed their core business into a much broader set of services and products. Most importantly, they transitioned from selling products and services to selling solutions that bring those together. Both companies went through a transition: Citrix from networking and virtual desktops to digital workspace solutions, and Poly from the UC focus of Polycom and the headset competence of Plantronics to a workplace solution for optimal collaboration no matter your location and which conference provider you use.

What is fascinating about these two brands is that their transition was not just a marketing and branding exercise. They actually did the work, acquired the talent, and listened to what their customers were telling them. Despite this, they face similar challenges in getting the broader market to understand their transformation: they changed and took their customers along on the journey, but industry watchers did not always tag along.

…Choose Your Fellow Travelers Carefully

We know technology often moves at a faster pace than we humans can understand, embrace, or accept. Over the past few years, however, technology has also enabled changes in business models, go-to-market strategies, solutions, and services that require a new way to assess brands and segments such as collaboration and communication.

Market disruptors are often not easy to plot on a wave or a quadrant, as they get into a market and change the rules by which they are supposed to be measured. The same can be said for brands that transition their business. Think about how Uber would have fared if measured against a traditional taxi company or a limousine service. Not very well, right? But that was not the point. Uber was not trying to be either, and to make its service understood, it needed to rely on analysts and press who grasped the gig economy rather than those who covered the travel and transportation vertical.

It is a fine balance, but brands must invest in reaching out to those who cover the markets they are reaching for, as well as continue to foster their relationships with those who have covered them in the past. This might require some time to cover the basics of who the company is and what it stands for. It might also require some patience from spokespeople who might consider the new audience uninformed. Finally, it will require a different way of communicating, one that focuses on the solution and its business impact rather than the detailed specs of a product. The investment will be well worth it, as this new audience brings an understanding of the new market, the broader competitive landscape, and ultimately the right reach into the partners and clients you want to influence.

Citrix and Poly are not done with their transformative journeys yet. Artificial intelligence and machine learning will bring new opportunities to deliver vital information on how their customers use their solutions. As they move more into AI and ML, they will get on the radar of those who cover data centers, edge computing, and cloud, so their outreach will have to shift to include those who have been covering these areas without ever considering Citrix or Poly as players.

Broadening or reinventing your business does not mean you need to change your core values, but it might mean you need to learn to talk about your business differently. Telling your audience who and what you are is as important as telling them who and what you are not, no matter how many times someone tries to force you into a preset mold.

Arm Extends Reach in IoT

One of the more interesting and challenging markets that the tech industry continues to focus on is the highly touted Internet of Things, or IoT. The appeal of the market is obvious—at least in theory: the potential for billions, if not trillions, of connected devices. Despite that seemingly incredible opportunity, the reality has been much tougher. While there’s no question that we’ve seen tremendous growth in pockets of the IoT market, it’s fair to say that IoT overall hasn’t lived up to its initial hype.

A big part of the problem is that IoT is not one market. In fact, it’s not even just a few markets. As time has gone on, people are realizing that it’s hundreds of thousands of different markets, many of which only amount to unit shipments measured in thousands or tens of thousands.

In order to succeed in IoT, therefore, you need the ability to customize on a massive scale. Few companies understand this better than Arm, the silicon IP (intellectual property) and software provider whose designs sit at the heart of an enormous percentage of the chips powering IoT devices. The company has a huge range of designs, from its high-end performance Cortex-A series, through its mid-range, real-time focused Cortex-R series, down to its ultra-low power Cortex-M series, which are used by its hundreds of chip partners to build silicon parts that power an enormous range of different IoT applications.

Even with that diversity, however, it’s becoming clear that more levels of customization are necessary to meet the increasingly specialized needs of the millions of different IoT products. To better address some of those needs, Arm made some important, but easy to overlook, announcements at its annual Arm TechCon developer conference in San Jose this week.

First, and most importantly, Arm announced a new capability to introduce Custom Instructions into its Cortex-M33 and all future Armv8-M series processors at no additional cost, starting in 2020. One of the things that chip and product designers have recognized is that co-processors and other specialized types of silicon, such as AI accelerators, are starting to play an important role in IoT devices. The specialized computing needs that many IoT applications demand are placing strains on the performance and/or power requirements of standard CPUs. As a result, many are choosing to add secondary chips to their designs upon which they can offload specialized tasks. The result is generally higher performance and lower power, but with additional costs and complexities. Most IoT devices are relatively simple, however, and a full co-processor is overkill. Instead, many of these devices require only a few specialized capabilities—such as listening for wake words on voice-based devices—that could be handled by a few instructions. Recognizing that need, Arm’s Custom Instructions addition allows chip and device designers to get the customized benefits of a co-processor built into the main CPU, thereby avoiding the costs and complexities they normally add.
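To make that concept concrete, here is a minimal sketch in plain C of the kind of workload Arm is describing. The `custom_mac` helper below is a hypothetical stand-in for a vendor-defined custom instruction, not Arm’s published developer API; on real silicon, a chip designer would map an operation like this to a single opcode executed on the Cortex-M core itself, rather than handing the data off to a co-processor.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical stand-in for an Arm Custom Instruction. On real
 * silicon this would be a single vendor-defined opcode; it is
 * modeled in plain C here so the sketch stays self-contained. */
static inline int32_t custom_mac(int32_t acc, int16_t sample, int16_t coeff)
{
    /* A saturating multiply-accumulate: the kind of tiny, hot
     * operation (e.g., one tap of a wake-word filter) a designer
     * might bake directly into the CPU instead of a co-processor. */
    int64_t r = (int64_t)acc + (int64_t)sample * (int64_t)coeff;
    if (r > INT32_MAX) r = INT32_MAX;
    if (r < INT32_MIN) r = INT32_MIN;
    return (int32_t)r;
}

/* One block of a wake-word-style filter. With a custom instruction,
 * the entire hot loop stays on the Cortex-M core: no co-processor
 * handoff, no second chip to power, program, and secure. */
int32_t filter_block(const int16_t *samples, const int16_t *coeffs, size_t n)
{
    int32_t acc = 0;
    for (size_t i = 0; i < n; i++)
        acc = custom_mac(acc, samples[i], coeffs[i]);
    return acc;
}
```

The win Arm is pointing to is the shape of this code: the specialized step compiles down to a single instruction in the main pipeline, so the device gets co-processor-like efficiency without the extra cost and complexity described above.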

As expected, Arm is providing a software tool that makes creating and embedding custom instructions into chip designs more straightforward for companies that have in-house teams with those skill sets. Not all companies do, however, so Arm will also offer a library of prebuilt custom instructions, including AI- and ML-focused ones, that companies can use to modify their silicon designs.

What’s particularly clever about the new Custom Instructions implementation—and what allowed it to be brought to market so quickly and with no impact on existing software and chip development tools—is that the Custom Instructions go into an existing partition in the CPU’s design. Specifically, they replace instructions that were used to manage a co-processor. However, because the custom instructions essentially allow Arm’s chip design partners to build a mini co-processor into the Arm core itself, in most situations there’s no loss in functionality or capability whatsoever.

Of course, there’s more to any device than hardware, and Arm’s core IoT software announcement at TechCon also highlights its desire to offer more customization opportunities for IoT devices. Specifically, Arm announced that it was further opening up the development of its Mbed OS for IoT devices by allowing core partners to help drive its direction. The new Mbed OS governance program, which already includes participation from Arm customers such as Samsung, NXP, Cypress, and more, will allow these silicon makers more direct involvement in the future evolution of the OS. This lets them focus on things like low-power battery optimizations for the specific types of devices they need to better differentiate their product offerings.

There’s little doubt that the IoT market will eventually be an enormous one, but there’s also no doubt that the path to reach that size is a lot longer and more complicated than many first imagined. Mass customization of the primary computing components and the software powering these devices is clearly an important step toward those large numbers, and Arm’s IoT announcements from TechCon move the company firmly in that direction. The road to success won’t be an easy one, but having the right tools to succeed on many different paths should clearly help.

Microsoft’s Silicon Influence

Easily one of the most interesting parts of Microsoft’s fall devices launch event was the news that Microsoft had been working closely with Qualcomm and AMD on co-developing, or at least somewhat customizing, silicon for its newest Surface products. Microsoft is certainly not going down the road Apple has been on, developing its own custom silicon from scratch, but rather took a path of collaborating with AMD and Qualcomm to do some unique things for its hardware. There are several important takeaways that are positive for large-screen computers going forward, as well as for the broader ecosystem of Microsoft and its partners.

A Positive Influence on Silicon
Long-time readers know I’m fond of saying that understanding the trends and roadmaps for silicon is the easiest way to predict the future. This is one reason I stay so close to the semiconductor industry. It shows us what is possible from a computing standpoint and helps shape our perspective on the new things humans can do once they get more capability in their hands.

In my analysis of the Surface event last week, I articulated how, for decades, Intel had been the influencing force for the PC ecosystem. I talked about how Intel created many reference designs of new computer form factors in order to try to move PC OEMs forward with new ideas and categories. One positive byproduct was Intel’s ability to troubleshoot problems in advance as it learned by shipping new classes of devices. One of the most relevant examples was the early work done on 2-in-1 devices, where PCs first saw a touch screen come to the form factor. There was critical work to be done on both the processor and the software for this to work, and Intel and Microsoft had to do it together. The early reference designs helped them work out the kinks, and ultimately touch-based Windows machines simply ran better on Intel silicon because it had already been optimized. Intel used these reference designs to influence and stay ahead of the market, and used that learning to shape future roadmaps.

Happily, the world of computing is now much more heterogeneous than it used to be. We can now see true silicon diversity across a range of categories, and this is good for everyone. As much as Intel would love the world to run not just on x86 but on Intel Architecture, this is not a great future. It is in this context that I’m intrigued by what Microsoft is doing with AMD and Qualcomm.

For the broader computing ecosystem, Microsoft is a better influence overall than Intel, which makes Microsoft’s co-investing in silicon efforts with AMD and Qualcomm all the more important for the future. When I look at the opportunity here, it stands to benefit AMD and Qualcomm tremendously, as Microsoft will be helping both companies with their overall roadmaps through these efforts.

The influence Microsoft can exert, helping Qualcomm and AMD better orient themselves for future computing products, from laptops to gaming rigs, low-cost PCs, foldable computers, and more, is one of the most significant elements of Microsoft starting to have input on the silicon landscape.

The other part of these moves I find interesting is the opportunity for Microsoft to further tune its software and services to the silicon roadmaps of Intel, AMD, and Qualcomm. A tighter relationship and guidance on silicon roadmaps means Windows, Azure services, and a range of other offerings from Microsoft stand to get better and more tightly integrated as a whole, not just for Surface products but for the whole of the PC ecosystem and all forms of computing devices that come from Microsoft partners.

Interestingly, this extends beyond Windows, as Microsoft is now working closely with Google as well to bring the best of Microsoft and the best of Google to an Android product. The Surface Duo is based on the Qualcomm Snapdragon 855, and while Microsoft had no direct influence on the 855, it will be interesting to see whether its collaborations with Qualcomm extend beyond just a few products and perhaps impact the broader Android ecosystem positively. I see these moves as positives for Microsoft and the broader software ecosystem, ones that will benefit many involved for years to come.

While we are not sure of the exact amount of customization Microsoft has done with Qualcomm and AMD, the broader point is that whatever has been done today is just the beginning.

The Surface Effect

In case it is helpful context, I attended the original launch of the Microsoft Surface in Los Angeles. Microsoft moving into hardware has been one of the most interesting developments within the Windows ecosystem and Windows hardware of the last decade. I followed Microsoft’s Surface journey from the start and listened to the PC OEMs complain and share their outright disdain for Microsoft’s efforts with Surface because they believed Microsoft was stealing hardware customers from them, which it is, to a degree. But the Surface effect on the PC industry has been more positive than most OEMs realize, and in hindsight, the PC industry may very well have been in worse shape had Microsoft not helped give the market a boost with Surface.

Should Microsoft Be in the Hardware Business?
The debate around whether a company that sells software or services should be in hardware is an interesting one. If you look at both the Surface hardware business and Google’s Pixel business, you conclude they are relatively small businesses, but their impact on the market is quite large. The Surface business will likely be a $10 billion business for Microsoft by the end of next year. Microsoft sells under 10 million Surfaces a year, but I remain convinced market share is not the goal of Surface for Microsoft.

It is certainly true that the presence of Surface does, in a way, compete with Microsoft partners. This is the tension that exists with the OEMs, who live on much smaller overall margins than Microsoft and who worried that Surface would get special treatment and preferential Windows features. However, the Windows team treats the Surface group just like any partner, meaning any special functions of Windows that are unique to Surface are things the Surface team had to build on their own, the same way any OEM could if it pleased.

So, then, what exactly is the role Surface plays for Microsoft?

Surface Sets the Bar
My overall observation is that the role of Surface is to set the bar and be the physical manifestation of hardware innovations that help fuel more innovative hardware from Microsoft’s PC OEM partners. Unbeknownst to many, Intel used to do this, but in a behind-the-scenes way. When Intel wanted to see hardware innovation or show the PC OEMs what new innovative hardware could be possible, it created a series of reference designs with the ODMs in an effort to showcase hardware innovation and hope the PC OEMs adopted some new ideas. Sometimes they did, and sometimes they didn’t, but Intel was aggressive in trying to push hardware innovation in PCs since that drove demand for new processors that utilized the new hardware innovation. It was a cycle in which each depended on the other, and OEM complacency in hardware is the worst thing that could happen to both Intel and Microsoft.

Microsoft knew it could not let OEMs get complacent, since that was bad for the Windows roadmap in the same way it was bad for Intel’s silicon roadmap. Enter Surface: the hardware that sets the bar for the rest of the category, as well as for new categories, and that serves as the best first customer for Windows.

What I have always appreciated about Surface was the hardware focus on unique innovations and experiences that add up to the total experience. I’ve long said, and it’s true, that if Windows were my primary notebook OS, I would be a Surface customer in a heartbeat. I’ve always loved the hardware, the keyboard, the Pen experience, and the overall look and feel, which is super-premium. From what Surface does with the display, to the hinges, to the materials, it all adds up to a premium experience, and Surface has consistently set the bar for a premium PC experience.

To accomplish premium hardware innovations, Microsoft’s Surface team has looked to control more of the stack, creating unique things for Surface that were sometimes adopted by other OEMs: things like the smart keyboard connector or the specific hinge designs used for the kickstand. Now, with Surface Duo and Neo, there is a uniquely designed hinge that helps each device fold elegantly and smoothly. But the team’s latest move to control more of the stack came with the ambition to influence the silicon that powers Surface by co-developing custom chips with AMD and Qualcomm.

This turn of events is quite fascinating, and we can be certain it is just the beginning of Microsoft’s efforts to more greatly influence the semiconductor components in Surface going forward. Microsoft is not going to become a chip design powerhouse the way Apple has, but it is able to benefit from the extremely clever foundational architecture innovations from AMD and Qualcomm, which made their solutions flexible enough to give Microsoft, and other customers, a wider range of options to tweak the underlying architecture. AMD and Qualcomm thought about this flexibility in their architectures long ago, and Intel has recently created a similar approach with Lakefield. Intel was late to the flexible architecture game, but it is now there as well, which means we should not be surprised if we see Microsoft work more closely with Intel to co-develop some Intel solutions for Surface products going forward.

I could write an entire article on this development, in which all three primary silicon providers are allowing for richer customization and co-design of specific chipsets. From a hardware development standpoint, this will be a fascinating area to watch, as it will likely help create more specialized computing products in the future.

The last thing I want to mention about Microsoft’s Surface hardware lineup and strategy evolution is how the core strategy has evolved. When the first Surface came out, it was all about Windows. The device featured a hardware form factor Microsoft wanted to see adopted, as well as pen and touch on a tablet design. The first Surface was absolutely a key part of the strategy to try to slow down the momentum of the iPad. While I never thought the two were real competitors, the reality was that Surface benefited as consumers (not enterprise customers) looked for a tablet that could also handle heavy productivity tasks and leverage existing workflows. The iPad requires a change of workflow for most productivity-minded humans, while the Surface/2-in-1 form factor never required a workflow change. This is why Panos Panay, who runs the Surface group, focused much of his commentary on keeping the customer in their “flow.” The workflow between different form factors is consistent when you use Microsoft software or services, and this is a relevant gap for Apple to understand, since the workflows between Mac and iPad are actually quite inconsistent when you consider their target customer.

I encourage you also to read my colleague Carolina Milanesi’s write-up on the new Surface Duo and Neo hardware; this line in particular from her column is apt for understanding how Surface has evolved:

This week, we witnessed the role of Surface devices move from being the best implementation of Windows to being the best implementation of Microsoft. This shift does not mean that Microsoft is no longer a software company, but it does mean that software does not define and limit the value that Microsoft can bring to its customers.

When I said earlier that Surface was the best first customer of Windows, it was true at the time, but that is no longer the case. Surface is now the best first customer of Microsoft. This includes Windows in some cases but is not limited to Windows; it now includes the Azure cloud, Microsoft apps, and a wider range of services. This profound strategy evolution is best seen in the Surface Duo, which runs Android and not Windows. Android is the default mobile OS for everyone but Apple, and for a device that has communications at its core, running Android is critical. Yet Duo, just like other Surface hardware, is now less about Windows and more about everything Microsoft is and wants to become. Knowing this strategy shift is a helpful framework for thinking about the future of Surface, along with the future of Microsoft, and for seeing how the Surface effect is more than just a business for Microsoft: it is a way to elevate the broader ecosystem and thus help Microsoft and its partners in much more meaningful ways.

Why Calling the Surface Duo a Phone Would Be Missing The Point

This year’s Surface event in New York felt as significant as the first Surface launch back in 2012. The critical difference, however, is that the impact Surface devices deliver today affects not just the Windows ecosystem but Microsoft as a company overall.

In just under two hours, Panos Panay introduced updates to the popular Surface Pro 7 and the Surface Laptop, now in aluminum and with a larger 15″ version running on AMD silicon. He also announced additions to the portfolio: the new Surface Pro X, running on a new custom chipset, the Microsoft SQ1, born from a collaboration with Qualcomm, and the Surface Earbuds. The reason I consider this event so significant, however, is linked to two new products that show where Surface is heading and the vision Panay and the team have for computing: Surface Duo and Surface Neo. Both are dual-screen devices, tightly intertwined in the way they encapsulate the best of Microsoft in an OS-agnostic way.

Many Windows Phone and Surface fans have been waiting for a Surface phone to be added to the portfolio for a very long time, but what was delivered this week with the Surface Duo might not be exactly what they wanted. Surface Duo must not be seen as Microsoft’s re-entry into the phone market. Yes, I know, Microsoft is making and selling a phone under the Surface brand, so its sales will show up in smartphone market share statistics, and people will go out of their way to see if Surface Duo is an iPhone or Galaxy Fold killer. But looking at the Surface Duo in this light misses the significant role this device plays in the present and future of Microsoft, not just Surface. It is only when you think about this broader impact that you can understand why we have a Surface running on Android.

A Front Row Seat for Microsoft Services

So why launch a smartphone now? If you’ve been following along over the past year or so, you will have noticed Microsoft building more ties between Windows and Android. Microsoft has been making sure that PC users can benefit from its services in the best possible way on an Android phone, and that they can feel that power amplified by first-party apps that deliver value through seamless cross-platform performance.

With the launch of Surface Duo, Surface is delivering the best Microsoft experience on an Android device. Surface Duo follows the same high standard in hardware design we are accustomed to while empowering rich and seamless workflows where the stars are the apps and the overall experience rather than the OS. Surface Duo gives a front-row seat to Outlook, Word, OneNote, and OneDrive for the millions of users who use these apps every day on their Windows 10 PCs as well as their phones. I am hoping it will also expose other apps currently on Android and iOS, like Microsoft Translator and Microsoft Pix. For me, this is the key difference between Surface Duo and any previous attempt, under Nokia and then Microsoft, to deliver a smartphone. Surface Duo is not about taking the Windows experience to a phone and attempting to create an ecosystem. Nor is it about taking users to Windows; rather, it is about meeting users where they are and creating more engagement and stickiness for Microsoft services on the most popular mobile platform.

In a world increasingly driven by the power of data and what that data enables in AI and ML, it is critical for Microsoft to drive engagement on as many platforms, and through as many apps and services, as possible, today and in the future.

The Future of Computing

The other role that Surface Duo plays is to open the way for Surface Neo. Over the years, it has been proven that changing workflows, especially around productivity, is hard. When two-in-ones and convertibles came to market, users were attracted by the designs but were reluctant to consider them laptop replacements. The resistance these devices met, and the debate surrounding what makes a PC, are still alive, especially in those enterprises where workflows are centered on legacy apps. A push toward modern work with cloud-first apps has been helping drive change. Surface Pro has been somewhat immune to many of these discussions over the years because running full Windows was enough for it to be considered a computer. But running “full” Windows might not always be necessary when the cloud is changing apps and workflows.

We do not have much detail on Windows 10X, which will be running on Surface Neo, but what we know is that it is a new expression of Windows 10 built with dual screens in mind. This means it is not a one-size-fits-all version of Windows; it is specifically designed to deliver a seamless experience on a dual-screen device while remaining familiar to users.

Time and time again, we see users bending over backward to fit their workflows around their phones. We do not question whether or not a phone is a computer; we simply use it to get things done. Surface Duo will empower users to find new workflows that take advantage of its dual-screen and highly mobile design. Because it is a phone, Surface Duo will not have to fight for a place in a portfolio of products, which means users will be heavily engaged with it.

It was evident that Microsoft was very cautious about calling the Surface Duo a phone because of its painful history. And although I agree, and explained why calling it a phone might lead people to think differently about this product, I think it’s also important to understand that history got us to Surface Duo. We saw these new Surface models this week because of what Microsoft learned, because of how Microsoft changed as a company, and, with that, how the role of Windows has changed. Microsoft is now a company that sees cloud and AI at the core of everything it does. Windows is one of its assets but not the ultimate one. Microsoft is invested in bringing an experience, through all Surface hardware, its first-party apps, and its services, that transcends operating systems and gives users value in many different ways.

This week we witnessed the role of Surface devices move from being the best implementation of Windows to being the best implementation of Microsoft. This shift does not mean that Microsoft is no longer a software company, but it does mean that software does not define and limit the value that Microsoft can bring to its customers.

America’s Senior Citizens and Screen Time

When I first started college, my major was Pre-Med with an emphasis on geriatrics. I already had my pharmacy tech credentials, and the place I worked at had multiple contracts to be the pharmacy of choice for nursing homes and some private clinics that were focused on people over 70. For two years, I was the person who went to these nursing homes and helped them manage their pharmaceutical needs.

I became very interested in the aging process and the healthcare of older adults. Even then, I observed closely how some of these older folks spent their time in these specialty care facilities. Many were bedridden and, aside from visits from family and short excursions for physical therapy and exercise, most were confined to their rooms, and their best friend was their television.

Although I switched majors after two years of college, I never forgot my work in geriatrics. Over the last 40 years, as family members at times had to be confined to nursing homes or special care facilities, I continued to observe how an older generation spends its leisure time, and even now, the television still dominates their free video-viewing time.

Recently, the Economist published a story that included a chart from Nielsen looking at how many hours are spent on media consumption across all adult age groups. The chart confirms that for those over 65, the television is still the primary choice for media viewing. But the smartphone has also become a major medium of media consumption for this age group.

Here is the Economist’s commentary on the Nielsen chart:

“According to Nielsen, a market-research firm, Americans aged 65 and over spend nearly ten hours a day consuming media on their televisions, computers, and smartphones. That is 12% more than Americans aged 35 to 49, and a third more than those aged 18 to 34 (the youngest cohort for whom Nielsen has data).

Most of that gap can be explained by the TV. American seniors—three-quarters of whom are retired—spend an average of seven hours and 30 minutes in front of the box, about as much as they did in 2015 (this includes time spent engaged in other activities while the television is blaring in the background). They spend another two hours staring at their smartphones, a more than seven-fold increase from four years ago (see chart).”

We have friends in their 70s and 80s who are still very active, and their smartphones have become very important to them. Besides being a safety net that allows them to reach family and emergency services if needed, their smartphones are also used for video, to play games, and, in some cases, to learn and keep their minds sharp.

In fact, gaming among seniors is on the rise, and some even use video games to combat aging.

The Internet itself has become a major tool for seniors.

Axess Lab did a study on facts about the elderly and the Web and found the following:

  • Those of the baby boomer generation spend around 27 hours weekly online.
  • Of the group aged over 65, seven out of ten go online daily.
  • 82% of those in both groups run online searches related to what they’re interested in.
  • 78% of seniors say that they like going online because it enables them to find the information they need easily.
  • 60% of them believe that you can stay up to date on policy and political issues by surfing the web.
  • For roughly a third of seniors, the Internet is considered a trustworthy source of information and news.
  • 20% of seniors communicate with their friends via email.
  • 75% of the elderly go online to communicate with their family and friends.
  • More than half of those classified as seniors follow an organization on social media.
  • 40% of seniors who watch videos online do so to keep up to date with breaking news.
  • 53% of the elderly research medical or healthcare issues online.
  • 54% of seniors watch videos online purely for entertainment.
  • Half of seniors say that it’s very important to play games in order to remain sharp. A further 26% state that playing games is extremely important for this reason.

The Internet, as a medium for information and entertainment for our aging population, is an important market and one that continues to grow.

AARP published a chart from the US Census Bureau projecting that older adults will outnumber children by 2035.

Marketers, take note. These stats continue to show that the senior market will only get bigger over time, and tech marketers cannot ignore it when designing hardware, software, and services in the future.

And while the TV dominates their media consumption, the computer and smartphone have become critical tools for seniors too, and they appear to be increasing their usage, making these devices much more important to their aging lifestyles.