Nvidia EGX Brings GPU Powered AI and 5G to the Edge

The concept of putting more computing power closer to where applications are occurring, commonly referred to as “edge computing”, has been talked about for a long time. After all, it makes logical sense to put resources nearer to where they’re actually needed. Plus, as people have come to recognize that not everything can or should be run in hyperscale cloud data centers, there has been increasing interest in diversifying both the type and location of the computing capabilities necessary to run cloud-based applications and services.

However, the choices for computing engines on the edge have been somewhat limited until now. That’s why Nvidia’s announcement (well, technically, re-announcement after its official debut at Computex earlier this year) of its EGX edge computing hardware and software platform has important implications across several different industries. At a basic level, EGX essentially brings GPUs to the edge, giving IoT, telco, and other industry-specific applications not typically thought of as Nvidia clients the ability to tap into general-purpose GPU computing.

Specifically, the company’s news from the MWC LA show provides ways to run AI applications fed by IoT sensors on the edge, as well as two different capabilities important for 5G networks: software-defined radio access networks (RANs) and virtual network functions that will be at the heart of network slicing features expected in forthcoming 5G standalone networks.

Nvidia’s announced partnership with Microsoft to have the new EGX platform work with Microsoft’s Azure IoT platform is an important extension of the overall AI and IoT strategies for both companies. Nvidia, for example, has been talking about doing AI applications inside data centers for several years now, but until now they haven’t been part of most discussions for extending AI inferencing workloads to the edge in applications like retail, manufacturing, and smart cities. Conversely, much of Microsoft’s Azure IoT work has been focused on much lower power (and lower performance level) compute engines, limiting the range of applications for which they can be used. With this partnership, however, each company can leverage the strengths of the other to enable a wider range of distributed computing applications. In addition, it gives software developers a consistent platform from large data centers to the edge, which should ease the ongoing challenge of writing distributed applications that can smartly leverage different computing resources in different locations.

On the 5G side, Nvidia announced a new liaison with Ericsson—a key 5G infrastructure provider—which opens up a number of interesting possibilities for the future of GPUs inside critical mobile networking components. Specifically, the companies are working out how to leverage GPUs to build completely virtualized and software-defined RANs, which provide the key connectivity capabilities for 5G and other mobile networks. For most of their history, cellular network infrastructure components have primarily been specialized, closed systems typically based on custom ASICs, so the move to support GPUs potentially provides more flexibility, as well as smaller, more efficient equipment.

For the other 5G applications, Nvidia partnered with Red Hat and its OpenShift platform to create a software toolkit called Aerial. Leveraging the software components of Aerial, GPUs can be used to perform not just radio access network workloads (which should be able to run on the forthcoming Ericsson hardware), but also the virtual network functions behind 5G network slicing. The concept behind network slicing is to deliver individualized network capabilities, tailored to specific users and applications such as AI and VR, over a single 5G network. Network slicing is a noble goal that’s part of the 5G standalone network standard but will require serious infrastructure horsepower to realistically deliver. In order to make the process of creating these specialized functions easier for developers, Nvidia is delivering containerized versions of GPU computing and management resources, all of which can plug into a modern, cloud-native, Kubernetes-driven software environment as part of Red Hat’s OpenShift.
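
Nvidia hasn’t published the low-level details of how those Aerial containers are packaged, but the general pattern of “containerized GPU computing that plugs into Kubernetes” (which OpenShift builds on) is well established: the GPUs on a node are advertised to the scheduler as an extended resource, and a workload simply requests one. The short Python sketch below illustrates that generic pattern with the standard Kubernetes Python client. The image name, pod name, and namespace are hypothetical placeholders; this is a minimal sketch of GPU scheduling in a Kubernetes environment, not Nvidia’s or Red Hat’s actual tooling.

```python
# Minimal sketch: scheduling a GPU-accelerated container on a Kubernetes
# cluster running NVIDIA's device plugin, which exposes GPUs to the
# scheduler as the "nvidia.com/gpu" extended resource.
# The image, pod name, and namespace below are hypothetical placeholders.
from kubernetes import client, config

def launch_gpu_pod():
    config.load_kube_config()  # use local kubeconfig credentials
    api = client.CoreV1Api()

    container = client.V1Container(
        name="gpu-inference",
        image="example.registry.local/edge-inference:latest",  # placeholder image
        resources=client.V1ResourceRequirements(
            limits={"nvidia.com/gpu": "1"}  # request one GPU from the device plugin
        ),
    )

    pod = client.V1Pod(
        metadata=client.V1ObjectMeta(name="edge-ai-demo"),
        spec=client.V1PodSpec(containers=[container], restart_policy="Never"),
    )

    api.create_namespaced_pod(namespace="default", body=pod)

if __name__ == "__main__":
    launch_gpu_pod()
```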

Another key part of enabling these network slicing capabilities is being able to process the data as quickly and efficiently as possible. In the real-time environment of wireless networks, that requires extremely fast access to data on the network and the ability to keep that data in memory the whole time. That’s where Nvidia’s new Mellanox connection comes in, because another key function of the Aerial SDK is a low-latency connection between Mellanox networking cards and GPU memory. In addition, Aerial incorporates a special signal processing function that’s optimized for the real-time requirements of RAN applications.

What’s also interesting about these announcements is that they highlight how far the range of GPU capabilities has expanded. Well past the early days of faster graphics in PCs, the GPUs included as part of the EGX offering now have the software support to be relevant in a surprisingly broad range of industries and applications.

Podcast: Made by Google Event, Poly and Zoomtopia, Sony 360 Reality Audio

This week’s Tech.pinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from the Made by Google hardware launch event, including the Pixel 4 smartphone, discussing new videoconferencing hardware from Poly and collaboration tools from Zoom’s Zoomtopia conference, and chatting about Sony’s new multichannel audio format release.

Hotspots, Hotspots Everywhere

Most people would agree that wireless coverage and data speeds have been getting steadily better during recent years. The differences between the major operators have narrowed. Certain key problem areas have been addressed: Verizon has alleviated capacity issues in major cities as a result of an aggressive densification program; AT&T has improved coverage and speeds with the deployment of numerous bands of spectrum; and T-Mobile’s rollout of 600 MHz has helped shore up deficiencies outside of major cities.

But just when you thought it was safe to go in the water again, the next phase of improvements to the wireless experience will be more variable, more ‘hit or miss’. ‘Premium’ wireless experiences, delivered by the rollout of 5G, Wi-Fi 6, CBRS, and so on, are going to be much more ‘hotspot’ in nature: ‘islands’ of premium data speeds or reduced latency, rather than broad coverage. Take 5G as an example. The deployment of mmWave is occurring primarily in cities, and only in select parts of those cities. mmWave, for the foreseeable future, will be more like a ‘super-hotspot’, akin to a Wi-Fi access point that works mainly outdoors over a radius of a couple of hundred feet. Even within that radius, quality will be variable, given the sensitivity of mmWave to all sorts of structures, materials, and conditions.

It’s not going to be all that different with the deployment of 5G in other bands, broadly known as ‘sub-6 GHz’. For the next couple of years, 5G NR rollouts are going to be more on a market-by-market basis, and there will be significant variability from one city to the next, as well as differences in operator deployment strategies. Add the ‘marketing’ angle to this, such as what operators call 5GE, 5G+, 5G Ready, and so on, and things will be even more complicated. Suffice it to say that there will be significant variability in the 5G experience, depending not only on the city, but also on where you are within that city, whether you’re outdoors or indoors, and which operator you’re on. The spots where the 5G experience is truly revolutionary will be limited to ‘islands’ of coverage, or select venues or locations where operators have decided to showcase 5G or where there is a particular use case.

The ‘hotspot’ nature of wireless quality improvements is not limited to the rollout of 5G. We’re just now seeing the rollout of Wi-Fi 6 (802.11ax). It was encouraging that the iPhone 11 supports Wi-Fi 6. This new generation of Wi-Fi delivers significant improvements in speed and coverage, and does a much better job of supporting a large number of devices connected to a hotspot. But we’re in the early stages of device certification. And the deployment of Wi-Fi 6 requires the purchase of new Wi-Fi access points. The cycle for Wi-Fi equipment replacement/upgrades tends to be lengthy, mainly because Wi-Fi works well in most locations, most of the time. There’s no great urgency or particularly compelling use case driving the Wi-Fi 6 deployment cycle. Don’t expect your cable company to be knocking on your door offering new Wi-Fi equipment anytime soon. Rather, for the next couple of years, Wi-Fi 6 deployments will be use-case driven, led mainly by capacity-constrained locations, such as airports.

The other ‘hotspot’ on the block is CBRS, the shared spectrum at the 3.5 GHz band. We are in very early days with respect to CBRS, with a handful of deployments. For the next year or so, we are likely to see CBRS deployed at particular venues, such as stadiums, shopping malls, and convention centers, mainly as a speed/capacity augmentation at high-traffic locations. Some enterprises might also deploy CBRS. As CBRS matures and the PAL auctions occur, deployments will become more widespread and permanent in nature.

Private LTE is another example of the ‘hotspot’ theme. We’re also in early days here, but in the coming years, we will see the deployment of Private LTE solutions by enterprises. Even there, the capability will only be at specific locations, and with a limited footprint at those locations.

The bottom line is that the next phase of improvements to the wireless experience — whether delivered by some flavor of 5G, evolution of LTE-Advanced, CBRS, Private LTE, or Wi-Fi 6 — will be deployed and delivered on a piecemeal basis, rather than broad coverage at the flick of a switch.

The other aspect of this is that these deployments will be for more specific use cases – such as fixed wireless access, the need to support high-traffic locations or venues, or ‘showcase’ locations to deliver a premium wireless experience using 5G, such as for multi-player gaming, e-sports, or AR/VR. An example: Verizon’s deployment of 5G at 13 NFL Stadiums.

Given the ‘hotspot’ nature of these new wireless experiences, I’m hoping that the operators are more forthcoming and transparent about where these services are available. With 5G, for example, it’s not OK to just say ‘mobile 5G is available in X city, or in select areas of X city’. Customers should be able to easily determine 5G coverage at least at the ‘neighborhood’ level, with some information on how good that experience is compared to prevailing 4G LTE. Icons on the phone should accurately reflect what that experience is at a particular location.

The next phase of wireless will feature some pretty remarkable improvements in coverage, speed, latency, and capacity. But these enhanced experiences will be mainly in specific locations or areas, rather than broad-based, at least for the next couple of years. Customers should adjust their expectations accordingly, and consider this in their purchase decisions. They should also press their service providers for more granular information on what sort of experience can be expected, and where.

Made By Google Is More Like Amazon Than Apple

This week was finally Pixel week. Over the past couple of months, we have seen teasers from the Made by Google team as well as leaks and even an early Best Buy Canada listing of what the Pixel 4 was meant to be. We also had some details on the Pixelbook Go, the Nest Wi-Fi, and the new Pixel Buds. What was missing, though, was how the Made by Google team was going to frame its story around these products.

I said before that how a company talks about and introduces its products is as important as the products themselves when it comes to understanding the vision and the goals of the business. This week’s launch was no different. While some industry watchers criticized the presentation for coming across as choppy, I thought it followed a similar format to the Google I/O main keynote. Product people come on stage to tell their story, talk about their creation, and highlight those aspects they think are differentiators. I appreciated the attempt to move away from a spec-sheet focus and provide more information on the thought process behind the devices and features, as well as addressing hot areas such as sustainability and privacy.

Made by Google’s chief, Rick Osterloh, framed the context around the new devices and also explained how the team thinks about the role these devices should play in users’ lives. As he talked about ambient computing and helpful technology, it was impossible not to draw parallels to how Amazon positioned its devices just a few weeks ago.

The devices are not the final product; the technology in them is. From cloud to chipsets to Google Assistant and Soli, the technology that users access is what was on stage in New York.

Helpful Technology and Ambient Computing

Rick Osterloh stressed multiple times how the hardware the team is building focuses on being helpful. The message should sound familiar as the helpful technology tagline was used by Sundar Pichai at Google I/O. If technology is helpful, it will be pervasive in our lives, and privacy will matter more. Of course, if the technology is helpful, we come to rely on it, which creates higher brand loyalty. Helpfulness also drives customer loyalty because the perceived value of the device or service is higher. So far, there has not been any talk about paid services, but I find this emphasis on helpful tech very interesting. I do wonder if framing tech in such a way opens up options for Google to switch some of its services or features to a paid model. This revenue opportunity might also include the prospect of selling their Titan M chip to partners, especially for those who want their products to be Android Enterprise Recommended.

Privacy will also matter when the devices we use disappear and computing powers services and experiences all around us. Google wants the technology to work in such a way that when everything is perfect, the devices disappear. Interestingly, this is similar to how Surface lead Panos Panay talks about his devices and how they keep you in the flow. It might seem odd that a hardware brand would want its devices to disappear, but if you use any technology, you know you don’t necessarily need to touch or look at a device to get a level of benefit that makes you love it. It is even easier to understand that when the device encapsulates value that is software- and services-driven and comes from the same company.

A Focused Hardware Approach

And so, as much as Pixel 4 might be the iPhone 11 Pro competitor and Pixel Buds 2 might be Made by Google’s take on AirPods, I cannot help but think that Made by Google’s goals are way more similar to Amazon’s than to Apple’s. They might play in the same segments as Apple does, and avoiding the comparisons is impossible, but the measure of their success will not be market share but rather the continued adoption of, and increased reliance on, Google Assistant and the services that are powered by it.

One aspect where Google and Amazon might differ in approach is in the number of devices they decide to bring to market. It is quite apparent, though, why this is the case.

First, investment and leverage. Google has had a somewhat tricky road to hardware. We all remember how much the negative Motorola numbers impacted earnings, so the investment is much more thoughtful now. It is clear Made by Google wants to reach consumers where it gets the highest return, whether on service engagement or cloud. It also means that Made by Google might try to leverage its devices more, as it did with the new Nest Wi-Fi and its Google Assistant and smart hub integration. The partner ecosystem can also help Made by Google find those segments where there is value and those where there isn’t. The first smart display product with Google Assistant was brought to market by Lenovo. Following the positive reception of the category, we saw Made by Google launch the Google Home Hub line.

The second factor that makes a difference is, of course, Pixel. The Made by Google phone allows Google Assistant to be with the user all the time. This means, for instance, that no dedicated in-car device is needed to reach users during their commute from the office to home. Amazon’s lack of a phone means it needs to deliver compelling devices for those situations where users would otherwise turn to the phone by default.

No doubt in my mind that being in the hardware, software, and services business makes perfect sense for Google, Apple, Amazon, and Microsoft. You just need to stop looking at the hardware as a stand-alone revenue generator and consider the impact it has on driving overall business revenue.

Poly Extends Collaboration Options

As simple as it may sound, one of the hottest topics in the modern workplace is figuring out how to best collaborate with your co-workers. Given the preponderance of highly capable smartphones, the ubiquity of available webcams and other video cameras, and a host of software applications specifically designed to enhance our co-working efforts, you would think it would be a straightforward problem to solve. But, in fact, companies are expending a good amount of time, effort and money trying to figure out how to make it all work. It’s not that the individual products have specific issues; rather, getting multiple pieces to work together consistently and easily in a large environment turns out to be harder and more complicated than it first appears.

Part of the challenge is that video is becoming a significantly larger part of overall inter- and intra-office communications. Thanks to several different factors including faster, more reliable networks, a growing population of younger, video-savvy workers, and enhanced emphasis on remote collaboration, the idea of merely talking to co-workers, customers and work colleagues is almost starting to sound old-fashioned. Yet, despite the growth in video usage, just under 5% of conference rooms are currently video enabled, presenting a large opportunity for companies looking to address those unmet needs. Plus, our dependence on smartphones has reached deep into the workplace, creating new demands for products that can let smartphone-based video and audio calls be more easily integrated into standard office workflows.

A number of companies are working to address these issues from both a hardware and software perspective, including Poly, the combined company formed by last year’s merger of Polycom and Plantronics, Zoom, the popular videoconferencing platform, and, of course, Microsoft, among many others. At this year’s Zoomtopia conference, Poly took the wraps off a new line of low-cost dedicated videoconferencing appliances, the Poly Studio X30 and Studio X50, both of which can natively run the Zoom client software, as well as other Open SIP-compliant platforms without the need for a connected PC.

The soundbar-shaped devices are built around a Qualcomm Snapdragon 835 SoC, run a specialized version of Google’s Android, and feature a 4K-capable video camera, an integrated microphone array, and built-in speakers. In conjunction with the Zoom application, they allow organizations to easily create a Zoom Room experience in a host of physically different-sized spaces, from huddle rooms to full-size conference rooms. Plus, because they’re standalone, they can be more easily managed from an IT perspective, offer more consistent performance, and avoid the challenges end users face if they don’t have the right versions of communication applications when connecting to USB-based video camera systems.

Leveraging the compute horsepower of the Qualcomm SOC, both devices also include several AI-driven software features called PolyMeeting AI, all of which are designed to improve the meeting experience. Optimizations for audio include the ability to filter out unwanted background noises, while new video features offer clever ways of providing professional TV production-quality video tweaks, doing things such as focusing on the current speaker, seeing overall meeting context and more.

Poly is also working with Microsoft’s Teams platform in another range of products called the Elara 60 series that essentially turn your smartphone into a deskphone. Most versions of the Elara include an integrated speakerphone, a wireless Bluetooth headset, and an integrated Qi wireless charger that can be angled to provide an easy view of your smartphone’s display. By simply placing your smartphone on the device and pairing it via Bluetooth, you can get the equivalent of a desktop phone experience with the flexibility and mobility of a smartphone. Plus, thanks to the integration with Microsoft Teams, there’s a dedicated Teams-logoed button that lets you easily initiate or join a Teams-driven call or meeting—a nice option for companies standardizing on Teams as their unified communications platform.

Of course, the reality is that most organizations need to support multiple UC platforms because even if they make their own choice for internal communications, there’s no way to know or control what potential customers and partners may be using. Given the diversity and robustness of several different platform choices—including Zoom and Teams, but also BlueJeans, GoToMeeting, Webex, RingCentral, and Skype, among others—what most organizations want is a software-based solution that would allow them to easily switch to whatever platform was demanded for a given call or meeting. While that may seem somewhat obvious, the reality is that most videoconferencing products came from the AV industry, which was literally built on decades of proprietary platforms.

Thankfully, we’re reaching the point where it’s now possible to build collaboration and videoconferencing devices based on standard operating systems, such as Android, and then simply run native applications for each of the different communications platforms that are required. We’re not quite there yet, but it’s clear based on some of these new offerings that we are getting much closer.

Podcast: Arm TechCon, China Apps Controversy, Libra Meltdown

This week’s Tech.pinions podcast features Ben Bajarin and Bob O’Donnell analyzing the announcements from the Arm TechCon event, discussing the controversies around tech companies agreeing to Chinese government demands, and chatting about the quick meltdown of Facebook’s Libra cryptocurrency efforts.

Perception: The Biggest Hurdle In Broadening Your Business

Data and Artificial Intelligence (AI) are enabling device and solution providers in the enterprise space to expand and somewhat reinvent their business. Some have done so out of necessity to remain current and fence off competition from new entrants, while others have done so simply because they saw the opportunity to grow their revenue.

One area in particular where we have seen a lot of change over the past couple of years is collaboration and communication. The shift has been driven partly by new apps that entered the workplace, but mostly by new workflows that are less siloed. Finally, communication and collaboration are intertwined the way they are supposed to be.

If you think back to a time before Slack, Teams, Zoom, and BlueJeans, communication and collaboration were pretty independent. We used single-purpose apps, and most people did not want or need to collaborate in real time the way they do now. Better connectivity and increased mobility have redefined the way we work and how we see time-critical tasks. We moved from snail mail to email, and now we have been moving from email to live messaging for instant gratification, even on answers that are not time-sensitive. And so, we collaborate even when we are just communicating, because the interactions we now have are real-time. In turn, the greater importance we give to these interactions, along with more flexible work conditions, has increased our reliance on video conferences, smart boards, and more.

The devices, as well as the apps and solutions we have been using, have grown in capabilities and intelligence to be more comprehensive than they used to be. With that change, brands have had to learn how to talk about, distribute, and position their products. They also must consider how much they want to deliver on their own rather than find a partner.

Two brands come to mind as an example of how far their business has evolved, and the challenges they face with the perception people have of them: Citrix and Poly. Both names should be familiar to you as they have been very visible players in the enterprise market in the digital workspace and unified communication, respectively.

You Are on a Journey…

Despite being in different businesses, both companies have walked a similar path on which, directly and through acquisitions, they have developed their core business into a much broader set of services and products. Most importantly, they transitioned from selling products and services to selling solutions that bring those together. Both companies went through a transition: Citrix from networking and virtual desktops to digital workspace solutions, and Poly from the UC focus of Polycom and the headset expertise of Plantronics to a workplace solution for optimal collaboration no matter your location or which conferencing provider you use.

What is fascinating about these two brands is that their transition was not just a marketing and branding exercise. They actually did the work, acquired the talent, and listened to what their customers were telling them. Despite this, they face similar challenges in getting the broader market to understand their transformation: they changed, and they took their customers along on the journey, but industry watchers did not always tag along.

…Choose Your Fellow Travelers Carefully

We know technology often moves at a faster pace than we humans can understand, embrace, or accept. Over the past few years, however, technology has also enabled changes in business models, go-to-market, solutions, and services that require a new way to assess brands and segments such as collaboration and communication.

Market disruptors are often not easy to plot on a wave or a quadrant because they get into a market and change the rules by which they are supposed to be measured. The same can be said for brands that transition their business. Think about how Uber would have fared if measured against a traditional taxi company or a limousine service: not very well, right? But that was not the point. Uber was not trying to be either, and to make its service understood, it needed to rely on analysts and press who grasped the gig economy rather than those who covered the travel and transportation vertical.

It is a fine balance, but brands must invest in reaching out to those who cover the markets they are reaching for, as well as continue to foster their relationships with those who have covered them in the past. This might require some time to cover the basics of who the company is and what it stands for. It might also require some patience from spokespeople, who might consider the new audience uninformed. Finally, it will require a different way of communicating that focuses on the solution and its business impact rather than the detailed specs of a product. The investment will be well worth it, as this new audience brings an understanding of the new market, the broader competitive landscape, and ultimately the right reach into the partners and clients you want to influence.

Neither Citrix nor Poly is done yet with its transformative journey. Artificial intelligence and machine learning will bring new opportunities to deliver vital information on how their customers use their solutions. As they move further into AI and ML, they will get on the radar of those who cover data centers, edge computing, and cloud, and so their outreach will have to shift to include those who have been covering these areas without ever considering Citrix or Poly as players.

 

Broadening or reinventing your business does not mean you need to change your core values, but it might mean you need to learn to talk about your business differently. Telling your audience who and what you are is as important as telling them who and what you are not, no matter how many times someone tries to force you into a preset mold.

Arm Extends Reach in IoT

One of the more interesting and challenging markets that the tech industry continues to focus on is the highly touted Internet of Things, or IoT. The appeal of the market is obvious—at least in theory: the potential for billions, if not trillions, of connected devices. Despite that seemingly incredible opportunity, the reality has been much tougher. While there’s no question that we’ve seen tremendous growth in pockets of the IoT market, it’s fair to say that IoT overall hasn’t lived up to its initial hype.

A big part of the problem is that IoT is not one market. In fact, it’s not even just a few markets. As time has gone on, people are realizing that it’s hundreds of thousands of different markets, many of which only amount to unit shipments measured in thousands or tens of thousands.

In order to succeed in IoT, therefore, you need the ability to customize on a massive scale. Few companies understand this better than Arm, the silicon IP (intellectual property) and software provider whose designs sit at the heart of an enormous percentage of the chips powering IoT devices. The company has a huge range of designs, from its high-performance Cortex-A series, through its mid-range, real-time focused Cortex-R series, down to its ultra-low-power Cortex-M series, that are used by its hundreds of chip partners to build silicon parts that power an enormous range of different IoT applications.

Even with that diversity, however, it’s becoming clear that more levels of customization are necessary to meet the increasingly specialized needs of the millions of different IoT products. To better address some of those needs, Arm made some important, but easy to overlook, announcements at its annual Arm TechCon developer conference in San Jose this week.

First, and most importantly, Arm announced a new capability to introduce Custom Instructions into its Cortex-M33 and all future Armv8-M series processors at no additional cost, starting in 2020. One of the things that chip and product designers have recognized is that co-processors and other specialized types of silicon, such as AI accelerators, are starting to play an important role in IoT devices. The specialized computing needs that many IoT applications demand are placing strains on the performance and/or power requirements of standard CPUs. As a result, many are choosing to add secondary chips to their designs upon which they can offload specialized tasks. The result is generally higher performance and lower power, but with additional costs and complexities. Most IoT devices are relatively simple, however, and a full co-processor is overkill. Instead, many of these devices require only a few specialized capabilities—such as listening for wake words on voice-based devices—that could be handled by a few instructions. Recognizing that need, Arm’s Custom Instructions addition allows chip and device designers to get the customized benefits of a co-processor built into the main CPU, thereby avoiding the costs and complexities they normally add.

As expected, Arm is providing a software tool that makes the process of creating and embedding custom instructions into chip designs a more straightforward process for those companies who have in-house teams with those skill sets. Not all companies do, however, so Arm will also be offering a library of prebuilt custom instructions, including AI and ML-focused ones, that companies can use to modify their silicon designs.

What’s particularly clever about the new Custom Instructions implementation—and what allowed it to be brought to market so quickly and with no impact to existing software and chip development tools—is that the Custom Instructions go into an existing partition in the CPU’s design. Specifically, they’re replacing instructions that were used to manage a co-processor. However, because the custom instructions essentially allow Arm’s chip design partners to build a mini co-processor onto the Arm core itself, in most situations, there’s no loss in functionality or capability whatsoever.

Of course, there’s more to any device than hardware, and Arm’s core software IoT announcement at TechCon also highlights its desire to offer more customization opportunities for IoT devices. Specifically, Arm announced that it was further opening up the development of its Mbed OS for IoT devices by allowing core partners to help drive its direction. The new Mbed OS governance program, which already includes participation from Arm customers such as Samsung, NXP, Cypress and more, will allow more direct involvement of these silicon makers in the future evolution of the OS. This lets them focus on things like low-power battery optimizations for the specific types of devices chip vendors need to better differentiate their product offerings.

There’s little doubt that the IoT market will eventually be an enormous one, but there’s also no doubt that the path to reach that size is a lot longer and more complicated than many first imagined. Mass customization of the primary computing components and the software powering these devices is clearly an important step toward those large numbers, and Arm’s IoT announcements from TechCon are an important step in that direction. The road to success won’t be an easy one but having the right tools to succeed on many different paths should clearly help.

Why Calling the Surface Duo a Phone Would Be Missing The Point

This year’s Surface event in New York felt as significant as the first Surface launch back in 2012. The critical difference, however, is that the impact that we see Surface devices deliver today affects not just the Windows Ecosystem but Microsoft as a company overall.

In just under two hours, Panos Panay introduced updates to the popular Surface Pro 7 and the Surface Laptop, the latter now available in aluminum and in a larger 15″ size running on AMD silicon. He also had some additions to the portfolio: the new Surface Pro X running on a new custom chipset, the Microsoft SQ1, born from a collaboration with Qualcomm, and the Surface Earbuds. The reason why I consider this event so significant, however, is linked to two new products that show where Surface is heading and the vision Panay and the team have for computing: Surface Duo and Surface Neo. Both are dual-screen devices, tightly intertwined in the way they encapsulate the best of Microsoft in an OS-agnostic way.

So many Windows Phone and Surface fans have been waiting for a Surface phone to be added to the portfolio for a very long time. But what was delivered this week with the Surface Duo might not be exactly what they wanted. Surface Duo must not be seen as Microsoft’s re-entry into the phone market. Yes, I know, Microsoft is making a phone and selling a phone under the Surface brand, so those sales will show up in smartphone market share statistics, and people will go out of their way to see if Surface Duo is an iPhone or Galaxy Fold killer. Looking at the Surface Duo in this light misses the significant role that this device has for the present and the future of Microsoft, not just Surface. It is only when you think about this broader impact that you can understand why we have a Surface running on Android.

A Front Row Seat for Microsoft Services

So why launch a smartphone now? If you’ve been following along over the past year or so, you have noticed Microsoft building more ties between Windows and Android. Microsoft has been making sure that PC users could benefit from their services in the best possible way on an Android phone, but also that they could feel that power amplified by first-party apps that deliver value through a seamless cross-platform performance.

With the launch of Surface Duo, Surface is delivering the best Microsoft experience on an Android device. Surface Duo follows the same high standard in hardware design we are accustomed to while empowering rich and seamless workflows where the stars are the apps and the overall experience rather than the OS. Surface Duo gives a front-row seat to Outlook, Word, OneNote, and OneDrive for the millions of users who use these apps every day on their Windows 10 PCs as well as their phones. I am hoping it will also expose other apps currently on Android and iOS, like Microsoft Translator and Microsoft Pix. For me, this is the key difference between Surface Duo and any previous attempt, under Nokia or Microsoft, to deliver a smartphone. Surface Duo is not about taking the Windows experience to a phone and attempting to create an ecosystem. It is also not about taking users to Windows; rather, it is about meeting users where they are and creating more engagement and stickiness for Microsoft services on the most popular mobile platform.

In a world that is more and more driven by the power of data and what that data enables in AI and ML, it is critical for Microsoft to drive engagement on as many platforms, and through as many apps and services, as possible, both today and in the future.

The Future of Computing

The other role that Surface Duo plays is to open the way for Surface Neo. Over the years, it has been proven that changing workflows, especially around productivity, is hard. When two-in-ones and convertibles came to market, users were attracted by the designs but were reluctant to consider them as laptop replacements. The resistance these devices were met with and the debate surrounding what makes a PC are still alive, especially in those enterprises where workflows are centered on legacy apps. A push towards modern work with cloud-first apps has been helping drive change. Surface Pro has been somewhat immune to many of these discussions over the years because running full Windows was enough to be considered a computer. But running “full” Windows might not always be necessary when the cloud is changing apps and workflows.

We do not have much detail on Windows 10X, which will be running on Surface Neo, but what we know is that it is a new expression of Windows 10 built with dual screens in mind. This means it is not a one-size-fits-all version of Windows; it is specifically designed to deliver a seamless experience on a dual-screen device while remaining familiar to users.

Time and time again, we see users bending over backward to fit their workflows around their phones. We do not question whether or not that phone is a computer; we simply use it to get things done. Surface Duo will empower users to find new workflows that take advantage of its dual-screen, highly mobile design. Because it is a phone, Surface Duo will not have to fight for a place in a portfolio of products, which means that users will be heavily engaged with it.

It was evident that Microsoft was very cautious about calling the Surface Duo a phone because of its painful history. And although I agree, and have explained why calling it a phone might lead people to think differently about this product, I think it’s also important to understand that history got us to Surface Duo. We saw these new Surface models this week because of what Microsoft learned, because of how Microsoft changed as a company, and, with that, how the role of Windows has changed. Microsoft is now a company that sees cloud and AI at the core of everything it does. Windows is one of its assets but not the ultimate one. Microsoft is invested in delivering, through all Surface hardware, its first-party apps, and its services, an experience that transcends operating systems and gives users value in many different ways.

This week we witnessed the role of Surface devices move from being the best implementation of Windows to being the best implementation of Microsoft. This shift does not mean that Microsoft is no longer a software company, but it does mean that software does not define and limit the value that Microsoft can bring to its customers.

Five Important New Terms to Know About 5G

In yesterday’s Tech.pinions column, Bob O’Donnell did a great job of providing a 5G Status Report, based in part on having attended the Qualcomm 5G Workshop in San Diego and the 5G Americas Analyst event in Dallas last week. I had the opportunity to attend the same two events, and thought I’d use the opportunity to build on Bob’s piece with a bit of a different take on 5G.

So far, initial 5G deployments are mainly focused on the eMBB (enhanced mobile broadband) pillar of 5G: basically, faster download and upload speeds that are averaging 3-5x better than typical LTE. The one unique ‘use case’ for 5G so far is fixed wireless access (FWA), deployed mainly by Verizon in very limited parts of five cities, with the objective of providing a competitive alternative to fixed broadband.

But the real promise of 5G, over the long-term, rests on the other two ‘pillars’ of 5G: Massive IoT, and Ultra Low Latency. The capabilities to open up new markets and new use cases are highly dependent on the next ‘phase’ of 5G, in what’s called 3GPP Release 16 (R16), which will likely be approved in 2020, with commercial availability of some of its aspects arriving in 2021.

In preparation for that, here are five important aspects that will be critical in the development of 5G opportunities beyond enhanced mobile broadband. Be prepared – they’re a mouthful.

Ultra-Reliable Low-Latency Communication (URLLC). Now say that five times quickly, with all the dashes in the right places. This is a critical feature of 5G that delivers latencies of below 10 milliseconds (ms) and perhaps below 1 ms. These low latencies are better than what can be accomplished today on many fixed networks. They open up important use cases in the consumer realm, such as in gaming and AR/VR, as well as important new enterprise sectors, such as motion control in manufacturing or the factory floor.

Dynamic Spectrum Sharing (DSS). Notwithstanding the fact that this acronym will yield a very different (and less fortunate) type of thing in search results, DSS means that operators can reuse existing LTE bands for 5G, and that 5G NR and LTE can operate on the same band, simultaneously, with a simple software upgrade. This is going to be very important, for two reasons. First, it will enable operators to get to broader 5G coverage quickly, rather than relying exclusively on new 5G-centric bands. Second, this is a hedge against mmWave. It would allow Verizon, for example, whose 5G strategy is mmWave centric, to have broader options for its 5G deployment strategy, especially in advance of new mid-band spectrum becoming available.

Time Sensitive Networking (TSN). TSN supports time synchronization and dual connectivity, and provides deterministic performance. This means guaranteed packet transport with low and bounded latency, low packet delay variation, and low packet loss. This is a key requirement for some Industrial IoT applications and use cases, such as manufacturing/factory floor. Although there are some aspects of TSN present in LTE, there are significant enhancements embedded within the URLLC specs of R16. Those who are looking seriously at the Industrial IoT sector should familiarize themselves with TSN.

Coordinated Multipoint (CoMP). CoMP allows connections to several base stations (eNodeBs, in LTE parlance) at once. CoMP started to be used more aggressively in LTE Advanced as a way of improving service at the cell edge, by utilizing multiple eNBs to boost the signal and reduce interference. But CoMP takes on even greater importance in 5G. Whereas Massive MIMO increases capacity and extends coverage to the cell edge in a macro environment, CoMP delivers some of those same capabilities in a small cell environment, which is why CoMP is also sometimes referred to as ‘Distributed MIMO’. The capacity gains enabled by 5G CoMP will be important in small cell-based enterprise and venue deployments, and the latency improvements will have applications in the Industrial IoT realm.

5G NR-U. Just when you thought you understood the distinction between 5G, 5G NR, 5G NSA, and 5G SA, along comes 5G NR-U [unlicensed]. Over the past several years, there have been important developments in the use of unlicensed spectrum to expand mobile connectivity, notably LAA and MulteFire. R16 will include an LAA version of 5G NR in unlicensed spectrum (5G NR-U) that relies on a licensed anchor, as well as a standalone version of 5G NR-U that can be used by carriers and/or any entities that don’t control any licensed spectrum of their own. This will be another tool for private/enterprise deployments. Even more importantly, there’s potential harmonization with Wi-Fi 6, which itself employs many of the characteristics of 5G. Wi-Fi and cellular have always been a bit of a binary discussion. But the potential for cross-fertilization of Wi-Fi 6 and 5G is an under-recognized area of opportunity.

A 5G Status Report

Few technologies are expected to have as big an impact as 5G—the next generation wireless broadband connection standard—and now that we’ve finally started seeing the first 5G-enabled devices and real-world deployments, it’s worth taking a look at where things currently stand and how they’re likely to evolve.

Fortunately, I’m now in a much stronger position to do that as the result of two different 5G-focused events I attended last week. Qualcomm’s 5G Workshop at their headquarters in San Diego emphasized core 5G technologies and the work that the company has done to evolve and integrate those technologies into semiconductor-based components. Specifically, they highlighted their work on 5G modems, RF (radio frequency) transceivers, RF front ends for fine-tuning the raw radio signals, and systems that integrate all three components into complete solutions. The 5G Americas analyst event in Dallas (TX) provided the telco carriers’ and network infrastructure equipment companies’ angle on the status of today’s 5G networks throughout the US and Latin America. It also included a session with FCC commissioner Michael O’Rielly that dove into the hot topic of radio frequency spectrum issues in the US.

The two events dovetailed nicely and offered an interesting and comprehensive perspective on today’s 5G realities. What became clear is that although 5G will eventually enable a whole wealth of new applications and possibilities, for the near-term, it’s primarily focused on faster cellular networks for smartphones, or what the industry likes to call enhanced mobile broadband (eMBB). Within that world of faster 5G cellular networks, there are two very important and widely recognized sub-groups that are divided by the different radio frequencies within which they each operate: millimeter wave frequencies (typically 24 GHz and higher—so named because their wavelengths are measured in single millimeters) and those collectively referred to as sub-6, shorthand for frequencies below 6 GHz. (As a point of reference, all current 4G LTE radio transmissions are done at frequencies below 3 GHz.)

Though it might seem a bit arcane, the distinction between frequencies is an extremely important one, because it has a huge impact on both the type of 5G services that will be available and the equipment necessary to enable them. Basically, millimeter wave offers very fast performance, but only over short distances and within certain environments. Conversely, sub-6 frequencies allow wider coverage, but at slower speeds. To make either of them work, you need network equipment that can transmit those frequencies and devices that are tuned to receive those frequencies. While that seems straightforward, the devil is in the details, and there are a wide variety of factors that can impact the ability for these devices and services to function properly and effectively. For example, just because a given smartphone supports some millimeter wave frequencies doesn’t mean it will work with the millimeter wave frequencies used by a given carrier—and the same is true for sub-6 GHz support. Bottom line? There’s a lot more for people to learn about the different types of 5G than has ever been the case with other wireless network generation transitions.

Just to complicate things a bit more, one of the more interesting takeaways from the two events is that there’s actually a third group of frequencies that’s becoming a critical factor for 5G deployments in many countries around the world—but is still waiting to be deployed in the US. The C-Band frequencies (in telecommunications parlance, typically the frequencies from 3.5 GHz to 4.2 GHz), though technically part of the sub-6 group, offer what many consider to be a very useful compromise of both better performance and wider coverage than other frequency options above and below that range. In the US, the problem is that this set of frequencies (which happen to measure in the single centimeter range, though that does not seem to be the origin of the C-band name) is not currently available for use by telecom carriers. Right now, they’re being used for applications in defense and private industry, but as part of its spectrum modernization and evolution process, the FCC is expected to open up the frequencies and auction them off to interested parties like the major telcos in 2020.
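
As a side note, the band names map directly to simple physics: wavelength is just the speed of light divided by frequency. The short Python snippet below is a worked example (my own illustration, not anything from the two events) using a few commonly cited frequencies, showing why bands above 24 GHz get described in millimeters, why the 3.5-4.2 GHz C-Band lands in the single-digit centimeter range, and why low-band spectrum like 600 MHz comes out closer to half a meter.

```python
# Worked example: converting carrier frequency to wavelength (lambda = c / f).
# The specific frequencies are illustrative values for the bands discussed above.
C = 299_792_458  # speed of light in meters per second

bands = {
    "mmWave (28 GHz)": 28e9,
    "mmWave (39 GHz)": 39e9,
    "C-Band (3.7 GHz)": 3.7e9,
    "Low-band (600 MHz)": 600e6,
}

for name, freq_hz in bands.items():
    wavelength_m = C / freq_hz
    print(f"{name}: {wavelength_m * 100:.1f} cm ({wavelength_m * 1000:.1f} mm)")

# Output (approximately):
#   mmWave (28 GHz): 1.1 cm (10.7 mm)
#   mmWave (39 GHz): 0.8 cm (7.7 mm)
#   C-Band (3.7 GHz): 8.1 cm (81.0 mm)
#   Low-band (600 MHz): 50.0 cm (499.7 mm)
```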

Another interesting insight from the two events is that there are some important differences between the theory of what a technology can do and the reality of how it gets deployed. In the case of millimeter wave, for example, Qualcomm showed some impressive (though admittedly indoor) demos of how the technology can be used in more than just line-of-sight applications. The initial concern around this technology was that you needed to have a direct view from where you were standing with a smartphone to a millimeter wave-equipped cell tower in order to get the fastest possible speeds. With the Qualcomm demo, however, people were able to walk behind walls, and even into conference rooms, and still maintain the download speed benefits of millimeter wave, thanks to reflections off walls and glass. When asked why early real-world tests of 5G devices didn’t reflect this, the company essentially said that the early networks weren’t as dense as they needed to be and weren’t configured as well as they could be. Carrier representatives and network equipment makers at 5G Americas, however, countered that the reality of outdoor interference from existing 4G networks and the strength of those signals meant that—at least for the near term—real-world millimeter wave performance will be limited to line-of-sight situations.

An interesting takeaway from the demo, and the subsequent conversations, is that millimeter wave-based 5G access points could prove to be a very effective alternative to Wi-Fi in certain indoor environments. Faster download speeds and wider coverage mean that fewer 5G small cells would have to be deployed than Wi-Fi access points in a given building, potentially leading to lower management costs. Plus, technologies have been developed to create private 5G networks. As a result, I think millimeter wave-based 5G indoor private networks could prove to be one of the sleeper hits of this new technology.

Even though most of the current 5G efforts are focused on speeding up cellular connections, it became clear at both events that there are, in fact, a lot of interesting applications still to come in industrial environments, automotive applications, and more. Another important point that was emphasized at both events is that the initial launch of 5G is not the end of the technological development process, but just the beginning. As with previous cellular network generations, the advancements related to 5G will come in chunks roughly every 18-24 months. These developments are driven by the 3GPP (3rd Generation Partnership Project—a worldwide telecom industry standards group originally formed to create standards for 3G) and their release documents. The initial 5G launch was based on Release 15. However, Release 16 is expected to be finalized in January of 2020, and that will enable a whole other set of capabilities, all under the 5G umbrella. In addition, a great deal of work has already been done to start defining the specifications for Release 17, which is expected sometime in 2022.

The bottom line is that we’re still in the very early stages of what’s expected to be a decade-long evolution of technology and applications associated with 5G. Initial efforts are focused on faster speeds, and despite some early hiccups, excellent progress is being made with much more to come in 2020. The two conferences made it clear that the technologies underpinning next generation wireless networks are very complicated ones, but they’re also very powerful ones that, before we know it, are likely to bring us exciting new applications that we’ve yet to even imagine.

At OC6 Facebook Pushes Virtual Reality Forward

The buzz around Virtual Reality has faded for many, but this hasn’t dampened Facebook’s enthusiasm for the technology. After spending two days at the company’s Oculus Connect 6 conference, it’s easy to see why Mark Zuckerberg and the Facebook team still see a bright future for VR. The company talked about the recent success of its Oculus Quest headset and made a handful of interesting announcements around better content availability, upgrades to existing hardware, and the path to increased commercial adoption. It also announced its next attempt at a VR-based social app, called Horizon. I found the company’s story around three of the four topics compelling.

Bringing Go and Rift Content to Quest
During his opening keynote, Zuckerberg said that sales of the Oculus Quest—Facebook’s newest standalone VR headset—have exceeded the company’s expectations. That’s not a terribly meaningful metric for the outside world, but I can tell you that in IDC’s 2Q19 numbers the Quest helped Facebook capture more than 38% of a 1.5-million headset worldwide market, with astounding year-over-year growth.
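
To put those IDC figures in rough perspective, a bit of back-of-the-envelope arithmetic (my own calculation, not an IDC or Facebook number) translates that share into unit volume:

```python
# More than 38% of a roughly 1.5-million-unit quarterly headset market
# implies Facebook shipped on the order of 570,000+ headsets in 2Q19,
# across its whole Oculus lineup.
market_units = 1_500_000   # approximate worldwide headset market, 2Q19
facebook_share = 0.38      # "more than 38%"

facebook_units = market_units * facebook_share
print(f"Implied Facebook shipments: at least {facebook_units:,.0f} units")
# -> Implied Facebook shipments: at least 570,000 units
```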

Perhaps more importantly, he and other executives went on to note that people are using the new headsets more often, and for longer periods of time, than any of the other Oculus headsets. This includes the lower-cost standalone Oculus Go and the PC-tethered Oculus Rift and new Rift S. Sales of software on the Quest are also taking off, and Facebook announced that it already represents about 20% of the more than $100M in Oculus app sales to date.

To feed that growth, Facebook is taking steps to make sure that existing content for both the Go and the Rift will run on Quest in the future. When I discussed my initial experience with the Quest, I lamented a good, but not great, app selection. In addition to trying to bring more developers into the ecosystem, the company is also working to bring more existing content to the Quest.

The first step—bringing Go content, much of which began life as content for the Samsung-made Oculus Gear VR—is a straightforward process, as the earlier hardware is less capable than today’s Quest. Effective September 26, Facebook has made many of the most popular Go apps available on the Quest.

Even more interesting: Facebook is working to bring popular apps that run on the Oculus Rift and Rift S to the Quest. Those apps rely on the additional horsepower of a connected PC to run on the Rift headsets, while the Quest uses a less powerful mobile processor under the hood. To address this, Facebook announced a new technology called Oculus Link, which will let Quest owners connect their headset to a PC using a USB Type-C cable to utilize the PC’s processing and graphics.

Frankly, it’s a brilliant move. I’d argue that today’s Quest offers among the best VR experiences on the market, but its mobile chipset means it can’t drive the same level of resolution as a tethered system. The upside is that the Quest feels more immersive because you’re not constantly dealing with the PC tether. With Oculus Link, Quest owners can have the best of both worlds: the ability to play in standalone or tethered mode. Better still, the upgrade will be free. Quest owners will need a high-end USB Type-C cable to make it work. While many of us have these cables, few will likely have one long enough for this job. Facebook will sell a premium cable when the upgrade rolls out in 2020.

VR Without Controllers
Perhaps the most notable technical announcement at the show was that in 2020 Oculus will bring hand-tracking technology to the existing Quest headset, utilizing its existing integrated cameras. That’s right: no additional add-on cameras or room sensors required. During the keynote, and in later track sessions, various Facebook executives lauded the numerous advantages of hand-tracking over today’s touch controllers. Chief among them: a more frictionless input modality, improved social presence, and enhanced self-presence. Yes, when you can see your own hands—sans controllers—the entire experience is more immersive.

In addition to driving interesting new gameplay options and creating improved social interaction opportunities, hand-tracking has other advantages: I heard from several large companies about its value for soft-skills enterprise training. For example, when learning how to handle challenging HR situations in VR, the training is more lifelike when the trainee is not holding touch controllers during the exercise.

There will still be plenty of VR situations where hand-held controllers are necessary, or even a better experience, but bringing hand-tracking to the Quest has the potential to drive a big shift in how people use the headset. And I’m heartened to see Oculus leveraging the existing hardware and bringing the feature to existing customers without the need to buy new accessories. The company’s willingness to continue to iterate on the platform will make both consumer and commercial buyers more willing to invest in the Quest.

Focus on Commercial Use
Until now, I’d say that the folks on HTC’s Vive team have done a better job of telling a strong commercial story, especially as they iterated on the Vive Pro hardware to bring more training-centric capabilities. At Connect, however, Oculus talked a great deal about Oculus for Business, its commercial platform currently in beta and set to launch in November. The company is putting together a very compelling commercial story that takes advantage of the strong capabilities of the Quest hardware.

I sat in on a session where Oculus walked attendees through the steps necessary to take a company from proof of concept to pilot to deployment. I’ll write more about this in a future column, but for now, I’ll say Oculus is asking the right questions and thinking about all the right things. Perhaps the biggest challenge it faces is convincing enterprise organizations that it knows how to help deploy and manage commercial hardware and software at scale.

To that end, Facebook announced a trusted channel partner network that includes big companies in that space such as Synnex and TechData. Perhaps just as importantly, it is also currently standing up an ISV (independent software vendor) program with companies that have experience deploying commercial VR solutions, have handled pilots or deployments with Global 2000 customers, and have experience with VR training, simulation, or collaboration use cases. In other words, Facebook has realized that it doesn’t have all the answers when it comes to commercial VR, and it’s wisely looking for the best partners to help it address the areas where it still needs to learn.

On the Horizon
Facebook made a lot of interesting announcements this week at Oculus Connect. And the 90-minute session by John Carmack, which included an unvarnished postmortem on what went wrong with the Gear VR, is a must-listen for anyone who follows this market. But the one announcement that failed to land with me is the one where Facebook is likely spending the most money and effort: the launch of Facebook Horizon.

Facebook describes Horizon as “an interconnected and ever-expanding social VR world where people can explore new places, play games, and build communities.” It sounds interesting, on paper. But even watching the video, which shows increasingly lifelike (although still legless) avatars interacting in a perfectly rendered digital world, I kept thinking, “but what are you going to do in there?” It’s not clear to me that Facebook knows the answer to that.

Facebook Horizon will launch into beta in 2020, and the company hasn’t said when it would roll out to a larger audience. I’ll withhold judgment on Horizon until I’ve had a chance to experience it myself. Perhaps the company will surprise me. I did, however, think it was telling that during his talk even Carmack acknowledged Facebook continues to struggle to figure out the right way to do social in VR.

In the meantime, I’m very much looking forward to seeing Facebook roll out all the other updates to Oculus Quest. More apps, support for PC tethering, and integrated hand tracking should all help drive the market forward in meaningful ways. And I’ll be watching as the company ramps up to launch Oculus for Business later this year.

Amazon’s Event and Its Fifteen Alexa Incarnations

Based on my experience last year, I was expecting Amazon’s event to be packed full of products, and it was. Yet the more I listened to Dave Limp walk us through everything new, the clearer it became that there is only one product on which not just the event but the entire device line is focused, and that is Alexa. I know, you might think I am stating the obvious here, but what I mean is that Amazon considers Alexa the actual product it sells. If you think about it this way, it becomes much easier to understand why we see Amazon invest in so many hardware categories. This approach makes Amazon a very different hardware vendor, and not just because it is prepared to break even. What we saw at the event were devices that did one of three things: expanded use cases for current users, lowered the barrier to entry for new users, and helped Alexa get outside the home.

The Elephant in the Room: Privacy

Before getting to the new hardware, Dave Limp addressed the privacy concerns raised in the press over the past few months. Aside from reiterating users’ ability to delete any recording, he also introduced new ways in which consumers can interact with Alexa to find out more about why Alexa does certain things.

These two simple utterances: “Alexa, tell me what you heard?” and “Alexa, why did you do that?” help Amazon do three things:

  • Educate users on why and how things happen. Asking Alexa why music started playing and being told that someone in another room, on another device, asked to play that music, or asking Alexa why she answered when you did not actually mean to engage and having her explain that she heard her name when you might have said “Alex,” all help users understand how the underlying technology works. It turns some of what might be perceived as secret magic into a rational explanation, increasing transparency.
  • Make users feel more in control, not just of their own data but also in their relationship with Alexa.
  • Finally, it continues to build trust and a bond through these exchanges, as it is Alexa herself who is explaining to users what is happening.

Ultimately, Amazon and all other providers of digital assistants will continue to be scrutinized, and rightly so, as we put more and more of our lives into their hands. Finding the right balance between wanting users to share data to improve performance and relevance while being very transparent about how such data is used will remain a key driver of trust, engagement, and loyalty.

Driving New Points of Engagement and Creating New Points of Entry

Amazon added new features such as the voice of Samuel L. Jackson, the Food Network Kitchen (a great pairing with the new Amazon Smart Oven) for cooking classes, and new smart alerts for Alexa Guard. These all aim at growing engagement among current users by giving them new things to do with their devices and Alexa. New accessories like the Echo Glow also help add value to devices, like an Echo Dot, that you might already have in your kids’ bedroom. Possibly the simplest of the products announced was the Echo Flex, an extremely affordable wireless smart speaker that can add Alexa’s functionality to those rooms where you want Alexa’s brains and voice but do not wish to make a significant investment.

The opportunity to appeal to new customers comes in the form of the new Echo Dot with Clock, Echo Show 8, and Echo Studio. I think it is fair to say that sound has not been Amazon’s strongest value proposition with its Echo devices. While it has improved with newer generations, consumers bought Echo devices for their functionality first and for sound second. The new Echo Studio aims to change that thanks to a collaboration with Dolby that benefits both the hardware and the new Amazon Music HD service by adding Dolby Atmos sound. The quality of the sound is impressive, and, as you would expect, Amazon is making sure Echo Studio also works with your TV, either as a single speaker, as a pair, or with a subwoofer. The best way I can describe the sound is that it is incredibly immersive, letting you hear instruments you did not realize were there before. The main difference from stereo is that the music is not coming from two specific points; the different sounds that make up a track are all around you.

The quality of the sound, coupled with the aggressive price point of $199, will put pressure on other smart speakers that have differentiated themselves based on sound. HomePod, in particular, will feel the pressure, given that Apple Music subscribers can access the service through Echo devices. As I doubt Apple will compete on price, I am curious to see whether it might differentiate its sound quality even further by embracing Dolby Atmos for an HD version of Apple Music.

Taking Alexa out of the Home

Alexa continues to dominate in the home, but things are quite different once we leave and rely on our smartphones for most of the day. Amazon is undoubtedly aware of this, and much is being done to make Alexa more readily accessible when we are out and about. The Echo Buds, a Bose collaboration, free Alexa from the smartphone, giving us access to navigation, music, and search. The deal with GM also brings Alexa outside the home, as her functionality is added to Chevrolet, Buick, GMC, and Cadillac cars that are model year 2018 or newer and have compatible infotainment systems.

These new devices, coupled with an earlier announcement simplifying multi-wake-word support, speak to Amazon’s desire to limit frustration and make consumers pick Alexa because of a superior experience, not because it is the only choice. The outside world is much more unpredictable than our homes, both in terms of context and requests, which is something Alexa still needs some practice with. The more entry points Alexa has throughout the day, the more value she will deliver. Two other products launched under the “Day 1 Editions” program, Echo Loop and Echo Frames, are also aimed at being with us all day.

Continuing to Learn

The “Day 1 Editions” Echo Loop and Echo Frames are not developer products, but rather ready-to-ship products offered on an invitation-only basis to a select number of customers.

Echo Frames are a voice-first experience delivered through prescription glasses. Rather than convincing people to wear glasses all the time, Echo Frames are aimed at people who have to wear glasses and might be interested in using them as a vehicle for voice-first interactions.

Echo Loop is a ring that you can tap and talk into to access Alexa more quickly than reaching for your phone. While many were expecting a smartwatch, I find Amazon’s interest in experimenting with different wearables fascinating, as we know how hard it is to be successful in a smartwatch market that mostly remains an Apple Watch market, especially in the US. The way you interact with Echo Loop is quite similar to how you use Apple Watch to access Siri. Interestingly, I thought that cupping my hand close to my ear to listen to Alexa’s voice coming from the ring, or putting my hand in front of my mouth as if I were yawning to speak to her, was much more natural than raising my wrist to speak to Apple Watch, although Alexa’s voice was much fainter than Siri’s.

The feedback loop that Amazon will create with these customers, who will approach usage in a very open-minded way, similar to early adopters, will be extremely useful to Amazon in finessing the products, both in features and use cases, and readying them for more mainstream customers.

Amazon ended the list of new announcements with the introduction of a new wireless protocol called Amazon Sidewalk, targeted at extending the working range of low-bandwidth, low-power smart lights, sensors, and other IoT devices. By extending the range using the unlicensed 900 MHz spectrum, customers will be able to place smart devices anywhere on their property, even without a Bluetooth, Wi-Fi, or cellular connection. It is an ambitious project with opportunity well beyond the consumer market, and something Microsoft should keep an eye on.

Why Breaking up Facebook May Not be a Good Idea

While the rhetoric in Washington around breaking up high tech companies has become louder during the presidential campaign, it turns out that many American voters are in favor of this too.

This Statista chart shows that well over 65% of Americans favor breaking up big tech.

Amid what’s been called the “tech lash,” some major tech companies like Facebook, Twitter, and even Apple have faced some type of antitrust scrutiny recently, along with calls for breaking them up in one form or another.

There is a lot of controversy around the breakup argument, but in the case of Facebook, a breakup might not actually be a good thing.

Facebook has a formula that produces great revenue at the cost of hosting both good and bad material. And the bad material actually makes them more money than the good.

It is plausible that if you split them up, each entity would be even more motivated to expand its own platform using Facebook’s revenue model, and each could grow its programs to add more good and bad content. Facebook’s good content comes from allowing us to connect with family, friends, and even long-lost acquaintances. It lets us see what each other is doing, look at people’s vacation pictures, and share in even the mundane things of life. But the bad comes in many forms, like commentary, news feeds, and false ads, and it has split us into narrow audiences, with some people gravitating only to the bad. Ironically, Facebook’s revenues get propped up more by bad content than good content.

A better way to address the problems of Facebook would be to increase regulation of bad content, including falsified accounts, false news stories, illegal ads, hate commentary, and more. Facebook on its own is doing a bad job at this, and it needs to be forced to keep as much bad content as it can off its sites.

I actually believe Facebook’s CEO, Mark Zuckerberg, understands that his company may need regulatory help, and he has said so publicly. He fears being the one to crack down on hate speech and on borderline accounts that share false information or are used to incite actions that could harm Facebook users. He knows that his quelling of some of this content would draw cries of restricting free speech, and he would prefer the government play the role of bad cop through some form of regulation that allows him to legally restrict the kind of content that can be posted on Facebook.

The government needs to identify what counts as bad content and force it to be removed, or perhaps even taxed if necessary, to keep any of these sites from expanding their reach by promoting bad content at will. Breaking Facebook into three companies with no regulation is not a good way to keep the bad from growing on each of these platforms in the future. In fact, it could make the problems worse if each group becomes even more powerful without proper checks and balances.

At this stage of the election cycle, politicians are bound to get on their soapboxes and push for tech company breakups. But breaking them up without proper regulations may not be worth it. Unchecked, spin-off companies could continue to use a formula for hosting good and bad content and possibly make our world even less safe, and even more divided.

Revised Galaxy Fold Adds New Twist to Fall Phone-a-Palooza

Though the leaves may not have started changing color, there’s another sure sign that we’ve entered fall: the barrage of smartphone and other personal device announcements from major manufacturers around the world. Technically, it started in early August at Samsung’s Unpacked event in New York, where they unveiled their Note 10 line of smartphones. The bulk of the announcements, however, are happening in September, most notably Apple’s iPhone 11 line. Looking ahead, the announcements should extend at least until October, given Google’s own pre-announcement of the Pixel 4.

The most recent phone announcement isn’t actually a new one—it’s the relaunch of the Samsung Galaxy Fold with a hardened, re-engineered design. The original Galaxy Fold never shipped to the public because of a number of serious issues with the foldable display that popped up with early reviews of the first units. Though it was clearly a PR disaster for the company, to their credit, they made the difficult decision to delay the product, make the necessary changes, and are now re-releasing it.

I was fortunate enough to receive a review unit of the first edition and, as a long-time fan of the concept of foldable displays, was pleased to discover that in real-world usage, working with a smartphone-sized foldable device truly is a game-changing experience. I also had absolutely zero problems with the unit I received, so was very disappointed to have to return it. Happily, I now have the revised version of the Fold and while it’s obviously too early to say anything about long-term durability, it’s clear that the new Fold design is better conceived and feels more rugged than the original, particularly the redesigned hinge.

Samsung has been very careful this time around to warn people to be cautious with the device and frankly, the early problems with the first generation will probably serve as a good warning to potential customers that they need to treat the Fold a bit more gingerly than they do a typical smartphone. Now, we can certainly argue whether a nearly $2,000 smartphone ought to be this delicate, but the re-release of the Fold says a number of things about the state of foldable technology in general.

First, the plastic material currently used to make foldable displays is still not anywhere close to the level of scratch resistance that glass is. Companies like Corning and other display component manufacturers are working to develop more hardened foldable displays, but if you’re eager to embrace the future now with a foldable device, current material science is going to limit devices to softer, more sensitive screens. An important implication of this is that Samsung made the correct decision in choosing to go with a fold-in design on the Galaxy Fold. Fold-out designs like the Huawei Mate X and the Royole FlexPai aren’t likely to survive more than a few months of regular usage. (Unfortunately for Huawei, that’s the least of their concerns as the lack of Google Services on any of their new devices—including Mate 30 and Mate X—is going to severely handicap their opportunities outside of China.)

Second, we need to think differently about the inevitable tradeoffs between functionality and ruggedness on these new devices. While even the revised design might not be able to withstand running an X-Acto blade across the screen or dropping sand into it—though let’s be honest, who’s going to do that to a nearly $2,000 smartphone—as long as the devices prove to be functional over an extended period of regular usage, that will keep most potential customers happy. The key point to remember is that people who want a radical, cutting-edge device like the Fold are interested in it because of the unique experiences it can enable. Having started using it again, I’m still excited at how incredibly useful it is and how innovative it feels to open the device and start using a tablet-sized screen on a phone-sized device. Simple perhaps, but still very cool. In fact, given all the challenges that the initial device faced, it’s pretty amazing that so many people are still interested in the new Galaxy Fold. Clearly, the lure of foldability is still quite strong.

Plus, Samsung itself has acknowledged the potential challenges the device faces and added two additional services to ward off concerns people may have. First, it’s providing a special concierge-level service for Galaxy Fold owners that gives them access to a set of dedicated support personnel who can walk people through any questions they have about the phone—a nice touch for an expensive device. Second, the company is offering to replace any damaged screens for $149 for the first year of ownership. While that’s not cheap, it certainly appears to be a lot less expensive than what it will cost Samsung to perform that repair.

Finally, I believe the official relaunch of the Fold will mark the beginning of a wide range of commercially available products with foldable displays and start to get people thinking about the creative new form factors that these screens enable. Lenovo, for example, has previewed their ThinkPad foldable PC, which is expected to ship around this time next year—showing that foldable screens won’t just be limited to phone-size devices.

There’s no question that the Galaxy Fold is not yet a mainstream device, but it’s equally clear to me that people who want cutting edge device experiences will be drawn to it. I, for one, am eager to continue my explorations.

CBRS Launch: An Important Day in Wireless History

It might not rank up there with the introduction of the iPhone or the launch of 4G LTE, but yesterday marked an important day in the history of wireless services, with the commercial launch of the Citizens Broadband Radio Service (CBRS). This has been years in the making and is a testament to a combination of innovation, persistence, and a level of public-private cooperation that we probably all wish was more prevalent. I’d like to use this column to explain why CBRS is significant, how it will be used, and what the roadmap looks like for the next couple of years.

CBRS utilizes 150 MHz of spectrum in the 3.5 GHz band, so it’s sort of ‘mid-bandy’. Half of the spectrum will be made immediately available in what is known as the ‘General Authorized Access’ (GAA) tier. In GAA, the spectrum is unlicensed, similar to Wi-Fi. Companies — mainly service providers, but also venue operators and enterprises that are certified — can request to use a certain number of channels for a specified amount of time and in a certain area. This ‘spectrum sharing’ database will be administered by one of five companies that have been chosen by the FCC to be Spectrum Access System (SAS) Administrators: Federated Wireless, Google, CommScope, Amdocs, and Sony. Any customer that wants to use CBRS must sign up with one of the SASs.
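
To make the GAA mechanics a bit more concrete, here is a minimal sketch, in Python, of what a channel grant request from a CBRS base station to a SAS could look like. It is loosely inspired by the idea of the standardized SAS interface, but the endpoint URL, field names, and values below are illustrative assumptions for this column, not the actual specification.

```python
# Illustrative sketch only: a hypothetical grant request from a CBRS base
# station (CBSD) to a Spectrum Access System (SAS). The endpoint, field
# names, and values are assumptions for illustration, not a real SAS API.
import json
import requests

SAS_ENDPOINT = "https://sas.example.com/v1/grant"  # placeholder URL

grant_request = {
    "grantRequest": [{
        "cbsdId": "EXAMPLE-CBSD-0001",            # ID assigned at registration
        "operationParam": {
            "maxEirp": 20,                        # requested transmit power
            "operationFrequencyRange": {
                "lowFrequency": 3620_000_000,     # a single 10 MHz channel
                "highFrequency": 3630_000_000     # inside the 3.55-3.7 GHz band
            }
        }
    }]
}

def request_grant():
    """Send the grant request and return the SAS response as a dict."""
    resp = requests.post(
        SAS_ENDPOINT,
        data=json.dumps(grant_request),
        headers={"Content-Type": "application/json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # Print the request payload rather than calling the placeholder endpoint.
    print(json.dumps(grant_request, indent=2))
```

The key idea the sketch captures is that spectrum is granted per channel, per area, per time window, and the SAS remains the arbiter of who transmits where.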

Although CBRS is launching later than originally planned, it is important to step back and acknowledge that getting to this point marks a significant accomplishment. The genesis of CBRS goes back to 2012, when the President’s Council of Advisors on Science and Technology (PCAST) released a report, “Realizing the Full Potential of Government-Held Spectrum to Spur Economic Growth,” which envisioned the need for a spectrum-sharing framework. This means the spectrum would not be owned by any one entity or auctioned off in the manner that has been prevalent since the mid-1990s. So the idea was hatched to use the 3.5 GHz band, which has been historically (but sparingly) used by the U.S. federal government, principally the Dept. of Defense. Notably, 3.5 GHz is being adopted for 5G in China and other parts of the world. The FCC and key participants across the service provider, vendor, and public sector ecosystem developed the idea to create the SASs. And in order to allow the military and other government agencies to continue to use the spectrum, the SASs would also operate an Environmental Sensing Capability (ESC) network, equipment installed to detect the presence of federal incumbent radar transmissions in the 3550-3650 MHz portion of the 3.5 GHz band.

Over the past few months, the SASs have been certified and the ESCs have been tested. The CBRS Alliance, which consists of key players across the ecosystem, has branded CBRS services as OnGo, which pertains to LTE in the CBRS spectrum. A special event was held in Washington, D.C. on Wednesday, where key participants in the development of CBRS were acknowledged, and some of the Initial Commercial Deployments (ICDs) were announced. The ICDs must run for a minimum of 30 days, after which the SAS Administrators must file a report on their experiences.

OK!! So how will CBRS be used, and what is the significance? It should be noted that CBRS is best suited to small cells and in-building deployments, as there are limitations on the power output of the equipment. The use cases depend, to a certain extent, on the service provider. The incumbent mobile operators will use CBRS to augment LTE speed and capacity. The spectrum will be incorporated into ‘carrier aggregation’ techniques that combine channels across a service provider’s spectrum holdings. AT&T and Verizon, in particular, have been equipping their cell sites (especially small cells) with 3.5 GHz radios to support CBRS. And, in a boost to CBRS, the iPhone 11, which becomes available on September 20, supports CBRS (Band 48), as does the Samsung Galaxy S10, select other high-end smartphones, and a number of other devices such as mobile hotspots.

CBRS could also be very useful for cable companies, who could augment their hybrid MVNO/Wi-Fi hotspot based wireless service with 3.5 GHz services in select cities. Wireless Internet Service Providers (WISPs), which operate fixed wireless networks in mainly rural areas, could augment their services using CBRS.

I also see venues, such as stadiums and convention centers, as likely early adopters of CBRS. These are the types of entities that need the significant, but temporary, boost in capacity that the CBRS framework is made for. There has also been a lot of discussion about enterprises using CBRS to deploy a private LTE network. Initially, they’re likely to do so in conjunction with service provider partners, but eventually, companies could obtain licenses to operate CBRS themselves.

If this initial wave of CBRS is successful, a compelling roadmap lies ahead. The FCC has set a goal of making the other half of the 3.5 GHz spectrum available through an auction of what are called Priority Access Licenses (PALs) in June 2020. PALs will allow for longer license terms, larger coverage areas, some expectation of license renewal, and the ability for an owner to make some of its spectrum available on the secondary market. We also expect that the 3.5 GHz spectrum could be upgraded to 5G, although that is unlikely for at least a few years. Truth be told, CBRS is yet another tool in the toolbox that enables a service provider to improve the LTE experience to the extent that it looks like 5G. Finally, if CBRS proves successful, it’s envisioned that some future 5G millimeter wave (mmWave) bands might be designated for spectrum sharing.

Finally, at this notable moment in the history of wireless, congratulations are in order for some of the key players who had the vision and persistence to make CBRS a reality: former FCC Chairman Tom Wheeler (and current Chairman Ajit Pai, who saw it through); Iyad Tarazi, CEO of Federated Wireless, one of the few startups in the game; the 150-member CBRS Alliance, which developed specifications, helped negotiate the choppy waters of the GAA and PAL license schemes, produced numerous case studies, and built the OnGo brand and certification program; and the numerous vendors who played an instrumental role in evangelizing CBRS and then developing solutions, including Google, Ericsson, Boingo, and CommScope (which owns Comsearch and Ruckus Wireless).

And with all the negative stuff going on in Washington and with Big Tech, it’s nice to have this positive and optimistic moment, where the public and private sectors came together to make something happen. Companies that are also competitors worked together in the CBRS Alliance and with the FCC, DOD, and NTIA to develop specifications and work out tough issues regarding license terms. This is also a moment of U.S. technology leadership: the concept of a spectrum-sharing regime, and some of the technology used to develop and implement it, is a model that’s already being considered in several other countries.

Cloud Adoption and What It Says about Enterprise Investment

Earlier this week, I attended Google Cloud’s Anthos Day in New York. After launching Anthos in the spring at Google Next, Google shared its progress on customer and partner acquisition and launched a new part of the platform that aims to make app management in a hybrid cloud environment as simple as possible.

As I sat listening to speaker after speaker talk about the investment that either they or their customers are putting into transitioning current apps to the cloud or redesigning them for the cloud, I was struck by one thought: there is a stark contrast between the way enterprises think about the part of their infrastructure that runs the business and connects to their customers versus the part that empowers their workforce. How is it that enterprises are prepared to redesign or refit applications that are core to their business, and that they’ve been using in some cases for decades, all in the name of digital transformation, increased agility, improved security, and a future-proofed business, and yet we don’t often see the same treatment granted to the hardware and applications that are core to their workforce?

I’m sure the broader Google must find it quite frustrating to see what enterprises are prepared to do when it comes to cloud, and yet how long it took for G Suite, a cloud-first productivity suite, to establish itself in the enterprise. Even more frustrating must be the discussions some IT managers still have about whether or not Chromebooks or iPads can be a real alternative to PCs.

Workforce vs. Customers

Why such a difference in approach? It’s an interesting question and not one I have a straight answer for. But I started to think about what factors could play into this reality. This is particularly interesting to consider at a time when we hear more and more enterprises say that they want to give their employees the right hardware and applications because this has become critical to acquiring as well as retaining talent. Yet not many companies seem to go beyond “cosmetic” tweaks and truly drive a more impactful change that builds the foundation for a modern workplace.

When it comes to the way people work, I’m left to think that this increased focus on employees’ user experience is more a marketing campaign than a fundamental mindset shift, one that recognizes that better tools drive not only engagement and satisfaction but also better business results. If an organization lacks a deep understanding of what drives such satisfaction and engagement internally, how can it expect to understand what those attributes are in a customer or partner context? First-line workers and knowledge workers are, after all, internal customers of the IT department and partners of the business.

I have some ideas as to why such an investment is different.

First, I think you would agree with me that customers and partners are seen as core to the success of the business, while an engaged and satisfied workforce is a nice-to-have, so the need just does not drive change in the same way.

I also feel that when it comes to IT infrastructure that is core to the business, there is usually a clear owner, both in terms of who drives the change and who is accountable for it. When it comes to the workforce, sponsors differ, and so does the burden of accountability. Sometimes it might be HR, sometimes a direct manager, but more often than not the change is not driven by IT unless cost-cutting is behind it.

Possibly the strongest reason for such a disparity, however, can be found in how success is measured. When you invest in change, how do you measure success? Even more important: how do you measure return on investment? With partners and customers, the numbers are much more straightforward: cost savings, increased revenue, higher client satisfaction and retention, and the list goes on. How do you measure the impact that a productivity suite change can have on your business, aside from any immediate savings?

To me, it is interesting to consider how, when it comes to cloud, the organizations that see the most success are those that use the transition as a launchpad for assessing their business infrastructure, solutions, and processes. They use the transition to actually modernize their business. At a minimum, they look at what can be modernized and what cannot, and assess whether the applications that cannot be modernized are necessary to the business or whether there are alternatives.

One would hope organizations learned from their move to mobile. In the beginning, the first businesses to embrace mobile were those that believed it could be a differentiator for their business, especially in a B2C environment. In other words, mobile was driven by the need to address customers’ needs and meet customers where they were. Only later did companies start to embrace mobile as a way to provide a better environment for their workforce. Cloud, even more so than mobile, impacts both B2B and B2C from the very start and touches both the business and the way the business is run. If we are moving app workloads into the cloud, why can we not move the workflows our employees face every day to a cloud-first and mobile-first environment? And if we are ready to evaluate the work environment, why don’t we take the same approach that Anthos is applying to application modernization and provide the right tools that deliver on agility, manageability, and security with no compromise on simplicity? It might not be easy, but if, as an organization, you are not prepared to make that your ultimate goal, it will never be achieved.

5G and the New Foundation of the Internet

I’d like to suggest that we think a little differently about 5G than most of the headlines do. The absence of 5G in Apple’s new iPhones drove some nonsense headlines, and commentators seemed to jump on the iPhone’s lack of 5G in 2019 as a missed opportunity. While I understand the desire of Apple’s competition to market 5G as an advantage, the reality is that in most major markets in the world, 5G is not ready for the iPhone.

This early in a technology transition, most global networks are simply not ready to handle the scale Apple could bring to 5G, given that it would ship more 5G devices than any other vendor, by an order of magnitude, in just a matter of months. Friends of mine in telecom have confirmed my suspicion that the 5G networks are just not ready for that kind of scale. Yes, perhaps China is different, and while that’s a fair point, Apple would not make a 5G variant of the iPhone just for China. However, this is not the point I want to focus on in this analysis. Rather, I want us to think differently about why 5G is important, and why it’s better to think about 5G’s value less in terms of smartphones and more in terms of everything else.

5G and the New Internet
5G is bigger than smartphones. Yes, it will make our smartphones faster, and let us stream more high-quality video, play more games with little to no latency, and overall help us browse the Internet faster. At a global level, this matters because there are markets whose consumers are still using painfully slow wireless broadband. So yes, 5G will be great for smartphones, but the story is much bigger.

There is a much larger connected world looming on the horizon, and 5G is absolutely built for it. In the LTE world, our smartphones alone are clogging nearly the entirety of the network. There is simply no room for connected cars, smart cities, smart grids, smart homes, robotics, remote healthcare, public safety, and so on; the list can go on. 5G was built for the vast array of billions of devices not yet connected to the Internet. And that is why I am positioning 5G as the foundation of the new Internet.

What Would 5G Bring To the New Internet?
There are a number of fundamental new advantages that come with 5G. The following are the ones I think make up the core of the new foundation 5G will enable, one that was not possible in the LTE era with LTE network architectures.

  • Low Latency enables Mission-Critical Applications. The dramatic increase in the amount of data that can be pushed up and down the network with 5G at incredibly low latency is essential for the new Internet. I mentioned autonomous cars, and these are on the shortlist of mission-critical processes that benefit from low latency. Cars that visually process the elements of the road and use hybrid on-device and cloud processing to make split-second decisions cannot be allowed to suffer from network latency. It simply will not be possible to have fleets of autonomous cars without massive throughput at low latency, and that is not possible with LTE. 5G was built for low latency, and the underlying architecture, both in the network and in the chipsets, is essential to move autonomous transportation forward. On this point, 5G was designed not just for low latency but for levels of reliability we have not previously seen in older network technologies. Mission-critical applications we all believe we are working toward, such as autonomy, robotics, and even remote healthcare (remote surgery, for example), can now become possible.

  • High Throughput and Low Power. In an industry research report I read on 5G, a point was made about 5G bringing significantly more capacity for edge devices than 4G/LTE. The report’s analysis dove into the technical elements that make this possible, noting that in the 4G/LTE era any given network/tower could only support around 2,000 devices per square kilometer. 5G enables this number to move up to 1 million devices per square kilometer. Again, all with higher throughput capabilities and lower power demands on each device (a quick back-of-the-envelope comparison follows after this list).

    This point alone helps us understand why 5G was built for the Internet of Things. Many forecasts estimate that in the 2021/2022 timeframe we could have 30 billion connected devices. This will not be possible without 5G.

  • True Edge Computing. Yes, edge computing is a buzzword, but a vast array of devices, from camera sensors and health sensors to the IoT edge devices enabled by smart cities and smart grids, will require much more computational capability at the edge, with direct integration into the cloud computing systems they run on.

    There is huge upside in the data center on this point alone, as well as growth for cloud providers like Microsoft, Amazon, and Google in this future, and it won’t happen without 5G infrastructure.

  • Dynamic Network Slicing. This one is interesting, as it will fuse machine learning into the network in new ways. With dynamic network slicing, which is a completely new feature with 5G, carriers and service providers will be able to dynamically optimize a portion of the network for specific use cases. Say a specific city has higher demands on the network due to smart grid or robotaxis; a carrier can optimize its network for that area’s specific use case, thus providing the highest quality of service. Being able to tune networks, on the fly, for specific use cases is one of the more interesting features I’ve come across, and it will be interesting to see how carriers use this part of the new 5G infrastructure.
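
To put the device-density claim from the throughput bullet in perspective, here is a quick back-of-the-envelope comparison in Python. The per-square-kilometer figures are the ones cited in the report referenced above; the example coverage area is an assumption chosen purely for illustration.

```python
# Back-of-the-envelope comparison of the device-density figures cited above.
# The per-km^2 numbers come from the report referenced in the text; the
# coverage footprint below is a hypothetical value for illustration.
LTE_DEVICES_PER_KM2 = 2_000       # cited 4G/LTE figure
NR_DEVICES_PER_KM2 = 1_000_000    # cited 5G figure

coverage_km2 = 25                 # hypothetical small-city coverage footprint

lte_capacity = LTE_DEVICES_PER_KM2 * coverage_km2
nr_capacity = NR_DEVICES_PER_KM2 * coverage_km2

print(f"LTE: ~{lte_capacity:,} devices across {coverage_km2} km^2")
print(f"5G:  ~{nr_capacity:,} devices across {coverage_km2} km^2")
print(f"Density increase: {NR_DEVICES_PER_KM2 // LTE_DEVICES_PER_KM2}x")
```

Even under these rough assumptions, the jump from roughly 50,000 to 25 million addressable devices in the same footprint is what makes the IoT argument for 5G credible.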

Those buckets are the ones that stick out to me as new things 5G enables, and as I said, this is a much bigger story and a much bigger future than just smartphones. Will smartphones benefit? Yes, and as augmented reality and other core technologies the smartphone will help drive keep developing, they will all be enabled by 5G in ways LTE could not support. But the 5G era is critical to moving us forward into the digital future we envision, one that will bring many businesses into the digital world in ways not previously available to them. It will transform industries and create tremendous additional value for economies worldwide.

Will it be easy? No, this may be one of the more difficult network transitions simply due to the complexity and fundamental changes required in the network and on devices. Unfortunately, it may also be one of the more difficult and costly “Gs” we have had in decades, if not ever. That being said, it is worth it for the benefits, and many industries are taking a vested interest in moving 5G forward.

I understand the voices of the critics and the 5G skeptics, but the criticism I hear is largely because the 5G narrative has been isolated to smartphones. This is why I encourage a much larger picture to be embraced when we think about the role 5G will play, and why I think we will look back on this transition as the one that enabled a fundamentally new kind of Internet.

Apple’s Aggressive Pricing on New Services Reflects Their Strategic Importance

Apple led off its September keynote this week by offering a closer look at its two new upcoming services, Arcade and TV+. The company invited developers on stage to show off some of the new games coming to Arcade, which launches September 19th, and showed previews of some of the shows coming to TV+ when it launches November 1st. The company also announced that each service will cost $4.99 a month (after a week-long free trial) and that people who buy a new iPhone, iPad, iPod Touch, Apple TV, or Mac will get a year of TV+ free. This aggressive pricing, especially around TV+, reflects just how important the success of these new services is to Apple.

Apple Arcade
Apple’s new gaming service will let players access games on their iPhone, iPad, iPod Touch, Mac, and Apple TV. It includes more than 100 new games exclusive to Apple Arcade, and it will first roll out September 19th with iOS 13. It will come to iPadOS and tvOS on September 30th, and to macOS Catalina in October. Apple has also announced support for third-party controllers, including Xbox Wireless Controllers with Bluetooth, the PlayStation DualShock 4, and MFi game controllers.

Apple Arcade is interesting in that, unlike other gaming services on the market, Apple has focused on serving casual gamers rather than hardcore players. This has the potential to cut both ways, as it represents a much larger total available market, but those players are—by definition—less invested in their gameplay. That said, I’ve always questioned the validity of the “casual gamer” label. Most people who play a game do it to win, and that can mean coming back to a game over and over until they’ve completed it. Just because it’s a “casual” game doesn’t mean people approach it casually.

That’s clearly one of Apple’s goals with Apple Arcade: to offer a wide range of top-shelf games in the hopes that a few catch on with each player. This makes the subscription cost a no-brainer for anyone who wants access to those few games to kill time while waiting in line at the DMV, waiting for their kid’s school to let out, or sitting on a plane waiting to take off. By pricing the service at less than $5, I imagine a significant percentage of players will set it and forget it.

Apple TV+
Perhaps more surprising was the $4.99 price point for TV+. Unlike Apple Arcade, there are plenty of services comparable to Apple TV+, from Netflix to Hulu to Disney+ and more, and they all cost more than five bucks. Plus, as noted, when you buy new hardware from Apple, you get a year-long subscription for free. As others have noted, the free-year-with-purchase offer means TV+ will scale to huge numbers nearly automatically over its first year, an advantage that can’t be overstated. And by pricing the service very aggressively versus its competition, Apple is hoping to bring in non-Apple hardware buyers to take a look, too.

And what they will find is a large and growing slate of all-original programming served up without advertising. Apple’s early track record with original video content is spotty at best, but the company has new shows on tap with big-time Hollywood stars such as Jennifer Aniston, Steve Carell, Reese Witherspoon, Jason Momoa, Snoopy, and even Oprah Winfrey. What it doesn’t have is a back catalog of old shows, a key aspect of the success of other streaming services.

Just like Arcade, Apple knows that subscribers to TV+ don’t have to like every show on offer, but if they love one or two, then they’re going to stick around. That said, unlike games that can be consumed in small, bite-sized bits when you have a few minutes here or there, video content requires a commitment. People will only make that commitment if the quality of the content is good, and so a lot will be riding on Apple’s first slate of shows.

Show Me the Bundles
One thing that Apple didn’t announce at this week’s event was any bundled package of services. Now that we’ve seen the pricing for all of its offerings, with Arcade and TV+ added to the roster, I’d very much like to see Apple offer customers a package inclusive of other services such as News+ and Apple Music. I’d even roll in iCloud storage. Make people a great deal on this broader set of services, and many will take the leap. This has the added benefit of getting people to use Apple services they might not have seriously considered buying individually in the past.
And the ultimate package: Apple’s full suite of services, plus hardware as a service rolled in. This could include a new iPhone, new iPad, and/or a new Mac on a regular one-, two-, or three-year cadence, inclusive of AppleCare support. This isn’t the type of package for everyone, but I’m convinced there is a market of “all in” Apple users who would welcome the ability to pay a single monthly fee and have everything just taken care of for them.

Bundles or not, Apple’s current announcements show just how seriously the company is taking its entrance into these new service categories. Apple and Wall Street expect these services to drive real growth for the company over time.

If things go well, I expect that at next year’s September event Tim Cook will spend some time talking about how many subscribers Apple’s new services have acquired. Two key things to watch for, specific to TV+: how long Apple keeps the subscription at $4.99, and whether it eventually turns off the free year for Apple hardware buyers. When and if both of those happen, you’ll know Apple considers the service a success that can stand on its own.

Apple Event: Upgrades, Upgrades, Upgrades

On Sept 10th, at the Steve Jobs Theater, Apple gathered press, analysts, and guests to take a first look at the new iPhone models. During the keynote, Apple introduced the new iPhone 11, the iPhone 11 Pro, and the iPhone 11 Pro Max together with the new iPad 7th Generation and Apple Watch Series 5.

There was a lot packed into the keynote. Rumors seem to get better every year, but it is the final details of how new features and specs are delivered that help you understand the impact these new products might have on Apple’s business and the market.

iPhone

While not immediately evident at the start, it became clear that the iPhone 11 is the new iPhone XR. The name is a smart move from Apple, as it simplifies the naming convention but, even more so, because it does not label the product as inferior. You might not be able to afford the iPhone 11 Pro, or you might not see yourself as a pro user, but you do not feel like you are settling for a “second best” product by buying the iPhone 11.

Despite all the concerns about the trade war with China and the impact that tariffs might have on pricing, Apple maintained iPhone 11 Pro pricing in line with last year and dropped the iPhone 11 price by $50 compared to the launch price of the iPhone XR.

While we get excited about the new products, future sales are also driven by iPhone models that remain in the line-up and get a price cut. The iPhone XR, now starting at $599, and the iPhone 8, starting at $449, offer two great options for current users who are looking to upgrade.

Upgrades are not just crucial for Apple to drive hardware sales. Making sure the current user base is on one of the most recent iPhone models ensures they can access and benefit from the services and key features Apple is providing, from Apple Card to Apple TV+ to Face ID for security.

Missing from the rumored feature set was reverse charging. Your guess is as good as mine as to why we did not see this feature, but I do wonder if Apple thought iPhone battery life might get impacted too much or that charging time was not fast enough to provide a positive experience.

Apple Watch

iPhone XR and iPhone 8 were not the only two products that remained in the portfolio at a more attractive price point; Apple Watch Series 3 did too, at $199. Apple Watch Series 5 will get the press and drive upgrades, but Apple Watch Series 3 will certainly attract new users who have been looking at Apple Watch but were either unclear about the value or just could not afford or justify the price. I expect Fitbit to be the brand most impacted by the new price point, and not only on its smartwatch portfolio but on its bands too.

The new Apple Watch Studio is a great new way to purchase Apple Watch. Customers will be able to pick the size, material, and band to create precisely the product they want. This will certainly drive out-of-the-box satisfaction, and I do not expect it to negatively impact sales of additional bands, as the choice is now so broad and users have started to have specific bands for specific occasions.

Apple also announced three new health research studies and a health research app, which proves once again that Apple is committed to making Apple Watch not just a fitness device but a health device. It may seem a subtle difference, but it is about turning a device from being useful to being essential.

iPad

The iPad line-up saw a new 7th-generation iPad launch at $329, replacing the 6th generation and gaining a 10.2” screen and a Smart Connector for the Smart Keyboard. The new design, coupled with the existing Pencil support, offers a device that not only competes with the few Android tablets left in the market but also with lower-end PCs and Chromebooks.

The iPad, which is the most popular model in the portfolio, is often the first Apple device consumers buy. If you have an iPhone and an iPad, the value of having both is clear, but many consumers still see the iPad as a device that delivers a computing experience separate from the phone. Researching the iPad over the years, I have often seen Android phone users with an iPad, and this has become more common as the number of brands bringing Android tablets to market has been decreasing.

Apple Arcade and Apple TV+

Apple Arcade and Apple TV+ pricing was probably the biggest surprise of the keynote. Both are $4.99 a month for a family subscription, a pretty striking difference from the $14.99 Apple Music family subscription and $9.99 Apple News+, which perhaps just helps bring home the cost of licensing content and adding your cut on top.

Apple Arcade is the first of its kind, as it does not target core gamers but the much broader casual-gamer market, so its pricing does not have a real comparison yet.

The super-aggressive price of Apple TV+, compared to expectations but also to other video services, signals a few things. First, it might be that Apple is sensitive to the fact that it does not have a track record in video content. Actually, its first attempts at producing content, with Planet of the Apps and Carpool Karaoke, were not very successful, to say the least. It might also be that Apple does not feel it has enough of a catalog to justify a higher price from the get-go.

Second, I think that at $4.99 it becomes less about subscribing to Apple TV+ instead of another service. You are instead deciding between Apple TV+ and a movie rental or a latte. This makes the decision process much easier.

Lastly, being more aggressive on a subscription price has no negative connotation for the brand. This is quite different from the hardware business, where a lower price does impact the way the brand is perceived.

Starting tomorrow, if you purchase an iPhone, iPad, Mac, or Apple TV, you will receive a free one-year Apple TV+ subscription. This is really not about helping device sales; it is about giving Apple TV+ an audience.


And this brings me to my final point about what we saw from Apple. As hardware and services come together, the value-add from one to the other is no longer a one-way street. We used to see software and services add value to Apple hardware, and now we also see Apple hardware being instrumental to the success of Apple services. Some services have gone beyond Apple hardware (Apple Music’s recent web-based beta, or Apple TV+ on Samsung, LG, and other TVs and on the web); however, Apple hardware will remain the biggest driver of Apple services uptake. This means that making sure users have the best possible device to get the added value of services might require some different thinking when it comes to pricing. This need, rather than market pressure, is what you saw Apple respond to this week.

Vendors to Watch In the 5G Era

In these very early days of 5G, it certainly seems on the surface that the vendor ecosystem is similar to what it has been for the past several years. The Big Three network equipment companies — Ericsson, Nokia, and Huawei — are winning the lion’s share of 5G contracts, with Samsung and ZTE gaining some share. It also appears that we’re not likely to see any major shifts in the handset ecosystem in the near future. Qualcomm remains in a strong position in the 5G chipset, licensing, and radio modem business – the biggest threat comes from internal development on the UE side, such as Apple’s ultimate frenemy move of acquiring Intel’s modem business not long after settling its dispute with Qualcomm.

But as we truly enter the 5G era, which will move from ‘commercial trial’ phase to more broad-based network and device availability in 2020, there is an opportunity for a new wave of vendors to play a significant role. I’d like to use this column to provide a glimpse of some companies and categories of opportunity to keep your eyes on. A few caveats. First, this list is inclusive of the broader next phase of opportunity in wireless, including companies that will contribute to 5G, edge computing, new business cases and different economic/cost models. Some of these companies are [well-funded] start-ups, while others are established companies making a big play for 5G, in some cases through some recent acquisitions. Second, this is just a sample of a few companies – it is by no means exhaustive. Finally, no company has paid to be on this list or has sponsored this column in any way.

New Era of Networking
Among the companies to keep your eye on in the networking space, a few stand out. Mavenir is helping to transform mobile operator economics, with a comprehensive portfolio across nearly every layer of the network infrastructure stack. It is playing a particular role in contributing to the move to more software-oriented solutions for mobile networks.

Altiostar, which provides a 5G-ready virtualized RAN software solution, has raised more than $200 million. It is one of the key vendors supplying Rakuten, a new, high-profile operator in Japan that is building the world’s first end-to-end, fully virtualized, cloud-native mobile network.

Parallel Wireless. Although not a major 5G play at this point, Parallel has contracts with more than 60 smaller operators worldwide, with its pioneering 2G/3G/4G Open RAN solution consisting of a Converged Wireless System (CWS), a software-defined base station, and a fully virtualized HetNet Gateway (HNG).

New Opportunities in Mobile Broadband
Evolved 4G and 5G networks are positioned to offer a competitive broadband solution, particularly in areas that are un-served or under-served by broadband. Starry, which has launched fixed wireless access in 5 cities covering some 2 million homes, has raised nearly $200 million and just won 104 licenses in the 24 GHz auction. Airspan has numerous solutions for network densification – they’re behind Sprint’s innovative ‘magic box’, and recently acquired Mimosa Networks, which focuses on fixed wireless solutions and adds some new IP in the massive MIMO space. Adtran is playing an important role in backhaul, customer premise, and access solutions for mobile broadband. And as opportunities in fixed wireless expand, Cambium Networks is well-positioned in certain geographic areas and parts of the radio spectrum that are not typically covered by the mainstream network equipment providers.

Opportunities in User Equipment (UE)
Although we’re not forecasting any significant near-term share shifts in the handset (smartphone) market, we believe that there’s a new breed of opportunities for wireless modems (i.e. bricks/pucks/mobile hotspots) in the consumer and enterprise space. Some notable companies include: Inseego, which has won some of the initial contracts for Advanced 4G LTE (i.e. Cat 18) and 5G networks; Cradlepoint, which provides a comprehensive suite of 4G/5G/IoT routers and edge cloud solutions; and Netgear, which has among the industry’s first 5G mobile hotspot and Wi-Fi 6 products.

Edge Computing and Storage
An important element of next-generation 5G networks is bringing connectivity and content to the edge, which improves network performance and can significantly alter network economics. This creates opportunities for established Big Tech companies to expand their business in the telecom/wireless sector. A good example here is VMware, which, at the recent VMworld, laid out a compelling vision for the evolution of the telecom network as a cloud as part of the company's Telco NFV solutions. Seagate, one of the world's leaders in storage solutions, is expanding its footprint into telecom, given opportunities in the evolution of the edge and the data center. On the start-up side, VaporIO has an innovative solution that places data centers at the base of cell towers, bringing cloud-like services to the edge of the wireless network.

CBRS/Shared Spectrum/Wi-Fi 6
Another company on our watch list is Federated Wireless, which is among the leaders enabling CBRS (shared spectrum in the 3.5 GHz band) with its Spectrum Controller platform. The coming commercial launch of CBRS is key to proving the pioneering shared spectrum model, and will play an important role in the evolution to 5G.

Finally, not many in the broader tech ecosystem are all that familiar with CommScope, a $5 billion company that for years has been a significant supplier of a range of wireless network equipment, such as antennas and amplifiers. Among the reasons to keep your eyes on it is the recent acquisition of Arris, which more than doubles the company's revenue and, through Arris' ownership of Ruckus, provides a substantial footprint in the enterprise/telco Wi-Fi and CBRS areas. I also believe Cisco is poised to be a bigger player in 5G. The company is in a unique position, given its broad portfolio in the enterprise, mobile, and Wi-Fi areas.

We’ll address some key players in the IoT and enterprise segments of 5G, including the industrial aspect, in a future column.

A Vacation with Apple Card

I spent the past three weeks in Europe with my daughter, and just before we left, I received my Apple Card, so I thought this was the perfect opportunity to test it out and learn what I like about it and what I wish it had.

I had registered my interest in Apple Card as soon as it was announced, and when Apple offered the opportunity to be part of the group who would get a preview, I jumped on it. I was about to leave for Europe, and the thought of having to pay currency fees on three weeks' worth of purchases made me a little sick to my stomach. I was in the process of upgrading one of my credit cards to one with a $95 annual fee that would also waive currency fees. Despite starting the upgrade process over a month before my trip, I was not sure I would get the new card in time. I just wanted to avoid paying currency fees, which, as anyone who travels knows, can add up to a considerable amount.

A Weekly Credit Card

I am not much of a credit card user outside of travel. If I have the funds, I prefer to use my debit card, and I keep track of my purchases on a spreadsheet, so I am always on top of my finances. My debit card does not provide me with any rewards, so the idea of swapping my debit card for the Apple Card, especially for Apple Pay, was a no-brainer.

Apple Card’s repayment process makes it so that I can keep track of my purchase and pay at the end of the week so that funds are taken from my current account rather than building up debt for the whole month. I am sure it is more psychological than anything else, but in this way, I feel like I avoid any temptation of overspending. Apple certainly seems to be wanting to help drive healthier financial choices as they highlight the impact of interests on your spend.

Although I use Apple Pay regularly, it has not been my default payment option outside of retailers I have already tested, where I know it is available and works. As convenient as it is to flip my wrist to pay, I hate still having to ask whether or not Apple Pay is accepted. Even within the same chain, it comes down to whether individual stores have updated their POS systems. This quickly leads to frustration, which then makes me reach for my wallet instead. Now, with the 2% Apple Cash cashback incentive linked to Apple Pay, I make more of a conscious effort to use Apple Pay, and, as you know better than I do, consistency creates habits.

Registering Apple Card as the default Apple Pay card was, of course, much easier than the steps I had to take with any of my Citibank debit and credit cards, especially when authorizing multiple devices. This removes one more point of friction for consumers, especially those who upgrade their iPhone every year.

Once I got to Europe, the ease of contactless payments compared to the US made using Apple Card on my Apple Watch the most convenient option. Convenience aside, using the Apple Watch rather than reaching for my wallet on a busy subway or at a tourist spot also made me feel safer in areas where tourists can be specifically targeted by pickpockets. This, of course, is true for any card you use with Apple Pay, but my peace of mind extended to the fact that I knew that if my Apple Card was ever compromised, I could generate a new card number and continue to use the card while on my trip.

Apple Card’s Data

As I mentioned, I keep track of my transactions regularly. I have a spreadsheet where I list what I spend, including recurring payments, and what I earn. Apple Card provides an incredibly useful and detailed set of data if you, like me, want an accurate view of where your money is going. As soon as a transaction has been processed, you get a notification on your iPhone detailing the retailer, location, and amount in your primary currency. Needless to say, when you are on a three-week trip, not having to worry about retaining all the receipts so you can document your expenses on your return and check them against the final amount debited in your home currency is a huge bonus too.

I particularly appreciated the speed of the transaction notifications when, while trying to buy tickets online for a museum in Rome, the transaction was flagged as possible fraud. As the payment was rejected online, my iPhone notified me of the attempted purchase and asked me to confirm whether I was indeed trying to make the transaction. I was then able to try the purchase again, and this time it cleared. If you have any credit cards, you know that this process is not usually as simple. In most cases, even if the card issuer sends you a text message asking you to validate the transaction, your card might still be blocked, which requires calling your bank and having the card released. That two-step process takes considerably longer and creates much more friction for the user, especially when you are under time pressure to complete the transaction, whether online or at a physical retail store. The timeliness of the notification and the sense of control I had throughout made Apple Card feel like a more modern, intelligent banking experience than anything I have used before.

I do wish Apple took the data a step further and allowed users to export it into Excel or into a budgeting app of their choice. Being able to use the data provided by Apple Card in expense apps would also be valuable and would open up new opportunities for Apple in the enterprise space. Interestingly, in the recent study we ran at Creative Strategies on Apple Card, others mentioned the desire for some level of integration with their favorite finance management app. The enterprise segment has been growing in importance for Apple, but mostly as a hardware play. Being able to solidify its presence by delivering or integrating a service like Apple Card into core enterprise apps for travel and expenses would broaden the opportunity for Apple in the enterprise. As Apple has pointed out over the past several earnings calls, enterprises have been investing in designing their own applications aimed at devices like the iPhone and the iPad, and you could argue that Apple Card could be another “device” those apps could consider.
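To give a rough idea of what such an export could enable, here is a minimal sketch that assumes a hypothetical CSV file with date, merchant, category, and amount columns. Apple Card does not currently offer this, so the file name and format are purely illustrative.

```python
import csv
from collections import defaultdict

# Minimal sketch: total spending by category from a hypothetical CSV export
# with "date", "merchant", "category", and "amount" columns. Apple Card does
# not currently provide such a file; everything here is illustrative.
def spend_by_category(path: str) -> dict:
    totals = defaultdict(float)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["category"]] += float(row["amount"])
    return dict(totals)

if __name__ == "__main__":
    for category, total in sorted(spend_by_category("apple_card_export.csv").items()):
        print(f"{category}: ${total:,.2f}")
```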

During my trip, I earned just over $50 in Apple Cash. While there is a feel-good factor to that, I have to admit that the real benefit of Apple Card for me came down to three things: a modern banking experience, peace of mind, and a sense of control over my data and finances. What is interesting, however, is that I associate the benefit with my iPhone, as Apple Card and iPhone become one for most of my purchases. As sexy as the physical Apple Card might be, the only time I saw it while in Europe was when I showed it to someone. Hopefully, that will increasingly be the case in the US too as contactless POS terminals continue to roll out. I have always said that Apple Card could be one of the stickiest services bringing value to the iPhone, and in my short experience, it certainly seems that way.
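As a back-of-the-envelope check, and assuming on my part that most of those rewards came from the 2% Apple Pay tier, that $50 implies roughly $2,500 in spending over the trip:

```python
# Back-of-the-envelope only: how much spending would produce $50 in Apple Cash
# if everything earned the 2% Apple Pay rate. The all-2% assumption is mine;
# a different mix of reward tiers would shift the figure.
apple_cash_earned = 50.00
assumed_rate = 0.02
implied_spend = apple_cash_earned / assumed_rate
print(f"Implied spend: ${implied_spend:,.0f}")   # -> Implied spend: $2,500
```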

Huddle Rooms and Videoconferencing Reshaping Modern Work Environments

If you had to name two things that best exemplify today’s modern office environments, you’d be hard pressed to come up with better choices than huddle rooms and videoconferences. Together, the two reflect both the different types of physical environments that many people now find themselves working in, as well as the different means by which they communicate with co-workers, customers, and other potential business partners.

Huddle rooms, for those who may not know, are the small meeting rooms or mini conference rooms that many companies have adopted—particularly those organizations with open floor plans—to provide small groups of people (typically 2-6) with an easy, space efficient means for holding meetings. Videoconferences are nothing new, of course, but their frequency has increased dramatically over the last few years, thanks to a combination of ubiquitous camera-equipped notebooks and smartphones, higher quality wireless networks, a younger workforce, and more emphasis on collaborative efforts within and across companies. Another big factor is the wider variety and broader usage of collaboration-focused software tools, ranging from modern chat platforms like Slack and Microsoft’s Teams, to integrated collaboration features in Google’s GSuite, to an enormous range of videoconferencing applications, including Zoom, Blue Jeans, GoToMeeting, Webex, Ring Central, Skype, FaceTime, and much more.

Arguably, huddle rooms and videoconferences are each separately having an important impact on how people work today, but when you put the two together (as organizations have started to do), that's when you really start to understand how work environments of the late 2010s differ from those of even a few years earlier. In recognition of this, many companies are starting to set up a number of videoconferencing-equipped huddle rooms to drive more collaborative efforts, as well as to free employees from the cacophony that many open office environments quickly turn into. Not surprisingly, a number of vendors are working to create solutions to address those needs.

From PC companies like Lenovo and HP, to more traditional videoconferencing companies like Polycom (now part of a merged company with Plantronics called Poly), there are quite a few interesting new approaches to creating hardware tools that can optimize collaboration and tap into the many software-based videoconferencing tools and services now available. In fact, one of the key reasons Plantronics spent $2B to acquire Polycom last year and create Poly is arguably the growing importance of video-based collaboration in work environments.

Looking specifically at huddle room-focused video collaboration tools, one of the more intriguing new options coming from the blended company is the Poly Studio, a $949 USB-C-equipped soundbar and video camera system that integrates much of the intelligence of higher-end dedicated videoconferencing systems into a huddle room-friendly portable form factor. Anyone using the huddle room can simply plug the device into a USB-C-equipped notebook PC and get access to a high-quality, 4K-capable audio-video system that works automatically with most popular videoconferencing tools, including the aforementioned Teams, Zoom, Blue Jeans, GoToMeeting, and more.

Unlike standalone webcams, the Poly Studio can automatically track whoever is speaking in a room, both by focusing the camera on the speaker and by directing the microphones to pick up and prioritize their audio. On top of that, some clever audio processing technology creates what Poly calls an Acoustic Fence, which keeps voices from outside the room (or from people walking past) from interrupting the discussion. The NoiseBlock feature analyzes and then automatically mutes other sounds that may be coming from within the room or from other call participants. For those who prefer an audio-only session, there's a slider available to physically block the lens. A key benefit for IT departments is that Poly Studio units can be centrally managed and remotely updated and configured.

Though they may be small, huddle rooms are proving to be incredibly important resources for employees who want to be more productive in their workplace. Particularly in companies that chose to go with open office layouts (and that are likely regretting the decision now), huddle rooms can provide an oasis of calm that enables the kind of increased collaboration that open offices were supposed to offer. At the same time, it’s clear that collaboration of all types, but particularly video-enabled calls, is going to be increasingly important (and common) in businesses of all sizes and varieties. As a result, tools that can bring together real-time collaboration in small rooms are going to play a critical role in increased/improved communication and productivity moving forward.

Podcast: US Tech Manufacturing, VMWorld, Qualcomm WiFi6

This week’s Tech.pinions podcast features Tim Bajarin and Bob O’Donnell discussing the challenges in trying to manufacture tech products in the US, analyzing some of the biggest announcements from VMware’s VMWorld show, and chatting about the new WiFi6 offerings from chipmaker Qualcomm.