M&A Activity Reflects AR’s Bright Enterprise Future

While consumer-focused augmented reality continues to be a bit of a slow burn in terms of interest and adoption, on the enterprise side of things, the technology continues to ramp at a faster pace. In fact, the pandemic—and many companies’ suddenly urgent need for remote assistance and “see what I see” capabilities—has driven a spike in interest and usage of the technology, primarily on smartphones and tablets. Enterprise AR on both mobile devices and head-worn products will be a huge market. And while it is still early days, we’re already seeing some interesting mergers and acquisitions happening in the space, including the recent acquisitions of two key players: North and Ubimax.

Google Buys North
In late June, Google announced it was acquiring smart glasses maker North, which had been shipping the first version of its Focals glasses and was gearing up to ship version two. Google’s Rick Osterloh, senior vice president of devices and services, said in a statement that “North’s technical expertise will help as we continue to invest in our hardware efforts and ambient computing future.”

I had included North in an IDC Innovators writeup in 2019, noting that the company—founded way back in 2012 as Thalmic Labs—had brought to market a unique product that looked more like a regular pair of glasses than most AR headsets. I had a chance to use Focals 1.0, and I was impressed by the technology, which focused less on dazzling visuals and more on simply delivering information to the wearer in a useful, unobtrusive manner. However, because the company required a personalized fitting for each pair of glasses and focused on selling to consumers rather than enterprises, I had concerns about its ability to scale over time.

I did not have the opportunity to test Focals 2.0 but had heard good things. Google obviously felt the same way and moved to make the acquisition. Unfortunately, North has opted to shelve that new product and refund the purchase price to those who had preordered it. It is not clear yet whether Focals 2.0 will eventually make its way to market as Google-branded glasses, or whether Google will instead leverage the technology in a future version of its own product line.

While many remember Google Glass as a failed early attempt at consumer AR, Google pivoted that product toward enterprise use, where it has enjoyed a long and successful tenure in a wide range of vertical use cases. Google shipped the first enterprise version for many years, and in 2019 it rolled out Glass Enterprise Edition 2 with improvements such as a better camera and Qualcomm’s Snapdragon XR1 platform.

Google’s history of hardware acquisitions is mixed at best, so it will be interesting to see how it moves forward with North. I am hopeful that good things will come from this acquisition, and that North’s great technology makes its way into new products sooner rather than later.

TeamViewer Buys Ubimax
On July 15, TeamViewer announced it was acquiring Ubimax. Earlier this year, I wrote about Ubimax as part of another IDC Innovators document on AR Enterprise Platforms. I noted at the time that one of the things that made the company unique was that it offered customers a complete solution, not just hardware or software. Founded in 2011, Ubimax has evolved its Frontline product into four high-level segments: xPick for picking and packing, xMake for assembly, xInspect for service, and xAssist for expert remote support.

Ubimax does not make its own hardware, but it supports a wide range of devices from other companies, including smart glasses, head-mounted displays, and even smartwatches. One of the other things I like about the company: It offers customers the whole package in an as-a-service model, which I think is a smart way to help organizations begin their AR journey. Called Everything-as-a-Service (XaaS), the offering includes all necessary hardware, software, and services for $200 per month.

At first blush, TeamViewer acquiring Ubimax may seem a strange fit. The company is best known for its service that lets you “control, manage, and repair computers, mobile devices, network machines, and more—from anywhere, anytime.” But the company has, for some time, also offered an AR product called TeamViewer Pilot, a remote assistance service that lets workers connect to remote experts using their smartphones. In a statement on its website, TeamViewer says the Ubimax acquisition will help it expand its AR and IoT offerings while accelerating its development of use cases focused on data analytics and artificial intelligence. I’m excited to see where the combined company takes things.

Watch this Space
I expect more mergers and acquisitions, as well as interesting partnerships and collaborations, to take place in the AR market in the coming months. And companies continue to build out their offerings to serve this important market. This includes huge, well-established enterprise players such as Microsoft, PTC, and Lenovo, as well as startups and smaller firms such as Atheer, Scope AR, Upskill, RE’FLEKT, and dozens more. Plus, mega-startup Magic Leap continues its attempt to pivot from consumer to enterprise with the recent hiring of Peggy Johnson, formerly an executive vice president of business development at Microsoft, as its CEO.

COVID-19 has made clear the utility of AR for a broader range of companies, but unfortunately, the resulting recession could slow down some companies’ near-term rollouts. As a result, this market is likely to see some roller-coaster ups and downs in the coming few years. But make no mistake, enterprise AR is here to stay.

Microsoft and Partners Bring More Hyperconverged Hybrid Cloud Options to Azure

When it comes to cloud computing, there’s little doubt that we’re in a hybrid world. In fact, that point comes through loud and clear in two different studies published this year by TECHnalysis Research. Both the Hybrid and Multi-Cloud Strategies in the Enterprise report and the recently published Pandemic-Based IT Priority Shifts report highlight the high degree of usage, strategic importance, and budget dollars devoted to hybrid computing models. Indeed, in many instances, hybrid cloud is considered more important than older, more established public cloud computing methodologies.

The reason? While every company would certainly like to be running nothing but containerized, cloud-native applications, the reality is that almost none do. There’s simply too much legacy software (typically still close to 50% of most organizations’ applications) and legacy datacenter hardware that companies need to keep using for a variety of reasons, including regulatory, security, and cost concerns. In the meantime, private clouds and hybrid models that combine or connect private cloud workloads with public cloud workloads serve as a critical steppingstone for most organizations.

As a result, we’ve seen many different tech vendors create new hybrid cloud offerings recently to tap into the burgeoning demand. At its partner-focused Inspire event, Microsoft unveiled several new hybrid cloud-focused additions to its Azure cloud computing platform. In particular, it announced additional capabilities for Azure Stack HCI—the local, on-premises compatible version of Azure that runs on specialized, Microsoft-certified hardware appliances from partners like Dell EMC, HPE, and Lenovo.

These hardware appliances are built using an architecture called hyperconverged infrastructure, or HCI, that essentially combines all the elements of a data center, including compute, storage, and networking, into a single, software-defined box. The beauty of the HCI approach is that it virtualizes all these elements so that simple, off-the-shelf servers can be organized and optimized in a way that improves their performance, functionality, and reliability. For example, virtualizing the storage provides SAN (Storage Area Network)-like capabilities and dependability to an HCI environment without the costs and complexities of a SAN. Similarly, virtualizing the networking lets an HCI device offer the capabilities of a load balancer via software, again without the costs and complexities of purchasing and deploying one. Best of all, these software-defined datacenter capabilities can scale up to large datacenter environments or down to branch offices and other edge computing applications.

While Microsoft has talked about Azure Stack HCI before, they announced several new capabilities at Inspire. Notably, Azure Stack HCI is now a fully native Azure service, which means you can now use the Azure Portal as a combined management tool for public cloud Azure computing resources along with any local Azure Stack HCI resources, such as virtual machines, virtualized storage, and more. This gives IT administrators the classic “single pane of glass” UI for monitoring and managing all their different public, private, and hybrid-cloud-based workloads. In addition, making Azure Stack HCI a native Azure service makes it significantly easier to use other Azure PaaS (Platform as a Service) capabilities, such as Azure Backup and Azure Security Center, with private cloud workloads. In other words, it essentially allows companies to pull these two “worlds” together in ways that weren’t possible before.

One particularly nice feature of these new Microsoft-certified systems is that they can be purchased with the Azure Stack HCI software already installed and configured, making them about as easy to set up as possible. You literally plug them in, turn them on, and they’re ready to go, making them suitable for smaller businesses, branch offices, or other locations where there may not be dedicated or specially trained IT staff. In addition, Microsoft offers the option of installing the new Azure Stack HCI on existing datacenter hardware if it meets the necessary certification requirements.

Combining the software-defined datacenter (SDDC) capabilities inherent in HCI with the cloud-native opportunities of Azure Stack was initially a big step forward in getting companies to modernize their datacenters from both a hardware (HCI) and a software (Azure) perspective. While it may seem logical, those two modernization efforts don’t necessarily go hand-in-hand, so it was an important step for Microsoft to take. In doing so, they made the process of migrating more apps to the cloud (and, hopefully, modernizing them along the way) much easier.

This is particularly important for companies that may have been a bit slower in moving their applications to the cloud and/or organizations that have run into roadblocks with some of their legacy applications. Not all IT organizations have the skillsets they need to do this kind of work, so the more that can be done to make the process easier, the better. With its latest additions to Azure Stack HCI, Microsoft is moving down the path of further simplification and helping draw the worlds of legacy applications and hardware and the cloud a little bit closer together. No matter how you look at it, that’s a step in the right direction.

Podcast: Google Cloud Next, G Suite, IT Priority Study, Twitter Hack

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from Google’s Cloud Next event, including new offerings for GCP and G Suite, discussing a new study on IT prioritization changes from the pandemic, and chatting on the big Twitter hack.

Home is Where Gmail is for the Redesigned G Suite

On day two of Google Cloud Next Online, Google announced a big redesign for its productivity suite that delivers a deeply integrated experience between Gmail, Chat, Meet and Rooms, whether on the web or on mobile. First available to the G Suite early adopter program, the new integration will roll out to a wider user set in the coming months.

It is hard to define what this new G Suite experience is. You are tempted to call it an app on your smartphone and a single tab in your browser, but Javier Soltero, the VP and GM of G Suite, calls it an “integrated workspace.” I think of it as my communication mission control.

I look at this redesign from two angles: the user and Google. From what I have seen so far, the promise of this new integrated experience will bring dividends to both sides.

Communication is the Start of Collaboration

From a user perspective, it is hard to deny that the way we communicate has become more and more fragmented. We use chat apps, messaging apps, video apps, and good old email. We do all of this across our phones and computers of choice, which increases the complexity of managing all our conversations and our reliance on tools that sync properly across devices, and sometimes platforms as well.

Despite the current narrative that would make you believe only your grandparents still use email, the data points to a different reality: email is alive and well. A study that we at Creative Strategies conducted at the start of the shelter-in-place period, across 1,000 American workers, found that 54% use Outlook every day and another 26% use Gmail every day. Fifty percent of the panelists also admitted to using email to collaborate while on a video call.

Corralling other communication apps like Chat and Meet around Gmail seems like the right move, but it is not a magic wand that will make all our communication pains disappear. The team has taken the opportunity to use this new mission control to ease some of the management pain we all feel when using so many different channels. For instance, you can now set a Do Not Disturb status across all the apps and mute all notifications in one go. Searching across Gmail and Chat is also now possible, as is joining a meeting right from your inbox. It is all about taking friction away from the experience.

When it comes to collaboration, Google is helping manage how we work together by guiding quick exchanges into Chat, which can now include people external to your company, and longer-term projects into Rooms, where Google has added the ability to collaborate on a document in real time and assign tasks.

Driving Value and Creating More Stickiness

As Microsoft started pitching the value of the Graph and organizing workflows more and more around Microsoft Teams, I started to feel that G Suite’s power of having two strong hooks in Gmail and Google Docs was turning into a weakness for Google. In other words, as a Gmail or Docs user you could be totally satisfied and never feel the need to try any other G Suite apps, even if you have access to them through your corporate account.

Organizing the G Suite individual apps into this digital workspace offers higher discoverability by creating touchpoints throughout the workflows and has the potential to highlight the value of the suite in G Suite.

I am sure there will be a learning curve for users, especially in the mobile experience, but a few things will play in Google’s favor, like the fact that users who rely more heavily on mobile are also those with a more positive attitude toward change. While I will know more once I use the new G Suite, it seems to me that Google was very intentional in creating those touchpoints across apps while not altering the fundamental experience in Gmail. This will likely limit any frustration for users and avoid the feeling that other apps are being pushed into their workflows.

Both Google and Microsoft have now created an anchor for their productivity suites, and it is interesting that they settled on different aspects of collaboration: the former on mail, the latter on meetings. While at first it might look like they took this approach based on where they see the cornerstone of collaboration residing, I believe it has more to do with a competitive risk assessment. Microsoft Teams started as a response to Slack, and over time it ended up fending off Zoom and becoming Microsoft’s command center for collaboration. For Google, it is more about solidifying G Suite within organizations, leaving no room for individual apps like Word or Outlook to take time away from it.

More to Come

One of the strongest values G Suite offers me as a user, whether it is smart replies and the nudge feature in Gmail or smart compose in Docs, is that the more I use the tools, the more they adapt to my workflow and facilitate it. Since COVID-19, the way we work has changed so much that our workflows have had to adapt, and many of these changes will stay with us in the future. Collaborating with others is now done mostly digitally, which means that as we move across email, chat, and video, there is a lot to be learned from our behavior across all of these channels rather than from each one individually. This, to me, is the biggest opportunity for Google going forward. The more G Suite is able to intelligently orchestrate my workflow, the lower the risk that I look elsewhere to replace one of these apps. There will be no question in the mind of a user that the value comes from the suite as a whole, not from any individual app.

New Study Highlights Pandemic-Driven Shifts in IT Priorities

At this point, everyone understands that the COVID-19 pandemic has had a profound impact on all aspects of society, including our personal and professional lives. But just as our understanding of how the virus spreads and its impact has shifted over time, so too has our perception of exactly how that impact is being felt in different areas.

In order to better understand specifically how the pandemic has affected IT environments in US-based medium businesses (100-999 employees) and large enterprises (1,000+ employees), TECHnalysis Research embarked on a study last month of over 600 US-based IT decision makers. Survey respondents were asked a number of questions about what their companies’ strategic and spending priorities were before the pandemic at the beginning of the year, and what they are now several months into the pandemic. In addition, respondents were asked how they expect their work environments to change, how they are acquiring and deploying PCs for their employees, how their cloud computing and app modernization efforts are evolving, and much more.

Needless to say, the results were fascinating. At a high level, one of the most interesting discoveries was that despite many dire early warnings, IT spending plans for the year are generally still intact, with average annual IT budgets expected to increase 7% this year. From a change perspective, as Fig. 1 illustrates, that means overall levels are expected to be down just 1% versus what was expected at the beginning of the year. Breaking it down by company size shows that medium-sized businesses now expect their IT budgets to grow slightly, while large enterprises expect a larger 2.3% drop overall.

Fig. 1
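To make the relationship between those two figures concrete, here is a small sketch of the arithmetic. The 7% growth figure comes from the study; the beginning-of-year expectation is an assumed, illustrative number chosen only to show how budgets can still grow year over year while landing about 1% below plan:

```python
# Budgets can grow year over year and still come in below plan:
# the ~1% decline is relative to the growth expected in January,
# not relative to last year's spending.

original_expected_growth = 0.081  # assumed beginning-of-year plan (illustrative)
revised_expected_growth = 0.07    # revised mid-pandemic expectation (from the study)

# Budget level now expected, relative to the level originally planned
relative_level = (1 + revised_expected_growth) / (1 + original_expected_growth) - 1
print(f"Change vs. original plan: {relative_level:.1%}")
```

In other words, a budget can grow 7% over last year and still land roughly 1% below where it was expected to be at the start of the year.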

Priority-wise, what’s clear from the data is that companies shifted their focus from things that would be “nice-to-have” to things that they “need-to-have”. Specifically, from both a strategic and a spending perspective, purchasing laptops for employees became the top priority, overtaking (at least temporarily) the attention and dollars given to private, hybrid, and public cloud computing efforts. Conversely, some of the biggest decreases in prioritization and spending hit highly touted technologies such as edge computing, IoT, and private enterprise cellular networks.

From a PC client perspective, there have also been some very interesting shifts in the acceptance of different deployment and acquisition strategies. Notably, VDI (virtual desktop infrastructure) usage—which many have downplayed in the past as a backward-looking technology—has seen growth of over 11% since the start of the year. In addition, after appearing to have fallen out of favor, BYOD (Bring Your Own Device) programs—where employees purchase and use their own PCs—are now in place in over half of the companies that responded to the survey. Obviously, many of these changes were driven by the massive work-from-home experiment that IT departments around the world had to respond to immediately. However, given the productivity levels that many people have reported while working from home, many of those policies are likely to stay.

Also unlikely to change is the dramatic increase in the number of people who want to continue working from home. As Fig. 2 illustrates, companies expect, on average, just over one-third of all employees to still be working from home into next year.

Fig. 2

Once people go back to the office, they’re also likely to see some dramatic differences when they get there. In fact, only 12% of respondents don’t expect changes to their work environments, meaning 88% do. Anticipated changes include larger work areas and cubicles, physical barriers between them, and shifts from open office environments to traditional office/cube arrangements. In addition, about three-quarters of respondents expect their companies to adjust the amount of real estate they have. Interestingly, medium-sized businesses expect to increase their office space in order to accommodate more room per worker, while respondents from large enterprises felt their companies were more likely to close some offices and reduce real estate.

Of course, as recent news has highlighted, the virus and its impact continue to evolve, so there’s no great way to know exactly how all these different factors will play out until time passes. Overall, however, it’s clear that, from an IT perspective, the reactions to and impact from the virus so far are less severe than many feared. In addition, one positive side to the pandemic is that companies are throwing out their old rule books and looking at all the various technological tools at their disposal with a fresh set of eyes. Moreover, many organizations plan to aggressively adopt more advanced technologies as a means not only to survive but to thrive in our new normal.

Technology, in its many forms, has proven to be a real saving grace for many organizations in these first few months of the pandemic. As a result, company leadership recognizes the importance of IT initiatives and will likely continue to allocate resources there into the foreseeable future. This isn’t to say we won’t see big challenges for some tech, particularly for IT shops and tech suppliers to hard-hit industries like travel, entertainment, etc. For the IT departments in many businesses, and most of the major tech vendors supplying them, however, the opportunities even in these challenging times continue to be strong.

(You can download a free copy of the highlights of the “Pandemic-Based IT Priority Shifts” report here. A copy of the complete 75-slide study is available for purchase.)

Podcast: Q2 2020 US CE and PC Sales Trends with NPD’s Steve Baker

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell, along with special guest Steve Baker of NPD, talking about the surprisingly strong consumer electronics and PC sales data from the recently completed quarter, including discussions on overall trends, specific sub-category performance, and the retail brick-and-mortar vs. online sales splits.

Strong 2Q Demonstrates PC’s Continued Importance

The early view of shipment volumes for the traditional PC market is in for the second quarter, and they are quite good. While COVID-19 caused companies, schools, and communities worldwide to lock down during the quarter, demand for PCs went through the roof, growing 11.2% year over year, according to IDC’s preliminary numbers. It is further proof that the PC remains not just relevant but hugely important to businesses, students, and consumers.

Market-Wide Growth
While we are still early in processing the data, one thing seems clear: All the major vendors grew their shipments of traditional PCs (notebooks, desktops, and workstations) during the quarter. Apple, Acer, and HP all enjoyed double-digit year-over-year growth, and HP’s more than 18 million units placed it at the top with a market share of 25%. Lenovo was right behind, followed by Dell, Apple, and Acer. Early in the quarter, we saw HP placing large notebook orders with ODMs, a seemingly risky bet during uncertain times that clearly paid off for the company.

Around the world, as communities headed into lockdown, purchases of PCs—specifically notebooks—ramped up dramatically. Across the market, we saw vendors working throughout the quarter to try to replenish channels that sold out of stock as companies rushed to outfit workers shifting to work from home, schools scrambled to equip students to learn from home, and consumers grabbed up PCs to make their shelter-at-home quarantines more bearable.

We saw some supply-chain challenges carry over from the first quarter in China during the early part of the quarter. But as production ramped back up, the bigger challenge proved to be around logistics and skyrocketing costs to transport finished PCs out of China to markets around the world. Today, most of those challenges have abated, thanks in part to an uptick in passenger airlines shifting their focus to moving freight.

One of the other key shifts during the quarter was a massive move by buyers to online purchases. As you might imagine, with many traditional PC channels closed or operating under restricted access and hours, both commercial and consumer buyers shifted their purchases to online channels. We are still working to understand the extent of this shift but will be watching to see if this change becomes permanent in the coming quarters.

Strong Chrome Volumes
Operating-system splits for the quarter are still a work in progress, but early indications show that all the major OSes—Windows, macOS, and Chrome—saw strong volumes in the quarter. However, early signs point to a particularly good Chrome quarter, driven by education buying in the US, with reasonable strength in the traditional commercial and consumer segments, too.

We saw education purchases of Chromebooks in the US pulled forward into the second quarter, ahead of the traditional buying season, as schools moved to outfit students for learning at home. There are some indications that this buying has continued into the third quarter, as schools prepare for a challenging fall semester that could require additional distance-learning scenarios. One of the critical questions, as we look ahead, is where school districts are going to find the money to continue buying large volumes of Chromebooks. Many face looming budget cuts driven by tax revenue declines. While current government stimulus helps offset this, it is unlikely to make up all the difference, especially for schools that were not already planning 1-to-1 device deployments for students.

While schools drove the bulk of purchases, businesses and consumers also bought more Chromebooks during the quarter. We also saw a notable uptick in low-priced Chromebooks based on MediaTek ARM-based processors.

Looking Ahead
Now, in early July, we have continued to see strong PC shipments into the channel. But, at some point, the global recession is going to impact the PC market. It has already slowed small-business purchases, and eventually, larger companies and consumers will slow their purchases as they make plans to weather the downturn.

Regardless, the degree to which both the commercial side of the business (including education) and the consumer side saw growth during the early days of this pandemic bodes well for the PC industry long term. When things took a turn for the worse, people turned to the PC. The PC remains crucial to how employees get work done, how students learn, and how many people relax, play games, and consume entertainment. While there are undoubtedly challenging times ahead, I am excited to see how the industry will evolve its offerings over the next 18 months to accommodate our next normal.

Microsoft Teams and the Meeting Lifecycle

In June, Microsoft announced a substantial update to Microsoft Teams that included support for up to 49 people in gallery view, virtual backgrounds, meetings with up to 300 participants, instant channel meetings, and more in the IT admin suite for management and security.

But today we got more: Microsoft launched “Together Mode” for Teams. This new view places meeting participants together in a shared virtual setting, such as a coffee shop or an auditorium depending on the number of participants, where they can have some level of interaction, like a wave, a tap on the shoulder, or a high-five.

Other features include a new dynamic view for sharing content more flexibly, video filters that help with lighting and camera focus, live reactions with emojis, live captions with speaker attribution, suggested replies, and Cortana integration.

There is a lot there to unpack, but what is particularly interesting about today’s announcements is the focus on people and on making video meetings a more integral part of collaboration rather than an island with a beginning and an end when we turn the camera on and off. Meetings are born from work that happens before people come together, and that work continues after people leave.

I have always thought about meetings as part of my daily workflow. During quarantine, the growth we saw in the number of meetings made this realization even more critical, as it has implications for the tools we are using as well as the features we see as most valuable.

In a study we at Creative Strategies conducted across 1,000 US respondents working from home during the pandemic, daily meetings became a reality for 62% of the panelists, up from 25% before the quarantine. As I have discussed before, I expect more flexible work and more remote working to remain in place once offices reopen. Thirty-one percent of our panelists said they would like to continue to work from home once sheltering in place is lifted, with another 22% wanting the flexibility to do so a couple of days a week. Microsoft’s Work Trend Index report, published on the back of a recent study the company conducted, supports our finding: 82% of the managers it reached out to said they expect to have more flexible work-from-home policies as the economy reopens.

It’s All About the People

So now we have established the long-term need to look at video meetings as an essential and integral part of our workflows. We have also clearly found that working from home is productive, but it has been interesting to see how video calls have impacted the way we feel about our colleagues. Microsoft said that 62% of people surveyed feel more empathetic toward their colleagues because they have made more of a connection with their personal lives by being in their homes. My own setup has undoubtedly offered a glimpse into how demanding my cats are, how much my dogs sleep, and how well trained my kid is at not just walking into my home office!

While people might have a better appreciation for their colleagues, they do not necessarily feel more connected to them. Sixty percent of the people Microsoft surveyed said they feel less connected to their colleagues, and yet 52% feel more valued or included as a remote contributor. As a long-time remote worker, this last point hit me very early in this quarantine phase. It was apparent to me that everybody contributing remotely in the same way was democratizing the virtual meeting-room table we were joining. Meanwhile, the combination of the amount of time we spend in virtual meetings and how our brains process the visual and audio information they receive is having an impact on how tired and overwhelmed some people feel while working from home.

All these data points informed Microsoft’s decision to focus on tools that will make the video interaction more natural and more focused on the people. There is certainly a degree of maturity in how today’s features came about. I am not sure if there is a hype cycle of video meetings, but if there were, it would certainly show us edging from the slope of enlightenment to the plateau of productivity.

The new Together Mode might seem like a gimmick at first, but if you do not get too carried away with the high-fives, you can see there are benefits to it. First, there is a decrease in distractions as the focus shifts from what is going on around you to the people. It also draws participants more into the meeting by providing more body language cues. This is particularly useful in a lecture format, whether in a business or an education context. Other features help spotlight people: live captions with speaker attribution make it easier to follow along, especially when you are not familiar with the people in the meeting, and new filters improve the clarity of the picture so you can better read facial expressions.

The Meeting Lifecycle

I want to go back to the concept of the meeting lifecycle or, if you prefer, how you should think about the meeting as part of your daily workflow rather than as an event in isolation. In a real-life context the flow might be harder to see, but if you think about it for a second, it is there. You would take your computer or a notebook into a room, together with any documents or information you might need. During the meeting, you would take notes, brainstorm on a whiteboard, and take pictures of the information, then walk out of the meeting to pick up the next steps over email or in future meetings.

When we come together virtually, however, the way the meeting blends with what happens before and after is much more obvious, and the tools that allow that blending to be as smooth as possible will drive more significant engagement and grow loyalty. This is why I believe video-only or chat-only solutions will find it harder to compete with solutions that can transcend what are becoming increasingly unnecessary artificial boundaries between people, content, and the task at hand. Having content and people engaged on one platform will provide better data, which in turn will allow solutions to surface information at the right time and connect the right people in a timely fashion, basically offering more and more value by taking care of the most frustrating aspects of collaboration.


There is no question that remote work, and video meetings in particular, will continue to see an array of developments for quite some time, including welcoming new entrants like the collaboration app mmhmm to the space. On the one hand, this focus highlights how overlooked remote collaboration has been for so many years. On the other hand, it shows how much our current experience can be improved today by AI and software rather than having to wait for the proliferation of XR-based solutions.

Nvidia Virtual GPU Update Brings Remote Desktops, Workstations and VR to Life

The new work habits that we’ve all adjusted to because of the pandemic have led many companies to take a fresh look at how they can provide computing resources to people working from home. In some cases, this is giving new life to technologies, such as VDI (virtual desktop infrastructure), that provide server-based computing sessions to remote desktops.

In addition, companies have also had to figure out how to provide remote access to workers with very specific, and very demanding technical requirements, such as architects, product designers, data scientists, media creators, and other people who typically use workstations in an office environment.

One critical technology for these new challenges is server-based virtual GPUs, or vGPUs for short. Nvidia has built datacenter-optimized GPUs for many years, and several years back made them a shareable and manageable resource through the introduction of its Virtual GPU software. The company’s latest July 2020 vGPU software release (one of two it typically does per year) adds several enhancements designed to let these server-based graphics chips function in a wider variety of software operating environments, offer better compatibility across more applications, and be managed more easily.

As with many enterprise-focused technologies, the devil is in the details when it comes to exactly how and where virtual GPUs can function. Given the wide range of different server virtualization platforms and the graphics driver optimizations required for certain workstation applications, it can be challenging to get promising-sounding technologies, like vGPUs, to work in all environments. To address these needs, the new release adds native support for virtualization on SUSE Linux Enterprise Server-based infrastructure, which is often used by data scientists, and offers additional management optimizations for VMware-based environments.

The new release also expands the capabilities of the different level GPU drivers that Nvidia provides, thereby increasing the range of applications it can support. Even details like different versions of drivers can make a difference in compatibility and performance. The latest release gives IT managers the flexibility to run different driver versions on the server and on a client device. This capability, called cross-branch support, is critically important for shared resources like vGPUs, because one application running on one device may need one version of a driver, and another application on another device may require another one.

Real-time collaboration across multiple applications is also supported in this July 2020 release. For VR-based applications, the new software, in conjunction with Nvidia’s CloudXR platform, can provide support for untethered mixed reality headsets with 4K resolutions at up to 120 Hz refresh rates over WiFi and 5G networks.
With the Quadro Virtual Workstation software—one of the several levels of drivers that Nvidia makes available through its vGPU software—multiple people can work on CAD, architecture, or other highly demanding applications with real-time rendering on regular PCs. For designers, engineers, and others working from home, this capability can allow them to function as they normally would in a workstation-equipped office.

Interest in the ability to get remote access to these graphically demanding applications has been extremely high during the pandemic, which should be surprising to no one. This also aligns with results from a newly completed survey by TECHnalysis Research of over 600 US-based IT managers about the impact that COVID-19 has had on their IT strategies, priorities, and computing programs.

According to the study, virtual desktop infrastructure (VDI) usage grew 11 percentage points in just a few months, from 48% of companies saying they used server-based computing models at the beginning of the year to 59% who said they are using them now. Not all of those VDI instances use virtual GPUs, of course, but virtual GPUs represent a significant and critical portion of them.
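It is worth being precise about what that 11 refers to; the sketch below, using only the 48% and 59% figures cited above, separates the absolute change in percentage points from the relative growth in the population of VDI-using companies:

```python
# Survey figures cited above: share of companies using server-based
# computing models at the start of 2020 vs. mid-2020.
before, now = 0.48, 0.59

# Absolute change: how many percentage points the share moved.
point_change = (now - before) * 100        # 11 percentage points

# Relative change: how much the pool of VDI users itself grew.
relative_growth = (now - before) / before  # ~23% more companies

print(f"{point_change:.0f} points, {relative_growth:.0%} relative growth")
```

In other words, the share of companies rose 11 points, but the number of companies using VDI grew by roughly a quarter.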

Ongoing flexibility has become the mantra by which IT organizations and workers are adapting to new work realities. As a result, technologies, such as vGPUs, that can enable flexibility are going to be a critical part of IT managers’ toolkits for some time to come.

Are We Already Past ‘Peak Streaming’?

The average number of TV viewing hours per household has, not surprisingly, risen during the coronavirus era. The pandemic has also shone a light on the good, the bad, and the ugly of the streaming TV revolution, which has been steadily upending traditional pay TV viewing patterns and business models. But I’d argue that we’re starting to reverse course on what, until recently, has been mainly a good thing. For streaming as a business, there will be higher levels of churn, several failures, and eventual consolidation. For consumers, there will be greater confusion and an eventual paring back of the number of services subscribed to.

My view is that we’ll look back on the successful launch of Disney+ as the peak of streaming TV. Until that point, most developments in the sector were accretive to the business and a positive for consumers. It all started with Netflix’s first original programming in 2013 (was it that recently?). The other ‘majors’ mainly added value. Hulu was a great alternative to a cable subscription and also produced some excellent original programming. Amazon Prime Video made a lot of sense as another reason for subscribers to keep their Amazon Prime subscriptions, but its success is measured differently than other streaming services.  HBO and Showtime doubled down on original content and successfully migrated to a TV Anywhere model, with options for non-cable subscribers and good apps for tablets and phones.

Then came Disney+ in late 2019, which has been successful so far for three reasons: exposure of previously unavailable titles from its vast library, combined with some signature original content; it’s a legendary brand serving a well-defined market segment; and the pricing is attractive.

To me, the beginning of the downward slope began with Apple TV Plus, which launched in November 2019, technically two weeks before the Disney+ launch. This is the start of what I call the “unnecessary” phase of the streaming era. It’s Apple’s latest foray into what has been, for the most part, a failed, 10+ year attempt to achieve the same success in TV it had in music. Apple is throwing billions of dollars at new content (most of it mediocre). Sure, the free year of Apple TV Plus that comes with any new Apple device purchase is a nice perk, but I doubt anybody is buying a new Apple device because of this benefit. It will be interesting to see how many people pay for the service once they come off their free year. Aside from that, there’s little value add to the experience.

Then there’s the recent launch of HBO Max, with most rational people asking “what?” and “why?”. This is a classic example of messing with the ‘if it ain’t broke, don’t fix it’ mantra. But we could see this coming from a mile away. Question: has there ever been a successful telecom company foray into media? HBO was a beloved brand with New Yorker levels of customer loyalty and a defined place in the universe with prestige content. You might have gotten Netflix to do My Brilliant Friend, but that’s HBO’s default level of quality. So AT&T acquired Warner, and within months most of HBO’s top brass left, and the then-new head of AT&T’s media business, John Stankey, told HBO to produce more content (translation: content dilution). Then HBO Max launched, sowing further confusion into the already messy HBO Now/HBO Go distinction and instantly upsetting millions of customers who could no longer get the HBO app on their devices. Why didn’t AT&T just leave HBO alone and tack on an extra $2-3 per month for access to Warner’s wonderful content library, if that’s what it needed to do to justify this ill-advised acquisition?

And what of Peacock, which is now soft-launching for Xfinity (Comcast) subscribers? Since I get it for free, my reaction is “ohhh, that’s nice” to some of the TV and movie treasures in Universal’s library. But it’s more like when the restaurant presents you with a couple of nice chocolates after a good meal than something you would have ordered. It’s also hard to compute the added value for a subscriber who already gets a decent pay TV package from Comcast, most of whose content is available on-demand or through Hulu. Yes, there will be some new original content for Peacock subscribers, but that piece of it is looking more like Apple TV Plus: “too much already, and if there’s something really good I’ll subscribe for a month and get out”.

And for those who don’t have cable, Peacock is the Comcast/Universal version of CBS All Access, I suppose. But will consumers pay X per month for Peacock so they can get The Office plus some other content they didn’t even know they were missing? Put another way, Peacock + CBS All Access + Hulu = Cable.

And then you’ve got Quibi, the least necessary of them all. $2+ billion invested so far. Ask ten of your under-35-year-old friends whether they were looking for more short-form content for their phones. They’d answer: “isn’t that what YouTube is?” Jeffrey Katzenberg has always loved sequels, but I don’t think he was looking for go90: The Sequel, which Quibi is rapidly turning into. And, yes, I know, this isn’t technically streaming TV, but one of the chief complaints (perhaps amplified by Quibi’s coronavirus-era launch) is, ironically, that you can’t get Quibi on your TV.

So now let’s look at this from the customer perspective. It’s never a good thing when you throw customers into the underwear of the business. Consumers were already a little deer-in-the-headlights with this streaming TV thing, so overwhelmed by the dizzying choice of 500+ new scripted series every year that they opted just to watch The Office or Friends instead. But now you have the industry forcing consumers to develop some sort of mental matrix of which studio produces which content. Quiz: who produces Law & Order? a) Comcast/Universal b) AT&T/Warner c) Paramount d) Disney/Fox

So, in the middle of a challenging economy, when households are looking more carefully at their spending, the Netflix + HBO + CBS All Access + Showtime + Hulu + Disney+ stack is a $60+ nugget, and it doesn’t include any live, linear TV.

Consumers are also now thrown into the crosshairs of this new season of content licensing, with shows moving from one place to another. It’s not unlike the war for talent in the sports business: prices get bid up for established content and talent (The Office, Friends, Ryan Murphy), followed by bidding wars for the hot prospects (new shows with top talent).

There’s certainly been an upside to the streaming explosion. There’s loads of fantastic content — pretty much something for everyone. This has been a boon for writers and others involved in content development and production. And Netflix and Amazon have filled a huge gap left by the major studios — who appear mainly interested in funding franchises and comic book movies — by funding films like The Irishman and Da 5 Bloods.

However, something’s going to give over the next couple of years. The first stage of this is already happening, in the form of subscriber losses not only in pay cable but also in live TV alternatives such as Sling and AT&T TV Now (formerly DirecTV Now). There will be further exits or consolidations in the vMVPD space (could there also be an exit for this acronym?).

The next stage will be elevated churn among some of the streaming-only players, which some recent data is already showing. Subscribers will binge a couple of favorite shows, then move on. There will be some high-profile failures, while others just quietly fade into the sunset. There will also be some consolidation. If Quibi continues to underperform, it could become a bargain buy for an AT&T/Warner or Comcast. Netflix, which has eschewed acquisitions to date, could also be a buyer.

I also think there will be an emerging, and more interesting market for inexpensive, niche services. Philo, for example, has performed well as a $20 per month skinny bundle of mainly entertainment channels, which it can do profitably because it’s not saddled by rights-prohibitive sports channels (a particularly good deal in our current sports-less TV universe!).

I think one way this space could go is that consumers might opt for a smaller number of ‘foundational’ streaming services that they subscribe to on an ongoing basis. They’ll then layer on shorter-term subscriptions to niche channels (adventure sports, cooking, indie movies) for a temporary or seasonal fix.

I’ll mark late 2019 as ‘Peak Streaming’. The next chapter is already starting to be written.

Microsoft’s Initiative to Upskill Workers Looks Beyond Charitable Donations

Covid-19 dramatically exacerbated the current economic recession that the US and many countries around the world are facing. Yet workers across different segments were already starting to see the impact technology is having in changing current roles, creating new job functions, and requiring new skills both to get a job and to progress in a career.

I have written in the past about how companies like IBM are preparing the current and future workforce for digital transformation, something I find some companies in tech feel responsible for, and rightly so, given the role they play in transforming the workplace through technology.

The Context

The current environment created an urgency we had not felt before, mostly because the change that we thought was going to occur gradually has happened over a couple of months due to Covid-19. That “two years of digital transformation happening in two months,” which Microsoft’s CEO mentioned just a while ago, is already having an impact on workers.

This, coupled with unemployment figures that exceeded what the US experienced during the Great Depression in the 1930s, means that as much as we want to reopen the economy, many people will not have a job to return to.

During a video event announcing the launch of a new initiative to upskill 25 million people, Microsoft CEO Satya Nadella made it clear that for the economy to recover fully “we must ensure no one is left behind,” and this means empowering people to be the best they can be and do the best job they can.

Covid-19 has been impacting Communities of Color in the US more acutely than any other community, not only by taking more lives but also through the unemployment crisis coming out of the pandemic. If you are wondering why this is the case, you only need to refer back to Nadella’s opening remarks on racism and social inequity. The higher number of deaths among People of Color can be explained by a lack of access to good health services, and the higher unemployment is linked to a lack of access to education and an attitude from some employers who see People of Color as more disposable than white workers. Nadella’s statement was strong and clear: “There’s absolutely no place for hate, bias or racism in our world. The cracks of injustice and inequity in our society hinder progress for everyone and call on us to act.”

Brad Smith pointed out during the event that over the next five years, nearly 149 million new tech jobs will be created in fields like software development, cybersecurity, and machine learning. For those jobs to become an opportunity for people, however, Microsoft feels there needs to be more focus on training and, to some extent, this requires a shift in how employers think, as employer investments in employee training have been declining over the last 20 years.


The problem, as you can see, is a complex one, and Microsoft is tackling it on two fronts: providing tools for people to get the skills and qualifications they need, and giving employers access to more data so they can make informed decisions.

For upskilling, Microsoft is bringing together every part of the company (LinkedIn, GitHub and Microsoft itself) and will focus on:

  • Free access to learning paths and content to help people develop the skills these positions require;
  • Low-cost certifications and free job-seeking tools to help people who develop these skills pursue new jobs.

Through the same assets and the use of the Economic Graph, Microsoft is also aiming to use data to identify in-demand jobs and the skills needed to fill them, so it can create learning paths for these via LinkedIn Learning. Using this data, Microsoft identified ten jobs that are in demand in today’s economy and well-positioned to continue to grow in the future. These ten jobs were identified as having the greatest number of job openings, steady growth over the past four years, a livable wage, and skills that can be learned online.

As part of the training resources, Microsoft is also providing low-cost access to industry-recognized Microsoft Certifications. To expand the reach, Microsoft is also providing $20 million in financial grants, plus technical support, to nonprofit organizations around the world so that they can help with complementary initiatives. Five million of the $20 million will be dedicated to nonprofits in the US aimed at alleviating the impact among those who, as the data shows, are hit hardest by unemployment: People of Color and women and, in particular, Women of Color.

When it comes to social issues such as unemployment, we know the solution is not just within the private sector. This is why Microsoft pledged to make more reliable analytics available to governments around the world if those governments are willing to open up public data sets and look at innovating government systems around workforce training and education.


This is an ambitious initiative on Microsoft’s part; there is no question about it. The thought the leadership put into breaking down the problem to come up with solutions that leveraged their assets and would have the most significant impact gives me confidence in the success it will achieve.

Microsoft’s mission is to empower every person and every organization on the planet, and this cannot happen without the company owning up to the impact technology is having on the world as well as the responsibility it has to use its power and wealth to impact society positively. I look forward to seeing other tech companies step up and follow suit.


Power Efficient Computing Noteworthy During Pandemic

One of the few benefits many people have experienced as part of the great work-at-home and learn-at-home experiment that we’ve all been through is improved air quality. In fact, because of the significant reduction in both commuting and travel, both the visual and measured quality of the air have gotten noticeably better in most places around the world.

As a result, the pandemic has inspired a renewed focus on environmental issues. At the same time, there’s been a huge focus on how digital technology—particularly computing devices, cloud infrastructure, and various types of networks—has allowed us to stay as productive as we were prior to the pandemic (if not even more so!).

Interestingly, the stories of computing and conservation have also started to become entwined in several different ways. First, there’s been a strong transition to laptop PCs, which use significantly less power than desktops, as many people’s primary computing device. While many people think notebooks have been the default standard for a while, the truth is that desktop PCs still represented a fairly significant portion of computers used in many businesses up through the start of the pandemic. However, with the requirement to work at home, companies have been scrambling to get laptops to their employees. As a result, the incredible reliance we have on these more power-efficient devices has never been more apparent. The real-world impact of their increased use is less demand on the electrical grid to power them which, in turn, can offer benefits to the environment.

Second, there’s been a much bigger focus on cloud-delivered apps and services, which can also indirectly lead to an improved environment. In particular, there’s been a great deal more attention placed on modernizing and “cloudifying” applications for business. Because these modernized applications can run in power-efficient cloud-computing data centers, this too has the benefit of reducing the power demands necessary to complete specific tasks.

In a recently completed survey by TECHnalysis Research of over 600 US-based IT professionals, we found that when asked to rank the top two priorities for IT initiatives since the rise of the pandemic, modernizing applications is the most important, followed closely by purchasing laptops for their employees. Not surprisingly, growing usage of hybrid, private, and public cloud rounded out the top 5, as shown in Figure 1. The app modernization effort, of course, entails the process of converting legacy applications into newer app formats that can run efficiently in one of these hybrid, private and/or public cloud environments.

Fig. 1

What’s interesting about these developments from a conservation perspective is that there have even been studies which show that cloud-based computing resources are more energy efficient than many people realize. In fact, thanks to a combination of significantly more controlled usage of computing, storage, and networking resources in large cloud data centers, new types of computing (and pricing) models that use those resources more efficiently, and the growing use of more power efficient CPUs, there have been great improvements in computing power per watt. In other words, with cloud computing, it’s possible to get even more computing work done with the same (or even smaller) amounts of power than were used in the past.

On the notebook PC side, there have been similar trends in power efficiency as well. In fact, just last week AMD announced that they surpassed their 25×20 goals set back in 2014. Specifically, the company announced six years ago that they wanted to improve the power efficiency of their mobile CPUs by a factor of 25 before the end of this year. With the release of their recent Ryzen 7 4800H mobile processor, the company actually achieved an impressive 31.7X improvement in power efficiency—specifically a 5x increase in performance combined with a reduction to 1/6th of the required power—versus a 2014 vintage AMD FX-7600P chip.
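The way those two numbers combine into a single efficiency figure is simple arithmetic: energy efficiency is performance per watt, so the gains multiply. A quick check, using AMD's published 5x and 1/6th multipliers (the precise values the company measured compound to the reported 31.7x):

```python
# Energy efficiency = performance / power, so the two factors multiply.
perf_gain = 5.0        # ~5x the performance (AMD's figure)
power_ratio = 1 / 6    # ~1/6th the power draw (AMD's figure)

efficiency_gain = perf_gain / power_ratio
print(f"~{efficiency_gain:.0f}x perf-per-watt")  # ~30x from the rounded inputs
```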

The improvements are due to a wide range of factors, including better core designs, new chiplet architectures within their CPUs, and the company’s move to 7nm production from the 28nm process used back in 2014. The company also made a number of enhancements to the chip’s thermal design and power management capabilities over the years. All told, it’s another impressive example of how far AMD has improved their technical capabilities and competitive strengths over the last few years.

As companies start to bring their employees back into the office and commuting and travel trends slowly start to tick up, we may begin to see some negative impact on the environment. In terms of computing resources, however, the ongoing developments in power and performance efficiency for both data centers and laptops can hopefully keep their influence to a minimum.

Apple Pushes Augmented Reality Forward with ARKit 4

While Apple didn’t talk much about Augmented Reality during the WWDC keynote, the company did release this week a new version of its software developer kit (SDK) that developers use to create AR apps. ARKit 4 brings new capabilities to iOS 14 that developers can leverage to create experiences on all current iOS devices. It also adds important new depth-sensing capabilities accessible on devices that have Apple’s LiDAR Scanner (currently shipping only on the latest iPad Pro products). And, perhaps most importantly, ARKit 4 introduces Location Anchors, which lets developers place a persistent virtual object in a specific place in the real world.

Leveraging LiDAR; Improved Face Tracking
Apple introduced the Scene Geometry API in ARKit 3.5, after the launch of the latest iPad Pro products with LiDAR scanners. I expect Apple to add LiDAR scanners to the next generation of iPhones shipping later this year, so the feature is likely to get a fair amount of discussion during the next launch event.

The LiDAR scanner, which faces out from the back of the device, works by emitting light onto the surrounding area and collecting the reflected light. The device uses this data to create a topological map of the environment. This information lets developers create a richer AR experience by driving more realistic occlusion. Occlusion occurs when one object, real or virtual, appears in front of another and partially blocks the user’s view of it; rendering this correctly requires knowing the depth of the real-world scene. Good occlusion is key to creating a more immersive AR experience. The LiDAR scanner also brings enhanced capabilities such as more realistic physics-based interactions between real and virtual objects. It also offers improved virtual lighting on real-world surfaces.
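Depth-driven occlusion ultimately comes down to a per-pixel depth comparison: wherever the real-world surface is closer to the camera than the virtual object, the real pixel wins. A minimal sketch of that idea in plain Python, with toy values for illustration (ARKit performs this on the GPU against the LiDAR-derived depth map):

```python
def composite(real_rgb, virtual_rgb, real_depth, virtual_depth):
    """Per-pixel occlusion: show the virtual pixel only where the
    virtual object is closer to the camera than the real surface."""
    h, w = len(real_depth), len(real_depth[0])
    return [[virtual_rgb[y][x] if virtual_depth[y][x] < real_depth[y][x]
             else real_rgb[y][x]
             for x in range(w)] for y in range(h)]

# Toy 1x2 frame: a virtual cube sits 1.0m away; the real wall is 0.5m
# away in the left pixel (hiding the cube) and 2.0m away in the right.
real_depth    = [[0.5, 2.0]]
virtual_depth = [[1.0, 1.0]]
real_rgb      = [["wall", "wall"]]
virtual_rgb   = [["cube", "cube"]]

print(composite(real_rgb, virtual_rgb, real_depth, virtual_depth))
# → [['wall', 'cube']]: the cube is occluded only behind the nearer wall
```

This is why a dense, accurate depth map matters: without it, the virtual object is simply drawn on top of everything and the illusion breaks.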

In iOS 14, Apple further expands the capabilities of LiDAR scanner-enabled devices to better articulate the distance between the iOS device and objects in the environment. On-device machine learning merges the color (RGB) image captured by the device’s wide-angle camera with the depth readings from the LiDAR scanner to create a dense depth map. This depth data updates at a 60Hz refresh rate, which means that as the iOS device moves, the depth data reflects that movement.

LiDAR also enables improvements to a feature called ray casting, a technique that uses computational geometry to project a ray from a point on the two-dimensional screen into the three-dimensional scene. In ARKit 4, developers can leverage the LiDAR scanner to use ray casting to place virtual objects more quickly and precisely in the real world.
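The geometry behind placing an object with a cast ray is straightforward in the simplest case: a ray from the camera intersecting a horizontal surface. This is a generic sketch of that math, not Apple's API (which returns hit results against the reconstructed scene mesh):

```python
def ray_hit_floor(origin, direction, floor_y=0.0):
    """Intersect a camera ray with the horizontal plane y = floor_y.
    origin and direction are (x, y, z) tuples; returns the hit point,
    or None if the ray never reaches the plane."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dy == 0:                  # ray runs parallel to the plane
        return None
    t = (floor_y - oy) / dy      # distance parameter along the ray
    if t < 0:                    # plane is behind the ray's origin
        return None
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Camera held 1.5m above the floor, looking forward and down at 45 degrees:
hit = ray_hit_floor(origin=(0.0, 1.5, 0.0), direction=(0.0, -1.0, -1.0))
print(hit)  # → (0.0, 0.0, -1.5): anchor the virtual object at this point
```

What LiDAR adds is a far better model of the surfaces being hit, so the ray can terminate on real geometry rather than on an idealized plane.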

Finally, Apple introduced face tracking in a previous version of ARKit, but the capability was limited to devices with a front-facing TrueDepth camera. ARKit 4 expands face tracking to all devices with an A12 Bionic processor or later, including the recently launched iPhone SE. Face tracking lets developers create apps that place virtual content over your face, tracking your expressions in real time.

Location Anchors

While the new capabilities enabled by the LiDAR scanner are exciting, perhaps the most notable new feature Apple announced with ARKit 4 is Location Anchors. This new technology brings higher-quality AR content to the outdoors. Location Anchors let developers specify latitude, longitude, and altitude. ARKit 4 then leverages these coordinates—plus high-resolution Apple Maps data—to place experiences at a specific location in the real world.
The process for driving this next-generation AR experience is called visual localization, and it accurately places your device in relation to the surrounding environment. Apple says this is notably more accurate than can be done with GPS alone. Advanced machine learning techniques drive this process and run locally on the device.
The end result is that when a developer places a virtual object in the real world—for example, a digital sculpture at the intersection of two streets—that object will persist in that location and will appear in the exact same location, in precisely the same manner, to anyone viewing it with a capable device. Apple says Location Anchors will first roll out in major cities such as Los Angeles, San Francisco, Chicago, Miami, and New York, with more appearing later this summer. To leverage location anchors, apps must be running on devices with GPS and Apple’s A12 Bionic chip or later.
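To give a feel for what anchoring content to geographic coordinates involves, here is a rough sketch of converting a latitude/longitude offset into local meters around the device, using an equirectangular approximation that holds over city-block distances. The coordinates are hypothetical, and Apple's actual visual localization is far more sophisticated than this:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def geo_to_local_meters(device_lat, device_lon, anchor_lat, anchor_lon):
    """Approximate (east, north) offset in meters of an anchor from the
    device; good enough for the short ranges AR placement cares about."""
    lat0 = math.radians(device_lat)
    north = math.radians(anchor_lat - device_lat) * EARTH_RADIUS_M
    east = math.radians(anchor_lon - device_lon) * EARTH_RADIUS_M * math.cos(lat0)
    return east, north

# Hypothetical anchor ~0.001 degrees due north of a device in San Francisco:
east, north = geo_to_local_meters(37.7749, -122.4194, 37.7759, -122.4194)
print(round(east, 1), round(north, 1))  # roughly 0.0 and 111.2 meters
```

GPS alone can be off by several of those meters, which is why the visual localization step against Apple Maps data matters for pinning content precisely.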

The importance of Location Anchors cannot be overstated, and it speaks to the fact that, as per usual, Apple is playing a very long game here. There are entire startups and market segments focused on the technology that underlies a capability Apple quietly launched with zero fanfare this week. Because it owns its own map data, and because it has hundreds of millions of devices constantly capturing location data, Apple is positioning itself to bring location-based AR to the masses. These new features will enable developers to create next-generation apps that will eventually make AR a mainstream technology.

Slow, Steady AR Progress
Those of us closely monitoring the AR space sometimes lament the seemingly slow pace of advancement. While many of us would love to have our Apple Glasses now, the fact is this is a complicated technology, and doing it right is more important than doing it fast. In addition to the real-world device challenges associated with optics, battery life, wireless connectivity, and more, great AR content will require a deep understanding of the real, ever-changing physical world. Few companies have the resources to acquire that understanding on their own (see Microsoft’s Spatial Anchors and Niantic’s recent acquisition of 6D.AI). Fewer still own both the hardware and software platforms upon which that AR content will run. With ARKit 4 and iOS 14, Apple fortifies its position as the world’s largest AR platform, and it gives developers new tools to create the types of AR apps we’ve all been waiting to experience.

#WWDC20: An Apple Original

Leading into Apple’s developer conference (WWDC), I was as curious about the way Apple would run the event as I was to hear what news the California company had on the software front. A lot of the attention since Monday’s keynote has been on Apple Silicon, but there were many other interesting announcements that said a lot about the state of Apple’s ecosystem and offered some hints about Apple’s strategy with some of its products.

Let me start with the keynote, a product in itself, and then move on to a few announcements I found particularly interesting.

There was much speculation as to how Apple would approach the event. We had a little bit of an idea from the iPad Pro launch back in April, but many questioned whether Apple would deliver the keynote from the stage of the Steve Jobs Theater, as it might all feel awkward without an audience. Apple masterfully used the empty theater to make the point that this time is unprecedented, embracing those empty seats and making them the backdrop for Tim Cook’s initial remarks. Cook started not with COVID-19, as many events I have attended over the past few weeks have, but by addressing racism in America.

“To start, I want to address the topic of racism, inequality, and injustice, and to recognize the pain being felt throughout our nation, especially in our Black and Brown communities, after the senseless killing of George Floyd. While the events of this past month are sadly not new, they have caused us to face long-standing institutional inequalities and social injustices. This country was founded on the principles of freedom and equality for all. For too many people and for too long, we haven’t lived up to those ideals. We’re inspired and moved by the passionate people around our nation and around the world who have stood up to demand change.”

Two weeks ago, Apple announced its Racial Equity and Justice Initiative, committing $100 million to challenge systemic barriers that limit opportunity for communities of color in the critical areas of education, economic equality, and criminal justice. For developers, Apple announced a new Developer Entrepreneur Camp for Black developers.

The almost two-hour keynote was entirely pre-recorded, with a good variety of speakers orchestrated by Apple’s SVP of Software Engineering, Craig Federighi. The fast pace of the announcements, the changes in location, and the lack of corny, awkward moments made the whole keynote quite pleasant. The attention to detail Apple put into its keynote was evident in many ways: the diversity of the speakers, the shots of the Apple spaceship campus highlighting the rainbow stage designed by Jony Ive, and the number of times Mythic Quest, the Apple TV+ series about a game developer company, popped up in the demos.

Cook did address COVID-19 by thanking healthcare workers and talking about the impact the virus has had on the products Apple designs. We saw a mask Memoji and a handwashing app for Apple Watch. Most important, though, was the message that ran at the end of the keynote highlighting the health and safety precautions Apple took to film the event. At a time when case numbers are rising across many US states, demonstrating how seriously the virus should be taken is more persuasive than words alone could convey.

Cook closed the keynote by bringing it back to diversity, noting how Apple has thrived thanks to the very diverse developer ecosystem that comes together this week for WWDC. The theme of the keynote carried forward into the developer sessions I watched on Tuesday, both in style and in the diversity of speakers. The production of the whole event felt like an Apple Original TV series.

Default Browser and Email

Apple did not make a big announcement about the ability in iOS 14 and iPadOS 14 to change the default email and browser apps; it was listed on a keynote slide and brought up in the Platforms State of the Union. This means you will be able to change the default browser and email apps used from within any app, the same way you already can on the Mac or other platforms.

It is hard not to think that the timing of this change might help Apple alleviate some of the scrutiny it is getting in Europe over its perceived anti-competitive practices within the developer ecosystem. I am not sure this is a carrot of sorts, but it certainly feels like Apple has nothing to lose by providing more flexibility on these two apps. Apple does not benefit from forcing Mail on people, especially as it is not as differentiated as what competitors are offering. As far as the browser goes, I believe Apple is counting on most of its users appreciating the many improvements it has brought to Safari, as well as its continued focus on privacy. It will be interesting to see if Maps is next on the list, although I would argue that Maps is much more central to a wider set of experiences than email and browsing are.


iMessage

iMessage is one of Apple’s most popular services; we all know about life as a blue or green bubble. What has transpired over the past few months, however, as more people have been working remotely, is how much iMessage has become a full-fledged collaboration app. In a recent study we at Creative Strategies conducted in the US, 31% of respondents named iMessage as the collaboration app they choose to use every day. That number grows to 50% among Mac users. It is no surprise, then, that Apple announced a series of new features that make iMessage feel much more like Slack than text messaging. Apple added the ability to pin conversations with the people we care about most, added more visuals to groups, including the ability to assign a picture to a group, and added Mentions as a way to navigate conversations and better surface the exchanges that include you.


Siri

It had been so long since Siri had her moment on stage that I think people were quite surprised. Apple addressed one of the biggest complaints users have had about the Siri UI: when you invoke her, the whole screen is taken over. Now Siri’s logo floats on the screen more discreetly. We also heard that Siri will be able to reply to more complex questions, send audio messages, and translate more languages. Apple also added a new app called Translate, which helps users have conversations in different languages, all done through on-device ML, playing catch-up with Google on this feature.

These enhancements did not feel like the significant change many have been expecting since John Giannandrea became SVP of Machine Learning and AI Strategy, but they are welcome improvements. As Apple moves more and more to a single architecture across its platforms, however, I feel the inconsistencies of Siri will become even more evident to users.

Sleep Tracking

I mention this mostly because I have been hoping for the past two years to see something new in connection with sleep. There is a ton of research pointing to the impact sleep has on stress, overall wellbeing, and weight management, which made it impossible for Apple not to address this aspect of health. Considering that Apple did not mention anything in watchOS 7 that allows for more accurate tracking, I assume Apple worked on delivering an overall experience that starts with preparing a bedtime routine on the iPhone, then uses Apple Watch to track sleep, and lets you wake up to the information needed to start the day, including battery status. The other part Apple might have been working on is the ML models Apple Watch will use to sense the user’s motion and detect the movements caused by breathing, providing signals of when you are asleep or awake.

Adding information on battery notifications and charge time shows Apple’s awareness that, for some people, using Apple Watch as a sleep-tracking device might require a change in routine. It is important that, as users adjust, they do not have a negative experience with Apple Watch running out of battery in the middle of the day. Interestingly, Apple also pointed out that Sleep Schedule, Wind Down, and Sleep Mode are available on iPhone with iOS 14 even for users who do not have an Apple Watch.


There was far more announced during the keynote than can be covered in one article, but common throughout the announcements was a purposeful effort to deliver an even more seamless experience to multi-device users. Whether it was the iPhone, iPad, and Mac UIs blending more and more, or AirPods connecting naturally from device to device based on where you are playing music or picking up a call, it felt like Apple wants to deliver a familiar experience across all devices, with differentiation added only when it matters. For developers, that cohesive ecosystem represents an even more homogeneous addressable market. Overall, I feel Apple is also focusing on making it easier for iPhone users – its largest user base – to add other devices by lowering their learning curve. This might mean that power users, especially Mac power users, feel their experience is being gentrified, but the upside of having a much wider set of apps should make up for it.


Apple Silicon Inside

Apple’s Mac line may be the product line that has drawn the most scrutiny in the past few years. As important as the Mac is to nearly all of Apple’s power users, early adopters, media, and arguably some of the most influential people in many segments of the tech industry, its future has seemed to be constantly in question since the launch of the iPad.

Apple’s Framing for the Transition from Intel
Apple is one of the industry’s best storytellers when it comes to framing big ideas. That skill was on full display as the company made the case for transitioning away from Intel. Leading that message was Johny Srouji, SVP of Hardware Technologies.

He set the stage by talking about how the high bar of the iPhone and the design team’s ambitions demanded custom silicon. It is this phrase, “demanded custom silicon,” that stood out to me and drove home something I have tried to articulate many times before: Apple had the ambition to take the iPhone somewhere, and it could not find any vendor silicon to meet its needs. In this segment on developing custom silicon for iPhone, Srouji also made this statement: “This is where we developed our relentless pursuit of performance per watt.”

He then went on to talk about the iPad and some specific features, like the Retina display, that also demanded custom silicon. He was driving home the point that Apple could not find third-party solutions to do what it wanted with the product, so it built them itself.

Ultimately, this has been the theme, and the biggest advantage Apple has is a world-class silicon design team under its own roof. I am now, more than ever, deeply convinced that Apple’s in-house silicon team is crucial to its competitive advantage.

When you have this kind of control over your products, there is almost nothing you can’t do. That’s not something you can say with confidence about the competition in the product categories Apple silicon powers. This is why the glaring outlier in this equation of success and differentiation was the Mac.

Before the segment on what is coming for Apple silicon and the Mac, Srouji summed up the company’s work on all the products running Apple silicon with this quote:

“Our SoCs enable each of these products with unique features and industry-leading performance per watt. And it makes each of them best in class.”

Srouji is saying that Apple silicon is the reason these products have unique features and industry-leading performance per watt. That phrase is quite interesting, as it is not one I have heard Apple use often, and it used to be something Intel said regularly.

Moving to the Mac, Srouji showed this graphic that I thought was quite interesting.

This image may contain one of the more comprehensive lists of silicon components Apple designs. Some are core and some are companion components, but the totality comprises the architecture Apple has developed. An even more interesting under-the-hood nugget from Srouji was the statement that the architecture Apple developed is scalable to meet the needs of many different product classes while still delivering industry-leading performance per watt.

With the stage set, Srouji led into the Mac portion with this quote:

“Our scalable architecture contains many custom technologies that WHEN integrated with our software will bring even more innovation to the Mac.”

Here we get the whole story that has truly been the common bond of success for Apple’s products: the integration of hardware, custom silicon, and software, including developer tools like Swift and Metal. The silicon, the hardware design, the operating system tuned to the silicon, and the developer tools designed to run most efficiently on that hardware are the combination that drives the Apple product experience. And now all of that is coming to the Mac. Telling was Srouji’s remark that much better performance was reason alone to transition the Mac to Apple SoCs. Apple clearly has incredible confidence it can deliver better overall performance in Macs than Intel or AMD.

According to the way Apple framed this bold move, the Mac, running on Intel’s x86 platform, which once led the industry in silicon innovation, is now the weak link in Apple’s product vision.

The Big Unveiling Still a Mystery
Apple is offering developers a kit to help them transition their apps to run natively on Apple silicon: a Mac mini equipped with the A12Z Bionic, the chip that powers the current iPad Pro. Srouji was clear that Apple is developing a family of SoCs for the Mac, which means the company will eventually transition off Intel entirely, so long as the vast majority of apps get optimized for Apple silicon.

If I interpret the language correctly, there will be Apple silicon for the full range of Macs, going all the way up to the Mac Pro. Some will argue that the workstation/pro market will still need Intel or AMD, and I can see that playing out if the pro tools don’t move to Apple silicon. Every other Mac will move to Apple silicon, for all the reasons that Apple silicon enables Apple’s product vision in all its other products.

There will still be an unveiling of Apple Silicon Inside for the Mac when the first Macs powered by Apple silicon ship. It is then that I will be fascinated to see what optimizations are made and what unique features Apple silicon enables. That being said, Srouji did highlight several things that hinted at the experiences we should expect from Apple Silicon Inside Macs.

The first was Apple’s power management technology. Highlighting this, Srouji said Apple would maximize battery life in Macs with Apple Silicon Inside. This was always a big benefit of the Snapdragon-based Windows PCs I’ve used, which get north of 17–20 hours of battery life. I’ve always said that if Arm can be successful in notebooks, we will start talking about battery life in days instead of hours. I hope Apple makes this happen, as it would be a tremendously meaningful advancement for the Mac platform.

The next feature highlighted was the Secure Enclave. This is a point I have repeatedly brought up about all the other Apple products with Apple silicon and their advancements in security and privacy. Apple’s ability to provide industry-leading security and privacy exists because it controls the silicon stack, and that is a primary reason we don’t see many of the same security features on the Mac. It sounds like that is all about to change.

Srouji also touched on the custom GPU Apple developed and made a point to call out pro applications as a beneficiary of Apple silicon in Macs. Similarly, he made points about gaming, although I’m less optimistic about the gaming potential of the Mac versus what we see in PCs, at least for now.

The last point touched on was the Neural Engine and machine learning accelerators Apple has created, with the claim that the Mac will become a compelling platform for machine learning applications. The interesting part here is that the Mac family of silicon can run on a later-generation process technology if Apple chooses, and can also benefit from a larger die size than the iPhone and iPad allow.

A key part of the debate about Apple investing in the Mac has always centered on whether it was worth the cost, given the Mac’s relatively small share of the PC market at roughly 10%. This was always my big question, but Apple succinctly answered it by framing this transition around the idea that its vision for the Mac is bigger than what Intel, AMD, and x86 can offer. If Apple is right and can grow share and make the Mac appeal to more users, it benefits Apple’s whole ecosystem and customer base.

I’ve long been a fan of the phrase that Apple is blessed by its developers. The iPhone would never have become the industry giant it is without Apple’s developer ecosystem. That ecosystem translated somewhat to the iPad, and not really to Apple Watch or Apple TV, but most Apple developers’ first love was the Mac. While the iPad is an incredible product, for many it is more luxury than necessity as a core computing device. The Mac, however, is an essential work machine, more like the iPhone in its necessity, and if there is any other computing platform where I have strong confidence Apple’s developers will again bless the company, it is the Mac.

Apple Transition Provides Huge Boost for Arm

You have to imagine that yesterday was a pretty good day for the folks at Arm, the little-understood but highly influential chip design company. Not only were they able to report that their designs power the world’s fastest supercomputer, there’s also that little detail about Apple choosing to switch from Intel-based CPUs to Apple-designed custom silicon built on Arm’s core architecture for future generations of Macs.

A word on the supercomputer news first. Every year at the opening of the ISC High Performance computing conference, the organization running it releases the Top500 list of the world’s fastest supercomputers. As in most years, this year’s list was utterly dominated by Intel-based machines, but there was a surprise at the top. For the first time ever, Arm-based chips (in this instance, built by Fujitsu) are the CPU brains of the number-one-ranked machine: the Fugaku supercomputer, operated by the RIKEN Center for Computational Science in Japan. In addition to the prestige, it’s a huge psychological win for Arm, which has been working to make an impact on the enterprise computing world with its Neoverse CPU architecture for the last several years.

In the personal computing world, Arm notched an equally impressive victory with the official unveiling of the long-rumored Arm-powered chips for next generation Macs. Apple doesn’t have the largest market share in the PC market—it’s around 7% or so overall—but its impact, of course, greatly outstrips those numbers. As a result, by making the official announcement of custom Apple Silicon for the Mac, which was designed leveraging Apple’s architectural license of Arm’s chip IP designs (though Arm is never mentioned in the keynote or any of the press releases for the event), Arm scored a huge gain in credibility and awareness.

Of course, awareness doesn’t translate to success, and as exciting as the development may be, there are a great many questions, as well as previous history, suggesting that challenges await. First, while Apple talked about switching to this new design to both improve performance and reduce power consumption, it has yet to show any comparative benchmarks against existing Intel-based Macs for either of those metrics. That’s likely because the silicon isn’t done. Heck, Apple didn’t even announce the name of the new chips. (The A12Z Bionic chip in the developer system, and currently in the iPad Pro, is likely only an interim solution.) My guess is that we won’t get any of these details until the end of the year, when the first-generation Macs with these new chips are unveiled.

Apple’s primary stated reason for making the move away from Intel to custom silicon was to improve the experience, so these comparative details are going to be critically important. This is particularly true because of the generally disappointing performance of Arm-based Qualcomm and Microsoft chips in Windows on Arm PCs like the Surface Pro X. The key question will be if Apple is able to overcome some of the limitations and truly beat Intel-level performance, while simultaneously offering significantly better battery life. It’s an extremely challenging task but one that Apple clearly laid out as its goal.

There are also many unanswered questions about the ability to pair these new chips with external GPUs, such as the AMD Radeon parts Apple currently offers in certain Macs, or with other companion chips, such as 5G modems. While Apple currently uses Qualcomm modems for the iPhone and certain iPads, the company is known to be working on its own modems, and it’s not clear whether those will be ready in time for the launch of a 5G-equipped MacBook (should Apple choose to build one). As for graphics, Apple uses only its own GPU designs in its custom parts for iPhones and iPads, but some computing applications require more graphics horsepower than those devices offer, so it will be interesting to see if Apple offers the option to pair its new Mac-specific SoCs with external GPUs.

Finally, of course, there is the question of software. To get the best possible performance on any platform, you need software developers to write applications that are native to the instruction sets being used. Because that can take a while, you also need a means of running existing software (that is, software designed for Intel-based Macs) on the new chips via emulation. Ironically, Apple has chosen to use the exact same playbook to transition away from Intel processors that it used to transition onto them. In fact, it’s even using the same names (with the addition of a version 2) for the core technologies: Universal 2 binaries are combined applications that run on both Intel CPUs and the new Apple custom silicon chips, and Rosetta 2 is the software used to emulate Intel instructions. This time around, Apple also added some virtualization capabilities and demoed the ability to run Linux in a virtualized container. Interestingly, however, there was no discussion of Windows running on the new Macs. Presumably, all the work that Microsoft and its partners have done to bring Windows to Arm-based CPUs should port over fairly easily to Apple’s designs as well, but the details are not clear just yet.
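As an illustration of what a universal binary actually is at the file level, here is a minimal Python sketch that reads the fat-binary header (the layout publicly documented in Apple's <mach-o/fat.h>) and lists which CPU architectures a file contains; this is essentially what a tool like `lipo -archs` reports. The function name is mine, and real fat headers have more wrinkles (64-bit variants, additional CPU types) than this sketch handles:

```python
import struct

FAT_MAGIC = 0xCAFEBABE  # big-endian magic number of a fat (universal) binary
CPU_TYPES = {0x01000007: "x86_64", 0x0100000C: "arm64"}  # Mach-O cputype values

def fat_architectures(data: bytes) -> list[str]:
    """Return the architecture slices declared in a universal binary header."""
    magic, nfat = struct.unpack_from(">II", data, 0)
    if magic != FAT_MAGIC:
        raise ValueError("not a fat (universal) binary")
    archs = []
    for i in range(nfat):
        # Each fat_arch entry: cputype, cpusubtype, offset, size, align
        cputype, _sub, _off, _size, _align = struct.unpack_from(
            ">iiIII", data, 8 + i * 20)
        archs.append(CPU_TYPES.get(cputype, hex(cputype)))
    return archs
```

A Universal 2 app ships both slices in one file; macOS loads the native slice, and Rosetta 2 steps in only when an Intel-only binary runs on Apple silicon.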

To the company’s credit, Apple did an impressive job when it created this playbook to move from PowerPC chips to Intel, so here’s hoping the same strategy works the other way around. While Apple made it seem like a fairly trivial task to shift from x86 instructions to Arm if you use its Xcode development environment, history strongly suggests that the transition can be a bit daunting for some developers. To its credit, however, Apple did show functioning demos of critical Microsoft Office, Adobe Creative Cloud, and Apple professional apps running natively in the new environment. One concern Apple didn’t address at all was hardware device drivers. That was a key challenge for early Windows-on-Arm devices, so it will be interesting to see how Apple handles it.

One nice advantage that Apple and its developers gain by moving over to the same Arm-based architectures that it uses for the iPhone and iPad is that iOS and iPadOS applications should easily run on these new Macs—a point Apple was eager to make. As exciting as that first sounds, however, there is that detail of a lack of a touch screen on any existing Mac. Imagine trying to use a mouse with your iPhone, and you can see how initial enthusiasm for this capability may dampen, unless Apple chooses to finally allow touchscreens on Macs. We shall see.

The last point to make regarding all of these developments is that Apple ultimately chose to move to Arm to gain complete control over the Mac experience. As good as Intel’s processors have been, Apple has shown with its other devices that it likes to own the complete vertical technology stack, and the only way to do that was to design the CPU as well. It’s the last critical piece of the puzzle for Apple’s strategy to control its own destiny.

Regardless of that reasoning, however, it’s clear that both Apple’s decision and the supercomputing win mentioned earlier provide a great deal of credence to Arm’s efforts. At the same time, it arguably puts even more pressure on Arm to continue its pace of innovations. For a company that so few people really appreciate and understand, it’s great to see how far and how wide Arm has pushed the boundaries of computing. Now let’s see how they continue to evolve.

Podcast: Cisco Live, Qualcomm Snapdragon 690, Apple App Store Controversy

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the many announcements from the Cisco Live event, analyzing the potential impact of low-cost 5G phones from the latest Qualcomm chip, and debating the controversies around Apple’s app store payment model for developers.

Powering a More Inclusive Future Often Starts With Your Organization

This week Cisco held its customer and partner conference, with over 120,000 registered attendees across the world, as well as 600 analysts and press. The event was initially scheduled for the week of June 1 but was postponed following the killing of George Floyd and the protests that arose from it.

While many announcements were made during the event in areas such as networking, collaboration and research, I want to talk more about some of the initiatives Cisco spoke about in their corporate social responsibility effort.

In a blog published on Tuesday, Cisco’s CEO, Chuck Robbins, wrote: “We know our responsibilities don’t end with technology. It’s now about making the world we envision possible. Over the past six months, we concluded that our new purpose is to Power an Inclusive Future for All.”

To do that, Cisco developed a framework to guide decisions in how to respond to what they see as a crisis, an injustice, or a global challenge. Four primary pillars of response anchor this:

  • The Most Vulnerable, led by Tae Yoo, SVP, Corporate Affairs, will focus on the non-profits and partners that support underserved communities and those disproportionately impacted by systemic issues and crises.
  • Families and Community, led by Fran Katsoudas, EVP and Chief People Officer, will focus on expanding care and well-being services beyond our employees.
  • Research and Resilience, led by Liz Centoni, SVP Emerging Technologies & Incubation, will focus on technology solutions that can advance healthcare research and address social inequities.
  • Strategic Recovery, led by Maria Martinez, EVP and Chief Customer Experience Officer, will focus on helping healthcare and education institutions adapt their operations so they can continue to provide care to impacted communities and critical pathways to job opportunities during times of uncertainty.

There are two points I really appreciate about these pillars. The first is that they are born from four critical business components: corporate affairs, people, technology, and customers. This, to me, is the only way corporate social responsibility becomes entrenched in the business rather than remaining a project. Being entrenched in the business is what will make it effective, because it will drive change while making a difference to the bottom line.

The second point I appreciate is that these four pillars are entrusted to four senior women leaders of diverse backgrounds. As a woman and someone who follows diversity and inclusion in tech closely, this does not surprise me. Cisco made the Fortune 2019 Best Workplaces for Women and 2018 Best Workplaces for Diversity lists, and just this past March Cisco took third place in the Great Place to Work For All 2020 Leadership Awards.

I have been arguing for a long time that building a great company culture is one of the best ways to attract and retain talent, and not just talent from a skillset perspective but talent that shares your values and is a driver of change within the organization. Listening to Cisco’s Chief People Officer, Fran Katsoudas, talk about how they built and scaled a program to support mental health is an excellent example of how to foster a safe space within your organization:

“Mental health is embedded and integrated into everything we do, which is, I think, exactly where it needs to be. We built some internal response elements around things like ‘safe to talk,’ where our employees create an environment where people can seek out advice or get some help. We have leveraged technology; we have something called Talkspace where people can text and get some relief as well. When we started having our COVID check-in sessions, where we were just answering basic questions from employees, we had a mental health expert with us to answer questions about anxiety and stress and depression. Our biggest learning is that embedding mental health in the conversation makes it one of the most natural conversations and helps to acknowledge that mental health is health.”

The role of leaders is undoubtedly becoming more complex as technology and politics become more intertwined, but also as employees are holding their employers accountable in the role they play within society. Over the past few weeks, we have seen many tech companies speaking out against racism and donating to organizations focused on bringing about equality. Still, there is no question that tech, as an industry, could and should be doing more.

Responding to a pandemic is something that nobody will ever be criticized for. One would think that standing up against racism and working towards a more inclusive world would be met with the same degree of positivity. Yet this is not the case. Corporations navigate politics and shareholders and ultimately, it boils down to the conviction of the CEO in doing the right thing.

This week is not the first time we have heard Chuck Robbins talk about the role he wants Cisco and its technology to play in the world. For years, Cisco has been working with local governments in several countries to close the digital divide, bringing together the private and public sectors to move communities forward. When asked how a company balances its employees’ needs and corporate social responsibility while still delivering value to shareholders, Robbins was very clear:

“It’s a complicated balance that we’re all trying to deal with. But I will tell you two things, as you think about the community efforts, the purpose of the company, what inspires employees and motivates them and makes them excited about showing up, which in turn leads to more innovation, which in turn leads to more success for our customers. I think they’re all connected.”

I will continue to closely follow the steps Cisco takes to drive its vision of inclusion. I hope Chuck Robbins is right in believing that this time will be different, and that social justice and equality will remain not just on Cisco’s agenda but on the tech industry’s agenda until hard solutions to hard problems are found.

Tech, Inequality, and the ‘Industrial Complexes’

Many of us have been doing a lot of thinking over these fraught past couple of weeks. Certainly, there have been calls on corporations to do more. Several companies in the tech sector have pledged significant sums, promised to increase diversity in hiring, and committed to reexamining compensation structures.

Laudable as these intentions might be, I think we need to take a closer look at the root of the problem. I’ll refer to a theme Senator Elizabeth Warren has been espousing for many years, which is that the ‘cards are stacked against’ many in the middle class and below, disproportionately affecting people of color. This is a trend that has accelerated during my thirty-year professional lifetime.

One strand of this economic repression has been the rise of what I refer to as several new ‘industrial complexes’ in our economy. To refresh your memory, the term ‘Military-Industrial Complex’ described the “informal alliance between the military and the industry that supplies it, seen together as a vested interest that influences public policy”. This term gained popularity when President Eisenhower spoke of its detrimental effects in his farewell address in 1961.

I believe that modern-day ‘industrial complexes’ in areas such as health care, education, and taxation have played a significant role in expanding inequality (and endemic racism), erecting barriers to racial and economic progress. Let me describe each of these in a brief paragraph, and then prescribe some ways tech might help break these down and create new opportunities for those outside today’s privileged class.

A June 16 Wall Street Journal op-ed comparing the health care systems of the United States and Singapore illustrates the failings of what I call the ‘Health Care-Industrial Complex’. The U.S. spends 18% of its GDP on health care, compared to 5% in Singapore, with inferior outcomes across nearly every metric. Common medical procedures cost 5-10x here what they do in Singapore (and most other OECD countries). Why? Our inefficient health care system has a massive ‘middle sector’ unparalleled anywhere else in the world, mainly orbiting insurance companies with tentacles expanding to billing and other parts of the medical bureaucracy. This is compounded by the ‘fee for service’ approach, which incentivizes treatment rather than prevention. Here’s a coronavirus example: hospitals are now spending billions of dollars in advertising begging people to come in for treatments and procedures they might have skipped during the lockdown. The fact that hospitals are desperate for revenues reveals the structural failings of our system. It’s similarly dispiriting that the high-profile health-care venture Haven, led by Atul Gawande and funded by Amazon, JPMorgan Chase, and Berkshire Hathaway, has not, so far, shown any success in its objective of “simplified, high-quality and transparent health care at a reasonable cost” (Gawande stepped down in May).

This ‘health care-industrial complex’ is a huge contributor to today’s economic disparities. Even if low-wage workers get health care through their employer, the miasma of deductibles and copays is ridiculously burdensome. Privileged white-collar workers might be able to spend two hours on the phone fighting with their insurance company, but what about the factory worker, or the person for whom English is not their first language? Those living paycheck-to-paycheck, or the 40% of Americans who would struggle to come up with $400 for an unexpected bill (according to a recent study), are massively vulnerable to the failings of the health care-industrial complex.

The system of higher education has remarkable similarities to the health care system. We all know that the cost of attending university has risen at several times the rate of inflation over the past 20 years. This ‘Education-Industrial Complex’ benefits the privileged few. At universities, the pay of tenured professors and the ranks of well-compensated deans and senior administration have mushroomed, built on the backs of growing numbers of adjunct faculty who can barely make a living. To pay for this increasingly expensive education, a massive student loan industry, with its usurious rates, fees and hard-edged tactics, has become the education-industrial complex’s version of the health care industry’s insurance companies. So, higher education, a major ticket out of poverty, is increasingly out of reach, and/or saddles students with such debt that they are behind the eight ball from the day they graduate — which inhibits their ability to afford the middle class trappings that so many take for granted.

One more example is what I call the Tax-Industrial Complex. In many developed countries, taxes are done automatically, or, at best, take an hour for the average person to complete. Not here. The increasingly complex tax system has spawned a huge industry of tax accountants, preparers, companies, and software programs that exist merely to help you do your taxes. Or, for the privileged, to help find creative ways to pay less in taxes. No other country has a bureaucracy akin to the IRS and a ‘middleman’ sector that exists to demystify it or, for the privileged, to go to war with it. It’s another example of an institutionalized structure that’s designed to protect the ‘haves’ and the status quo.

The more deeply entrenched these ‘industrial complexes’ become, the more impervious they are to reform. A mini industrial complex of lobbyists, consultants, and lawyers spends billions annually to resist structural change in the very sectors that desperately need it.

So, where does the tech sector come in? With its collective capital, brainpower, and entrepreneurial spirit, I’m hoping there’s an opportunity for tech to play a role in disrupting some of these industrial complexes. Perhaps some of the same appetite and avarice that’s been used to disrupt some industry sectors — some for good, others less so — can be applied to the health care and education industries, changing lives for the better by narrowing the gaps between the haves and have-nots.

The coronavirus era has provided a glimpse of what could happen, only because the pandemic’s disruption was so sudden and far-reaching that there wasn’t time for the usual coterie of resistors, bureaucrats, and lobbyists to get in the way. So you had remote health care, for example, arise almost instantly, and with largely satisfactory outcomes. What might have taken ten years to happen took ten weeks. Even if a coronavirus vaccine were to happen tomorrow, the new reality is that a meaningful percentage of doctor visits going forward will be online.

Or take education. The move to remote/virtual learning was by no means perfect and remains very much a work in progress. Nothing fully replicates the experience of a classroom, or the full ‘college experience’. However, we’ve seen that for some subjects, quality instruction can be delivered virtually. This certainly opens our eyes to the possibility of educating more people and for less cost.

Notable tech entrepreneurs such as Reid Hoffman and Peter Thiel have called on big tech companies to think more broadly about the impact they have on the world, and for entrepreneurs to think about bigger problems than the next micro-segmented dating app.

Some of the most important categories of business opportunity right now reflect broader global challenges: dealing with a pandemic; addressing climate change; reinventing transportation. Perhaps it’s time to add the objective of reversing systemic inequality, which is one form of racism, to this ambitious list. Deconstructing some prevalent ‘industrial complexes’ in areas such as health care and education is one way to start, and these are areas where tech can make a tangible difference.

Cisco Highlights Focus on Location as Companies Start to Reopen

As states in the US start to reopen and businesses around the country (and the world) start to plan for employees to return, there’s been a lot of discussion around what the new “normal” in the workplace will be. Of course, we don’t really know what it’s going to be like, but most people are fairly certain it’s going to be different. Whether it’s staggered work schedules, spread out workspaces, plexiglass shield-equipped lunch tables, or other workplace adjustments, many people who start to return to the office will likely encounter different environments.

Of course, many others won’t be returning for some time, if at all—upcoming research data from TECHnalysis Research suggests we could have as many as 35% of workers still working from home even into 2021. Regardless of where people do their work, however, it’s never been clearer that the need for flexible, secure access to work resources is extremely high. In addition, as some people do start to venture back into the office, it’s also clear that they’re going to want/need tools that can help them stay safe while they’re there.

At the Cisco Live Digital event, the networking giant highlighted a number of new and updated initiatives it has been working on to address some of these issues. On the security side, the company’s big news is around its SecureX cloud-native cybersecurity platform, which it is starting to integrate into all Cisco security products at the end of this month. Key enhancements include a single dashboard for viewing live threat data, increased automation of security tools, and enhanced security capabilities that can intelligently leverage analytics data from multiple sources simultaneously.

The company also unveiled a number of enhancements to its Webex collaboration platform, including the announcement that it now has an impressive 3x the meeting capacity. For those returning to the office, Cisco also made some interesting additions via its Webex Control Hub application. Control Hub lets IT managers quickly install the Webex voice assistant onto conference room devices, which keeps people from having to touch the screens or touchpads in meeting rooms. In addition, Control Hub offers expanded analytics on meeting room usage, which can inform cleaning schedules for those rooms and can manage meeting room locations/configurations to keep people spread out. Cisco also enhanced the support capabilities for meetings that will incorporate both on-site and remote workers.

Another intriguing location-based set of capabilities comes via the updated DNA Spaces offering. Related to the company’s larger Digital Network Architecture (DNA) initiative, which is essentially Cisco’s enhanced version of software-defined networking (SDN), DNA Spaces is an indoor location-based service platform that can leverage data from WiFi hotspots, including those from its Meraki division, to determine how people are moving through or congregating within a location. The company made two additions to the platform: the descriptively named Cisco DNA Spaces for Return to Business, and Indoor IoT Services, which can use WiFi 6-enabled access points to work with Bluetooth LE devices, such as beacons, to do things like asset tracking, environmental monitoring, room tracking, and more.

In a manner that’s conceptually similar to the Bluetooth-based contact tracing apps that have been in the news, DNA Spaces for Return to Business can track the WiFi (or GPS) signals from mobile devices, and then can use that to analyze people’s real-time movement patterns through the office. The resulting data can subsequently be used to do things like limit the number of people in a given building, or section of the office, that a company could define as being at maximum capacity. In conjunction with Indoor IoT Services, which Cisco claims is the first indoor IoT-as-a Service offering, the same data could be combined with other sensor data to do things like suggest alternative places to meet, encourage employees to social distance, and more.
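To make the capacity-limiting idea concrete, here is a minimal sketch of the kind of occupancy check such a platform could perform. The zone names, limits, and device counts are purely illustrative assumptions on my part; this is not Cisco's DNA Spaces API.

```python
# Hypothetical sketch: flag office zones whose device count exceeds a
# configured maximum occupancy. Counts would come from WiFi-associated
# devices per zone; all names and numbers here are made up.

def over_capacity(zone_counts, zone_limits):
    """Return the zones whose current device count exceeds the
    configured maximum occupancy for that zone."""
    return [zone for zone, count in zone_counts.items()
            if count > zone_limits.get(zone, float("inf"))]

# Example: device counts derived from WiFi signals per floor section.
counts = {"lobby": 12, "floor-2-east": 31, "cafeteria": 8}
limits = {"lobby": 20, "floor-2-east": 25, "cafeteria": 15}

print(over_capacity(counts, limits))  # ['floor-2-east']
```

A real system would of course refresh these counts continuously and trigger notifications or access restrictions, but the core logic is just this comparison of live counts against per-zone thresholds.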

While there are certainly going to be some questions about privacy for any location-based service, companies (and likely a decent percentage of employees) probably feel that the potential safety benefits outweigh those privacy concerns within the limited office environment. Over time those feelings may change—and it will certainly be an interesting trend to watch—but to get people to feel comfortable about returning to office environments, these types of technology-based solutions will likely play an important role. Companies that deploy them will have to make sure employees feel confident they aren’t being tracked once they leave the workplace, however, or they’ll likely face significant pushback. As long as that boundary holds, employees are likely to accept these tracking solutions as just one of the many new aspects of the new normal inside the workplace.

Podcast: Facial Recognition Technology, Sony PS5, Android 11, Adobe Photoshop Camera

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing tech companies’ recent shifts in policy around facial recognition, and analyzing the debut of Sony’s PS5 gaming console, the beta of Google’s Android 11 and Adobe’s new Photoshop Camera app for smartphones.

WiFi 6E Opens New Possibilities for Fast Wireless Connectivity

One of the most obvious impacts of the COVID-19 pandemic is how reliant we have all become on connectivity, particularly wireless connectivity. For most of us, the combination of a fast broadband connection along with a solid WiFi wireless network inside our home has literally made the difference between being able to work, attend classes, and enjoy entertainment on a consistent, reliable basis or not being able to do so.

As a result, there’s significantly more attention being placed on connectivity overall these days, within all of our different devices. Of course, it doesn’t hurt that we’re also at the dawn of a new era of wireless connectivity, thanks to the recent launch of 5G networks and the growing availability of lower-cost 5G-capable devices. But, while 5G may currently be getting the lion’s share of attention, there have been some tremendously important developments happening in the world of WiFi as well.

In fact, just six weeks ago, the FCC gave official approval for WiFi to extend its reach to an enormous swath of new radio spectrum in the 6 GHz band here in the US. Specifically, the new WiFi 6E standard will have access to 1.2 GHz, or 1,200 MHz of radio spectrum, ranging from 5.9 GHz to 7.1 GHz (and incorporating all the 6 GHz frequencies in between, hence the 6 GHz references). Just to put that in perspective, even the widest connections for millimeter wave 5G—the fastest kind of 5G connection available—are limited to 800 MHz. In other words, the new WiFi connections have access to 1.5 times the amount of frequencies to transmit on as the fastest 5G connections.

Theoretically, that means that WiFi 6E connection speeds could prove to be significantly faster than even the best that 5G has to offer. Plus, because of the basic laws of physics and signal propagation, WiFi 6E coverage can actually be wider than millimeter wave 5G. To be fair, total coverage is very dependent on the amount of power used for transmission—cellular transmission levels are typically several times stronger than WiFi—but in environments like office buildings, conference centers, as well as in our homes, it’s not unreasonable to expect that WiFi 6E will be faster than 5G, just as current 5 GHz WiFi (802.11a and its variants) is typically faster than 4G LTE signals.

One important clarification is that all of these benefits only extend to WiFi 6E—not WiFi 6, which is also relatively new. For WiFi 6, there are a number of improvements in the way signals are encoded and transmitted, all of which should decrease the congestion and reduce the power requirements for using WiFi. However, all those improvements still use the traditional 2.4 and 5 GHz frequency bands that WiFi has used for the last 20 years. The critical new addition for WiFi 6E is the 6 GHz frequency band.

To make sense of all this, you have to understand at least a little bit about radio frequency spectrum (whether you want to or not!). The bottom line is, the higher the frequency, the shorter the distance a wireless signal can travel, and the lower the frequency, the farther it can travel. The analogy I like to use is hearing a music concert from a far-away stadium. If you’re driving by a concert venue while a band is playing, you typically can hear a wide range of frequencies and can better make out what’s being played. The farther away you are, however, the harder the higher frequencies are to hear—all that’s left is the low-frequency rumble of the bass, making it difficult to tell what song is being played. All radio frequency signals, including both cellular and WiFi, follow these basic rules of frequency and distance.

There is a critically important twist for data transmission, however, and that has to do with the availability and width of channels for transmitting (and receiving) signals. The basic rule of thumb is: the lower the frequency, the narrower the channels; the higher the frequency, the wider the channels. Data throughput and overall wireless connection speed are determined by the width of these channels. For 4G and what’s called low-band 5G (such as with T-Mobile’s 600 MHz 5G network), those channels can be as small as 5 MHz wide or up to 20 MHz. The mmWave frequencies for 5G, on the other hand, are 100 MHz wide and, in theory, up to eight of them are available for a total of 800 MHz of bandwidth.

The beauty of WiFi 6E is that it supports up to 7 channels of 160 MHz, or a total of 1,120 MHz of bandwidth. (As a point of comparison, 5 GHz WiFi supports a maximum of two 160 MHz channels and 500 MHz overall, while 2.4 GHz WiFi only supports a maximum of three 20 MHz channels and 70 MHz overall.) In addition, WiFi 6E has these wide channels at a significantly lower frequency than used for millimeter wave (typically 24 GHz and up, although most US carriers are using 39 GHz), which explains why WiFi 6E can have broader coverage than mmWave. Finally, because 6 GHz spectrum will be unoccupied by other devices, the real-world speed should be even better. The lack of other traffic will enable much lower latency, or lag, times for devices on WiFi 6E networks.
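The figures above are simple channel arithmetic, and it can be useful to see them worked out explicitly. This back-of-the-envelope check uses only the channel counts and widths as described in the article (all values in MHz):

```python
# Back-of-the-envelope check of the spectrum and channel figures above.
# All numbers in MHz, taken directly from the article's description.

spectrum_6ghz = 7100 - 5900      # new 6 GHz WiFi band: 1,200 MHz of spectrum
spectrum_mmwave = 8 * 100        # eight 100 MHz mmWave 5G channels: 800 MHz

channels_6e = 7 * 160            # seven 160 MHz WiFi 6E channels: 1,120 MHz

print(spectrum_6ghz)                    # 1200
print(channels_6e)                      # 1120
print(spectrum_6ghz / spectrum_mmwave)  # 1.5 -- the "1.5x" spectrum comparison
```

Note the distinction the math makes clear: the 1.5x figure compares raw spectrum (1,200 vs. 800 MHz), while the usable channel aggregate for WiFi 6E (1,120 MHz) is slightly lower because seven 160 MHz channels don't consume the entire band.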

Of course, to take advantage of WiFi 6E, you need to have both routers and devices that support that standard. To do that, you need to use chips that also support the standard (as well as live in a country that supports the full frequency range—right now the US is leading the way and the only country to support the full 1.2 GHz of new spectrum). Broadcom and Intel have both announced support for WiFi 6E, but the only company currently shipping chips for both types of devices is Qualcomm. For client devices like smartphones, PCs and others, the company offers the FastConnect 6700 and 6900, while for routers, the company has a new line of tri-band (that is, supporting 2.4 GHz, 5 GHz and 6 GHz) Networking Pro Series chips, including the Networking Pro 610, 810, 1210 and 1610, which support 6, 8, 12, and 16 streams, respectively, of WiFi 6E connectivity.

In addition, the new Networking Pro line supports what the company calls Qualcomm Max User Architecture and Multi-User Traffic Management, which enable up to 2,000 simultaneous client connections, thanks to advanced OFDMA (Orthogonal Frequency-Division Multiple Access) and 8-user MU-MIMO (Multi User—Multiple Input, Multiple Output) per channel. The new router-focused Networking Pro chips also support SON (Self-Organizing Networks), which makes them well suited for future versions of WiFi mesh routers.

In a way, the benefits of WiFi 6E offer an interesting challenge for Qualcomm and other companies that make both 5G cellular and WiFi-focused chips and devices. For certain applications—notably public venues, certain office environments, etc.—the two technologies are likely to compete directly with one another, in which case the core component companies will essentially have to sell against themselves. Because of the increasingly complex range of wireless network architectures, different security requirements, business models and more, however, the likely truth is that both technologies will co-exist for some time to come. As a result, it makes better business sense to have offerings that support both than to simply pick a side.

The good news for those of us in the US is that we’re about to enjoy a significantly improved range of wireless networking options, thanks both to these recent WiFi 6E developments and to the forthcoming auctions for mid-band (3.5 GHz) 5G spectrum. Despite the many other challenges we face, it’s looking to be a good year for wireless.

Podcast: Twitter Controversy, Arm IP Designs, Qualcomm XR Viewers

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the controversy around Twitter’s efforts to tag tweets and the implications for social media overall, analyzing the new mobile IP chip designs from Arm, and chatting about the latest announcements around AR/VR/XR headsets attached to Qualcomm-powered 5G smartphones in conjunction with telco carriers.

HP’s Reverb G2 Headset Positions It Well for VR’s Next Act

It has been an exciting week for those of us closely monitoring the Virtual Reality market, with numerous important new product announcements. Earlier this week, Carolina Milanesi talked about Qualcomm’s plans to work with partners to bring VR viewers to 5G smartphones and the new product and social platform coming from Peter Chou’s XRSpace. And yesterday, HP announced the next version of its Reverb headset, called the G2, which I’ll be discussing here. More broadly, however, these announcements reflect an industry that is finding its footing and figuring out what consumers and business users want and need.

HP’s Ongoing Commitment to VR
HP has been a player in the VR market since the launch of its original Mixed Reality-based headset, the VR1000, back in 2017. HP was just one of a number of PC OEMs that followed Microsoft’s reference design recipe for a mixed reality headset, hurriedly launched into the market in an attempt to grab some of the early-adopter market that the Oculus Rift and HTC Vive had cornered.

While very few of the PC OEMs followed up with a second headset, HP did. Its Reverb headset, launched in March 2019, saw the company get much more serious about VR. The new headset offered a much more refined and comfortable design, a higher resolution display, and integrated headphones. The Reverb was well received by the market, and HP found success in selling the device to both consumers and commercial users.
In addition to its focus on the Reverb, HP also brought to market purpose-built PCs for driving both its own headsets and others. In addition to high-powered desktop and notebook computers, the company has also continued to ship VR backpack PCs that let gamers move around without being tethered to a stationary desktop. These types of rigs are essential to location-based VR installations.

Finally, and perhaps just as important as the hardware launches, HP hired Joanna Popper as the global head of VR for location-based entertainment in 2018. Formerly a Hollywood producer who also held jobs at NBC Universal and Singularity University, Popper is an important player and thought leader in the VR space. Her hiring showed just how serious HP was about VR as a business.

Reverb G2
This brings us to the G2, announced this week, and set to ship later this year. I’ve yet to test out the headset, but all signs point to a well-conceived product. HP worked closely with both Microsoft and Valve, which operates the SteamVR platform (and also, incidentally, ships its own Index headset). In fact, Valve designed the new lenses HP uses in the G2, which it says boost the visual experience and better leverage the 2K by 2K per-eye resolution. Like the original Reverb, the G2 offers a 114-degree field of view.

Other new features include integrated speakers that support spatial audio; four built-in cameras that HP says enable 1.4x more movement capture than the previous headset; a new flip-up design that makes wearing the headset more comfortable; and new controllers with an updated button layout.

HP says the Reverb G2 will be available in the Fall, selling for $599. I’m pleased by the pricing, but I do wish the headset were ready to ship now.

In fact, I would argue that right now, one of the critical challenges that VR is facing is not a lack of content, use cases, or demand, but a simple lack of supply. It is exceedingly hard to get VR headsets right now, with key products such as the Oculus Quest and the Valve Index consistently on backorder, due to supply-side challenges and very robust demand.

As the world headed into lockdown, we saw both consumer and commercial users come back to take another look at VR. On the consumer side, people stuck inside showed a renewed interest in the technology, helped by exciting new games (such as Valve’s Half-Life: Alyx) and use cases (such as the fitness app Supernatural). And on the commercial side, organizations faced with challenges around employee training and remote collaboration are looking ever more closely at VR. It’s notable that after some time in closed beta, Oculus Business is now open and accepting orders. The extent to which companies are leveraging VR is truly impressive. My colleague Ramon Llamas recently wrote about the growth of VR training for empathy.

All told, VR is having a moment. Broadly speaking, the industry has reset expectations. It’s not going to take over the world anytime soon (or, frankly, ever), but with new products like the Reverb G2 and a broadening range of use cases and content, the technology is becoming more relevant to more people every day. The missing element continues to be a widely available, universally known, safe place for people to meet and hang out in VR. That might be the upcoming XRSpace Manova or Facebook Horizon, or something we’ve yet to see. Once that piece falls into place, things are going to get very interesting in VR.

Extended Reality: A Sceptic’s Wishlist

Today kicks off Augmented World Expo USA 2020, one of the annual touchpoints for the world of Extended Reality (XR), basically from Augmented Reality to Virtual Reality and anything in between. Like many other events, the tech conference has gone digital due to COVID-19, so it will be interesting to see how they use XR in their own lineup. What is certain is that we can expect several announcements to be made during the week. A couple of brands even got ahead of the news cycle by announcing early.

First, Qualcomm announced a partnership with 15 global operators to bring VR viewers tethered to 5G smartphones to market within a year. Then former HTC CEO Peter Chou introduced his new project: a VR company called XRSpace. The project comes with an all-in-one 5G-connected XR headset called XRSpace Mova, as well as a social reality platform called XRSpace Manova. Although it was probably pure coincidence that the two announcements came only a few hours apart, both companies are doing a lot to try to accelerate adoption, including delivering on some of my wishlist items that would considerably change my attitude towards XR. I have a feeling, however, that these items are far from unique to me, considering the still relatively limited uptake we have seen thus far.

Don’t get me wrong: I do believe XR will play an important role in the way we communicate and experience the world in the future. Adoption will happen much faster in a business context than in a consumer one, mostly because the return on investment and the value add in a business context will be much more apparent to users.

At a time when so many are physically isolated because of COVID-19, and despite VR’s still limited penetration, we have seen attempts to use VR to alleviate solitude and improve fitness and mental health. At the same time, the current economic downturn, coupled with what was a tech overload for many parents dealing with distance learning while trying to work, might also not have been the optimal environment to test out VR. All in all, it is too much of a mixed bag to assess how effective VR could be during shelter-in-place orders, but that is no reason to be negative about the future opportunity.

XR Viewers as a Steppingstone to Broader Adoption

I’ve been excited about the role that XR viewers could play in broadening adoption since I first heard Qualcomm talk about its plans and roadmap at Mobile World Congress in 2019. The excitement came from two main components of this new kind of viewer: first, the ability to lower the barrier to entry represented by the cost of many VR headsets; second, the richer and more user-friendly experience that can be delivered compared to viewers where the mobile phone sits within the viewer itself.

For many consumers, investing $400 to $700 in a VR headset remains a hard choice, mostly because the value these headsets bring is unclear—either because they have never tried VR or because they don’t see themselves using it often enough to justify the investment. Some consumers who do appreciate what VR has to offer are put off by the bulky design of most headsets and the clunkiness of the experience.

The Qualcomm XR Optimized Certification Program will help OEM partners deliver on that rich experience by guaranteeing compatibility between the 5G phone and the viewer, specifically looking at these key features: Six Degrees of Freedom (6DoF), head-tracking performance, display calibration validation, motion-to-photon latency validation, and power and thermal performance.

Another limitation of current systems is that they are mostly used within the home. 5G connectivity and the lighter, nimbler designs of the viewers will help make the experience more mobile, creating more opportunities for engagement.

If the relative success of the Oculus Quest is anything to go by, it seems clear that standalone VR headsets are what users want, but for those who might not be quite ready to invest, XR viewers might be the closest thing to a standalone experience. Linking XR viewers to 5G smartphones is a smart move, both because of the experience that can be delivered through 5G and because it represents a great opportunity for OEMs and carriers to offer bundles that lower the barrier to entry even further.

XR Must Connect, Not Isolate

Peter Chou has always been a big believer in VR; after all, HTC got into this business while he was still CEO. He first teased the idea of XRSpace back in 2019 at Mobile World Congress, when he talked about an XR experience with a social component at its core. The idea of bringing a social component to VR is not new. Last year, Facebook launched Horizon, basically a VR world designed for Oculus users to meet and socialize. The move seemed a pretty obvious step to keep Facebook relevant in the future and limit the risk of missing the transition from mobile, in a similar way to what we saw in the shift from PC to mobile. Chou’s decision, however, has little to do with business model and a lot to do with opportunity, as he claims that without a social component, VR will fail to win over a broader set of consumers.

While XRSpace Mova grabs the title of the world’s first 5G consumer mobile VR headset powered by Qualcomm, it is the XRSpace Manova platform that should get much of our attention. Manova introduces full-body avatars that can interact in different social contexts such as work, health and fitness, education, and entertainment. Through its sensors and a proprietary scanning technology, Mova can understand hand gestures and analyze real-world spaces in order to replicate them within the VR app. Between 5G and this scanning tech, it sure sounds like Mova will allow for much more freedom than current head-mounted displays do, which will help build a broader set of use cases and, therefore, wider appeal.

If there is one thing COVID-19 has shown us, it is how much can be done digitally. What it has also made clear is that humans will always crave real-life human contact and interaction. So it seems to me that Chou’s idea that VR should connect, not isolate, people makes a lot of sense. Whether the platform will be successful will depend on how much content is created—another limitation the segment currently faces—and how much it will all cost.

As I look at the marketing videos and material for XRSpace, I am struck by the lack of diversity that comes screaming through these vignettes. If we want to extend our reality and create more opportunities for social interaction, let’s make sure it is a better reality, one where everybody feels seen and represented. The lack of diversity in VR could ultimately cap its opportunity much more than cost and tech.