This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing their experiences with Microsoft’s Surface Duo device, analyzing the launch of the second generation Motorola Razr foldable 5G phone, chatting about the details of the next generation Xbox gaming console and previewing Apple’s event for next week.
Author: Bob O'Donnell
Podcast: Samsung Galaxy Z Fold 2, Nvidia Gaming GPUs, Intel CPUs and Branding, Qualcomm IFA Announcements
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing their experiences with Samsung’s second generation foldable device, analyzing the GeForce RTX 3000 series GPU announcements from Nvidia, talking about Intel’s new 11th Generation Core CPUs and the company’s new Evo platform brand, and chatting about the many different announcements from Qualcomm’s IFA keynote speech.
Podcast: TikTok, Apple-Facebook, HP and Dell Earnings, Fall Product Preview
This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell discussing the latest developments and challenges around the potential sale of social media app TikTok, the controversies between Apple and Facebook on activity tracking, the latest quarterly earnings from PC industry leaders HP and Dell, and the potential impact of a range of tech products expected to be released this fall.
Podcast: 5G, Radio Frequency Spectrum and What it All Means
This week’s Techpinions podcast features Mark Lowenstein and Bob O’Donnell explaining many of the details of how 5G works, what radio frequency (RF) spectrum is, why it’s critically important and what the latest developments are, how all of this impacts telco carriers and device makers, and more.
Podcast: Microsoft Surface Duo, Qualcomm Court Decision, Fortnite Battle with Apple and Google
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the news around Microsoft’s Surface Duo mobile device, discussing the positive legal outcome for Qualcomm’s IP licensing business, and debating the issues around Epic Games’ Fortnite-driven battle with Apple’s and Google’s app store policies.
Podcast: Samsung Unpacked, T-Mobile 5G, Apple App Store, Microsoft-TikTok
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the announcements from the Samsung Unpacked event, including their new Note 20 and Galaxy Z Fold 2 smartphones as well as their partnership with Microsoft on software and gaming services, chatting about T-Mobile’s launch of the world’s first 5G SA (Standalone) network, controversies around Apple’s App Store policy and cloud-based gaming services like Microsoft’s upcoming xCloud, and analyzing the potential purchase of TikTok by Microsoft.
Podcast Special: Marta Karczewicz of Qualcomm Discussing Video Compression Technology
This is a special Techpinions podcast with Carolina Milanesi and Bob O’Donnell along with special guest Marta Karczewicz, VP of Technology at Qualcomm, discussing the evolution of video compression technology and standards and how they impact our ability to watch streaming videos from services like Netflix on our smartphones and TVs. In addition, they discuss the role of women in engineering roles and the importance of diversity in technology research and development.
Podcast: AMD Earnings, Congressional Hearings, Amazon, Apple, Facebook and Google Earnings
This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell analyzing the quarterly financial results from AMD and what they say about the semiconductor industry overall, discussing the congressional anti-trust hearings with major tech CEOs, and chatting about the earnings from those same companies as well.
The Shifting Semiconductor Sands
There was a time—and it wasn’t really that long ago—when if you asked anyone who tracked the chip business about Intel, they probably would have said they were invincible. After all, they owned 99% of the datacenter CPU market, close to 90% of the PC CPU market and even made ambitious acquisitions in other “alternative” architectures, such as FPGAs (Field Programmable Gate Arrays) and dedicated AI processors. On top of that, they had a decades-long history of impeccable execution and industry-leading innovations in the process of semiconductor manufacturing.
And then, last Thursday hit.
During what was otherwise a stellar second quarter earnings report, with impressive revenues and growth numbers across the board, the company acknowledged that their already delayed transition to 7nm process technology for future generations of CPUs had slipped by another six months. Now, arguably, that really shouldn’t be that big of a deal. After all, this is ridiculously complex technology. The company said they knew what the problem was and, therefore, had a clear path to fixing it. They also certainly wouldn’t be the first major tech company to face some technical challenges that caused delays in the release of eagerly awaited new products.
But the market didn’t see it that way, and subsequently, Intel stock has lost nearly 20% of its value in the last week. To be fair, this is also a stock market that over the last few months has shown absolutely no sense of rationality, so you have to take any dramatic stock price moves in the current environment with a very large grain of salt.
Fundamentally, however, there appears to be some loss of faith in Intel’s previously irreproachable reputation for delivering what they said, when they said they would do it. While some view the most recent news, as well as the forthcoming and likely related departure of chief engineering officer Murthy Renduchintala, as the primary catalyst for this perspective, you could make the argument that the problem started earlier. In the case of dedicated AI accelerators, for example, Intel made a large investment in Nervana and put Nervana’s main execs in charge of their dedicated AI investments back in 2016. Then, shortly after they released their first Nervana chips to customers, they essentially abandoned all that work to purchase Habana Labs for $2 billion late last year and moved in a different direction. Obviously, cutting edge technologies like AI accelerators can certainly shift quickly, and, in this case, Intel clearly recognized that they needed to make an aggressive move. However, it certainly raised some questions.
At the same time, there are also several other very interesting developments in the semiconductor market that appear to be driving some fundamental shifts in how people (and investors) are viewing it. One, of course, is a hugely reinvigorated AMD—a fact that’s been reflected in the company’s impressive growth and even more impressive stock price run over the last several years (as well as the nice boost it received last week as a result of Intel’s news).
To their enormous credit, AMD’s CEO Lisa Su, CTO Mark Papermaster and team have done a remarkable job in turning a company that some felt was headed for extinction just a few years back, into a formidable competitor and an important force in the chip industry overall. You could argue (and many have) that, from a market valuation perspective, the company has received more credit than its sales numbers reflect. However, there’s no question that AMD has been shaking up and enlivening the previously static CPU market and that it will continue to do so for many years to come.
In addition, there’s been a great deal of momentum recently towards Arm-based CPUs in both datacenters and PCs. Apple’s recent announcement that it will switch from Intel to its own Arm-based CPU designs in future Macs, for example, highlights some of the high-level changes that are happening in the CPU market.
Despite all this bad news for Intel, it is important to keep everything in perspective. Intel is still by far the largest CPU manufacturer in the world and will be for some time to come. The company will certainly be facing a more competitive marketplace than it has had to worry about for a very long time, but it’s undoubtedly up to the task. Also, in the long run, good competition will inevitably be better for all of us.
As a long-time Intel follower who essentially learned most everything about the importance of process technology from Intel (they’ve done a fantastic job of educating analysts and press about these issues for a very long time), I have to admit that it’s somewhat shocking to see Intel in this state. At the same time, it’s also important to remember that not all numbers in the semiconductor process game are created equal. While it’s certainly up for debate, Intel has argued for years that its 7nm process is closer to what other vendors call 5nm.
Regardless of the numbers, however, it is clear that Intel has lost its mantle of invincibility and will need to prove itself all over again to the industry and market at large. The fact that the company has already discussed working with third-party foundries on advanced process nodes for some of its upcoming chips (including its widely anticipated new GPU) is a testament to that. In the Intel of old, that decision would have probably been unthinkable. But we are in a new era, and despite these short-term concerns, it is encouraging to see Intel’s CEO Bob Swan willing to admit the challenges they have and take some aggressive actions to address them.
The sands beneath the semiconductor market are clearly shifting, and it’s going to be very interesting to see how things look over time.
Podcast: Microsoft Inspire, Google G Suite Essentials, Netflix, Microsoft and Intel Earnings
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the Azure and Microsoft M365 news and diversity and leadership sessions from Inspire, discussing the addition of Google’s G Suite Essentials offering, and chatting about the quarterly earnings from Netflix, Microsoft and Intel and what they say about the current state of the tech market.
Microsoft and Partners Bring More Hyperconverged Hybrid Cloud Options to Azure
When it comes to cloud computing, there’s little doubt that we’re in a hybrid world. In fact, that point comes through loud and clear in two different studies published this year by TECHnalysis Research. Both the Hybrid and Multi-Cloud Strategies in the Enterprise report and the recently published Pandemic-Based IT Priority Shifts report highlight the high degree of usage, strategic importance and budgets spent on hybrid computing models. Indeed, in many instances, hybrid cloud is considered more important than the older and more established public cloud computing methodologies.
The reason? While every company would certainly like to be running nothing but containerized, cloud-native applications, the reality is that almost none do so. There’s simply too much legacy software (typically still close to 50% of most organizations’ applications) and hardware in existing datacenters that companies need to use for a variety of reasons, including regulatory, security, cost and more. In the meantime, private clouds and hybrid models that combine or connect private cloud workloads with public cloud workloads serve as a critical steppingstone for most organizations.
As a result, we’ve seen many different tech vendors create new hybrid cloud offerings recently to tap into the burgeoning demand. At the company’s partner-focused Inspire event, Microsoft unveiled several new hybrid cloud-focused additions to its Azure cloud computing platform. In particular, they announced additional capabilities for Azure Stack HCI—the local, on-premises compatible version of Azure that runs on specialized, Microsoft-certified hardware appliance devices from hardware partners like Dell EMC, HPE and Lenovo.
These hardware appliances are built using an architecture called hyperconverged infrastructure, or HCI, that essentially combines all the elements of a data center, including compute, storage and networking, into a single, software-defined box. The beauty of the HCI approach is that it virtualizes all these elements so that simple, off-the-shelf servers can be organized and optimized in a way that improves their performance, functionality, and reliability. For example, virtualizing the storage provides SAN (Storage Area Network)-like capabilities and dependability to an HCI environment without the costs and complexities of a SAN. Similarly, virtualizing the networking lets an HCI device offer the capabilities of a load balancer via software, again without the costs and complexities of purchasing and deploying one. Best of all, these software-defined datacenter capabilities can scale up to large datacenter environments or scale down for branch offices and other edge computing applications.
While Microsoft has talked about Azure Stack HCI before, they announced several new capabilities at Inspire. Notably, Azure Stack HCI is now a fully native Azure service, which means you can now use the Azure Portal as a combined management tool for public cloud Azure computing resources along with any local Azure Stack HCI resources, such as virtual machines, virtualized storage and more. This gives IT administrators the classic “single pane of glass” UI for monitoring and managing all their different public, private and hybrid-cloud-based workloads. In addition, making Azure Stack HCI a native Azure service makes it significantly easier to use other Azure PaaS (Platform as a Service) capabilities, such as Azure Backup and Azure Security Center, with private cloud workloads. In other words, it essentially allows companies to pull these two “worlds” together in ways that weren’t possible before.
One particularly nice feature of these new Microsoft-certified systems is that they can be purchased with the Azure Stack HCI software already installed and configured on them, making them about as easy to set up and configure as possible. You literally plug them in, turn them on, and they’re ready to go, making them suitable for smaller businesses, branch offices or other locations where there may not be dedicated or specially trained IT staff. In addition, Microsoft offers the option of installing the new Azure Stack HCI on existing datacenter hardware if it meets the necessary hardware certification requirements.
Combining the software-defined datacenter (SDDC) capabilities inherent in HCI with the cloud-native opportunities of Azure Stack was initially a big step forward in getting companies to modernize their datacenters from both a hardware (HCI) and a software (Azure) perspective. While it may seem logical to do so, those two modernization efforts don’t necessarily go hand-in-hand, so it was an important step for Microsoft to take. In doing so, they made the process of migrating more apps to the cloud (and, hopefully, modernizing them along the way) much easier.
This is particularly important for companies that may have been a bit slower in moving their applications to the cloud and/or those organizations that may have run into roadblocks on some of their legacy applications. Not all organizations have all the skillsets they need in their IT organizations to do this kind of work, so the more that can be done to make the process easier, the better. With their latest additions to Azure Stack HCI, Microsoft is moving down the path of further simplification and helping draw the worlds of legacy applications and hardware and the cloud a little bit closer together. No matter how you look at it, that’s a step in the right direction.
Podcast: Google Cloud Next, G Suite, IT Priority Study, Twitter Hack
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the announcements from Google’s Cloud Next event, including new offerings for GCP and G Suite, discussing a new study on IT prioritization changes from the pandemic, and chatting about the big Twitter hack.
New Study Highlights Pandemic-Driven Shifts in IT Priorities
At this point, everyone understands that the COVID-19 pandemic has had a profound impact on all aspects of society, including our personal and professional lives. But just as our understanding of how the virus spreads and its impact has shifted over time, so too has our perception of exactly how that impact is being felt in different areas.
In order to better understand specifically how the pandemic has affected IT environments in US-based medium businesses (100-999 employees) and large enterprises (1,000+ employees), TECHnalysis Research embarked on a study last month of over 600 US-based IT decision makers. Survey respondents were asked a number of questions about what their companies’ strategic and spending priorities were before the pandemic at the beginning of the year, and what they are now several months into the pandemic. In addition, respondents were asked how they expect their work environments to change, how they are acquiring and deploying PCs for their employees, how their cloud computing and app modernization efforts are evolving, and much more.
Needless to say, the results were fascinating. At a high level, one of the most interesting discoveries was that despite many dire early warnings, IT spending plans for the year are generally still intact with average annual IT budgets expected to increase 7% for this year. From a change perspective, as Fig. 1 illustrates, that means the overall levels are expected to be down just 1% versus what they were expected to be at the beginning of the year. Breaking it down by company size shows that medium-sized businesses are now expecting their IT budgets to grow slightly, while large enterprises are expecting a larger 2.3% drop overall.
Fig. 1
Priority-wise, what’s clear from the data is that companies shifted their focus from things that would be “nice-to-have” to things that they “need-to-have”. Specifically, this means that from both an overall strategic as well as spending perspective, purchasing laptops for their employees became the top priority, overtaking (at least temporarily) the attention and dollars given to their private, hybrid, and public cloud computing efforts. Conversely, it also means that some of the biggest decreases in prioritization and spending impact highly touted technologies such as edge computing, IoT, and private enterprise cellular networks.
From a PC client perspective, there have also been some very interesting shifts in the acceptance of different deployment and acquisition strategies. Notably, VDI (virtual desktop infrastructure) usage—which many have downplayed in the past as a backward-looking technology—has grown by 11 percentage points since the start of the year. In addition, after appearing to have fallen out of favor, BYOD (Bring Your Own Device) programs—where employees purchase and use their own PCs—are now in place in over half of the companies that responded to the survey. Obviously, many of these changes are driven by the massive work-from-home experiment that IT departments around the world have had to immediately respond to. However, given the widely touted productivity levels that many people have reported working from home, many of those policies are likely to stay.
Also unlikely to change is the dramatic increase in the number of people who want to continue working from home. As Fig. 2 illustrates, on average, companies expect to have just over 1/3 of all employees still working from home into next year.
Fig. 2
Once people go back to the office, they’re also likely to see some dramatic differences when they get there. In fact, only 12% of respondents don’t expect changes to their work environments, meaning 88% do. Anticipated changes include increased sizes of work areas and cubicles, physical barriers between work areas and cubicles, and shifts from open office environments to traditional office/cube arrangements. In addition, about ¾ of respondents expect their companies to adjust the amount of real estate they have. Interestingly, medium-sized businesses expect to increase their amount of office space in order to accommodate more space per worker, but respondents from large enterprises felt their companies were more likely to close some offices and have less real estate.
Of course, as recent news has highlighted, the virus and its impact continue to evolve, so there’s no great way to know exactly how all these different factors will play out until time passes. Overall, however, it’s clear that, from an IT perspective, the reactions to and impact from the virus so far are less severe than many feared. One positive side to the pandemic is that companies are throwing out their old rule books and looking at all the various technological tools at their disposal with a fresh set of eyes. Many organizations also plan to aggressively adopt more advanced technologies as a means not only to survive but to thrive in our new normal.
Technology, in its many forms, has proven to be a real saving grace for many organizations in these first few months of the pandemic. As a result, company leadership recognizes the importance of IT initiatives and will likely continue to allocate resources there into the foreseeable future. This isn’t to say we won’t see big challenges for some tech, particularly for IT shops and tech suppliers to hard-hit industries like travel, entertainment, etc. For the IT departments in many businesses, and most of the major tech vendors supplying them, however, the opportunities even in these challenging times continue to be strong.
(You can download a free copy of the highlights of the “Pandemic-Based IT Priority Shifts” report here. A copy of the complete 75-slide study is available for purchase.)
Podcast: Q2 2020 US CE and PC Sales Trends with NPD’s Steve Baker
This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell, along with special guest Steve Baker of NPD, talking about the surprisingly strong consumer electronics and PC sales data from the recently completed quarter, including discussions on overall trends, specific sub-category performance, and the retail brick-and-mortar vs. online sales splits.
Nvidia Virtual GPU Update Brings Remote Desktops, Workstations and VR to Life
The new work habits that we’ve all adjusted to because of the pandemic have led many companies to take a fresh look at how they can provide computing resources to people working from home. In some cases, this is giving new life to technologies, such as VDI (virtual desktop infrastructure), that provide server-based computing sessions to remote desktops.
In addition, companies have also had to figure out how to provide remote access to workers with very specific, and very demanding technical requirements, such as architects, product designers, data scientists, media creators, and other people who typically use workstations in an office environment.
One critical technology for these new challenges is server-based virtual GPUs, or vGPUs for short. Nvidia has built datacenter-optimized GPUs for many years, and several years back made them a shareable and manageable resource through the introduction of Virtual GPU software. The company’s latest July 2020 vGPU software release (one of two they typically do per year) adds several enhancements designed to make these server-based graphics chips function in a wider variety of software operating environments, offer better compatibility across more applications, and become easier to manage.
As with many enterprise-focused technologies, the devil is in the details when it comes to exactly how and where virtual GPUs can function. Given the wide range of different server virtualization platforms and the graphics driver optimizations required for certain workstation applications, it can be challenging to get promising-sounding technologies, like vGPUs, to work in all environments. To address these needs, the new release adds native support for virtualization on SUSE Linux Enterprise Server-based infrastructure, which is often used by data scientists, and offers additional management optimizations for VMware-based environments.
The new release also expands the capabilities of the different levels of GPU drivers that Nvidia provides, thereby increasing the range of applications it can support. Even details like different versions of drivers can make a difference in compatibility and performance. The latest release gives IT managers the flexibility to run different driver versions on the server and on a client device. This capability, called cross-branch support, is critically important for shared resources like vGPUs, because one application on one device may need one driver version, while another application on a different device may require a different one.
Real-time collaboration across multiple applications is also supported in this July 2020 release. For VR-based applications, the new software, in conjunction with Nvidia’s CloudXR platform, can provide support for untethered mixed reality headsets with 4K resolutions at up to 120 Hz refresh rates over WiFi and 5G networks.
With the Quadro Virtual Workstation software—one of the several levels of drivers that Nvidia makes available through its vGPU software—multiple people can work on CAD, architecture, or other highly-demanding applications with real-time rendering on regular PCs. For designers, engineers, and others working from home, this capability can allow them to function as they normally would in a workstation-equipped office.
Interest in the ability to get remote access to these graphically demanding applications has been extremely high during the pandemic, which should be surprising to no one. This also aligns with results from a newly completed survey by TECHnalysis Research of over 600 US-based IT managers about the impact that COVID-19 has had on their IT strategies, priorities, and computing programs.
According to the study, virtual desktop infrastructure (VDI) usage grew by 11 percentage points in just a few months, from 48% of companies saying they used server-based computing models at the beginning of the year to 59% who said they are using them now. Not all of those instances of VDI use virtual GPUs, of course, but they do represent a significant and critical portion of them.
Ongoing flexibility has become the mantra by which IT organizations and workers are adapting to new work realities. As a result, technologies, such as vGPUs, that can enable flexibility are going to be a critical part of IT managers’ toolkits for some time to come.
Power Efficient Computing Noteworthy During Pandemic
One of the few benefits many people have experienced as part of the great work-at-home and learn-at-home experiment that we’ve all been through is improved air quality. In fact, because of the significant reduction in both commuting and travel, both the visual and measured quality of the air have gotten noticeably better in most places around the world.
As a result, the pandemic has inspired a refocus on environmental issues. At the same time, there’s been a huge focus on how digital technology—particularly computing devices, cloud infrastructure, and various types of networks—has allowed us to stay as productive (if not even more so!) as we were prior to the pandemic.
Interestingly, the stories of computing and conservation have also started to become entwined in several different ways. First, there’s been a strong transition to laptop PCs, which use significantly less power than desktops, as many people’s primary computing device. While many people think notebooks have been the default standard for a while, the truth is that desktop PCs still represented a fairly significant portion of computers used in many businesses up through the start of the pandemic. However, with the requirement to work at home, companies have been scrambling to get laptops to their employees. As a result, the incredible reliance we have on these more power-efficient devices has never been more apparent. The real-world impact of their increased use is less demand on the electrical grid to power them which, in turn, can offer benefits to the environment.
Second, there’s been a much bigger focus on cloud-delivered apps and services, which can also indirectly lead to an improved environment. In particular, there’s been a great deal more attention placed on modernizing and “cloudifying” applications for business. Because these modernized applications can run in power-efficient cloud-computing data centers, this too has the benefit of reducing the power demands necessary to complete specific tasks.
In a recently completed survey by TECHnalysis Research of over 600 US-based IT professionals, we found that when asked to rank the top 2 priorities for IT initiatives since the rise of the pandemic, modernizing applications is the most important, followed closely by purchasing laptops for their employees. Not surprisingly, growing use of hybrid, private, and public clouds rounded out the top 5, as shown in Figure 1. The app modernization effort, of course, entails the process of converting legacy applications into newer app formats that can run efficiently in one of these hybrid, private and/or public cloud environments.
Fig. 1
What’s interesting about these developments from a conservation perspective is that there have even been studies which show that cloud-based computing resources are more energy efficient than many people realize. In fact, thanks to a combination of significantly more controlled usage of computing, storage, and networking resources in large cloud data centers, new types of computing (and pricing) models that use those resources more efficiently, and the growing use of more power efficient CPUs, there have been great improvements in computing power per watt. In other words, with cloud computing, it’s possible to get even more computing work done with the same (or even smaller) amounts of power than were used in the past.
On the notebook PC side, there have been similar trends in power efficiency as well. In fact, just last week AMD announced that they surpassed their 25×20 goals set back in 2014. Specifically, the company announced six years ago that they wanted to improve the power efficiency of their mobile CPUs by a factor of 25 before the end of this year. With the release of their recent Ryzen 7 4800H mobile processor, the company actually achieved an impressive 31.7x improvement in power efficiency—specifically a 5x increase in performance combined with a reduction to 1/6th of the required power—versus a 2014-vintage AMD FX-7600P chip.
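For readers who want to sanity-check that claim, here is a minimal sketch of the underlying arithmetic, assuming (as the figures above imply) that the efficiency metric is performance per watt; the variable names are purely illustrative:

```python
# A rough sketch of the performance-per-watt arithmetic behind AMD's 25x20 result.
# The ratios below are the rounded figures cited above; the reported 31.7x implies
# the measured ratios were slightly better than these round numbers.

perf_gain = 5.0          # roughly 5x the performance of the 2014 FX-7600P
power_fraction = 1 / 6   # roughly 1/6th of the power required

efficiency_gain = perf_gain / power_fraction  # performance-per-watt improvement
print(f"Approximate efficiency gain: {efficiency_gain:.0f}x")  # ~30x vs. the reported 31.7x
```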
The improvements are due to a wide range of factors, including better core designs, new chiplet architectures within their CPUs, and the company’s move to 7nm production from the 28nm process used back in 2014. The company also made a number of enhancements to the chip’s thermal design and power management capabilities over the years. All told, it’s another impressive example of how far AMD has improved their technical capabilities and competitive strengths over the last few years.
As companies start to bring their employees back into the office and commuting and travel trends slowly start to tick up, we may begin to see some negative impact on the environment. In terms of computing resources, however, the ongoing developments in power and performance efficiency for both data centers and laptops can hopefully keep their influence to a minimum.
Apple Transition Provides Huge Boost for Arm
You have to imagine that yesterday was a pretty good one for the folks at Arm—the little-understood but highly influential chip design company. Not only were they able to report that their designs power the world’s fastest supercomputer, there’s also that little detail about Apple choosing to switch from Intel-based CPUs to Apple-designed custom silicon built on Arm’s core architecture for future generations of Macs.
A word on the supercomputer news first. Every year at the opening of the ISC high-performance computing conference, the organization running it releases the Top 500 performing supercomputers. As with most years, this year’s list was utterly dominated by Intel-based machines, but there was a surprise at the top. For the first time ever, Arm-based chips (in this instance, built by Fujitsu) are the CPU brains being used in the number 1 ranked machine—the Fugaku supercomputer, which is operated by the RIKEN Center for Computational Science in Japan. In addition to the prestige, it’s a huge psychological win for Arm, which has been working to make an impact on the enterprise computing world with its Neoverse CPU architecture for the last several years.
In the personal computing world, Arm notched an equally impressive victory with the official unveiling of the long-rumored Arm-powered chips for next generation Macs. Apple doesn’t have the largest market share in the PC market—it’s around 7% or so overall—but its impact, of course, greatly outstrips those numbers. As a result, by making the official announcement of custom Apple Silicon for the Mac, which was designed leveraging Apple’s architectural license of Arm’s chip IP designs (though Arm was never mentioned in the keynote or any of the press releases for the event), Arm scored a huge gain in credibility and awareness.
Of course, awareness doesn’t translate to success, and as exciting as the development may be, there are a great many questions, as well as previous history, to suggest that challenges await. First, while Apple talked about switching to this new design to both improve performance and reduce power consumption, it has yet to show any comparative benchmarks to existing Intel-based Macs for either of those metrics. Of course, that’s likely because the silicon isn’t done. Heck, Apple didn’t even announce the name of the new chips. (The A12Z Bionic chip in the developer system, and currently in the iPad Pro, is likely only an interim solution.) My guess is that we won’t get any of these details until the end of the year, when the first-generation Macs with these new chips are unveiled.
Apple’s primary stated reason for making the move away from Intel to custom silicon was to improve the experience, so these comparative details are going to be critically important. This is particularly true because of the generally disappointing performance of Arm-based Qualcomm and Microsoft chips in Windows on Arm PCs like the Surface Pro X. The key question will be if Apple is able to overcome some of the limitations and truly beat Intel-level performance, while simultaneously offering significantly better battery life. It’s an extremely challenging task but one that Apple clearly laid out as its goal.
There are also many unanswered questions about the ability to pair these new chips with external GPUs, such as the AMD Radeon parts Apple currently offers in certain Macs, or any other companion chips, such as 5G modems. While Apple currently uses Qualcomm modems for the iPhone and certain iPads, the company is known to be working on its own modems, and it’s not clear if those will be available in time for the launch of a 5G-equipped MacBook (should they choose to build one). As for graphics, Apple only uses its own GPU designs for its other custom parts for iPhones and iPads, but some computing applications require more graphics horsepower than those devices do, so it will be interesting to see if Apple offers the option to pair its new Mac-specific SoCs with external GPUs.
Finally, of course, there is the question of software. To get the best possible performance on any platform, you need to have software developers write applications that are native to the instruction sets being used. Because that can take a while, you also have to have a means to run existing software (that is, designed for Intel-based Macs) on the new chips via emulation. Ironically, Apple has chosen to use the exact same playbook to transition away from Intel processors that it used to transition into Intel processors. In fact, it’s even using the same names (with the addition of a version 2) for the core technologies: Universal Binaries 2 are combined applications that run on both Intel CPUs and the new Apple custom silicon chips and Rosetta 2 is the software used to emulate Intel instructions. This time around Apple also added some virtualization capabilities and demoed the ability to run Linux in a virtualized container. However, interestingly, there was no discussion of Windows running on the new Mac. Presumably all the work that Microsoft and its partners have done to bring Windows to Arm-based CPUs should port over fairly easily to Apple designs as well, but the details on this are not clear just yet.
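As a simple, hypothetical illustration of what this transition looks like from a developer’s point of view, the snippet below (which uses only Python’s standard library, nothing Apple-specific) shows how a running process can report which CPU architecture it sees: a native build on the new silicon would report arm64, while the same code running inside an Intel build translated by Rosetta 2 would still see x86_64.

```python
import platform

# A natively compiled runtime on Apple's Arm-based silicon reports "arm64" here,
# while the same script running inside an Intel (x86_64) build translated by
# Rosetta 2 still reports "x86_64"; the emulation is transparent to the process.
print(f"Reported architecture: {platform.machine()}")
```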
To the company’s credit, Apple did an impressive job when it created this playbook to move from PowerPC-based chips to Intel, so here’s hoping the same strategy works the other way around. While Apple made it seem like it’s a fairly trivial task to shift from x86-based instructions to Arm if you use its Xcode development environment, history strongly suggests that the transition can be a bit daunting for some developers. That said, Apple did show functioning demos of critical Microsoft Office, Adobe Creative Cloud, and Apple professional apps running natively in the new environment. One concern Apple didn’t address at all was hardware device drivers. That was a key challenge for early Windows on Arm devices, so it will be interesting to see how Apple does with this.
One nice advantage that Apple and its developers gain by moving over to the same Arm-based architectures that it uses for the iPhone and iPad is that iOS and iPadOS applications should easily run on these new Macs—a point Apple was eager to make. As exciting as that first sounds, however, there is that detail of a lack of a touch screen on any existing Mac. Imagine trying to use a mouse with your iPhone, and you can see how initial enthusiasm for this capability may dampen, unless Apple chooses to finally allow touchscreens on Macs. We shall see.
The last point to make regarding all of these developments is that Apple ultimately chose to move to Arm to gain complete control over the Mac experience. As good as Intel’s processors have been, Apple has shown with its other devices that it likes to own the complete vertical technology stack, and the only way to do that was to design the CPU as well. It’s the last critical piece of the puzzle for Apple’s strategy to control its own destiny.
Regardless of that reasoning, however, it’s clear that both Apple’s decision and the supercomputing win mentioned earlier provide a great deal of credence to Arm’s efforts. At the same time, it arguably puts even more pressure on Arm to continue its pace of innovations. For a company that so few people really appreciate and understand, it’s great to see how far and how wide Arm has pushed the boundaries of computing. Now let’s see how they continue to evolve.
Podcast: Cisco Live, Qualcomm Snapdragon 690, Apple App Store Controversy
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the many announcements from the Cisco Live event, analyzing the potential impact of low-cost 5G phones from the latest Qualcomm chip, and debating the controversies around Apple’s app store payment model for developers.
Cisco Highlights Focus on Location as Companies Start to Reopen
As states in the US start to reopen and businesses around the country (and the world) start to plan for employees to return, there’s been a lot of discussion around what the new “normal” in the workplace will be. Of course, we don’t really know what it’s going to be like, but most people are fairly certain it’s going to be different. Whether it’s staggered work schedules, spread out workspaces, plexiglass shield-equipped lunch tables, or other workplace adjustments, many people who start to return to the office will likely encounter different environments.
Of course, many others won’t be returning for some time, if at all—upcoming research data from TECHnalysis Research suggests we could have as many as 35% of workers still working from home even into 2021. Regardless of where people do their work, however, it’s never been clearer that the need for flexible, secure access to work resources is extremely high. In addition, as some people do start to venture back into the office, it’s also clear that they’re going to want/need tools that can help them stay safe while they’re there.
At the Cisco Live Digital event, the networking giant highlighted a number of new and updated initiatives it has been working on to address some of these issues. On the security side, the company’s big news is around its SecureX cloud-native cybersecurity platform, which it is starting to integrate into all Cisco security products at the end of this month. Key enhancements include a single dashboard for viewing live threat data, increased automation of security tools, and enhanced security capabilities that can intelligently leverage analytics data from multiple sources simultaneously.
The company also unveiled a number of enhancements to its Webex collaboration platform, including the announcement that it now has an impressive 3x the meeting capacity it had previously. For those returning to the office, Cisco also made some interesting additions via its Webex Control Hub application. Control Hub lets IT managers quickly install the Webex voice assistant onto conference room devices, which keeps people from having to touch the screens or touchpads in meeting rooms. In addition, Control Hub offers expanded analytics on meeting room usage, which can inform cleaning schedules for those rooms and can manage meeting room locations/configurations to keep people spread out. Cisco also enhanced the support capabilities for meetings that will incorporate both on-site and remote workers.
Another intriguing location-based set of capabilities comes via the updated DNA Spaces offering. Related to the company’s larger Digital Network Architecture (DNA) initiative, which is essentially Cisco’s enhanced version of software-defined networking (SDN), DNA Spaces is an indoor location-based service platform that can leverage data from WiFi hotspots, including those from its Meraki division, to determine how people are moving through or congregating within a location. The company made two additions to the platform: the descriptively named Cisco DNA Spaces for Return to Business, and Indoor IoT Services, which can use WiFi 6-enabled access points to work with Bluetooth LE devices, such as beacons, to do things like asset tracking, environmental monitoring, room tracking, and more.
In a manner that’s conceptually similar to the Bluetooth-based contact tracing apps that have been in the news, DNA Spaces for Return to Business can track the WiFi (or GPS) signals from mobile devices, and then can use that to analyze people’s real-time movement patterns through the office. The resulting data can subsequently be used to do things like limit the number of people in a given building, or section of the office, that a company could define as being at maximum capacity. In conjunction with Indoor IoT Services, which Cisco claims is the first indoor IoT-as-a Service offering, the same data could be combined with other sensor data to do things like suggest alternative places to meet, encourage employees to social distance, and more.
While there are certainly going to be some questions about privacy concerns for any location-based service, companies (and likely a decent percentage of employees) probably feel that the potential safety benefits outweigh those privacy concerns within the limited office environment. Over time those feelings may change—and it will certainly be an interesting trend to watch—but to get people to feel comfortable about returning to office environments, these types of technology-based solutions will likely play an important role. Companies that deploy these solutions will have to make sure employees feel confident that they aren’t being tracked once they leave the workplace, however; otherwise, they’ll likely face significant pushback. As long as companies ensure privacy outside the workplace, employees are likely to accept these tracking solutions as just one of the many new aspects of the new normal inside the workplace.
Podcast: Facial Recognition Technology, Sony PS5, Android 11, Adobe Photoshop Camera
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing tech companies’ recent shifts in policy around facial recognition, and analyzing the debut of Sony’s PS5 gaming console, the beta of Google’s Android 11 and Adobe’s new Photoshop Camera app for smartphones.
WiFi 6E Opens New Possibilities for Fast Wireless Connectivity
One of the most obvious impacts of the COVID-19 pandemic is how reliant we have all become on connectivity, particularly wireless connectivity. For most of us, the combination of a fast broadband connection along with a solid WiFi wireless network inside our home has literally made the difference between being able to work, attend classes, and enjoy entertainment on a consistent, reliable basis or not being able to do so.
As a result, there’s significantly more attention being placed on connectivity overall these days, within all of our different devices. Of course, it doesn’t hurt that we’re also at the dawn of a new era of wireless connectivity, thanks to the recent launch of 5G networks and the growing availability of lower-cost 5G-capable devices. But, while 5G may currently be getting the lion’s share of attention, there have been some tremendously important developments happening in the world of WiFi as well.
In fact, just six weeks ago, the FCC gave official approval for WiFi to extend its reach to an enormous swath of new radio spectrum in the 6 GHz band here in the US. Specifically, the new WiFi 6E standard will have access to 1.2 GHz, or 1,200 MHz of radio spectrum, ranging from 5.9 GHz to 7.1 GHz (and incorporating all the 6 GHz frequencies in between, hence the 6 GHz references). Just to put that in perspective, even the widest connections for millimeter wave 5G—the fastest kind of 5G connection available—are limited to 800 MHz. In other words, the new WiFi connections have access to nearly 1.5 times the amount of frequencies to transmit on as the fastest 5G connections.
Theoretically, that means that WiFi 6E connection speeds could prove to be significantly faster than even the best that 5G has to offer. Plus, because of the basic laws of physics and signal propagation, WiFi 6E coverage can actually be wider than millimeter wave 5G. To be fair, total coverage is very dependent on the amount of power used for transmission—cellular transmission levels are typically several times stronger than WiFi—but in environments like office buildings, conference centers, as well as in our homes, it’s not unreasonable to expect that WiFi 6E will be faster than 5G, just as current 5 GHz WiFi (802.11a and its variants) is typically faster than 4G LTE signals.
One important clarification is that all of these benefits only extend to WiFi 6E—not WiFi 6, which is also relatively new. For WiFi 6, there are a number of improvements in the way signals are encoded and transmitted, all of which should decrease the congestion and reduce the power requirements for using WiFi. However, all those improvements still use the traditional 2.4 and 5 GHz frequency bands that WiFi has used for the last 20 years. The critical new addition for WiFi 6E is the 6 GHz frequency band.
To make sense of all this, you have to understand at least a little bit about radio frequency spectrum (whether you want to or not!). The bottom line is, the higher the frequency, the shorter the distance a wireless signal can travel, and the lower the frequency, the farther it can travel. The analogy I like to use is to think of hearing a music concert from a far-away stadium. If you’re driving by a concert venue while a band is playing, you typically can hear a wide range of frequencies and can better make out what’s being played. The farther away you are, however, the harder the higher frequencies are to hear—all that’s left is the low-frequency rumble of the bass, making it difficult to tell what song is being played. All radio frequency signals, including both cellular and WiFi, follow these basic rules of frequency and distance.
There is a critically important twist for data transmission, however, and that has to do with the availability and width of channels for transmitting (and receiving) signals. The basic rule of thumb is the lower the frequency, the smaller the channel width, and the higher the frequency, the wider the channel width. Data throughput and overall wireless connection speed is determined by the width of these channels. For 4G and what’s called low-band 5G (such as with T-Mobile’s 600 MHz 5G network), those channels can be as small as 5 MHz wide or up to 20 MHz. The mmWave channels for 5G, on the other hand, are 100 MHz wide and, in theory, up to eight of them are available for a total of 800 MHz of bandwidth.
The beauty of WiFi 6E is that it supports up to 7 channels of 160 MHz, or a total of 1,120 MHz of bandwidth. (As a point of comparison, 5 GHz WiFi supports a maximum of two 160 MHz channels and 500 MHz overall, while 2.4 GHz WiFi only supports a maximum of three 20 MHz channels and 70 MHz overall.) In addition, WiFi 6E has these wide channels at a significantly lower frequency than those used for millimeter wave (typically 24 GHz and up, although most US carriers are using 39 GHz), which explains why WiFi 6E can have broader coverage than mmWave. Finally, because the 6 GHz spectrum will be unoccupied by other devices, real-world speeds should be even better. The lack of other traffic will enable much lower latency, or lag, times for devices on WiFi 6E networks.
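To make the channel arithmetic concrete, here is a minimal sketch that simply tallies the aggregate channel bandwidth for each of the bands discussed above, using the channel counts and widths cited in this piece:

```python
# Aggregate channel bandwidth per band, using the figures cited above.
bands = {
    "WiFi 6E (6 GHz)": (7, 160),  # up to seven 160 MHz channels
    "mmWave 5G":       (8, 100),  # up to eight 100 MHz channels
    "WiFi (5 GHz)":    (2, 160),  # maximum of two 160 MHz channels
    "WiFi (2.4 GHz)":  (3, 20),   # maximum of three 20 MHz channels
}

for name, (channels, width_mhz) in bands.items():
    print(f"{name:16s}: {channels} x {width_mhz} MHz = {channels * width_mhz} MHz")

# WiFi 6E's 1,120 MHz of channel bandwidth vs. mmWave 5G's 800 MHz is roughly a
# 1.4x advantage, in line with the spectrum comparison made earlier.
```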
Of course, to take advantage of WiFi 6E, you need to have both routers and devices that support the standard. To do that, you need to use chips that also support the standard (as well as live in a country that supports the full frequency range—right now the US is leading the way and is the only country to support the full 1.2 GHz of new spectrum). Broadcom and Intel have both announced support for WiFi 6E, but the only company currently shipping chips for both types of devices is Qualcomm. For client devices like smartphones, PCs and others, the company offers the FastConnect 6700 and 6900, while for routers, the company has a new line of tri-band (that is, supporting 2.4 GHz, 5 GHz and 6 GHz) Networking Pro Series chips, including the Networking Pro 610, 810, 1210 and 1610, which support 6, 8, 12, and 16 streams, respectively, of WiFi 6E connectivity.
In addition, the new Networking Pro line supports what the company calls Qualcomm Max User Architecture and Multi-User Traffic Management, which enable up to 2,000 simultaneous client connections, thanks to advanced OFDMA (Orthogonal Frequency-Division Multiple Access) and 8-user MU-MIMO (Multi User—Multiple Input, Multiple Output) per channel. The new router-focused Networking Pro chips also support SON (Self-Organizing Networks), which makes them well suited for future versions of WiFi mesh routers.
In a way, the benefits of WiFi 6E offer an interesting challenge for Qualcomm and other companies that make both 5G cellular and WiFi-focused chips and devices. For certain applications—notably public venues, certain office environments, etc.—the two technologies are likely to compete directly with one another, in which case the core component companies will essentially have to sell against themselves. Because of the increasingly complex range of wireless network architectures, different security requirements, business models and more, however, the likely truth is that both technologies will co-exist for some time to come. As a result, it makes better business sense to have offerings that support both than to simply pick a side.
The good news for those of us in the US is that we’re about to enjoy a significantly improved range of wireless networking options, thanks to both of these recent WiFi 6E enhancements, as well as the forthcoming auctions for mid-band (3.5 GHz) 5G spectrum. Despite the many other challenges we face, it’s looking to be a good year for wireless.
Podcast: Twitter Controversy, Arm IP Designs, Qualcomm XR Viewers
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the controversy around Twitter’s efforts to tag tweets and the implications for social media overall, analyzing the new mobile IP chip designs from Arm, and chatting about the latest announcements around AR/VR/XR headsets attached to Qualcomm-powered 5G smartphones in conjunction with telco carriers.
Arm Doubles Down on AI for Mobile Devices
While many people still aren’t that familiar with semiconductor IP stalwart Arm, most everyone knows their key customers—Qualcomm, Apple, Samsung, MediaTek, and HiSilicon (a division of Huawei) to name just a few in the mobile market. Arm provides chip designs that these companies and others use to power virtually every single smartphone in existence around the world.
As a result, if you care the least bit about where the mobile device market is headed, it’s important to keep track of the new advancements that Arm introduces. While you won’t experience them immediately, if you purchase a new smartphone 12-18 months from now, it will likely be powered by a chip (or several) that incorporates these new enhancements. In particular, expect to see a big boost in AI performance across a range of different chips.
Those who are familiar with Arm know that, like clockwork every year, the company announces new capabilities for its Cortex CPUs, Mali GPUs and, most recently, Ethos NPUs (neural processing units). As you’d expect, most of these include refinements to the chip designs and resulting increases in performance. This year, however, Arm has thrown in a few additional twists that serve as an excellent roadmap for where the smartphone market is headed at several different levels.
But first, let’s cover the basics. The latest top-end 64-bit Cortex CPU design is the Cortex-A78 (up from last year’s A77), a further refinement of the company’s ARMv8.2 core. The A78 features a 20% sustained performance improvement versus last year’s design, thanks to several advanced architectural refinements. The biggest focus this year is on power efficiency, letting the new design achieve that 20% improvement at the same power draw, or allowing it to achieve the same performance as the A77 with just 50% of the power, thereby saving battery life. These benefits result in better performance per watt, making the A78 well suited for both power- and performance-hungry 5G phones, as well as foldables and other devices featuring larger displays.
In addition to the A78, Arm debuted a whole new branch of CPUs with the Cortex-X1, a larger, but more powerful design. Recognizing the growing interest in gaming-focused smartphones and other applications that demand even more performance, Arm decided to provide an even more performant version of their CPU core with the X1 (it features a 30% performance boost over the A77).
Even more interesting is the fact that the X1 doubles the performance for machine learning and AI models. Despite the appearance of dedicated AI accelerators (like the company’s Ethos NPUs) as well as the extensive focus on GPUs for AI, the truth is that most neural network and other AI models designed for mobile devices run on the CPU, so it’s critical to enhance performance there.
While the X1 isn’t intended for mainstream usage and won’t represent a particularly large segment of the market (particularly because of its larger and more power-hungry design), its appearance reflects the increasing diversity and segmentation of the smartphone market. In addition, the Cortex-X looks like it would be a good candidate for future versions of Arm CPUs for PCs and other larger devices.
On the GPU side, the company made two different introductions: one at the top end of the performance chain and the other emphasizing the rapidly growing opportunity for moderately priced smartphones. The top-of-the-line Mali-G78 features a 25% increase in standard graphics performance over its Mali-G77 predecessor, as well as a 15% boost in machine learning application performance. Given the interest in achieving PC and console gaming-like quality on smartphones, the G78 adds support for up to 24 shader cores, but leverages a clever asynchronous power design that allows it to create high-level graphics without drawing too much power.
The other new design is the Mali-G68, which Arm classifies as being targeted to a “sub-premium” tier of phones. Leveraging essentially the same design as the G78, but limited to a maximum of 6 shader cores, the G68 allows Arm’s chip customers, and in turn smartphone makers, to create products with premium-like features but at lower price points. Given the price compression that many expect to see in smartphones over the next several years, this seems like an important step.
The final new design from Arm was their Ethos-N78, just the second generation of their dedicated line of AI co-processors for mobile devices. Featuring more than 2x the peak performance of the N77, as well as a greater than 25% improvement in performance efficiency, the N78 also offers more flexibility in configuring its core elements, letting companies more easily use it across a wide range of different mobile devices.
Even more important than raw performance in the AI/ML world is software. Not surprisingly, then, the company also announced new enhancements to its Arm Development Studio and other tools that make it easier to optimize AI applications not only for the N78, but for its full line of Cortex CPUs and Mali GPUs as well. In fact, Arm is offering a unified software stack that essentially allows developers to create AI/ML models that can run transparently across any combination of Arm CPUs, GPUs or NPUs. Conceptually, it’s very similar to Intel’s oneAPI idea, which is intended to provide the same level of flexibility across a range of different Intel silicon designs. Real-world performance for all of these “write once, run anywhere” heterogeneous computing models remains to be seen—and the challenges for all of them seem quite high—but it’s easy to see why they could be very popular with developers.
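As one hypothetical illustration of what such a unified stack can look like from a developer’s perspective, the sketch below attaches Arm’s open-source Arm NN delegate to a TensorFlow Lite interpreter and lets a backend preference list decide whether the model runs on the GPU, the CPU cores, or a reference implementation. The delegate library name and option keys shown are assumptions that depend on how Arm NN is built and installed on the device, and the model file is the same placeholder used above.

```python
import tensorflow as tf

# Hypothetical setup: the Arm NN TensorFlow Lite delegate (library name and
# option keys are assumptions) tries each backend in order of preference:
# the Mali GPU first, then the Cortex CPU, then a reference fallback.
armnn_delegate = tf.lite.experimental.load_delegate(
    "libarmnnDelegate.so",
    options={"backends": "GpuAcc,CpuAcc,CpuRef", "logging-severity": "info"})

# From here on, the application code stays the same no matter which
# Arm processor ultimately executes the model.
interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite",
                                  experimental_delegates=[armnn_delegate])
interpreter.allocate_tensors()
```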
As expected, Arm brought a range of new mobile-focused chip designs to the table once again this year, but thanks to the debut of the Cortex-X1, the sub-premium Mali-G68, and the overall emphasis on AI and machine learning, it still managed to shake things up a bit. Clearly, the company sees growing demand across these market sub-segments and, because of the pivotal role its designs play, its efforts will go a long way toward making them real.
The ultimate decisions on how all these new capabilities get deployed, and how the features they enable get implemented, are up to the company’s more famous customers and, in some cases, their customers’ customers, of course. More “intelligent” devices, more immersive augmented reality (AR) and virtual reality (VR) experiences, and significantly improved graphics performance all seem like straightforward outcomes they could enable. Nevertheless, the groundwork has now been laid for future mobile devices and it’s up to other vendors in the mobile industry to see exactly where it will take us.
Podcast: Microsoft Build, Work from Home Forever
This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing both the news from and the structure of Microsoft’s recent virtual Build developer conference, as well as the trend of tech companies offering their employees the ability to work from home for as long as they would like.
Microsoft Project Reunion Widens Windows 10 Opportunity to One Billion Devices
Sometimes, things just take a little bit longer than expected. At Microsoft’s Build conference five years ago, the company made a widely reported prediction that the Windows 10 ecosystem would expand to one billion devices within a 2-3 year period. Unfortunately, they didn’t make it by the original deadline, but just a few months ago they were finally able to announce that they had reached that ambitious milestone.
Appropriately, at this year’s virtual Build developer conference, the company made what could prove to be an even more impactful announcement that will allow developers to take full advantage of that huge installed base. In short, the company unveiled something they call Project Reunion that will essentially make it easier for a variety of different types of Windows applications—built via different programming models—to run more consistently and more effectively across more devices.
Before getting into the details, a bit of context is in order. Back in 2015, when then-Executive VP Terry Myerson made the one billion prediction, Microsoft’s OS ambitions extended well beyond the PC. The company was still actively pursuing the smartphone market with Windows Phone, had just unveiled the first HoloLens concept devices and the Surface Hub, talked about the role that the Xbox One played in its OS plans, and was generally thinking about a multi-device world for its then-new OS.
Looking back now, it’s clear that we indeed entered an era of multiple devices, but the only ones that ended up having a significant impact on the Windows 10 installed base number turned out to be PCs in all flavors and forms, from desktops and laptops, to 2-in-1s and convertibles like the original Surface. In fact, the nearly complete reliance on PCs is undoubtedly why it took longer to reach the one billion goal.
In retrospect, however, that’s actually a good thing, because there are now approximately one billion relatively similar devices for which developers can create applications, instead of a mixed group of devices that were related to Windows 10 more in name than in true capability. Even within this large, similar grouping, however, not all Windows 10 applications were created or function in the same way. Because of its early bets on device diversity under the Windows 10 umbrella, Microsoft chose to promote a more basic (and legacy-free) application development architecture that it hoped would ensure applications ran across that wide range of devices. Specifically, Microsoft promoted the concept of Universal Windows Platform (UWP) APIs (Application Programming Interfaces), and a number of developers took the company up on the initiative.
At this point, however, because of some of the limitations in UWP, there really isn’t much need (or demand) for these efforts, hence Project Reunion. At a basic level, the goal with Project Reunion is to provide the complete set of Windows 10 capabilities (the Win32 or Windows APIs) to applications originally created around the UWP concept—in essence to “reunite” the two application development platforms and their respective APIs into a single, more modern Windows platform. This, in turn, allows programmers to have a more consistent means of interaction between their apps and the Windows 10 operating system, regardless of the approach they first took to create the application. In addition, thanks to a number of extensions that Microsoft is making to that model, it allows developers to create more modern, web and service-friendly applications.
Specifically, for example, Project Reunion is going to enable something the company is calling WinUI 3 Preview 1, a new framework for building modern, fast, flexible user interfaces that can easily scale across different devices. By leveraging these open-source, multi-OS friendly, Fluent Design-based tools, developers can achieve even more widespread reach, not only across different Windows 10-based devices, but also across devices running other operating systems. Plus, thanks to hooks into previous development platforms, developers can use these UI tools to modernize the look of existing apps as well as build new ones.
Another element of Project Reunion is WebView2, a set of tools that lets developers easily embed web content natively within an app and even integrate with browsers across different platforms. As with WinUI 3 and the more modern Windows APIs, WebView2 isn’t locked to a specific version of Windows, giving developers more flexibility in leveraging their application’s codebase across multiple platforms.
Microsoft also announced new extensions that allow Windows developers to tap into services built into Microsoft 365 such as Microsoft Search and Microsoft Graph. This allows developers to create a modern web service-like application that can leverage the capabilities and data that Microsoft’s tools provide and offer extensions and connections to the company’s widely used SaaS offerings.
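As a rough illustration of what tapping into those services looks like at the API level, here is a minimal sketch that calls the Microsoft Graph REST endpoints from Python with the requests library; the same endpoints are reachable from a Windows app however it was built. Acquiring the OAuth access token (typically via the MSAL library) is omitted, and the token value is a placeholder.

```python
import requests

# Placeholder token; in practice this would be acquired through an OAuth
# flow (for example, with the MSAL library) against Azure Active Directory.
ACCESS_TOKEN = "<access-token>"
headers = {"Authorization": f"Bearer {ACCESS_TOKEN}"}

# Read the signed-in user's profile from Microsoft Graph.
me = requests.get("https://graph.microsoft.com/v1.0/me", headers=headers)
print(me.json().get("displayName"))

# Use the Microsoft Search API (part of Graph) to find documents
# mentioning "Project Reunion" in the user's OneDrive/SharePoint files.
search_body = {"requests": [{"entityTypes": ["driveItem"],
                             "query": {"queryString": "Project Reunion"}}]}
results = requests.post("https://graph.microsoft.com/v1.0/search/query",
                        headers=headers, json=search_body)
print(results.status_code)
```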
The Project Reunion capabilities look to finally complete the picture around the one billion device installed base that the company promised, but in a much different way than most people originally thought. Interestingly, thanks to the growing importance and influence of the PC—a point that’s really been brought home in our current environment—there’s arguably a less diverse set of Windows 10-based devices to specifically code for than most predicted. However, the new tools and capabilities promised for Project Reunion potentially allow developers to create applications for that entire base, instead of a smaller subset that realistically was all that was possible from the original UWP efforts.
Additionally, because of Microsoft’s significantly more open approach to application development and open source in general since that 2015 announcement, the range of devices that Windows-based developers can target is now significantly broader than even that impressive one billion figure. Obviously, delivering on that promise is a lot harder than simply defining the vision, but it’s certainly interesting to see how Microsoft continues to keep the world of Windows fresh and relevant. Throw in the fact that a new version of Windows, 10X, is on the horizon, and it’s clear that 2020 and beyond are going to be an interesting time for a platform that many had incorrectly written off as irrelevant.