Extended Reality: A Sceptic's Wishlist

Today kicks off Augmented World Expo USA 2020, one of the annual touchpoints for the world of Extended Reality (XR), which spans everything from Augmented Reality to Virtual Reality and anything in between. Like many other events, the tech conference has gone digital due to COVID-19, so it will be interesting to see how the organizers use XR in their own lineup. What is certain is that we can expect several announcements to be made during the week. A couple of brands even got ahead of the news cycle by announcing early.

First, Qualcomm announced a partnership with 15 global operators to bring VR viewers tethered to 5G smartphones to market within a year. Then, former HTC CEO Peter Chou introduced his new project: a VR company called XRSpace. The project comes with an all-in-one 5G-connected XR headset called XRSpace Mova as well as a social reality platform called XRSpace Manova. Although it was probably pure coincidence that the two announcements came only a few hours apart, both companies are doing a lot to try and accelerate adoption, including delivering on some of my wishlist items that would considerably change my attitude towards XR. I have a feeling, however, that these items are far from unique to me, considering the still relatively limited uptake we have seen thus far.

Don't get me wrong: I do believe XR will play an important role in the way we communicate and experience the world in the future. Adoption will happen much faster in a business context than in a consumer one, mostly because the return on investment and the value add in a business context will be much more apparent to users.

At a time when so many are physically isolated because of COVID-19, and despite VR's still limited penetration, we have seen attempts to use VR to alleviate solitude and improve fitness and mental health. At the same time, the current economic downturn, coupled with what was a tech overload for many parents dealing with distance learning while trying to work, might not have been the optimal environment in which to test out VR. All in all, it is too much of a mixed bag to assess how effective VR could be during shelter-in-place orders, but that is not a reason to be negative about the future opportunity.

XR Viewers as a Steppingstone to Broader Adoption

I've been excited about the role that XR viewers could play in broadening adoption since I first heard Qualcomm talk about their plans and roadmap at Mobile World Congress in 2019. The excitement came from two main components of this new kind of viewer: first, the ability to lower the barrier to entry represented by the cost of many VR headsets; second, the richer and more user-friendly experience that can be delivered compared to viewers where the mobile phone sits within the viewer itself.

For many consumers, investing $400 to $700 in a VR headset remains a hard choice, mostly because the value these headsets bring is unclear, either because they have never tried VR or because they don't see themselves using it often enough to justify the investment. Some consumers who do appreciate what VR has to offer are put off by the bulky design of most headsets and the clunkiness of the experience.

The Qualcomm XR Optimized Certification Program will help OEM partners deliver on that rich experience by guaranteeing compatibility between the 5G phone and the viewer, specifically looking at these key features: Six Degrees of Freedom (6DoF), head tracking performance, display calibration validation, motion-to-photon latency validation, and power and thermal performance.

Another limitation of current systems is that they are mostly used within the home; 5G connectivity and the lighter, nimbler designs of the viewers will help make the experience more mobile, creating more opportunities for engagement.

If the relative success of Oculus Quest is anything to go by, it seems clear that standalone VR headsets are what users want, but for those who might not be quite as ready to invest, XR viewers might be the closest thing to a standalone experience. Linking XR viewers to 5G smartphones is a smart move, both because of the experience that can be delivered through 5G and because it represents a great opportunity for OEMs and carriers to offer bundles that lower the barrier to entry even further.

XR Must Connect, Not Isolate

Peter Chou has always been a big believer in VR; after all, HTC got into this business when he was still its CEO. He first teased the idea of XRSpace back in 2019 at Mobile World Congress, when he talked about an XR experience with a social component at its core. The idea of bringing a social component to VR is not new. Last year, Facebook launched Horizon, basically a VR world designed for Oculus users to meet and socialize. The move seemed a pretty obvious step to keep Facebook relevant in the future and limit the risk of missing the transition away from mobile in the same way some companies missed the transition from PC to mobile. Chou's decision, however, has little to do with business model and a lot to do with opportunity, as he claims that without a social component, VR will fail to win over a broader set of consumers.

While XRSpace Mova grabs the title of the world's first 5G consumer mobile VR headset powered by Qualcomm, it is the XRSpace Manova platform that should get much of our attention. Manova introduces full-body avatars that can interact in different social contexts like work, health and fitness, education and entertainment. Through its sensors and a proprietary scanning technology, Mova can understand hand gestures and analyze real-world spaces to replicate them within the VR app. Between 5G and this scanning tech, it sure sounds like Mova will allow for much more freedom than current head-mounted displays do, which will help build a broader set of use cases and, therefore, a wider appeal.

If there is one thing that COVID-19 has shown us, it is how much can be done digitally. What it has also made clear is that humans will always crave real-life human contact and interaction. So, it seems to me that Chou's idea that VR should connect, not isolate, people makes a lot of sense. Whether or not the platform will be successful will depend on how much content is created, another limitation the segment is currently facing, and how much it will all cost.

As I look at the marketing videos and material for XRSpace, I am struck by the lack of diversity that comes screaming through these vignettes. If we want to extend our reality and create more opportunities for social interaction, let's make sure it is a better reality, one where everybody feels seen and represented. The lack of diversity in VR could ultimately cap its opportunity much more so than cost and tech.

Arm Doubles Down on AI for Mobile Devices

While many people still aren’t that familiar with semiconductor IP stalwart Arm, most everyone knows their key customers—Qualcomm, Apple, Samsung, MediaTek, and HiSilicon (a division of Huawei) to name just a few in the mobile market. Arm provides chip designs that these companies and others use to power virtually every single smartphone in existence around the world.

As a result, if you care the least bit about where the mobile device market is headed, it's important to keep track of the new advancements that Arm introduces. While you won't experience them immediately, if you purchase a new smartphone 12-18 months from now, it will likely be powered by a chip (or several) that incorporates these new enhancements. In particular, expect to see a big boost in AI performance across a range of different chips.

Those who are familiar with Arm know that, like clockwork every year, the company announces new capabilities for its Cortex CPUs, Mali GPUs and, most recently, Ethos NPUs (neural processing units). As you'd expect, most of these include refinements to the chip designs and resulting increases in performance. This year, however, Arm has thrown in a few additional twists that serve as an excellent roadmap for where the smartphone market is headed at several different levels.

But first, let's get to the basics. The latest top-end 64-bit Cortex CPU design is the Cortex-A78 (up from last year's A77), a further refinement of the company's ARMv8.2 core. The A78 features a 20% sustained performance improvement versus last year's design, thanks to several advanced architectural refinements. The biggest focus this year is on power efficiency, letting the new design achieve that 20% improvement at the same power draw, or allowing it to achieve the same performance as the A77 with just 50% of the power, thereby saving battery life. These benefits translate into better performance per watt, making the A78 well suited both for power- and performance-hungry 5G phones and for foldables and other devices featuring larger displays.
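To see how those two framings translate into performance per watt, here is a quick back-of-the-envelope calculation; the baseline figures below are arbitrary placeholders for illustration, not Arm benchmark numbers.

```python
# Back-of-the-envelope illustration of the two efficiency framings above.
# The baseline index and power figures are arbitrary, NOT Arm benchmark data.
a77_perf = 100.0   # arbitrary performance index for last year's Cortex-A77
a77_power = 1.0    # normalized power draw

# Framing 1: ~20% more sustained performance at the same power draw
a78_iso_power = (a77_perf * 1.20) / a77_power

# Framing 2: the same performance as the A77 at roughly 50% of the power
a78_iso_perf = a77_perf / (a77_power * 0.50)

print(f"A77 perf/W:             {a77_perf / a77_power:.0f}")   # 100
print(f"A78 perf/W (iso-power): {a78_iso_power:.0f}")          # 120
print(f"A78 perf/W (iso-perf):  {a78_iso_perf:.0f}")           # 200
```

Either way you frame it, the performance-per-watt gain is what lets phone makers spend the headroom on sustained speed, battery life, or bigger displays.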

In addition to the A78, Arm debuted a whole new branch of CPUs with the Cortex-X1, a larger, but more powerful design. Recognizing the growing interest in gaming-focused smartphones and other applications that demand even more performance, Arm decided to provide an even more performant version of their CPU core with the X1 (it features a 30% performance boost over the A77).

Even more interesting is the fact that the X1 doubles the performance for machine learning and AI models. Despite the appearance of dedicated AI accelerators (like the company’s Ethos NPUs) as well as the extensive focus on GPUs for AI, the truth is that most neural network and other AI models designed for mobile devices run on the CPU, so it’s critical to enhance performance there.

While the X1 isn’t intended for mainstream usage and won’t represent a particularly large segment of the market (particularly because of its larger and more power-hungry design), its appearance reflects the increasing diversity and segmentation of the smartphone market. In addition, the Cortex-X looks like it would be a good candidate for future versions of Arm CPUs for PCs and other larger devices.

On the GPU side, the company made two different introductions: one at the top end of the performance chain and the other emphasizing the rapidly growing opportunity for moderately priced smartphones. The top-of-the-line Mali-G78 features a 25% increase in standard graphics performance over its Mali-G77 predecessor, as well as a 15% boost in machine learning application performance. Given the interest in achieving PC and console gaming-like quality on smartphones, the G78 adds support for up to 24 shader cores, but leverages a clever asynchronous power design that allows it to create high-level graphics without drawing too much power.

The other new design is the Mali-G68, which Arm classifies as being targeted to a "sub-premium" tier of phones. Leveraging essentially the same design as the G78, but limited to a maximum of six shader cores, the G68 allows Arm's chip customers, and smartphone makers in turn, to create products with premium-like features at lower price points. Given the price compression that many expect to see in smartphones over the next several years, this seems like an important step.

The final new design from Arm was their Ethos-N78, just the second generation of their dedicated line of AI co-processors for mobile devices. Featuring more than 2x the peak performance of the N77, as well as a greater than 25% improvement in performance efficiency, the N78 also offers more flexibility in configuring its core elements, letting companies more easily use it across a wide range of different mobile devices.

Even more important than raw performance in the AI/ML world is software. Not surprisingly, then, the company also announced new enhancements to their Arm Development Studio and other tools that make it easier to optimize AI applications not only for the N78, but for its full line of Cortex CPUs and Mali GPUs as well. In fact, Arm is offering a unified software stack that essentially allows developers to create AI/ML models that can run transparently across any combination of Arm CPUs, GPUs or NPUs. Conceptually, it's very similar to Intel's One API idea, which is intended to provide the same level of flexibility across a range of different Intel silicon designs. Real-world performance for all of these "write once, run anywhere" heterogeneous computing models remains to be seen—and the challenges for all of them seem quite high—but it's easy to see why they could be very popular with developers.
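To make the "write once, run anywhere" idea a little more tangible, here is a minimal sketch using TensorFlow Lite's external delegate mechanism, one common way mobile AI workloads get routed to an accelerator today. The model path and delegate library name are placeholders, and Arm's own unified stack and tooling may expose this differently.

```python
# Minimal sketch: run a TFLite model, optionally routed to an accelerator
# via an external delegate. File and library names are placeholders.
import numpy as np
import tensorflow as tf

try:
    # Hypothetical accelerator delegate (e.g. an NPU or GPU backend).
    delegates = [tf.lite.experimental.load_delegate("libaccelerator_delegate.so")]
except (OSError, ValueError):
    delegates = []  # no accelerator available: the model runs on the CPU cores

interpreter = tf.lite.Interpreter(
    model_path="mobilenet_v2.tflite",    # placeholder model file
    experimental_delegates=delegates,
)
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=inp["dtype"]))
interpreter.invoke()
out = interpreter.get_output_details()[0]
print(interpreter.get_tensor(out["index"]).shape)
```

The appeal for developers is that the model and application code stay the same; only the delegate (and whatever silicon sits behind it) changes, with unsupported operations falling back to the CPU.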

As expected, Arm brought a range of new mobile-focused chip designs to the table once again this year, but thanks to the debut of the Cortex-X1, the sub-premium Mali-G68, and the overall emphasis on AI and machine learning, they still managed to shake things up a bit. Clearly, the company sees growing demand for all these market sub-segments and, because of the pivotal role they play, their efforts will go a long way toward making them real.

The ultimate decisions on how all these new capabilities get deployed and how the features they enable get implemented are up to the company's more famous customers and, in some cases, their customers' customers, of course. More "intelligent" devices, more immersive augmented reality (AR) and virtual reality (VR) enhancements, and significantly improved graphics performance all seem like straightforward outcomes they could enable. Nevertheless, the groundwork has now been laid for future mobile devices, and it's up to other vendors in the mobile industry to determine exactly where it will take us.

Podcast: Microsoft Build, Work from Home Forever

This week's Techpinions podcast features Carolina Milanesi and Bob O'Donnell discussing the news and structure of Microsoft's recent virtual Build developer conference, as well as the trend of tech companies offering their employees the ability to work from home for as long as they would like.

Microsoft Project Reunion Widens Windows 10 Opportunity to One Billion Devices

Sometimes, things just take a little bit longer than expected. At Microsoft's Build conference five years ago, the company made a widely reported prediction that the Windows 10 ecosystem would expand to one billion devices over the course of 2-3 years. Unfortunately, they didn't make it by the original deadline, but just a few months ago they were finally able to announce that they had reached that ambitious milestone.

Appropriately, at this year’s virtual Build developer conference, the company made what could prove to be an even more impactful announcement that will allow developers to take full advantage of that huge installed base. In short, the company unveiled something they call Project Reunion that will essentially make it easier for a variety of different types of Windows applications—built via different programming models—to run more consistently and more effectively across more devices.

Before getting into the details, a bit of context is in order. Back in 2015, when then Executive VP Terry Myerson made the one billion prediction, Microsoft's OS ambitions extended well beyond PCs. The company was still actively pursuing the smartphone market with Windows Phone, had just unveiled the first HoloLens concept devices and Surface Hub, talked about the role that Xbox One had in its OS plans, and generally was thinking about a multi-device world for its then-new OS.

Looking back now, it’s clear that we indeed entered an era of multiple devices, but the only ones that ended up having a significant impact on the Windows 10 installed base number turned out to be PCs in all flavors and forms, from desktops and laptops, to 2-in-1s and convertibles like the original Surface. In fact, the nearly complete reliance on PCs is undoubtedly why it took longer to reach the one billion goal.

In retrospect, however, that’s actually a good thing, because there are now approximately one billion relatively similar devices for which developers can create applications, instead of a mixed group of devices that were more related to Windows 10 in name than true capability. Even with this large similar grouping, however, not all applications for Windows 10 were created or function in the same way. Because of some of Microsoft’s early bets on device diversity under the Windows 10 umbrella, they made decisions about promoting a more basic (and legacy-free) application development architecture that they hoped would ensure that applications ran across this wide range of devices. Specifically, Microsoft promoted the concept of Universal Windows Platform (UWP) APIs (Application Programming Interfaces) and a number of developers took them up on these initiatives.

At this point, however, because of some of the limitations in UWP, there really isn’t much need (or demand) for these efforts, hence Project Reunion. At a basic level, the goal with Project Reunion is to provide the complete set of Windows 10 capabilities (the Win32 or Windows APIs) to applications originally created around the UWP concept—in essence to “reunite” the two application development platforms and their respective APIs into a single, more modern Windows platform. This, in turn, allows programmers to have a more consistent means of interaction between their apps and the Windows 10 operating system, regardless of the approach they first took to create the application. In addition, thanks to a number of extensions that Microsoft is making to that model, it allows developers to create more modern, web and service-friendly applications.

For example, Project Reunion is going to enable something the company is calling WinUI 3 Preview 1, which is a new framework for building modern, fast, flexible user interfaces that can easily scale across different devices. By leveraging these open-source, multi-OS friendly, Fluent Design-based tools, developers can achieve an even more widespread reach, not only across different Windows 10-based devices but also those running other OS's. Plus, thanks to hooks into previous development platforms, developers can use these UI tools to modernize the look of existing apps as well as build new ones.

Another specific element of Project Reunion is WebView 2, which is a set of tools that lets developers easily integrate native web content within an app and even integrate with browsers across different platforms. As with WinUI 3 and the new more modern Windows APIs, WebView 2 isn’t locked to any version of Windows, giving developers more flexibility in leveraging their application’s codebase across multiple platforms.

Microsoft also announced new extensions that allow Windows developers to tap into services built into Microsoft 365 such as Microsoft Search and Microsoft Graph. This allows developers to create a modern web service-like application that can leverage the capabilities and data that Microsoft’s tools provide and offer extensions and connections to the company’s widely used SaaS offerings.
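For a rough sense of what tapping into those services looks like from code, here is a small sketch of calling the public Microsoft Graph REST API. Token acquisition (typically via Azure AD and a library such as MSAL) is omitted, the token value is a placeholder, and nothing here is specific to the new Project Reunion extensions themselves.

```python
# Minimal sketch of a Microsoft Graph call an app might make once it holds an
# OAuth 2.0 access token. Token acquisition (Azure AD / MSAL) is omitted.
import requests

ACCESS_TOKEN = "<access token obtained via Azure AD>"  # placeholder

resp = requests.get(
    "https://graph.microsoft.com/v1.0/me",               # signed-in user's profile
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
profile = resp.json()
print(profile.get("displayName"), profile.get("userPrincipalName"))
```

The same bearer-token pattern extends to Microsoft Search and other Microsoft 365 endpoints, which is what makes it straightforward for a Windows app to behave like a modern web-service client.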

The Project Reunion capabilities look to finally complete the picture around the one billion device installed base that the company promised, but in a much different way than most people originally thought. Interestingly, thanks to the growing importance and influence of the PC—a point that’s really been brought home in our current environment—there’s arguably a less diverse set of Windows 10-based devices to specifically code for than most predicted. However, the new tools and capabilities promised for Project Reunion potentially allow developers to create applications for that entire base, instead of a smaller subset that realistically was all that was possible from the original UWP efforts.

Additionally, because of Microsoft's significantly more open approach to application development and open source in general since that 2015 announcement, the range of devices that Windows-based developers can target is now significantly broader than even that impressive one billion figure. Obviously, delivering on that promise is a lot harder than simply defining the vision, but it's certainly interesting to see how Microsoft continues to keep the world of Windows fresh and relevant. Throw in the fact that a new version of Windows—10X—is on the horizon, and it's clear that 2020, and beyond, is going to be an interesting time for a platform that many had written off, albeit incorrectly, as irrelevant.

Distance Learning During COVID-19: A Parent View

A couple of weeks after the shelter-in-place orders started rolling out across the United States, we at Creative Strategies ran a study to capture some valuable information on remote work. We wanted to know which tools, both hardware and software, people were using while trying to work and support their children's education from home. We also tried to capture the sentiment, both positive and negative, that emerged from using specific solutions, as well as the pain points of new workflows. Our online questionnaire was taken by just over 1000 Americans, 850 of whom have been working from home during the quarantine. Among those, 342 have children living with them and in school.

There has been plenty of data shared by the leading tech companies on how they worked with schools and school districts to enable distance learning practically overnight. Many tools were made available for free by Microsoft, Apple, Google, Zoom and Cisco to enable teachers to hold classes over video, share material, mark work and more. Alongside the success stories, we also heard about the reality many schools faced with students who did not have access to devices or a fast and stable internet connection. The challenges schools faced were different in nature. Teachers faced technological hurdles, like learning how to use brand new tools such as Zoom. They also faced more practical challenges, like deciding whether to deliver a class synchronously or asynchronously. Many settled on a mixed solution so they could stay in touch with their students but also leave them free to organize part of the work around their family's circumstances.

When we set out to do our study, we wanted to hear the voice of the parents, understand the challenges they faced in supporting their children as well as how prepared they felt their school and their children were for this new normal.

Device Ownership Paints a Familiar Picture

Our data confirmed the trend that Chromebooks reign supreme in education: depending on grade level, between 29 and 44 percent of the children in the panel from third grade through college used a Chromebook for their schoolwork. Children between Kindergarten and second grade relied more on Windows PCs (38%) and iPads (33%).

iPad penetration by grade is particularly interesting as it seems to corroborate the “need a notebook to do real work” mantra we often hear from enterprise users. iPad penetration among the respondents drops from 33% in K-2 to 17.5% among high-schoolers and 16.6% among college students. Benefitting from this decline seems to be the Mac, which grows from 13.3% in K-2 to 25.7% in high school and 28.4% in college.

Google Classroom Dominance

Considering the strength of Chromebooks, it should not come as a surprise that Google Classroom dominates as the most used software solution across all age groups in our study, with the highest percentage among middle-schoolers, where 52% are using Google Classroom for their distance learning. Google Docs mirrors the strength of Google Classroom, with penetration as high as 49.4% among sixth to eighth-graders.

The correlation between iPad penetration and Apple Classroom shows there is more work to be done by Apple to match the popularity of its devices with its classroom software. Apple Classroom is most popular among third to fifth graders, but even there penetration remains limited, reaching only 7%.

When it comes to video, Zoom is the clear winner, with an average penetration across all children of 43% and a higher penetration of 51% in grades 3 to 5. It is interesting to note that Google Meet was unable to capitalize on the strong presence of Google Docs and Google Classroom, reaching a peak of 20% among middle-schoolers. Google Meet is certainly suffering from some branding issues as well as a less flexible setup compared to Zoom. Until this week, in fact, Google Meet was only available to enterprise and education accounts, which meant, similarly to Microsoft Teams, it required a top-down setup through an IT manager. As of May 12, Google has opened up Google Meet to anyone with a Gmail account. Users will be able to have video calls with up to 100 people and, until September 30, there will be no time limit. After that date, meetings will be limited to 60 minutes.

Kids’ Top Struggles

Kids will be kids, no matter their age! They all struggled with a lack of socialization. Third to fifth graders were the most impacted, with 62% lamenting that video is just not enough to connect with friends.

The very young (42%) and the pre-teens (41%) struggled with motivation because distance learning just did not feel like school. Kindergarteners to second graders also struggled the most with keeping still and staying focused, with 44% of the parents finding this to be an issue.

Parents’ Struggles

Except for parents of college students, parents felt that their children required more assistance than expected in submitting classwork and doing assignments like taking pictures of work or videos (31%), with parents of kindergartners to second-graders most heavily impacted, at 43.8%.

The other side of the same coin showed that, on average, 32% of parents wished the school had prepared their kids to problem-solve more on their own. This sentiment was particularly strong among parents of third to fifth graders, at 44%.

Post COVID-19

Most states have already said schools will be out for the remainder of the academic year and State Colleges in California announced this week that in-person classes would remain suspended even in the Fall semester. We wanted to see what parents hope will be retained once kids can go back to school.

Fifty-two percent of parents on our panel hope to see distance learning become an option when their children are sick or cannot attend school for other reasons. Another 35% expect to see their kids' school use video for teacher/parent conferences. Finally, 28% of parents would like to see teachers offer "office hours" to provide support for homework.

All in all, the parents in our study seem to have been able to cope with their newfound role of teacher's aide. This is neither remote work nor homeschooling, but a juggling act that we hope will end as soon as it is safely possible. That said, there is no doubt that this experience will impact both businesses and schools going forward. The extent of the change we will see as we establish yet another new normal will depend on many factors, from the level of investment required to the forward-thinking mentality of leaders, as well as any crisis-prevention measures imposed by the government.

New Workplace Realities Highlight Opportunity for Cloud-Based Apps and Devices

One of the numerous interesting outcomes of our new work realities is that many tech-related ideas introduced over the past few years are getting a fresh look. In particular, products and services based on concepts that seemed sound in theory but ran into what I’ll call “negative inertia”—that is, a huge, seemingly immovable installed base of a legacy technology or application—are being reconsidered.

Some of the most obvious examples of these are cloud-based applications. While there’s certainly been strong adoption of consumer-based cloud services, such as Netflix, Spotify, and many others, the story hasn’t been quite as clear-cut in the business side of the world. Most organizations and institutions (including schools) still use a very large number of pre-packaged or legacy custom-built applications that haven’t been moved to the cloud.

For understandable reasons, that situation has started to change, and the percentage of cloud-friendly or cloud-native applications has begun to increase. Although the numbers aren't going to change overnight (or even in the next few months), it's fairly clear to even the most conservative of IT organizations that the time to expand their usage of cloud-based software and computing models is now.

As a result of this shift in mindset, businesses are reconsidering their interest in and ability to use even more cloud-friendly tools. This, in turn, is starting to create a bit of a domino effect where dependencies and barriers that were previously considered insurmountable are now being tossed aside at the drop of a hat. It's truly a time for fresh thinking in IT.

At the same time, companies also now have the benefit of learning from others that may have made more aggressive moves to the cloud several years back. In addition, they recognize that they can’t just start over, but need to use the existing hardware and software resources that they currently own or have access to. The end result is a healthy, pragmatic focus on finding tools that can help companies meet their essential needs more effectively. In real-world terms, that’s translating to a growing interest in hybrid cloud computing models, where both elements of the public cloud and on-premise or managed computing resources in a private cloud come together to create an optimal mix of capabilities for most organizations.

It’s also allowing companies to take a fresh look at alternatives to tools that may have been a critical part of their organization for a long time. In the case of office productivity suites, for example, companies that have relied on the traditional, licensed versions of Microsoft Office can start to more seriously consider something like Google’s cloud native G Suite as they make more of a shift to the cloud. Of course, they may also simply choose to switch to the newly updated Microsoft 365 cloud-based versions of their productivity suite. Either way, moving to cloud-based office productivity apps can go a long way towards a more flexible IT organization, as well as getting end users more accustomed to accessing all their critical applications from the web.

Directly related to this is the ability to look at new alternatives for client computing devices. As I've discussed previously, clamshell notebook PCs have become the de facto workhorses for most remote workers, and the range of different laptop needs has grown with the number of people now using them. The majority of those devices have been (and will continue to be) Windows-based, but as companies start to rely more on cloud-based applications across the board, Chromebooks become a viable option for more businesses as well.

Most of the attention (and sales) for Chromebooks to date has been in the education market—where they’ve recently proven to be very useful for learn-at-home applications—but the rapidly evolving business app ecosystem does start to shift that story. It also doesn’t hurt that the big PC vendors (Dell, HP, and Lenovo) all have a line of business-focused Chromebooks. On top of that, we’re starting to see some interesting innovations in Chromebook form factors, with options ranging from basic clamshells to convertible 2-in-1s.

The bottom line is that as companies continue to adapt their IT infrastructure to support our new workplace realities, there are a number of very interesting potential second-order effects that may result from quickly adapting to a more cloud-focused world. While we aren't likely to move to the kind of completely cloud-dependent vision that used to be posited as the future of computing, it's clear that we are on the brink of what will undoubtedly be some profound changes in how, and with what tools, we all work.

Podcast: IBM Think, PC Industry News from HP, Microsoft, AMD, Samsung, Apple, Lenovo

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the IBM Think conference as well as a number of different PC, OS and chip announcements from major vendors in the PC business and analyzing what it means for the state of the PC category moving forward.

In the Modern Workforce, The Role of PCs Continues to Evolve

It’s been an interesting week for the once again vibrant PC industry. We saw the release of several new systems from different vendors, announcements on the future directions of Windows, and hints of yet more new systems and chip developments on the near-term horizon.

While most of the news wasn’t triggered by the COVID-19 pandemic, all of it takes on a new degree of relevance because of it. Why? As recent US retail sales reports demonstrate and conversations with PC OEMs and component suppliers have confirmed, PCs and peripherals are hot again—really hot. Admittedly, there are many questions about how long the sales burst can last, and most forecasts for the full year still show a relatively large decline, but there’s little doubt that in the current era, the PC has regained its role as the most important digital device that most people own—both for personal and work-related purposes. And, I would argue, even if (or when) the sales do start to decline, the philosophical importance of the PC and its relative degree of usage—thanks in part to extended work-from-home initiatives—will likely remain high for some time to come.

The recent blog post from Microsoft’s Windows and Surface leader Panos Panay provides interesting insights in that regard, as he noted that Windows usage has increased by 75% compared to last year. In recognition of that fact, the company has even decided to pivot on their Windows 10X strategy—which was originally targeted solely at dual-screen devices—to make it available for all regular single-screen PCs. Full details on what exactly that will bring remain to be seen, but the key takeaway is Windows PCs will be getting their first major OS upgrade in some time. To my mind, that’s a clear sign of a vital product category.

Apple is moving forward with their personal computer strategies as well, having been one of several vendors who announced new systems this week. In their case, it was an upgrade to their MacBook Pro line with enhanced components and, according to initial reports, a much-improved keyboard. Samsung also widened their line of Windows notebooks with the formal release of their Galaxy Book Flex and Galaxy Book Flex α 2-in-1 convertibles, and Galaxy Book Ion clamshell, all of which feature the same QLED display technology found in Samsung’s TVs. The Galaxy Book Flex and Ion also have the same type of wireless PowerShare features for charging mobile peripherals as their Galaxy line of smartphones.

The broadest array of new product announcements this week, however, comes from HP. What's interesting about the HP news isn't just the number of products, but how their range of offerings reflects several important trends in the PC market overall. Gaming PCs, for example, have been a growing category for some time now, despite the other challenges the PC market has faced. With the extended time that people have been staying home, interest in, usage of, and demand for gaming PCs have grown even stronger.

Obviously, HP didn’t plan things in this way, but the timing of their new OMEN 25L and 30L gaming desktops and OMEN 27i gaming monitor couldn’t have been better. The desktops offer a number of refinements over their predecessors, including a choice of high-performance Intel i9 or AMD Ryzen 9 CPUs, Nvidia RTX 2080 or AMD Radeon RX 5700 XT graphics cards, Cooler Master cooling components, HyperX high-speed DRAM, WD Black SSDs and a new case design. The new gaming monitor features Quad HD (2,560 x 1,440) resolution and a 165 Hz refresh rate with support for Nvidia’s G-Sync technology.

HP showed even more fortuitous timing with the launch of their new line of enterprise-focused Chromebooks and, believe it or not, a new mobile thin client. Chromebooks have been performing yeoman's duty in the education market for learn-from-home students as a result of the pandemic, but there's also been growing interest on the enterprise side of the world. While the market for business-focused Chromebooks has admittedly been relatively modest so far, the primary reason has been that most companies are still using many legacy applications that haven't been optimized for the cloud. Now that many application modernization efforts are being fast-tracked within organizations, however, a cloud software-friendly device starts to make a lot more sense.

With its latest announcements, HP expanded their range of business Chromebook offerings. They now start with the upgraded $399 Chromebook Enterprise 14 G6, which offers basic performance but a large 14” display and a wipeable/cleanable keyboard, then move up to the mid-range Pro c640 Chromebook Enterprise and finally end up at the Elite C1030 Chromebook Enterprise. Interestingly, the C1030 is the first Intel Project Athena-certified Chromebook (it features a 10th Gen Intel Core CPU) and offers the same 2-in-1 form factor as their high-end EliteBook Windows PCs. It's also the world's first Chromebook made with a 75% recycled aluminum top lid, a 50% recycled plastic keyboard, and speakers made from ocean-bound plastics—all part of HP's ongoing sustainability efforts.

HP also introduced the mt22 Mobile Thin Client, a device that, in another era, would barely get much of a mention. However, with the now critical need in certain industries for modern devices that are optimized for VDI (virtual desktop infrastructure) and Windows Virtual Desktop (WVD), the mt22 looks to be a great solution for workers in regulated or highly secure industries who still need to be able to work from home. Finally, HP also announced ThinPro Go, a USB stick that can essentially turn any functioning PC with an internet connection into a thin client device running HP's Linux-based ThinPro OS. While similar types of devices that work by booting from the USB stick have existed in the past, they once again take on new meaning and relevance in our current era.

All told, HP’s announcements reflect the continued diversity that exists in today’s market and highlight how many different, but essential, roles PCs continue to play. Couple that with the other PC-related announcements from this week and it’s clear that the category continues to innovate in a way that surprises us all.

Podcast: Tech Earnings from Facebook, Alphabet/Google, Microsoft, Amazon, Apple

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing this week’s big tech quarterly earnings reports from Facebook, Google’s parent company Alphabet, Microsoft, Amazon and Apple, with a focus on what the numbers mean for each of the companies individually and for the tech industry as a whole.

Google Anthos Extending Cloud Reach with Cisco, Amazon and Microsoft Connections

While it always sounds nice to talk about complete solutions that a single company can offer, in today's reality of multi-vendor IT environments, it's often better if everyone can play together. The strategy team over at Google Cloud seems to be particularly conscious of this principle lately and is working to extend the reach of GCP and their Anthos platform into more places.

Last week, Google made several announcements, including a partnership with Cisco that will better connect Cisco's software-defined wide area network (SD-WAN) tools with Google Cloud. Google also announced the production release of Anthos for Amazon's AWS and a preview release of Anthos for Microsoft's Azure cloud. These two new Anthos tools are applications/services for migrating cloud workloads between GCP and AWS or Azure, respectively, and for managing them.

The Cisco-Google partnership offering is officially called the Cisco SD-WAN Hub with Google Cloud. It provides a manageable private connection for applications all the way from an enterprise’s data center to the cloud. Many organizations use SD-WAN tools to manage the connections between branches of an office or other intra-company networks, but the new tools extend that reach to Google’s GCP cloud platform. What this means is that companies can see, manage, and measure the applications they share over SD-WAN connections from within their organizations all the way out to the cloud.

Specifically, the new connection fabric being put into place with this service (which is expected to be previewed at the end of this year) will allow companies to do things like maintain service-level agreements, compliance policies, security settings, and more for applications that reach into the cloud. Without this type of connectivity, companies have been limited to maintaining these services only for internal applications. In addition, the Cisco-powered connection gives companies the flexibility to put portions of an application in one location (for example, running AI/ML algorithms in the cloud), while running another portion, such as the business logic, on a private cloud, but managing them all through Google’s Anthos.

Given the growing interest and usage of hybrid cloud computing principles—where applications can be run both within local private clouds and in public cloud environments—these connection and management capabilities are critically important. In fact, according to the TECHnalysis Research Hybrid and Multi-Cloud study, roughly 86% of organizations that have any type of cloud computing efforts are running private clouds, and 83% are running hybrid clouds, highlighting the widespread use of these computing models and the strategically important need for this extended reach.

Of course, in addition to hybrid cloud, there's been a tremendous increase in both interest in and usage of multi-cloud computing, where companies leverage more than one cloud provider. In fact, according to the same study, 99% of organizations that leverage cloud computing use more than one public cloud provider. Appropriately enough, the other Anthos announcements from Google were focused on the ability to migrate and to manage cloud-based applications across multiple providers. Specifically, the company's Anthos for AWS allows companies to move existing workloads from Amazon Web Services to GCP (or the other way, if they prefer). Later this year, the production version of Anthos for Azure will bring the same capabilities to and from Microsoft's cloud platform.

While the theoretical concept of moving workloads back and forth across providers, based on things like pricing or capability changes, sounds interesting, realistically speaking, even Google doesn’t expect workload migration to be the primary focus of Anthos. Instead, just having the potential to make the move gives companies the ability to avoid getting locked into a single cloud provider.

More importantly, Anthos is designed to provide a single, consistent management backplane for an organization's cloud workloads, allowing them all to be managed from a single location—eventually, regardless of the public cloud platform on which they're running. In addition, like many other vendors, Google incorporates a number of technologies into Anthos that let companies modernize their applications. The ability to move applications running inside virtual machines into containers, for example, and then to leverage the Kubernetes-based container management technologies that Anthos is built on, is something that a number of organizations have been investigating.
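As a loose illustration of that single-management-plane idea (not Anthos itself, which layers policy, service mesh and much more on top), here is a sketch using the standard Kubernetes Python client to walk deployments across clusters registered in a local kubeconfig. The context names are hypothetical.

```python
# Illustrative only: enumerate workloads across clusters running on different
# clouds from one place. Context names are hypothetical kubeconfig entries.
from kubernetes import client, config

CONTEXTS = ["gke-prod-us-central1", "eks-prod-us-east-1"]  # placeholders

for ctx in CONTEXTS:
    api_client = config.new_client_from_config(context=ctx)
    apps = client.AppsV1Api(api_client)
    deployments = apps.list_deployment_for_all_namespaces()
    print(f"{ctx}: {len(deployments.items)} deployments")
    for d in deployments.items:
        ready = d.status.ready_replicas or 0
        print(f"  {d.metadata.namespace}/{d.metadata.name}: "
              f"{ready}/{d.spec.replicas} ready")
```

Because every conformant Kubernetes cluster exposes the same API, a single control point can reason about workloads wherever they physically run, which is the property Anthos builds its management and modernization story on.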

Ultimately, all of these efforts appear to be focused on making hybrid, multi-cloud computing efforts more readily accessible and more easily manageable for companies of all sizes. Industry discussions on these issues have been ongoing for years now, but efforts like these emphasize that they’re finally becoming real and that it takes the efforts of multiple vendors (or tools that work across multiple platforms) to make them happen.

Podcast: Intel Earnings, Magic Leap, WiFi6E, Arm-Based Mac

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell analyzing the earnings announcements from Intel and what they say about tech industry evolution, discussing the layoffs and repivoting of Magic Leap and what it says about the future of Augmented Reality, describing the importance of the new WiFi6E 6GHz extensions to WiFi, and chatting about the potential for an Arm processor-based future Mac.

Remote Access Solutions Getting Extended and Expanded

Now that we’re several weeks into work from home mandates and clearly still many weeks (and likely months) away from most people being able or willing to go back to their offices, companies are starting to extend and expand their remote access plans. Early on, most organizations had to focus their attention on the critical basics: making sure people had PCs they could work on, providing access to email, chat and video meetings, and enabling basic onramps to corporate networks and the resources they contain.

However, it’s become increasingly clear that the new normal of remote work is going to be here for quite some time, at least for some percentage of employees. As a result, IT organizations and vendors that want to support them are refocusing their efforts on providing safe, reliable remote access to all the same resources that would be available to their employees if they were working from their offices. In particular, there’s a need to get access to legacy applications, sensitive security-focused applications, or other software tools that run only within the walls of corporate data centers.

While there's little question that the pandemic and its aftermath will accelerate efforts to move more applications to the cloud and increase the usage of SaaS-based solutions, those changes won't happen overnight. Plus, depending on the company, as much as two-thirds of the applications that companies use to run their businesses may fall into the difficult-to-access legacy camp, so even sped-up efforts are going to take a while. Yes, small, medium, and large-sized organizations have been moving to the cloud for some time, and some younger businesses have been able to successfully move most of their computing resources and applications there. Collectively, however, there are still a huge number of non-cloud workloads that companies depend on and that can't be easily reached (or reached at all) by many employees outside the office.

Of course, there are several ways to solve the challenge of providing remote access to these and other types of difficult-to-reach tools. Many companies have used services like VPNs (virtual private networks), for example, to provide access to some of these kinds of critical applications for years. In most cases, however, those VPNs were intended for occasional use by a limited set of employees, not full-time use by all of them. In fact, there are stories of companies that quickly ran into license limitations with their VPN software providers when full-time use occurred.

Many other organizations are starting to redeploy technologies and concepts that some had written off as irrelevant or no longer necessary, including VDI (virtual desktop infrastructure) and thin clients. In a VDI environment—which, for the record, was going strong in places like health care facilities, financial institutions, government agencies and call centers even before the pandemic hit, and continues to do so—applications are run in virtualized sessions on servers and accessed remotely via dedicated thin client devices or on PCs that have been configured (or recommissioned) to run specialized client software. The beauty of the thin client computing model is that it is very secure, because thin clients don't have any local storage and all applications and data stay safe within the walls of the corporate data center or other hosted environment.

Companies like Citrix and VMware have been powering these types of remote access VDI computing solutions for decades now. Initially, much of the focus was around providing access to legacy applications that couldn't be easily ported to run on Windows-based PCs, but the basic concept of letting remote workers use critical internal applications, whether they are truly legacy or not, is proving to be extremely useful and timely in our current challenging work from home environment. Plus, these tools have evolved well beyond simply providing access to legacy applications. Citrix, in particular, has developed the concept of digital workspaces, sometimes referred to as Desktop as a Service, which integrates remote access to all types of data and applications, whether they're public cloud-based SaaS apps, private cloud-based tools, traditional on-premise applications or even mobile applications, into a single, secure unified workspace or desktop. (By the way, Desktop as a Service is not to be confused with the very similarly named Device as a Service, which entails a leasing-like acquisition and remote management of client devices. Unfortunately, both get shortened to DaaS.)

In addition to these approaches, we've started to see other vendors talk more about some of their remote access capabilities. Google, for example, just published a new blog post describing their BeyondCorp Remote Access offering, which enables internal web apps to be opened and run remotely in a browser. Though it's not a new product from Google—it's actually been available for several years—its capabilities have taken on new relevance in this extended work-from-home era. As a result, Google is talking more about the organizations that have deployed it, some best practices on how to leverage it, and more.

Most companies are probably going to need a combination of these and other types of remote access work tools to match the specific needs of their organizations. The simple fact is that disaster recovery and contingency plans are now everyday needs for many companies. As a result, IT organizations are going to have to shift into these modes for much longer periods of time than anyone could have anticipated. Though it's a challenging task, the good news is that there is a wealth of solid, established tools and technologies available to let companies adapt to the new normal and keep their organizations running this way for some time to come. Yes, adjustments will continue to be made, security issues and approaches have to be addressed, and situations will continue to change, but at least the opportunity is there to let people function in a reasonably meaningful way. That's something for which we can all be thankful.

Podcast: Apple Google Contact Tracing, iPhone SE, OnePlus 8, Samsung 10 Lite

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the surprising announcement from Apple and Google to work together on creating a smartphone-based system for tracking those who have been exposed to people with COVID-19, and discussing the launch of several new moderately priced smartphones and what they mean to the overall smartphone market.

Apple Google Contact Tracing Effort Raises Fascinating New Questions

In a move that caught many off guard—in part because of its release on the notoriously slow news day of Good Friday—Apple and Google announced an effort to create a standardized means of sharing information about the spread of the COVID-19 virus. Utilizing the Bluetooth Low Energy (LE) technology that’s been built into smartphones for the last 6 or 7 years and some clever mechanisms for anonymizing the data, the companies are working on building a standard API (application programming interface) that can be used to inform people if they’ve come into contact with someone who’s tested positive for the virus.
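To give a flavor of the anonymization idea, here is a deliberately simplified toy, not the actual Apple/Google specification: a device derives short, frequently rotating identifiers from a secret key that never leaves the phone, and only those rotating identifiers are ever broadcast over Bluetooth LE.

```python
# Toy sketch only -- NOT the Apple/Google protocol. It simply illustrates how
# rotating, unlinkable identifiers can be derived from a key that stays on the
# device, so matching is only possible if that key is later shared.
import hashlib
import hmac
import os

daily_key = os.urandom(16)  # secret; never broadcast

def rolling_identifier(key: bytes, interval: int) -> bytes:
    """Short identifier for one time interval (e.g. a 10-15 minute window)."""
    mac = hmac.new(key, f"interval-{interval}".encode(), hashlib.sha256)
    return mac.digest()[:16]  # this truncated value is what gets advertised

# The phone rotates what it advertises as time passes:
print([rolling_identifier(daily_key, i).hex() for i in range(3)])
```

Phones record the identifiers they observe nearby; if someone later tests positive and consents to share their keys, other devices can re-derive the identifiers locally and check for matches without any central database of locations or identities.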

Initially those efforts will require people to download and enable specialized applications from known health care providers, but eventually the two companies plan to embed this capability directly into their respective mobile phone operating systems: iOS and Android.

Numerous articles have already been written about some of the technical details of how it works, and the companies themselves have put together a relatively simple explanation of the process. Rather than focusing on those details, however, I’ve been thinking more about the second-order impacts from such a move and what they have to say about the state of technology in our lives.

First, it’s amazing to think how far-reaching and impactful an effort like this could prove to be. While it may be somewhat obvious on one hand, it’s also easy to forget how widespread and common these technologies have become. In an era when it’s often difficult to get coordinated efforts within a single country (or even state), with one decisive step, these two tech industry titans are working to put together a potential solution that could work for most of the world. (Roughly half the world’s population owns a smartphone that runs one of these OS’s and a large percentage of people who don’t have one likely live with others who do. That’s incredible.)

With a few notable exceptions, tech industry developments essentially ignore country boundaries and have become global in nature right before our eyes. At times like this, that’s a profoundly powerful position to be in—and a strong reason to hope that, despite potential difficulties, the effort is a success. Of course, because of that reach and power, it also wouldn’t be terribly surprising to see some governments raise concerns about these advancements as they are further developed and as the potential extent of their influence becomes more apparent. Ultimately, however, while there has been discussion in the past of the potential good that technology can bring to the world, this combined effort could prove to be an actual life and death example of that good.

Unfortunately, some of the concerns regarding security, privacy, and control that have been raised about this new effort also highlight one of the starkest examples of what the potential misuse of widespread technology could do. And this is where some of the biggest questions about this project are centered. Even people who understand that the best of intentions are at play also know that concerns about data manipulation, creating false hopes (or fears), and much more are certainly valid when you start talking about putting so many people’s lives and personal health data under this level of technical control and scrutiny.

While there are no easy answers to these types of questions, one positive outcome that I certainly hope to see as a result of this effort is enhanced scrutiny of any kind of personal tracking technologies, particularly those focused on location tracking. Many of these location-based or application-driven efforts to harvest data on what we’re doing, what we’re reading, where we’re going, and so on—most all of which are done for the absurdly unimportant task of “personalizing” advertisements—have already gotten way out of hand. In fact, it felt like many of these technologies were just starting to see some real push back as the pandemic hit.

Let’s hope that as more people get smarter about the type of tracking efforts that really do matter and can potentially impact people’s lives in a positive way, we’ll see much more scrutiny of these other unimportant tracking efforts. In fact, with any luck there will be much more concentrated efforts to roll back or, even better, completely ban these hidden, little understood and yet incredibly invasive technologies and the mountains of data they create. As it is, they have existed for far too long. The more light that can be shone into these darker sides of technology abuse, the more outrage it will undoubtedly cause, which should ultimately force change.

Finally, on a very different note, I am quite curious to see how this combined Apple Google effort could end up impacting the overall view of Google. While Apple is generally seen to be a trustworthy company, many people still harbor concerns around trusting Google because of some of the data collection policies (as well as ad targeting efforts) that the company has utilized in the past. If Google handles these efforts well—and uses the opportunity to become more forthright about its other data handling endeavors—I believe they could gain a great deal of trust back from many consumers. They’ve certainly started making efforts in that regard, so I hope they can use this experience to do even more.

Of course, if the overall efficacy of this joint effort doesn’t prove to be as useful or beneficial as the theory of it certainly sounds—and numerous concerns are already being raised—none of these second-order impacts will matter much. I am hopeful, however, that progress can be made, not only for the ongoing process of managing people’s health and information regarding the COVID-19 pandemic, but for how technology can be smartly leveraged in powerful and far-reaching ways.

Podcast: The Global Semiconductor Market

This week’s Techpinions podcast features Ben Bajarin, Mario Morales of IDC, and Bob O’Donnell discussing the state of the global semiconductor market and how the COVID-19 pandemic is impacting major chip and end device companies and the tech industry overall.

Here’s a link to the IDC Semiconductor market forecast that Mario discussed on the podcast: https://www.idc.com/getdoc.jsp?containerId=US46155720

Need for Multiple Video Platforms Becoming Apparent

Like most of you, I’ve been doing more than my fair share of video calls lately and feel relatively certain that the practice will continue for some time to come—even when life beyond the COVID-19 pandemic starts to return to normal. As we’ve all learned from the experience, in the proper amount and for the proper length, they can be a very effective form of communication. Plus, as many have discussed and promised for years, they do give us the flexibility to work from many different locations and, for certain types of events, can reduce the time, costs, and hassles of travel.

That’s not to say, however, that they are a cure all. As we’ve also all learned, there are definitely limitations to what can be achieved via video calls and sometimes things just get, well, awkward.

For people who don’t work at large organizations that have standardized on a single videoconferencing platform, another challenge is the need to work with, install, and learn multiple different apps. As someone who talks to lots of different companies, I can safely say that I’m pretty sure I’ve used virtually every major videoconferencing option that’s out there over the last few weeks: Microsoft Teams, Cisco Webex, Google Hangouts/Meet, Skype, GoToMeeting, Blue Jeans, and of course, Zoom.

Initially, I admit, I was frustrated by the lack of standards across the different tools and wondered if it wouldn't make more sense to just have a single platform, or at least a primary one that could serve as the default. As time has gone on, however, I've come to realize that my initial thinking was lacking a certain amount of insight. As unintuitive as it may first sound, there actually is a great deal of sense to having multiple videoconferencing apps and platforms.

To be clear, there’s definitely work that could be done to enable and/or improve the interoperability across some of these platforms–even if it’s nothing more than allowing the creation of a high-level log-in tool that could manage underlying audio and video connections to the various platforms. However, just as choice and competition in other categories ends up creating better products for everyone, the same is true with videoconferencing tools—for many different reasons.

First, as we’ve certainly started to see and learn from much of the Zoom fallout that’s started to occur, things can get ugly if too many people start to over-rely on a single platform. Not only is there the potential for reliability concerns—even on a cloud-based platform—but a platform that gets too much attention is bound to become a tempting target for hackers and other troublemakers. Stories of “Zoombombing” and other related intrusions have grown so commonplace that the FBI is even starting to investigate. Plus, nearly every day, it seems, there’s news of yet another large organization moving away from or forbidding the use of Zoom.

To the company’s credit, much of the attention and the continuing strong usage of Zoom is because they took the often awkward, painful, and unreliable process of connecting multiple people from multiple locations into a functioning video call and made it easy. For many people and some organizations, that was good enough, and thankfully, we’re starting to see other videoconferencing platforms improve these critical basics as a competitive response. That’s a win for everyone.

However, it’s also become increasingly clear that Zoom wasn’t nearly as focused on security and privacy as many people and organizations thought they were and as they should have been. From questions about encryption, to publicly accessible recordings of private calls, the routing of US calls through Chinese servers, and much more, Zoom is facing a reckoning on some of the choices they’ve made.

Other videoconferencing platforms, including Webex and GoToMeeting, have been focused on privacy and security for some time—unfortunately, sometimes at the expense of ease-of-use—but it's clear that many organizations are starting to look at other alternatives that are a better match for their security needs. Microsoft, to its credit, has made security an essential part of its relatively new Teams platform.

But even beyond the obvious critical security needs, it’s clear, in using the various videoconferencing tools, that some are better suited for different types of meetings than others. The mechanisms for sharing and annotating files, for example, take different forms among different tools. In addition, some tools have better capabilities for working within the structure of a defined multi-part meeting, such as a virtual event.

The bottom line is, it's very difficult to find a single tool that can work for all types of meetings, all types of leaders, or even all types of company cultures. Meetings can vary tremendously across companies or even across groups within companies, so it isn't realistic to think that a single platform is going to meet everyone's virtual meeting needs. Choice and focus continue to be important and will likely lead many organizations to adopt several different videoconferencing tools for different meeting needs.

And let’s not forget, we won’t be doing this many video meetings for ever. While there’s little doubt that we’ll all be doing more video meetings post-pandemic than we were doing pre-pandemic, the overall number of video meetings will go down from current levels for most people. In fact, once things get back to normal, I think people are actually going to look forward to face-to-face meetings¬—despite the frustrations they often create. We’ll all just be a lot more sensitive to what types of things work in video meetings and what’s better live. That’s an improvement I think we can all look forward to.

Podcast: Microsoft 365, T-Mobile-Sprint, Network Reliability

This week’s Techpinions podcast features Carolina Milanesi, Mark Lowenstein and Bob O’Donnell discussing the release of Microsoft’s new Microsoft 365 subscription service, analyzing the impact of the T-Mobile-Sprint merger on the US telecom market and the rollout of 5G, and chatting about overall broadband and mobile network reliability during the COVID19 crisis.

Microsoft 365 Shift Demonstrates Evolution of Cloud-Based Services

If there’s one piece of software that has held up remarkably well over several decades, it’s Microsoft’s Office suite of productivity apps. From business to personal life, the applications in Office have proven their value time and time again to people all over the world. Perhaps because of that, Microsoft has used Office as a means to push forward the definition of what software is, how it should be delivered, how it should be sold, what platforms it should run on, and much more over the last decade or so.

In June of 2011, for example, the company officially unveiled Office 365, which provided access to all the same applications in the regular suite but in a subscription-like "service" form that was delivered (and updated) via the internet. Since then, the company has added new features and functions to the service, made it available on mobile platforms such as Android and iOS, in addition to Windows and macOS, and generally used it as a means to expand how people think about the applications they use on a regular basis. In the process, Microsoft has made many people comfortable with the idea of cloud-based software becoming a cloud-based service.

Yesterday, the company took the next step in the evolution of the product and renamed both the consumer and the small and medium business (SMB) versions of Office 365 to Microsoft 365—a change that will take effect on April 21. The name change is obviously a subtle one, but beyond the title, the changes run much deeper. Specifically, the new brand reflects how the set of applications that make up the company's popular subscription-based offering is evolving. It also reflects how the company itself is changing.

In the case of the SMB versions of Microsoft 365, the name change is simply a branding one, which better reflects that the service includes more than just basic office productivity, particularly with the Teams collaboration tools and service. For the new consumer-oriented Personal and Family versions of Microsoft 365, the changes are more extensive.

Notably, the consumer versions of Microsoft 365 include the addition of several new applications, a number of AI-powered intelligent enhancements to existing applications and—in an important first for Microsoft—some mobile-first advancements. The new version of the Microsoft Editor function works across Word, Outlook.com, and the web, and is essentially a Grammarly competitor that moves beyond simple spelling and grammar checking to making AI-powered rewriting suggestions, helping avoid plagiarism, and more.

The AI-based Designer feature in PowerPoint—which I have found to be incredibly useful—has been enhanced in this latest version of Microsoft 365 to support a wider array of content that it can "beautify" and includes support for a greatly expanded library of supplementary graphics, videos, fonts and templates.

The biggest change to Excel is the forthcoming addition of Money in Excel, an add-in that gives it Quicken-like money and account management features. In addition, working in conjunction with Wolfram Alpha, Microsoft is adding support for over 100 new "smart" data types that make it significantly easier to track everything from calories to travel locations and more. In essence, it provides the type of intelligence that people may have expected computing devices and applications to have all along.

The addition of both Teams (for Consumers) and Family Safety is interesting because of the capabilities they bring to the service and because both will launch first on mobile OSes—Android and iOS. Microsoft has had mobile versions of its main productivity suite apps, as well as its OneDrive storage service, for a while now, but this Microsoft 365 launch marks the first time the company will debut new apps in mobile form. On the one hand, the move is logical and not terribly surprising given how much people use their mobile devices today—particularly for communications and tracking, which are the core functions of Teams and Family Safety respectively. Nevertheless, it's still noteworthy, because it does show how Microsoft has been able to pivot on its typical "deliver on PC first" strategy and keep itself as relevant as possible.

In the case of Teams, the company isn’t replicating its Enterprise version, but instead has developed a consumer-focused edition that allows for real-time chats, document sharing, creating and tracking lists, and more in a manner that should make sense for most consumers. Family Safety is completely new and allows parents to provide limits and controls on digital device usage and content, as well as track the physical location and even driving of other family members. Importantly, Microsoft made the point to say that it’s doing all these things without sharing (or certainly not selling) any of this information to auto insurance companies, advertisers or any other companies. While the company would have undoubtedly created a bit of an outcry if it did any of that, it was still reassuring to hear a big tech vendor emphasize these privacy and security-focused concerns. Let’s hope all major tech vendors follow suit.

Speaking of privacy and security, Microsoft took the opportunity with its Microsoft 365 launch announcement to also unveil the latest version of Microsoft Edge, the company's significantly improved browser. In addition to several convenience-based features, such as the addition of vertical tabs, smart copying from web pages, and the ability to easily create portable "collections" of content from web-based sources, the company debuted some important privacy features as well. Password Monitor, for example, can automatically track whether any of your logins are available on the dark web and encourage you to change your passwords on sites where that may have occurred. Given the huge number of security breaches and data exposures that have impacted almost all of us at this point, this could prove to be an incredibly valuable new feature. In addition, the company added refined tracking controls that allow you to set the amount of information you are willing to share with other websites as a result of your browsing sessions.

All told, it was a pretty impressive set of announcements that highlights how Microsoft has managed to continue adjusting its strategies to match the changing needs of the market and its customers. Of course, many consumers will still be content using the free versions of the basic Office applications and services that Microsoft will continue to make available even after April 21. However, the functionality that the company has built into its new Microsoft 365 Personal and Family offerings will be compelling enough for many to make the switch, and the success that the Office suite of applications has enjoyed for so long will continue with the new Microsoft 365.

Some “Lighter Side” Headlines For the Coronavirus Era

For most of us, the past couple of weeks have been 'all coronavirus all the time', whether personally, professionally, or financially. I just looked at my work e-mail inbox, and 80% of messages have COVID-19 in the subject line. So it's time to take a break from adding to the chorus of bleak forecasts or recommendations on what XYZ company should do. With April Fools' Day approaching, here's an attempt at a humorous take on some of the headlines we might see coming out of the tech and telecom worlds over the next couple of weeks.

WeWork To Change Its Name to iWork. With nearly all co-working places closed worldwide and the massive shift to working remotely, the company has been advised that anything connected to the concept of 'We' will take an even bigger chunk out of WeWork's ongoing valuation freefall. Adam Neumann, the company's beleaguered former CEO, has suggested that the stock symbol for its future IPO should be just "I", rather than "WE".

Apple Temporarily Disabling Screen Time Feature. Nearly two years ago, Apple introduced a feature called Screen Time, to help customers take control over how much time they’re spending on their various iDevices. But with the kids at home for an extended period and parents also pulling their hair out, Apple has decided it’s better for everyone’s mental health if they just simply don’t see that their usage across all screens roughly mirrors the hockey-stick-like surge in confirmed coronavirus cases.

New TV Show To Debut on Netflix: “Billionaire Island”. Part ‘Survivor’ and part ‘Hunger Games’, this new series puts 12 tech billionaires infected with coronavirus on an island, without any access to PPEs or ventilators. Bernie Sanders and Elizabeth Warren have signed on as advisors to the show, and, as one would expect, they’ve designed some interesting challenges and plot twists.

Secret Emerges in DISH Offer to Lend Spectrum to T-Mobile. DISH's offer to let T-Mobile use a chunk of its 600 MHz spectrum for free in order to help meet network capacity demands in the coming weeks is laudable. Many were surprised, however, since DISH has been hoarding its spectrum for years. We've since learned that in a break during the negotiations between DISH and T-Mobile that helped get the Sprint deal done, John Legere challenged Charlie Ergen (a famous poker player) to a game of Texas Hold 'Em, using spectrum, rather than dollars, as currency. The evening ended with Legere up about 30 MHz.

Alexa To Be Used to Help Identify Those With Coronavirus. The shortage of test kits for the coronavirus continues to be a serious problem. But tech companies are stepping up. In his news conference yesterday, President Trump announced a new arrangement with Amazon, whereby anyone who says “I might have coronavirus” within earshot of Alexa will be entered into a national database, as a first step in determining who should get tested. The president added “If  Amazon does a good job, we might take another look at that JEDI contract”.

Fund Managers Start Shorting Netflix Stock. This might seem like a counterintuitive move, since it appears that we’re all homebound for the long haul. However, it’s possible that we’ll have cycled through pretty much everything Netflix has to offer — something that until recently seemed mathematically impossible — which could lead to significant numbers of subscribers dumping Netflix once they’re actually allowed to go.out.again.

Airlines Will Make a Huge Comeback…With $1,000 Bag Fees. Airlines are the good cop/bad cop of corporate America. During the past ten flush years, they’ve done nearly everything possible to make the flying experience less pleasant. Now, they’re being all nicey-nicey, waiving change fees, cancel fees, and baiting you with $30 fares to Hawaii. But when flying returns, you can bet that they’ll be looking at all sorts of creative ways to recoup their losses. We hear they’re considering $1,000 bag fees, surcharges for crying children, and huge fines for anyone boarding a plane with so much as a runny nose.

Facebook to Relaunch Facebook Dating as A Virtual Service. Facebook received a huge amount of bad press when it launched the Facebook Dating app not long after CEO Mark Zuckerberg was summoned to testify in front of Congress about customer privacy concerns. Predictably, the app has not been a standout success. However, we hear that Facebook has pivoted and is re-casting Facebook Dating for the coronavirus era. In a press release, the company said, “Our whole company was built on the basis of people virtually, rather than actually, interacting. Facebook Dating is the logical extension of that, and is perfect for these times, since people can only virtually, and not actually, date”.

Two New Coronavirus-Era Reality Shows Being Rushed to Market:

  • ‘Zoom Bloopers’, Hulu. Next week will mark the debut of the show on Hulu. Zoom users — who now equal about 100% of the U.S. population — will be asked to send in videos of their favorite Zoom fails and embarrassing moments. Categories include: Most Embarrassing Unmute Moment; Worst Audio of the Week; Most Distracting Behavior by a Child During a Work-From-Home Zoom Call;  Best Example of Someone Getting Frustrated Learning How the F___ To Use This Thing;  and You Shoulda Left the Camera Off.
  • ‘Work From Home War Stories’, Netflix. Hundreds of millions of people worldwide who have never worked from home are now getting adjusted to this new reality. Well, this sure works better for some than others. Sketches for the first couple of episodes include: WFH Will Ruin My Marriage; Odd Spaces for Home Offices; Sweatpants Are The New ‘Business Casual’; One Thousand Ways to Distract Your Kids So You Can Actually Work For Ten Minutes; These Types of Classes Just Don’t Work For Online Learning; and Foods Not To Eat While On a Conference Call. People will be able to nominate their colleagues for a special award, to be given at the end of each episode: Least Effective Home Worker Of The Week

Stay safe, stay healthy, and stay sane.

The Time for Pragmatism in Tech is Now

The tech industry has always prided itself—and for good reason—on describing and building products, services, and even business models that look to the future. In fact, the technologies behind many of today’s advances are arguably helping define our future. Because of that, it’s become quite normal to think and talk about these developments as having to unfold over the course of several years before their true impact can be accurately measured.

But the COVID-19 crisis is focusing a dramatically different lens on many of these efforts and forcing companies to think (and act) on completely different timelines. It’s also getting people to think differently about what technology products can and can’t do for them, which is leading to some important reassessments of what really matters as well as what’s truly useful and what isn’t. Frankly, in many instances, it’s a rethinking that’s been overdue.

Reassessing and/or revising expectations has some potentially profound implications for tech companies, which can then smartly recognize ways they can shift both their messaging and even their product strategies. It also opens up some interesting opportunities to make meaningful improvements in existing products. Last, but certainly not least, it also provides an incredible opportunity for at least some portion of the tech industry to turn the increasingly negative narrative about big tech around and to reposition the tech industry as a beneficent force that can help improve our society and our world.

Thankfully, the manifestations of these new approaches are already starting to happen in both big ways and small. T-Mobile, for example, quickly got the FCC to give its approval for what's called Temporary Spectrum Access to increase the available bandwidth it had at 600 MHz—which the company uses for both 4G and 5G service—by essentially "borrowing" unused spectrum from Dish and Comcast. Because T-Mobile had already built up a good part of its network infrastructure for its 5G deployment, it was able to move much more quickly than it otherwise would have been able to. In addition, the company followed up this week by also launching a new low-cost ($15/month) plan sooner than originally planned. For their part, both AT&T and Verizon also joined in the FCC's Keep Americans Connected Pledge and made similar efforts of their own to increase available bandwidth, remove data caps for broadband services, pledge not to turn off connectivity plans due to financial hardship caused by the crisis, and more.

Collectively, these quick efforts showed the telecom industry as a whole to be very responsive and sensitive to the issues at hand, all of which should certainly go a long way in improving consumers’ perception of them. Throw in the fact that, as of now, the critical telecom and data delivery infrastructure has held up remarkably well given the huge increase in traffic it’s had to deal with from the many people working and living exclusively at home, and it’s arguably been an impressive week or two for the telecom industry.

Yet another interesting example and set of data comes from Cisco, whose equipment powers large segments of these infrastructure networks. On a call with Cisco executives and CEO Chuck Robbins, the company talked about having to approach these network loads in entirely different ways than they had in the past. Rather than taking a more systematic approach to problem solving, they freely discussed having to make adjustments in real time—a clearly different approach from what they'd done in the past, and yet, based on what we've been experiencing, a successful one.

Not surprisingly, the Cisco execs also discussed the incredibly robust demand they've seen for their networking products—every company is looking closely at its bandwidth—as well as the enormous traffic increase (up to 24x) that they've seen for their Webex videoconferencing and remote collaboration services. Clearly, these are things that companies need immediately, so Cisco's ability to adjust its own networks on the fly to meet these huge demands speaks volumes about the pragmatic approach the company is taking to address these issues. One interesting side note from the Cisco call was that the vast majority of Webex client software downloads were for PCs rather than smartphones, once again highlighting the real-world role that PCs (laptops in particular) continue to play.

In a different and yet thematically related development, IBM, along with a number of government labs and technology partners like HPE, made the decision to open up access to many of the world's fastest and most powerful supercomputers to scientists who are working to battle the virus. It was a smart, fast, pragmatic decision that serves an incredibly important cause and highlights, in a very public way, the efforts that IBM is making to assist in whatever way it can.

Of course, many other tech companies also announced their own efforts to address some of the concerns that the COVID-19 pandemic has created. In fact, as a long-time industry observer, it was very encouraging and even heartwarming to see how much concern the tech industry was displaying. While it may prove to be short-lived, there also seems to be much more willingness for companies to consider partnering with each other to help create new solutions that, in otherwise normal times, might not happen.

Even with these efforts to provide quick benefits, however, the new "normal" has made it clear that much work still needs to be done, particularly in making some tech products and services easier to use. Case in point: given the huge increase in video calls that I and most other people are now experiencing, it's easy to find instances in applications like videoconferencing that need to be improved—and quickly. If you've ever suffered through trying to troubleshoot your audio and video connections for these calls, for example (and let's be honest, who hasn't), then you understand the need. Something as obvious as having a button on the main page of an online service or in the launch screen of a videoconferencing app to let you test your connection (or, even better, to use some kind of AI or other software intelligence to fix it automatically), without having to log in to an account or find the buried preference settings, seems like a very easy thing to do, yet it's just not there. These are the kinds of small pragmatic differences that companies should also be thinking about.

To be clear, the more pragmatic approach to creating, marketing, and even selling tech products that the COVID-19 pandemic is forcing upon us doesn't have to come completely at the expense of forward-looking technology advances. The R&D-focused efforts within the tech industry that are enabling things like quantum computing, or the latest neuromorphic chips that Intel recently unveiled, remain an absolutely essential and defining part of the business. The difference now, and likely into the foreseeable future, is really more one of focus and emphasis. Companies need to look much harder at the types of changes they can make here and now to both existing and upcoming products. I'd argue that the tech industry had gone a little too far down the path of promising long-term revolutions without thinking enough about short-term implications. If nothing else, I expect that one of the more important outcomes that will linger on after we pass this crisis will be more attention to what kinds of ideas, products, and services make a difference in the near term—not just in some far-off "vision" for where things might go.

Of course, it’s also important to remember that necessity is the mother of invention, and there are likely few times in recorded history when the necessity of thinking and acting differently has been more urgent. As a result, an even more important silver lining from our current crisis is that we will soon start to see and enjoy the inventive benefits of many of the most brilliant minds in the world who are spending their time thinking, from a present-focused pragmatic perspective, about how to solve many types of tech-related problems both big and small. It’s not clear when, how, or in what exact form those innovations will appear, but I have absolutely no doubt that they will arrive and that we will all benefit from them.

Podcast: Apple Product Launch, Sony PS5 and Microsoft Xbox X, Intel Neuromorphic Chip

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing Apple’s surprise launch of their new iPad Pro, Magic Keyboard, and updated Macbook Air and Mac Mini, discussing the product spec reveal this week of the forthcoming AMD-powered PS5 and Xbox X gaming consoles, and chatting about the innovations enabled with Intel’s latest neuromorphic chip.

The Value of Contingencies and Remote Collaboration

The stark realities that we’re all facing from the COVID-19 pandemic unfortunately won’t be going away any time soon. The simple truth is that life is just going to be different for a while. Let’s hope that the extraordinary efforts that companies and people are taking to minimize the spread of the virus prove to be effective sooner rather than later.

In the meantime, however, it’s clearly time to settle into new modes of working, with technology obviously playing a key role. Work at home numbers are going to shoot up tremendously, for example, and many people are about to get a crash course in things that work well—and things that don’t—in that environment. (By the way, if you’re looking for some advice and pointers on the subject based on years of experience, check out the latest Techpinions podcast: https://techpinions.com/podcast-working-from-home/59461)

In addition, many companies are going to have to start up whatever contingency and emergency plans they have in place. The speed at which events are occurring and situations are shifting is undoubtedly catching even the most well-prepared organizations off-guard to some degree. Once things start to settle down, however, the critical importance and value of technology-enabled contingency plans should start to become very obvious.

Unfortunately, there are likely several companies that didn't have those types of plans in place. In addition, there may be an even larger number that had a basic plan in place but didn't take it to the level that our current situation demands. (To be fair, it would have been hard for anybody to really predict the speed and depth of impact that COVID-19 has created.) The challenge for these organizations will be to quickly put together plans that can help them adapt in the best way possible to the new environment. In fact, I have no doubt that's exactly what a lot of IT professionals are in the process of doing as we speak.

The good news is that we now have access to an amazing array of different technology-based options to help address at least some of the challenges organizations are going to be facing. Additionally, thanks to a series of encouraging announcements, a wide range of tech companies, carriers and others are pitching in to make their services free or to lift data caps in order to ease the potential limitations of network bandwidth.

From high-quality videoconferencing solutions, to fast, reliable broadband networks, to mature cloud-based collaboration software tools, the tech industry has never had a wider range of tools to help ease the process of working at home (or remotely). In fact, once we get past all this, there’s little doubt that we’ll look back at these next few months as being a defining moment for remote collaboration. The extensive use of these tools is also going to be an incredibly valuable real-time experiment that will clearly expose the real advantages (and challenges) of existing tools. Hopefully, these next few weeks will also quickly lead to tweaks and adjustments that make them easier and more reliable. If these tools do perform well, they could end up becoming significantly more important in the average worker’s arsenal even beyond this crisis. Of course, if they don’t work well for many, expect to see some serious pushback against them.

In addition to these basic remote work enablement capabilities, there are a number of more nuanced factors that are going to come into play, particularly as time goes on. Even if companies have the basic tools they need to enable collaboration, for example, what level of control or management do they have over the devices or the networks being used to do that work? Those are details that can get overlooked in basic contingency plans but need to be a factor for longer-term emergency plans that, hopefully, every company is now creating, if they haven’t already.

If we learn nothing else from this crisis, it should be abundantly clear to all that the need for creating plans that allow business continuity in even the most challenging of situations is absolutely essential. There should be little doubt that aggressively leveraging the new types of remote connectivity and collaboration tools now available needs to be a critical part of those plans.

Another Remote Work Take Or Remote Work Does Not Suck

I am sure that by now you have had enough of people pitching remote work and how the future will change because of what we are all experiencing due to COVID-19. I already wrote about how frustrating it is that it takes a pandemic, now an official one, to look at how far tech has come to empower remote work. I also warned about the need for companies to take seriously the cultural change that needs to occur to leverage remote work after this crisis is over.

So why am I writing about this again? I read an article earlier this week that really struck a nerve because of the many generalizations the author made about who benefits from remote work and how remote work negatively impacts creativity.

In my career, I have worked both from an office and remotely. I have done so both in the UK and in the US. My experience is my own. We are all a little different, the work we do might be different, and the companies we work for are also different, making each situation almost unique. So I will try my best not to succumb to generalization just to prove a point different from the one expressed in the article: that remote work does not have to suck.

Who Can Benefit From Working Remotely?

The article calls out new parents as a group that can benefit from working remotely. When I had my daughter, I was still in the UK and I was already working remotely. Those first few months were the hardest I have ever had as a remote worker, so I am not sure it was quite a benefit. If you are a mother and you are breastfeeding, working from home allows demand and supply to be in the same location, which certainly simplifies things. Yet trying to adjust to being a new parent while working all under the same roof makes boundaries much harder. As breastfeeding did not last long for me, I opted to go into the office for a few hours a week as a way to create a separation between me as a mom and me as a businessperson. Generally, I would say it is not feasible to work from home while looking after a child of any age unless it is for a limited amount of time, like during an illness or a bad-weather day when one can temporarily rearrange calls and deadlines.

The other group the article suggests could benefit from working from home is "people with disabilities or others who aren't well-served by a traditional office set-up." I would think the hardest part for many in this group is commuting rather than actually being in the office. There is no question that cities could make it much easier to support people with disabilities when it comes to commuting. Often, as I battled through the London Underground during rush hour, I wondered how visually impaired people or people using a wheelchair dealt with the number of stairs and escalators, let alone the number of people.

Commuting is also taxing, of course not to the same extent, for people who do not have disabilities. The level of stress that commuting adds to our lives impacts both our physical and mental health. In 2017, a study developed by VitalityHealth, the University of Cambridge, RAND Europe and Mercer examined the impact of commuting on employee health and productivity for more than 34,000 workers across all UK industries. Commutes longer than 30 minutes appeared to have a negative effect on mental wellbeing: workers with longer commutes were 33% more likely to suffer from depression, 12% more likely to report multiple aspects of work-related stress, 46% more likely to get less than the recommended seven hours of sleep each night and 21% more likely to be obese.

There are other groups who I think could take advantage of working remotely: people who do not live in areas where work opportunities are plentiful. For some of these people, moving to look for a job might mean not being able to afford a decent home or leaving behind the family support that could help with caring for their kids. It could also mean they are the ones unable to care for family members, for instance.

Remote working might also result in a more diverse workforce. Companies that do not limit their talent sourcing to the cities and counties where they have offices might find it easier to attract talent from different ethnic backgrounds. Take the tech sector and San Francisco as an example: in 1970, the Black population represented around 13% of the city's total population, and by 2018 that number was down to less than 6%. How can tech improve diversity if it is fighting against decreasing numbers of available candidates? And how can these companies attract diverse employees when relocating often means not having a community they can belong to?

Productivity and Creativity

I am not sure one could ever settle the discussion on remote work productivity, and the jokes I have seen on Twitter over the past week are really not helping. There is this fantasy that working from home means you are less productive because you are easily distracted by family, roommates, pets, delivery people, the TV and apparently whatever else is in your home. While a little self-discipline is required, the distractions are only different, not necessarily worse, than what an open office can offer. Those who argue for higher productivity often mention the lack of commute time, which can affect how present and relaxed one is by the time they sit at their desk but might not necessarily result in more hours spent working.

On creativity, the author is clear that working from home kills your creativity because of the lack of stimuli; he even quotes Steve Jobs about how staring at email does not help. But who stares at email now? The reality is that with today's technology, you can brainstorm, collaborate and connect in so many different ways and have a quality experience. Being in the office does not necessarily guarantee you are where your team members are, especially if you are working with international teams. The chances of those casual conversations by the micro-kitchen always being about work are also pretty slim. Not that my colleague Ben Bajarin and I should be used as an example, but we are rarely in the same place unless we are traveling. We both work from home and have our best brainstorming sessions over chat or iMessage. I am also more connected to Ben than I have ever been to most of the people I saw every day at my old office.

Working Remotely Does Not Mean Being Alone

The current circumstances are, of course, very unique as we practice social distancing, but in general, working remotely does not mean being alone. The more people on the team are remote, the more the company will have a culture of inclusion and the less you will feel like the odd one out. Just this past week, seeing most of the people I had meetings with working from their home offices, rather than all of them sitting around one conference table while I dialed in from home, made a big difference in how much we all felt we belonged equally.

The more important point, though, is that remote work gives you the flexibility to fit in a workout, run an errand over your lunch break, catch up with a colleague or a client over coffee, and the list goes on. Yes, the jokes about never taking your pajamas off might have some truth for the first couple of days, and there are some people for whom remote work could lead to depression induced by isolation, which is no joking matter. Still, with a little proactiveness in setting up human contact and technology that improves telepresence, I think most people would find it quiet but not lonely.


So, do I think that remote work sucks? Absolutely not. Do I believe that remote work is for everybody? Of course not. If you are new to remote work and you want to see if it might be something you want to continue doing once the emergency is over, give it some thought. Evaluate the pros and cons, look at the technology, both devices and software, that might help you improve the quality of your experience – this is the time to ask for what you need – and try to get into a routine.

AMD Highlights Path to the Future

After a gangbuster performance on the stock market for the last several years, AMD, its CEO Dr. Lisa Su, and its executive leadership team have been under the glare of a lot of media attention recently. Despite the apparent pressure, however, the company keeps coming out swinging and the announcements from last week’s Financial Analyst Day indicate that AMD is showing no signs of letting up.

In fact, the key takeaway from the event was that the company leadership—and apparently many of the financial analysts who attended—now have even more confidence in the business' future. (The company was even willing to reiterate its guidance for the first quarter, which, given the impact of the coronavirus on many of its customers and the tech industry as a whole, was an impressively optimistic statement.)

As a long-time company observer, what particularly stood out to me was that the company has now built up a several-year history of achieving some fairly grand plans based on big decisions it made 4-5 years back. In the past, previous AMD leadership also talked about big ideas but, frankly, wasn't able to deliver on them. The key difference with the current leadership team is that they are now able to execute on those ideas. As a result, the credibility of their forward-looking plans has gone up significantly.

And what plans they are. The company made a number of important announcements about its future product strategies and roadmaps at the event, almost all of which were targeted at high-performance computing, for both CPUs and GPUs. On the GPU roadmap, a particularly noteworthy development was the introduction of a new datacenter-focused GPU architecture named CDNA ("C" for Compute)—an obvious link to the RDNA architecture currently used for PC and gaming console-focused GPU designs. Full details around CDNA and specific Radeon Instinct GPUs based on it are still to come, but the company is clearly focusing on the machine learning, AI, and other datacenter-focused workloads that its primary competitor Nvidia has been targeting for the last several years. One key point the company made is that the second- and third-generation CDNA-based GPUs would leverage the company's Infinity interconnect architecture, allowing future CPUs and GPUs to share memory in a truly heterogeneous computing environment, as well as providing a way for multiple GPU "chiplets" to connect with one another. The company even talked about offering software that would convert existing CUDA code (which Nvidia uses for its data center GPUs) into platform-agnostic HIP code that would run on these new CDNA-based GPUs.
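
To make that porting claim a bit more concrete, here is a minimal, illustrative CUDA sketch of my own (not AMD or Nvidia sample code): a trivial vector-add whose comments note the HIP equivalents that a CUDA-to-HIP conversion pass would typically substitute, such as hipMalloc for cudaMalloc and the hip/hip_runtime.h header for cuda_runtime.h.

#include <cuda_runtime.h>   // in HIP: #include <hip/hip_runtime.h>
#include <cstdio>

// Trivial vector-add kernel; the kernel body itself is identical in CUDA and HIP.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    float *h_a = new float[n], *h_b = new float[n], *h_c = new float[n];
    for (int i = 0; i < n; ++i) { h_a[i] = 1.0f; h_b[i] = 2.0f; }

    // Device allocations and copies: cudaMalloc/cudaMemcpy become hipMalloc/hipMemcpy.
    float *d_a, *d_b, *d_c;
    cudaMalloc(&d_a, bytes);
    cudaMalloc(&d_b, bytes);
    cudaMalloc(&d_c, bytes);
    cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);  // hipMemcpyHostToDevice in HIP
    cudaMemcpy(d_b, h_b, bytes, cudaMemcpyHostToDevice);

    // The triple-angle-bracket launch syntax is also supported by HIP's C++ front end.
    vectorAdd<<<(n + 255) / 256, 256>>>(d_a, d_b, d_c, n);

    cudaMemcpy(h_c, d_c, bytes, cudaMemcpyDeviceToHost);   // hipMemcpyDeviceToHost in HIP
    printf("c[0] = %f (expect 3.0)\n", h_c[0]);

    cudaFree(d_a); cudaFree(d_b); cudaFree(d_c);           // hipFree in HIP
    delete[] h_a; delete[] h_b; delete[] h_c;
    return 0;
}

The point of the sketch is simply that much of the basic GPU runtime API maps almost one-to-one, which is why this kind of conversion can be framed as largely mechanical; code that leans on Nvidia-specific libraries or hardware features would, of course, require more work.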

AMD also talked about plans for future consumer-focused GPUs and discussed its next-generation RDNA2 technology and its Navi 2X chips, which are expected to offer hardware-accelerated support for ray tracing, as well as improvements in variable rate shading and overall performance per watt. Notably, the hardware ray tracing support is expected to be a common architecture between both PCs and gaming consoles (both the PlayStation 5 and next-generation Xbox are based on custom AMD GPU designs), so that should be an important advancement for game developers. The company also mentioned RDNA3, which is expected in the 2020-2021 timeframe and will be manufactured with what is described as an “Advanced Node.” Presumably that will be smaller than the 7nm production being used for current RDNA-based GPUs and those based on the forthcoming RDNA2 architecture.

Speaking of production, the company discussed how it intends to move forward aggressively, not only on smaller process nodes, but also on 2.5D and 3D chip stacking (which it termed X3D). Over the past year or so, packaging technologies have taken on new levels of importance for future semiconductor designs, so it will be interesting to see what AMD does here.

On the CPU side, the company laid out its roadmap for several new generations of its Zen core CPU architectures, including a 7nm-based Zen 3 core expected in the next year or so, and the company’s first 5nm CPU, the Zen 4, planned for 2021 or 2022. AMD made a point to highlight the forthcoming Ryzen Mobile 4000 series CPUs for notebooks, expected to be available later this month, which the company expects will boost them to the top of the notebook performance charts, just as the Ryzen Zen 2-based CPUs did for desktops. The company also mentioned that its 3rd-generation Epyc server processor, codenamed Milan and based on the forthcoming Zen 3 core, is expected to ship later this year.

For even higher-performance computing, the combination of Zen 4-based CPU cores, 3rd-generation CDNA GPU cores and the 3rd-generation Infinity interconnect architecture in the late 2022 timeframe is also what enables the exascale level of computing powering AMD's recent El Capitan supercomputer announcement. Built in conjunction with HPE on behalf of Lawrence Livermore National Laboratory and the US Department of Energy, El Capitan is expected to be the fastest supercomputer in the world when it's released and, amazingly, will be more powerful than today's 200 fastest supercomputers combined.

All told, it was a very impressive set of announcements that highlights how AMD continues to build on the momentum it started to create a few years back. Obviously, there are enormous questions about exactly where the tech market is headed in the short term, but looking further out, it’s clear that AMD is here to stay. For the sake of the overall semiconductor market and the competitiveness that it will enable, that’s a good thing.

Podcast: Coronavirus, Virtual Events, AMD

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the ongoing impact of the coronavirus on the tech industry and how it may provide some people with a bit more time to think through the direction the tech industry is heading, analyzing the impact of the cancellation of many in-person events and how companies should best think about holding virtual events, and chatting about the news from AMD’s financial analyst day regarding their advancements in supercomputers, datacenter-focused GPUs, process technologies and more.