New Workplace Realities Highlight Opportunity for Cloud-Based Apps and Devices

One of the numerous interesting outcomes of our new work realities is that many tech-related ideas introduced over the past few years are getting a fresh look. In particular, products and services based on concepts that seemed sound in theory but ran into what I’ll call “negative inertia”—that is, a huge, seemingly immovable installed base of a legacy technology or application—are being reconsidered.

Some of the most obvious examples of these are cloud-based applications. While there’s certainly been strong adoption of consumer cloud services, such as Netflix, Spotify, and many others, the story hasn’t been quite as clear-cut on the business side of the world. Most organizations and institutions (including schools) still use a very large number of pre-packaged or legacy custom-built applications that haven’t been moved to the cloud.

For understandable reasons, that situation has started to change, and the percentage of cloud-friendly or cloud-native applications has begun to increase. Although the numbers aren’t going to change overnight (or even in the next few months), it’s now fairly clear to even the most conservative of IT organizations that the time to expand their usage of cloud-based software and computing models has arrived.

As a result of this shift in mindset, businesses are reconsidering their interest in and ability to use even more cloud-friendly tools. This, in turn, is starting to create a bit of a domino effect, where dependencies and barriers that were previously considered insurmountable are now being tossed aside. It’s truly a time for fresh thinking in IT.

At the same time, companies also now have the benefit of learning from others that may have made more aggressive moves to the cloud several years back. In addition, they recognize that they can’t just start over, but need to use the existing hardware and software resources that they currently own or have access to. The end result is a healthy, pragmatic focus on finding tools that can help companies meet their essential needs more effectively. In real-world terms, that’s translating to a growing interest in hybrid cloud computing models, where elements of the public cloud and on-premises or managed computing resources in a private cloud come together to create an optimal mix of capabilities for most organizations.

It’s also allowing companies to take a fresh look at alternatives to tools that may have been a critical part of their organization for a long time. In the case of office productivity suites, for example, companies that have relied on the traditional, licensed versions of Microsoft Office can start to more seriously consider something like Google’s cloud-native G Suite as they make more of a shift to the cloud. Of course, they may also simply choose to switch to the newly updated Microsoft 365 cloud-based versions of their productivity suite. Either way, moving to cloud-based office productivity apps can go a long way toward creating a more flexible IT organization, as well as getting end users more accustomed to accessing all their critical applications from the web.

Directly related to this is the ability to look at new alternatives for client computing devices. As I’ve discussed previously, clamshell notebook PCs have become the de facto workhorses for most remote workers, and the range of different laptop needs has grown with the number of people now using them. The majority of those devices have been (and will continue to be) Windows-based, but as companies start to rely more on cloud-based applications across the board, Chromebooks become a viable option for more businesses as well.

Most of the attention (and sales) for Chromebooks to date has been in the education market—where they’ve recently proven to be very useful for learn-at-home applications—but the rapidly evolving business app ecosystem does start to shift that story. It also doesn’t hurt that the big PC vendors (Dell, HP, and Lenovo) all have a line of business-focused Chromebooks. On top of that, we’re starting to see some interesting innovations in Chromebook form factors, with options ranging from basic clamshells to convertible 2-in-1s.

The bottom line is that as companies continue to adapt their IT infrastructure to support our new workplace realities, a number of very interesting second-order effects may result from quickly adapting to a more cloud-focused world. While we aren’t likely to move to the kind of completely cloud-dependent vision that used to be posited as the future of computing, it’s clear that we are on the brink of what will undoubtedly be some profound changes in how, and with what tools, we all work.

Podcast: IBM Think, PC Industry News from HP, Microsoft, AMD, Samsung, Apple, Lenovo

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the IBM Think conference as well as a number of different PC, OS and chip announcements from major vendors in the PC business and analyzing what it means for the state of the PC category moving forward.

In the Modern Workforce, The Role of PCs Continues to Evolve

It’s been an interesting week for the once again vibrant PC industry. We saw the release of several new systems from different vendors, announcements on the future directions of Windows, and hints of yet more new systems and chip developments on the near-term horizon.

While most of the news wasn’t triggered by the COVID-19 pandemic, all of it takes on a new degree of relevance because of it. Why? As recent US retail sales reports demonstrate and conversations with PC OEMs and component suppliers have confirmed, PCs and peripherals are hot again—really hot. Admittedly, there are many questions about how long the sales burst can last, and most forecasts for the full year still show a relatively large decline, but there’s little doubt that in the current era, the PC has regained its role as the most important digital device that most people own—both for personal and work-related purposes. And, I would argue, even if (or when) the sales do start to decline, the philosophical importance of the PC and its relative degree of usage—thanks in part to extended work-from-home initiatives—will likely remain high for some time to come.

The recent blog post from Microsoft’s Windows and Surface leader Panos Panay provides interesting insights in that regard, as he noted that Windows usage has increased by 75% compared to last year. In recognition of that fact, the company has even decided to pivot on their Windows 10X strategy—which was originally targeted solely at dual-screen devices—to make it available for all regular single-screen PCs. Full details on what exactly that will bring remain to be seen, but the key takeaway is that Windows PCs will be getting their first major OS upgrade in some time. To my mind, that’s a clear sign of a vital product category.

Apple is moving forward with their personal computer strategies as well, having been one of several vendors who announced new systems this week. In their case, it was an upgrade to their MacBook Pro line with enhanced components and, according to initial reports, a much-improved keyboard. Samsung also widened their line of Windows notebooks with the formal release of their Galaxy Book Flex and Galaxy Book Flex α 2-in-1 convertibles, and Galaxy Book Ion clamshell, all of which feature the same QLED display technology found in Samsung’s TVs. The Galaxy Book Flex and Ion also have the same type of wireless PowerShare features for charging mobile peripherals as their Galaxy line of smartphones.

The broadest array of new product announcements this week, however, comes from HP. What’s interesting about the HP news isn’t just the number of products, but how their range of offerings reflects several important trends in the PC market overall. Gaming PCs, for example, have been a growing category for some time now, despite the other challenges the PC market has faced. With the extended time that people have been staying home, interest, usage and demand for gaming PCs has grown even stronger.

Obviously, HP didn’t plan things in this way, but the timing of their new OMEN 25L and 30L gaming desktops and OMEN 27i gaming monitor couldn’t have been better. The desktops offer a number of refinements over their predecessors, including a choice of high-performance Intel Core i9 or AMD Ryzen 9 CPUs, Nvidia RTX 2080 or AMD Radeon RX 5700 XT graphics cards, Cooler Master cooling components, HyperX high-speed DRAM, WD Black SSDs and a new case design. The new gaming monitor features Quad HD (2,560 x 1,440) resolution and a 165 Hz refresh rate with support for Nvidia’s G-Sync technology.

HP showed even more fortuitous timing with the launch of their new line of enterprise-focused Chromebooks and, believe it or not, a new mobile thin client. Chromebooks have been performing yeoman’s duty in the education market for learn-from-home students as a result of the pandemic, but there’s been growing interest on the enterprise side of the world as well. While the market for business-focused Chromebooks has admittedly been relatively modest so far, the primary reason has been that most companies are still using many legacy applications that haven’t been optimized for the cloud. Now that many application modernization efforts are being fast-tracked within organizations, however, a cloud software-friendly device starts to make a lot more sense.

With its latest announcements, HP expanded their range of business Chromebook offerings. They now start with the upgraded $399 Chromebook Enterprise 14 G6, which offers basic performance, but a large 14” display and a wipeable/cleanable keyboard, then move up to the mid-range Pro c640 Chromebook Enterprise and finally end up at the Elite C1030 Chromebook Enterprise. Interestingly, the C1030 is the first Chromebook certified under Intel’s Project Athena program (it features a 10th Gen Intel Core CPU) and offers the same 2-in-1 form factor as their high-end EliteBook Windows PCs. It’s also the world’s first Chromebook made with a 75% recycled aluminum top lid, a 50% recycled plastic keyboard, and speakers made from ocean-bound plastics—all part of HP’s ongoing sustainability efforts.

HP also introduced the mt22 Mobile Thin Client, a device that, in another era, would barely get a mention. However, with the now critical need in certain industries for modern devices that are optimized for VDI (virtual desktop infrastructure) and Windows Virtual Desktop (WVD), the mt22 looks to be a great solution for workers in regulated or highly secure industries who still need to be able to work from home. Finally, HP also announced ThinPro Go, a USB stick that can essentially turn any functioning PC with an internet connection into a thin client device running HP’s Linux-based ThinPro OS. While similar types of devices that work by booting from the USB stick have existed in the past, they once again take on new meaning and relevance in our current era.

All told, HP’s announcements reflect the continued diversity that exists in today’s market and highlight how many different, but essential, roles PCs continue to play. Couple that with the other PC-related announcements from this week and it’s clear that the category continues to innovate in a way that surprises us all.

Podcast: Tech Earnings from Facebook, Alphabet/Google, Microsoft, Amazon, Apple

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing this week’s big tech quarterly earnings reports from Facebook, Google’s parent company Alphabet, Microsoft, Amazon and Apple, with a focus on what the numbers mean for each of the companies individually and for the tech industry as a whole.

Google Anthos Extending Cloud Reach with Cisco, Amazon and Microsoft Connections

While it always sounds nice to talk about complete solutions that a single company can offer, in today’s reality of multi-vendor IT environments, it’s often better if everyone can play together. The strategy team over at Google Cloud seems to be particularly conscious of this principle lately and is working to extend the reach of GCP and their Anthos platform into more places.

Last week, Google made several announcements, including a partnership with Cisco that will better connect Cisco’s software-defined wide area network (SD-WAN) tools with Google Cloud. Google also announced the production release of Anthos for Amazon’s AWS and a preview release of Anthos for Microsoft’s Azure cloud. These two new Anthos tools are applications/services for migrating cloud workloads between GCP and AWS or Azure, respectively, and for managing them wherever they run.

The Cisco-Google partnership offering is officially called the Cisco SD-WAN Hub with Google Cloud. It provides a manageable private connection for applications all the way from an enterprise’s data center to the cloud. Many organizations use SD-WAN tools to manage the connections between branch offices or other intra-company networks, but the new tools extend that reach to Google’s GCP cloud platform. What this means is that companies can see, manage, and measure the applications they share over SD-WAN connections from within their organizations all the way out to the cloud.

Specifically, the new connection fabric being put into place with this service (which is expected to be previewed at the end of this year) will allow companies to do things like maintain service-level agreements, compliance policies, security settings, and more for applications that reach into the cloud. Without this type of connectivity, companies have been limited to maintaining these services only for internal applications. In addition, the Cisco-powered connection gives companies the flexibility to put portions of an application in one location (for example, running AI/ML algorithms in the cloud), while running another portion, such as the business logic, on a private cloud, but managing them all through Google’s Anthos.

Given the growing interest and usage of hybrid cloud computing principles—where applications can be run both within local private clouds and in public cloud environments—these connection and management capabilities are critically important. In fact, according to the TECHnalysis Research Hybrid and Multi-Cloud study, roughly 86% of organizations that have any type of cloud computing efforts are running private clouds, and 83% are running hybrid clouds, highlighting the widespread use of these computing models and the strategically important need for this extended reach.

Of course, in addition to hybrid cloud, there’s been a tremendous increase in both interest and usage of multi-cloud computing, where companies leverage more than one cloud provider. In fact, according to the same study, 99% of organizations that leverage cloud computing use more than one public cloud provider. Appropriately enough, the other Anthos announcements from Google were focused on the ability to migrate and manage cloud-based applications across multiple providers. Specifically, the company’s Anthos for AWS allows companies to move existing workloads from Amazon’s Web Services to GCP (or the other way, if they prefer). Later this year, the production version of Anthos for Azure will bring the same capabilities to and from Microsoft’s cloud platform.

While the theoretical concept of moving workloads back and forth across providers, based on things like pricing or capability changes, sounds interesting, realistically speaking, even Google doesn’t expect workload migration to be the primary focus of Anthos. Instead, just having the potential to make the move gives companies the ability to avoid getting locked into a single cloud provider.

More importantly, Anthos is designed to provide a single, consistent management backplane for an organization’s cloud workloads, allowing them all to be managed from a single location—eventually, regardless of the public cloud platform on which they’re running. In addition, like many other vendors, Google incorporates a number of technologies into Anthos that let companies modernize their applications. The ability to move applications running inside virtual machines into containers, and then to leverage the Kubernetes-based container management technologies that Anthos is built on, for example, is something that a number of organizations have been investigating.
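Because Anthos standardizes on Kubernetes, it helps to picture what a single control plane spanning clouds looks like in practice. Here’s a minimal sketch using the standard Kubernetes Python client, assuming hypothetical kubeconfig contexts for clusters in each cloud; real Anthos management runs through Google’s own console and APIs rather than raw kubeconfig files, so this is purely illustrative.

```python
# A rough illustration of a single, consistent management backplane: because
# Anthos standardizes on Kubernetes, the same declarative tooling can address
# clusters running in different clouds. This sketch uses the standard
# Kubernetes Python client; the kubeconfig context names are hypothetical,
# and real Anthos management goes through Google's console and APIs.
from kubernetes import client, config

for ctx in ["gcp-prod-cluster", "aws-prod-cluster"]:  # one cluster per cloud
    config.load_kube_config(context=ctx)  # point the client at that cluster
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces().items:
        print(f"[{ctx}] {dep.metadata.namespace}/{dep.metadata.name}: "
              f"{dep.status.ready_replicas} replicas ready")
```

The point is the consistency: once every workload is a Kubernetes object, the same declarative tooling works regardless of which cloud hosts it.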

Ultimately, all of these efforts appear to be focused on making hybrid, multi-cloud computing efforts more readily accessible and more easily manageable for companies of all sizes. Industry discussions on these issues have been ongoing for years now, but efforts like these emphasize that they’re finally becoming real and that it takes the efforts of multiple vendors (or tools that work across multiple platforms) to make them happen.

Podcast: Intel Earnings, Magic Leap, WiFi6E, Arm-Based Mac

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell analyzing the earnings announcements from Intel and what they say about tech industry evolution, discussing the layoffs and repivoting of Magic Leap and what it says about the future of Augmented Reality, describing the importance of the new WiFi6E 6GHz extensions to WiFi, and chatting about the potential for an Arm processor-based future Mac.

Remote Access Solutions Getting Extended and Expanded

Now that we’re several weeks into work from home mandates and clearly still many weeks (and likely months) away from most people being able or willing to go back to their offices, companies are starting to extend and expand their remote access plans. Early on, most organizations had to focus their attention on the critical basics: making sure people had PCs they could work on, providing access to email, chat and video meetings, and enabling basic onramps to corporate networks and the resources they contain.

However, it’s become increasingly clear that the new normal of remote work is going to be here for quite some time, at least for some percentage of employees. As a result, IT organizations and vendors that want to support them are refocusing their efforts on providing safe, reliable remote access to all the same resources that would be available to their employees if they were working from their offices. In particular, there’s a need to get access to legacy applications, sensitive security-focused applications, or other software tools that run only within the walls of corporate data centers.

While there’s little question that the pandemic and its aftermath will accelerate efforts to move more applications to the cloud and increase the usage of SaaS-based solutions, those changes won’t happen overnight. Plus, depending on the company, as much as two-thirds of the applications that companies use to run their businesses may fall into the difficult-to-access legacy camp, so even sped-up efforts are going to take a while. Yes, small, medium, and large-sized organizations have been moving to the cloud for some time, and some younger businesses have been able to successfully move most of their computing resources and applications there. Collectively, however, there is still a huge number of non-cloud workloads which companies depend on that can’t be easily reached (or reached at all) outside the office for many employees.

Of course, there are several ways to solve the challenge of providing remote access to these and other types of difficult-to-reach tools. Many companies have used services like VPNs (virtual private networks), for example, to provide access to some of these kinds of critical applications for years. In most cases, however, those VPNs were intended for occasional use by a limited set of employees, not full-time use by all of them. In fact, there are stories of companies that quickly ran into license limitations with their VPN software providers when full-time use occurred.

Many other organizations are starting to redeploy technologies and concepts that some had written off as irrelevant or no longer necessary, including VDI (virtual desktop infrastructure) and thin clients. In a VDI environment—which, for the record, was going strong in places like health care facilities, financial institutions, government agencies, and call centers even before the pandemic hit—applications are run in virtualized sessions on servers and accessed remotely via dedicated thin client devices or on PCs that have been configured (or recommissioned) to run specialized client software. The beauty of the thin client computing model is that it is very secure, because thin clients don’t have any local storage and all applications and data stay safe within the walls of the corporate data center or other hosted environment.

Companies like Citrix and VMware have been powering these types of remote access VDI computing solutions for decades now. Initially, much of the focus was around providing access to legacy applications that couldn’t be easily ported to run on Windows-based PCs, but the basic concept of letting remote workers use critical internal applications, whether they are truly legacy or not, is proving to be extremely useful and timely in our current challenging work-from-home environment. Plus, these tools have evolved well beyond simply providing access to legacy applications. Citrix, in particular, has developed the concept of digital workspaces, sometimes referred to as Desktop as a Service, which integrates remote access to all types of data and applications, whether they’re public cloud-based SaaS apps, private cloud-based tools, traditional on-premises applications or even mobile applications, into a single, secure unified workspace or desktop. (By the way, Desktop as a Service is not to be confused with the very similarly named Device as a Service, which entails a leasing-like acquisition and remote management of client devices. Unfortunately, both get shortened to DaaS.)

In addition to these approaches, we’ve started to see other vendors talk more about some of their remote access capabilities. Google, for example, just published a new blog post describing their BeyondCorp Remote Access offering, which enables internal web apps to be opened and run remotely in a browser. Though it’s not a new product from Google—it’s actually been available for several years—its capabilities have taken on new relevance in this extended work-from-home era. As a result, Google is talking more about the organizations that have deployed it, some best practices on how to leverage it, and more.
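The core idea behind BeyondCorp is “zero trust”: rather than treating anyone on the corporate network (or VPN) as trusted, every request to an internal app is authenticated and authorized individually at a proxy. As a minimal sketch of that pattern, here’s a hypothetical Python/Flask middleware; the header name and verify_token helper are illustrative stand-ins, not Google’s actual implementation.

```python
# A minimal sketch of the zero-trust access pattern BeyondCorp embodies.
# The header name and verify_token() helper are hypothetical, illustrative
# stand-ins, not Google's actual implementation.
from typing import Optional

from flask import Flask, abort, g, request

app = Flask(__name__)

def verify_token(token: str) -> Optional[dict]:
    """Hypothetical helper: validate a signed identity token (e.g., a JWT
    minted by an identity-aware proxy) and return its claims. This stub
    fails closed, returning None for every request."""
    return None

@app.before_request
def check_identity():
    # Every single request is authenticated; network location is irrelevant.
    claims = verify_token(request.headers.get("X-Identity-Token", ""))
    if claims is None:
        abort(401)
    g.user = claims["email"]

@app.route("/internal-report")
def internal_report():
    return f"Hello, {g.user}"
```

The practical upshot is what makes the browser-based approach work: an internal web app can be safely exposed anywhere, because access decisions ride on identity rather than on being inside the network perimeter.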

Most companies are probably going to need a combination of these and other types of remote access work tools to match the specific needs of their organizations. The simple fact is that disaster recovery and contingency plans are now everyday needs for many companies. As a result, IT organizations are going to have to shift into these modes for much longer periods of time than anyone could have anticipated. Though it’s a challenging task, the good news is, there are a wealth of solid, established tools and technologies available to let companies adapt to the new normal and keep their organizations running this way for some time to come. Yes, adjustments will continue to be made, security issues and approaches have to be addressed, and situations will continue to change, but at least the opportunity is there to let people function in a reasonably meaningful way. That’s something for which we can all be thankful.

Podcast: Apple Google Contact Tracing, iPhone SE, OnePlus 8, Samsung 10 Lite

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the surprising announcement from Apple and Google to work together on creating a smartphone-based system for tracking those who have been exposed to people with COVID-19, and discussing the launch of several new moderately priced smartphones and what they mean to the overall smartphone market.

Podcast Extra: COVID-19 Business Continuity

This is an extra Techpinions podcast featuring Carolina Milanesi, Bob O’Donnell and special guest Darcy Ortiz from Intel talking about the critical role of contingency efforts and business continuity plans, and how companies can best handle a pandemic, from a company that’s in the rare position of having had a pandemic planning team in place for over fifteen years.

Apple Google Contact Tracing Effort Raises Fascinating New Questions

In a move that caught many off guard—in part because of its release on the notoriously slow news day of Good Friday—Apple and Google announced an effort to create a standardized means of sharing information about the spread of the COVID-19 virus. Utilizing the Bluetooth Low Energy (LE) technology that’s been built into smartphones for the last 6 or 7 years and some clever mechanisms for anonymizing the data, the companies are working on building a standard API (application programming interface) that can be used to inform people if they’ve come into contact with someone who’s tested positive for the virus.
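For those curious about the mechanics, the privacy of the scheme rests on phones broadcasting short-lived, anonymous identifiers rather than any location data. Here’s a highly simplified Python sketch of that rotating-identifier concept; the key sizes, derivation steps, and intervals are illustrative only and do not reflect the actual Apple/Google specification.

```python
# A highly simplified sketch of the rotating-identifier concept behind
# Bluetooth-based exposure notification. The key sizes, derivation steps,
# and intervals here are illustrative only; they are NOT the actual
# Apple/Google specification.
import hashlib
import hmac
import os
import time

def daily_key() -> bytes:
    """A random per-day key that never leaves the device unless its owner
    tests positive and chooses to share it."""
    return os.urandom(16)

def rolling_id(day_key: bytes, interval: int) -> bytes:
    """Derive the short-lived, anonymous identifier a phone broadcasts over
    Bluetooth LE; rotating it frequently prevents long-term tracking."""
    mac = hmac.new(day_key, interval.to_bytes(4, "big"), hashlib.sha256)
    return mac.digest()[:16]

key = daily_key()
interval = int(time.time() // 900)  # e.g., a new identifier every 15 minutes
print(rolling_id(key, interval).hex())
```

Crucially, matching happens on the device itself: if someone tests positive and shares their daily keys, every phone can re-derive the corresponding rolling IDs locally and compare them against what it heard over Bluetooth, so no central service ever learns who was near whom.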

Initially, those efforts will require people to download and enable specialized applications from known health care providers, but eventually the two companies plan to embed this capability directly into their respective mobile phone operating systems: iOS and Android.

Numerous articles have already been written about some of the technical details of how it works, and the companies themselves have put together a relatively simple explanation of the process. Rather than focusing on those details, however, I’ve been thinking more about the second-order impacts from such a move and what they have to say about the state of technology in our lives.

First, it’s amazing to think how far-reaching and impactful an effort like this could prove to be. While it may be somewhat obvious on one hand, it’s also easy to forget how widespread and common these technologies have become. In an era when it’s often difficult to get coordinated efforts within a single country (or even state), with one decisive step, these two tech industry titans are working to put together a potential solution that could work for most of the world. (Roughly half the world’s population owns a smartphone that runs one of these OS’s and a large percentage of people who don’t have one likely live with others who do. That’s incredible.)

With a few notable exceptions, tech industry developments essentially ignore country boundaries and have become global in nature right before our eyes. At times like this, that’s a profoundly powerful position to be in—and a strong reason to hope that, despite potential difficulties, the effort is a success. Of course, because of that reach and power, it also wouldn’t be terribly surprising to see some governments raise concerns about these advancements as they are further developed and as the potential extent of their influence becomes more apparent. Ultimately, however, while there has been discussion in the past of the potential good that technology can bring to the world, this combined effort could prove to be an actual life and death example of that good.

Unfortunately, some of the concerns regarding security, privacy, and control that have been raised about this new effort also highlight one of the starkest examples of what the potential misuse of widespread technology could do. And this is where some of the biggest questions about this project are centered. Even people who understand that the best of intentions are at play also know that concerns about data manipulation, creating false hopes (or fears), and much more are certainly valid when you start talking about putting so many people’s lives and personal health data under this level of technical control and scrutiny.

While there are no easy answers to these types of questions, one positive outcome that I certainly hope to see as a result of this effort is enhanced scrutiny of any kind of personal tracking technologies, particularly those focused on location tracking. Many of these location-based or application-driven efforts to harvest data on what we’re doing, what we’re reading, where we’re going, and so on—most all of which are done for the absurdly unimportant task of “personalizing” advertisements—have already gotten way out of hand. In fact, it felt like many of these technologies were just starting to see some real push back as the pandemic hit.

Let’s hope that as more people get smarter about the type of tracking efforts that really do matter and can potentially impact people’s lives in a positive way, we’ll see much more scrutiny of these other unimportant tracking efforts. In fact, with any luck there will be much more concentrated efforts to roll back or, even better, completely ban these hidden, little understood and yet incredibly invasive technologies and the mountains of data they create. As it is, they have existed for far too long. The more light that can be shone into these darker sides of technology abuse, the more outrage it will undoubtedly cause, which should ultimately force change.

Finally, on a very different note, I am quite curious to see how this combined Apple Google effort could end up impacting the overall view of Google. While Apple is generally seen to be a trustworthy company, many people still harbor concerns around trusting Google because of some of the data collection policies (as well as ad targeting efforts) that the company has utilized in the past. If Google handles these efforts well—and uses the opportunity to become more forthright about its other data handling endeavors—I believe they could gain a great deal of trust back from many consumers. They’ve certainly started making efforts in that regard, so I hope they can use this experience to do even more.

Of course, if the overall efficacy of this joint effort doesn’t prove to be as useful or beneficial as the theory of it certainly sounds—and numerous concerns are already being raised—none of these second-order impacts will matter much. I am hopeful, however, that progress can be made, not only for the ongoing process of managing people’s health and information regarding the COVID-19 pandemic, but for how technology can be smartly leveraged in powerful and far-reaching ways.

Podcast: The Global Semiconductor Market

This week’s Techpinions podcast features Ben Bajarin, Mario Morales of IDC, and Bob O’Donnell discussing the state of the global semiconductor market and how the COVID-19 pandemic is impacting major chip and end device companies and the tech industry overall.

Here’s a link to the IDC Semiconductor market forecast that Mario discussed on the podcast: https://www.idc.com/getdoc.jsp?containerId=US46155720

Need for Multiple Video Platforms Becoming Apparent

Like most of you, I’ve been doing more than my fair share of video calls lately and feel relatively certain that the practice will continue for some time to come—even when life beyond the COVID-19 pandemic starts to return to normal. As we’ve all learned from the experience, in the proper amount and for the proper length, they can be a very effective form of communication. Plus, as many have discussed and promised for years, they do give us the flexibility to work from many different locations and, for certain types of events, can reduce the time, costs, and hassles of travel.

That’s not to say, however, that they are a cure-all. As we’ve also all learned, there are definitely limitations to what can be achieved via video calls and sometimes things just get, well, awkward.

For people who don’t work at large organizations that have standardized on a single videoconferencing platform, another challenge is the need to work with, install, and learn multiple different apps. As someone who talks to lots of different companies, I can safely say that I’m pretty sure I’ve used virtually every major videoconferencing option that’s out there over the last few weeks: Microsoft Teams, Cisco Webex, Google Hangouts/Meet, Skype, GoToMeeting, BlueJeans, and of course, Zoom.

Initially, I admit, I was frustrated by the lack of standards across the different tools and wondered if it wouldn’t make more sense to just have a single platform, or at least a primary one that could serve as the default. As time has gone on, however, I’ve realized that my initial thinking was lacking a certain amount of insight. As unintuitive as it may first sound, there actually is a great deal of sense to having multiple videoconferencing apps and platforms.

To be clear, there’s definitely work that could be done to enable and/or improve the interoperability across some of these platforms—even if it’s nothing more than allowing the creation of a high-level log-in tool that could manage underlying audio and video connections to the various platforms. However, just as choice and competition in other categories end up creating better products for everyone, the same is true with videoconferencing tools—for many different reasons.

First, as we’ve certainly started to see and learn from much of the Zoom fallout that’s started to occur, things can get ugly if too many people start to over-rely on a single platform. Not only is there the potential for reliability concerns—even on a cloud-based platform—but a platform that gets too much attention is bound to become a tempting target for hackers and other troublemakers. Stories of “Zoombombing” and other related intrusions have grown so commonplace that the FBI is even starting to investigate. Plus, nearly every day, it seems, there’s news of yet another large organization moving away from or forbidding the use of Zoom.

To the company’s credit, much of the attention and the continuing strong usage of Zoom is because they took the often awkward, painful, and unreliable process of connecting multiple people from multiple locations into a functioning video call and made it easy. For many people and some organizations, that was good enough, and thankfully, we’re starting to see other videoconferencing platforms improve these critical basics as a competitive response. That’s a win for everyone.

However, it’s also become increasingly clear that Zoom wasn’t nearly as focused on security and privacy as many people and organizations thought they were, and as they should have been. From questions about encryption, to publicly accessible recordings of private calls, to the routing of US calls through Chinese servers, and much more, Zoom is facing a reckoning on some of the choices they’ve made.

Other videoconferencing platforms, including Webex and GoToMeeting, have been focused on privacy and security for some time—unfortunately, sometimes at the expense of ease-of-use—but it’s clear that many organizations are starting to look at other alternatives that are a better match for their security needs. Microsoft, to its credit, has made security an essential part of its relatively new Teams platform.

But even beyond the obvious critical security needs, it’s clear, in using the various videoconferencing tools, that some are better suited for different types of meetings than others. The mechanisms for sharing and annotating files, for example, take different forms among different tools. In addition, some tools have better capabilities for working within the structure of a defined multi-part meeting, such as a virtual event.

The bottom line is, it’s very difficult to find a single tool that can work for all types of meetings, all types of leaders, or even all types of company cultures. Meetings can vary tremendously across companies or even across groups within companies, so it isn’t realistic to think that a single platform is going to meet everyone’s virtual meeting needs. Choice and focus continue to be important and will likely lead many organizations to adopt several different videoconferencing tools for different meeting needs.

And let’s not forget, we won’t be doing this many video meetings forever. While there’s little doubt that we’ll all be doing more video meetings post-pandemic than we were doing pre-pandemic, the overall number of video meetings will go down from current levels for most people. In fact, once things get back to normal, I think people are actually going to look forward to face-to-face meetings—despite the frustrations they often create. We’ll all just be a lot more sensitive to what types of things work in video meetings and what’s better live. That’s an improvement I think we can all look forward to.

Podcast: Microsoft 365, T-Mobile-Sprint, Network Reliability

This week’s Techpinions podcast features Carolina Milanesi, Mark Lowenstein and Bob O’Donnell discussing the release of Microsoft’s new Microsoft 365 subscription service, analyzing the impact of the T-Mobile-Sprint merger on the US telecom market and the rollout of 5G, and chatting about overall broadband and mobile network reliability during the COVID-19 crisis.

Microsoft 365 Shift Demonstrates Evolution of Cloud-Based Services

If there’s one piece of software that has held up remarkably well over several decades, it’s Microsoft’s Office suite of productivity apps. From business to personal life, the applications in Office have proven their value time and time again to people all over the world. Perhaps because of that, Microsoft has used Office as a means to push forward the definition of what software is, how it should be delivered, how it should be sold, what platforms it should run on, and much more over the last decade or so.

In June of 2011, for example, the company officially unveiled Office 365, which provided access to all the same applications in the regular suite but in a subscription-like “service” form that was delivered (and updated) via the internet. Since then, the company has added new features and functions to the service, made it available on mobile platforms such as Android and iOS, in addition to Windows and macOS, and generally used it as a means to expand how people think about the applications they use on a regular basis. In the process, Microsoft has made many people comfortable with the idea of their software being delivered as a cloud-based service.

Yesterday, the company took the next step in the evolution of the product and renamed the consumer and small and medium business versions of Office 365 to Microsoft 365—changes that will all occur on April 21. The name change is obviously a subtle one, but beyond the title, the changes run much deeper. Specifically, the new brand reflects how the set of applications that make up the company’s popular subscription-based offering is evolving. It also reflects how the company itself is changing.

In the case of the SMB versions of Microsoft 365, the name change is simply a branding one, which better reflects that the service includes more than just basic office productivity, particularly with the Teams collaboration tools and service. For the new consumer-oriented Personal and Family versions of Microsoft 365, the changes are more extensive.

Notably, the consumer versions of Microsoft 365 include the addition of several new applications, a number of AI-powered intelligent enhancements to existing applications and—in an important first for Microsoft—some mobile-first advancements. The new version of the Microsoft Editor function works across Word, Outlook.com, and the web, and is essentially a Grammarly competitor that moves beyond simple spelling and grammar checking to making AI-powered rewriting suggestions, helping avoid plagiarism, and more.

The AI-based Designer feature in PowerPoint—which I have found to be incredibly useful—has been enhanced in this latest version of Microsoft 365 to support a wider array of content that it can “beautify” and includes support for a greatly expanded library of supplementary graphics, videos, fonts and templates.

The biggest change to Excel is the forthcoming addition of Money in Excel, an add-in that gives it Quicken-like money and account management features. In addition, working in conjunction with Wolfram Alpha, Microsoft is adding support for over 100 new “smart” data types that make it significantly easier to track everything from calories to travel locations and more. In essence, it provides the type of intelligence that people may have expected computing devices and applications to have all along.

The addition of both Teams (for consumers) and Family Safety is interesting because of the capabilities they bring to the service and because both will launch first on mobile OSes—Android and iOS. Microsoft has had mobile versions of its main productivity suite apps, as well as its OneDrive storage service, for a while now, but this Microsoft 365 launch marks the first time the company will debut new apps in mobile form. On the one hand, the move is logical and not terribly surprising given how much people use their mobile devices today—particularly for communications and tracking, which are the core functions of Teams and Family Safety respectively. Nevertheless, it’s still noteworthy, because it does show how Microsoft has been able to pivot on its typical “deliver on PC first” strategy and keep itself as relevant as possible.

In the case of Teams, the company isn’t replicating its enterprise version, but instead has developed a consumer-focused edition that allows for real-time chats, document sharing, creating and tracking lists, and more in a manner that should make sense for most consumers. Family Safety is completely new and allows parents to set limits and controls on digital device usage and content, as well as track the physical location and even the driving habits of other family members. Importantly, Microsoft made the point to say that it’s doing all these things without sharing (and certainly not selling) any of this information to auto insurance companies, advertisers or any other companies. While the company would have undoubtedly created a bit of an outcry if it did any of that, it was still reassuring to hear a big tech vendor emphasize these privacy and security-focused concerns. Let’s hope all major tech vendors follow suit.

Speaking of privacy and security, Microsoft took the opportunity with its Microsoft 365 launch announcement to also unveil the latest version of Microsoft Edge, the company’s significantly improved browser. In addition to several convenience-based features, such as the addition of vertical tabs, smart copying from web pages, and the ability to easily create portable “collections” of content from web-based sources, the company debuted some important privacy features as well. Password Monitor, for example, can automatically track whether any of your logins are available on the dark web and encourage you to change your passwords on sites where that may have occurred. Given the huge number of security breaches and data exposures that have impacted almost all of us at this point, this could prove to be an incredibly valuable new feature. In addition, the company added refined tracking controls that allow you to set the amount of information you are willing to share with other websites as a result of your browsing sessions.
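Microsoft hasn’t detailed Password Monitor’s internals in this announcement, but one widely used technique for checking credentials against breach data without revealing them is the k-anonymity range query popularized by the Have I Been Pwned “Pwned Passwords” API. Here’s a small Python sketch of that general approach; it illustrates the technique, not Edge’s actual implementation.

```python
# A sketch of the k-anonymity technique for checking a password against
# breach data without revealing it, as popularized by the Have I Been Pwned
# "Pwned Passwords" API. Illustrates the general approach only; Microsoft
# hasn't said that Edge's Password Monitor works this way. Network access
# is required to run it.
import hashlib
import urllib.request

def times_breached(password: str) -> int:
    # Only the first 5 hex characters of the SHA-1 hash leave the machine.
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        for line in resp.read().decode("utf-8").splitlines():
            candidate, _, count = line.partition(":")
            if candidate == suffix:
                return int(count)  # seen this many times in known breaches
    return 0

print(times_breached("password123"))
```

The privacy trick is that only a short hash prefix is ever sent; the server returns every matching suffix, and the actual comparison happens locally.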

All told, it was a pretty impressive set of announcements that highlights how Microsoft has managed to continue adjusting its strategies to match the changing needs of the market and its customers. Of course, many consumers will still be content using the free versions of the basic Office applications and services that Microsoft will continue to make available even after April 21. However, the functionality that the company has built into its new Microsoft 365 Personal and Family offerings will be compelling enough for many to make the switch, and the success that the Office suite of applications has enjoyed for so long will continue with the new Microsoft 365.

The Time for Pragmatism in Tech is Now

The tech industry has always prided itself—and for good reason—on describing and building products, services, and even business models that look to the future. In fact, the technologies behind many of today’s advances are arguably helping define our future. Because of that, it’s become quite normal to think and talk about these developments as having to unfold over the course of several years before their true impact can be accurately measured.

But the COVID-19 crisis is focusing a dramatically different lens on many of these efforts and forcing companies to think (and act) on completely different timelines. It’s also getting people to think differently about what technology products can and can’t do for them, which is leading to some important reassessments of what really matters as well as what’s truly useful and what isn’t. Frankly, in many instances, it’s a rethinking that’s been overdue.

Reassessing and/or revising expectations has some potentially profound implications for tech companies, which can then smartly recognize ways they can shift both their messaging and even their product strategies. It also opens up some interesting opportunities to make meaningful improvements in existing products. Last, but certainly not least, it also provides an incredible opportunity for at least some portion of the tech industry to turn the increasingly negative narrative about big tech around and to reposition the tech industry as a beneficent force that can help improve our society and our world.

Thankfully, the manifestations of these new approaches are already starting to happen in both big ways and small. T-Mobile, for example, quickly got the FCC to give its approval for what’s called Temporary Spectrum Access to increase the available bandwidth they had at 600 MHz—which the company uses for both 4G and 5G service—by essentially “borrowing” unused spectrum from Dish and Comcast. Because T-Mobile had already built up a good part of its network infrastructure for its 5G deployment, it was able to move much more quickly than it would have otherwise been able to. In addition, the company followed up this week by also launching a new low-cost ($15/month) plan sooner than originally planned. For their part, both AT&T and Verizon also joined in the FCC’s Keep Americans Connected Pledge and made similar efforts of their own to increase available bandwidth, remove data caps for broadband services, pledge not to turn off connectivity plans due to financial hardship caused by the crisis, and more.

Collectively, these quick efforts showed the telecom industry as a whole to be very responsive and sensitive to the issues at hand, all of which should certainly go a long way in improving consumers’ perception of them. Throw in the fact that, as of now, the critical telecom and data delivery infrastructure has held up remarkably well given the huge increase in traffic it’s had to deal with from the many people working and living exclusively at home, and it’s arguably been an impressive week or two for the telecom industry.

Yet another interesting example and set of data comes from Cisco, whose equipment powers large segments of these infrastructure networks. On a call with Cisco executives and CEO Chuck Robbins, the company talked about having to approach these network loads in entirely different ways than they had in the past. Rather than taking a more systematic approach to problem solving, they freely discussed having to make adjustments in real time—a clearly different approach from what they’d done in the past and, based on what we’ve been experiencing, a successful one.

Not surprisingly, the Cisco execs also discussed the incredibly robust demand they’ve seen for their networking products—every company is looking to increase its bandwidth—as well as the enormous traffic increase (up to 24x) that they’ve seen for their Webex videoconferencing and remote collaboration services. Clearly, these are things that companies need immediately, so Cisco’s ability to adjust its own networks on the fly to meet these huge demands speaks volumes about the pragmatic approach the company is taking to address these issues. One interesting side note from the Cisco call was that the vast majority of Webex client software downloads were for PCs rather than smartphones, once again highlighting the real-world value that PCs (laptops in particular) continue to provide.

In a different and yet thematically related development, IBM, along with a number of government labs and technology partners like HPE, made the decision to open up access to many of the world’s fastest and most powerful supercomputers to scientists who are working to battle the virus. It was a smart, fast, pragmatic decision that serves an incredibly important cause and highlights, in a very public way, the efforts that IBM is making to assist in whatever way it can.

Of course, many other tech companies also announced their own efforts to address some of the concerns that the COVID-19 pandemic has created. In fact, as a long-time industry observer, it was very encouraging and even heartwarming to see how much concern the tech industry was displaying. While it may prove to be short-lived, there also seems to be much more willingness for companies to consider partnering with each other to help create new solutions that, in otherwise normal times, might not happen.

Even with these efforts to provide quick benefits, however, the new “normal” has made it clear that much work still needs to be done, particularly in making some tech products and services easier to use. Case in point: given the huge increase in video calls that I and most other people are now experiencing, it’s easy to find instances in applications like videoconferencing that need to be improved—and quickly. If you’ve ever suffered through trying to troubleshoot your audio and video connections for these calls, for example (and let’s be honest, who hasn’t), then you understand the need. Something as obvious as having a button on the main page of an online service or in the launch screen of a videoconferencing app to let you test your connection (or even better, to use some kind of AI or other software intelligence to fix it automatically), without having to log in to an account or find the buried preference settings, seems like a very easy thing to do, yet it’s just not there. These are the kinds of small pragmatic differences that companies should also be thinking about.
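To underline how modest a request this is, here’s a rough sketch of a one-click microphone check in Python, using the third-party sounddevice library; the sample rate, duration, and silence threshold are arbitrary choices, and a real pre-call test would cover cameras, speakers, and network paths as well.

```python
# A rough sketch of the kind of one-click microphone self-test that
# videoconferencing apps could expose up front. Uses the third-party
# sounddevice and numpy packages (pip install sounddevice numpy); the
# sample rate, duration, and silence threshold are arbitrary choices.
import numpy as np
import sounddevice as sd

def mic_check(seconds: float = 1.0, threshold: float = 0.01) -> bool:
    """Record briefly from the default input device and report whether any
    meaningful signal was captured."""
    frames = int(seconds * 44100)
    recording = sd.rec(frames, samplerate=44100, channels=1)
    sd.wait()  # block until the recording completes
    rms = float(np.sqrt(np.mean(np.square(recording))))
    return rms > threshold

if __name__ == "__main__":
    print("Mic OK" if mic_check() else "Mic appears silent; check input settings")
```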

To be clear, the more pragmatic approach to creating, marketing, and even selling tech products that the COVID-19 pandemic is forcing upon us doesn’t have to come completely at the expense of forward-looking technology advances. The R&D-focused efforts within the tech industry that are enabling things like quantum computing, or the latest neuromorphic chips that Intel recently unveiled, remain an absolutely essential and defining part of the business. The difference now, and likely into the foreseeable future, is really more one of focus and emphasis. Companies need to look much harder at the types of changes they can make here and now to both existing and upcoming products. I’d argue that the tech industry had gone a little too far down the path of promising long-term revolutions without thinking enough about short-term implications. If nothing else, I expect that one of the more important outcomes that will linger on after we pass this crisis will be more attention to what kinds of ideas, products, and services make a difference in the near term—not just in some far-off “vision” for where things might go.

Of course, it’s also important to remember that necessity is the mother of invention, and there are likely few times in recorded history when the necessity of thinking and acting differently has been more urgent. As a result, an even more important silver lining from our current crisis is that we will soon start to see and enjoy the inventive benefits of many of the most brilliant minds in the world who are spending their time thinking, from a present-focused pragmatic perspective, about how to solve many types of tech-related problems both big and small. It’s not clear when, how, or in what exact form those innovations will appear, but I have absolutely no doubt that they will arrive and that we will all benefit from them.

Podcast: Apple Product Launch, Sony PS5 and Microsoft Xbox X, Intel Neuromorphic Chip

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing Apple’s surprise launch of their new iPad Pro, Magic Keyboard, and updated MacBook Air and Mac Mini, discussing the product spec reveal this week of the forthcoming AMD-powered PS5 and Xbox X gaming consoles, and chatting about the innovations enabled with Intel’s latest neuromorphic chip.

The Value of Contingencies and Remote Collaboration

The stark realities that we’re all facing from the COVID-19 pandemic unfortunately won’t be going away any time soon. The simple truth is that life is just going to be different for a while. Let’s hope that the extraordinary efforts that companies and people are taking to minimize the spread of the virus prove to be effective sooner rather than later.

In the meantime, however, it’s clearly time to settle into new modes of working, with technology obviously playing a key role. Work at home numbers are going to shoot up tremendously, for example, and many people are about to get a crash course in things that work well—and things that don’t—in that environment. (By the way, if you’re looking for some advice and pointers on the subject based on years of experience, check out the latest Techpinions podcast: https://techpinions.com/podcast-working-from-home/59461)

In addition, many companies are going to have to start up whatever contingency and emergency plans they have in place. The speed at which events are occurring and situations are shifting is undoubtedly catching even the most well-prepared organizations off-guard to some degree. Once things start to settle down, however, the critical importance and value of technology-enabled contingency plans should start to become very obvious.

Unfortunately, there are likely several companies that didn’t have those types of plans in place. In addition, there may be an even larger number that had a basic plan in place but didn’t take it to the level that our current situation has created. (To be fair, it would have been hard for anybody to really predict the speed and depth of impact that COVID-19 has created.) The challenge for these organizations will be to quickly put together plans that can help them adapt in the best way possible to the new environment. In fact, I have no doubt that’s exactly what a lot of IT professionals are in the process of doing as we speak.

The good news is that we now have access to an amazing array of different technology-based options to help address at least some of the challenges organizations are going to be facing. Additionally, thanks to a series of encouraging announcements, a wide range of tech companies, carriers and others are pitching in to make their services free or to reduce data caps in order to ease the potential limitations of network bandwidth.

From high-quality videoconferencing solutions, to fast, reliable broadband networks, to mature cloud-based collaboration software tools, the tech industry has never had a wider range of tools to help ease the process of working at home (or remotely). In fact, once we get past all this, there’s little doubt that we’ll look back at these next few months as being a defining moment for remote collaboration. The extensive use of these tools is also going to be an incredibly valuable real-time experiment that will clearly expose the real advantages (and challenges) of existing tools. Hopefully, these next few weeks will also quickly lead to tweaks and adjustments that make them easier and more reliable. If these tools do perform well, they could end up becoming significantly more important in the average worker’s arsenal even beyond this crisis. Of course, if they don’t work well for many, expect to see some serious pushback against them.

In addition to these basic remote work enablement capabilities, there are a number of more nuanced factors that are going to come into play, particularly as time goes on. Even if companies have the basic tools they need to enable collaboration, for example, what level of control or management do they have over the devices or the networks being used to do that work? Those are details that can get overlooked in basic contingency plans but need to be a factor for longer-term emergency plans that, hopefully, every company is now creating, if they haven’t already.

If we learn nothing else from this crisis, it should be abundantly clear to all that the need for creating plans that allow business continuity in even the most challenging of situations is absolutely essential. There should be little doubt that aggressively leveraging the new types of remote connectivity and collaboration tools now available needs to be a critical part of those plans.

Podcast: Working from Home

This week’s Techpinions podcast, which features Ben Bajarin and Bob O’Donnell, takes a different approach than normal and features an in-depth conversation on how to make the best of working from home, providing advice and best practices gained from many years of personal experience on everything from developing habits, to optimizing your connectivity, to leveraging the right kind of tech equipment and more.

AMD Highlights Path to the Future

After a gangbusters performance on the stock market over the last several years, AMD, its CEO Dr. Lisa Su, and its executive leadership team have been under the glare of a lot of media attention recently. Despite the apparent pressure, however, the company keeps coming out swinging, and the announcements from last week's Financial Analyst Day indicate that AMD is showing no signs of letting up.

In fact, the key takeaway from the event was that the company leadership—and apparently many of the financial analysts who attended—now have even more confidence in the business' future. (The company was even willing to reiterate its guidance for the first quarter, which, given the impact of the coronavirus on many of its customers and the tech industry as a whole, was an impressively optimistic statement.)

As a long-time company observer, what particularly stood out to me was that the company has now built up a several-year history of achieving some fairly grand plans based on big decisions it made 4-5 years back. Previous AMD leadership also talked about big ideas but, frankly, wasn't able to deliver on them. The key difference with the current leadership team is that it is able to execute on those ideas. As a result, the credibility of the company's forward-looking plans has gone up significantly.

And what plans they are. The company made a number of important announcements about its future product strategies and roadmaps at the event, almost all of which were focused on high-performance computing, for both CPUs and GPUs. On the GPU roadmap, a particularly noteworthy development was the introduction of a new datacenter-focused GPU architecture named CDNA ("C" for Compute)—an obvious link to the RDNA architecture currently used for PC and gaming console-focused GPU designs. Full details around CDNA and the specific Radeon Instinct GPUs based on it are still to come, but the company is clearly going after the machine learning, AI, and other datacenter-focused workloads that its primary competitor Nvidia has been targeting for the last several years. One key point the company made is that the second and third generation CDNA-based GPUs will leverage the company's Infinity interconnect architecture, allowing future CPUs and GPUs to share memory in a truly heterogeneous computing environment, as well as providing a way for multiple GPU "chiplets" to connect with one another. The company even talked about offering software that would convert existing CUDA code (which Nvidia uses for its datacenter GPUs) into platform-agnostic HIP code that would run on these new CDNA-based GPUs.
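To make the CUDA-to-HIP idea more concrete, here's a minimal sketch of what ported code looks like. This is a generic vector-add example of my own, written against the publicly documented HIP runtime API, and not anything AMD actually showed. The key observation is that HIP calls mirror their CUDA counterparts almost one-for-one (hipMalloc for cudaMalloc, hipMemcpy for cudaMemcpy, and so on), which is what makes largely automated translation plausible.

    // Minimal sketch: a vector-add kernel expressed in HIP rather than CUDA.
    // The kernel body and indexing are unchanged from the CUDA version; only
    // the host-side runtime calls swap their cuda* prefixes for hip* ones.
    #include <hip/hip_runtime.h>
    #include <cstdio>
    #include <vector>

    __global__ void vecAdd(const float* a, const float* b, float* c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // same as CUDA
        if (i < n) c[i] = a[i] + b[i];
    }

    int main() {
        const int n = 1 << 20;
        const size_t bytes = n * sizeof(float);
        std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n, 0.0f);

        float *dA, *dB, *dC;
        hipMalloc(&dA, bytes);   // cf. cudaMalloc
        hipMalloc(&dB, bytes);
        hipMalloc(&dC, bytes);
        hipMemcpy(dA, a.data(), bytes, hipMemcpyHostToDevice);  // cf. cudaMemcpy
        hipMemcpy(dB, b.data(), bytes, hipMemcpyHostToDevice);

        const int threads = 256;
        const int blocks = (n + threads - 1) / threads;
        hipLaunchKernelGGL(vecAdd, dim3(blocks), dim3(threads), 0, 0, dA, dB, dC, n);

        hipMemcpy(c.data(), dC, bytes, hipMemcpyDeviceToHost);
        printf("c[0] = %.1f (expect 3.0)\n", c[0]);

        hipFree(dA); hipFree(dB); hipFree(dC);
        return 0;
    }

AMD's existing hipify utilities already automate most of this kind of renaming, so presumably the conversion software the company described builds on that approach.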

AMD also talked about plans for future consumer-focused GPUs and discussed its next-generation RDNA2 technology and its Navi 2X chips, which are expected to offer hardware-accelerated support for ray tracing, as well as improvements in variable rate shading and overall performance per watt. Notably, the hardware ray tracing support is expected to be a common architecture between both PCs and gaming consoles (both the PlayStation 5 and next-generation Xbox are based on custom AMD GPU designs), so that should be an important advancement for game developers. The company also mentioned RDNA3, which is expected in the 2020-2021 timeframe and will be manufactured with what is described as an “Advanced Node.” Presumably that will be smaller than the 7nm production being used for current RDNA-based GPUs and those based on the forthcoming RDNA2 architecture.

Speaking of production, the company discussed how it intends to move forward aggressively, not only on smaller process nodes, but also on 2.5D and 3D chip stacking (which it termed X3D). Over the past year or so, packaging technologies have taken on new levels of importance for future semiconductor designs, so it will be interesting to see what AMD does here.

On the CPU side, the company laid out its roadmap for several new generations of its Zen core CPU architectures, including a 7nm-based Zen 3 core expected in the next year or so, and the company's first 5nm CPU core, Zen 4, planned for 2021 or 2022. AMD made a point to highlight the forthcoming Ryzen Mobile 4000 series CPUs for notebooks, expected to be available later this month, which the company expects will boost it to the top of the notebook performance charts, just as the Zen 2-based Ryzen CPUs did for desktops. The company also mentioned that its 3rd-generation Epyc server processor, codenamed Milan and based on the forthcoming Zen 3 core, is expected to ship later this year.

For even higher-performance computing, the combination of Zen 4-based CPU cores, 3rd generation CDNA GPU cores and the 3rd generation Infinity interconnect architecture in the late 2022 timeframe is also what enables the exascale level of computing powering AMD's recent El Capitan supercomputer announcement. Built in conjunction with HPE on behalf of Lawrence Livermore National Laboratory and the US Department of Energy, El Capitan is expected to be the fastest supercomputer in the world when it's released and, amazingly, will be more powerful than today's 200 fastest supercomputers combined.

All told, it was a very impressive set of announcements that highlights how AMD continues to build on the momentum it started to create a few years back. Obviously, there are enormous questions about exactly where the tech market is headed in the short term, but looking further out, it’s clear that AMD is here to stay. For the sake of the overall semiconductor market and the competitiveness that it will enable, that’s a good thing.

Podcast: Coronavirus, Virtual Events, AMD

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the ongoing impact of the coronavirus on the tech industry and how it may provide some people with a bit more time to think through the direction the tech industry is heading, analyzing the impact of the cancellation of many in-person events and how companies should best think about holding virtual events, and chatting about the news from AMD’s financial analyst day regarding their advancements in supercomputers, datacenter-focused GPUs, process technologies and more.

Coronavirus-Induced Pause Gives Tech Industry Opportunity to Reflect

As the news has now made clear, the COVID-19 coronavirus is having a significant impact, not just on the tech industry, but on society and the globe as a whole. There are still huge numbers of unanswered questions about the virus and what its full effect will be. Importantly, and appropriately, most of the focus is on the health and well-being of those impacted and on educating people about how to keep themselves and their loved ones safe. There's also a lot being done to keep people accurately and adequately informed about which concerns are legitimate and which ones are overblown.

At the same time, it's now clear that there's a very practical impact happening to people in the tech industry: their calendars are opening up in a way that many haven't experienced before. The reason? The cancellation and/or "digitization" of more and more events scheduled for this spring and, likely, into the summer. Not just big events like MWC, GDC and F8, but lots of small public and private events are being cancelled, rescheduled, or in the latest move, "virtualized" to streaming-only digital form.

Combine that with the travel restrictions in place for important tech-focused countries around the world, and the tangible result is that many people in the tech industry are going to be falling way short of their frequent flyer requirements this year. Practically speaking, they’re also going to have more time available to them.

The reality is that this "pause" in the action will likely be short-lived. If history has taught us nothing else, it is that "this too shall pass", and there will come a time in the hopefully not-too-distant future when coronavirus-related concerns will be nothing but a memory.

For a while at least, though, things are going to be different for a lot of people in tech. So, the important question that comes to mind is, how are people going to be spending that extra time?

I don’t claim to have any brilliant answers, but I certainly hope that, in addition to maybe spending a little more time with our loved ones, some of that newfound time is spent thinking about where some key tech industry trends are heading and whether they’re moving in a manner that people really want or intended. On the privacy and security front, for example, there’s arguably a great deal of soul-searching that ought to be done about what kind of data can and/or should be collected about each of us as we go about our digital lives. Similarly, advertising and other information-driven services that leverage (or, in many cases, abuse) that information might want to consider less invasive alternative approaches.

In the case of autonomous cars, I’d argue that it’s time to look past technological advances and figure out how real people actually want to interact with their vehicles. Similarly, it’s worth taking time to think more about how vehicles could be made safer without necessarily becoming dependent on autonomous control.

For many companies, the “found time” may (and should) also lead to more discussions about how to refine their messages and deliver information that doesn’t overpromise what’s possible (as the tech industry has become notorious for doing), but instead gives people a realistic set of expectations.

There are also bound to be some very interesting discussions about the overall merits of holding big (or even small) events. Again, society and the industry will make it through this, so it will be very interesting to see what people believe was lost and/or gained from the cancellations or recasting of these events. Yes, I’m sure we’ll see more discussions about working from home and video-based collaboration and those are all good things. However, there are still serious questions about how much people are willing to change their work habits for the long-term.

Of course, there are countless other positive ways that people in tech can use this potential opportunity of extra time for good. What I’m afraid might happen, however, is that more of it will be spent on social media, adding yet more undeserved influence to a serious blight on the tech industry’s legacy, one that, among other things, has already cultivated a heightened level of fear and panic about the coronavirus.

It’s rare that an industry, or a society, suddenly finds itself with access to the precious resource of additional time. In the end, I think that’s one positive thing that we could end up realizing from the unfortunate reality that is now upon us. Let’s hope the newfound time gets used in a positive way.

Podcast: Coronavirus, Intel 5G, Asian Phone Launches, Qualcomm

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the ongoing impact of the Coronavirus on the tech industry, and several news announcements that were originally scheduled for MWC including Intel’s debut of new chips for 5G network infrastructure, the launch of several new Android phones from Huawei, Xiaomi and other Chinese vendors, and Qualcomm’s press briefing on 5G phone momentum, the third generation X60 5G modem, and more.

Intel Focuses on 5G Infrastructure

Despite the cancellation of this year’s Mobile World Congress show in Barcelona, quite a few tech companies are still making important announcements that were originally planned for the show. Not surprisingly, several of those announcements are focused on 5G.

One of them—perhaps somewhat surprisingly—comes from chip leader Intel. The company sold its 5G modem business assets to Apple last fall, and many considered that move to be the company’s exit from the 5G world. But Intel has a much bigger, though significantly less well-known, business creating chips that help power the network infrastructure equipment that sits at the heart of today’s 4G LTE and 5G networks, including base stations and other core components.

For years, much of the network silicon inside these devices was custom designed and built by the vendors making the equipment—companies like Ericsson, Nokia, Huawei, etc. However, with the growth of more software-based networking advancements, including things like network function virtualization (NFV), as well as increasing demand for general compute performance to handle applications like AI at the edge, Intel and other specialized vendors like Marvell have seen strong interest in their off-the-shelf “merchant” chips.

The basic idea is that, as with many other computing platforms, it’s the software that’s driving the biggest innovations in networking. By building on more standardized hardware platforms (x86 for Intel and Arm for Marvell) and leveraging open source software tools and platforms, like Linux, companies can create networking-related innovations at a faster and more efficient pace.
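To illustrate that software-centric idea in the simplest possible terms, here's a toy sketch of my own (not anything from Intel, Marvell, or an actual NFV stack): a trivial "firewall" network function written as ordinary portable C++. Because the logic is plain software, the same source compiles for x86 or Arm Linux servers and can be updated or replaced without touching the underlying hardware, which is the essential appeal of running network functions on merchant silicon.

    // Toy illustration of a "network function" as ordinary portable software:
    // a trivial firewall rule that drops traffic to a blocked port. Real NFV
    // deployments use frameworks like DPDK for line-rate packet I/O; this
    // only shows the architectural idea of the logic living in software.
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    struct Packet {
        uint32_t src_ip;    // simplified header fields
        uint32_t dst_ip;
        uint16_t dst_port;
    };

    // The "virtualized network function": pure logic, no fixed-function silicon.
    bool allow(const Packet& p, uint16_t blocked_port) {
        return p.dst_port != blocked_port;
    }

    int main() {
        std::vector<Packet> traffic = {
            {0x0A000001, 0x0A000002, 443},  // HTTPS: forwarded
            {0x0A000003, 0x0A000002, 23},   // telnet: dropped
        };
        for (const auto& p : traffic)
            printf("port %u -> %s\n", (unsigned)p.dst_port,
                   allow(p, 23) ? "forward" : "drop");
        return 0;
    }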

To better address those needs, Intel made several different announcements focused on adding the computing power and specialized resources that new 5G networks require at multiple points along the network path. Starting at the network edge, the company introduced a new version of its Atom processor, the P5900, that’s specifically targeted at wireless base stations. Based on a 10nm process technology, the new SoC (system on chip) integrates a next-generation set of Atom CPU cores along with network acceleration functions specifically designed for radio access functions, small cells, and other edge-of-the-network applications. Given the strong expected market for the 5G-focused small cells that millimeter wave, mid-band sub-6 GHz, and CBRS-based spectrum will demand—as well as the potential to run cloud-based computing workloads, such as AI, at the edge—this looks to be a very interesting opportunity.

For more compute-intensive workloads at the core of the network, the company also chose to make a number of additions to its second-generation general-purpose Xeon Scalable server processors as part of this 5G announcement. Facing intense pricing and performance pressure from AMD’s second-generation Epyc server processors, Intel added 18 new SKUs to its lineup that offer more cores, faster clock speeds, and larger cache sizes at lower prices than some of its initial second-gen Xeon Scalable parts. In terms of performance, Intel touted up to a 56% improvement for NFV workloads versus some of its first-generation Xeon Scalable CPUs (though the company didn’t clarify performance improvements vs. the earlier second-generation parts).

Another key element essential to speeding up core telecom networks is programmable chips that can be optimized to run network packet processing and other functions critical to guaranteeing lower latency and meeting consistent quality-of-service requirements. These capabilities are becoming particularly important for 5G, which has promised improved latency as one of its key benefits versus 4G.

FPGAs (Field Programmable Gate Arrays) have traditionally done much of this kind of work in telecom equipment, and Intel has a large, established FPGA business with its Agilex line of chips. The flexibility of FPGAs does come at a cost, however, in terms of both price and power consumption, so Intel debuted its first all-new design in a chip category it’s calling a structured ASIC, a product currently codenamed Diamond Mesa. The idea with a structured ASIC is that it’s only partially programmable and therefore sits between an FPGA and a custom-designed ASIC. From a practical perspective, that means it offers faster time to market than building a custom ASIC at a lower price and power requirement than an FPGA. To ease the transition for existing FPGA users, Intel has designed Diamond Mesa to be footprint compatible with its FPGAs, making it easier to integrate into existing designs. The real-world benefit is that, used in conjunction with the latest Xeon Scalable CPUs, Diamond Mesa will let telco equipment providers create products that can handle the increased performance, latency, and security demands of 5G networks.

The last portion of the Intel announcement centered on, of all things, a wired Ethernet adapter. While much of the focus for 5G and any other telecom network is typically on wireless technologies, the reality is that much of the infrastructure still uses wired connections for interconnecting different components across the network core and to enable certain capabilities. Particularly for applications that require time-sensitive networking—including things like precise industrial automation—we’re still several years away from being able to ensure consistent real-time signal delivery over completely wireless networks. As a result, Intel’s new 700 series network adapter—which incorporates hardware-enhanced precision time protocol (PTP) support that leverages GPS clock signals for cross-network service synchronization, according to the company—still has an important, if not terribly exciting, function to fulfill in 5G networks.
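For a sense of what PTP actually computes under the covers, here's a minimal sketch of the standard IEEE 1588 offset-and-delay math, using made-up timestamp values. Hardware timestamping of the sort Intel describes matters because it takes software-induced jitter out of these four measurements; real implementations obviously involve much more than this.

    // Minimal sketch of the core IEEE 1588 (PTP) calculation: given the four
    // timestamps exchanged between a master and a slave clock, compute the
    // slave's clock offset and the one-way path delay (assuming a symmetric
    // network path). Timestamp values below are illustrative only.
    #include <cstdio>

    int main() {
        // All times in nanoseconds.
        double t1 = 1'000'000.0;  // master sends Sync message
        double t2 = 1'000'900.0;  // slave receives Sync
        double t3 = 1'002'000.0;  // slave sends Delay_Req
        double t4 = 1'002'700.0;  // master receives Delay_Req

        double offset = ((t2 - t1) - (t4 - t3)) / 2.0;  // slave ahead by this much
        double delay  = ((t2 - t1) + (t4 - t3)) / 2.0;  // one-way path delay

        printf("offset = %.1f ns, delay = %.1f ns\n", offset, delay);
        // The slave then steers its clock back by 'offset' to align with the master.
        return 0;
    }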

All told, Intel’s 5G network infrastructure story encompasses a pretty comprehensive set of offerings, and it highlights how the company has a bigger role to play in the latest generation of wireless networks than many people may initially realize. Of course, it’s a big field, with a lot of different opportunities for many different vendors, but there’s no doubt that Intel is serious about making its presence felt in 5G. With these announcements, the company has taken several important steps in that direction, and it will be interesting to see what the future brings.

Apple Coronavirus Warnings Highlight Complexities of Tech Supply Chains

As the impact of the coronavirus spreads, Apple issued a rare statement yesterday related to the virus’ effect on its quarterly earnings guidance, and that announcement is now reverberating throughout the tech industry as well. The company reported that its current quarter’s earnings will likely be negatively affected by several factors related to the virus, specifically its impact on the Chinese market and on Apple’s global supply chain.

What makes the news even more disconcerting is that the company had already suggested on its last earnings call just a few weeks back that revenues for the quarter could fall into a much wider range of potential outcomes than it typically provides because of the uncertainties the virus was creating. A second negative statement just a few weeks later highlights that the impact of the virus is proving to be much worse than originally thought. The fact that Apple didn’t quantify the size of the guidance decline also emphasizes the uncertainty about the total extent of the virus’ impact.

Specifically, Apple said that sales of iPhones in China—an increasingly important market for the company—will be lower than it had predicted because many of its retail stores and other retail partners’ stores have been closed as a result of the virus. In addition, as stores reopen, the traffic in them has been significantly lighter than normal, leading to a slowdown in sales. Theoretically, online sales shouldn’t be impacted as strongly, but it’s not hard to imagine that delivery mechanisms in China have also been slowed by the virus.

The second factor Apple cited—a slower ramp to full production after Chinese New Year—is potentially more troublesome, because it impacts the company’s entire global supply of iPhones and other devices. In addition, it certainly implies that other major tech hardware vendors could start feeling this soon as well.

As most people know, the vast majority of Apple’s devices are built in Chinese factories, so the company—like most every other hardware tech vendor—is currently very reliant on those factories cranking out products in huge quantities on a steady basis. And that’s really the problem, because if Apple is noticing the impact strongly enough that it felt the need to issue a statement on revised guidance, then we’re likely going to see a lot of other hardware-focused tech vendors do something similar over the next few days or weeks. In fact, even before Apple, Nintendo had disclosed that it won’t be able to build as many of its Switch gaming consoles as it would like, because its primary production partner Foxconn—which also happens to be Apple’s largest factory partner—was facing delays at its Chinese factories.

The other thing to bear in mind is that even companies that don’t have factories in the most affected areas of China can see their production slowed because of their dependence on certain parts or other components that do come from the most impacted regions. These days, the number of subcomponents that go into more sophisticated tech devices can easily reach over 100, and because so many of these subcomponents are built in China, the range of impact from the virus is potentially much wider than it first appears.

By the way, the timing here is also very important. One thing that many people don’t understand is that, as terrible as it is, the coronavirus started seriously impacting Chinese factories just before the one week in the year when they’re scheduled to be offline: Chinese New Year. If the virus had hit at another time of year, the impact could have been much worse. Now companies are trying to determine how many workers are returning to the factories after their scheduled break, and it’s those metrics that are going to be the most closely watched over the next few weeks.

In addition, I’ve also heard some hardware vendors say that, out of an abundance of caution, the Chinese government is imposing mandatory 30-day factory shutdowns if a single worker is discovered to be infected. Needless to say, that’s going to force companies to be very conservative about letting employees come back to work, which could also result in serious delays in production.

Ultimately, however, it’s essential to remember that this is an extremely challenging humanitarian crisis and that companies need to be (and likely will be) sensitive to it and do whatever they can to keep their workers safe. Taking the big-picture view, these production delays will likely (and hopefully) be little more than a blip on the long-term radar of tech industry production. Unfortunately, because many institutional investors are more concerned with short-term financial performance, this may cause some near-term challenges for the companies that are impacted. Long-term, let’s hope the tech industry can learn from this crisis and figure out ways both to protect the workers who help bring products to life and to create supply chains that can withstand the inevitable challenges that lie ahead.

Podcast: Samsung S20 and Z Flip Launch, T-Mobile-Sprint Merger, MWC Cancellation

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the new smartphone announcements from Samsung, including their 5G-enabled S20 line and the foldable Galaxy Z Flip, discussing the implications of the approval for the T-Mobile-Sprint merger on 5G competitiveness in the US, and chatting about the impact of the cancellation of the big Mobile World Congress trade show.