Arm Doubles Down on AI for Mobile Devices

While many people still aren’t that familiar with semiconductor IP stalwart Arm, most everyone knows their key customers—Qualcomm, Apple, Samsung, MediaTek, and HiSilicon (a division of Huawei), to name just a few in the mobile market. Arm provides the chip designs that these companies and others use to power virtually every smartphone in the world.

As a result, if you care the least bit about where the mobile device market is headed, it’s important to keep track of the new advancements that Arm introduces. While you won’t experience them immediately, if you purchase a new smartphone 12-18 months from now, it will likely be powered by a chip (or several) that incorporates these new enhancements. In particular, expect to see a big boost in AI performance across a range of different chips.

Those who are familiar with Arm know that, like clockwork every year, the company announces new capabilities for its Cortex CPUs, Mali GPUs and, most recently, Ethos NPUs (neural processing units). As you’d expect, most of these include refinements to the chip designs and resulting increases in performance. This year, however, Arm has thrown in a few additional twists that serve as an excellent roadmap for where the smartphone market is headed at several different levels.

But first, let’s cover the basics. The latest top-end 64-bit Cortex CPU design is the Cortex-A78 (up from last year’s A77), a further refinement of the company’s ARMv8.2 core. The A78 offers a 20% sustained performance improvement over last year’s design, thanks to several advanced architectural refinements. The biggest focus this year is on power efficiency, letting the new design achieve that 20% improvement at the same power draw, or allowing it to match the performance of the A77 with just 50% of the power, thereby saving battery life. These benefits translate into better performance per watt, making the A78 well suited for both power- and performance-hungry 5G phones, as well as foldables and other devices featuring larger displays.
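The performance-per-watt math works out neatly from Arm’s own two data points. Here is a quick back-of-envelope sketch in Python, using only the figures Arm quoted; the baseline numbers are arbitrary placeholder units, not real measurements:

```python
# Back-of-envelope perf/watt comparison of the A77 and A78, using only
# Arm's quoted figures. Baseline values are arbitrary units, not benchmarks.
a77_perf, a77_power = 100.0, 1.0
a77_perf_per_watt = a77_perf / a77_power

# Scenario 1: same power draw, 20% more sustained performance.
print((a77_perf * 1.20) / a77_power / a77_perf_per_watt)  # -> 1.2x perf/watt

# Scenario 2: same performance at 50% of the power.
print(a77_perf / (a77_power * 0.50) / a77_perf_per_watt)  # -> 2.0x perf/watt
```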

In addition to the A78, Arm debuted a whole new branch of CPUs with the Cortex-X1, a larger but more powerful design. Recognizing the growing interest in gaming-focused smartphones and other applications that demand maximum performance, Arm created the X1 as a higher-performance version of its CPU core; it features a 30% performance boost over the A77.

Even more interesting is the fact that the X1 doubles the performance for machine learning and AI models. Despite the appearance of dedicated AI accelerators (like the company’s Ethos NPUs) as well as the extensive focus on GPUs for AI, the truth is that most neural network and other AI models designed for mobile devices run on the CPU, so it’s critical to enhance performance there.

While the X1 isn’t intended for mainstream usage and won’t represent a particularly large segment of the market (particularly because of its larger and more power-hungry design), its appearance reflects the increasing diversity and segmentation of the smartphone market. In addition, the Cortex-X looks like it would be a good candidate for future versions of Arm CPUs for PCs and other larger devices.

On the GPU side, the company made two different introductions: one at the top end of the performance chain and the other emphasizing the rapidly growing opportunity for moderately priced smartphones. The top-of-the-line Mali-G78 features a 25% increase in standard graphics performance over its Mali-G77 predecessor, as well as a 15% boost in machine learning application performance. Given the interest in achieving PC and console gaming-like quality on smartphones, the G78 adds support for up to 24 shader cores, but leverages a clever asynchronous power design that allows it to create high-level graphics without drawing too much power.

The other new design is the Mali-G68, which Arm classifies as being targeted to a “sub-premium” tier of phones. Leveraging essentially the same design as the G78, but limited to a maximum of 6 shader cores, the G68 allows Arm’s chip customers, and smartphone makers in turn, to create products with premium-like features at lower price points. Given the price compression that many expect to see in smartphones over the next several years, this seems like an important step.

The final new design from Arm was their Ethos-N78, just the second generation of their dedicated line of AI co-processors for mobile devices. Featuring more than 2x the peak performance of the N77, as well as a greater than 25% improvement in performance efficiency, the N78 also offers more flexibility in configuring its core elements, letting companies more easily use it across a wide range of different mobile devices.

Even more important than raw performance in the AI/ML world is software. Not surprisingly then, the company also announced new enhancements to their Arm Development Studio and other tools that make it easier to optimize AI applications not only for the N78, but for its full line of Cortex CPUs and Mali GPUs as well. In fact, Arm is offering a unified software stack that essentially allows developers to create AI/ML models that can run transparently across any combination of Arm CPUs, GPUs, or NPUs. Conceptually, it’s very similar to Intel’s oneAPI idea, which is intended to provide the same level of flexibility across a range of different Intel silicon designs. Real-world performance for all of these “write once, run anywhere” heterogeneous computing models remains to be seen—and the challenges for all of them seem quite high—but it’s easy to see why they could be very popular with developers.
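To make the idea concrete, here is a minimal sketch of what that kind of heterogeneous deployment looks like with today’s publicly available pieces: a TensorFlow Lite model dispatched through Arm’s open-source Arm NN delegate, which can target Cortex CPUs or Mali GPUs from the same model file. The model path, delegate library location, and option names here are assumptions for illustration, and the unified stack Arm announced may expose this differently:

```python
# Minimal sketch: one TFLite model dispatched to different Arm compute blocks
# via the Arm NN external delegate. Paths and options are assumptions.
import numpy as np
import tflite_runtime.interpreter as tflite

def make_interpreter(backends: str) -> tflite.Interpreter:
    # "GpuAcc" targets a Mali GPU and "CpuAcc" a Cortex CPU; listing both
    # tells Arm NN to prefer the GPU and fall back to the CPU per operator.
    delegate = tflite.load_delegate(
        "libarmnnDelegate.so",  # assumed install path of the Arm NN delegate
        options={"backends": backends, "logging-severity": "info"},
    )
    return tflite.Interpreter(
        model_path="model.tflite",  # placeholder model file
        experimental_delegates=[delegate],
    )

interpreter = make_interpreter("GpuAcc,CpuAcc")
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
interpreter.set_tensor(inp["index"], np.zeros(inp["shape"], dtype=np.float32))
interpreter.invoke()
print(interpreter.get_tensor(interpreter.get_output_details()[0]["index"]))
```

The notable point is that nothing in the model itself changes as the backend list does, which is exactly the portability promise these unified stacks are making.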

As expected, Arm brought a range of new mobile-focused chip designs to the table once again this year, but thanks to the debut of the Cortex-X1, the sub-premium Mali-G68, and the overall emphasis on AI and machine learning, they still managed to shake things up a bit. Clearly, the company sees growing demand in all these market sub-segments and, because of the pivotal role Arm plays, its efforts will go a long way toward making them real.

The ultimate decisions on how all these new capabilities get deployed and how the features they enable get implemented are up to the company’s more famous customers and, in some cases, their customers’ customers, of course. More “intelligent” devices, more immersive augmented reality (AR) and virtual reality (VR) enhancements, and significantly improved graphics performance all seem like straightforward outcomes they could enable. Nevertheless, the groundwork has now been laid for future mobile devices, and it’s up to other vendors in the mobile industry to see exactly where it will take us.

Podcast: Microsoft Build, Work from Home Forever

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the news and structure of Microsoft’s recent virtual Build developer conference, as well as the trend of tech companies offering their employees the ability to work from home for as long as they would like.

More Nuance is Needed With Regard to China

There’s little question that the situation with regard to China is becoming increasingly fraught and tense. But it’s also highly incongruous. On the one hand, severe actions are being taken against companies such as Huawei. On the other hand, many U.S. companies are significantly expanding their operations in China, particularly in certain retail segments.

From a tech sector perspective, I’m concerned that if we don’t adopt a more balanced, nuanced, and better-thought-through approach, there could be real damage to U.S. companies, the economy, and even our health and well-being. The extent to which the economies of the U.S. and China are intertwined is, I believe, significantly under-recognized.

To begin with, we need to dismiss the notion that the U.S.-China “Cold War”, as it has been characterized by some, is anything like the 1950s-1980s era Cold War between the United States and the then U.S.S.R. That Cold War was primarily military and geopolitical; there was very little economic activity or trade between the U.S. and the Soviets. Very few U.S. companies did business there, and very few Russian companies did business here. The global economy was also much less interconnected at the time. Even if the current U.S.-Russia rift expanded into a broader economic war today, the impact would be comparatively minor.

With regard to China, there’s ample reason to be concerned. The Trump administration has been directionally correct to make this a bigger issue. Obama, and his ‘Asia pivot’, downplayed some of China’s egregious actions and potential long-term threat. There’s ample evidence that Chinese companies have spied on U.S. companies and individuals, stolen our technology, and fomented discord. The Chinese have placed onerous demands on U.S. companies wanting to do business there, and have outright banned some companies and ‘categories’ of business. It’s OK to recognize that our societies are fundamentally different.

However, I think many people don’t fully understand how dependent the U.S. is on China, in three ‘mega’ ways. First, China is a huge market for many U.S. companies, in a broad range of sectors. If the situation were to escalate, causing the Chinese to place greater barriers on U.S. businesses, there could be a major impact back home. And the U.S. economy, already in a coronavirus-induced fragile state, can ill afford another shock.

Second, the U.S. is highly dependent on China’s massive manufacturing capacity and its role in the global supply chain. The trade war and tariffs in certain sectors have caused some damage. But if things were to escalate, the effects could be profound. It would be inconvenient if you couldn’t get your new iPhone. But many critical pharmaceuticals, and much medical equipment, are manufactured in China. We’d be in big trouble if that were significantly disrupted. Frankly, I’m surprised that it’s held up as well as it has through the pandemic, given the school-yard-ish rhetoric being spewed by the White House.

Third, there are a lot more ‘comings and goings’ between these two countries than many people realize. The fact that 430,000 people arrived in the United States on direct flights from China in the first three months of 2020 is illustrative. There were some 400,000 Chinese students in the United States in 2019, many of them paying full boat at U.S. universities (thereby subsidizing the financial aid given to U.S.-based students). Many U.S. universities have significant operations in China. And, there is a lot of cooperative R&D between the two countries. Chinese tourists spent nearly $10 billion in the United States in 2018.

Put another way, the impact of a significant reduction in relations and commerce between the U.S. and China would be far more damaging than a similar rift between the U.S. and Europe — from an economic perspective.

So, what is to be done? I think we need a coordinated national strategy, and more middle ground. One disappointment in the federal government’s approach is that there has been little involvement of the private sector in the decision-making. Yes, there are potential conflicts of interest, but what if the White House assembled a task force of key U.S. companies in tech, aerospace, pharma, and so on, to at least get some advice and counsel on how real the threat is and what prudent measures can be taken?

Recognizing the importance of our mutual economies and the fact that there are grave reasons for concern, we need to develop an effective, longer-term approach. This means developing an overarching policy, proper safeguards, and means of verification. There are certain to be individual skirmishes, but it would be helpful if those were handled within a broader set of principles and policies. Maybe we should look at this as the 2020s version of negotiating an arms control treaty, only this one would be more economy- and tech-based.

Take Huawei as an example. I don’t think there’s much to be gained by masterminding a full takedown of the company. Why not use Huawei as a textbook case to develop some of these security mechanisms? Their management has shown a willingness to engage.

This strategic view should also take a look at manufacturing. Again, the Trump administration is directionally correct to want to bring more manufacturing back to the United States. Between the China trade war and coronavirus, it would be prudent to be less dependent on China’s role in the supply chain for certain critical industries and products.

Perhaps, on the way to this, we can tone down the rhetoric. Calling COVID-19 the ‘China Virus’, and other such xenophobic behavior, is unnecessary and beneath our ideals. It can only come back to bite us at some point.

As we emerge, haltingly, into a ‘Covid re-entry’ world, let’s focus on the constructive, rather than the destructive. The economy needs it. Our mental well-being needs it.

Microsoft Build Shows The Future of Events and Collaboration

On Tuesday, May 19, 2020, Microsoft kicked off Build, its annual developer conference. The live event held in Seattle for the past few years was this year turned into a fully digital gathering due to the COVID-19 pandemic. Build follows the sun with 48 hours filled with sessions streaming live across different time zones.

Towards the end of day one, Head of Comms Frank Shaw shared some numbers about the event: 202,219 people registered, of whom 87,000 created a profile and scheduled 1 million sessions. The live Twitter feed recorded over 200,000 views. Pretty impressive numbers for an event that was reengineered from the ground up to fit a fully digital consumption model. For seasoned attendees, part of the event had a familiar feel: Satya Nadella’s keynote, the technical keynotes with Scott Hanselman, Scott Guthrie and Kevin Gallo, the Imagine Cup, and a long list of sessions. There was plenty of news about Azure, Power Apps, Project Cortex, Microsoft 365, Project Reunion and Fluid Framework. What struck me the most, though, was how much learning opportunity there was for developers. Of course, this is not new, but it was much more apparent this year, as labs, panels, focus groups and Microsoft Learn sessions were intertwined in the schedule in a way that made it much easier for attendees to see opportunities to bring what they learned back to their own needs and their business.

While many in the analyst and press world were lamenting not attending in person, I have to admit there were a few things that the digital format really facilitated.

First, a diversity of speakers. If you follow my work, you know I tend to pay attention to how diverse the main stage of every conference is and, I must say, I was quite impressed with Build Day One. There was a great mix of speakers across gender, age, race and geography, and because of the format, it felt as though everybody shared the main stage, because everything you watched was your main stage.

Second, a variety of topics. Alongside technical sessions and product announcements, there was the opportunity to hear about mentoring students into science and tech jobs, diversity and inclusion (I could have spent much longer on this), ethics and fairness in AI, and more. The fact that you can do back-to-back sessions gives you the flexibility to fit more on your schedule, and so does the shorter format that most sessions took on – a brilliant move from Microsoft.

This is the first digital event from Microsoft and won’t be the last, as all of its events have moved to digital through July 2021. In a pre-Build analyst session, Julia White said it is unlikely we will return to events the way they were before COVID-19, both because it will take a while for people to be comfortable and because companies will learn the strengths of going digital.

From an announcement perspective there was one bit of news most Windows and Office users will be particularly interested in:

Fluid Framework

Last year at Ignite, Microsoft announced the public preview of the Microsoft Fluid Framework, aimed at making collaboration more seamless by letting workflows move fluidly across apps. This week we got a glimpse of the first integration of the Fluid Framework into Microsoft 365, with Outlook and Office.com.

Instead of creating documents, Fluid creates canvases where multiple people can collaborate in real time, bringing together different elements like text, pictures, charts, and tables. Each component updates instantaneously and remains editable, making for a flexible and fast-paced experience, according to Microsoft. The demo video sure looks impressive, and Microsoft 365 enterprise and education subscribers will get access to a beta in the coming weeks.

The underlying concept is not dissimilar from Google Docs, but the implementation is quite different, as each Fluid component is pulled together into one canvas, not a multitude of tabs. This could potentially be easier for current Office users to adopt and might cement users more firmly into Microsoft 365. Right now, Office suffers from a lack of loyalty, as users turn to Google when they need to collaborate. In a recent study we at Creative Strategies conducted across 1,000 users in the US, we found that among Microsoft Office 365 users, 28% use Google Docs, 23% use Gmail and 18% use Google Sheets. Interestingly, G Suite does not have a monopoly on its users’ time either, creating an opportunity for Fluid to convert users fully to Microsoft 365. Among G Suite users, 47% dabble in Word, 39% in Excel and 32% in Outlook.

The staggered approach Microsoft is taking with Fluid will allow users to experiment without imposing too much change too quickly. After the initial rollout to Outlook and Office.com, we will see Fluid incorporated into Microsoft Teams later this year and into the desktop versions of Outlook next year. Microsoft has also opened up the Fluid Framework to developers by making it open source. This means that, aside from Microsoft’s first-party Office apps, we will see other apps being able to create components to be added to the canvases. This might prove quite interesting for large enterprises that have proprietary apps. It will certainly be interesting to see how the likes of Salesforce, SAP and IBM look at taking advantage of the Framework. It could be an opportunity or a threat, depending on whether breaking down an app into components ends up disenfranchising the app itself, making it less clear where the value is coming from beyond Microsoft 365.

With Fluid and Edge, Microsoft is certainly moving more into a cloud and browser first experience for Microsoft 365. This, of course, means that the competition with G Suite and Chrome will heat up, which in turn means users will see more innovation, never a bad thing.

Lastly, whether you were attending the sessions that talked about the ability to schedule appointments, broadcast events, add automated workflows or add a chatbot, there was one product that was center stage across Build: Microsoft Teams. If the growth Microsoft Teams has seen over the past two months weren’t enough, these two days were a pretty strong testament to the robustness and capabilities of the product, but also to how much Microsoft has riding on it. It was clear to me that Microsoft Teams is as central to the success of Build as it is to the success of Microsoft 365.

Microsoft Project Reunion Widens Windows 10 Opportunity to One Billion Devices

Sometimes, things just take a little bit longer than expected. At Microsoft’s Build conference five years ago, the company made a widely reported prediction that the Windows 10 ecosystem would expand to one billion devices over a 2-3 year period. Unfortunately, they didn’t make it by the original deadline, but just a few months ago they were finally able to announce that they had reached that ambitious milestone.

Appropriately, at this year’s virtual Build developer conference, the company made what could prove to be an even more impactful announcement that will allow developers to take full advantage of that huge installed base. In short, the company unveiled something they call Project Reunion that will essentially make it easier for a variety of different types of Windows applications—built via different programming models—to run more consistently and more effectively across more devices.

Before getting into the details, a bit of context is in order. Back in 2015, when then-Executive VP Terry Myerson made the one billion prediction, Microsoft’s OS ambitions extended well beyond PCs. The company was still actively pursuing the smartphone market with Windows Phone, had just unveiled the first HoloLens concept devices and Surface Hub, talked about the role that Xbox One had in its OS plans, and generally was thinking about a multi-device world for its then-new OS.

Looking back now, it’s clear that we indeed entered an era of multiple devices, but the only ones that ended up having a significant impact on the Windows 10 installed base number turned out to be PCs in all flavors and forms, from desktops and laptops, to 2-in-1s and convertibles like the original Surface. In fact, the nearly complete reliance on PCs is undoubtedly why it took longer to reach the one billion goal.

In retrospect, however, that’s actually a good thing, because there are now approximately one billion relatively similar devices for which developers can create applications, instead of a mixed group of devices that were more related to Windows 10 in name than in true capability. Even within this large, similar grouping, however, not all Windows 10 applications were created in or function the same way. Because of some of Microsoft’s early bets on device diversity under the Windows 10 umbrella, the company chose to promote a more basic (and legacy-free) application development architecture that it hoped would ensure applications ran across that wide range of devices. Specifically, Microsoft promoted the concept of Universal Windows Platform (UWP) APIs (Application Programming Interfaces), and a number of developers took them up on these initiatives.

At this point, however, because of some of the limitations in UWP, there really isn’t much need (or demand) for these efforts, hence Project Reunion. At a basic level, the goal with Project Reunion is to provide the complete set of Windows 10 capabilities (the Win32 or Windows APIs) to applications originally created around the UWP concept—in essence to “reunite” the two application development platforms and their respective APIs into a single, more modern Windows platform. This, in turn, allows programmers to have a more consistent means of interaction between their apps and the Windows 10 operating system, regardless of the approach they first took to create the application. In addition, thanks to a number of extensions that Microsoft is making to that model, it allows developers to create more modern, web and service-friendly applications.

For example, Project Reunion is going to enable something the company is calling WinUI 3 Preview 1, a new framework for building modern, fast, flexible user interfaces that can easily scale across different devices. By leveraging these open-source, multi-OS-friendly, Fluent Design-based tools, developers can achieve an even more widespread reach, not only across different Windows 10-based devices but across those running other OS’s as well. Plus, thanks to hooks into previous development platforms, developers can use these UI tools to modernize the look of existing apps as well as build new ones.

Another specific element of Project Reunion is WebView2, a set of tools that lets developers easily integrate native web content within an app and even integrate with browsers across different platforms. As with WinUI 3 and the new, more modern Windows APIs, WebView2 isn’t locked to a specific version of Windows, giving developers more flexibility in leveraging their application’s codebase across multiple platforms.

Microsoft also announced new extensions that allow Windows developers to tap into services built into Microsoft 365 such as Microsoft Search and Microsoft Graph. This allows developers to create a modern web service-like application that can leverage the capabilities and data that Microsoft’s tools provide and offer extensions and connections to the company’s widely used SaaS offerings.

The Project Reunion capabilities look to finally complete the picture around the one billion device installed base that the company promised, but in a much different way than most people originally thought. Interestingly, thanks to the growing importance and influence of the PC—a point that’s really been brought home in our current environment—there’s arguably a less diverse set of Windows 10-based devices to specifically code for than most predicted. However, the new tools and capabilities promised for Project Reunion potentially allow developers to create applications for that entire base, instead of a smaller subset that realistically was all that was possible from the original UWP efforts.

Additionally, because of Microsoft’s significantly more open approach to application development and open source in general since that 2015 announcement, the range of devices that Windows-based developers can target is now significantly broader than even that impressive one billion figure. Obviously, delivering on that promise is a lot harder than simply defining the vision, but it’s certainly interesting to see how Microsoft continues to keep the world of Windows fresh and relevant. Throw in the fact that a new version of Windows—10X—is on the horizon, and it’s clear that 2020, and beyond, is going to be an interesting time for a platform that many had written off, albeit incorrectly, as irrelevant.

COVID-19 Hastens Some Industries’ Shift to Cloud

There is nothing like a crisis to make people and organizations shift their mindsets on cloud-based technologies from “that’s interesting, but we’d never do it” to “how fast can we turn it on?” Bob O’Donnell’s recent column talked about this shift as it relates to cloud-based apps and devices, and today I’d like to talk about two concrete examples that recently came to my attention.

PTC’s OnShape
During a recent call with PTC’s CEO Jim Heppelmann, he mentioned that his company’s OnShape product had seen a considerable uptick in usage since the COVID-19 pandemic had caused colleges and offices to close. OnShape is essentially a cloud-based Computer-Aided Design (CAD) and Product Data Management (PDM) platform suite.

CAD apps typically run on high-powered workstations that leverage high-end CPUs, specific types of high-dollar graphics cards, huge amounts of RAM, and lots of high-speed I/O. While mobile workstations exist, many organizations still rely on desktop versions that were left behind when states, counties, and cities began issuing stay-at-home orders. At best, users might have access to these systems through VPNs; at worst, they were off-limits entirely. The problem was particularly acute at higher education institutions, where engineering students typically go to physical computer labs to work on their projects. With those labs locked up tight, it seemed the remainder of the school year would be lost for many.

PTC has a long-running academic program, and OnShape offers a version of its service free to educational institutions. When COVID-19 hit, a ton of them took PTC up on that offer. According to PTC’s Jordan Cox, the company saw a 300% increase in registrations for the service in March and a 400% increase in April. He says many universities pivoted to use the app to try to salvage the semester.

OnShape is a very interesting product that lets users access a full-fledged design application, typically reserved for use on a high-powered computer, through any Web browser. This means you can access it from a standard Windows PC, Mac, or Chromebook. It also works through mobile browsers running on Android and iOS smartphones and tablets. And because it is a PDM platform as well, it offers integrated version control and release management, which ensures users are always working on the latest design data. That may not sound particularly revolutionary to those of us who have enjoyed these features in modern office suites, but it is not a trivial upgrade for most CAD users.
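Because the entire platform lives in the cloud, the same documents and version data are also reachable programmatically. As a rough, hypothetical illustration of the kind of access this enables, here is a sketch against OnShape’s public REST API; the endpoint path and OAuth2 bearer-token auth are assumptions based on the public documentation, so check the current API reference before relying on them:

```python
# Hypothetical sketch: listing a user's OnShape documents via the REST API.
# The endpoint path and bearer-token auth are assumptions for illustration.
import requests

BASE = "https://cad.onshape.com/api"
TOKEN = "<oauth2-access-token>"  # placeholder, obtained via OnShape's OAuth2 flow

resp = requests.get(
    f"{BASE}/documents",
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for doc in resp.json().get("items", []):
    print(doc.get("name"), doc.get("id"))
```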

Like any platform shift, making such a move can present some challenges, especially if an organization has a long legacy of using certain apps. But like tearing off a bandage, once they’ve done it, they begin to see the opportunities inherent in the new technology.

Arch Platform Technologies
I read about this company in a recent Variety story, and this week it officially launched its platform: a cloud-based infrastructure for visual effects companies. The Arch platform lets companies “set up secure workstations, render farms and storage and workflow management without having to rely on machine rooms.”

Arch’s well-timed product launch has drawn the attention of studios that are grappling with the issue of graphics artists stuck working from home. Typically these employees work in studios that house high-end workstations as well as high-cost render farms that utilize a great deal of power and require a ton of cooling. Shifting these workloads to the cloud means an organization can accommodate employees working from anywhere with a good internet connection.

And there are some critical long-term benefits to leveraging a cloud-based system for visual effects, too. Namely, organizations can eliminate the substantial up-front capital expense of buying new hardware to take on a new job, spreading out the cost over time as they pay a per-month service fee. Another added benefit: Studios can put visual effects employees in places where they can maximize tax incentives.

Shifting workloads to the cloud is hardly a new phenomenon, so it is interesting to watch industries and higher education institutions that have dragged their feet on these types of advancements moving so quickly to embrace them in times of need. Now that many have made the shift under duress, I suspect many will be happy to continue using them once the dust settles, which could have notable ramifications for some hardware categories going forward.

New Workplace Realities Highlight Opportunity for Cloud-Based Apps and Devices

One of the numerous interesting outcomes of our new work realities is that many tech-related ideas introduced over the past few years are getting a fresh look. In particular, products and services based on concepts that seemed sound in theory but ran into what I’ll call “negative inertia”—that is, a huge, seemingly immovable installed base of a legacy technology or application—are being reconsidered.

Some of the most obvious examples of these are cloud-based applications. While there’s certainly been strong adoption of consumer-based cloud services, such as Netflix, Spotify, and many others, the story hasn’t been quite as clear-cut in the business side of the world. Most organizations and institutions (including schools) still use a very large number of pre-packaged or legacy custom-built applications that haven’t been moved to the cloud.

For understandable reasons, that situation has started to change, and the percentage of cloud-friendly or cloud-native applications has begun to increase. Although the numbers aren’t going to change overnight (or even in the next few months), it’s fairly clear now to even the most conservative of IT organizations that the time to expand their usage of cloud-based software and computing models is now.

As a result of this shift in mindset, businesses are reconsidering their interest in, and ability to use, even more cloud-friendly tools. This, in turn, is starting to create a bit of a domino effect, where dependencies and barriers that were previously considered insurmountable are now being tossed aside at the drop of a hat. It’s truly a time for fresh thinking in IT.

At the same time, companies also now have the benefit of learning from others that made more aggressive moves to the cloud several years back. In addition, they recognize that they can’t just start over, but need to use the existing hardware and software resources that they currently own or have access to. The end result is a healthy, pragmatic focus on finding tools that can help companies meet their essential needs more effectively. In real-world terms, that’s translating to a growing interest in hybrid cloud computing models, where elements of the public cloud and on-premises or managed computing resources in a private cloud come together to create an optimal mix of capabilities for most organizations.

It’s also allowing companies to take a fresh look at alternatives to tools that may have been a critical part of their organization for a long time. In the case of office productivity suites, for example, companies that have relied on the traditional, licensed versions of Microsoft Office can start to more seriously consider something like Google’s cloud-native G Suite as they make more of a shift to the cloud. Of course, they may also simply choose to switch to the newly updated, cloud-based Microsoft 365 versions of their productivity suite. Either way, moving to cloud-based office productivity apps can go a long way toward making an IT organization more flexible, as well as getting end users more accustomed to accessing all their critical applications from the web.

Directly related to this is the ability to look at new alternatives for client computing devices. As I’ve discussed previously, clamshell notebook form factors have become the de facto workhorses for most remote workers now, and the range of different laptop needs has grown with the number of people using them. The majority of those devices have been (and will continue to be) Windows-based, but as companies start to rely more on cloud-based applications across the board, Chromebooks become a viable option for more businesses as well.

Most of the attention (and sales) for Chromebooks to date has been in the education market—where they’ve recently proven to be very useful for learn-at-home applications—but the rapidly evolving business app ecosystem does start to shift that story. It also doesn’t hurt that the big PC vendors (Dell, HP, and Lenovo) all have a line of business-focused Chromebooks. On top of that, we’re starting to see some interesting innovations in Chromebook form factors, with options ranging from basic clamshells to convertible 2-in-1s.

The bottom line is that as companies continue to adapt their IT infrastructure to support our new workplace realities, there are a number of very interesting potential second-order effects that may result from quickly adapting to a more cloud-focused world. While we aren’t likely to move to the kind of completely cloud-dependent vision that used to be posited as the future of computing, it’s clear that we are on the brink of what will undoubtedly be some profound changes in how, and with what tools, we all work.

Podcast: IBM Think, PC Industry News from HP, Microsoft, AMD, Samsung, Apple, Lenovo

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the IBM Think conference as well as a number of different PC, OS and chip announcements from major vendors in the PC business and analyzing what it means for the state of the PC category moving forward.

A Huge Opportunity for Tech If It Regains Trust

Among the many fronts that have opened in the war against COVID-19 and the effort to reopen the economy and society was the news two weeks ago that Google and Apple are working together on developing contact tracing apps that could be used by public health authorities. Among my colleagues in the tech community, as well as friends and family, this potentially promising development was greeted with a mix of optimism and trepidation. On the one hand, few people questioned that these two companies, with their collective resources and brainpower, could play a vital role in helping to develop some form of ‘coronavirus early warning system’. At the same time, it also raises questions about consumer privacy, and the potentially undesirable and nefarious ways the data, and the window into our lives it would open, could be used.

This is a really important conversation, because if this initial effort on the COVID-19 front is successful, it could provide a valuable springboard into a much larger opportunity for Big Tech in healthcare. Putting aside Covid for a moment, few would argue that from a systems, IT, and data perspective, our health care system is a bloated, inefficient mess. We spend 2x per capita what other OECD countries do, with relatively little to show for it. You show up at your doctor’s office and are asked to fill out a 1950s-era mimeographed form with skewed letters at the top. Poll question: If you wanted to access your medical records right now, could you? And the well-meaning $50+ billion Obama-era electronic health records (EHR) initiative is so cumbersome it has driven large numbers of doctors away from the profession.

Few would question that tech could play an important role in improving these systems and reducing their cost. COVID-19, and the threat of recurrence and future pandemics in our globally connected society, adds a sense of urgency and focus to the issue. The smartphone — with its combined set of connectivity capabilities (cellular, Wi-Fi, Bluetooth, and GPS), suite of (current and future potential) sensors, apps, and the data about what we do, who we communicate with, and where we go — is the ideal home base for Covid-related tracing and some broader set of opportunities in health care.

The problem is that Big Tech is starting from a relatively low point on the ol’ consumer trust-o-meter. It’s not any one company, or any one particularly egregious action. It’s the collective body of trust and privacy violations, ranging from Cambridge Analytica to consumer data breaches to fake advertisements to the optics of Facebook launching a dating app in the middle of Congressional investigations. It’s the fact that nobody from Theranos has yet been brought to justice, and the near-criminal lack of oversight that enabled the disaster at WeWork and allows Adam Neumann to walk away with billions and sit on a beach in Israel. The beyond-what’s-reasonable profits, incessant need for growth, and short-termism driven by the needs of the stock market, juxtaposed against the widening equality gap now placed into particularly sharp relief, are all ingredients added to this soup.

That said, this is a generational opportunity. Big Tech could be the systems and software companion to the ground-breaking and in-record-speed work being done on the life sciences front to fight Covid and fast-track our return to some semblance of normalcy. We should note the collective brainpower at Apple, Google, and others, and the deep capabilities of their senior leadership. In another time, these are the people that would have been working in government, NASA, universities, or other publicly-funded institutions. But those sectors have been hollowed out by declining budgets and a de-prioritization of their work. Much R&D has shifted from the public sector to the private sector.

Priority One will be accomplishing the immediate mission at hand: developing some form of contact tracing capability, with the proper safeguards and data protections. The stakes are high – even if the development of the app is successful, one high-profile wrong move on the data/privacy front would set the longer-term, bigger opportunities back a ways. But the level of private/public sector collaboration that will be needed to pull this off could be the template for a much bigger play in health care. Think of what Apple+Google+Facebook+Microsoft+Amazon+Salesforce+IBM+Qualcomm+Palantir, etc. could do, with the proper allocation of resources, prioritization, and, yes, potential for profit. This could be THE project of the 2020s, in the way that mobile and cloud were the projects of the 2010s.

One final note: There’s been much discussion about U.S. leadership and its waning role on the global stage. But the list of private sector companies in both Silicon Valley and the life sciences, plus the collective venture capital and private equity assets that might need to be amassed to beat this thing and help deal with the world that results at the other end, provides the type of opportunity that could help reverse this course.

What Windows 10X Can Learn from the Making of Surface

It has been a busy week for Windows and Surface. We started on Monday with a blog post by Panos Panay, Chief Product Officer, Windows and Devices, outlining what is coming in the Windows 10 May 2020 Update, as well as some changes in the rollout plans for Windows 10X. Then today, with another blog, Panay announced the Surface Go 2, Surface Book 3, Surface Headphones 2 and Surface Dock 2, all updates to popular products in the Surface lineup. The announcement also introduced the new Surface Earbuds, first seen back in October 2019. I am sure we will see plenty of reviews of the hardware over the coming days, and I will share my experience as I try some of the products myself. Still, there are broader and more fundamental points linking these two sets of announcements that I thought were worth highlighting.

Panay took over the leadership of Windows about three months ago and, since then, has spoken quite openly about how being able to design hardware and software together would make the Windows experience better for the whole ecosystem. Shared leadership has the potential to accelerate innovation and improve execution, two aspects that, it would be fair to say, Windows could have benefitted from during the past few years. I could not agree more with Panay’s intent, and I am convinced there are vital lessons learned from bringing the Surface portfolio to market that will benefit Windows 10 as a whole and Windows 10X in particular.

New Form Factors Are Hard

The original Surface showed that for PC users, getting used to new form factors takes time. This is especially true when in addition to new form factors, you also have a new operating system with different input mechanisms and UI.

Users, especially in the enterprise, are mostly set in their workflows, often reliant on legacy apps that don’t do well with change. Business users, or maybe their IT managers, also have set expectations of what it takes to do productive work.

The Surface portfolio grew, in some ways, because that early start, aimed at taking users into the PC of the future, had to be accompanied by more traditional form factors for those users who were not quite ready to embrace the future, either because of comfort or because of concrete needs that desktops and notebooks can deliver on. Now Surface has a full portfolio catering to different users and their workflows. One size does not fit all, especially in an enterprise context.

Making New Workflows Natural

As the Surface portfolio was evolving, so was Windows, from Windows 8 to Windows 10. A dual-screen device will certainly require new workflows to take full advantage of the new form factor, and to enable them, Microsoft has been developing Windows 10X. Getting used to a new OS, even when the core stays the same, is even harder than breaking in a new form factor.

Back in October, when we first heard about Windows 10X, I wrote:

“Time and time again, we see users bending backward to fit their workflows around their phones. We do not question whether or not that phone is a computer; we simply use it to get things done. Surface Duo will empower users to find new workflows that take advantage of the dual-screen and highly mobile design. Because it is a phone, Surface Duo will not have to fight for a place in a portfolio of products, which means that users will be heavily engaged with it.”

Windows 10X can help consumers embrace cloud-based workflows now, so they can be ready to transfer them onto dual-screen devices when the time comes, thus making the transition much easier than having to learn both a new form factor and new workflows at the same time.

Business Response to COVID-19 as a Catalyst

The COVID-19 crisis has been an incredible driver of digital transformation. Microsoft’s CEO, Satya Nadella, said, during their earnings call, that he saw two years of digital transformation in two months. Because of our new reality, the needs and priorities of businesses and individuals alike have changed. It is understandable then that some planned releases both of software and devices might have changed also.

In this week’s blog, Panay said:

“With Windows 10X, we designed for flexibility, and that flexibility has enabled us to pivot our focus toward single-screen Windows 10X devices that leverage the power of the cloud to help our customers work, learn and play in new ways. These single-screen devices will be the first expression of Windows 10X that we deliver to our customers, and we will continue to look for the right moment, in conjunction with our OEM partners, to bring dual-screen devices to market.”

Microsoft wants to continue to facilitate this wave of digital transformation to deliver an operating system that is meant for cloud-based workflows. Being able to fit into this wave of change is critical for Microsoft not just for Windows but for Office as well. With more enterprises embracing digital transformation, the search for the right partner and the right tools is on. The strength of having been at the center of most workflows in the past might be seen as a limitation, not an advantage, leading some companies to look for partners like Google, the poster child for the future of work.

Must-Have vs. Nice to Have

The economic downturn kick-started by the COVID-19 pandemic has changed people’s priorities overall, including what they are able and willing to spend on tech. The newfound need to work and learn from home pushed both enterprises and consumers to buy more technology in the past few months than they had likely planned.

Microsoft said they registered a 35% increase in time spent on Windows devices since the beginning of February. People are relying on their PCs more than they have done in a very long time. Under the current stressful circumstances, users want familiarity, straightforward workflows, and ease of use. When the demands for our time and attention are high, the last thing we want is the added stress of figuring out new workflows or new form factors.

Microsoft’s reprioritization of Windows 10X to focus first on delivering a better user experience and improved functionality on single-screen devices fits such needs and requirements. The cost of dual-screen and foldable devices, as well as their unproven track record in enabling productivity, would make it difficult to gain the support of IT managers and the budgets of mainstream consumers.

It might be disappointing for industry watchers not to see highly anticipated devices like the Surface Neo, and frustrating for some partners to have to put their foldable devices on hold. Yet a lot has changed since last October (a lot has changed since last month, really), and for Microsoft to continue with business as usual would be a huge disservice to partners and an insult to customers.

In the Modern Workforce, The Role of PCs Continues to Evolve

It’s been an interesting week for the once again vibrant PC industry. We saw the release of several new systems from different vendors, announcements on the future directions of Windows, and hints of yet more new systems and chip developments on the near-term horizon.

While most of the news wasn’t triggered by the COVID-19 pandemic, all of it takes on a new degree of relevance because of it. Why? As recent US retail sales reports demonstrate and conversations with PC OEMs and component suppliers have confirmed, PCs and peripherals are hot again—really hot. Admittedly, there are many questions about how long the sales burst can last, and most forecasts for the full year still show a relatively large decline, but there’s little doubt that in the current era, the PC has regained its role as the most important digital device that most people own—both for personal and work-related purposes. And, I would argue, even if (or when) the sales do start to decline, the philosophical importance of the PC and its relative degree of usage—thanks in part to extended work-from-home initiatives—will likely remain high for some time to come.

The recent blog post from Microsoft’s Windows and Surface leader Panos Panay provides interesting insights in that regard, as he noted that Windows usage has increased by 75% compared to last year. In recognition of that fact, the company has even decided to pivot on their Windows 10X strategy—which was originally targeted solely at dual-screen devices—to make it available for all regular single-screen PCs. Full details on what exactly that will bring remain to be seen, but the key takeaway is Windows PCs will be getting their first major OS upgrade in some time. To my mind, that’s a clear sign of a vital product category.

Apple is moving forward with their personal computer strategies as well, having been one of several vendors who announced new systems this week. In their case, it was an upgrade to their MacBook Pro line with enhanced components and, according to initial reports, a much-improved keyboard. Samsung also widened their line of Windows notebooks with the formal release of their Galaxy Book Flex and Galaxy Book Flex α 2-in-1 convertibles, and Galaxy Book Ion clamshell, all of which feature the same QLED display technology found in Samsung’s TVs. The Galaxy Book Flex and Ion also have the same type of wireless PowerShare features for charging mobile peripherals as their Galaxy line of smartphones.

The broadest array of new product announcements this week, however, comes from HP. What’s interesting about the HP news isn’t just the number of products, but how the range of offerings reflects several important trends in the PC market overall. Gaming PCs, for example, have been a growing category for some time now, despite the other challenges the PC market has faced. With the extended time that people have been staying home, interest in, usage of, and demand for gaming PCs has grown even stronger.

Obviously, HP didn’t plan things in this way, but the timing of their new OMEN 25L and 30L gaming desktops and OMEN 27i gaming monitor couldn’t have been better. The desktops offer a number of refinements over their predecessors, including a choice of high-performance Intel i9 or AMD Ryzen 9 CPUs, Nvidia RTX 2080 or AMD Radeon RX 5700 XT graphics cards, Cooler Master cooling components, HyperX high-speed DRAM, WD Black SSDs and a new case design. The new gaming monitor features Quad HD (2,560 x 1,440) resolution and a 165 Hz refresh rate with support for Nvidia’s G-Sync technology.

HP showed even more fortuitous timing with the launch of its new line of enterprise-focused Chromebooks and, believe it or not, a new mobile thin client. Chromebooks have been performing yeoman’s duty in the education market for learn-from-home students as a result of the pandemic, but there’s also been growing interest on the enterprise side of the world. While the market for business-focused Chromebooks has admittedly been relatively modest so far, the primary reason has been that most companies are still using many legacy applications that haven’t been optimized for the cloud. Now that many application modernization efforts are being fast-tracked within organizations, however, a cloud-software-friendly device starts to make a lot more sense.

With its latest announcements, HP expanded its range of business Chromebook offerings. They now start with the upgraded $399 Chromebook Enterprise 14 G6, which offers basic performance but a large 14” display and a wipeable/cleanable keyboard, then move up to the mid-range Pro c640 Chromebook Enterprise, and finally end up at the Elite C1030 Chromebook Enterprise. Interestingly, the C1030 is the first Intel Project Athena-certified Chromebook (it features a 10th Gen Intel Core CPU) and offers the same 2-in-1 form factor as HP’s high-end EliteBook Windows PCs. It’s also the world’s first Chromebook made with a 75% recycled aluminum top lid, a 50% recycled plastic keyboard, and speakers made from ocean-bound plastics—all part of HP’s ongoing sustainability efforts.

HP also introduced the mt22 Mobile Thin Client, a device that in another era, would barely get much of a mention. However, with the now critical need in certain industries for modern devices that are optimized for VDI (virtual desktop infrastructure) and Windows Virtual Desktop (WVD), the mt22 looks to be a great solution for workers in regulated or highly secure industries who still need to be able to work-from-home. Finally, HP also announced ThinPro Go, a USB stick that can essentially turn any functioning PC with an internet connection into a thin client device running HP’s Linux-based ThinPro OS. While similar types of devices that work by booting from the USB stick have existed in the past, they once again take on new meaning and relevance in our current era.

All told, HP’s announcements reflect the continued diversity that exists in today’s market and highlight how many different, but essential, roles PCs continue to play. Couple that with the other PC-related announcements from this week and it’s clear that the category continues to innovate in a way that surprises us all.

Podcast: Tech Earnings from Facebook, Alphabet/Google, Microsoft, Amazon, Apple

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing this week’s big tech quarterly earnings reports from Facebook, Google’s parent company Alphabet, Microsoft, Amazon and Apple, with a focus on what the numbers mean for each of the companies individually and for the tech industry as a whole.

Poor First-Quarter Results Foreshadow Challenging Year for Smartphones

At IDC, we released our preliminary 1Q20 smartphone shipment results this week, and they weren’t pretty, with units down 11.7% from the year-ago quarter, reaching just 275.8 million units. That number represents the most significant year-over-year drop we have measured in the market. Unfortunately, things are likely to get worse before they get better, as consumers tighten their spending in the face of continued economic hardship—including massive job losses—and ongoing COVID-19 concerns.

Top Five Vendors
Our prelim numbers put Samsung at the top of the market, with more than 58 million smartphones shipped during the quarter. While the company grabbed better than 21% of the market for the quarter, its volume still represented a nearly 19% year-over-year drop. As Carolina recently noted, Samsung has worked hard to grow its share of the mid-range market with its A-series phones. That line sold well in the first quarter and should serve the company well in a tighter economic environment.

Huawei grabbed the number two spot with a 17.8% share of the worldwide market, on volumes of about 49 million units. That represented a 17.1% decline year over year. The company moves a significant quantity of phones in China, which was the first country to be hammered by COVID-19, and that impacted its overall volumes. The fact that the country is also out front in terms of emerging from initial lockdowns could boost Huawei’s fortunes through the rest of the year.

We estimated Apple’s volumes at 36.7 million phones, good for a third-place spot with a 13.3% share. That represented a less-than-1% drop from the year-ago quarter. Apple also announced earnings on the same day, noting overall revenue during the quarter was up by 1%, and iPhone revenue was $29B, down from $31B a year ago. As I wrote a few weeks back, Apple’s fortuitously timed launch of the iPhone SE could position it well during the coming challenging months.

Rounding out the top five were Xiaomi, which grabbed a 10.7% worldwide share with 29.5 million units shipped on year-over-year growth of 6.1%, and Vivo, which grabbed 9% with 7% year-over-year growth. Both companies saw substantial volumes in India in the first quarter and will be negatively impacted looking forward due to the full lockdown happening there.
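Those share figures follow directly from the unit volumes against the 275.8 million total. Here is a quick sanity check in Python, using only the numbers cited above (Samsung’s exact volume is approximated from “more than 58 million”):

```python
# Sanity-checking the quoted market shares against the 1Q20 total.
TOTAL = 275.8  # worldwide 1Q20 shipments, millions of units

vendors = {
    "Samsung": 58.3,  # approximation of "more than 58 million"
    "Huawei": 49.0,   # "about 49 million units"
    "Apple": 36.7,
    "Xiaomi": 29.5,
}
for name, units in vendors.items():
    print(f"{name}: {units / TOTAL:.1%}")
# -> Samsung ~21.1%, Huawei ~17.8%, Apple ~13.3%, Xiaomi ~10.7%
# (Vivo's unit volume wasn't broken out above; only its 9% share was quoted.)
```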

China’s Rebound, Plus 5G
At IDC, we are taking the unusual step of working on a prelim forecast well ahead of our regular quarterly cadence because our clients are seeking guidance in this challenging environment. It is no easy task forecasting right now with such a high level of uncertainty, and frankly, it is not looking great for the rest of this year. One thing worth noting, however, is that our analysts in China believe the fact that it was the first country to emerge from lockdowns could mean it’s the first to see some light at the end of the tunnel in terms of shipment volume declines. At present, they believe China could even see some growth by the end of the year.

It is an interesting perspective, and it made me think about something that the executives at Qualcomm said during its recent earnings call. While the company is forecasting total worldwide handset shipments to drop by 30% in the June quarter versus its pre-COVID-19 forecast, it did not change its outlook for 5G shipments. It currently believes 5G shipments will hit 175-225 million units during the year. In other words, it now expects the mix of 5G phones to be higher than in its pre-pandemic forecasts. A big part of that bet is on China, where the company said it exited the first quarter with 30% of devices shipped into the channel offering 5G. The company also noted that 71% of all new launch models carried 5G.

Qualcomm’s 5G number seems very aggressive, but the fact that it did not back away from it this week suggests the company has strong reason to believe it can still hit that target, and China will be a key driver there. I am very curious to watch how subsequent 5G launches happen here in the United States, especially as additional lower-cost options appear. As the COVID-19 crisis continues to play out here, it is unclear how many people will be ready to spend money on a new smartphone in the current quarter and the second half of the year. And it could be an especially challenging year for anticipated high-end 5G phones.

Apple’s Most Versatile Computer = iPad Pro and Magic Keyboard

I have so many thoughts about the iPad and the new Magic Keyboard. Since 2010, outside of the role smartphones play in our daily computing lives, the iPad has been one of the hardware products I have spent the most time thinking about.

I go back to the day Steve Jobs launched the iPad. It was there that he framed the iPad in a way that has stuck with me. He said the iPad was unique because it was more intimate than a notebook and more powerful than a smartphone. It was perfectly succinct, and that description was his answer as to why the iPad should exist.

The iPad has evolved considerably through the years. It has become more powerful and more capable, and it has added an array of iPad-only apps. But there has been a continuous debate as to whether it can, or should, replace a notebook. I’ve long believed the iPad was the most approachable computer Apple has ever made. But as it has evolved, so has my thinking, and I would now consider the iPad the most versatile computer Apple has ever made.

This distinction is important, and it is outlined in this video by SVP of software Craig Federighi, in which he doubles down on the iPad’s value being its versatility. It can be used in computer-like ways, and it can be used in smartphone-like ways that are unique to its form and the iPadOS platform. Its only limitation, until now, has been the types of input it supported.

What has been fascinating to observe is how the iPad has increased in function, inching closer to a full-fledged productivity device as the core OS has adapted. But one of the main struggles has remained the lack of a cursor. Many in the “iPad can’t do real work” camp use touch input as their main criticism. They seem content that a keyboard exists, but their complaints show that what they really wanted was a cursor. Apple may have hoped that more work-focused applications would adapt to support, or even innovate on, touch input, but that did not happen. Now that the iPad has mouse/cursor support, and I have been spending time with the Magic Keyboard, it has become glaringly clear that one of the iPad’s struggles has been fighting an uphill battle in the mouse/cursor-based world of productivity, collaboration, and enterprise software.

Frankly, there are just some times when a mouse/cursor is the superior input mechanism.

When it Comes to Work, It is a Cursor-Based World
I have long fought the idea of the iPad supporting a mouse/cursor. In fact, I hated the idea because I felt it would take away from the original vision and be a compromise. Part of me still wishes the software world of productivity had evolved and innovated around touch instead of the cursor, but it did not.

This is where Apple’s interesting approach to the cursor comes in. Apple took the initiative and didn’t just duplicate the Mac/PC cursor and trackpad; rather, it innovated on them specifically for the world of the iPad and iPadOS.

While the iPad supports a mouse/cursor, it is unlike any mouse/cursor input you have tried before. It is context-aware: whether you are clicking, dragging, or selecting text, the shape and state of the cursor intelligently change. Apple has created a software situation where innovation can now come from both touch-based and cursor-based input, as developers now have more input mechanisms at their disposal.

What has always differentiated the iPad from a product like Microsoft Surface, the only real tablet/computer competitor to the iPad, is the world of iOS apps. I love my Surface Pro, but its app ecosystem lags behind the world of iPad apps. Apple has now blended the best of both input worlds (mouse/cursor and touch/pen) with the breadth of iPad apps, desktop productivity apps, creativity apps, entertainment apps, games, and more, creating a truly versatile computer whose versatility is, as of now, unmatched.

The evolution of the iPad has always been about letting it be whatever you want it to be. It is, at its core, a slate, and that canvas can become whatever the user wants it to be. What I had not appreciated in my criticism of mouse/cursor support was how adding it took the iPad one step deeper toward this vision. By supporting every type of input, and truly excelling at all of them, the iPad now meets that true vision of a blank canvas: a platform that allows the product to do anything and everything.

Google Anthos Extending Cloud Reach with Cisco, Amazon and Microsoft Connections

While it always sounds nice to talk about complete solutions that a single company can offer, in today’s reality of multi-vendor IT environments, it’s often better if everyone can play together. The strategy team over at Google Cloud seems to be particularly conscious of this principle lately and is working to extend the reach of GCP and the Anthos platform into more places.

Last week, Google made several announcements, including a partnership with Cisco that will better connect Cisco’s software-defined wide area network (SD-WAN) tools with Google Cloud. Google also announced the production release of Anthos for Amazon’s AWS and a preview release of Anthos for Microsoft’s Azure cloud. These two new Anthos tools are services for both migrating and managing cloud workloads between GCP and AWS or Azure, respectively.

The Cisco-Google partnership offering is officially called the Cisco SD-WAN Hub with Google Cloud. It provides a manageable private connection for applications all the way from an enterprise’s data center to the cloud. Many organizations use SD-WAN tools to manage the connections between branches of an office or other intra-company networks, but the new tools extend that reach to Google’s GCP cloud platform. What this means is that companies can see, manage, and measure the applications they share over SD-WAN connections from within their organizations all the way out to the cloud.

Specifically, the new connection fabric being put into place with this service (which is expected to be previewed at the end of this year) will allow companies to do things like maintain service-level agreements, compliance policies, security settings, and more for applications that reach into the cloud. Without this type of connectivity, companies have been limited to maintaining these services only for internal applications. In addition, the Cisco-powered connection gives companies the flexibility to put portions of an application in one location (for example, running AI/ML algorithms in the cloud), while running another portion, such as the business logic, on a private cloud, but managing them all through Google’s Anthos.

Given the growing interest and usage of hybrid cloud computing principles—where applications can be run both within local private clouds and in public cloud environments—these connection and management capabilities are critically important. In fact, according to the TECHnalysis Research Hybrid and Multi-Cloud study, roughly 86% of organizations that have any type of cloud computing efforts are running private clouds, and 83% are running hybrid clouds, highlighting the widespread use of these computing models and the strategically important need for this extended reach.

Of course, in addition to hybrid cloud, there’s been a tremendous increase in both interest and usage of multi-cloud computing, where companies leverage more than one cloud provider. In fact, according to the same study, 99% of organizations that leverage cloud computing use more than one public cloud provider. Appropriately enough, the other Anthos announcements from Google were focused on the ability to potentially migrate and to manage cloud-based applications across multiple providers. Specifically, the company’s Anthos for AWS allows companies to move existing workloads from Amazon Web Services to GCP (or the other way, if they prefer). Later this year, the production version of Anthos for Azure will bring the same capabilities to and from Microsoft’s cloud platform.

While the theoretical concept of moving workloads back and forth across providers, based on things like pricing or capability changes, sounds interesting, realistically speaking, even Google doesn’t expect workload migration to be the primary focus of Anthos. Instead, just having the potential to make the move gives companies the ability to avoid getting locked into a single cloud provider.

More importantly, Anthos is designed to provide a single, consistent management backplane for an organization’s cloud workloads, allowing them all to be managed from a single location, eventually regardless of the public cloud platform on which they’re running. In addition, like many other vendors, Google incorporates a number of technologies into Anthos that let companies modernize their applications. The ability to move applications running inside virtual machines into containers, for example, and then to leverage the Kubernetes-based container management technology that Anthos is built on, is something that a number of organizations have been investigating.
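Because Anthos builds on Kubernetes, the practical payoff of that consistent backplane is that a single declarative application definition can be applied to whichever registered cluster you point it at, regardless of the cloud underneath. As a rough, generic illustration (this is plain Kubernetes via the official Python client rather than Anthos-specific code, and the kubeconfig context and container image names are hypothetical), deploying a containerized legacy app might look something like this:

# A generic Kubernetes sketch, not Anthos-specific code: the same Deployment
# definition can target any registered cluster simply by switching kubeconfig
# contexts, which is the essence of a single management backplane.
from kubernetes import client, config

# Hypothetical kubeconfig context for an Anthos-registered cluster.
config.load_kube_config(context="anthos-aws-cluster")

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="legacy-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "legacy-app"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "legacy-app"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="legacy-app",
                        # Hypothetical image of a legacy app moved into a container.
                        image="gcr.io/example-project/legacy-app:1.0",
                        ports=[client.V1ContainerPort(container_port=8080)],
                    )
                ]
            ),
        ),
    ),
)

# The identical call works whether the target cluster runs on GCP, AWS, or Azure.
client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)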

Ultimately, all of these efforts appear to be focused on making hybrid, multi-cloud computing efforts more readily accessible and more easily manageable for companies of all sizes. Industry discussions on these issues have been ongoing for years now, but efforts like these emphasize that they’re finally becoming real and that it takes the efforts of multiple vendors (or tools that work across multiple platforms) to make them happen.

Podcast: Intel Earnings, Magic Leap, WiFi6E, Arm-Based Mac

This week’s Techpinions podcast features Ben Bajarin and Bob O’Donnell analyzing the earnings announcements from Intel and what they say about tech industry evolution, discussing the layoffs and repivoting of Magic Leap and what it says about the future of Augmented Reality, describing the importance of the new WiFi6E 6GHz extensions to WiFi, and chatting about the potential for an Arm processor-based future Mac.

The Mid-Tier Smartphone Opportunity

If you live in North America, you could be forgiven for thinking the mid-tier smartphone market died a cruel death a few years ago. According to GfK, 81% of US smartphone sales in 2019 came from smartphones costing $600 or more. Across the rest of the world, however, where those $600-and-higher price points captured anywhere between 11 and 65 percent of sales, the mid-tier smartphone market is alive and well. There are different reasons why consumers choose a mid-tier device. The biggest driver is, of course, cost. Consumers might have limited disposable income to allocate to the device many see as the computer in their pocket. For some, price is not only about what they can afford but about what they are willing to spend: the right balance between cost and return on investment.

Not Every Mid-Tier Smartphone is Created Equal

For many years, mid-tier devices were mostly designed as a stripped-down version of a higher-end device. Brands would start from a higher-end device and lower the feature-set to hit the price they thought was right. Often this meant that phones aimed at emerging markets did not quite feel as if they were designed for the users in that market. There was a disconnect between wants and needs and the product, which in turn was impacting what potential customers were prepared to pay. Another side effect of this lack of focus was that consumers might have preferred to opt for a refurbished or secondhand flagship product or a new but older version of that flagship model.

In the last couple of years, we have seen a drastic change in the way some top brands have been addressing the needs of consumers in this space. The drive behind this more targeted approach was born out of necessity as Chinese brands started to expand outside of China with a very aggressive pricing strategy. Brands like Huawei, Xiaomi and the brands in the BBK franchise were delivering smartphones showcasing features akin to a high-end device with a price point much closer to a mid-tier. Brands like Motorola started to bring to market more tailored mid-tier products and, as their position in the higher-end of the market weakened, they doubled down on product families like the Moto G and the Moto E.

But what could vendors with a robust high-end portfolio do to win back share and reinvigorate upgrades across a sizable part of the market? Well, in September 2018, Samsung’s Mobile CEO at the time, DJ Koh, made a decision that left some industry watchers puzzled. He announced a change in the company’s mid-tier strategy. Koh wanted to bring new technology to market in the mid-tier portfolio first, rather than in flagship products, trying to appeal to the growing number of Millennials across the world who were in the market for a phone but saw flagship models as just out of reach. Before the year was over, we saw the first example of what Koh envisioned in the Galaxy A8s, Samsung’s first phone with an Infinity-O hole-punch display. Then in early 2019, at an Unpacked-like launch event in Bangkok, Thailand and Milan, Italy, Samsung introduced the Galaxy A80, sporting Samsung’s first slide-up triple-camera system and an in-display fingerprint sensor. As the Galaxy A series grew, so did the number of countries where Samsung brought these products, with SKUs that addressed both consumers’ needs and the competitive landscape.

The Times They Are a-Changing in the US

Historically, the Galaxy A series phones you could purchase in the US were unlocked and mostly international models not optimized for American networks or market dynamics. Earlier this month, Samsung announced a whole set of Galaxy A models that will come to the US market starting early this summer. In the portfolio, two models stand out: the A71 5G and the A51 5G, two devices that bring 5G to the $600 and $500 price points. As 5G networks continue to roll out, it is clear that carriers cannot rely only on high-end buyers to get a return on the huge infrastructure investment they have made. Around the same time as Samsung’s announcement, TCL confirmed its TCL 10 lineup, which includes the TCL 10 5G priced at €399, or about $488. Samsung’s strong brand, channel presence, and marketing power will no doubt make TCL’s effort harder. Still, the opportunity in the market is sizable, sadly also due to the economic recession expected to result from the COVID-19 crisis.

This is not the first time the smartphone market has faced an economic recession, but it is the first time such a recession has arrived without strong technology or market shifts underway. Back in 2008/2009, we were still at the very beginning of the smartphone market. Innovation around software, cameras, and 4G technology was converging, giving consumers a solid reason to upgrade from their feature phones. Today most consumers already have a capable smartphone; it might not be the latest model, but it does the job. This means that, as their disposable income is restricted, if they need to upgrade their phone, they will be driven by core purchase drivers such as display size and quality, camera, and battery. Consumers who are particularly pragmatic and usually hold on to their devices for three to four years might also be interested in future-proofing their purchase when it comes to cellular, making an affordable 5G device look appealing. Bringing 5G into the mid-tier should also help Samsung lower the risk of churn toward the newly released second-generation iPhone SE.

The current economic environment might also bring to the US a trend that has been developing in Europe: the resurgence of corporate-liable smartphones in the enterprise market. Having mid-tier devices that tick the box on crucial features will help organizations provide a full portfolio of options that are attractive to users. Samsung’s ability to offer Samsung Knox support across its portfolio, including the Galaxy A series, provides differentiation for enterprises looking for Android devices with attractive features at a lower cost.

What About Cannibalization?

If you are a consumer, having options is great. If you are an investor or an industry watcher, however, you might be concerned about the cannibalization that products like the Galaxy A series and iPhone SE might bring to the flagship models. Well, you should not worry. I want to remind you of one point I made at the start of this article: the needs of consumers who could not afford a high-end phone have been met by older flagship models and secondhand phones. Those will be the two main markets impacted. Vendors will benefit from the higher satisfaction and engagement that the newer features of these mid-tier devices will drive among their users. In turn, that satisfaction and engagement might drive new revenue opportunities from adjacent product categories like wearables, or from new services.

Remote Access Solutions Getting Extended and Expanded

Now that we’re several weeks into work from home mandates and clearly still many weeks (and likely months) away from most people being able or willing to go back to their offices, companies are starting to extend and expand their remote access plans. Early on, most organizations had to focus their attention on the critical basics: making sure people had PCs they could work on, providing access to email, chat and video meetings, and enabling basic onramps to corporate networks and the resources they contain.

However, it’s become increasingly clear that the new normal of remote work is going to be here for quite some time, at least for some percentage of employees. As a result, IT organizations and vendors that want to support them are refocusing their efforts on providing safe, reliable remote access to all the same resources that would be available to their employees if they were working from their offices. In particular, there’s a need to get access to legacy applications, sensitive security-focused applications, or other software tools that run only within the walls of corporate data centers.

While there’s little question that the pandemic and its aftermath will accelerate efforts to move more applications to the cloud and increase the usage of SaaS-based solutions, those changes won’t happen overnight. Plus, depending on the company, as much as two-thirds of the applications that companies use to run their businesses may fall into the difficult-to-access legacy camp, so even sped-up efforts are going to take a while. Yes, small, medium, and large-sized organizations have been moving to the cloud for some time, and some younger businesses have been able to successfully move most of their computing resources and applications there. Collectively, however, there are still a huge number of non-cloud workloads that companies depend on and that can’t be easily reached (or reached at all) from outside the office by many employees.

Of course, there are several ways to solve the challenge of providing remote access to these and other types of difficult-to-reach tools. Many companies have used services like VPNs (virtual private networks), for example, to provide access to some of these kinds of critical applications for years. In most cases, however, those VPNs were intended for occasional use by a limited set of employees, not full-time use by all of them. In fact, there are stories of companies that quickly ran into license limitations with their VPN software providers when full-time use occurred.

Many other organizations are starting to redeploy technologies and concepts that some had written off as irrelevant or no longer necessary, including VDI (virtual desktop infrastructure) and thin clients. In a VDI environment (which, for the record, was going strong in places like health care facilities, financial institutions, government agencies, and call centers even before the pandemic hit, and continues to be), applications run in virtualized sessions on servers and are accessed remotely via dedicated thin client devices or via PCs that have been configured (or recommissioned) to run specialized client software. The beauty of the thin client computing model is that it is very secure: because thin clients don’t have any local storage, all applications and data stay safe within the walls of the corporate data center or other hosted environment.

Companies like Citrix and VMware have been powering these types of remote access VDI computing solutions for decades now. Initially, much of the focus was on providing access to legacy applications that couldn’t be easily ported to run on Windows-based PCs, but the basic concept of letting remote workers use critical internal applications, whether they are truly legacy or not, is proving to be extremely useful and timely in our current challenging work-from-home environment. Plus, these tools have evolved well beyond simply providing access to legacy applications. Citrix, in particular, has developed the concept of digital workspaces, sometimes referred to as Desktop as a Service, which integrates remote access to all types of data and applications, whether they’re public cloud-based SaaS apps, private cloud-based tools, traditional on-premises applications, or even mobile applications, into a single, secure unified workspace or desktop. (By the way, Desktop as a Service is not to be confused with the very similarly named Device as a Service, which entails a leasing-like acquisition and remote management of client devices. Unfortunately, both get shortened to DaaS.)

In addition to these approaches, we’ve started to see other vendors talk more about some of their remote access capabilities. Google, for example, just published a new blog post describing its BeyondCorp Remote Access offering, which enables internal web apps to be opened and run remotely in a browser. Though it’s not a new product from Google (it’s actually been available for several years), its capabilities have taken on new relevance in this extended work-from-home era. As a result, Google is talking more about the organizations that have deployed it, some best practices on how to leverage it, and more.

Most companies are probably going to need a combination of these and other types of remote access tools to match the specific needs of their organizations. The simple fact is that disaster recovery and contingency plans are now everyday needs for many companies. As a result, IT organizations are going to have to operate in these modes for much longer periods of time than anyone could have anticipated. Though it’s a challenging task, the good news is that there is a wealth of solid, established tools and technologies available to let companies adapt to the new normal and keep their organizations running this way for some time to come. Yes, adjustments will continue to be made, security issues and approaches will have to be addressed, and situations will continue to change, but at least the opportunity is there to let people function in a reasonably meaningful way. That’s something for which we can all be thankful.

Podcast: Apple Google Contact Tracing, iPhone SE, OnePlus 8, Samsung 10 Lite

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the surprising announcement from Apple and Google to work together on creating a smartphone-based system for tracking those who have been exposed to people with COVID-19, and discussing the launch of several new moderately priced smartphones and what they mean to the overall smartphone market.

iPhone SE: Apple’s Most Important Product Launch of 2020?

This week Apple announced the launch of the second-generation iPhone SE, a brand new, $399 device with a design and feature set that runs counter to just about every current trend in the smartphone industry. And Apple is likely to sell a ton of them, as this may turn out to be the right product, at the right time, for an awful lot of people.

Next-Generation Internals; Comfortably Familiar Body
Apple built the new iPhone SE upon the chassis of the iPhone 8, but it features fresh internals, including the A13 Bionic processor, the same processor it shipped in its high-end iPhone 11 Pro last year. That alone makes this phone noteworthy, as that chip enables a list of next-gen capabilities around photography, AI, AR, and other technologies. Apple also added WiFi 6 and gigabit LTE, dual SIM with eSIM, wireless charging, and fast-charge capabilities. This list of features ensures that anyone who upgrades from an iPhone that’s two to three years old will get notably improved performance and a robust set of new features to enjoy. And the entry-level product ships with 64GB of storage, likely a good bump for many people.

Perhaps what is most notable about the SE, however, is the trends and technology Apple chose to ignore with this phone. For starters, it is a 4.7-inch phone in a world where everybody else is rocketing toward sizes closer to 7 inches. (My colleague Anthony Scarsella notes that in 2019, phones with screens smaller than 5 inches represented just 4.2% of the market.) That said, this is likely to be a key selling point for a subset of iPhone buyers who have lamented the industry-wide rush to larger screen sizes and who prefer a more manageably sized phone. It’s a perceived negative that I expect many actual buyers will see as a positive.

The SE, as noted, is an LTE phone, not 5G. In 2020, most smartphone vendors expect to ramp up 5G shipments. While the first 5G phones have all carried premium price points, we will see a growing list of Android products land at sub-$500 prices. And we expect Apple to launch its first high-end 5G phones later this year. Despite all this, I believe the SE’s LTE could be a selling point for many buyers. Most consumers looking to buy in this price range know very little about 5G and its advertised benefits. And most are generally happy with LTE performance. Finally, some might be concerned that their carrier will charge them more for 5G coverage. Net net, LTE instead of 5G may equate to a positive, not a negative, for many.

Apple opted to retain the home button on the SE and with it, Touch ID (and the resulting large top and bottom bezels). Bleeding-edge users may see this as a significant compromise, but I’d argue that many people in the market for an SE will see this as a clear benefit. They like the iPhone home button, and they are comfortable with that interface mode. Moreover, they are just not that interested in using Face ID.

Perhaps the biggest perceived downside to the SE, and the trend it bucks the hardest, is its inclusion of a single, 12MP camera. At a time when most (but not all) Android vendors are shipping phones with two, three, or even four cameras, this might seem a deal-killer for many. But the new SE leverages everything Apple has learned about computational photography over the years. Combined with the A13 Bionic chip, I expect it will represent a notable leap in performance over what many buyers are using today, even if their current phone has more than one lens.

The SE’s combination of features, and, frankly, its lack of some features, makes it a strangely compelling new product. In uncertain times, you cannot overestimate the importance of appealing to people’s need for familiarity and comfort. I expect this product to fill that need for many iPhone buyers.

Starting Price of $399
Apple’s decision to ship a new iPhone at the sub-$400 price point will have notable ramifications across its line. Some financial analysts will decry the downward push this will have on the average selling price of iPhones, but that is a short-sighted view. By shipping a new iPhone at this price, Apple is aggressively moving to capture buyers who often shop for iPhones in the secondary market.

In 2019, IDC estimated that over 200 million phones moved through the secondary market, and Apple phones represented nearly three-quarters of that volume. I am told Apple captures very little of the residual value of those reclaimed phones itself, leaving that for its refurbishment partners. Instead, it has focused on the installed-base benefits of getting those phones back into the market, and its ability to sell software and services to those owners.
With the new SE, Apple offers a brand new phone likely to appeal to those customers, and it could have a significant impact on the secondary market. Near-term, it will likely drive down demand for used phones, causing prices to slip. Lower refurb prices mean even more price-conscious buyers may explore the option of buying an iPhone, perhaps their first.

It is also worth noting that while the new SE starts at $399 with 64GB of storage, many buyers will acquire it for notably less, as Apple is offering buy-back options on this phone, too. So, for example, you can currently trade in your existing iPhone 8 and get up to $170 off the price, for a total cost of $229.
Finally, the SE positions Apple quite well for the increasingly important prepaid market, too.

The Right Phone at the Right Time
Apple has already shipped some important new products this year, and we expect it will launch more later in 2020. Future launches could include the first 5G iPhones, new MacBook Pros, and perhaps even the first A-series-based Mac notebook products. But with the SE, Apple has, perhaps through luck as much as planning, launched a phone that may turn out to be precisely the right product at the right moment in time. What may have looked like a cost-down device that lacked some modern features six months ago now looks like a familiar, comfortable product with a time-tested design. Combine that with a reasonable price, and the SE makes sense for buyers who desperately need a new phone but can’t justify a higher-end product during these challenging and uncertain times.

Podcast Extra: COVID-19 Business Continuity

This is an extra Techpinions podcast featuring Carolina Milanesi, Bob O’Donnell and special guest Darcy Ortiz from Intel talking about the critical role of contingency efforts, business continuity plans and how companies can best handle a pandemic from a company that’s in the rare position of having a pandemic planning team for over fifteen years.

The Purposeful Nature of the iPhone SE

Today Apple introduced the highly rumored update to the iPhone SE. With a starting price of $399, Apple says in its press release that the second-generation iPhone SE embodies the core qualities of the original model: affordability, compact size and, thanks to the A13 Bionic, performance. Of course, since the original iPhone SE, the market has changed quite a bit, and so has the concept of a small phone. Considering that today’s phones are as big as 6.7 inches, it is clear that, as cute as it is, a 4-inch screen would have been too small for most people.

I also think it is important to consider the type of customer who would be drawn to the updated iPhone SE, because if there is a model that, in my view, has been designed with a purpose, it is this one. Do not get me wrong: of course, Apple must consider the role that each model plays in its portfolio as well as in the market. Still, Apple does not usually narrow down the addressable market for a specific product. In the case of the iPhone SE, it is hard to ignore the significant opportunity offered by the large portion of the installed base still using a 4.7-inch iPhone. Whether on the iPhone 6, the iPhone 7, or the iPhone 8, those users have become accustomed to that size as well as to features like the home button. It would be fair to characterize this user base as a more pragmatic one that puts value on core features with a long-lasting impact on their experience.

The Name

Leading up to today, the rumors on the name of Apple’s new phone were split between the iPhone 9 and the iPhone SE. Given the changes in the naming convention that we saw last September when Apple moved from the XR to the 11, one can understand why people thought we might have had an iPhone 9. At the time, I wrote:

“While not immediately evident at the start, it became clear that iPhone 11 is the new iPhone XR. The name is a smart move from Apple as it simplifies the naming convention but, even more so, because it does not label the product as inferior. You might not be able to afford the iPhone 11 Pro, or you might not see yourself as a pro user, but you do not feel like you are settling for a “second best” product by buying the iPhone 11.”

The iPhone SE feels like a different kind of product, though. It is not a model we should expect to be refreshed with the regular cadence we see in the rest of the portfolio. Instead, it’s a product that serves the purpose of getting the most pragmatic users to upgrade after holding on to their phones for years. These users might be coming from a hand-me-down or a secondhand iPhone or even be Android users looking for their first iPhone. Last September, I felt the new price of the iPhone XR at $599 and the iPhone 8 starting at $449 offered some great options for upgrades. But in some markets where installment plans are not as common or for those consumers on a tight budget, $50 makes a difference, so the new iPhone SE will certainly further widen the addressable market.

For Apple, upgrades not only drive hardware sales, but they also ensure that as many users as possible can take advantage of Apple’s new services, such as Apple TV+, which comes free for a year with the new iPhone SE.

Future-Proof Purchase

Despite the irregular launch pattern of the iPhone SE, the model still fits into a portfolio, and hitting the right price with the right features seems like a carefully balanced recipe. The iPhone SE offers a single-camera system similar to the iPhone XR’s, but with the computational capability of the A13 chip, which makes up for some of the hardware limitations. The iPhone SE also starts with 64GB of storage, compared to the meager 16GB of its predecessor, and is now also available in 128GB and 256GB configurations at $449 and $549. At the higher configuration, the iPhone SE slides in where the iPhone 8 was and replaces it. Apple gave the iPhone SE wireless charging and fast charging (with the right adapter), two features that are much appreciated by users and that will allow the SE to better compete with similarly priced Android models. Finally, there is Touch ID instead of Face ID. If you have an iPhone model with Face ID, you might never entertain the idea of going back to Touch ID (even with the current mask requirements), but the consumers who are likely to be interested in the iPhone SE love their Touch ID.

If I had to guess when a good time for the next refresh of the iPhone SE might be, I would say another four years, considering that by then 5G will be truly mass market.

Timing

Launching a product in the current environment is certainly not easy. I argued a few weeks ago, when Apple introduced the new iPad Pro, that that kind of device is likely to appeal to a segment of the market that might not be too concerned about the less favorable economic environment. In contrast, a more mainstream device like the iPhone SE, although competitively priced, might remain out of reach in the current uncertain economic climate. I was asked whether Apple should have delayed the launch further, and, to be honest, I am not sure it would have made a difference if the economic downturn many are forecasting materializes. It could be that Apple was hoping to have at least some stores open by now, as the type of customers interested in the iPhone SE might be reluctant to purchase online without seeing the device first. That said, bringing a phone to market impacts supply chain partners as well as channel partners, and delaying any further might have had a channel domino effect on other products, both from Apple and from other brands.

One last point on the iPhone SE: it is one of those slow-burning models that will have a long-tail impact on sales. And considering how much more we now depend on technology, there is a clear need for a phone that does the essentials well, especially for consumers who use their phone as their primary computing device.

Apple Google Contact Tracing Effort Raises Fascinating New Questions

In a move that caught many off guard—in part because of its release on the notoriously slow news day of Good Friday—Apple and Google announced an effort to create a standardized means of sharing information about the spread of the COVID-19 virus. Utilizing the Bluetooth Low Energy (LE) technology that’s been built into smartphones for the last 6 or 7 years and some clever mechanisms for anonymizing the data, the companies are working on building a standard API (application programming interface) that can be used to inform people if they’ve come into contact with someone who’s tested positive for the virus.

Initially, those efforts will require people to download and enable specialized applications from known health care providers, but eventually the two companies plan to embed this capability directly into their respective mobile operating systems: iOS and Android.
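To make the privacy mechanics a bit more concrete, here is a deliberately simplified Python sketch of the rotating-identifier idea. To be clear, this is not the actual Apple/Google specification (which uses its own key-derivation and encryption scheme); every name and parameter below is illustrative only. Each phone broadcasts short-lived identifiers derived from a private daily key, and any matching against the keys of diagnosed patients happens locally on the device:

# A simplified, hypothetical illustration of rotating, anonymized identifiers.
# This is NOT the actual Apple/Google specification; all names and parameters
# here were chosen only to show the privacy-preserving pattern.
import hashlib
import hmac
import secrets

INTERVALS_PER_DAY = 96  # assume a fresh identifier roughly every 15 minutes


def new_daily_key() -> bytes:
    # Each phone generates a random private key every day.
    return secrets.token_bytes(16)


def rolling_identifier(daily_key: bytes, interval: int) -> bytes:
    # Derive the short-lived identifier broadcast over Bluetooth LE. Without
    # the daily key, observers can't link identifiers to each other or to a person.
    return hmac.new(daily_key, interval.to_bytes(4, "big"), hashlib.sha256).digest()[:16]


def check_exposure(published_keys: list, observed_identifiers: set) -> bool:
    # After a positive diagnosis, the patient consents to publishing their daily
    # keys. Every phone then re-derives the identifiers those keys would have
    # produced and compares them, locally, against identifiers it overheard nearby.
    for key in published_keys:
        for interval in range(INTERVALS_PER_DAY):
            if rolling_identifier(key, interval) in observed_identifiers:
                return True
    return False


# Example: a phone that overheard one of the patient's broadcasts gets a match.
patient_key = new_daily_key()
overheard = {rolling_identifier(patient_key, 42)}
print(check_exposure([patient_key], overheard))  # True

The point the sketch captures is that nothing an eavesdropper records can be tied back to an individual, while a phone holding a patient’s voluntarily published keys can check for an exposure entirely on-device.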

Numerous articles have already been written about some of the technical details of how it works, and the companies themselves have put together a relatively simple explanation of the process. Rather than focusing on those details, however, I’ve been thinking more about the second-order impacts from such a move and what they have to say about the state of technology in our lives.

First, it’s amazing to think how far-reaching and impactful an effort like this could prove to be. While it may be somewhat obvious on one hand, it’s also easy to forget how widespread and common these technologies have become. In an era when it’s often difficult to get coordinated efforts within a single country (or even state), with one decisive step, these two tech industry titans are working to put together a potential solution that could work for most of the world. (Roughly half the world’s population owns a smartphone that runs one of these OSes, and a large percentage of people who don’t have one likely live with others who do. That’s incredible.)

With a few notable exceptions, tech industry developments essentially ignore country boundaries and have become global in nature right before our eyes. At times like this, that’s a profoundly powerful position to be in—and a strong reason to hope that, despite potential difficulties, the effort is a success. Of course, because of that reach and power, it also wouldn’t be terribly surprising to see some governments raise concerns about these advancements as they are further developed and as the potential extent of their influence becomes more apparent. Ultimately, however, while there has been discussion in the past of the potential good that technology can bring to the world, this combined effort could prove to be an actual life and death example of that good.

Unfortunately, some of the concerns regarding security, privacy, and control that have been raised about this new effort also highlight one of the starkest examples of what the potential misuse of widespread technology could do. And this is where some of the biggest questions about this project are centered. Even people who understand that the best of intentions are at play also know that concerns about data manipulation, creating false hopes (or fears), and much more are certainly valid when you start talking about putting so many people’s lives and personal health data under this level of technical control and scrutiny.

While there are no easy answers to these types of questions, one positive outcome that I certainly hope to see as a result of this effort is enhanced scrutiny of any kind of personal tracking technologies, particularly those focused on location tracking. Many of these location-based or application-driven efforts to harvest data on what we’re doing, what we’re reading, where we’re going, and so on (almost all of which are done for the absurdly unimportant task of “personalizing” advertisements) have already gotten way out of hand. In fact, it felt like many of these technologies were just starting to see some real pushback as the pandemic hit.

Let’s hope that as more people get smarter about the type of tracking efforts that really do matter and can potentially impact people’s lives in a positive way, we’ll see much more scrutiny of these other unimportant tracking efforts. In fact, with any luck there will be much more concentrated efforts to roll back or, even better, completely ban these hidden, little understood and yet incredibly invasive technologies and the mountains of data they create. As it is, they have existed for far too long. The more light that can be shone into these darker sides of technology abuse, the more outrage it will undoubtedly cause, which should ultimately force change.

Finally, on a very different note, I am quite curious to see how this combined Apple Google effort could end up impacting the overall view of Google. While Apple is generally seen to be a trustworthy company, many people still harbor concerns around trusting Google because of some of the data collection policies (as well as ad targeting efforts) that the company has utilized in the past. If Google handles these efforts well—and uses the opportunity to become more forthright about its other data handling endeavors—I believe they could gain a great deal of trust back from many consumers. They’ve certainly started making efforts in that regard, so I hope they can use this experience to do even more.

Of course, if the overall efficacy of this joint effort doesn’t prove to be as useful or beneficial as the theory of it certainly sounds—and numerous concerns are already being raised—none of these second-order impacts will matter much. I am hopeful, however, that progress can be made, not only for the ongoing process of managing people’s health and information regarding the COVID-19 pandemic, but for how technology can be smartly leveraged in powerful and far-reaching ways.

Podcast: The Global Semiconductor Market

This week’s Techpinions podcast features Ben Bajarin, Mario Morales of IDC, and Bob O’Donnell discussing the state of the global semiconductor market and how the COVID-19 pandemic is impacting major chip and end device companies and the tech industry overall.

Here’s a link to the IDC Semiconductor market forecast that Mario discussed on the podcast: https://www.idc.com/getdoc.jsp?containerId=US46155720