Coronavirus-Induced Pause Gives Tech Industry Opportunity to Reflect

As the news has now made clear, the COVID-19 coronavirus is having a significant impact, not just on the tech industry, but on society and the globe as a whole. There are still huge numbers of unanswered questions about the virus and what its full effect will be. Importantly, and appropriately, most of the focus is on the health and well-being of those impacted and educating people about how to keep themselves and their loved ones safe. There’s also a lot being done to keep people accurately and adequately informed about which concerns are legitimate and which ones are unnecessarily overblown.

At the same time, it’s now very clear that there’s a very practical impact happening to people in the tech industry: their calendars are opening up in a way that many haven’t experienced before. The reason? The cancellation and/or “digitization” of more and more events scheduled for this spring and, likely, into the summer. Not just big events like MWC, GDC and F8, but lots of small public and private events are being cancelled, rescheduled, or in the latest move, “virtualized” to streaming-only digital form.

Combine that with the travel restrictions in place for important tech-focused countries around the world, and the tangible result is that many people in the tech industry are going to be falling way short of their frequent flyer requirements this year. Practically speaking, they’re also going to have more time available to them.

The reality is that this “pause” in the action will likely be short-lived. If history has taught us nothing else, it is that “this too shall pass,” and there will come a time in the hopefully not-too-distant future when coronavirus-related concerns will be nothing but a memory.

For a while at least, though, things are going to be different for a lot of people in tech. So, the important question that comes to mind is, how are people going to be spending that extra time?

I don’t claim to have any brilliant answers, but I certainly hope that, in addition to maybe spending a little more time with our loved ones, some of that newfound time is spent thinking about the direction that some key tech industry trends are heading, and whether or not they’re moving in a manner that people really want or originally intended. On the privacy and security front, for example, there’s arguably a great deal of soul-searching that ought to be done about what kind of data can and/or should be collected about each of us as we go about our digital lives. Similarly, advertising and other information-driven services that leverage (or, in many cases, abuse) that information might want to consider less invasive alternative approaches.

In the case of autonomous cars, I’d argue that it’s time to look past technological advances and figure out how real people actually want to interact with their vehicles. Similarly, it’s worth taking time to think more about how vehicles could be made safer without necessarily becoming dependent on autonomous control.
For many companies, the “found time” may (and should) also lead to more discussions about how to refine their messages and deliver information that doesn’t overpromise what’s possible (as the tech industry has become notorious for doing), but gives people a realistic set of expectations.

There are also bound to be some very interesting discussions about the overall merits of holding big (or even small) events. Again, society and the industry will make it through this, so it will be very interesting to see what people believe was lost and/or gained from the cancellations or recasting of these events. Yes, I’m sure we’ll see more discussions about working from home and video-based collaboration and those are all good things. However, there are still serious questions about how much people are willing to change their work habits for the long-term.

Of course, there are literally millions of other positive ways that people in tech can use this potential opportunity of extra time for good. What I’m afraid might happen, however, is that more of it will be spent on social media, adding yet more undeserved influence to a serious blight on the tech industry’s legacy that, among other things, has already cultivated a heightened level of fear and panic about the coronavirus.

It’s rare that an industry, or a society, suddenly finds itself with access to the precious resource of additional time. In the end, I think that’s one positive thing that we could end up realizing from the unfortunate reality that is now upon us. Let’s hope the newfound time gets used in a positive way.

Podcast: Coronavirus, Intel 5G, Asian Phone Launches, Qualcomm

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the ongoing impact of the Coronavirus on the tech industry, and several news announcements that were originally scheduled for MWC including Intel’s debut of new chips for 5G network infrastructure, the launch of several new Android phones from Huawei, Xiaomi and other Chinese vendors, and Qualcomm’s press briefing on 5G phone momentum, the third generation X60 5G modem, and more.

Intel Focuses on 5G Infrastructure

Despite the cancellation of this year’s Mobile World Congress show in Barcelona, quite a few tech companies are still making important announcements that were originally planned for the show. Not surprisingly, several of those announcements are focused on 5G.

One of them—perhaps somewhat surprisingly—comes from chip leader Intel. The company sold its 5G modem business assets to Apple last fall, and many considered that move to be the company’s exit from the 5G world. But Intel has a much bigger, though significantly less well-known, business creating chips that help power the network infrastructure equipment that sits at the heart of today’s 4G LTE and 5G networks, including base stations and other core components.

For years, much of the network silicon inside these devices was custom designed and built by the vendors making the equipment—companies like Ericsson, Nokia, Huawei, etc. However, with the growth of more software-based networking advancements, including things like network function virtualization (NFV), as well as increasing demand for general compute performance to handle applications like AI at the edge, Intel and other specialized vendors like Marvell have seen strong interest in their off-the-shelf “merchant” chips.

The basic idea is that, as with many other computing platforms, it’s the software that’s driving the biggest innovations in networking. By building on more standardized hardware platforms (x86 for Intel and Arm for Marvell) and leveraging open source software tools and platforms, like Linux, companies can create networking-related innovations at a faster and more efficient pace.

To better address those needs, Intel made several different announcements focused on adding the computing power and specialized resources that new 5G networks require at multiple points along the network path. Starting at the network edge, the company introduced a new version of its Atom processor, the P5900, that’s specifically targeted towards wireless base stations. Based on a 10nm process technology, the new SoC (system on chip) integrates a next-generation set of Atom CPU cores along with network acceleration functions that are specifically targeted at radio access functions, small cells, and other edge-of-the-network applications. Given the strong expected market for the 5G-focused small cells that millimeter wave, mid-band sub-6 GHz, and CBRS-based spectrum deployments will demand—as well as the potential to run cloud-based computing workloads, such as AI, at the edge—this looks to be a very interesting opportunity.

For more compute-intensive workloads at the core of the network, the company also chose to make a number of additions to its second-generation general-purpose Xeon Scalable server processors as part of this 5G announcement. Facing intense pricing and performance pressure from AMD’s second-generation Epyc server processors, Intel added 18 new SKUs to its lineup that offer more cores, faster clock speeds, and larger cache sizes at lower prices than some of its initial second-gen Xeon Scalable parts. In terms of performance, Intel touted up to a 56% improvement for NFV workloads versus some of its first-generation Xeon Scalable CPUs (though the company didn’t clarify performance improvements vs. some of the earlier second-generation parts).

Another key element that’s essential to speeding up the performance of core telecom networks is programmable chips that can be optimized to run network packet processing and other functions that are critical to guaranteeing lower latency and meeting consistent quality of service requirements. These points are becoming particularly important for 5G, which has promised improved latency as one of its key benefits versus 4G.

FPGAs (Field Programmable Gate Arrays) have traditionally done much of this kind of work in telecom equipment, and Intel has a large, established FPGA business with its Agilex line of chips. The power and flexibility of FPGAs do come with a cost, however, in terms of both pricing and power, so Intel debuted its first all-new design in a chip category it’s calling a structured ASIC, a product that’s currently codenamed Diamond Mesa. The idea with a structured ASIC is that it’s essentially only partially programmable, and therefore sits between an FPGA and a custom-designed ASIC. From a practical perspective, that means it offers faster time to market than building a custom ASIC at a lower price and power requirement than an FPGA. To ease the transition for existing FPGA users, however, Intel has designed Diamond Mesa to be footprint compatible with its FPGAs, making it easier to integrate into existing designs. The real-world benefit is that, used in conjunction with the latest Xeon Scalable CPUs, Diamond Mesa will let telco equipment providers create products that can handle the increased performance, latency, and security demands of 5G networks.

The last portion of the Intel announcement centered on, of all things, a wired Ethernet adapter. While much of the focus for 5G and any other telecom network is typically on wireless technologies, the reality is that much of the infrastructure still uses wired connections for interconnecting different components across the network core and to enable certain capabilities. Particularly for applications that require time-sensitive networking—including things like precise industrial automation—we’re still several years away from being able to ensure consistent real-time signal delivery over completely wireless networks. As a result, Intel’s new 700 series network adapter—which incorporates hardware-enhanced precision time protocol (PTP) support that leverages GPS clock signals for cross-network service synchronization, according to the company—still has an important, if not terribly exciting, function to fulfill in 5G networks.

All told, the Intel 5G network infrastructure story offers a pretty comprehensive set of offerings that highlight how the company has a bigger role to play in the latest generation wireless network than many people may initially realize. Of course, it’s a big field, with a lot of different opportunities for many different vendors, but there’s no doubt that Intel is serious about making its presence felt in 5G. With these announcements the company has made several important steps in that direction, and it will be interesting to see what the future brings.

Apple Coronavirus Warnings Highlight Complexities of Tech Supply Chains

As the impact of the coronavirus spreads, Apple issued a rare statement yesterday related to the coronavirus’ impact on its quarterly earnings guidance and that announcement is now reverberating throughout the tech industry as well. The company reported that its current quarter’s earnings will likely be negatively affected by several factors related to the virus, specifically its effect on the Chinese market and its global supply chain.

What makes the news even more disconcerting is that the company had already suggested on its last earnings call just a few weeks back that revenues for the quarter could fall into a much wider range of potential outcomes than it typically provides because of the uncertainties the virus was creating. A second negative statement just a few weeks later highlights that the impact of the virus is proving to be much worse than originally thought. The fact that the company didn’t say how much the earnings guidance decline would be also emphasizes the uncertainty about the total extent of the virus’ impact.

Specifically, Apple said that sales of iPhones in China—an increasingly important market for the company—will be lower than it had predicted because many of its retail stores and other retail partners’ stores have been closed as a result of the virus. In addition, as stores reopen, the traffic in them has been significantly lighter than normal, leading to the slowdown in sales. Theoretically, online sales shouldn’t be impacted as strongly, but it’s not hard to imagine that the delivery mechanisms in China have also been slowed by the virus.

The second factor Apple cited—a slower ramp to full production after Chinese New Year—is potentially more troublesome, because it impacts the company’s entire global supply of iPhones and other devices. In addition, it certainly implies that other major tech hardware vendors could start feeling this soon as well.

As most people know, the vast majority of Apple’s devices are built in Chinese factories, so the company—like nearly every other hardware tech vendor—is currently very reliant on these Chinese factories cranking out products in huge quantities on a steady basis. And that’s really the problem, because if Apple is starting to notice the impact strongly enough that it felt the need to issue a statement on revised guidance, then we’re likely going to see a lot of other hardware-focused tech vendors do something similar over the next few days or weeks. In fact, even before Apple, Nintendo had disclosed that it wouldn’t be able to build as many of its Switch gaming consoles as it would like, because its primary production partner Foxconn—which also happens to be Apple’s largest factory partner—was facing delays at its Chinese factories.

The other thing to bear in mind is that even companies that don’t have factories in the most affected areas of China can see their production slowed because of their dependence on certain parts or other components that do come from the most impacted regions. These days, the number of subcomponents that go into more sophisticated tech devices can easily reach over 100, and because so many of these subcomponents are built in China, the range of impact from the virus is potentially much wider than it first appears.

By the way, the timing here is also very important. One thing that many people don’t understand is that, as terrible as it is, the coronavirus started seriously impacting Chinese factories just before the one week in the year when they’re scheduled to be offline: Chinese New Year. If the virus had hit at another time of year, the impact could have been much worse. Now companies are trying to determine how many workers are returning to the factories after their scheduled break, and it’s those metrics that are going to be the most closely watched over the next few weeks.

In addition, I’ve also heard some hardware vendors say that, out of an abundance of caution, the Chinese government is imposing mandatory factory shutdowns of 30 days if a single worker is discovered to be infected. Needless to say, that’s going to force companies to be very conservative about letting employees come back to work, which could also result in serious delays in production.

Ultimately, however, it’s essential to remember that this issue is an extremely challenging humanitarian crisis and that companies need to be (and, likely will be) sensitive to the issue and do whatever they can to keep their workers safe. Taking the big picture view, these production delays will likely (and hopefully) be little more than a blip on the long-term radar of tech industry production. Unfortunately, because many institutional investors are more concerned with short-term financial performance, this may cause some short-term challenges for those companies who are being impacted. Long-term, let’s hope the tech industry can learn from this crisis and figure out ways to both protect the workers who help bring products to life and to create supply chains that can withstand the inevitable challenges that lie ahead.

Podcast: Samsung S20 and Z Flip Launch, T-Mobile-Sprint Merger, MWC Cancellation

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell analyzing the new smartphone announcements from Samsung, including their 5G-enabled S20 line and the foldable Galaxy Z Flip, discussing the implications of the approval for the T-Mobile-Sprint merger on 5G competitiveness in the US, and chatting about the impact of the cancellation of the big Mobile World Congress trade show.

Arm Brings AI and Machine Learning to IoT and the Edge

For a company that doesn’t manufacture anything, Arm has a surprisingly large and broad impact, not only on the chip industry, but the overall tech industry and, increasingly, many other vertical industries as well.

The company—which creates semiconductor chip designs that it licenses as IP (intellectual property) to customers who then use those designs to build chips—is the brains behind virtually every smartphone ever made. In addition, it has a small but growing market in data center and other network infrastructure equipment and is the long-time leader in intelligent devices of various types—from toys to cars and nearly everything in between; essentially the “things” part of IoT (Internet of Things).

As a result, it’s not terribly surprising to see the company pushing ahead on new innovations to power more devices. What is unexpected about Arm’s latest announcements, however, is the degree of performance that it’s enabling in microcontrollers—tiny chips that power billions of devices. Specifically, with the launch of its new Cortex-M55 Processor, companion Ethos-U55 microNPU (Neural Processing Unit) accelerator, and new machine learning (ML) software, Arm is promising a staggering 480x improvement in ML performance for a huge range of low-power applications. That’s not the kind of performance number you typically hear about in the semiconductor industry these days. Practically speaking, it turns something that was nearly impossible into something very doable.

More importantly, because microcontrollers are the tiny, unsung heroes powering everything from connected toothbrushes to industrial equipment, the potential long-term impact could be huge. Most notably, the addition of AI to all these types of “things” offers the promise of finally getting the kind of smart devices that many hoped for in areas from smart home to factory automation and beyond. Imagine adding voice control to the smallest of devices or being able to get advanced warnings about potential part failures in slightly larger edge computing equipment through onboard predictive maintenance algorithms. The possibilities really are quite limitless.

The announcements are part of an overall strategy at Arm to bring AI capabilities to its full range of IP designs. Key to that is work the company is doing on software and development tools. Because Cortex-M series designs have been around for a long time, there’s a large base of applications that device designers can use to program them. However, because a great deal of ML and AI-based algorithm work is being created in frameworks, such as TensorFlow, the company is also bringing support for its new IP designs into TensorFlow Lite Micro, which is optimized for the types of smaller devices for which these new chips are intended.
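To make that software support a bit more concrete, below is a minimal, illustrative sketch (in C++, the language TensorFlow Lite for Microcontrollers itself uses) of what running a tiny ML model on a Cortex-M class device generally looks like. This is not Arm’s or Google’s reference code: the model array (g_model_data), the arena size, and the input-filling loop are placeholder assumptions, and exact header paths and class names vary somewhat between versions of the library.

```cpp
// Minimal sketch of on-device inference with TensorFlow Lite for
// Microcontrollers. "g_model_data" is a hypothetical model flatbuffer
// compiled into the firmware image.
#include <cstdint>

#include "tensorflow/lite/micro/all_ops_resolver.h"
#include "tensorflow/lite/micro/micro_error_reporter.h"
#include "tensorflow/lite/micro/micro_interpreter.h"
#include "tensorflow/lite/schema/schema_generated.h"

extern const unsigned char g_model_data[];

// All of the model's working memory must fit in this statically
// allocated arena; the size here is just an illustrative guess.
constexpr int kTensorArenaSize = 10 * 1024;
static uint8_t tensor_arena[kTensorArenaSize];

void RunInferenceOnce() {
  static tflite::MicroErrorReporter error_reporter;

  // Map the flatbuffer into the library's model representation.
  const tflite::Model* model = tflite::GetModel(g_model_data);

  // The resolver connects the ops used by the model to their
  // microcontroller-optimized kernel implementations.
  static tflite::AllOpsResolver resolver;

  static tflite::MicroInterpreter interpreter(
      model, resolver, tensor_arena, kTensorArenaSize, &error_reporter);
  interpreter.AllocateTensors();

  // Fill the input tensor with (placeholder) sensor samples, e.g.
  // accelerometer or audio data.
  TfLiteTensor* input = interpreter.input(0);
  for (size_t i = 0; i < input->bytes; ++i) {
    input->data.int8[i] = 0;
  }

  // Run the model and read back the output scores.
  if (interpreter.Invoke() == kTfLiteOk) {
    TfLiteTensor* output = interpreter.output(0);
    (void)output;  // output->data.int8[...] holds the results
  }
}
```

The point of the sketch is simply that the application-level programming model stays the same whether the kernels run on the CPU alone or get offloaded to something like the Ethos-U55; the performance work Arm is describing happens underneath this kind of code.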

In addition to software, there are several different hardware-centric capabilities that are worth calling out. First, the Cortex-M55 is the first microcontroller design to incorporate support for the company’s Helium vector processing technology, previously only found on larger Arm CPU cores. The M55 also includes support for Arm Custom Instructions, an important new capability that lets chip designers create custom functions that can be optimized for specific workloads.

The new Ethos-U55 is a first-of-its-kind dedicated AI accelerator architecture that was designed to pair with the M55 for devices in which the 15x improvement in ML performance that the M55’s new design offers on its own is not enough. In addition, the combination of the M55 and the U55 was specifically intended to offer a balance of scalar, vector, and matrix processing, which is essential to efficiently running a wide range of machine-learning-based workloads.

Of course, there are quite a few steps between releasing new chip IP designs and seeing products that leverage these capabilities. Unfortunately, that means it will likely be sometime near the end of 2021 and into 2022 before we can really see the promised benefits of a nearly 500x improvement in machine learning performance. Plus, it remains to be seen how challenging it will be for low-power device designers to create the kinds of ML algorithms they’ll need to make their devices truly smart. Hopefully, we’ll see a large library of algorithms developed so that device designers with little to no previous AI programming experience can leverage them.

Ultimately, the promise of bringing machine learning to small, battery-powered devices is an intriguing one that opens up some very interesting possibilities for the future. It will be interesting to see how the “things” develop.

Podcast: Nvidia Cloud Gaming, Microsoft Reorg, China Android Store, Curated Content, Diversity

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the newly announced GeForce Now cloud-based gaming service from Nvidia, analyzing the reorganization at Microsoft that combines the Windows client and Surface teams, examining the potential impact of a proposed Android store alternative from major Chinese smartphone makers, and chatting about curated reading lists as well as diversity and inclusion issues in tech.

Nvidia Opens Next Chapter of Cloud Gaming

It’s not often that something people do just for fun turns out to be tremendously impactful from both a revenue and technology perspective, but that’s exactly the case for the many options that people are going to have for playing games over the internet.

In fact, cloud-based gaming is expected to be one of the most important trends of the new decade. Not only does it offer the potential to drive large amounts of income for many different companies, the technical demands required are going to have a big impact across a number of different areas. Everything from cloud-based computing services, to 5G and other wireless network infrastructure, to individual device components and architectures are now being optimized to enable high-quality, cloud-based gaming services.

It’s not hard to see why. Gaming has become a huge global phenomenon, and interest in gaming across multiple devices has grown tremendously—a point that my 2019 study on Multi-Device Gaming made abundantly clear (see “PCs and Smartphones Duke it Out for Gaming Champion” for more). As a result, companies are eager to create solutions that can tap into the enormous interest in gaming in a way that gives consumers more flexibility (and better performance) than they’ve had before.

Not surprisingly, graphics chip leader Nvidia has been involved with several of these efforts, but none as directly as its own GeForce Now game streaming service, which just became generally available across the US and several other nations around the world starting today. The basic idea with GeForce Now—which has been in a private beta period for several years—is that it enables people to play high-quality, graphically intensive PC games across a range of different devices including PCs, Macs, Android Phones, Nvidia’s own Shield device, and certain smart TVs via an internet connection. (Support for Chrome-based devices is expected later this year.) Importantly, the games are running on cloud-based servers in Nvidia’s (or a few regional partners’) own dedicated data centers and are powered by the company’s high-end GeForce GPUs. As a result, the quality of the gaming experience is nearly what you’d expect it to be on one of today’s best dedicated gaming PCs. However, you can achieve that quality consistently on any of the different device types—even on older devices without any dedicated graphics acceleration hardware. Plus, you have the ability to start a game on one device and then pick up where you left off on another one, a capability that my previously mentioned research suggests is eagerly prized by many gamers.

Technologically, Nvidia is leveraging its ability to virtualize GPUs in its data centers and is using a variety of compression techniques and screen-sharing protocols to deliver remote access to its super-powered cloud-based computers. One nice improvement that the company is bringing to the GeForce Now service with its public launch is the ability to support its RTX real-time ray tracing technology (in games that use it). Until now, that capability has only been found in its highest-end graphics cards, like the RTX 2080, so this should bring it to a much wider audience.

Nvidia is taking an interesting, and different, approach to the games available on the GeForce Now platform than some of the other cloud-based game services that have been announced. Because it’s actually running PC games on PC hardware, it allows customers of the service to play their existing library of PC games—they simply have to provide proof of ownership of the title and they can access it via their GeForce account. In addition, there are hundreds of free-to-play games, and consumers can use their existing PC game store accounts. Also, because it’s all being stored and run in the cloud, game patches and driver updates (two common banes of PC gamers’ existence) are taken care of automatically, without any interventions on the user’s part. In other words, Nvidia is trying to make the process of using the service as seamless as possible for both casual and hardcore PC gamers.

From a pricing perspective, the company is providing two options with its public launch. You can have an unlimited number of up to one-hour gaming sessions for free, or you can sign up for the $4.99/month Founders account (the first three months are free), which gives you priority access to the service, lets you have up to six-hour sessions, and turns on the RTX ray-tracing support.

In some ways, you could argue that GeForce Now is a bit of a risky business proposition for Nvidia, because, if enough consumers find the service to be sufficient for their needs, they could end up buying less dedicated gaming hardware. Plus, given the high cost of building out and maintaining the data centers necessary to run GeForce Now, especially in comparison to its very low pricing, it seems like profitability could be a challenge—at least initially. Ultimately, though, Nvidia seems confident that GeForce Now won’t replace dedicated gaming PCs for hard-core gamers and could even entice more casual gamers to better appreciate what high-quality PC gaming experiences can enable, which may in turn get them to purchase their own dedicated PC gaming rigs as well. If that proves to be the case, it could end up being a nice bit of incremental revenue as well as a technological showcase for what the best of PC gaming can offer.

LEGO Education SPIKE Prime Is STEAM Made Easy

Last week LEGO Education celebrated its 40th birthday with the launch of a new product called SPIKE Prime, a kit that brings together LEGO bricks, smart hubs, and motors, all brought to life by the Scratch-based SPIKE app and 32 lesson plans.

LEGO Education SPIKE Prime costs $329.95, and the app—which uses LEGO’s own variant of Scratch—is supported on Chrome, Windows 10, Mac, Android, and iOS devices. SPIKE Prime is intended for schools, and the price reflects its target market. The real value of the solution is in the lesson plans, which will be very helpful to teachers who are just getting started with STEAM or have little time at their disposal to design and plan their own lessons.

I had the opportunity to test SPIKE Prime with my daughter, who is in 6th grade and homeschooled. We walked through set up and followed a few lessons together. We are not new to coding and robotics, but we do not have a set curriculum for STEAM as part of our school day. We have had mixed results using products such as Sphero and Anki Cozmo, so I was quite curious to test ease of use and engagement on her part as well as the degree of effort on mine.

Familiarity Drives Confidence

The first thing that strikes you as you open up the SPIKE Prime box is how familiar the set looks. While there are bricks that are unique to the set, it all feels like something you have seen your kids do before or something you might have stepped on once or twice if your children are avid LEGO players. Sorting and storing LEGO bricks is probably one of the most challenging issues a parent or teacher can face in the process and the team has thought about this. The box has a secure click-lock mechanism and shelves with pictures of the pieces that need to be stored on each one so that finding what you are looking for when you are building is easier. I was dealing with just one kit, but, of course, in a classroom, you will have multiple sets being used at the same time, which turns into a bit of a logistical nightmare without some help.

The familiarity with the LEGO bricks helps with keeping an open mind about trial and error. In many ways, the process was no different than seeing my kid build her Captain Marvel Ship set over the Christmas holidays. The worst that can happen if you make a mistake with LEGO is that you need to undo a few steps and start over. What is different with SPIKE Prime, of course, is that once the construction part was over, we moved on to phase two: experimenting with programming the hub. This took the form of testing hypotheses, such as how many steps it took the Hopper to reach the end of the table or how heavy an object the Grabber was able to lift without dropping it. There is no better way to learn than while you are having fun with your peers. Learning and teaching blend into one over the lessons as kids are empowered to share ideas.


If you are familiar with Scratch, you will have no problem programming the hub. I thought that adding the ability to program the hub with lights was a welcome creativity bonus to add a more personal touch to the creations.

With some help from the teachers, it is easy to see how a 30-minute lesson could turn into a teaching moment on speed and velocity or strength and weight.

Flexibility for Teachers

Teachers can undoubtedly build full lessons around SPIKE Prime that go beyond what the app offers and bring in math, science and other disciplines to help kids connect the dots in a more fun way by looking at cause and effect.

The biggest concerns teachers have when you mention STEAM usually involve one of two things: time and effort. How much time will they have to dedicate to learning something new, and how much effort will it require?

In my short experience, I was happy to see that there is no real learning curve when it comes to the lessons. They are structured in such a way that students can play through the instructions and get things done on their own, or they can be expanded by integrating other learnings, as I suggested earlier. Either way, most of the time the teacher spends will be around how they prefer to plan the lesson. All I did while my daughter was building was pass her the bricks and then ask questions about what she was testing when programming the hub.

Another aspect I appreciated about the app is that the instructions are very similar to those you find in LEGO sets, which put visuals over words. Although SPIKE Prime is aimed at middle schoolers, who should be more confident readers, I find that coding programs usually tend to overdo it with words, which ends up boring the kids.

A Path for Growth

One of the aspects of LEGO Education I love the most is the path for growth that it offers kids. Since LEGO Education was established in 1980, it has thought of ways to expand its products and efforts so that they could incorporate more kids. Of course, expanding your addressable market makes good business sense, but LEGO did so with well-thought-out products and solutions that really catered to the age group it was targeting.

As much as STEAM is popular with schools and kids are being told that it is their future, any parent knows that getting kids to stick with something like an after-school program is a real challenge. Having the opportunity to integrate LEGO Education SPIKE Prime into the curriculum will create stronger foundations that can be further developed through after-school programs like FIRST LEGO League, which brings designing, testing, and evaluating robotic prototypes into the equation.

The consistency of an approach that puts experiential learning, problem-solving, collaboration, and communication at the center of the experience ensures that kids learn critical skills for the future. They will also build an appreciation for technical professions that they might consider as their own career path. Any step to bring more diversity into tech is a welcome one! If you have ever attended a FIRST LEGO League event, you know there is no pipeline issue in K-12. We just need to find ways to support these interests and passions all the way into the workplace.

Cloud Workload Variations Highlight Diversity of Cloud Computing

One of the biggest misconceptions about cloud computing is that companies pick a single cloud computing platform and then stick with it for all their cloud computing efforts. As new research from TECHnalysis Research points out, today’s businesses are using a multiplicity of cloud providers and cloud types, and are putting different workloads in different places, often for different reasons.

In last week’s column (see “New Research Shows It’s a Hybrid and Multi-Cloud World”), I described the overall purpose and approach for the new Hybrid and Multi-Cloud Study. Also mentioned was the overall diversity of cloud computing efforts, including the fact that companies are using an average of 3.1 different public cloud providers across IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) applications, and that usage of private and hybrid cloud environments is very strong. This week, I’ll dive a bit deeper into the types of workloads organizations are running in the cloud, where they’re running them, and why.

First, it’s interesting to get a perspective on what types of applications companies have moved to the cloud. As Fig. 1 illustrates, there is a wide variety of different applications that are being run in the cloud, most of which are being used by more than half of all the survey respondents. (Note that Figure 1 provides an overall view of cloud-based workloads across public, private, and hybrid clouds.)


Fig. 1

Not surprisingly, Databases are the most common workloads being used overall, and they also happen to be the top choice within public, private, and hybrid cloud installations individually. Of course, the types of Databases and their specific function can vary quite a bit, but it’s clear that businesses have become very comfortable running them in cloud environments. Analytics-based workloads were the second most popular overall, followed closely by Web or Content Hosting. While details can vary by company, many of the Analytics-focused workloads are likely new variations on Big Data efforts that have been at the core of enterprise computing for most of the last decade. Web/Content Hosting is, of course, ideally suited to a cloud-based environment and likely one of the first that many organizations chose to move to the cloud.

Interestingly, when you break down the workload types by just Public Cloud platforms, the order of the two is reversed, with Web/Content Hosting being second and Analytics-focused efforts being third, emphasizing the importance that web hosting plays in the Public Cloud. The story is much different when you break down the top workloads for Private/Hybrid Cloud environments, however. There, vertical-specific Industry Market Solutions are second after Databases, followed by workloads focused on Legacy/App Migration to Containers. Both make perfect sense for the types of applications that businesses want to keep a bit more private as well as allow their internal development teams to create new company-specific software solutions.

The diversity in cloud computing choices extends to specific Public Cloud Computing providers as well. It turns out companies are selecting different providers for different workloads, as the data in Table 1 illustrates. Specifically, it lists the top 5 workloads that survey respondents said they were running at each of the top 3 public cloud platforms: Microsoft’s Azure, Amazon’s AWS, and Google’s GCP.


Table 1

As you can see, while there is some degree of commonality, there’s also a surprisingly large amount of difference, with Databases being the only workload that made the top 5 of all three providers. What’s interesting is that the top workloads on each platform reflect a bit of each company’s heritage in the cloud and overall. In Azure, for example, Microsoft’s heritage of database strength with its SQL Server platform clearly has an influence, while the strong tie between many vertical-specific applications (which are incorporated in Industry Market Solutions) and Windows also undoubtedly played a role. As the early leader in cloud computing environments, Amazon is the logical choice for many Web/Content Hosting platforms, while its early PaaS efforts focused on data analysis capabilities. Finally, given Google’s heritage as an overall cloud innovator that built services based on large databases and is generally credited with developing many of the cloud-native software development tools and processes, the strength of Database and Software Development/DevOps workloads makes sense as well.

Cloud-based computing is clearly continuing to take on an increasingly important role for companies of all sizes and, as this research illustrates, the range of cloud types and platforms now being used gives organizations a wealth of choices in determining how to spread out their various workloads. As with virtually anything related to enterprise computing, those choices can quickly become a bit overwhelming. At least now organizations can see that there aren’t necessarily wrong choices when it comes to the cloud, but rather, a host of options that they can consider.

(You can download a free copy, in PDF format, of the highlights of the TECHnalysis Research Hybrid and Multi-Cloud Study here. The full 123-slide version is available for purchase.)

Podcast: Cloud Study, Microsoft Edge Browser, Google Cookies, NBC Peacock

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the results of a new study on hybrid and multi-cloud computing, analyzing the impact of the official launch of Microsoft’s Chromium-based Edge Browser as well as Google’s plan to remove cookies from Chrome, and chatting about the launch of the NBC Universal/Comcast streaming service Peacock.

The Market for Used Phones, Tablets, and PCs Continues to Expand

The secondary market for devices (used and refurbished products) continues to grow, driven by a wide range of factors including increasingly high-priced flagship products, changes in financing models, and hardware vendors’ increasing focus on adjacencies such as services and accessories. Another factor that will push growth of used markets outside of smartphones is the rise of Device-as-a-Service plans worldwide. Much of the industry is closely watching this space, as rapid growth here obviously has wide-ranging impacts across technology markets.

Used Phone Growth
Used and refurbished devices aren’t a new phenomenon, but the sheer volume of devices has increased dramatically in recent years. My IDC colleagues Anthony Scarsella and Will Stofega recently published a report that takes an in-depth look at the used smartphone market. In that report, Scarsella—an industry expert who has worked in and covered the secondary market for many years—estimates that the market for secondary phones grew to over 206 million units in 2019, up from about 176 million in 2018. And he expects the market to continue to grow, reaching close to 333 million phones by 2023. (Note: The secondary market consists of used and refurbished phones.)

That’s notable growth, especially when the market for new smartphones has seen growth slow or decline in recent years. But it shouldn’t be surprising to anyone who has been paying attention. One of the key reasons this market continues to grow is that most vendors’ top-of-the-line phones have increased in price over time. The result: These flagship phones are often out of reach for a big segment of the population, and they hold substantial value years after they launch into the market.

Another driving factor is the shift from subsidy models to financing, especially as top-tier vendors have sought to utilize trade-ins as a means of easing the burden of acquiring higher-priced phones. For example, right now Apple is offering up to $500 toward the purchase of an iPhone 11 Pro when you trade in an iPhone XS Max. And the company will pay for phones all the way down the list to the iPhone 6 ($20).

Many of those trade-in phones go through a refurbishment process and end up in the secondary market. The promise of recouping value down the road also incentivizes people to take good care of their phones. The result is better-quality phones flowing into the secondary market, very similar to the way the rise of new-car leasing has led to the greater availability of well-maintained vehicles on the used market.

People laser-focused on new phone shipments often express concern about the sale of used phones negatively impacting the market, and it’s a legitimate concern in a growth-challenged market. But it’s also missing the forest for the trees, as the secondary market drives a wealth of positive outcomes, too. Chief among them: Ecosystem growth. Used and refurbished phones make it possible for more customers to buy phones they couldn’t afford new. In the case of Apple, that means the growth of the iOS installed base, an increasingly important metric as the company focuses on selling a wide range of services, from music and TV to games and news. Plus, a used phone is still a new phone to its next owner, and they will do what most people do when they get a new phone: buy new accessories for it. So, for example, that means a large percentage of secondary market phones will have driven the sale of two (or more) cases during their lifetimes.

One other interesting fact about used phones: They’re not just for consumers anymore. Research shows an increasing number of companies who buy phones for their employees are utilizing the secondary market, too. Companies that might have opted for a lower-cost, less feature-rich new phone in the past to save money are now actively looking at higher-end used phones, which often include better security and privacy technologies.

Beyond Phones
The industry has largely focused its attention on used phones, but over time I expect interest to grow across device categories. High-end tablets, such as Apple’s iPad, have long been a part of the secondary market, as have both consumer- and commercial-focused notebooks. I expect both categories to grow in importance over time, and one of the driving factors here will be the rise of Device as a Service (DaaS). I’ve talked at length in the past about this category, where companies move away from simply buying hardware outright and instead pay a monthly fee that includes hardware as well as a bundle of software and services. Implicit in the DaaS model is a faster refresh rate that typically sees companies deploying new hardware every 24 to 36 months. I expect an increasing percentage of companies to shift over to DaaS in the coming years, and the key to success with this model is the provider’s ability to capture the residual value of those reclaimed devices. As DaaS ramps up, we should see an increase in the availability of recent-model, well-maintained corporate PCs, tablets, and smartphones. As with phones, this will drive a wealth of mostly positive knock-on effects for the market.

In 2020, my colleagues and I will be taking a much closer look at the entire secondary market, inclusive of PCs, tablets, and smartphones. Huge technology and market changes are coming, and we plan to explore their impacts on both the new and used segments of the market. For example, what impact will the shift to 5G smartphones have on the used market? How is the industry dealing with increased incidents of fraud (including sales of used devices as new)? What is the long-term impact of the rising labor costs to refurbish devices? How will the restricted availability of rare earth elements (REEs) critical to device production manifest itself? Each of these could have profound impacts, so we’ll be watching closely. Stay tuned.

It’s 2020 and PCs are Alive and Kicking

It’s getting to be a familiar theme. As with last year’s event, some of the most interesting announcements from this year’s CES in Las Vegas are focused around PCs. In fact, this year, there are probably more PC developments from a wider variety of vendors than we’ve seen in quite some time. From foldable displays, to 5G, to AI silicon, to sustainable manufacturing, the latest crop of PCs highlights that the category is not only far from dead, it’s actually at the cutting edge of everything that’s expected to be a hot topic for this new decade.

On top of that, some of the most important advancements in PC-focused CPUs in a long time have also been announced at the show, promising big leaps in bread-and-butter performance metrics for the coming year as well. In short, it’s a real PC renaissance.

Probably the flashiest new PC from CES is technically one that’s already been hinted at before, but whose final details were just released at the show: Lenovo’s ThinkPad X1 Fold. Leveraging a plastic OLED display from LG Display (similar in concept to what’s used on foldable phones like the Samsung Galaxy Fold and Motorola Razr), the X1 Fold shrinks a 13.3” screen down to a small leather-wrapped portfolio size when it’s folded in half. Unlike the phone displays, however, the X1 Fold supports pen input from the included active stylus.

In addition, the Intel Hybrid Technology (formerly “Lakefield”)-powered X1 Fold supports several different modes of operation, including a completely unfolded tablet-style mode, and a partially folded traditional notebook style, which gives you the option to either use a soft keyboard or treat the display as two separate screens. Importantly, the $2,499 device includes a magnetic Bluetooth keyboard that functions as you would expect but can also be stored and charged inside the X1 Fold when it’s folded. That’s critical for the many people who have had challenges with (or simply stayed away from) early experiments with dual-screened notebooks. In addition, Lenovo plans to offer optional 5G support. The first version of the X1 Fold is expected mid-year and will run Windows 10, but the company also plans to offer the ability to run Windows 10 X (the forthcoming dual-screen and foldable-optimized version of the OS that Microsoft announced when it previewed its Surface Neo foldable device) later this year.

In conjunction with Qualcomm, Lenovo also showed what they claimed was the world’s first 5G PC, the $1,499 Yoga PC. The new notebook is an Arm-based Qualcomm 8cx-powered device that—somewhat surprisingly—supports both sub-6 and mmWave variations of 5G, thanks to some advanced antenna development work by Lenovo.

HP had a number of interesting new announcements at this year’s show, including an update to its super-lightweight 13.3” DragonFly notebook, which features an integrated sub-6 GHz 5G modem (from Qualcomm), as well as another version that offers a built-in Tile device for easily locating the notebook in the event it’s lost or stolen. (Unfortunately, both options use the M.2 slot, so there isn’t yet a version that offers both.) The Intel-based DragonFly Elite G2 supports an optional 4K HDR display and an optional integrated privacy screen via the company’s Sure View Protect that prevents the screen from being read at an angle. Even more importantly, several components of the DragonFly are built from recycled materials, including the speaker enclosure, which is made from 100% ocean-bound plastics, and the chassis, which is 90% recycled magnesium.

For content creators and gamers, the company also debuted the first all-in-one desktop system featuring Nvidia’s RTX technology for real-time ray-tracing support. The HP Envy 32 AIO features a 31.5” 4K HDR-enabled display, the Nvidia RTX 2060 GPU, 9th generation Intel Core CPU and a Bang & Olufsen designed audio subsystem for a robust multimedia experience.

Dell showed off an updated version of its groundbreaking XPS13 that now extends its nearly bezel-less Infinity Display to all four sides, as well as a number of very cool-looking concept PCs, including its own foldable design and a gaming-focused device. In addition, Dell’s new Latitude 9750 2-in-1 is a 15-inch device weighing 3.2 pounds that features integrated sub-6 GHz 5G. The 9750 also leverages a number of AI-based features designed to subtly improve the performance and battery life behind the scenes thanks to some new Intel-developed software.

On the gaming side, Dell also unveiled the new $799 G5 SE notebook, which leverages AMD’s latest mobile CPUs and GPUs as well as its new SmartShift technology. Essentially, SmartShift allows the discrete CPU and GPU to function more like an integrated APU, thereby improving performance and increasing battery life.

Samsung is also kicking its PC and related peripherals business into higher gear with the official debut of the $849 Galaxy Book Flex α, the latest in its line of thin, QLED display-equipped 2-in-1 notebooks, as well as new gaming-specific monitors. (QLED is the same display technology the company uses in its current high-end TVs, including its new 95” 8K model, the Q950TS. On notebooks, QLED delivers brighter displays and, according to the company, longer battery life.) The Galaxy Book Flex α is a 2.26-pound, Intel-based, pen-equipped device that, somewhat confusingly, sits alongside the more powerfully spec’d Galaxy Book Flex 2-in-1, which was announced last fall but has yet to start shipping in the US. The company also introduced one of the first Intel Project Athena-verified Chromebooks, the Galaxy Chromebook, including one in a slick-looking red color.

For gaming monitors, Samsung is also leveraging QLED technology, but in a 100° curved format that’s designed to match the peripheral vision range of the human eye. Available in both a 49” version with 32:9 Dual Quad HD resolution (that’s 5,120 x 1,440)—the G9—and 32”/27” versions with 16:9 Quad HD resolution (2,560 x 1,440)—the G7—both lines of monitors feature 240 Hz refresh rates, response times of 1 msec, and support for both Nvidia’s G-Sync and AMD’s FreeSync technologies.

Speaking of which, AMD and Intel both announced their latest-generation CPU architectures at CES. Additionally, while Qualcomm debuted its latest PC CPUs last month, it made a point of saying at its press conference that its biggest news for this year’s CES was in PCs (in part because of the Lenovo 5G PC mentioned earlier).

In AMD’s case, the company debuted the first mobile parts based on its Zen2 core, the Ryzen 4000 series in three different variations: ultrathin, gaming, and high-performance. In the desktop world, AMD’s 7nm Zen2 core-powered desktop CPUs have surpassed Intel in performance for the first time in about 20 years, so many people have been waiting for these mobile versions and early benchmarks provided by the company looked impressive. In fact, for the Ryzen 4800H version, which is a 45W mobile part, AMD showed it outperforming Intel’s top-end 95W desktop part.

Speaking of desktops, AMD also extended its Threadripper CPU line with the Ryzen Threadripper 3990X (priced, fittingly, at $3,990), which offers a staggering 64 cores and 128 independent threads for performance on ultra-high-end and demanding applications, such as editing 8K video. It’s clearly not for everyone, but it demonstrates the impressive levels of performance that AMD has been able to achieve.

In Intel’s case, the company formally unveiled its Tiger Lake CPU line, based on its 10nm+ process technology and, more importantly, a new CPU design. One of the most interesting bits of news about Tiger Lake is that it incorporates a new integrated graphics solution called Xe that’s based on the work the company has been doing on its upcoming, first-ever standalone GPU, codenamed DG1 (which was also demoed at the press conference). Intel is claiming speed improvements across all aspects of its architecture with Tiger Lake, with a particularly large boost in AI processing. Up until now, there’s been little focus on AI-specific tasks on PCs—particularly compared to smartphones—so it’s good to see the company highlighting that development for PCs. Finally, Intel also showed off a new prototype foldable PC design, codenamed “Horseshoe Bend,” featuring a future version of Tiger Lake, that folds out into a 17” touchscreen display. Intel also discussed extending its Project Athena PC experience spec to a new line of foldable devices that the company expects to see this year and beyond.

In all, it was an impressive showing for a product category that many predicted would barely even make it into this decade. Based on the news from this year’s CES, it’s probably a safe bet that we’ll be talking about PCs as we enter the 2030s as well.

Podcast: Smart Home Consortium, Facebook OS, 2019 Tech Trends

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the recently announced Project Connected Home over IP consortium, chatting about a potential Facebook OS, and analyzing some of the top tech trends of the past year.

Five Words That Defined Tech in 2019

We are almost at the end of 2019, and as I prepare for CES, I look back at this year to see what characterized tech over the past twelve months. A lot happened this year, so much so that remembering every product and every piece of news is almost impossible. When I look back at the big themes that shaped tech, there were five that stood out to me: Streaming, China, Foldable, 5G, and Regulation.

Streaming

It seems as though 2019 was the year when everybody decided that consumers did not have enough options when it came to video streaming. Despite an already long list of streaming services from Netflix, Hulu, Prime Video, HBO, and others, two new brands came barging into what they believe is still a big opportunity, not only for connecting with consumers but also for getting a slice of the revenue pie.

After a big launch event in March, Apple started airing its Apple TV+ content at the beginning of November, followed shortly after by Disney+. One day after launch, Disney shared that Disney+ had reached 10 million signups. Of course, signups don’t necessarily equal subscribers. Many people could opt out after the first seven-day free trial. Some could be among the 20 million Verizon customers who received a free subscription for the first 12 months. Nevertheless, the numbers seem to bode well for the service’s revenue.

Judging the success of Apple TV+ will be equally difficult, considering that we shouldn’t be expecting much guidance from Apple’s earnings calls. Plus, any consumers who bought a new iPhone, iPad, Mac, or Apple TV would have received a year of free subscription.

What is clear, however, is the fact that both services have yet to impact the fortunes of the long-standing streaming service Netflix, the brand that most people feared would be affected. It seems as though the hunger for content remains, and consumers seem happy to add to their budgets for worthwhile content. Interestingly, though, both services ended up launching at a lower price than most people expected, maybe a sign of awareness of how crowded this market already was.

2020 looks set to be just as busy, with new services expected from HBO, Quibi, Peacock, and Discovery/BBC. I continue to believe that the biggest threat these new services represent is aimed at cable TV rather than at one another. Cable subscriptions are not just costly; they also fail to show a clear benefit associated with the amount users pay every month. It’s not so much about cord-cutting but more about making sure the money you pay sees a clear return. I think bundling will be the real battle going forward. We have already started to see Disney+, Hulu, and ESPN+ coming together under one subscription, and I expect Apple will bundle some of its services in 2020. Whether it will be Apple TV+ and Apple Music or Apple TV+ and Apple News remains to be seen, but my bet is that bundled services are on the map.

China

Whether you are thinking about Huawei, the U.S./China tariffs, or the race to 5G and AI, China was very much at the center of many technology conversations during 2019. The role China has played in technology over the past few years seems to have shifted in 2019, from being primarily a growth market to being a source of supply and innovation. China was not just the market that offered a lot of sales potential; it became the market where many technologies had their first moment in both development and commercialization.

5G and AI are probably the most prominent tech trends of the next decade, and China is certainly on a mission to dominate both. While a lot of the innovation in these two areas was born in the US, China has been faster to deploy them thanks to government and regulatory decisions that support rather than hinder commercialization.

The tension between the two countries escalated when the US moved to halt Huawei’s ability to sell equipment in the US and, more importantly, to buy parts from US suppliers, by adding Huawei to a Commerce Department blacklist that included another eight Chinese companies. It soon became obvious that Huawei was being used as a pawn by the Trump administration in its bargaining with China over tariffs. In what seemed like a chess game, the administration granted temporary reprieves that allowed US companies to continue to sell parts to Huawei. Microsoft, for instance, said it got a license to sell “mass-market software” to Huawei. Yet the US Federal Communications Commission voted to prohibit the use of federal subsidies to buy telecommunications equipment made by Huawei and ZTE and said it would consider requiring carriers now using the products to remove them.

The tariff battle between the US and China culminated this month in the Chinese government mandating that all government offices and public institutions remove foreign computer devices and software within three years, adding to the complications non-Chinese OEMs are already facing with tariffs. Ironically, given the complaints made against Huawei, this mandate is part of China’s longer-term push for “secure and controllable” technology in government and critical industries under its Cyber Security Law, which should boost domestic PC makers’ share.

The ramifications of this power struggle between the two superpowers will continue into 2020, with both countries likely opening up manufacturing opportunities for other geographies, such as Southeast Asia.

Foldable

Right out of CES 2019, it was clear that this was the year of foldables, if not for sales, certainly for marketing buzz. After years of more or less attractive rectangular pieces of glass, people were excited about new phone form-factors and even about experimenting with PC designs.

Yet, as is often the case with cutting-edge technology, getting off the ground wasn’t as smooth as many expected. Samsung’s launch of the Galaxy Fold was negatively impacted by some design decisions around the screen protector and hinge. Spring almost turned into fall as Samsung delayed the official launch of the Galaxy Fold and made changes to tighten the area around the hinge and tuck the screen protector under the bezel. The second highly anticipated foldable smartphone was the Huawei Mate X, which has, so far, been released only in China; an international launch, said to be dependent on carriers’ 5G rollouts, has in reality been put on hold by the lack of Google services.

Towards the end of the year, we also saw a more mainstream design with the Motorola Razr. The Razr injects some positivity back into this segment after the issues Samsung and Huawei have had. The inward-folding design protects the screen and does not require developers to rethink their apps for a larger screen, nor users to rethink how they use their phones. In other words, you can enjoy the Razr’s familiarity and uniqueness at the same time.

Foldables also became a talking point in the PC market, where Lenovo introduced the first foldable PC, the ThinkPad X1 Fold. As we know, however, people seem to be much more flexible and open to new form factors and experiences with smartphones than they are with PCs. This attitude, along with the high price tag associated with the first devices, is why dual-screen devices will play a bigger role in 2020, driven both by Intel and its Project Athena efforts and by Microsoft with its Surface Neo running Windows 10X, a new incarnation of the Windows operating system focused on delivering a more agile set of workflows.

5G

It seems strange to talk about 2019 as the year of 5G because, in fairness, it was more about the time spent talking about 5G than about the availability of 5G services. As we close out the year, you will have quite a different opinion of 5G depending on which region you’re located in. In the US, industry watchers, as well as consumers, seem to continue to have more questions than answers. And this is not because anyone is questioning the impact that 5G will eventually have on the way that both people and things will be connected and communicate. It is more because much of the marketing and messaging, especially from the carriers, continues to confuse rather than excite or reassure. We continue to see marketing messages focusing on coverage maps and ideal speeds, but not much time is spent really showing the value of betting on 5G.

Today, as companies start to invest in private 5G networks, the value is becoming clearer. Still, for consumers, the limited number of devices, the higher price points of those devices, and the extra cost associated with some 5G plans are leaving people skeptical. The elbowing among carriers to position themselves as the leader is, unfortunately, hurting 5G’s overall value proposition.

2020 will see a broader device selection, as well as broader price points, bringing 5G functionality down into the mid-tier portfolio. And, more importantly, a lot of the confusion around network frequencies will go away as chipsets become able to support multiple frequencies.

Regulation

Big tech was undoubtedly on the agenda in Washington this year! There was plenty to choose from: the FTC and the internet; the FTC and Qualcomm; Google, Twitter, and Facebook all testifying in Washington; and breaking up Big Tech becoming a big part of the 2020 election agenda.

Privacy was on the agenda in the US and Europe as GDPR enforcement ramped up during the year. Apple was the only vendor able to pivot on the push for privacy and make it a big marketing message throughout the year, talking about privacy as a fundamental civil right.

Antitrust was also on the agenda as politicians called for breaking up Big Tech, mostly because many felt it was the only way to keep Facebook, Amazon, and Google from becoming even more powerful.

Other tech areas that came under scrutiny were cryptocurrency, encryption, and AI. Regulators tried to put a stop to Libra, Facebook’s cryptocurrency, for fear of the impact that an unregulated cryptocurrency might have on traditional currencies. Luckily for them, key initial partners such as PayPal, Visa, Mastercard, and eBay all backed away from the project.

This month, US lawmakers threatened to pass legislation that would force tech companies to provide court-ordered access to encrypted devices and messages. Law enforcement officials argue that encryption keeps them from accessing criminals’ devices, while tech companies continue to be concerned that creating a backdoor into any device or messaging service opens up the opportunity for bad actors to maliciously access them too. This is the same argument made by Apple against the FBI after the San Bernardino shooting.

After leading the way on privacy, Europe seems to want to shift its attention to AI, with Ms. Vestager, Europe’s Commissioner for Competition, pledging to create the world’s first regulations around artificial intelligence as well as to deliver rights to gig economy workers like Uber drivers.

All these big topics will remain on the agenda for politicians in the US, Europe, and Australia. Still, I have to admit that I hope to see a much better understanding in the US not just of technology but also of business models and the role that technology will play in key areas such as education, transportation, and real estate.

2019 was also the year of tech leadership shakeups, TikTok, silicon, diversity and inclusion (albeit more as a talking point), cloud, and edge. As we look ahead to 2020, we can rest assured that many of these big themes will develop further, with some materially impacting our daily lives and all certainly absorbing more marketing budget.



Cisco Builds Custom Silicon to Power Future Internet

The future of just about everything tech-related right now, or so it seems, revolves around designing custom semiconductor chips. From smartphone makers like Apple and Samsung, to cloud computing providers like Amazon and Google, stretching even to automakers like Tesla, there’s been an enormous amount of effort among tech vendors recently to create their own specialized silicon parts.

The most recent example comes from networking powerhouse Cisco, which just unveiled a new silicon platform last week, called Silicon One, that they believe is necessary to power the next-generation internet, as well as the infrastructure necessary to support 5G networks. Based on current growth rates and predicted demand, the amount of data traffic that each of these elements will demand (as well as the obvious tie-ins between them) will completely overwhelm the existing networking infrastructure. Plus, the economics of trying to scale these efforts with existing devices paints an even bleaker future—hence the need to take a radically different approach to networking gear.

To be completely accurate, Cisco has been designing and building networking-specific chips for over two decades. What’s different and significant about Silicon One is that, in addition to using it themselves, the company also plans to sell this chip to service providers and other potential partners who may want to build their own devices. That’s quite a change from a company that was never seen as a silicon supplier.

The other unique thing about the Silicon One platform is the design and functionality of the architecture. Cisco claims that they’ve been working on it for 5 years and started with a clean sheet of paper. When you look at what it’s designed to do, it’s easy to see why. Rather than pursuing enhancements to existing types of traditional routing and switching chips, they decided to create a more programmable structure that would allow a single chip family to perform a variety of different networking-related functions for different applications, including backhaul, core, edge and more. The goal was to achieve new levels of performance—over 10.8 terabits/sec—in a single rack space unit using the Q100 chip and to make the traditionally slower routing chips as fast as those that do switching.

Silicon One achieves this with an architecture that, at first glance, sounds somewhat like an FPGA (field-programmable gate array), a completely programmable set of circuits often used in networking devices. Further conversations with Cisco representatives at their launch event in San Francisco last week clarified, however, that the Q100, which is the first specific iteration of the Silicon One family, isn’t an FPGA, but rather a different type of ASIC (application-specific integrated circuit) design. The Silicon One family of chips (others are still to come) integrates multiple types of optimized networking functions within its design that can be turned on or off with a software API (application programming interface). This allows equipment builders to enable or disable various sets of functionality as needed, depending on the specific tasks a given device is intended to do.

So, for example, if either Cisco or one of its silicon customers wants to build a device that’s primarily dedicated to high-speed routing, they can enable those functions, whereas someone else building a piece of infrastructure equipment that needs more switching capabilities can turn those on—both with the same chip. According to Cisco, turning certain functions on or off doesn’t change the performance of the chip. The goal was simply to create a single silicon platform that could be more easily used, and have software written for it, across multiple different types of networking functions, thereby saving the capital costs involved in designing and testing multiple types of systems on multiple different chip architectures.
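To make the “one chip, many roles” idea more concrete, here is a minimal sketch in Python. Cisco has not published a public programming interface for Silicon One, so every name here (SiliconOneDevice, enable, the profile names) is hypothetical and purely illustrative of the concept of switching functions on or off in software rather than changing silicon.

```python
# Hypothetical illustration only: Cisco has not published a public Silicon One API.
# The class, methods, and profile names below are invented for explanatory purposes.

class SiliconOneDevice:
    """Sketch of a single programmable networking chip whose functions
    can be enabled or disabled in software rather than by swapping hardware."""

    SUPPORTED_PROFILES = {"core_routing", "edge_routing", "switching", "web_scale_fabric"}

    def __init__(self, chip_id):
        self.chip_id = chip_id
        self.enabled_profiles = set()

    def enable(self, profile):
        # Turn on one set of optimized networking functions.
        if profile not in self.SUPPORTED_PROFILES:
            raise ValueError(f"Unknown profile: {profile}")
        self.enabled_profiles.add(profile)

    def disable(self, profile):
        self.enabled_profiles.discard(profile)

    def describe(self):
        return f"{self.chip_id}: {sorted(self.enabled_profiles) or 'no functions enabled'}"


# The same chip family, configured two different ways for two different boxes.
router_chip = SiliconOneDevice("Q100-unit-A")
router_chip.enable("core_routing")

switch_chip = SiliconOneDevice("Q100-unit-B")
switch_chip.enable("switching")

print(router_chip.describe())
print(switch_chip.describe())
```

The point of the sketch is simply that the same silicon, and the same software stack written against it, can serve as a router in one box and a switch in another, which is exactly the economic argument Cisco is making.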

Speaking of software, the company also unveiled a new version of the operating system they use inside their devices called IOS XR7 (no, not that iOS) that’s been optimized to work with the new silicon. XR7 will work on the new line of 8000-series devices—the first to feature Silicon One-based chips—as well as previous generations going back to the NCS 540, 560, and 5600 lines. The new OS features a number of optimizations designed to allow it to scale to the size and speeds necessary for the next-generation network and to do so in a more automated way that large service and cloud providers, such as Microsoft, AT&T, Comcast and Facebook, need.

The final piece of the Cisco puzzle was the release of new silicon photonics-based advances—also built into the new 8000-series routers—that allow the company to reach the 400G speeds per port that are necessary to power the future Internet. Leveraging several acquisitions that the company made in this area over the last several years, notably Lightwire and Luxtera, Cisco announced important advances in the field that are allowing them to reduce the manufacturing costs for these components by integrating them into more traditional silicon manufacturing processes. Given that the optics costs can reach 75% of the total when scaling to 400G and higher, that’s a critical step. Plus, as with the Silicon One family of chips, Cisco has decided to sell its silicon optics components separately for potential partners who may be interested.

While it’s easy to write off something called the “Future of the Internet” as little more than hype, Cisco managed to present a compelling case as to what problems lie ahead with current networking infrastructure equipment, the need for a new approach, and the achievements they made to address those needs. As with most behind-the-scenes technologies, we may not see the capabilities that Silicon One will bring to both our future wired and wireless connections, but if it’s all done right, we should most definitely experience them.



Away: Modern Communication and Women’s Leadership

Earlier this week, following an investigation from The Verge that highlighted a toxic culture at Away, its CEO Steph Korey resigned. As the story made its rounds, it was fascinating to see how people in the press, the investment community, and tech were responding to it. The response was certainly polarizing, with people split between condemning the toxic culture and defending the drive for business success.

The story centered on employees’ complaints of exceedingly long work hours, high levels of scrutiny for their mistakes, and the clique-like nature of a company whose leadership talked about fostering inclusivity. Korey was particularly singled out for sharing harsh comments about employees’ work and for Slacking late at night expecting a reply.

You might disagree with me that Korey’s behavior showed, at a minimum, very poor judgment about what leadership should be. But I am sure you agree that it would have been hard for her to remain CEO after getting that degree of press attention.

There is one part of me that does wonder if this would also have been the case if Korey were a Stephen and not a Steph.

Women Leaders

Women and men have different leadership styles, and that is a fact! According to researchers, most women tend to be transformational leaders, while men are more transactional leaders. One style is not necessarily better than the other, but it might better fit a particular type of organization. Transformational leaders are more effective in people-centered companies, while transactional leaders might fare better in an organization where people are more independent.
Transformational leaders seek to be role models, inspire their teams, and emphasize authentic communication. All attributes that would point to Korey being an exception to the rule, I guess.

But if management style is not necessarily what determines success for women leaders, could we find any evidence that how boards deal with a crisis differs depending on whether or not the CEO is a woman? Well, according to a paper called “You’re Fired? Gender Disparities in CEO Dismissal,” published by Gupta et al. in 2018, there is a difference. Female CEOs are more likely to get the boot. In fact, they’re 45% more likely to be dismissed than their male counterparts. Interestingly, the gap is statistically insignificant when the company is doing poorly, but the difference is noticeable when you look at companies that are performing well.

As to why this happens, the researchers share their hypotheses. They suggest that when a company is performing poorly, the decision to fire the CEO is often straightforward. But when the company is doing well, there is “considerable ambiguity about the CEO’s leadership of the firm and no clear script for the board to follow.” In that situation, board members are more likely to fall back on gender stereotypes and decide that the female CEO doesn’t have the “leadership qualities” needed to continue the company’s winning run.

According to Away, the company was already looking for a new CEO before the story broke. Whether this was because the company already knew about the story or because it thought that, sooner or later, Korey’s behavior would catch up with her is unclear and, to some extent, irrelevant. But it seems that, in this case, Korey, as a woman leader, did fit the statistics and was likely dismissed where a male counterpart would have either been lauded for being harsh or admired for being a jerk.

Modern Communication and Collaboration

Another fascinating aspect of this whole story is the role that Slack has played. A follow-up article from The Verge points to the role that Slack and other collaboration platforms play in today’s companies, and in society more broadly when you think of platforms like Facebook and Twitter. To some extent, they all make people “more accountable,” ironically one of the asks Steph Korey had for her employees.

I am more interested, though, in how these tools, so popular in the enterprise environment, are impacting our communication skills. In a world that is becoming more comfortable with sharing thoughts, often very direct ones, and where no distinction is made between talking to someone we know and talking to the President of the United States, I do wonder if we are losing sight of the weight that words carry, especially when we lead.

There is little room for formalities on today’s collaboration platforms; no matter whether you are using Slack or Teams, the short and to-the-point nature of the exchange is what people love. Abbreviations and emojis borrowed from messaging apps facilitate a more relaxed exchange that is much richer in feeling than email ever was. There is nothing intrinsically bad about this way of communicating as long as you are self-aware of how you come across and think about whether or not it is appropriate for the role you hold.

The other aspect of these collaboration tools is the expectation they set for response time. In a way, it is not much different from choosing to text rather than email someone. Email is much less of a real-time communication medium than messaging and collaboration platforms are. Most people using Slack or Teams expect a real-time exchange. For my generation, it is similar to deciding to call someone rather than text; for millennials, it might be more like picking WhatsApp over text messaging. As a leader, while you can choose to Slack at three in the morning, you need to be aware of the kind of bar you are setting for your employees. Startups are often known for driving long hours and a work ethic that sees minimal separation between work and play. The blended lifestyle many millennials lead as they pursue their careers often amounts to a takeover of their personal life by work life. Startups where everybody is invested in the success of the company and where everybody benefits equally might condone such a work style, but larger companies where CEOs or managers exploit their position and ride on the dreams of their employees should think about the long-term impact of driving such a company culture.

I am a bit less proud of being an Away owner, and I am saddened by the fact that Steph Korey might have forgotten that, right or wrong, more is expected from us women, when we lead. We cannot be jerks, and to be honest, we really should not be as there are enough jerks to go around, especially in tech and its fringes. Hopefully, this is a cautionary tale for all companies and leaders, but especially those that set out to make the world a better place and fall short of making their own work environment merely a good enough place.

Amazon’s Graviton2 CPU Highlights Arm Presence in Cloud Compute

Long-time semiconductor industry followers may recall that Arm, the chip IP company that completely dominates the smartphone market, has talked about making an impact on the enterprise and cloud computing markets for a very long time. Several years back, in fact, they made bold predictions about taking 20-25% of the server market. Despite a number of efforts in that direction from Arm semiconductor partners, however, that level of impact never occurred.

The company didn’t give up on its goals, though, and a few years ago, it unveiled a new brand, dubbed Neoverse, with a new chip architecture designed for infrastructure and other high-performance applications. However, those markets have been completely dominated by x86 processors from Intel and, more recently, AMD, so the initial acceptance of Arm-based compute engines—which often require recompiling or rewriting existing software—was modest.

Recently, the company has seen an enormous amount of momentum in the cloud computing space, capped by last week’s unveiling of the Amazon Web Services (AWS) Graviton2 CPU at Amazon’s re:Invent conference. Graviton2 is a custom-designed SoC (system on chip), built on a 7nm process technology and based on sixty-four separate 64-bit Neoverse N1 cores, that’s optimized for the kind of cloud computing applications for which AWS is known. As the name implies, this is actually the second-generation Arm-based chip from AWS—the original Graviton came out around this time last year. What’s particularly noteworthy about the Graviton2 is that it’s designed to directly compete on a performance basis with the high-end, datacenter-focused CPU offerings from Intel and AMD. Best of all, the Graviton2 offerings come with significant cost savings as well.

Traditionally, Arm’s promise in the datacenter and for large-scale cloud computing installations has been primarily about power savings—a critical factor when you’re talking about thousands and thousands of servers. With this new custom-designed CPU from AWS (leveraging the Annapurna Labs acquisition Amazon made back in 2015), however, the company is claiming to offer both power and performance improvements over existing solutions from Intel and AMD, as well as a reduction in cost. That’s a big step forward and, frankly, not something that many people expected could happen so soon.

The Graviton2 also reflects a level of commitment from Amazon that shows they are serious about increasing the diversity of the CPU suppliers and chip architectures that they want to support. In fact, the company launched the Graviton2 as part of its new sixth generation of what it calls EC2 (Elastic Compute Cloud) instances, which are intended for high-intensity workloads including application servers, micro-services, high-performance computing, gaming, and more. The original Graviton, on the other hand, supported a more limited set of general-purpose applications, such as web servers and data/log processing. In other words, Amazon is positioning its latest Arm-based offerings as serious competitors, on par with the big guys for some of the toughest workloads that exist. That’s about as strong an endorsement as you can get.
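For developers, adopting Graviton2 largely comes down to requesting an Arm-based instance family when provisioning compute. As a rough sketch (the m6g instance family, the region, and the AMI placeholder below are assumptions you would verify against AWS documentation before use), a request through the boto3 Python SDK might look something like this:

```python
# Rough sketch: launching an Arm-based (Graviton2-backed) EC2 instance with boto3.
# The AMI ID is a placeholder; you would look up an arm64 (aarch64) image for your
# region, and the m6g family assumes the sixth-generation instances described above.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-REPLACE_WITH_ARM64_AMI",   # must be an arm64 image
    InstanceType="m6g.large",               # Graviton2-based instance family
    MinCount=1,
    MaxCount=1,
)

instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched Arm-based instance: {instance_id}")
```

The practical point is that, from the customer's perspective, choosing Arm is just another instance-type parameter; the heavier lifting happens in making sure the software running on that instance has been built for the Arm architecture, which is where the next piece comes in.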

Part of the reason that Amazon is able to push Graviton2 so aggressively is that they’ve built a lightweight hypervisor layer, called Nitro, that lets operating systems, applications, and utilities run independent of the underlying CPU architecture. As mentioned above, one of the biggest challenges for Arm in the datacenter has been the need to either recompile or rewrite/refactor applications to work with the Arm instruction set instead of x86, and that can often be a difficult, expensive process. Thanks to Nitro, however, Amazon is opening up a significantly wider array of software to run on Graviton2-based instances. Because Amazon controls the whole hardware and software stack within AWS, they are able to create both hardware and software solutions that match their needs exactly, and that’s what they’re doing with Graviton2 and Nitro.

In fact, according to reports, Amazon plans to run a number of its own utilities and AWS services on Graviton2-based servers, including critical applications like load balancing, starting in 2020. More than just an interesting technical exercise, the reason AWS is doing this is because by leveraging their own hardware technology and software stack, along with the power and performance efficiencies enabled by the Arm architecture, the company can generate significant savings for its own operations thanks to the Graviton2.

Though no specific details were discussed, we may also see Arm-powered Graviton2 processors in edge computing applications, such as the new partnership that AWS also announced last week with Verizon to bring AWS to 5G networks in the US. The partnership will leverage Amazon’s new AWS Wavelength offering, which was specifically designed to take advantage of the ultra-low latency connections that are possible with 5G networks. AWS Wavelength will enable applications such as cloud-based gaming, autonomous industrial equipment, smart cities, connected AR and VR headsets, and much more to use AWS-powered compute resources at the very edge of the network. In the company’s press release, Amazon said that Wavelength will be used with EC2 instances. It seems logical and appropriate that Graviton2 might be used in those environments, because the power-based benefits of the Arm architecture always implied that it would be a good match for edge computing.

Several years ago, it would have been hard to predict that Arm-based chips could be part of such a significant announcement in the cloud computing world. AWS’ Graviton2 debut and the high-powered instances for which they are using it, however, clearly show that after a long build-up, Arm’s time to make an impact in the world of cloud and enterprise computing has finally come.

Podcast: Qualcomm Snapdragon Summit 2019

This week’s Techpinions podcast features Carolina Milanesi, Ben Bajarin and Bob O’Donnell analyzing the annual Qualcomm summit event, including discussion of their new Snapdragon 865 and 765 smartphone chips, their latest Arm-based PC processors, the forthcoming XR2 5G chip for AR and VR headsets, as well as what all of this says about the role of 5G in connected devices in 2020.

AT&T and Microsoft Partnership on Network Edge Compute Highlights Future of Cloud and 5G

It’s hard enough keeping track of and making sense of one technology megatrend at a time, but when you start trying to co-mingle two or even three of them together, well, generally speaking, all bets are off. Yet despite that seemingly unscalable challenge (and the buzzword bingo bonanza it implies), that’s exactly what the latest extension to a relatively new partnership between AT&T and Microsoft is attempting to do. In particular, the two companies are working to tie together cloud computing, 5G, and edge computing in a meaningful way. Even more surprisingly, this combination actually makes a great deal of sense and provides a tantalizing glimpse into the future of where all three of these major trends are heading.

Specifically, the two companies announced a new effort called Network Edge Compute (NEC) that would bring Microsoft’s Azure Stack cloud computing platform to network infrastructure equipment sitting at the edge of AT&T’s millimeter wave (mmWave)-based 5G network. The combination, which is currently available in the Dallas, TX region on a trial basis, will allow companies to start experimenting on new types of generation-defining applications that many believe are possible with the latest generation mobile network. It’s a chance to figure out what kinds of applications can be the Uber/Lyft, AirBnB, or Netflix of 5G.

At this point, no one really knows for sure what those new types of applications might be—just as no one could predict the rise of Uber/Lyft, AirBnB, or Netflix when 4G first came on the scene. However, there’s a general sense that something along those lines could (or will) happen, so it’s important to put the necessary infrastructure in place to make it happen.

Now, some might argue that this announcement isn’t really a big deal. After all, each of these elements has been available for a while, and there has been discussion of some type of combination for some time. What’s particularly interesting, however, is that it’s the first time these pieces have been connected in such a complete and real manner. Plus, having the combination of a telco carrier with a major cloud computing platform not only adds more overall “gravitas” to the offering, it also points out the practical reality that it’s likely going to take these kinds of new partnerships to drive applications and services forward in the 5G era.

From a technology perspective, the ability to leverage the lower-latency connections possible with 5G in conjunction with the flexibility of container-based, cloud-native applications running at the very edge of the network presents a new opportunity for developers. Because it’s new, it’s a computing model that will take them a while to figure out how best to take advantage of.

Some of the efforts that the companies mentioned in their initial announcement provide a hint as to where these new capabilities may be headed. Cloud-based gaming, for example, is commonly touted as a great potential application for 5G because of the possibility of reduced lag time when playing games. Not surprisingly, AT&T and Microsoft talked about some early efforts in that area with a company called Game Cloud Network, which is working to figure out how to maximize the combination of speedy connectivity and rapid access to computing horsepower.

Another interesting application includes the possibility of leveraging Network Edge Compute to do faster and higher-resolution image rendering for AR headsets, such as Microsoft’s HoloLens. Microsoft has already demoed similar capabilities in a controlled environment, but to bring that into the field would require exactly the type of high-speed, quick access computing resources that this new combined offering enables.

Yet another area that has been discussed for potential 5G uses is IoT, or Internet of Things, because of the new network standard’s potential ability to handle links to billions of different connected devices. Along those lines, AT&T and Microsoft also discussed working with an Israeli startup called Vorpal, which creates solutions that can track drones in areas where they can cause problems, such as airports and other commercial zones. To track up to thousands of drones in real-time requires a great deal of sensor input and fast, real-time computing that can be done by the network instead of on the devices themselves. In fact, it provides a whole new level of meaning to former Sun CEO Scott McNealy’s famous quip that the network is the computer.

One of the interesting side benefits of this combined AT&T-Microsoft product offering is that it also starts to put some real meat on the bone of edge computing. Up until now, edge computing has been seen by many as a vague concept that meant a lot of different things to different people. With examples like the ones that the two companies are discussing, however, the idea of an intelligent edge becomes much more concrete.

In fact, all of a sudden, the ties between an intelligent cloud, a connected intelligent edge, and a compute-enabled intelligent network start to make a lot more sense, and the combination of the three starts to look a lot more prescient.

Google Brings More Intelligence to G Suite

Now that we’re several years into the AI revolution, people are starting to expect that the applications they use will become more intelligent. After all, that was the high-level promise of artificial intelligence—smarter, more contextually aware applications that could handle tasks automatically or at least make them less tedious for us to do.

The problem is, that hasn’t really proven to be the case. Sure, we’ve seen a few reasonably intelligent features being added to certain applications. However, you’ve often had to go out of your way to find them, and interacting with them hasn’t often been intuitive.

Thankfully, we’re finally starting to see the kind of easy-to-use intelligence that many expected when AI-enhanced applications were first introduced. Some of the latest additions to Google’s G Suite productivity applications, for example, bring tangible enhancements to the common day-to-day tasks we all do.

A new beta version of Google Docs now has Smart Compose features—first introduced in Gmail last year—which can make automatic suggestions to your writing. For longer form documents created in Docs, Google’s AI-powered features have the ability to suggest entire sentences, not just individual words or phrases, and are likely to help speed up the writing process.

In addition, Docs also has neural network-powered technology to make better grammar and spelling suggestions within your documents. A small but very useful example is the ability to recognize words or acronyms that may be unique to an industry or even a company (such as an internal project code name) and automatically add those to the dictionary. Once that’s done, the feature can then recognize and correct when mistakes have been made in those new words.

For Google Calendar, the company is enabling the use of Google Assistant and voice commands to manage your calendar, including doing things such as creating meetings, updating the time and/or location, and more, all with spoken commands. It’s the kind of personal assistant technology that many people expected from the first generation of intelligent assistants, but didn’t get.

Similarly, the integration of Google Assistant into G Suite can now enable people to send quick email messages or dial into conference calls completely hands-free, thanks to voice commands and dictation. While these aren’t dramatic new features, they are the kind of simple yet practical things that AI-based intelligence is bringing to applications overall, and they’re indicative of what the technology can realistically do.

Finally, Google is integrating voice-based control of meeting hardware in conjunction with an Asus-built Hangouts Meet hardware device. Designed to integrate with a monitor and conference room cameras, the microphone- and speaker-equipped box can respond to requests to start and end meetings, make phone calls, and more. In addition, Google added voice support for accessibility features to the device, such as being able to turn on spoken feedback for visually impaired users.

What’s interesting about many of these new G Suite additions is that they’re starting to leverage technological capabilities that Google first created in more standalone forms but are now incorporating into broader applications. Google Assistant capabilities, for example, are certainly interesting on their own and from a search-focused perspective, but they’re equally, yet differently, valuable as a true personal assistant feature for calendaring.

In fact, in general, it seems Google is starting to take advantage of a variety of core advances it has developed, particularly around areas like AI, analytics, and managing vast amounts of data, across many of its larger platforms, from G Suite to Google Cloud Platform (GCP) and beyond. Of course, this isn’t terribly surprising, but it’s certainly interesting to observe and highlights the potential that Google has to disrupt the markets in which it remains a smaller player.

Podcast: AT&T and Verizon 5G, Google Cloud Next, HPE Container Platform, Earbuds, Apple, Tesla

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the new 5G offerings from AT&T and Verizon, announcements from Google’s latest event covering GCP, GSuite and more, the launch of HPE’s Open Container Platform, and commenting on earbud news from Microsoft and Apple, the Tim Cook Austin factory visit, and the launch of Tesla’s CyberTruck.

HPE Debuts Container Platform

In the world of enterprise computing, few topics are as hot as hybrid cloud and cloud-native containerized applications. Practically every company that sells to enterprise IT now seems to have an offering and/or an angle that speaks directly to at least one, if not both, of those areas.

Most of the attention, of course, comes from software companies or the software divisions of larger conglomerates because of the critical role that software plays in enabling these technologies. As a company that’s been known almost exclusively for hardware over the last several years, HP Enterprise (HPE) was seemingly at a significant disadvantage—at least until their announcement this week at the KubeCon conference.

In a move that was both surprising and encouraging, the company debuted a new Kubernetes-based tool called the HPE Container Platform that it says will help organizations hasten their adoption of hybrid cloud architectures by, among other things, allowing legacy, non-native applications to be containerized and managed in a consistent fashion. Ever since Dell Technologies’ purchase of VMWare, in particular, HPE has been seen by many as a company that understood and evangelized the concept of hybrid cloud but didn’t really have the tools to back up that vision. With its Container Platform, however, HPE now has what appears to be a solid set of software tools that will allow organizations to address some of their biggest challenges around legacy software modernization.

Unbeknownst to many, HPE has been acquiring a number of smaller software companies over the last few years, most notably BlueData and MapR. It’s the combination of those companies’ technologies, mixed in with a healthy dose of pure, open source Kubernetes, that gave HPE the software capabilities it apparently needed to build out this new hybrid cloud-friendly platform.

As HPE and many other companies have pointed out—and the market itself has started to recognize—cloud-based software technologies and public cloud-style computing-as-a-service capabilities are incredibly powerful, but they don’t work for all types of applications and all types of companies. In fact, IaaS (Infrastructure as a Service) and PaaS (Platform as a Service) services represent only a small percentage of the workloads in most companies. Because of costs, regulation, complexity, data gravity (that is, the attraction of applications and services to large amounts of data, much of which has yet to migrate to the cloud because of storage costs, etc.), and most importantly, the wealth of difficult-to-change legacy applications that still play an incredibly important role in organizations, there’s been a significant shift in thinking over the last 12-18 months or so. Instead of presuming that everything would eventually move to the public cloud, there’s been a recognition that a hybrid computing model that supports both public cloud and on-premise private cloud is going to be with us as the mainstream option for many years to come. In fact, there’s still a huge percentage of total computing workloads that don’t have much, if any, connection to the cloud at all.

On the one hand, that recognition has brought a new sense of vigor to the enterprise hardware computing companies like HPE, Dell Technologies, Lenovo, Cisco, etc. that many had essentially written off as dead a few years back when the general thinking seemed to be that everything was going to move to the public-cloud. On the other hand, there have been learnings from the consumption-based business models of cloud computing (e.g., witness HPE’s GreenLake announcements from earlier in the year and Dell Technologies On Demand offering from just last week), as well as the cloud-native software development model of containerized microservices. As HPE’s Phil Davis succinctly points out, “The cloud is not a destination — it’s an experience and operating model.”

The end result is that organizations want to figure out ways in which they can combine many of the benefits of that cloud-based operating model with the reality of their own on-premise hardware and legacy applications, while fulfilling the unique requirements of those older applications. HPE’s Container Platform—which is expected to be available in early 2020—attempts to merge the two worlds by containerizing older applications without having to go through the long, painful, and expensive process of rewriting or refactoring them.

More importantly, Container Platform provides the ability to run those containerized legacy applications (as well as regular cloud-native containerized applications) on bare metal servers, without having to incur the costs of running virtual machines—a clear knock at Dell Technologies, and more specifically VMWare. In addition, the HPE Container Platform’s other twist is that it can automatically provide access to persistent storage for these containerized legacy apps. Many older apps need persistent storage to run properly, but that’s not a capability that containers easily enable. As a result, this one requirement has prevented many apps from being modernized and moved to the cloud. By directly addressing this need, HPE believes it can work with its base of customers—who are more likely to be running legacy applications anyway—to move them to a unified environment based on containers. That, in turn, should let them more easily manage their applications in a consistent fashion, thereby saving costs and reducing complexity for IT organizations.
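To ground the persistent-storage point, here is a minimal, generic Kubernetes sketch of how a containerized legacy application can be given durable storage through a PersistentVolumeClaim. This is deliberately not HPE's own API (which isn't documented here); it uses the standard open source kubernetes Python client, and the claim name, namespace, and storage size are illustrative assumptions.

```python
# Generic Kubernetes illustration (not HPE's specific platform): requesting persistent
# storage for a containerized legacy application via a PersistentVolumeClaim.
# Names such as "legacy-app-data" and the 50Gi size are placeholders.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig with access to a cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="legacy-app-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],  # single-node read/write, typical for legacy apps
        resources=client.V1ResourceRequirements(requests={"storage": "50Gi"}),
    ),
)

core_v1 = client.CoreV1Api()
core_v1.create_namespaced_persistent_volume_claim(namespace="default", body=pvc)

# The claim can then be mounted into the legacy app's container spec as a volume,
# giving it the durable file system it expects even though it now runs in a container.
```

The value proposition HPE is describing is essentially to make this kind of plumbing automatic for older applications, so teams don't have to hand-craft storage claims and volume mounts for every workload they containerize.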

The logic and vision behind this new platform strategy are sound, and it’s encouraging to see HPE take a significant new jump back into the software world. It remains to be seen, however, how well the company can convince potential customers of its software acumen and its ability to function as a key software platform provider. For certain customers, the capabilities of the HPE Container Platform seem like they could be very appealing, but the world of enterprise software is extremely complex and fragmented. Others with large existing investments in other platforms might have a harder time making a switch. Still, this seems like a strong strategic move by HPE and its management team, and one that’s clearly going to point the company in some interesting and exciting new directions.

Podcast: Dell Technologies, Citrix Workspace, Motorola Razr

This week’s Techpinions podcast features Carolina Milanesi and Bob O’Donnell discussing the announcements from Dell Technologies’ Summit event, including their On Demand services and their 2030 corporate goals, the Citrix Industry Analyst Summit and the latest for their Workspace product, and our brief hands-on experience with the new Motorola Razr foldable smartphone.