There have been some very interesting shifts and evolutions happening in the enterprise computing world over the last several years. It all started, of course, with the explosion of interest in cloud-based computing, as pioneered by Amazon Web Services (AWS) and then quickly followed by Microsoft Azure, IBM Cloud, Google Cloud Platform (GCP), and many more.
In the early days, there were untold proclamations and forecasts that virtually all business-focused workloads would end up in the cloud, not just because of the nearly infinite range of computing resources the cloud provided, but because of the flexible pricing models that allowed companies to pay only for what they used. This notion of consumption-based pricing was a radical concept at the time, particularly for an industry that had been built on paying a lot of money for expensive IT equipment, which sometimes sat unused or at other times proved woefully inadequate for a company’s real needs.
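The contrast between the two models comes down to simple arithmetic: a fixed up-front purchase versus a metered bill that scales with use. A toy sketch makes the point (the rates, prices, and helper functions here are purely illustrative, not any vendor’s actual billing formula):

```python
def consumption_cost(hourly_rate, hours_used):
    """Consumption-based pricing: pay only for metered usage."""
    return hourly_rate * hours_used

def capex_cost(purchase_price):
    """Traditional purchase: the full price is paid up front,
    regardless of how much the equipment is actually used."""
    return purchase_price

# Illustrative comparison: a server bought outright for $10,000
# versus the same capacity metered at a hypothetical $1.50/hour.
owned = capex_cost(10_000)
metered = consumption_cost(1.50, hours_used=2_000)

print(owned)    # 10000
print(metered)  # 3000.0
```

At low utilization the metered model wins; run the hardware around the clock for long enough and ownership can come out ahead, which is one reason the pendulum described below swung back toward a mix of the two.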
Fast forward to the present, however, and a much different picture has emerged. It turns out that trying to move everything into the cloud wasn’t practical and could get very expensive. As a result, it’s now widely recognized that most companies are trying to balance moving some of their workloads to the cloud while keeping others on site within their own premises—a situation often shortened to “on prem.” For a variety of reasons, including privacy, security, regulations, cost, computing architecture, and more, the notion of “hybrid cloud” computing, in which you have a mix of off-site cloud computing workloads and some on-site private cloud workloads, has become the mainstream for enterprise computing.
Despite this pendulum swing back, however, interest remained in some of the more radical usage, pricing, and consumption business models that cloud providers first introduced. The idea that companies didn’t have to own the physical computing assets their workloads were using, in particular, was something that many companies latched onto. Essentially, they wanted to move their IT investments from a capital expenditure to an operational expenditure, which allowed them to think about IT, and what it provided as a service to the company, in an entirely different way.
In fact, we’ve now seen a number of vendors pivot to start offering at least some of their enterprise-focused hardware on an as-a-service basis. HPE, for example, has said that within a few years it plans to offer everything it sells as a service (though, to be clear, it doesn’t expect everything to be purchased or consumed that way). At its annual analyst summit in Austin this week, Dell Technologies also took a big step in this direction with the announcement of a whole range of new “as a service” offerings called Dell Technologies On Demand, which allow companies to have Dell-branded hardware installed within their datacenters without an outright purchase. Instead, pricing is based on a consumption model in which companies pay for what they use.
Fundamentally, it’s a similar approach to what the cloud computing providers have offered, but now it’s being done for “on-prem” hardware. What’s even more interesting is that this was one of the big announcements of the whole event, and it says a great deal about how the world of enterprise computing has evolved. For years, there were always dog-and-pony shows that highlighted the latest hardware (and software) advances, but now, instead of talking about features, companies like Dell Technologies are talking about business models and sales methodologies. And, importantly, it’s not only OK, it’s absolutely the right thing to do (and, arguably, the right time to do it).
Advancements in enterprise hardware and software are certainly going to continue. In fact, one of the other big announcements from this event was the new Dell EMC PowerOne system, a modular “datacenter in a box” that incorporates a hardware appliance running microservices-based, cloud-native, Kubernetes-managed software containers designed to automate a number of standard IT processes. The software features AI-powered intelligence that allows it to perform tasks such as monitoring hardware, configuring VMware clusters, and dynamically assigning the required hardware demanded by certain workloads (in a cloud-like fashion), as needed. In essence, it brings the autonomous, dynamically shifting compute resources of the cloud into on-prem private cloud architectures.
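The automation pattern described here, software that continuously compares what workloads demand against what hardware is currently assigned and closes the gap, follows the same reconciliation-loop model that Kubernetes popularized. A minimal, purely illustrative sketch of that loop (all names are hypothetical; this is not PowerOne’s actual API):

```python
# Reconciliation-loop sketch: compare desired capacity against what is
# currently assigned, and return the changes needed to close the gap.
# All identifiers here are hypothetical illustrations, not a real API.

def reconcile(desired: dict, assigned: dict) -> dict:
    """Return per-workload hardware deltas needed to match demand."""
    changes = {}
    for workload, need in desired.items():
        have = assigned.get(workload, 0)
        if need != have:
            # Positive means add nodes; negative means release them.
            changes[workload] = need - have
    return changes

# One workload's demand grows from 2 nodes to 4; another shrinks by 1.
desired = {"analytics": 4, "web": 1}
assigned = {"analytics": 2, "web": 2}
print(reconcile(desired, assigned))  # {'analytics': 2, 'web': -1}
```

In a real system a controller would run this comparison continuously and act on the deltas; the point of the pattern is that operators declare the desired state rather than scripting each hardware change by hand.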
What’s interesting, though, is that we’re seeing increasing focus on very different aspects of the enterprise computing world, ones that put less emphasis on speeds and feeds and much more on how companies can achieve the goals they have for their IT organizations. Ultimately, it’s part of the long-term shift we’ve seen in organizations that expect to use their IT capabilities in a digitally transformative way, one that allows IT personnel to move beyond the humdrum maintenance of those resources into jobs that let them drive their organizations forward in more interesting and compelling ways.