Workload Churn and Balancing IT Environment Choices

Workload churn activity is a sign of how IT operations are evolving. While the public cloud delivers significant benefits and flexibility, the future of IT is hybrid.

The public cloud delivers significant benefits and flexibility, while on-premise environments, in many cases, better serve computing performance, data storage, data movement, and disaster recovery needs, as well as security and regulatory compliance requirements.

One of HPE’s leading workload hosting solutions is GreenLake, which combines the best of public and private cloud and delivers an elastic as-a-service platform that can run on-premise, at the edge, or in a co-location facility. HPE GreenLake integrates the simplicity and agility of the cloud with the governance, compliance, and visibility that come with hybrid IT.

What is workload churn, and how common is it? 

Workload churn (also referred to as workload repatriation) is the process of migrating workloads from the public cloud to on-premise or dedicated off-premise environments. Workload churn is not a new concept. It began when the public cloud became a viable alternative to traditional datacentre operations. The value proposition of the public cloud was its ability to offload datacentre and IT infrastructure management to a service provider. Rather than requiring substantial upfront investment in IT infrastructure, the public cloud aligned operating expenses with service usage.

However, organisations quickly realised the limitations of the public cloud and that it could not fully replace corporate IT operations. A “backward” migration occurred: workloads that had either been moved into or started in the public cloud were moved into dedicated environments. IDC research shows that such repatriation activity happens almost as often as workload migration into the public cloud. In IDC’s February 2021 Servers and Storage Workload Survey, about three-quarters of respondents who run workloads in the public cloud indicated that they plan to move some of their workloads, partially or fully, into dedicated cloud or non-cloud environments.

Does workload churn signal limited use of public cloud by enterprise IT? 

Workload churn activity is an indication of the evolution of IT operations. While the public cloud delivers significant benefits and flexibility, the future of IT is hybrid. The use of both dedicated and shared infrastructure by a single organisation and the movement of data and applications between various clouds will be common occurrences. Operations will be defined by each organisation and will depend on several factors, including the need for compute performance, data storage, data movement, and disaster recovery plans, as well as regulatory compliance.

The interoperability of dedicated and public clouds will be a major factor as well. Isolated islands of IT systems, each dedicated to a particular workload, often create inefficiencies in IT operations. Isolated clouds are equally inefficient by modern IT standards. Finding the right balance between dedicated (private) and public cloud usage is a continuous process for each organisation rather than a final state.

According to IDC’s Cloud Pulse Survey, businesses are increasingly interested in expanding their use of dedicated cloud solutions. This trend is partially driven by the movement from non-cloud operations to cloud-based operations. The increased use of dedicated clouds is related, at least in part, to workload churn activity.

What are the major reasons for workload churn activities? 

Data security remains one of the biggest contributors to enterprises moving workloads off the public cloud into protected dedicated environments. Despite significant investments into improved security by cloud service providers, public cloud environments remain an attractive target for cybercriminals. In IDC’s 2021 Servers and Storage Workload Survey, 43% of respondents identified data security concerns as the major reason to engage in workload repatriation activity.

Another common and related concern is data privacy. While closely related to data security, data privacy has its own nuances concerning the exposure of private data to entities that should not have access to it. In the previously mentioned IDC workloads survey, 36% of respondents identified data privacy concerns, making it the second most common reason for moving workloads from the public cloud to dedicated environments. Performance and bandwidth bottlenecks, the unpredictability of public cloud service pricing, and IT consolidation efforts round out the top reasons for workload churn, each cited by 15–25% of survey respondents.

Are there any differences in triggers for workload churn? 

While data security and privacy are concerns shared across all workloads, workload profiles also play a role in triggering repatriation activities. For example, collaborative, ERM, and CRM applications involve a high volume of private information shared between users, so data security concerns are strong factors in deciding to move at least part of the data into protected, dedicated IT environments.

For other workloads, such as data management, the need for better performance plays a bigger role in determining workload repatriation. Bandwidth concerns impact a broad range of workloads that need a continuous movement of data, such as networking, security, technical, and data analytics workloads. The unpredictability of usage-based pricing often causes organisations to move VDI, application development, and collaboration workloads into dedicated environments with more predictable pricing.

What can enterprises do to evaluate optimal IT environments for the placement of workloads? 

As IDC’s Cloud Pulse Survey shows, enterprises are departing from the “public cloud only” paradigm and moving toward “public cloud first” and “public cloud also” approaches to planning their IT operations. This shift embraces the reality that hybrid cloud and multi-cloud approaches are more likely to serve organisational IT needs.

What has become evident in the past few years is a move toward services-oriented IT. A variety of recent solutions give enterprise users the ability to achieve the cloud experience on dedicated infrastructure. The availability of such solutions helps solve the workload placement dilemma, at least to a point.

Enterprises need to thoroughly evaluate the current and future needs of their workloads. What will serve their performance, storage, and data movement requirements better: on-premise solutions or public cloud services? Do the workloads require optimised infrastructure? What would be the migration path should the organisation decide to move to a public cloud? What costs are associated with different options? These are some of the questions that must be answered to determine the best way to serve each workload’s needs.
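
One lightweight way to structure this evaluation is a weighted scoring exercise across the criteria above. The sketch below, in Python, is purely illustrative: the criteria, weights, and per-environment scores are hypothetical placeholders that each organisation would replace with its own assessments.

    # Illustrative only: a hypothetical weighted-scoring comparison of placement
    # options for a single workload. Criteria, weights, and scores are placeholders.

    # How much each criterion matters to this workload (weights sum to 1.0).
    WEIGHTS = {
        "performance": 0.25,
        "data_movement": 0.20,
        "security_compliance": 0.25,
        "cost_predictability": 0.20,
        "migration_flexibility": 0.10,
    }

    # How well each environment is judged to satisfy each criterion (1 = poor, 5 = excellent).
    SCORES = {
        "public_cloud": {
            "performance": 3, "data_movement": 2, "security_compliance": 3,
            "cost_predictability": 2, "migration_flexibility": 5,
        },
        "dedicated_on_premise": {
            "performance": 5, "data_movement": 4, "security_compliance": 5,
            "cost_predictability": 4, "migration_flexibility": 3,
        },
    }

    def weighted_score(scores: dict) -> float:
        """Combine per-criterion scores into a single weighted score."""
        return sum(WEIGHTS[criterion] * value for criterion, value in scores.items())

    if __name__ == "__main__":
        for environment, scores in SCORES.items():
            print(f"{environment}: {weighted_score(scores):.2f}")

Repeating this kind of exercise per workload, and revisiting it as requirements change, reflects the earlier point that balancing dedicated and public cloud usage is a continuous process rather than a final state.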

HPE GreenLake 

By deploying HPE GreenLake in their on-premise environments, businesses retain a cloud experience, including detailed reporting and resource management through a central console, along with the elasticity to grow without needing to immediately buy (and wait for) more hardware.

Today, thousands of businesses across 50 countries are using HPE GreenLake, spanning all industry sectors and company sizes, including Fortune 500 companies, government and public sector organisations, and emerging enterprises.

HPE GreenLake offers a range of cloud services that accelerate innovation, including cloud services for computing, container management, data protection, HPC, machine learning operations, networking, SAP HANA, storage, VDI, bare metal, and VMs.

In March 2022, HPE made significant advancements to GreenLake by introducing a unified operating experience, new cloud services, and the availability of solutions in the online marketplaces of several leading distributors.

Multi-cloud experiences 

HPE GreenLake now supports multi-cloud experiences everywhere – including clouds that live on-premise, at the edge, in a co-location facility, and in the public cloud. HPE GreenLake continues to evolve and provide businesses with one easy-to-use platform to transform and modernise their organisation. The HPE GreenLake platform now provides the foundation for more than 50 cloud services, including electronic health records, ML Ops, payments, unified analytics, and SAP HANA, as well as a wide array of cloud services from partners.

Recent platform updates include a convergence with Aruba Central, a cloud-native, AI-powered network management solution. GreenLake has also added a unified operational experience that provides a simplified view and access to all cloud services, spanning the entire HPE portfolio, with single sign-on access, security, compliance, elasticity, and data protection.

HPE GreenLake for Aruba networking 

Delivering comprehensive edge connectivity networking solutions, HPE is building out its network as a service (NaaS) offerings with HPE GreenLake for Aruba networking. The new services simplify the process of procuring and deploying NaaS and allow customers to align network spending to usage needs while ensuring that the network is always ready to support business objectives.

The new services are built to satisfy growing demand for NaaS and the ability to operate in either a ‘traditional’ or managed service provider (MSP) model. Covering a full span of business use cases – including wired, wireless, and SD-Branch – the new services provide increased levels of velocity and flexibility, accelerating business time to revenue.

HPE GreenLake for Block Storage 

HPE GreenLake for Block Storage is the industry’s first block storage as-a-service to deliver a 100% data-availability guarantee, built in on a cloud operational model. It helps businesses transform faster and brings self-service agility to critical enterprise applications. The new offering delivers the following capabilities:

        • Self-service provisioning that gives line-of-business owners and database admins the agility to build and deploy new apps, services, and projects faster (illustrated by the sketch after this list)
        • 98% operational time savings, freeing IT resources to work on strategic, higher-value initiatives
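
To make the self-service idea concrete, the sketch below shows what a provisioning request could look like through a generic REST-style interface. The endpoint, payload fields, and authentication here are hypothetical placeholders for illustration and do not represent HPE GreenLake’s actual API.

    # Illustrative only: a hypothetical REST-style request for self-service block
    # volume provisioning. Endpoint, payload fields, and auth are placeholders,
    # not HPE GreenLake's actual API.
    import json
    import urllib.request

    def provision_volume(api_base: str, token: str, name: str, size_gib: int, tier: str) -> dict:
        """Request a new block volume from a self-service provisioning endpoint."""
        payload = {
            "name": name,          # volume name chosen by the app owner
            "sizeGiB": size_gib,   # requested capacity
            "serviceTier": tier,   # e.g. "mission-critical" vs "general-purpose"
        }
        request = urllib.request.Request(
            url=f"{api_base}/block-storage/volumes",
            data=json.dumps(payload).encode("utf-8"),
            headers={"Authorization": f"Bearer {token}", "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request) as response:
            return json.load(response)

    # Example: a database admin provisions capacity for a new project without raising a ticket.
    # volume = provision_volume("https://example.internal/api", "<token>", "crm-db-01", 512, "mission-critical")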

HPE Backup and Recovery Service

HPE has harnessed backup as a service with its offering built for the hybrid cloud. Businesses can effortlessly protect their virtual machine data, gain rapid recovery on-premise, and store long-term backups cost-effectively in the public cloud. HPE Backup and Recovery Service is now available for virtual machines deployed on heterogeneous infrastructure. HPE is also advancing its ransomware recovery solutions by adding immutable data copies – on-premise or on Amazon Web Services (AWS) – with HPE Backup and Recovery Service.

HPE GreenLake for High-Performance Computing 

HPE is further enhancing HPE GreenLake for High-Performance Computing, adding new, purpose-built HPC capabilities that make the technology easier for any enterprise to adopt. The new capabilities quickly tackle the most demanding compute- and data-intensive workloads, power AI and ML initiatives, and accelerate time to insight. They also include a lower entry point to HPC, with a smaller configuration of 10 nodes, so organisations can test workloads and scale as needed. New features and capabilities include:

        • Expanded GPU capabilities that will integrate with HPE’s Apollo 6500 Gen10 system to accelerate compute and advance data-intensive projects using NVIDIA A100, A40, and A30 Tensor Core GPUs in increments of two, four, or eight accelerators. The new service will feature NVIDIA NVLink for a seamless, high-speed connection that lets GPUs work together as a single robust accelerator.
        • HPE Slingshot, the world’s only high-performance Ethernet fabric designed for HPC and AI solutions, delivers high-performance networking to address demands for higher speed and congestion control in larger data-intensive and AI workloads.
        • HPE Parallel File System Storage, a scalable, high-performance storage solution to deliver advanced throughput for broader HPC and AI needs.
        • Multi-cloud connector APIs that can programmatically orchestrate HPC workflows across a diverse pool of computing resources, such as other HPE GreenLake for HPC deployments or public clouds (a minimal placement sketch follows this list). The new model delivers more elasticity, scalability, and tools to optimise the usage of disaggregated resources. The capability improves collaboration by connecting projects across multiple sites and removing silos.
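
As a rough illustration of the orchestration idea in the last bullet, the sketch below places a job on the first resource pool with enough free capacity. The class names, job fields, and selection logic are hypothetical and stand in for whatever connector API an organisation actually uses; they are not HPE’s implementation.

    # Illustrative only: a hypothetical dispatcher that places an HPC job on the
    # first resource pool with enough free nodes. Names and logic are placeholders,
    # not HPE GreenLake's multi-cloud connector APIs.
    from dataclasses import dataclass

    @dataclass
    class ResourcePool:
        name: str         # e.g. an on-premise HPC cluster or a public cloud region
        free_nodes: int   # nodes currently available in this pool

    @dataclass
    class HpcJob:
        name: str
        nodes_required: int

    def place_job(job: HpcJob, pools: list) -> str:
        """Pick the first pool that can satisfy the job's node requirement."""
        for pool in pools:
            if pool.free_nodes >= job.nodes_required:
                pool.free_nodes -= job.nodes_required  # reserve the capacity
                return pool.name
        raise RuntimeError(f"No pool can currently run {job.name}")

    if __name__ == "__main__":
        pools = [
            ResourcePool("on-prem-greenlake-hpc", free_nodes=6),
            ResourcePool("public-cloud-burst", free_nodes=64),
        ]
        job = HpcJob("genomics-batch-42", nodes_required=10)
        print(f"{job.name} placed on {place_job(job, pools)}")  # bursts to the public-cloud pool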

 

Conclusion 

The public cloud creates new possibilities in speed and agility, letting developers and IT operations swiftly scale new apps and capabilities and greatly simplifying IT. Despite these advantages, data gravity, latency, application reliability, and regulatory compliance mean that many applications and datasets should remain in datacentres and co-location facilities.

HPE GreenLake’s as-a-service architecture blends the agility and affordability of the public cloud with the security and performance of on-premise IT to deliver on-demand capacity. With GreenLake services, we can help you accelerate your digital transformation with cloud benefits like quick deployment, scalability, and pay-per-use economics while maintaining control over your on-premise environment. Contact us to discuss the possibilities of HPE GreenLake for your business.