Workload portability – the ability to move an application (or at least part of it) from one environment to another – is a common goal in hybrid cloud and multi-cloud environments.
In practice, this is easier said than done – the image of an engineer moving an entire application from one cloud to another on a daily whim isn’t really what the term is meant to convey. Rather, workload portability reflects that both short-term choice and long-term flexibility are possible. Deciding to run a particular application in a particular cloud or on-premises environment does not mean it needs to run there forever.
Containerization, orchestration, and other facets of modern software development and operations are key foundations. Workload portability also suggests, of course, that you have at least two clouds and/or on-premises/bare metal environments to choose from.
[ Related read: How to explain modern software development in plain English. ]
Once those foundations are in place, several important questions and issues arise. Moving workloads from one cloud to another should be intentional – an important word for hybrid cloud and edge architectures, as Red Hat technology evangelist Gordon Haff said recently.
4 ways to approach moving cloud workloads
Below, we dig into four different groupings of tips for being more intentional about how you place and move workloads among multiple environments – and in your overall hybrid cloud or multi-cloud strategy.
1. Developing criteria for deciding what goes where
Many hybrid cloud and multi-cloud environments begin in an ad hoc or even accidental fashion. That’s natural, but that approach should eventually be replaced with – as Haff noted – a more intentional strategy.
That begins with having clear criteria for placing – and moving – workloads in a given environment.
"There are a lot of reasons to decide where to place a workload,” Matt Wallace, CTO at Faction. “The hard part comes when there’s no right answer because you have teams or partners in different clouds, or need access to different services.”
So focus on the tangible reasons that matter to you and let those guide your choices. Wallace shares several examples:
- Proximity to other apps and data – also known as “Data Gravity,” and often a driving factor whenever performance/latency is a major concern
- Collaboration with other teams and partners – if they use a particular cloud, it may make sense for you to as well
- The set of tools available in a particular cloud – they’re not all the same
- Geographic/locality concerns
- Cost (more on this below)
- Scale – such as the difference between a predictable, stable workload versus one that is likely to grow or have lots of spikes in resource demand
With some criteria, additional specificity about your goals or requirements will pay off. “Performance” is quite broad, for example, as is the closely related concern of latency. Defining what terms like those actually mean to your organization and its applications will give you a more granular decision-making matrix for matching workloads to the right environment.
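To see how such a matrix might work, here is a minimal sketch in Python. Every criterion weight, environment name, and score below is an invented placeholder – the value is that explicit weights force the placement conversation into the open, not that these particular numbers mean anything.

```python
# Hypothetical weighted scoring of candidate environments against
# placement criteria. All names, weights, and scores are invented;
# substitute the criteria and values that matter to your organization.

CRITERIA_WEIGHTS = {
    "data_gravity": 0.30,       # proximity to the data the app needs
    "partner_alignment": 0.15,  # teams/partners already in that cloud
    "tooling_fit": 0.20,        # availability of required services
    "locality": 0.10,           # geographic/regulatory constraints
    "cost": 0.25,               # projected run cost (higher score = cheaper)
}

# Scores from 0 (poor fit) to 5 (excellent fit) per environment.
ENVIRONMENT_SCORES = {
    "cloud_a": {"data_gravity": 5, "partner_alignment": 2, "tooling_fit": 4, "locality": 3, "cost": 2},
    "cloud_b": {"data_gravity": 2, "partner_alignment": 5, "tooling_fit": 3, "locality": 4, "cost": 3},
    "on_prem": {"data_gravity": 4, "partner_alignment": 3, "tooling_fit": 2, "locality": 5, "cost": 4},
}

def rank_environments(scores, weights):
    """Return environments ordered by weighted fit, best first."""
    totals = {
        env: sum(weights[c] * s for c, s in crit.items())
        for env, crit in scores.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

for env, total in rank_environments(ENVIRONMENT_SCORES, CRITERIA_WEIGHTS):
    print(f"{env}: {total:.2f}")
```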
The choice of clouds is similarly not one-size-fits-all – especially when you move beyond core infrastructure.
[ Ready to deploy a hybrid cloud strategy? Get the free eBook, Hybrid Cloud Strategy for Dummies. ]
“Infrastructure services are table stakes in any cloud environment,” says Eric Drobisewski, senior architect, Liberty Mutual Insurance. “Beyond those core services, identify the key elements of the public cloud providers that drive differentiating value for your business and look to leverage those to drive greater value more quickly.”
2. Making sure everyone and everything plays nice together
Over time, hybrid cloud and multi-cloud environments typically become ever more distributed and diverse. One key to effectively managing and moving workloads is the ability to make changes without breaking everything – you should be able to add a new tool or service and have it get along with your existing tech stack.
Wallace distills the strategy here into a word: architecture.
“Design beats stumbling into an implementation,” he adds. “Leveraging tools that provide an abstraction that enables portability or consistency is useful. Standardizing other things like identity and authentication using centralized identity and SAML authentication is also useful.”
[ Also read: Cloud consultants: 4 questions to ask about your strategy. ]
Indeed, standardization is a central prong of any integration strategy. Open standards are even better, especially given the velocity of change in the cloud universe. Drobisewski notes this is beneficial both for the cost of initial integration and for long-term flexibility.
“When possible, leveraging open specifications and standards that are being adopted across cloud providers will help ease integration and improve interoperability,” Drobisewski says.
It’s hard to keep everything in harmony when you don’t actually know what “everything” means. Justin Dempsey, senior software development manager, SAS, says his team has found it useful to create a matrix inventory of tools and applications they own across multiple cloud platforms. That can help with everything from identifying gaps to securing the software supply chain. It can also inform decisions about workload portability.
“Creating a matrix of tools that you control and noting which ones are cloud agnostic, non-cloud portable, or cloud-specific helps you assess the risk involved in moving from one cloud to another or creating architecture that needs to span cloud providers,” Dempsey says.
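A minimal sketch of what that matrix could look like in code – all tool names, owners, and classifications below are hypothetical:

```python
# Hypothetical tool inventory using Dempsey's three buckets.
INVENTORY = [
    {"tool": "postgres",        "owner": "data team",     "class": "cloud agnostic"},
    {"tool": "internal-ci",     "owner": "platform team", "class": "non-cloud portable"},
    {"tool": "managed-queue-x", "owner": "app team",      "class": "cloud-specific"},
]

def migration_risks(inventory):
    """Flag entries that would block or complicate a move between clouds."""
    return [row for row in inventory if row["class"] != "cloud agnostic"]

for row in migration_risks(INVENTORY):
    print(f"{row['tool']} ({row['owner']}): {row['class']} – review before migrating")
```

Even a list this simple makes the risk conversation concrete: everything that isn’t cloud agnostic gets reviewed before a migration or cross-cloud architecture is approved.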
Managing as much as possible as code is another important strategy here.
“Working toward ‘everything as code’ is an approach that promotes consistent delivery, adherence to governance controls, and enforced testing standards that ensure new environments play nice with existing ones,” Dempsey says.
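What “everything as code” looks like varies widely; one small, hypothetical slice is a policy-as-code gate that checks every declared environment for required governance fields before deployment (the field names and the rule itself are assumptions for illustration):

```python
# Toy policy-as-code check: every environment definition must carry
# the governance fields we (hypothetically) require before deploy.
REQUIRED_FIELDS = {"owner", "cost_center", "data_classification"}

environments = {
    "prod-cloud-a": {"owner": "team-a", "cost_center": "cc-101", "data_classification": "internal"},
    "dev-on-prem":  {"owner": "team-b"},  # missing fields – fails the gate
}

def check_governance(envs):
    """Map each non-compliant environment to its missing fields."""
    return {
        name: sorted(REQUIRED_FIELDS - set(cfg))
        for name, cfg in envs.items()
        if REQUIRED_FIELDS - set(cfg)
    }

for name, missing in check_governance(environments).items():
    print(f"{name} is missing: {', '.join(missing)}")
```

Because the check itself is code, it can run in CI against every environment – cloud or on-premises – which is exactly the consistency Dempsey describes.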
3. Managing and optimizing costs
Cloud costs are often oversimplified into absolutes and extremes – such as “Cloud is cheaper!” (which is not always true) or “Gahhhhhh, why is my cloud bill so big?” (There could be many reasons.)
This is another area where careful design and planning matter. Wallace from Faction points out that a lot of what might be categorized as infrastructure costs is actually an application-level concern.
“If you set up a three-tiered autoscaling architecture in the cloud to handle what could be done for a micro-fraction of the cost using an API gateway and a serverless function, you’re going to overpay dramatically for cloud,” Wallace says.
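Rough arithmetic shows how dramatic that gap can be. Every price below is an invented placeholder, not any provider’s actual rate card:

```python
# Rough, hypothetical monthly comparison for a small, bursty API.
HOURS_PER_MONTH = 730

# Option A: three-tier autoscaling with a minimum instance footprint.
min_instances = 6                # e.g., 2 always-on instances per tier
instance_hourly = 0.10           # assumed $/hour per instance
autoscaling_cost = min_instances * instance_hourly * HOURS_PER_MONTH

# Option B: API gateway + serverless function, billed per use.
monthly_requests = 2_000_000     # assumed traffic
gateway_per_million = 1.00       # assumed $/million requests
compute_per_request = 0.000002   # assumed $/invocation, duration included
serverless_cost = ((monthly_requests / 1e6) * gateway_per_million
                   + monthly_requests * compute_per_request)

print(f"autoscaling: ${autoscaling_cost:,.2f}/month")  # ≈ $438 at these numbers
print(f"serverless:  ${serverless_cost:,.2f}/month")   # ≈ $6 at these numbers
```

The absolute numbers are made up, but the shape of the comparison – always-on capacity versus per-use billing for spiky, low-volume traffic – is what Wallace is pointing at.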
[ Related read: 5 tips for architects on managing cloud service provider spending ]
As Red Hat’s Haff told us previously, cloud service providers can indeed get expensive. That doesn’t mean you shouldn’t use them, Haff explains, “but you need to understand where they do provide good value for your organization and where you should consider taking workloads on-prem.”
A holistic view of costs is critical, especially when it comes to making informed decisions about placing and moving workloads and data. Wallace points to deep-freeze storage as another example: it might at first appear inexpensive.
“In a certain provider, getting the data back out of the cloud costs more than four years of storage costs,” Wallace says. “This isn’t a knock on the provider – there is a huge need for ‘store it and forget it’ use cases that are alternatives to tapes buried in an offline vault – but if you don’t match the service to the use case it can be an expensive surprise.”
In terms of workload portability (and cloud costs in general), there are two big areas to focus on:
Visibility: Managing cloud costs efficiently boils down to your ability to answer this question: “Who is using what?” Optimizing cloud costs additionally requires being able to answer “why?” Wallace frames the question explicitly in financial terms: “Who is spending what money on what services?” If that’s a black box, you’ll struggle to meet your cost objectives.
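Answering “who is spending what on what services” typically starts with consistent tagging plus a simple rollup. Here is a toy sketch (every field and figure is invented); the untagged “unknown” bucket is precisely where visibility breaks down:

```python
from collections import defaultdict

# Hypothetical cost line items, e.g., exported from a billing API
# and joined with your own tagging data.
line_items = [
    {"team": "payments", "service": "object-storage", "usd": 1_240.00},
    {"team": "payments", "service": "managed-db",     "usd": 3_310.50},
    {"team": "ml",       "service": "gpu-compute",    "usd": 9_875.25},
    {"team": "unknown",  "service": "gpu-compute",    "usd": 2_100.00},  # untagged spend
]

def who_spends_what(items):
    """Total spend per (team, service) pair."""
    totals = defaultdict(float)
    for item in items:
        totals[(item["team"], item["service"])] += item["usd"]
    return totals

for (team, service), usd in sorted(who_spends_what(line_items).items()):
    print(f"{team:10s} {service:15s} ${usd:,.2f}")
```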
Data flows: Moving cloud workloads can generate additional (and sometimes unexpected) costs associated with the flow of data into and out of an environment – commonly referred to as data ingress (in) and egress (out).
There can be charges for both, but data egress charges are usually the ones to watch out for.
“Egress charges can add up quickly, especially for data movement that spans multiple cloud providers or regions,” says Dempsey from SAS.
Wallace’s deep-freeze storage example above is one of many scenarios in which data egress charges can produce a surprise cloud bill.
“This is never more dramatic than with network flows, where it is possible in the public cloud to turn up a network gateway to connect your virtual networks, where you pay $2.40 per day for the gateway, but can generate $10,800 a day in data transfer charges, as an extreme example,” Wallace says.
The possibility of such charges increases when moving workloads among clouds.
“When it comes to multi-cloud, the risk is amplified, because network flows outside the cloud are much more likely to have a charge and for that charge to be more,” Wallace says. “This is a generalization, but a caution that you have to understand these data flows.”
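Wallace’s gateway example is easy to sanity-check with rough numbers. The per-GB rate below is an assumption (real rates vary by provider, region, and network path), but it shows how a fixed $2.40/day gateway can sit in front of a five-figure daily transfer bill:

```python
# Rough reconstruction of how a cheap gateway can hide an expensive flow.
gateway_per_day = 2.40     # fixed daily cost of the network gateway
egress_per_gb = 0.09       # assumed data-transfer rate, $/GB
tb_moved_per_day = 120     # a heavy but plausible replication flow

transfer_cost = tb_moved_per_day * 1_000 * egress_per_gb
print(f"gateway:  ${gateway_per_day:,.2f}/day")
print(f"transfer: ${transfer_cost:,.2f}/day")  # ≈ $10,800/day at these numbers
```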
4. Keeping things simple and fast for developers
Finally, don’t forget about your devs. (As we reported recently, developer experience is everything these days.)
As hybrid cloud and multi-cloud environments grow more diverse and complex, some of the upside – including the control and flexibility to match workloads with the best environment based on criteria you determine – depends on preventing unnecessary friction for the development team.
What this actually looks like really does depend on a lot of different factors: “It depends on your developers, your application portfolio, your codebase, your mission, and more,” Wallace notes.
An ideal scenario that pairs multi-cloud readiness with strong developer experience, according to Wallace, might be a serverless model that looks something like this: “Developers can develop locally or in cloud dev environments, have little to no infrastructure to maintain, and tools like throttling limits are built into components like API gateways to avoid runaway code in development from creating runaway costs.”
Great tools that minimize friction between writing, testing, and deploying code are good for both the business and the developer – and also a foundation for achieving real workload portability.
“This sort of design pattern is also great for maximizing portability between any cloud, an on-premises data center, and edge deployments,” Wallace says.
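One common way to get that portability – a sketch, not a prescription – is to keep business logic in a plain function and wrap it with thin, environment-specific entry points. The handler shape below follows the familiar AWS Lambda signature; the function names and payloads are hypothetical:

```python
import json

# Plain business logic: no cloud SDKs, no infrastructure assumptions.
def handle_order(order: dict) -> dict:
    total = sum(item["qty"] * item["price"] for item in order["items"])
    return {"order_id": order["id"], "total": total}

# Thin serverless adapter (AWS Lambda-style signature shown; the
# equivalent wrapper for another cloud, an on-prem HTTP server, or an
# edge runtime is just as small).
def lambda_handler(event, context):
    order = json.loads(event["body"])
    return {"statusCode": 200, "body": json.dumps(handle_order(order))}

if __name__ == "__main__":
    # Local development needs no infrastructure at all.
    sample = {"id": "o-1", "items": [{"qty": 2, "price": 9.99}]}
    print(handle_order(sample))
```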
Drobisewski notes that one of the benefits of hybrid cloud and multi-cloud ecosystems – choice – can become overwhelming for developers. You can simplify that for them.
“Investing in a single marketplace that unifies technology enablement and curates sets of well-architected patterns that are both secure and cost-optimized will speed the enablement of developers while fostering a culture of reuse,” Drobisewski says.
Finally, if cloud workload portability is a priority, it actually pairs well with developer velocity – the two can feed off of one another. Dempsey suggests avoiding becoming too committed to a particular methodology or project management style.
As with costs, developer velocity should be examined at the application level.
“Which of your applications don’t provide some type of abstraction – what facets of the application stack are tightly wed to specific technologies or vendors?” Dempsey says. (They may be a source of friction.) “Your goal is to decouple and focus on creating robust data delivery pipelines that will provide flexibility and integration opportunities for developers and data consumers over the long haul.”
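In code, that decoupling often amounts to programming against an interface your team owns rather than against a vendor SDK. A hypothetical sketch:

```python
from typing import Protocol

class BlobStore(Protocol):
    """The interface the application owns; vendors live behind it."""
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class InMemoryStore:
    """Test/dev implementation. A cloud-specific adapter wrapping a
    vendor SDK would satisfy the same Protocol without touching app code."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def put(self, key: str, data: bytes) -> None:
        self._data[key] = data
    def get(self, key: str) -> bytes:
        return self._data[key]

def archive_report(store: BlobStore, report_id: str, body: bytes) -> None:
    # The pipeline depends only on BlobStore, so swapping clouds means
    # writing one new adapter, not rewriting the pipeline.
    store.put(f"reports/{report_id}", body)

store = InMemoryStore()
archive_report(store, "q3", b"...")
print(store.get("reports/q3"))
```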
[ Build application environments for reliability, productivity, and change. Download the eBook, Cloud-native meets hybrid cloud: A strategy guide ]