Behind the scenes of a technology’s surging adoption, you may find a gap between what executive leaders think they know and what the professionals on their teams really understand. As Kubernetes continues its rapid growth, this gap may exist in more organizations.
Time savings – leading to faster time to market of products and services – is one benefit that many executives seek. “A Kubernetes platform lets an enterprise take advantage of numerous cloud providers and grow as rapidly as you may need, without having to re-architect your infrastructure. This saves the need for significant deployment cycles and drastically improves your ability to provide new services as quickly as possible,” as Ernest Jones, vice president, North America sales, partners & alliances for Red Hat, recently noted. Workload portability and security also top the list of benefits enterprises want from choosing Kubernetes.
[ Get up to speed. Read our deep dive: Kubernetes: Everything you need to know. ]
But how can IT leaders help ensure teams realize those benefits? We asked a variety of experts: What do the folks working hands-on with Kubernetes need the bosses to know? Here are five realities of containers and orchestration that CIOs and other IT leaders should keep in mind.
1. Don't ask your team to wing it with Kubernetes
We hear often about the value of experimentation, continuous learning, and “failing fast.” They’re all good things, but none should be used as a euphemism for “making it up as we go along” when you’re running Kubernetes in production. You need to be very intentional in how and why you’re using the tool. Give teams the resources they need to learn what they don’t know.
“There are a lot of concepts your teams need to know before preparing your services to be K8s-ready,” says Christian Melendez, senior cloud architect at Equinix and a Kubernetes instructor with DevelopIntelligence.
That preparation includes things like making applications fault-tolerant (especially when high availability is a driver or concern), exposing health endpoints (for monitoring), and knowing your resource limits, according to Melendez. You can certainly get a trial up and running relatively fast in a sandbox or local environment; just don’t ask the team to do a trial by production, so to speak. Ensure there’s a well-founded plan for seeing this past the initial stage of implementation.
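To make that preparation concrete, here is a minimal sketch of what a “K8s-ready” service can look like in a Deployment manifest: health endpoints wired to liveness and readiness probes, plus explicit resource requests and limits. The service name, container image, port, and probe paths are hypothetical placeholders, not a prescription.

```yaml
# Minimal sketch of a K8s-ready workload: multiple replicas for fault tolerance,
# health endpoints exposed to the platform, and explicit resource boundaries.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3                      # more than one replica supports high availability
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
      - name: example-service
        image: registry.example.com/example-service:1.0.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:             # restart the container if it stops responding
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
        readinessProbe:            # only route traffic once the service reports ready
          httpGet:
            path: /ready
            port: 8080
          periodSeconds: 5
        resources:
          requests:                # what the scheduler reserves for each pod
            cpu: 250m
            memory: 256Mi
          limits:                  # the ceiling before throttling or an OOM kill
            cpu: 500m
            memory: 512Mi
```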
[ Also read: Kubernetes: 3 ways to get started. ]
“Kubernetes is complex infrastructure software with many different abstraction layers and interconnected components,” says Wei Lien Dang, co-founder and chief strategy officer at StackRox. The platform’s power comes with a learning curve; Dang notes that it can be particularly steep when going into production and/or scaling for the first time.
“That’s why many in the community have started to emphasize the importance of ‘Day Two’ challenges and how to successfully realize the benefits that Kubernetes can unlock: faster innovation, more efficient resource usage, and time savings with automated orchestration,” Dang says. “CIOs and IT leaders need to ensure they have a blueprint for how their teams will build, use, and standardize their application architectures on Kubernetes.”
[ Kubernetes 101: An introduction to containers, Kubernetes, and OpenShift: Watch the on-demand Kubernetes 101 webinar. ]
2. You can navigate the initial learning curve
The initial challenges are worth it, provided you’re prepared for them: “Once you pass that learning curve, it becomes easier to replicate the same policies or procedures to the rest of the applications,” Melendez says.
Moreover, Kubernetes’ reputation for initial complexity is sometimes exaggerated.
“The biggest and most common misconception when it comes to Kubernetes is that it’s overly complex,” says Eric Drobisewski, senior architect at Liberty Mutual Insurance. “There have been many talks given, blogs written, and articles published that portray Kubernetes as a difficult technology to tame into a mild-mannered enterprise computing platform.”
That portrayal can scare some IT leaders and organizations into inertia, even when Kubernetes’ capabilities and the surrounding cloud-native ecosystem might be the right fit for their needs.
Drobisewski acknowledges that Kubernetes can look a bit intimidating at first, and he agrees that organizations and teams need to plan for how they’ll learn, use, and manage the platform. But Drobisewski also points out that Kubernetes is helping to solve complex infrastructure problems that have hampered distributed computing for a long time.
“This initial learning curve is not insurmountable and will pay great dividends as teams ground themselves with a foundational understanding of Kubernetes and begin to expand into the rich ecosystem of open source projects that complement and extend upon the core value of Kubernetes,” Drobisewski says. “Creating an infrastructure fabric centered on Kubernetes will not only aid organizations in solving for today’s hybrid cloud challenges but will serve as a strategic foundation, providing extensibility for the future of distributed computing with the rapid expansion of edge, 5G, and IoT.”
3. Kubernetes won't cure your underlying problems
A common source of “failure to launch” disappointments with Kubernetes: lobbing it on top of fundamental problems in the hope that they’ll magically dissolve. This isn’t Oz – you can’t just click your ruby slippers, say “There’s no place like Kubernetes,” and wake up to find all of your digital transformation and other strategic plans perfectly in place.
“Every mistake you made on your previous infrastructure you can make on Kubernetes unless you continually examine the fundamental design of your infrastructure and applications,” says Miles Ward, CTO at SADA.
Also, some of the tactics you might have used in the past to address production fires may no longer apply.
“Kubernetes is more intentional in nature compared to previous application management systems,” Ward says. “There is no SSH-ing into production to patch a piece of code to get things back up.”
How you build and deploy applications matters, and Kubernetes won’t patch the leaks or grease the brittle parts of your pipeline for you.
“If your build and deploy system is cumbersome and your testing is lacking, you will get burned,” Ward says. “The reality of working with Kubernetes is that most of the work is not so much in Kubernetes as it is in refactoring bad decisions and building previously missing critical components, particularly CI/CD.”
This principle applies to process and culture, too. In some organizations, this might require significant change.
“Adopting Kubernetes is much more than just installing your applications in containers and running them on it,” says Peter Kreslins, CTO and co-founder at Digibee. “It’s more about changing your application development mindset to consider resilience, security, and automation [from] the beginning of the development lifecycle in an integrated way.”
Containers and Kubernetes shouldn’t be viewed as just another IT stack, according to Kreslins, nor as a sorcerer’s wand.
“CIOs must understand that Kubernetes is not automatically modernizing architectures just because you’re using it, but actually paving the way so applications can be effectively modernized,” Kreslins says.
4. Commercial platforms aren't all the same thing in different clothing
The underlying Kubernetes open source project is an extensible, pluggable platform. Dang from StackRox points out that this means (among other things) there are lots of commercial platforms and tools built on top of (or adjacent to) it.
This is generally a good thing. There are choices and lots of help available for organizations that need it. Just don’t mistakenly assume all of the commercial and managed platforms are roughly the same thing. There can be important differences.
“A misconception is that different Kubernetes offerings provided by vendors are equivalent because they all use the Kubernetes API,” Dang says. “The reality is that these platforms differ in substantial ways – the cloud providers differ significantly in the capabilities of their managed Kubernetes services, for example. So IT leaders need to evaluate and assess which Kubernetes platforms, whether based in the cloud or run on-premises, best meet their needs.”
As Red Hat’s Jones noted, “As cloud service providers have evolved from just deploying CPU compute, memory, and storage to offering additional value-added services including containerization, artificial intelligence, and machine learning platforms, enterprises may encounter proprietary connections and API calls that are unique to the particular cloud platform. This creates portability and lock-in concerns.” An enterprise Kubernetes platform addresses those concerns.
[ Read also: OpenShift and Kubernetes: What’s the difference? ]
Another thing to remember: Kubernetes can essentially run anywhere. (It can even run on your laptop.) “Cloud-native” doesn’t necessarily mean “cloud-only.” So factor that into your evaluation, too.
“Public cloud providers are a great place to get started with cloud-native infrastructure, but so are your own data centers, as Kubernetes can be run on your existing hardware,” says Tom Manville, engineering lead at Kasten.
5. The long-term and day-to-day upsides are real
CIOs are right to be leery of tech hype. But the potential benefits of Kubernetes are real. We already discussed long-term portability, time savings, and security benefits, but Melendez takes us through what teams will see day-to-day:
Resiliency and reliability: “Services recover automatically from most of their failures as long as you have the proper configurations in place,” Melendez says. “For instance, if a service runs out of memory, Kubernetes restarts it. Or, if there’s a peak in traffic, Kubernetes can scale out and scale in automatically. IT pros can focus on improving other aspects of the service since they will be on-call for real emergencies only.” (A minimal sketch of this kind of autoscaling configuration appears after the last of these points.)
Faster and/or more frequent delivery: The ability to automate and standardize deployments (and related tasks or processes) can be a significant advantage. When running containerized applications in production, it often becomes a basic necessity.
“Time-to-market could decrease through the ability to do one-click deployments where the state of the services is defined through YAML manifests in Git,” Melendez says. “Rollbacks, rollouts, and A/B testing (experimentation) can be done automatically and fast. Everyone on the team can know how the services are configured in a production environment. This allows developers (or anyone else) to replicate production-like environments in dev or an environment that they can control.”
Resource utilization and optimization: “Once you can observe what’s happening to the services you deploy in Kubernetes, you’ll be able to better understand where you can optimize resources,” Melendez says. “For instance, a service might not be using all the memory assigned. Might this reduce the number of servers you need, allowing you to reduce infrastructure costs? Moreover, you can know which services you need to focus on to optimize if they’re struggling with memory leaks or high CPU peaks.”
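As an illustration of the automatic scale-out and scale-in Melendez describes, here is a minimal HorizontalPodAutoscaler sketch using the autoscaling/v2 API. It assumes the hypothetical example-service Deployment shown earlier, a metrics source such as metrics-server, and pods that declare CPU requests; treat it as a starting point, not a recommended configuration.

```yaml
# Minimal sketch: scale the hypothetical example-service Deployment between
# 2 and 10 replicas, targeting 70% average CPU utilization across pods.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-service
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-service
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```

Because utilization is measured against each pod’s CPU request, this ties directly back to the resource-tuning point above: well-chosen requests are what make both scheduling and autoscaling decisions meaningful.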
[ Get the eBook O’Reilly: Kubernetes Operators: Automating the Container Orchestration Platform. ]