What does the future hold for the hybrid cloud model? Kubernetes is at the core of the open hybrid cloud. It has emerged as the de facto standard for application-focused clustering technologies: cluster management, cluster scheduling, and cluster orchestration are all in its wheelhouse, and it has cemented its role in the industry over the past year. Cloud-native applications demand resource elasticity and may need to respond to global-scale load, and that requires a tool like Kubernetes to scale them.
Just as Linux emerged as the focal point for open source development in the 2000s, Kubernetes is emerging as a focal point for building technologies and solutions (with Linux underpinning Kubernetes, of course).
[ Kubernetes terminology, demystified: Get our Kubernetes glossary cheat sheet for IT and business leaders. ]
Meet Knative
A great example of this is Knative, a Kubernetes-based platform designed to offer a Kubernetes-native API for implementing serverless-style functions and to ease the deployment of applications and containers.
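To make that concrete, here is a minimal sketch of a Knative Service manifest; the service name, container image, and environment variable are placeholders, and the exact API version varies by Knative release:

```yaml
# Minimal Knative Service (illustrative sketch; name and image are placeholders).
# Knative creates and scales the underlying Kubernetes resources for you,
# including scaling the service down to zero when it receives no traffic.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go  # sample image; substitute your own
          env:
            - name: TARGET
              value: "World"
```

Applying a manifest like this with kubectl is roughly all it takes to get a request-driven, autoscaled service, which is the developer-facing simplicity Knative is after.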
[ Read also: How to explain serverless in plain English. ]
While serverless isn’t new to the industry, it’s really exciting to see a fully open source project that melds well with the Kubernetes ecosystem and has a real chance of maturing and becoming a standard in its own right.
Meet Istio
Alongside serverless, we see the service mesh concept taking off. A service mesh is essentially platform-level automation for creating the network connectivity required by microservices-based software architectures. Istio is one service mesh implementation we've been working with; now in Technology Preview for OpenShift, it also targets Kubernetes and has gained a lot of mindshare.
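As a rough illustration of what "platform-level automation for network connectivity" looks like in practice, here is a sketch of an Istio VirtualService that splits traffic between two versions of a service; the service name and version subsets are hypothetical, and a matching DestinationRule (not shown) would define the subsets:

```yaml
# Illustrative Istio VirtualService (hypothetical service name and subsets).
# Instead of coding routing logic into each microservice, the mesh applies
# this policy in the sidecar proxies that run next to every pod.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: reviews
spec:
  hosts:
    - reviews
  http:
    - route:
        - destination:
            host: reviews
            subset: v1
          weight: 90
        - destination:
            host: reviews
            subset: v2
          weight: 10   # send 10% of traffic to the new version as a canary
```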
[ Read also: OpenShift and Kubernetes: What's the difference? ]
Meet KubeVirt
Another interesting trend emerging in the Kubernetes space is growing interest from organizations that want to run Kubernetes on bare-metal servers. When you combine this with KubeVirt, a project that gives Kubernetes the ability to manage virtual machines, you have Kubernetes managing both containers and virtual machines side by side. This is a powerful combination: it lets teams standardize on a single cluster management and scheduling tool, Kubernetes, for the on-premises portion of their hybrid cloud workloads.
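For a sense of what this looks like, here is a trimmed-down sketch of a KubeVirt VirtualMachine resource; the VM name, API version, and disk image are illustrative and vary by KubeVirt release:

```yaml
# Sketch of a KubeVirt VirtualMachine (name, API version, and image are illustrative).
# The VM is declared, scheduled, and managed by Kubernetes alongside containers.
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachine
metadata:
  name: demo-vm
spec:
  running: true
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
            - name: rootdisk
              disk:
                bus: virtio
      volumes:
        - name: rootdisk
          containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo  # demo disk image
```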
Meet Kubeflow
You can’t really discuss emerging technology without talking about artificial intelligence (AI) and machine learning (ML). These are not new by any stretch, but they have been difficult for companies to implement and have largely remained the domain of dedicated data scientists.
Lately, we’re seeing projects that aim to make AI and machine learning more accessible to software developers, and to bring lifecycle management to these workloads the way development and operations teams are used to doing it. One project we have our eyes on is Kubeflow, a machine learning toolkit for Kubernetes. The idea behind Kubeflow is to make it simple to scale machine learning models and deploy them to production wherever Kubernetes is running. Once again, we see Kubernetes as the target platform of choice.
Hardware acceleration is a great way to improve the performance of AI and ML workloads, and making hardware like graphics processing units (GPUs) and field-programmable gate arrays (FPGAs) easy for applications to access is another key to their success. Kubernetes has focused on making these kinds of hardware accelerators easy to reach from within containerized applications, so you can expect to see an increasing number of hardware-accelerated machine learning workloads targeting Kubernetes.
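The mechanism Kubernetes uses here is its device plugin framework, which surfaces accelerators as schedulable resources a container can simply request. A minimal sketch, assuming the NVIDIA device plugin is installed on the cluster and the image and training script are placeholders:

```yaml
# Pod requesting one GPU (assumes the NVIDIA device plugin is installed,
# which advertises GPUs to the scheduler as the nvidia.com/gpu resource).
apiVersion: v1
kind: Pod
metadata:
  name: gpu-training-job
spec:
  restartPolicy: Never
  containers:
    - name: trainer
      image: tensorflow/tensorflow:latest-gpu  # illustrative GPU-enabled image
      command: ["python", "train.py"]          # hypothetical training script
      resources:
        limits:
          nvidia.com/gpu: 1  # scheduler places the pod on a node with a free GPU
```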
[ This article is adapted from a blog that originally ran on Redhat.com. Read the full blog by Chris Wright here: Open Outlook: Emerging Technologies. ]