Even Match.com could not have done a better job finding a mate for microservices. Microservices – single-function services built by small teams, independent from other functions, and communicating only through public interfaces – simply make a great match for containers. Microservices plus containers represent a shift to delivering applications through modular services that can be reused and rewired to perform new tasks.
Why do containers and app development go together so well?
Containerizing services like messaging, mobile app development and support, and integration lets developers build applications, integrate with other systems, orchestrate using rules and processes, and then deploy across hybrid environments.
But don’t think of this as merely putting middleware into the cloud in its traditional form. Think of it as reimagining enterprise app development for faster, easier, and less error-prone provisioning and configuration. That adds up to more productive – and hopefully, less stressed – developers, especially at a time when speed is a core requirement for business.
When apps meet containers
One key idea behind microservices: Instead of large monolithic applications, application design will increasingly use architectures composed of small, single-function, independent services that communicate through network interfaces. This suits agile and DevOps approaches, and reduces the unintended effects associated with making changes in one part of a large monolithic program.
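To make this concrete, here's a minimal sketch of a single-function service using only Python's standard library. The "greeting" function, port, and route are hypothetical stand-ins for whatever narrow job your service owns; other services interact with it only through its network interface.

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class GreetingHandler(BaseHTTPRequestHandler):
        """A single-function service: it does exactly one job, over HTTP."""

        def do_GET(self):
            # The network endpoint is the service's only public interface;
            # callers never reach into its internals.
            name = self.path.strip("/") or "world"
            body = json.dumps({"greeting": f"Hello, {name}!"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        # One process, one function, one port - a natural fit for one container.
        HTTPServer(("0.0.0.0", 8080), GreetingHandler).serve_forever()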
Linux containers can technically encapsulate monolithic applications effectively, just as if they were in a virtual machine or on a “bare metal” physical server. However, modern standards-compliant Linux container technology encourages breaking down applications into their separate processes and provides the tools to do so. (The Open Container Initiative – OCI – maintains standard runtime and image specifications for containers.)
This granular approach has several advantages:
1. Modularity equals flexibility
The current approach to containerization emphasizes the ability to update, restart, and scale components of an application independently – without unnecessarily taking down the whole app. Beyond this microservices-based approach, you can share functionality among multiple apps, much as service-oriented architectures do more broadly. This means you’re not rewriting common functions (often in subtly incompatible ways) for every application.
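As an illustration, any application can reuse such a service through its public interface instead of re-implementing the function. This sketch assumes the hypothetical greeting service from earlier is running locally on port 8080.

    import json
    from urllib.request import urlopen

    def get_greeting(name: str) -> str:
        # Call the shared service over its network interface rather than
        # rewriting the same logic inside this application.
        with urlopen(f"http://localhost:8080/{name}") as resp:
            return json.load(resp)["greeting"]

    print(get_greeting("containers"))  # -> Hello, containers!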
2. Layers and image version control: DevOps win
Each container image file is made up of a series of layers. When the image changes, a new layer is created that’s essentially a set of filesystem changes. Configuration metadata such as environment variables or default arguments are properties of the image as a whole rather than any particular layer.
A variety of projects can be used to create images. These include the upstream Docker project, which requires a Dockerfile and a running daemon to build, and Buildah, from Project Atomic, which can build a container image from scratch without either.
The image layers are reused when building a new container image. This makes the build process fast and has tremendous advantages for organizations applying DevOps practices like continuous integration and deployment (CI/CD). Intermediate changes are shared between images, further improving speed, size, and efficiency. Inherent to layering is version control. Every time there’s a new change, you essentially get a built-in change-log.
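You can inspect the layers, the image-level metadata, and that built-in change-log directly. This sketch assumes the Docker SDK for Python (pip install docker) and a running Docker daemon; the image is just a public example.

    import docker

    client = docker.from_env()
    image = client.images.pull("python", tag="3.12-slim")

    # Configuration metadata belongs to the image as a whole...
    print("Env:", image.attrs["Config"]["Env"])
    print("Cmd:", image.attrs["Config"]["Cmd"])

    # ...while the filesystem is a stack of reusable layers.
    for digest in image.attrs["RootFS"]["Layers"]:
        print("layer:", digest)

    # Each history entry records the instruction behind a change -
    # in effect, the image's built-in change-log.
    for entry in image.history():
        print(entry["CreatedBy"][:60])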
3. Rollback: Fail fast safely
Perhaps the best part about layering is the ability to roll back. Every image has layers. Don’t like the current iteration of an image? Roll it back to the previous version. This further supports an agile development approach and helps make CI/CD a reality from a tools perspective.
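As a sketch of what that rollback can look like, again assuming the Docker SDK for Python and a running daemon; the image names and tags here are hypothetical.

    import docker

    client = docker.from_env()

    # Suppose myapp:latest points at a bad build. Every prior version
    # still exists as an image, so rolling back is just re-tagging the
    # known-good version and redeploying it.
    good = client.images.get("myapp:1.4")   # hypothetical previous version
    good.tag("myapp", tag="latest")

    container = client.containers.run("myapp:latest", detach=True)
    print("rolled back and running:", container.short_id)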
4. Rapid deployment: Precious time gains
Getting new hardware up, running, provisioned, and available used to take days, and the effort and overhead were burdensome. OCI-compliant containers can reduce deployment to seconds. By packaging each process in its own container, developers can quickly share those processes with new apps.
And because an operating system doesn’t need to restart in order to add or move a container, deployment times are substantially shorter.
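To get a feel for those numbers, you can time a container start yourself. This sketch assumes the Docker SDK for Python, a running daemon, and a locally cached image (the first run also pays the pull cost).

    import time
    import docker

    client = docker.from_env()

    start = time.perf_counter()
    # No hardware to provision and no OS to boot - just start a process
    # from an already-built image.
    output = client.containers.run("alpine", ["echo", "hello"], remove=True)
    elapsed = time.perf_counter() - start

    print(output.decode().strip())                  # -> hello
    print(f"started, ran, and cleaned up in {elapsed:.2f}s")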
Think of the technology as supporting a more granular, controllable, microservices-oriented approach that places a premium on efficiency.
5. Orchestration: Take it to the next level
An OCI-compliant container runtime by itself is very good at managing single containers. But when you start using more and more containers and containerized apps, broken down into hundreds of pieces, management and orchestration get tricky. Eventually, you need to take a step back and group containers so you can deliver services – such as networking, security, and telemetry – across all of them.
Furthermore, because containers are portable, it’s important that the management stack associated with them be portable as well. That’s where orchestration technologies like Kubernetes come in, handling that job for IT.
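For a small taste of what that looks like in practice, here's a sketch using the official Kubernetes Python client (pip install kubernetes). It assumes a reachable cluster, a local kubeconfig, and deployments in the default namespace; nothing here is specific to any one provider.

    from kubernetes import client, config

    # Load cluster credentials the same way kubectl does.
    config.load_kube_config()
    apps = client.AppsV1Api()

    # The orchestrator, not a human operator, keeps each service at its
    # declared scale across whatever nodes the containers land on.
    for dep in apps.list_namespaced_deployment(namespace="default").items:
        ready = dep.status.ready_replicas or 0
        print(f"{dep.metadata.name}: {ready}/{dep.spec.replicas} replicas ready")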
Rethinking applications
While containers can be used simply to encapsulate and isolate applications in a similar manner to virtual machines, they’re most effective when used as a fundamentally new way of packaging and architecting applications. Do this and pair them up with more agile and iterative DevOps processes, and you get apps that are more flexible, more reusable, and delivered more quickly.
For much more on containers and how they rewrite ideas about software packaging and development process, get my new book, which I wrote with my colleague William Henry: From Pots and Vats to Programs and Apps, freely downloadable at https://goo.gl/FSfgky.