The SOE may not be the most glamorous of IT management strategies, but it exists for good reasons and its concepts remain well-known. Moreover, the principles of the SOE, or Standard Operating Environment, continue to lend themselves well to modern IT: Think containers, orchestration, automation platforms, security, and so forth.
Why? First, let’s make sure we’ve got some good working definitions of what a Standard Operating Environment means.
“[An] SOE is used to build a streamlined process of IT implementation in business,” says Akram Assaf, CTO at Bayt. “It’s used to quickly and efficiently implement large software and hardware assets.”
What is an SOE and what does it do?
Let’s underline a couple of fundamentals here. SOEs usually pay the biggest dividends in large environments where IT must manage a lot of stuff. And that “stuff” comprises both infrastructure – whether servers, desktops, laptops, or other endpoints – and the software that runs on it.
Here’s another definition, excerpted from an SOE primer at redhat.com: “An SOE is a standard operating environment, or a specific computer operating system and collection of software that an IT department defines as a standard build.”
[ How can automation free up more staff time for innovation? Get the free eBook: Managing IT with Automation. ]
The basic idea is pretty simple: The more unique or individual machines and applications you have to manage manually, the higher the level of effort, not just for initial deployment and configuration but for maintenance over time. Standardizing can help simplify and streamline everything from provisioning and deployment to patching and other operational tasks. It is essentially an antidote to IT sprawl and to the nightmarish complexity sprawl creates for enterprise service desks and infrastructure operations.
Take the traditional data center, for example, and a general shift toward standardization inside it.
“One long-term IT trend has been a shift away from ‘snowflake’ servers, which are unique individuals, to server instances that are as cookie-cutter as possible,” says Red Hat technology evangelist Gordon Haff. “It’s a trend that got going as data centers shifted from small numbers of ‘big iron’ servers to large numbers of relatively small standardized systems. The sprawl that virtualization created kicked the trend into overdrive. Standard Operating Environments (SOE) to the rescue.”
Haff notes that in an SOE, servers are provisioned with one of a small number of standard configurations. No more snowflake builds. Management software then monitors those servers for drift from their desired or original state. Updates can be applied to the SOE itself and deployed en masse, often in an automated fashion.
The same general principle also applies to individual workstations – which is especially helpful if you manage a ton of them. Ditto for the software that runs on all types of machines.
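To make the drift-monitoring idea concrete, here is a minimal Python sketch of the kind of comparison such management software might run on an RPM-based host. The baseline file name is a hypothetical placeholder for a manifest captured when the standard build was defined; real SOE tooling does far more than this, but the core logic is the same.

```python
#!/usr/bin/env python3
"""Minimal drift check: compare installed packages against an SOE baseline.

Illustrative sketch only. Assumes an RPM-based host and a plain-text
baseline file (one package name per line) captured from the standard build.
"""
import subprocess
from pathlib import Path

BASELINE_FILE = Path("baseline_packages.txt")  # hypothetical manifest name


def installed_packages() -> set[str]:
    # Query the RPM database for the names of installed packages.
    out = subprocess.run(
        ["rpm", "-qa", "--queryformat", "%{NAME}\n"],
        capture_output=True, text=True, check=True,
    )
    return set(out.stdout.split())


def main() -> None:
    baseline = set(BASELINE_FILE.read_text().split())
    actual = installed_packages()

    missing = baseline - actual      # in the standard build but not installed
    unexpected = actual - baseline   # installed but not part of the standard

    if missing or unexpected:
        print(f"DRIFT: {len(missing)} missing, {len(unexpected)} unexpected packages")
        for name in sorted(missing):
            print(f"  missing:    {name}")
        for name in sorted(unexpected):
            print(f"  unexpected: {name}")
    else:
        print("OK: host matches the standard build")


if __name__ == "__main__":
    main()
```

In practice this comparison is handled by configuration management and systems management platforms rather than ad hoc scripts, but detecting and correcting deviations from a small set of known-good builds is the essence of the approach.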
Four advantages of SOEs in the automation era
Let’s spell out four key benefits of the SOE approach.
1. Efficiency
This is the big one, and it encompasses everything from keeping costs under control to reducing the amount of time spent on repetitive management and support tasks.
“The main idea is to streamline the implementation process,” Assaf says. “Developing and automating a standard IT build is efficient. It reduces the time it takes to install and maintain systems.”
Pablo Listingart, executive director of ComIT, describes it from the standpoint of workstations and the software that runs on them.
“Companies that use SOE can benefit from a reduction in costs and time associated with the deployment, configuration, maintenance, and support of workstations,” Listingart says. “SOE focuses on creating a disk image that can be widely distributed to every workstation and provide employees with the necessary tools to perform their tasks. In other words, a standard list of operating systems and software tools will be installed, rendering it easier for IT teams to maintain the work environment and reduce unknown variables.”
2. Simpler training and support
SOEs can be a friend for help desks and sysadmins.
“It is much easier to support a smaller number of systems that are consistently the same,” says Michael Nelson, CEO of TLC Tech. “It allows the support team to become much deeper in their knowledge and be able to resolve issues faster.”
Ian Brady, managing director of Steadfast Solutions, points to several advantages on the software side. The SOE allows for “common lifecycle of operating systems, (e.g., everyone on the same version), fast deployment of machines to users without customization, easier licensing management, [and] remote deployment opportunities ([such as a] self-serve kiosk).”
All of these contribute to a more efficient operating environment. They also mean support is easier for the service desk, Brady says. There can be similar upsides in terms of onboarding new people and generally getting out of the business of intensively manual support work.
3. Enhanced compliance, governance, and security
Don Baham, president of Kraft Technology Group, recommends the SOE as a base layer for matters of compliance, governance, and risk.
“A business that wants to look beyond checkbox compliance and reach for best-practice cybersecurity maturity should be integrating a Standard Operating Environment procedure into their IT governance model,” Baham says. “One of the primary benefits of an SOE is that it allows IT departments as well as IT auditors to quickly identify exceptions and anomalies.”
Uniformity makes it easier to notice when something is amiss.
“It eliminates manual load, enhances compliance and management, and improves the overall security of the organization’s IT systems and applications,” Assaf says.
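As a rough illustration of how a standard makes exceptions easy to surface for IT teams and auditors alike, the sketch below flags any host in a fleet inventory whose recorded OS release or build version deviates from the approved standard. The inventory file, field names, and version strings are all hypothetical placeholders.

```python
"""Flag inventory exceptions against an approved SOE standard.

Illustrative sketch. Assumes a hypothetical inventory.json export with one
record per host, e.g.:
  [{"host": "web01", "os_release": "RHEL 9.4", "soe_build": "2024.2"}, ...]
"""
import json

# Assumed approved standard for this example.
APPROVED = {"os_release": "RHEL 9.4", "soe_build": "2024.2"}


def exceptions(inventory: list[dict]) -> list[dict]:
    """Return hosts that deviate from the approved standard in any field."""
    flagged = []
    for host in inventory:
        diffs = {
            key: host.get(key)
            for key, expected in APPROVED.items()
            if host.get(key) != expected
        }
        if diffs:
            flagged.append({"host": host.get("host"), "deviations": diffs})
    return flagged


if __name__ == "__main__":
    with open("inventory.json") as fh:
        records = json.load(fh)
    for item in exceptions(records):
        print(item["host"], "->", item["deviations"])
```

The point is less the script than the premise: with a single approved baseline, an anomaly is anything that differs from it, which is a far simpler question than auditing a fleet of one-off builds.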
[ Related read: IT security automation: 3 ways to get started ]
4. More effective automation: The containers and Kubernetes connection
To borrow Haff’s term, “snowflakes” – whether referring to machines, processes, or other elements of your environment – don’t often make great candidates for automation. If you have to write a new script or do a lot of manual configuration every time you want to “automate” or run something, that defeats at least part of the purpose.
SOEs can be great foundations for a broader IT automation strategy because their uniformity (and corresponding lack of one-off or custom processes and systems) engenders repeatability. It can even help in terms of identifying opportunities for process improvements.
“A standardized solution not only facilitates the maintenance of the software environment but also fosters speed and replication by allowing process automation to deploy software on workstations,” Listingart says. “When every issue occurs in identical contexts, the number of unknown variables is greatly reduced, allowing IT teams to improve their response time and define processes that will resolve identified problems.”
It’s this last benefit – the natural relationship between standardization and automation – that makes SOEs more relevant as containerization, orchestration, CI/CD, and the general growth of IT automation continue unabated.
Haff from Red Hat notes that containers, for example, are a natural extension of traditional SOEs.
“Containers extend SOE practices and in some ways simplify them,” Haff says. “Containerized infrastructure is typically immutable, which is to say that running instances don’t change – so there can be no drift. Rather, if containers need to be updated, running containers are shut down and updated ones are fired up in their place.”
That can mitigate one of the challenges of SOEs: It’s not always easy to enforce the standards in an airtight fashion, especially in large-scale environments. Once you start patching and updating, drift happens. The immutable nature of containers, along with other technologies, can help.
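To see what “replace rather than patch” looks like at the level of a single host, here is a hedged sketch using the Docker SDK for Python: rather than updating software inside a running container, the old container is stopped and removed and a fresh one is started from the updated image. The service and image names are hypothetical.

```python
"""Replace, don't patch: swap a running container for one from a new image.

Sketch using the Docker SDK for Python (the `docker` package); the container
and image names below are hypothetical placeholders.
"""
import docker
from docker.errors import NotFound

SERVICE_NAME = "web-frontend"                          # hypothetical container name
NEW_IMAGE = "registry.example.com/web-frontend:2.1"    # hypothetical image tag


def replace_container() -> None:
    client = docker.from_env()

    # Shut down and discard the running instance instead of patching it in place.
    try:
        old = client.containers.get(SERVICE_NAME)
        old.stop()
        old.remove()
    except NotFound:
        pass  # nothing running yet; just start the new instance

    # Start a fresh instance from the updated image; the running container
    # itself is never modified, so there is nothing to drift.
    client.containers.run(NEW_IMAGE, name=SERVICE_NAME, detach=True)


if __name__ == "__main__":
    replace_container()
```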
“While DevOps processes and infrastructure-as-code have advanced the cause of a standard operating environment for servers, it is with containers, along with serverless functions, that the promise of a truly uniform operating environment can be achieved,” says Tsvi Korren, field CTO at Aqua Security.
Containers, Korren notes, can be built to be standard at scale: “Not only because they come from an image – servers can be deployed from an image as well – but because they are small work units that require no management, the programs they run are immutable, persist no application data in their filesystems, and they are designed to be replaced rather than patched or upgraded,” Korren says. “Container images themselves are built from modular layers. This makes it possible to separate the underlying operating system and middleware from the application code.”
Containerization doesn’t make SOEs antiquated; it makes real standardization more attainable. Korren says that a container-based SOE starts with a standard set of hardened, approved base images. (This is not unlike the relatively small set of standard configurations in the data center example above.)
“Application teams will then add their own layers of files, making a complete application component that is ready to run without further administrative action, and can remain unmodified over the lifetime of the container,” Korren says. “It is even possible to run containers with a read-only file system, something that in managed servers will be impossible.”
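One way to picture that “approved base images” starting point is a simple policy check that parses Dockerfiles and flags any FROM line that doesn’t reference an image from the approved set. This is only a sketch of the idea: the registry paths are hypothetical, and real enforcement typically happens in CI pipelines or at the cluster’s admission layer.

```python
"""Check that Dockerfiles build only on approved, hardened base images.

Sketch of a CI-style policy check; the approved registry paths below are
hypothetical placeholders.
"""
import sys
from pathlib import Path

APPROVED_BASES = {
    "registry.example.com/soe/ubi9-hardened",        # hypothetical hardened OS base
    "registry.example.com/soe/python311-runtime",    # hypothetical middleware base
}


def offending_from_lines(dockerfile: Path) -> list[str]:
    """Return FROM lines that reference images outside the approved set."""
    bad = []
    for line in dockerfile.read_text().splitlines():
        stripped = line.strip()
        if not stripped.upper().startswith("FROM "):
            continue
        # Take the image reference and drop the tag; ignores edge cases such
        # as --platform flags or digests, since this is only a sketch.
        image = stripped.split()[1]
        repo = image.rsplit(":", 1)[0]
        if repo not in APPROVED_BASES:
            bad.append(stripped)
    return bad


if __name__ == "__main__":
    failures = {
        str(path): lines
        for path in Path(".").rglob("Dockerfile")
        if (lines := offending_from_lines(path))
    }
    for path, lines in failures.items():
        print(f"{path}: unapproved base image(s): {lines}")
    sys.exit(1 if failures else 0)
```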
Orchestration adds another layer to the modern SOE. “Running these containers in an orchestrated environment, like Kubernetes, will further enforce uniformity,” Korren says. “When upgrading an application component, Kubernetes will handle the replacement of workloads to a new image, quickly and at scale.”
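As a concrete (and again hypothetical) illustration of that replacement step, the sketch below shells out to kubectl to point a Deployment at a new image and then waits for the rollout; Kubernetes swaps the old pods for new ones rather than modifying anything in place. The deployment, container, and image names are placeholders.

```python
"""Roll a Deployment to a new image and let Kubernetes replace the workloads.

Sketch that shells out to kubectl; all names are hypothetical placeholders.
"""
import subprocess

DEPLOYMENT = "web-frontend"                            # hypothetical Deployment name
CONTAINER = "web-frontend"                             # container name in the pod spec
NEW_IMAGE = "registry.example.com/web-frontend:2.2"    # hypothetical image tag


def rollout(deployment: str, container: str, image: str) -> None:
    # Point the Deployment at the new image; Kubernetes replaces pods gradually.
    subprocess.run(
        ["kubectl", "set", "image", f"deployment/{deployment}", f"{container}={image}"],
        check=True,
    )
    # Block until the rolling replacement has completed (or fails).
    subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{deployment}"],
        check=True,
    )


if __name__ == "__main__":
    rollout(DEPLOYMENT, CONTAINER, NEW_IMAGE)
```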
None of this completely eliminates the possibility of drift, much as the traditional SOE model in the data center, and on employee workstations, has never been perfect. Korren points out that if, say, unauthorized images are allowed to run, or new processes can be added to running containers, then the SOE ideal will break. But properly managed, containerization and cloud-centric technologies can fulfill the original promise of SOEs and of automation overall.
“Replacing human interaction with automation tools, and implementing a new set of controls over the orchestration system and containers themselves will detect and prevent drift, keeping the operating environment standard over time,” Korren says.
[ Get the free O'Reilly eBooks: Kubernetes Operators: Automating the Container Orchestration Platform and Kubernetes patterns for designing cloud-native apps. ]