Optimized Management Helps Deliver More Reliable Container Operation

Andreas Neeb, Chief Architect Financial Services Vertical, Red Hat

In current Linux distributions, the systemd process acts as the init system that starts, monitors, and stops other processes, and it can also be used for basic management of containers. For small, simple applications, systemd may be sufficient. Large-scale business applications, by contrast, place high demands on container management, which can be summed up in seven points.

1. Optimized alignment of resources and workloads

Businesses that develop container applications very often plan to run them in a public cloud, if not immediately then at a later date. Here, developers harness a key advantage of containers: they abstract away the underlying infrastructure. It is irrelevant to the container where it runs – whether directly on a server, in a virtualized environment, or in a public cloud. The container management solution must therefore allow application containers to be moved freely between the on-premise data center and one or more public clouds.

2. Lifecycle management

A container management solution should not only start containers and ensure optimized resource utilization – which is the job of container scheduling – but also monitor proper operation, identify and fix malfunctions at an early stage, and ensure availability. This includes restarting a container that has stopped running, for whatever reason, on its current server, or moving it if necessary to another server in the on-premise data center or in a public cloud.

To this end, a developer can supply a simple test that externally checks whether the container is working properly. The container management solution receives this test as an input parameter and then checks at predetermined intervals whether the container is still performing its service as intended.
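In Kubernetes terms, such an externally supplied test corresponds to a liveness probe. The following sketch shows what this could look like; the pod name, image, path, and port are hypothetical, and the values are illustrative only:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app                 # hypothetical application pod
spec:
  containers:
  - name: app
    image: registry.example.com/example-app:1.0   # hypothetical image
    livenessProbe:                  # the developer-supplied health check
      httpGet:
        path: /healthz             # assumed health endpoint of the app
        port: 8080
      initialDelaySeconds: 15      # wait before the first check
      periodSeconds: 10            # check at predetermined intervals
```

If the probe fails repeatedly, the platform restarts the container automatically – exactly the kind of early fault handling described above.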

3. Security

A general security strategy can be summed up as follows:

All components should originate from trustworthy sources.

It should be verifiable that their security status is up to date and that they have not been modified without authorization.
As an additional layer, SELinux should be used on the container hosts to shield running containers from the host and from one another. SELinux isolates the containers and only allows access to necessary resources.
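In a Kubernetes pod specification, SELinux labels can be set explicitly via the security context. The following fragment is an illustrative sketch (pod name, image, and MCS level are made up); in practice, platforms such as OpenShift typically assign unique categories automatically:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: selinux-example            # hypothetical pod
spec:
  securityContext:
    seLinuxOptions:
      level: "s0:c123,c456"        # a unique MCS category pair isolates this pod
  containers:
  - name: app
    image: registry.example.com/example-app:1.0   # hypothetical image
```

Because each pod runs with its own category pair, a process breaking out of one container cannot read the files or memory of containers labeled with different categories.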

4. Service discovery

Since containers are inherently dynamic and volatile, and the container management solution places them as needed, there is no guarantee that a single container, or a group of associated containers, will always run on one particular server. The container management solution must therefore provide service discovery functions, so that associated containers can be found by other services, regardless of whether they run on-premise or in a public cloud.
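Kubernetes solves this with Service objects, which give a group of containers a stable name and virtual IP regardless of where the scheduler places them. A minimal sketch, with hypothetical names and ports:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: example-app        # stable DNS name: example-app.<namespace>.svc
spec:
  selector:
    app: example-app       # matches all pods with this label, wherever they run
  ports:
  - port: 80               # port other services connect to
    targetPort: 8080       # port the container actually listens on
```

Other services simply address `example-app` by name; the platform keeps the mapping to the current set of containers up to date as they start, stop, or move.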

5. Scaling applications and infrastructure

When it comes to scaling – a process the container management system must support – there are two different types:

Scaling of container instances with the application itself: Here, administrators can specify that a predetermined number of additional container instances be started when a particular CPU load occurs, storage capacities are exceeded, or specific events happen.

Scaling of the container infrastructure: Here, it must be possible to expand the applications running on the container platform to hundreds of instances – for example, by extending the container platform into a public cloud. This is much more complex than starting new containers on servers.
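The first type – scaling container instances on load thresholds – maps to a Kubernetes HorizontalPodAutoscaler. The fragment below is a sketch with assumed names and thresholds, not a recommendation for specific values:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app
spec:
  scaleTargetRef:            # the workload being scaled (hypothetical deployment)
    apiVersion: apps/v1
    kind: Deployment
    name: example-app
  minReplicas: 2
  maxReplicas: 10            # upper bound on additional instances
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80   # start more instances above 80% average CPU load
```

Scaling the infrastructure itself – adding worker nodes, possibly in a public cloud – happens outside this object and is handled by the platform's node management.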

6. Providing persistent storage

Using the Red Hat OpenShift Container Platform, for example, infrastructure administrators can provide application containers with persistent, container-native storage that is managed by the Kubernetes orchestration framework. The Kubernetes persistent volume framework provisions application containers running on distributed servers with persistent storage. Using persistent volume claims, developers can request persistent volume resources without needing detailed knowledge of the underlying storage infrastructure.
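A persistent volume claim could look like the following sketch; the claim name, requested capacity, and storage class name are assumptions for illustration:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data             # hypothetical claim name
spec:
  accessModes:
  - ReadWriteOnce            # volume mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi           # capacity requested; backing storage details stay hidden
  storageClassName: container-native-storage   # hypothetical class provided by admins
```

The developer only states how much storage is needed and how it will be accessed; the platform binds the claim to a matching persistent volume from the administrator-provided pool.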

7. Open source solutions offer greater potential in terms of innovation

The open source-based Docker format is supported by many leading IT companies, ranging from Amazon, Google, and HPE to IBM, Microsoft, and Red Hat. Because so many users and software producers use Linux containers, a highly dynamic market has developed that follows the principles of open source software. Increasingly, enterprises are adopting a microservices architecture and delivering container-based applications. This creates new requirements that have to be implemented as quickly as possible in the form of new functionality. That would not be possible under a closed source model with only a single software vendor. The same applies to container management solutions.