
Containerized Microservices



Containerized microservices raise concerns for many experienced software engineers. Let's address one of the most common misconceptions about microservices: poor performance.


Containerization is an approach to software development in which an application, its versioned dependencies, and its environment configuration (abstracted as deployment manifest files) are packaged together as a container image, tested as a unit, and deployed to a host operating system.
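
To make this concrete, here is a minimal sketch of a Dockerfile for a hypothetical Go microservice called "orders"; the service name, module layout, and base images are illustrative assumptions, not taken from any particular project:

    # Build stage: compile the service together with its pinned dependencies.
    FROM golang:1.22 AS build
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download
    COPY . .
    RUN CGO_ENABLED=0 go build -o /bin/orders ./cmd/orders

    # Runtime stage: ship only the static binary on a minimal base image.
    FROM gcr.io/distroless/static
    COPY --from=build /bin/orders /orders
    ENTRYPOINT ["/orders"]

Building this file (docker build -t orders:1.0 .) yields an image that bundles the application, its dependencies, and its runtime configuration into one unit that can be tested and deployed as a whole.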


A container is an isolated, resource-controlled, and portable operating environment where an application can run without touching the resources of other containers or the host. In that sense, a container looks and acts like a newly installed physical computer or a virtual machine.
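
For example, the Docker CLI can start such an image as an isolated, resource-controlled process. A minimal sketch, assuming the illustrative "orders" image above:

    # Start the container with hard caps on memory and CPU,
    # publishing only the single port the service needs.
    docker run \
      --name orders \
      --memory=256m \
      --cpus=0.5 \
      -p 8080:8080 \
      orders:1.0

Inside the container, the service sees its own filesystem, process table, and network interface; the --memory and --cpus flags keep it from consuming resources that other containers or the host need.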


Benefits of Containerization

Containerization offers significant benefits to developers and development teams. Among these are the following:

  • Portability: A container creates an executable package of software that is abstracted away from (not tied to or dependent upon) the host operating system, and hence is portable and able to run consistently across any platform or cloud.

  • Agility: The open source Docker Engine for running containers established the industry standard for containers, with simple developer tools and a universal packaging approach that works on both Linux and Windows operating systems. The container ecosystem has since shifted toward runtimes and image formats standardized by the Open Container Initiative (OCI). Software developers can continue using agile or DevOps tools and processes for rapid application development and enhancement.

  • Speed: Containers are often described as “lightweight” because they share the machine’s operating system (OS) kernel and avoid the overhead of running a full OS per application. This drives higher server efficiency, reduces server and licensing costs, and speeds up start times, since there is no guest operating system to boot.

  • Fault isolation: Each containerized application is isolated and operates independently of the others. The failure of one container does not affect the continued operation of the rest, so development teams can identify and correct a technical issue in one container without downtime elsewhere. The container engine can also leverage OS security isolation techniques, such as SELinux access control, to isolate faults within containers.

  • Efficiency: Software running in containerized environments shares the machine’s OS kernel, and application layers within a container image can be shared across containers. Containers therefore have a much smaller footprint than a VM and start faster, allowing far more containers to run on the same compute capacity as a single VM. This drives higher server efficiency, reducing server and licensing costs.

  • Ease of management: A container orchestration platform automates the installation, scaling, and management of containerized workloads and services. Orchestration platforms can ease management tasks such as scaling containerized apps, rolling out new versions, and providing monitoring, logging, and debugging, among other functions (see the Deployment sketch after this list). Kubernetes, perhaps the most popular container orchestration system, is an open source technology, originally open-sourced by Google and drawing on its internal Borg system, that was first built to automate the management of Linux containers. Kubernetes works with many container engines, such as Docker, as well as with any container system that conforms to the Open Container Initiative (OCI) standards for container image formats and runtimes.

  • Security: Isolating applications as containers inherently prevents malicious code in one container from affecting other containers or the host system. Additionally, security permissions can be defined to automatically block unwanted components from entering containers or to limit communications with unnecessary resources; the sketch after this list includes an example of such restrictions.
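
As a sketch of the “ease of management” and “security” points above, here is a minimal Kubernetes Deployment manifest; all names, the replica count, and the limits are illustrative assumptions:

    # Keep three replicas of the (hypothetical) orders service running,
    # with per-container resource limits and a restricted security context.
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: orders
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: orders
      template:
        metadata:
          labels:
            app: orders
        spec:
          containers:
            - name: orders
              image: orders:1.0
              resources:
                limits:
                  memory: "256Mi"
                  cpu: "500m"
              securityContext:
                readOnlyRootFilesystem: true
                allowPrivilegeEscalation: false

Once this manifest is applied (kubectl apply -f deployment.yaml), Kubernetes keeps three replicas running, replaces any container that fails, and rolls out a new version when the image tag changes.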



How Containerized Microservices Work

A great way to introduce how containerized microservices work is to describe the alternative strategies for running microservices (and explain what’s wrong with each):

  • Each microservice runs on its own physical server with its own operating system instance: This approach keeps the microservices isolated from each other, but it’s wasteful. Considering that modern servers have the processing power to handle multiple operating system instances, a separate physical server for each microservice isn’t necessary.

  • Multiple microservices run on one operating system instance on the same server: This is risky because it doesn’t keep the microservices autonomous from each other. There is an increased chance of failures caused by conflicting application components and library versions. A problem with one microservice can lead to failure cascades that interrupt the operation of others.

  • Multiple microservices run in different virtual machines (VMs) on the same server: This provides a unique execution environment in which each microservice can run autonomously. However, each VM replicates an entire operating system instance, so you pay to license a separate OS for every VM, and running a full OS per microservice is an unnecessary burden on system resources.

Running microservices in containers, each with its necessary executables and libraries, means that every microservice operates autonomously, with minimal interdependency on the others. Moreover, multiple containers can run on a single OS instance, which eliminates per-VM OS licensing costs and reduces the burden on system resources; the Docker Compose sketch below illustrates this arrangement.
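
A minimal Docker Compose file sketches this arrangement; the three services and their image names are hypothetical:

    # Three microservices, each in its own isolated container,
    # all sharing a single host OS kernel.
    services:
      orders:
        image: orders:1.0
      payments:
        image: payments:1.0
      inventory:
        image: inventory:1.0

A single command (docker compose up) starts all three microservices as separate containers: each has its own filesystem and dependencies, yet none requires its own operating system instance.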



The Tech Platform
