Container-based Architectures I/III: Technical advantages
This is the first part of an article series about the technical advantages of containers in development, testing and production. The second part covers business benefits, and the third part will compare the container services of three major cloud providers (AWS, Azure and Google).
What is containerization?
Containerization, or container-based virtualization, is an operating-system-level virtualization method for deploying and running distributed applications without launching a virtual machine for each application. The most popular implementation of containers, Docker, uses the resource isolation features of the Linux kernel, such as cgroups and kernel namespaces, together with a union-capable file system such as OverlayFS, to allow independent “containers” to run within a single Linux instance, avoiding the overhead of starting and maintaining virtual machines.
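As a minimal illustration (assuming Docker is installed and the public alpine image can be pulled), the following commands show that a container brings its own userland but runs on the host's kernel rather than booting one of its own:

```
# The container ships its own userland (Alpine Linux)...
docker run --rm alpine cat /etc/os-release

# ...but it shares the host's kernel instead of booting its own,
# which is why it starts in milliseconds rather than minutes.
docker run --rm alpine uname -r   # prints the *host* kernel version
```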
Docker architecture overview
Docker uses a client-server architecture. The Docker client talks to the Docker daemon, which does the heavy lifting of building, running, and distributing your Docker containers. The Docker client and daemon can run on the same system, or you can connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate using a REST API, over UNIX sockets or a network interface.
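A small sketch of that client-server split, assuming the daemon is listening on the default UNIX socket at /var/run/docker.sock:

```
# The docker CLI is just a client for the daemon's REST API.
# This lists running containers through the CLI...
docker ps

# ...and this is roughly the same call made directly against the
# Engine API over the UNIX socket (requires curl 7.40+ for --unix-socket).
curl --unix-socket /var/run/docker.sock http://localhost/containers/json
```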
The Docker daemon manages Docker objects such as images, containers, networks, and volumes. A Docker registry stores Docker images. Docker Hub and Docker Cloud are public registries that anyone can use, and Docker is configured to look for images on Docker Hub by default. You can even run your own private registry.
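A quick sketch of working with registries; the image and repository names below (my-alpine) are examples only:

```
# By default, images are pulled from Docker Hub.
docker pull alpine:3.6

# Running your own private registry is a one-liner using the official "registry" image.
docker run -d -p 5000:5000 --name registry registry:2

# Tag and push an image to that private registry instead of Docker Hub.
docker tag alpine:3.6 localhost:5000/my-alpine
docker push localhost:5000/my-alpine
```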
Container based architecture compared with virtual machines
Unlike a virtual machine, which bundles a full guest operating system on top of a hypervisor, a container packages only the application and its dependencies and shares the host kernel, which makes it much smaller and much faster to start.
Technical advantages in Development
- Isolated environments. Using containers while developing is a completely different experience from traditional development. Usually when you start a new job you get a step-by-step tutorial for configuring your machine, or a virtual machine image with a pre-configured environment. With Docker, it is simply a matter of installing Docker and pulling/running containers to kick off your development environment (a sketch follows this list).
- Homogeneous environments. All environments (development, testing, staging, pre-production and production) are set up in the same way, because the whole environment is kept within the container definition.
- Continuous integration including infrastructure. Any changes in the container definition should trigger a new build and automated testing. Infrastructure is part of the development pipeline.
- Microservices. Containers facilitate a microservices architectural pattern, since it becomes easier to develop discrete, separately deployable components. On the other hand, increasing the number of applications also increases maintenance complexity, network latency and monitoring overhead; the article Modules vs microservices [5] describes the operational complexity of microservices clearly.
- Only one virtual machine required. A developer is often working on two or three projects that require different configurations, which traditionally means separate VMs. With Docker, the different containers can all run on the same machine or VM.
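A hypothetical developer-onboarding sketch of the point above; the names (myorg/webapp-dev, devnet) and versions are placeholders, not a real project:

```
# 1. Create a network so the app container can reach its dependencies by name.
docker network create devnet

# 2. Start the backing services the project needs (e.g. PostgreSQL).
docker run -d --name db --network devnet -e POSTGRES_PASSWORD=dev postgres:9.6

# 3. Pull and run the pre-built development image; the source tree is
#    bind-mounted so code changes on the host are visible inside the container.
docker run -it --rm --network devnet \
  -v "$(pwd)":/src -w /src \
  -p 8080:8080 \
  myorg/webapp-dev:latest
```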
Technical advantages in Testing and Quality Assurance environments
- Production containers testing. The exact same image that runs in Production can be run in Test and QA providing certainty that there will be no differences between environments.
- Easier creation of new testing environments. Creation of new environments becomes easier when multiple streams of work are running in parallel.
- Dynamic configuration of environments. Differences between environments can be kept in environment variables (e.g. database connection details); however, this is not recommended for secret data [3]. It is better to use a system for distributing and managing secrets, such as Keywhiz (a sketch follows below).
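Keywhiz is the approach mentioned above; as a simpler sketch of the same idea, Docker's built-in secrets (available in swarm mode) also keep secret data out of environment variables. The names below (db_password, myorg/webapp) are illustrative only:

```
# Non-secret configuration can still travel as environment variables.
docker run -d -e DB_HOST=test-db.internal myorg/webapp:1.4.2

# Secrets are stored by the swarm and surfaced to the service as an
# in-memory file at /run/secrets/<name>, not as an environment variable.
echo "s3cr3t" | docker secret create db_password -
docker service create --name webapp --secret db_password myorg/webapp:1.4.2
```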
Technical advantages in Production
- Immutability. Both servers and containers are disposable. The server basically needs to run Docker Engine and nothing else. Containers are created from immutable images: to release a new version in production, a container from the new image is started and the old one is removed (a swarm-mode sketch follows this list).
- Isolation. Each application can only access its own container space. If an application gets compromised, it won’t be able to access other applications.
- Security by default. Swarm enables a number of security features by default, such as mutual TLS between nodes, and makes further hardening, such as read-only containers, straightforward. However, we need to be careful of “isolation myopia”: Diogo Mónica mentioned in Software Engineering Radio episode 290 [2] that experts mostly focus on container-level security while leaving the application itself open to attack. Regardless of whether the application runs on a VM or in a container, if it gets compromised its data will be exposed.
- Portability. A container can run anywhere, since its whole definition travels with it. In the third part of this article series I will compare three options for running containers: AWS, Azure and Google Cloud Platform.
- Scalability. To scale an application, more containers running it need to be launched, which requires an orchestrator. At the time of writing, the leading options are Kubernetes, Mesos and Swarm.
- Easier elasticity. Dynamic provisioning becomes straightforward. I previously wrote an article on this topic, Elasticity does not equal Scalability [4], which you may want to check out.
- Applications are decoupled from the servers they run on. Servers can be scaled separately from applications, since many applications can run on one server.
- Heterogeneous deployments. With the introduction of Docker on Windows, the Docker platform now provides a single set of tools, APIs and image formats for managing both Linux and Windows apps. As Linux and Windows apps and servers are dockerized, developers and IT pros can bridge the operating system divide with shared Docker terminology and interfaces for managing and evolving complex microservices deployments, both on-premises and in the cloud.
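A swarm-mode sketch tying together the immutability, security and scalability points above; the image names and replica counts are illustrative only:

```
# Read-only containers: the root filesystem cannot be modified at runtime.
docker service create --name web --replicas 3 --read-only -p 80:8080 myorg/webapp:1.4.2

# Immutability in practice: a release is a new image, not a patched container.
# The orchestrator replaces the old containers with ones from the new image.
docker service update --image myorg/webapp:1.5.0 web

# Scalability: scaling out is just asking the orchestrator for more replicas.
docker service scale web=10
```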
References
- [1] Containerization (container-based virtualization) by Wikipedia
- [2] Increasing Attacker Cost Using Docker and Immutable Infrastructure by Diogo Mónica
- [3] Why you shouldn’t use ENV variables for secret data by Diogo Mónica
- [4] Elasticity does not equal Scalability by Pablo Iorio
- [5] Modules vs microservices by Sander Mak
- [6] Learning Path: Delivering Applications with Docker by O’Reilly Media, Inc.
- [7] A Beginner-Friendly Introduction to Containers, VMs and Docker by Preethi Kasireddy