Not everything new is good and not everything revolutionary is necessary. With container technologies, as with every other “Next Big Thing”, we are seeing rampant invention of higher-level abstractions, rapid deployment to production, entire CI/CD infrastructures built on top of them, and DevOps teams who don’t understand what any of it actually does.
Let’s begin with what containers actually were, historically. In the early 2000s, FreeBSD introduced the concept of “Jails”, which offered a fresh environment, much like a clean install of the operating system, while reusing the FreeBSD libraries and kernel infrastructure already in place on the host. A clean slate for developers to test new software.
This is in stark contrast to technologies like VMware, KVM or VirtualBox, where the entire hardware is virtualized: your host OS provisions a virtual set of CPU, RAM and other resources, and your guest operating system sits on top of those virtual hardware resources. Almost every layer of abstraction is duplicated, and resources like RAM and CPU, once allocated to the guest, are no longer available to the host (regardless of whether or not the guest actually uses them).
Docker and Linux-y containers
With the operating system being virtualized instead of the hardware, containers can be spun up with quotas set on their resource utilization. For example, if we set a maximum limit of 2GB of RAM usage for a container, it won’t be able to exceed it. On the other hand, since there is only one kernel in the loop, if the container is not using its entire RAM allowance, the kernel can put the remaining resources to use elsewhere.
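As a sketch of what such a quota looks like in practice (the container name and image here are illustrative), Docker exposes this through kernel cgroup limits:

```shell
# Cap the container at 2 GB of RAM; the kernel enforces this via cgroups.
# If the container uses less than 2 GB, the host can use the rest freely.
docker run -d --name web --memory=2g nginx

# Inspect the limit that was applied (the value is reported in bytes)
docker inspect --format '{{.HostConfig.Memory}}' web
```

If the container tries to exceed the limit, the kernel’s OOM killer steps in, rather than the container starving the host.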
The first drawback people noticed with the container model is that, since we are virtualizing the operating system and not the hardware, every container must share the host’s kernel: you can have multiple instances of the same operating system, but you lose the capability of spinning up an arbitrary OS.
There is no such thing as a Windows container on Linux or a Linux container on Windows. Docker on Windows, for example, uses Moby Linux, which is actually running in a VM inside your Windows box.
When it comes to Linux distributions, however, you can do a lot of interesting things. Since what we call Linux is just the kernel, and it needs a userland stack of libraries (GNU on most distributions, musl and BusyBox on Alpine) to provide a complete OS environment, you can run various distributions such as CentOS, Ubuntu and Alpine in different container instances on the same kernel.
This is true for both LXD and Docker.
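For instance (the image tags below are illustrative), you can pull several distributions’ userlands onto one Linux host with Docker; each reports its own identity while sharing the host’s kernel:

```shell
# Three different userlands, one shared host kernel
docker run --rm ubuntu:22.04 head -n 1 /etc/os-release
docker run --rm alpine:3.19  head -n 1 /etc/os-release
docker run --rm centos:7     head -n 1 /etc/os-release

# All three report the same kernel version, because it is the host's
docker run --rm alpine:3.19 uname -r
```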
Docker as a packaging mechanism
Docker will do to apt what apt did to tar. That is to say, you will still be using apt, but with an added layer of abstraction on top of it. To understand how, consider the following example.
You have an instance of your website running on PHP 5.6 and you need to run another web service on the same server using PHP 7.0. Running two different versions of PHP side by side is a scary idea: you don’t know what conflicts will arise between them, and updating and upgrading will soon become a hopeless endeavor.
But what if we had our original web instance running inside a Docker container? Then all we need is a new Docker container in which we install PHP 7.0, and our second web service will run from this newly spun-up container. We will still be using apt in the background, just like apt uses tar in the background, but Docker will make sure that applications in different containers don’t conflict with each other.
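A minimal sketch of the idea (the container names, host paths and ports here are illustrative; the official `php` images ship Apache variants for both versions):

```shell
# The legacy site keeps running on PHP 5.6 in its own container
docker run -d --name legacy-site -p 8080:80 \
    -v /srv/legacy:/var/www/html php:5.6-apache

# The new service runs on PHP 7.0 in a second container on the same host
docker run -d --name new-service -p 8081:80 \
    -v /srv/new:/var/www/html php:7.0-apache
```

Each container carries its own PHP installation and dependencies, so neither can break the other during an upgrade.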
Docker is especially useful for running stateless applications, and you will often hear people say that you can’t run more than one process in a container. Although that is technically false, running multiple stateful services in one container instance often leads to inconsistent results, and you will soon find yourself restarting the same set of containers over and over again.
LXD as a Hypervisor
With LXD containers, what you get is much closer to a standalone operating system than what you get from Docker. Docker containers, by default, all share the same NAT-ed networking setup and storage stack.
This means basic tools like ping or ifconfig are typically unavailable from inside a Docker container, and you can learn almost nothing about the network you are on from inside one. Docker’s NAT, running on the host’s networking stack, provides most of the connectivity, along with facilities like port forwarding.
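That port forwarding through the NAT is what the familiar `-p` flag sets up (the container name and ports here are illustrative):

```shell
# Host port 8080 is forwarded (DNAT-ed) to port 80 inside the container
docker run -d --name web -p 8080:80 nginx

# Show the published port mapping the NAT layer maintains
docker port web

# From inside, the container only sees its private NAT-ed address
docker exec web hostname -i
```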
LXD containers are way ahead of the curve, supporting network bridges, macvlan and multiple other options. Your LXD containers and your host all form a private network of their own and can communicate with each other as if they are talking to different computers over a network.
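A sketch of the two common setups (the container name and parent interface are illustrative and depend on your host):

```shell
# Default: the container attaches to the lxdbr0 bridge and gets its own IP
lxc launch ubuntu:22.04 c1
lxc list c1            # shows the container's own IPv4/IPv6 addresses

# Alternatively, give it a macvlan NIC so it appears directly on the
# physical network (instance devices override the profile's eth0)
lxc config device add c1 eth0 nic nictype=macvlan parent=enp3s0
```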
The same is true of the storage stack. It is often much more practical to use LXD with ZFS pools, where you can allocate datasets with quotas limiting storage utilization. LXD is in direct competition with VMware, KVM and other hypervisor technologies.
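A minimal sketch, assuming ZFS is available on the host (the pool name, container name and sizes are illustrative):

```shell
# Create a loop-backed ZFS storage pool for LXD
lxc storage create tank zfs size=20GB

# Launch a container on that pool and cap its root disk at 5 GB
lxc launch ubuntu:22.04 c1 --storage tank
lxc config device override c1 root size=5GB
```

Under the hood each container gets its own ZFS dataset, which is also what makes snapshots and fast cloning cheap.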
Using it, your cloud provider can provision a personal container for you that would smell and feel like a complete operating system, yet is still cheap and fast to spin up and kill, with all the niceties of persistent data that you expect.
From the provider’s perspective, things are economical as well. Since not everyone uses the entire RAM that they ask for, you can cram many more containers on the same metal than you can VMs.
To end users it might sound like cheating at first, but they win in the end as well: LXD containers are faster to spin up and kill, making the whole process much smoother and more “scalable” (as people are fond of saying).
You can spin up a container on the compute node where your data resides, do the computation you want, and then destroy the container, leaving the data intact. This is much faster than fetching all the relevant data to a virtual machine running in some other data center, and it works especially well with ZFS in the loop.
To sum up all that we know, both LXD and Docker are containerization technologies. Docker is light-weight, simplistic and is well-suited for isolating applications from each other making it popular among DevOps and developers alike. One app per Docker container.
LXD, on the other hand, is much better equipped and is much closer to a complete operating system environment, with networking and storage interfaces. You can even run Docker containers nested inside an LXD container, if you want.
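Nesting requires one extra flag on the LXD side (the container name below is illustrative):

```shell
# Launch an LXD container with nesting enabled
lxc launch ubuntu:22.04 docker-host -c security.nesting=true

# Install Docker inside it and run containers as usual
lxc exec docker-host -- sh -c "apt-get update && apt-get install -y docker.io"
lxc exec docker-host -- docker run --rm hello-world
```

This gives you a tidy pattern: one LXD container per tenant or project, with that tenant’s Docker apps isolated inside it.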