
Docker Compose — Memory Limits

Docker Compose is a powerful utility. It saves time and reduces errors when deploying your Dockerized application. It is usually not a great idea to run the entire stack, including the frontend, the database server, etc., from inside a single container.

We spin up different containers to handle different workloads of an application and we use Docker Compose for doing this easily. Each logically different workload is listed as a different service. For example, your frontend http server will be listed as a frontend service running an Apache or an Nginx image as a container.

All the services, their networking needs, storage requirements, etc can be specified in a docker-compose.yml file. We will be focusing on specifying memory utilization here.
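As a sketch of that idea, a compose file for a hypothetical two-service stack might look like the following (the service names, images, and volume name here are purely illustrative):

```yaml
version: '2.4'
services:
  frontend:
    image: nginx:latest        # HTTP server as its own service
    ports:
      - "80:80"
    depends_on:
      - db                     # start the database first
  db:
    image: postgres:13         # database as a separate service
    environment:
      POSTGRES_PASSWORD: example
    volumes:
      - db-data:/var/lib/postgresql/data   # persistent storage
volumes:
  db-data:
```

Each service gets its own container, network identity, and resource settings, which is exactly what makes per-service memory limits possible later on.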

Prerequisites

You’d need the following in your arsenal to follow along:

  1. Basic understanding of Docker
  2. Docker for Windows or Mac, or, if you are running Linux, Docker CE for Linux
  3. Docker Compose binary (Windows and Mac users will already have this installed)
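To confirm that everything is in place before proceeding, these commands should print version information on a working setup:

```shell
$ docker --version
$ docker-compose --version
```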

We shall be sticking to version 2.4 for our docker-compose.yml files, as it supports Docker Engine version 17.12 and higher. We could have gone with version 3, which is more recent, but it doesn’t support the older memory-limit syntax; if you try to use the newer syntax, it insists on running Docker in Swarm mode instead. So, to keep matters simple for regular Docker users, I will stick to version 2.4.

Most of the code works just the same for version 3, and where there is a difference, I will mention the newer syntax for Docker Swarm users.

Sample Application

Let’s try to run a simple Nginx service on port 80, first using the CLI and then a simple docker-compose.yml. In the next section, we shall explore its memory limits and utilization and modify our docker-compose.yml to see how custom limits are imposed.

Let’s start a simple Nginx server using the Docker CLI:

$ docker run -d --name my-nginx -p 80:80 nginx:latest

You can see the Nginx server working by visiting http://localhost or by replacing localhost with the IP address of your Docker host. This container can potentially utilize the entire available memory of your Docker host (in our case, about 2 GB). To check the memory utilization, among other things, we can use the command:

$ docker stats my-nginx

CONTAINER ID   NAME       CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O   PIDS
6eb0091c0cf2   my-nginx   0.00%   2.133MiB / 1.934GiB   0.11%   3.14kB / 2.13kB   0B / 0B     2

The MEM USAGE / LIMIT column shows usage at 2.133MiB out of a total of 1.934GiB. Let’s remove this container and start writing docker-compose scripts.

$ docker stop my-nginx
$ docker rm my-nginx

Equivalent yml file

The same container as above can be created by following these steps:

$ mkdir my-compose
$ cd my-compose
$ vim docker-compose.yml

We create a new, empty directory and create a file named docker-compose.yml in it. When we run docker-compose up from this directory, it will look for this specific file (ignoring everything else) and create our deployment accordingly. Add the following contents to this .yml file:

version: '3'
services:
  my-nginx:
    image: nginx:latest
    ports:
      - "80:80"
 
$ docker-compose up -d

The -d flag is added so that the newly created containers run in the background. Otherwise, the terminal would attach itself to the containers and start printing logs from them. Now we can see the stats of the newly created container(s):
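Since -d detaches your terminal from the containers, you can still follow their output afterwards with compose’s logs subcommand:

```shell
$ docker-compose logs -f my-nginx
```

The -f flag keeps following new log lines, much like tail -f; press Ctrl+C to stop without affecting the container.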

$ docker stats --all

CONTAINER ID   NAME                    CPU %   MEM USAGE / LIMIT    MEM %   NET I/O       BLOCK I/O     PIDS
5f8a1e2c08ac   my-compose_my-nginx_1   0.00%   2.25MiB / 1.934GiB   0.11%   1.65kB / 0B   7.35MB / 0B   2

You will notice that a container similar to the one before was created, with similar memory limits and even utilization. From the same directory that contains the yml file, run the following command to delete the newly created container, along with the custom bridge network that was created:

$ docker-compose down

This returns Docker to a clean state, with the exception of any volumes that were created (we didn’t create any, so that’s not a concern).
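If a deployment had declared named volumes and you wanted those removed as well, compose can do it in the same step:

```shell
$ docker-compose down -v
```

The -v (or --volumes) flag also deletes named volumes declared in the compose file, so use it only when you are sure the data is disposable.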

Memory Limits and Memory Reservations

Memory limits and memory reservations are two different mechanisms for ensuring the smooth functioning of your applications and of the Docker host they run on.

Broadly speaking, a memory limit imposes an upper bound on the amount of memory a Docker container can potentially use. By default, a Docker container, like any other system process, can use the entire available memory of the Docker host. This can cause out-of-memory errors, and your system may very well crash. Even if it never comes to that, it can still starve other processes (including other containers) of valuable resources, again hurting performance. Memory limits ensure that resource-hungry containers don’t surpass a certain threshold. This restricts the blast radius of a poorly written application to a few containers, not the entire host.

A memory reservation, on the other hand, is less rigid. When the system is running low on memory and tries to reclaim some of it, it tries to bring the container’s memory consumption down to or below the reservation. If there’s an abundance of memory, however, the application can expand up to the hard memory limit.

To summarize:

  1. Memory Limit: A strict upper limit to the amount of memory made available to a container.
  2. Memory Reservation: This should be set to the bare minimum amount of memory that an application needs to run properly, so that it doesn’t crash or misbehave when the system is trying to reclaim some memory.


If memory reservation is greater than memory limit, memory limit takes precedence.
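For comparison, the same two knobs exist on the plain Docker CLI, which is handy for quick experiments before committing anything to a compose file. The values here mirror the compose examples in the next section:

```shell
$ docker run -d --name my-nginx -p 80:80 \
    --memory 300m \
    --memory-reservation 100m \
    nginx:latest
```

--memory (short form -m) is the hard limit and --memory-reservation is the soft one, matching mem_limit and mem_reservation in compose.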

Specifying Memory Limits and Reservation

Version 2

Let’s go back to the docker-compose.yml we wrote earlier and add a memory limit to it. Change the version to 2.4 for the reasons discussed in the Prerequisites section.

version: '2.4'
services:
  my-nginx:
    image: nginx:latest
    ports:
      - "80:80"
    mem_limit: 300m

The last line sets the memory limit for the my-nginx service to 300MiB. You can also use k for KiB, g for GiB, and b for plain bytes. However, the number before the suffix must be an integer; you can’t use values like 2.4m, you would have to use 2400k instead. Now if you run:

$ docker stats --all

CONTAINER ID   NAME                    CPU %   MEM USAGE / LIMIT   MEM %   NET I/O       BLOCK I/O   PIDS
44114d785d0a   my-compose_my-nginx_1   0.00%   2.141MiB / 300MiB   0.71%   1.16kB / 0B   0B / 0B     2

You will notice that the memory limit is now set to 300MiB. Setting a memory reservation is equally easy: just add a line mem_reservation: xxx at the end.

version: '2.4'
services:
  my-nginx:
    image: nginx:latest
    ports:
      - "80:80"
    mem_limit: 300m
    mem_reservation: 100m
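To double-check that both values were actually applied, docker inspect can read them back from the container’s HostConfig; both fields are reported in bytes, so 300MiB and 100MiB appear as their byte equivalents:

```shell
$ docker inspect \
    --format '{{.HostConfig.Memory}} {{.HostConfig.MemoryReservation}}' \
    my-compose_my-nginx_1
```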

Version 3 (Optional)

To use version 3 you need to be running Docker in Swarm mode. For Windows and Mac, you can enable it using the Docker settings menu; Linux users need to run docker swarm init. More information on that can be found in the official Docker documentation. It is not a necessary step, though, and if you have not enabled Swarm mode, that’s fine as well. This section is for people already running in Swarm mode who can make use of the newer version.

version: '3'
services:
  my-nginx:
    image: nginx:latest
    ports:
      - "80:80"
    deploy:
      resources:
        limits:
          memory: 300m
        reservations:
          memory: 100m

We define all of this under the resources option. limits and reservations become keys of their own, and memory is but one of the many resources being managed here; CPU is yet another important parameter.
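For instance, CPU can be capped alongside memory using the cpus key under the same resources section (the half-core and quarter-core figures below are just illustrative):

```yaml
deploy:
  resources:
    limits:
      cpus: '0.50'     # hard cap: at most half of one CPU core
      memory: 300m
    reservations:
      cpus: '0.25'     # soft guarantee of a quarter core
      memory: 100m
```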

Further Information

You can learn more about docker-compose from the official documentation. Once you get the gist of how to write a compose file, the documentation can help you with the specifics of various parameters.

You don’t have to know everything; just search for what your application requires, and the reference will guide you in implementing it.

About the author

Ranvir Singh

I am a tech and science writer with quite a diverse range of interests, and a strong believer in the Unix philosophy. A few of the things I am passionate about include system administration, computer hardware, and physics.