When scaling a service in Docker Compose, you may hit a conflict: a published host port can be bound by only one container. There are several ways to resolve this problem, but a load balancer is one of the most effective, as it manages the traffic flowing to the different containers.
This blog will demonstrate how to scale a Docker container using nginx as a load balancer and reverse proxy.
How to Scale Docker Containers Using Nginx as a Load Balancer and Reverse Proxy?
A load balancer distributes incoming traffic across containers, increasing the reliability, capacity, and availability of applications and services. Because replicas of a container run on the same network and would need the same published port, scaling can cause conflicts such as port-binding errors. Instead of publishing a port per replica, an nginx reverse proxy or load balancer can distribute traffic across the scaled service's replicas using round-robin or another routing technique.
To manage scaled services using nginx as a load balancer, go through the following steps.
Step 1: Make a Dockerfile
First, create a Dockerfile to containerize the program. Here, the instructions dockerize a "main.go" Golang program:
# Base image assumed: any official Go image works
FROM golang:alpine
WORKDIR /go/src/app
COPY main.go .
RUN go build -o webserver .
ENTRYPOINT ["./webserver"]
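The "main.go" program itself is not reproduced in this blog; a minimal server along the following lines (an assumed sketch, not the original source) works well for this demo, because including the container's hostname in the response makes it obvious which replica answered each request:

```go
package main

import (
	"fmt"
	"net/http"
	"os"
)

// greeting builds the response body; the hostname identifies which
// replica answered (Docker sets each container's hostname to its ID).
func greeting(host string) string {
	return fmt.Sprintf("Hello from container %s\n", host)
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		host, err := os.Hostname()
		if err != nil {
			host = "unknown"
		}
		fmt.Fprint(w, greeting(host))
	})
	// Listen on 8080 to match "server web:8080" in nginx.conf.
	http.ListenAndServe(":8080", nil)
}
```

Since Docker assigns each container a unique hostname by default, the two scaled replicas will produce visibly different greetings.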
Step 2: Create a "docker-compose.yml" File
Next, create a "docker-compose.yml" file and copy the provided instructions into it. These instructions include:
- The "services" key configures the services. Here, two services are configured: "web" and "nginx". The "nginx" service acts as a load balancer that manages the traffic for the scaled "web" service.
- The "build" key indicates that the "web" service is built from the Dockerfile in the current directory.
- There is no need to publish a port for the "web" service, as the nginx load balancer handles external traffic, so replicas do not compete for a host port.
- "volumes" binds the "nginx.conf" file to the container path.
- "depends_on" declares that the "nginx" service depends on the "web" service.
- "ports" publishes the nginx service's port, through which the scaled replicas are reached via a routing technique:
services:
  web:
    build: .
  nginx:
    image: nginx:latest
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf:ro
    depends_on:
      - web
    ports:
      - "8080:8080"
Step 3: Make an "nginx.conf" File
Next, make an "nginx.conf" file to use nginx as a load balancer and reverse proxy. The file contains the following directives:
- "upstream all" defines the upstream group. Here, the "web" service is expected to listen on port 8080.
- Inside the "server" block, the load balancer listens on port "8080", and "proxy_pass http://all/" forwards each request to the upstream group:
events {
    worker_connections 1000;
}

http {
    upstream all {
        server web:8080;
    }

    server {
        listen 8080;
        location / {
            proxy_pass http://all/;
        }
    }
}
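By default, nginx distributes requests to the servers in an upstream group in round-robin order: first replica, second replica, then back to the first. The idea can be sketched in Go (a simplified illustration of the algorithm, not nginx's actual implementation):

```go
package main

import "fmt"

// roundRobin cycles through upstream backends in order, which is what
// nginx does by default for an upstream block with equal weights.
type roundRobin struct {
	backends []string
	next     int
}

// pick returns the next backend and advances the rotation.
func (r *roundRobin) pick() string {
	b := r.backends[r.next%len(r.backends)]
	r.next++
	return b
}

func main() {
	// Hypothetical replica names, as if started with --scale web=2.
	lb := roundRobin{backends: []string{"web-1:8080", "web-2:8080"}}
	for i := 0; i < 4; i++ {
		fmt.Println(lb.pick()) // alternates web-1, web-2, web-1, web-2
	}
}
```

In the real setup, Docker's embedded DNS resolves the "web" service name to the replicas, and nginx rotates through them in the same fashion.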
Step 4: Scale the Service and Fire up the Containers
Next, scale and start the service by utilizing the "--scale" option with the "docker-compose up" command. For instance, start two replicas of the "web" service:
docker-compose up --scale web=2
After that, navigate to the nginx service's published port (http://localhost:8080) and check that it is serving responses from the "web" replicas. Refresh the page, or run "curl http://localhost:8080" repeatedly, to see the output switch between replicas as the nginx load balancer rotates through them:
This is all about how to scale a Docker container using nginx as a load balancer and reverse proxy.
Conclusion
To scale a Docker container using nginx as a load balancer and reverse proxy, first configure the services in the compose file. Then, create an "nginx.conf" file that defines the upstream group, sets the load balancer's listening port, and proxies requests to the upstream service. In the "docker-compose.yml" file, the "nginx" service acts as the load balancer in front of the scaled service. This write-up has demonstrated how to scale Docker containers using nginx as a load balancer.