In this article I provide a quick-start guide to Docker Swarm commands by working through a three-node swarm cluster.
Please note that you should already be familiar with Docker before going through this guide.
Setting up the cluster
curl -sSL https://get.docker.com | sh
This command pulls a docker installation script and runs it on the node. Run it on each of the three nodes to install docker.

docker swarm init --advertise-addr <Node-01 IPAddress>
This command sets up the docker swarm and makes Node-01 the manager node of the swarm.
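For example, assuming Node-01's IP address is 10.0.0.11 (a placeholder value for this walkthrough), the command would be:

docker swarm init --advertise-addr 10.0.0.11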
docker swarm join-token manager # For a manager node token
docker swarm join-token worker # For a worker node token
These commands display the docker swarm join command (including the join token) needed to add other nodes to the swarm as a manager or a worker. The output will look like this:
docker swarm join --token <swarm token> <Node-01 IP>:2377
Copy the join command and run it on the two other nodes to add them to the cluster. This creates a three-node cluster with Node-01 as the lead manager node.
To promote a worker node (for example Node-02) to a manager, run

docker node update --role manager Node-02
Only manager nodes have access to the swarm commands; the worker nodes are just docker environments that execute tasks.
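docker node promote and docker node demote are shorthand for the same role change; for example, using the node names from this walkthrough:

docker node promote Node-02 # same as: docker node update --role manager Node-02
docker node demote Node-02 # back to a worker: docker node update --role worker Node-02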
docker node ls

Run this command on a manager node to list all the nodes in the swarm and confirm their roles and availability.
docker service create --name <ServiceName> -p <exposePort>:<serviceTaskPort> <image_name>:<image_tag>

This command creates a service from the given image and publishes the service's task port on the exposed port.

docker service create --name <ServiceName> --replicas 3 -p <exposePort>:<serviceTaskPort> <image_name>:<image_tag>

Adding --replicas 3 runs three replicas (tasks) of the service across the swarm; without it, a service defaults to one replica.
docker service ls
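As a concrete sketch (the service name, ports, and image here are example values):

docker service create --name web --replicas 3 -p 8080:80 nginx:alpine
docker service ls # overview of all services
docker service ps web # the individual tasks (containers) backing the service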
Overlay network
An overlay network is an internal private network that spans all the nodes participating in the swarm cluster. Overlay networks also facilitate communication between a docker swarm service and a standalone container, or between two standalone containers on different Docker daemons.
docker network create --driver overlay <network_name>
docker network ls
docker service create --name <ServiceName> --network <network_name> -p <exposePort>:<serviceTaskPort> <image_name>:<image_tag>
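A minimal sketch, assuming a network called backend and two example services (the api image name myorg/api:latest is a placeholder); services attached to the same overlay network can reach each other by service name:

docker network create --driver overlay backend
docker service create --name api --network backend -p 8000:8000 myorg/api:latest
docker service create --name db --network backend postgres:9.4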
Stacks

Stacks accept compose files as their declarative definition for services, networks, and volumes, and are deployed with docker stack deploy.
docker stack deploy -c <compose-file.yml> <stack_name>
docker stack ls # List the stacks in the swarm
docker stack ps <stack_name> # List the tasks in the stack
docker stack rm <stack_name> # Remove the stack
docker stack services <stack_name> # List the services in the stack
version: "3.8" services: db: image: postgres:9.4 volumes: - db-data:/var/lib/postgresql/data networks: - backend deploy: placement: constraints: [node.role == manager]
- This example will create a service called db using the image postgres:9.4, attached to a network called `backend`.
- The constraints property under deploy tells the swarm that this service can only be deployed on a node with the manager role.
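To try this example, save it as docker-compose.yml and deploy it as a stack (the stack name mydb is just a placeholder):

docker stack deploy -c docker-compose.yml mydb
docker stack ps mydb # the single db task should be scheduled on a manager node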
Secrets

Inside a service's containers, each secret is made available as a file at

/run/secrets/<secret_name>

or /run/secrets/<secret_alias> if an alias (target name) is specified.
To create a secret:

docker secret create <secret_name> <secret.txt> # using a file
OR
docker secret create <secret_name> - # Then type the secret on standard input
The docker secret command has the following subcommands:

create Create a secret from a file or STDIN as content
inspect Display detailed information on one or more secrets
ls List secrets
rm Remove one or more secrets
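A quick sketch of these subcommands in use (the secret name and value are placeholders):

echo "s3cr3t-value" | docker secret create app_token - # create from STDIN
docker secret ls # list secrets in the swarm
docker secret inspect app_token # metadata only; the secret value is never shown
docker secret rm app_token # remove it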
docker service create --name psql \
  --secret psql_user --secret psql_pwd \
  -e POSTGRES_PASSWORD_FILE=/run/secrets/psql_pwd \
  -e POSTGRES_USER_FILE=/run/secrets/psql_user \
  postgres:9.4
The secrets psql_user and psql_pwd would have already been created in the swarm using docker secret create.
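For example, they could be created from standard input like this (the values shown are placeholders; printf is used so no trailing newline ends up in the secret):

printf 'admin' | docker secret create psql_user -
printf 'S3cr3tPassw0rd' | docker secret create psql_pwd -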
file: docker-compose.yml
version: "3.8" services: psql: image: postgres:9.4 secrets: - psql_user - psql_pwd environment: POSTGRES_USER_FILE=/run/secrets/psql_user POSTGRES_PASSWORD_FILE=/run/secrets/psql_pwd volumes: - psql-data:/var/lib/postgresql/data secrets: psql_user: external: true psql_pwd: external: true volumes: psql-data:
file: docker-compose.yml
version: "3.8" services: psql: image: postgres:9.4 secrets: - psql_user - psql_pwd environment: POSTGRES_USER_FILE: /run/secrets/psql_user POSTGRES_PASSWORD_FILE: /run/secrets/psql_pwd volumes: - psql-data:/var/lib/postgresql/data secret: psql_user: file: ./psql_user.txt psql_pwd: file: ./psql_pwd.txt volumes: psql-data:
The files psql_user.txt and psql_pwd.txt must be in the same directory when creating the stack.

Healthchecks

To check a container's health status, use

docker container ls

or

docker container inspect <container_name>

which shows the last 5 health checks.

For example, the following runs an elasticsearch:2 container with a custom healthcheck:

docker run \
  --health-cmd "curl -f localhost:9200/_cluster/health || false" \
  --health-interval=5s \
  --health-retries=3 \
  --health-timeout=2s \
  --health-start-period=15s \
  elasticsearch:2
version "3.8" services: web: image: eddytnk/myapp:latest healthcheck: test: ["CMD", "curl", "-f", "http://localhost"] interval: 1m30s timeout: 10s retries: 3 start_period: 40s