Deploy the application
You’ve built a Swarm cluster, so now you are ready to build and deploy the voting application itself.
Step 1. Learn about the images
Some of the application’s containers are launched from existing images pulled directly from Docker Hub. Other containers are launched from custom images you must build. The list below shows which containers use custom images and which do not:
- Load balancer container: stock image (`ehazlett/interlock`)
- Redis containers: stock image (official `redis` image)
- Postgres (PostgreSQL) containers: stock image (official `postgres` image)
- Web containers: custom built image
- Worker containers: custom built image
- Results containers: custom built image
All custom built images are built using Dockerfiles pulled from the example application’s public GitHub repository.
- If you haven’t already, `ssh` into the Swarm `manager` node.

- Clone the application’s GitHub repo.

    $ git clone https://github.com/docker/swarm-microservice-demo-v1
    sudo: unable to resolve host master
    Cloning into 'swarm-microservice-demo-v1'...
    remote: Counting objects: 304, done.
    remote: Compressing objects: 100% (17/17), done.
    remote: Total 304 (delta 5), reused 0 (delta 0), pack-reused 287
    Receiving objects: 100% (304/304), 2.24 MiB | 2.88 MiB/s, done.
    Resolving deltas: 100% (132/132), done.
    Checking connectivity... done.

  This command creates a new directory structure inside of your working directory. The new directory contains all of the files and folders required to build the voting application images.
  The `AWS` directory contains the `cloudformation.json` file used to deploy the EC2 instances. The `Vagrant` directory contains files and instructions required to deploy the application using Vagrant. The `results-app`, `vote-worker`, and `web-vote-app` directories contain the Dockerfiles and other files required to build the custom images for those particular components of the application.
- Change directory into the `swarm-microservice-demo-v1/web-vote-app` directory.

    $ cd swarm-microservice-demo-v1/web-vote-app/

- View the Dockerfile contents.

    $ cat Dockerfile
    # Using official python runtime base image
    FROM python:2.7

    # Set the application directory
    WORKDIR /app

    # Install our requirements.txt
    ADD requirements.txt /app/requirements.txt
    RUN pip install -r requirements.txt

    # Copy our code from the current folder to /app inside the container
    ADD . /app

    # Make port 80 available for links and/or publish
    EXPOSE 80

    # Define our command to be run when launching the container
    CMD ["python", "app.py"]
  As you can see, the image is based on the official `python:2.7` tagged image, adds a requirements file into the `/app` directory, installs the requirements, copies files from the build context into the container, exposes port `80`, and tells the container which command to run. Spend time investigating the other parts of the application by viewing the `results-app/Dockerfile` and the `vote-worker/Dockerfile` in the application.
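
Before building across the cluster in the next step, you can optionally sanity-check the Dockerfile against a single engine. The following is only a minimal sketch, assuming you are still in the `web-vote-app` directory and that your `docker` client currently points at a local engine; the `smoke-test` tag is just an illustrative name.

    # Build the image against a single engine only (not through Swarm)
    $ docker build -t web-vote-app:smoke-test .

    # Confirm the image metadata matches the Dockerfile (exposed port and default command)
    $ docker inspect --format '{{.Config.ExposedPorts}} {{.Config.Cmd}}' web-vote-app:smoke-test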
Step 2. Build custom images
- If you haven’t already, `ssh` into the Swarm `manager` node.

- Make sure you have `DOCKER_HOST` set.

    $ export DOCKER_HOST="tcp://192.168.33.11:3375"
- Change to the root of your `swarm-microservice-demo-v1` clone.
- Build the `web-vote-app` image on both frontend nodes.

  frontend01:

    $ docker -H tcp://192.168.33.20:2375 build -t web-vote-app ./web-vote-app

  frontend02:

    $ docker -H tcp://192.168.33.21:2375 build -t web-vote-app ./web-vote-app
  These commands build the `web-vote-app` image on the `frontend01` and `frontend02` nodes. To accomplish the operation, each command copies the contents of the `swarm-microservice-demo-v1/web-vote-app` sub-directory from the `manager` node to each frontend node. The command then instructs the Docker daemon on each frontend node to build the image and store it locally.

  You’ll notice this example uses a `-H` flag to build the image on a specific host. This is to help you conceptualize the architecture for this sample. In a production deployment, you’d omit this option and rely on the Swarm manager to distribute the image. The manager would pull the image to every node, so that any node can step in to run the image as needed.

  It may take a minute or so for each image to build. Wait for the builds to finish.
- Build the `vote-worker` image on the `worker01` node.

    $ docker -H tcp://192.168.33.200:2375 build -t vote-worker ./vote-worker

  It may take a minute or so for the image to build. Wait for the build to finish.
- Build the `results-app` image on the `store` node.

    $ docker -H tcp://192.168.33.250:2375 build -t results-app ./results-app

Each of the custom images required by the application is now built and stored locally on the nodes that will use them.
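
As an optional check, you can list the newly built images on each engine directly to confirm each custom image landed on the node that will run it. This sketch simply reuses the `-H` addresses from the build commands above.

    # List each custom image on the node that built it
    $ docker -H tcp://192.168.33.20:2375 images web-vote-app
    $ docker -H tcp://192.168.33.21:2375 images web-vote-app
    $ docker -H tcp://192.168.33.200:2375 images vote-worker
    $ docker -H tcp://192.168.33.250:2375 images results-app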
Step 3. Pull images from Docker Hub
For performance reasons, it is a good idea to pull any required Docker Hub images to each instance that needs them ahead of time. This ensures that containers based on those images can start quickly.
- Log into the Swarm `manager` node.

- Pull the `redis` image to your frontend nodes.

  frontend01:

    $ docker -H tcp://192.168.33.20:2375 pull redis

  frontend02:

    $ docker -H tcp://192.168.33.21:2375 pull redis
- Pull the `postgres` image to the `store` node.

    $ docker -H tcp://192.168.33.250:2375 pull postgres
- Pull the `ehazlett/interlock` image to the `interlock` node.

    $ docker -H tcp://192.168.33.12:2375 pull ehazlett/interlock
Each node in the cluster, as well as the `interlock` node, now has the required images stored locally.
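
Because `DOCKER_HOST` points at the Swarm manager, you can also take a cluster-wide view rather than querying the nodes one at a time. A quick sketch, assuming the `DOCKER_HOST` export from Step 2 is still in effect:

    # Lists images known to the engines in the cluster, aggregated by the Swarm manager
    $ docker images

    # Summarizes the cluster, including each node and the containers it runs
    $ docker info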
Now that all images are built, pulled, and stored locally, the next step is to start the application.
Step 4. Start the voting application
In the following steps, you launch several containers that make up the voting application.
- If you haven’t already, `ssh` into the Swarm `manager` node.
- Start the `interlock` container on the `interlock` node.

    $ docker -H tcp://192.168.33.12:2375 run --restart=unless-stopped -p 80:80 --name interlock -d ehazlett/interlock --swarm-url tcp://192.168.33.11:3375 --plugin haproxy start

  This command is issued against the `interlock` instance and maps port 80 on the instance to port 80 inside the container. This allows the container to load balance connections coming in over port 80 (HTTP). The command also applies the `--restart=unless-stopped` policy to the container, telling Docker to restart the container if it exits unexpectedly.
- Verify the container is running.

    $ docker -H tcp://192.168.33.12:2375 ps
- Start a `redis` container on each of your frontend nodes.

  frontend01:

    $ docker run --restart=unless-stopped --env="constraint:node==frontend01" -p 6379:6379 --name redis01 --net mynet -d redis
    $ docker -H tcp://192.168.33.20:2375 ps

  frontend02:

    $ docker run --restart=unless-stopped --env="constraint:node==frontend02" -p 6379:6379 --name redis02 --net mynet -d redis
    $ docker -H tcp://192.168.33.21:2375 ps

  These two commands are issued against the Swarm cluster. The commands specify node constraints, forcing Swarm to start the containers on `frontend01` and `frontend02`. Port 6379 on each instance is mapped to port 6379 inside of each container for debugging purposes. The commands also apply the `--restart=unless-stopped` policy to the containers and attach them to the `mynet` overlay network.
- Start a `web-vote-app` container on each of the frontend nodes.

  frontend01:

    $ docker run --restart=unless-stopped --env="constraint:node==frontend01" -d -p 5000:80 -e WEB_VOTE_NUMBER='01' --name frontend01 --net mynet --hostname votingapp.local web-vote-app

  frontend02:

    $ docker run --restart=unless-stopped --env="constraint:node==frontend02" -d -p 5000:80 -e WEB_VOTE_NUMBER='02' --name frontend02 --net mynet --hostname votingapp.local web-vote-app

  These two commands are issued against the Swarm cluster. The commands specify node constraints, forcing Swarm to start the containers on `frontend01` and `frontend02`. Port `5000` on each node is mapped to port `80` inside of each container. This allows connections to come in to each node on port `5000` and be forwarded to port `80` inside of each container.

  Both containers are attached to the `mynet` overlay network and both containers are given the `votingapp.local` hostname. The `--restart=unless-stopped` policy is also applied to these containers.
- Start the `postgres` container on the `store` node.

    $ docker run --restart=unless-stopped --env="constraint:node==store" --name pg -e POSTGRES_PASSWORD=pg8675309 --net mynet -p 5432:5432 -d postgres

  This command is issued against the Swarm cluster and starts the container on `store`. It maps port 5432 on the `store` node to port 5432 inside the container and attaches the container to the `mynet` overlay network. The command also inserts the database password into the container via the `POSTGRES_PASSWORD` environment variable and applies the `--restart=unless-stopped` policy to the container.

  Sharing passwords like this is not recommended for production use cases.
- Start the `worker01` container on the `worker01` node.

    $ docker run --restart=unless-stopped --env="constraint:node==worker01" -d -e WORKER_NUMBER='01' -e FROM_REDIS_HOST=1 -e TO_REDIS_HOST=2 --name worker01 --net mynet vote-worker

  This command is issued against the Swarm manager and uses a constraint to start the container on the `worker01` node. It passes configuration data into the container via environment variables, telling the worker container to clear the queues on `frontend01` and `frontend02`. It adds the container to the `mynet` overlay network and applies the `--restart=unless-stopped` policy to the container.
- Start the `results-app` container on the `store` node.

    $ docker run --restart=unless-stopped --env="constraint:node==store" -p 80:80 -d --name results-app --net mynet results-app

  This command starts the `results-app` container on the `store` node by means of a node constraint. It maps port 80 on the `store` node to port 80 inside the container. It adds the container to the `mynet` overlay network and applies the `--restart=unless-stopped` policy to the container.
The application is now fully deployed.
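
If you want to confirm everything started before testing in a browser, you can query the Swarm manager. The following sketch assumes `DOCKER_HOST` is still exported from Step 2; with this version of Swarm, container names are listed prefixed by the node they run on (for example, `frontend01/redis01`).

    # List running containers across every node in the cluster
    $ docker ps

    # Spot-check one service directly on its engine, for example the worker container
    $ docker -H tcp://192.168.33.200:2375 logs worker01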
Step 5. Test the application
Now that the application is deployed and running, it’s time to test it. To do this, you configure a DNS mapping on the machine where you are running your web browser. This maps the “votingapp.local” DNS name to the public IP address of the `interlock` node.
- Configure DNS name resolution on your local machine for browsing. In each case, you add an entry that maps the `votingapp.local` name to `<interlock-public-ip>` (see the example entry below).

  - On Windows machines this is done by adding `votingapp.local <interlock-public-ip>` to the `C:\Windows\System32\Drivers\etc\hosts` file. Modifying this file requires administrator privileges. To open the file with administrator privileges, right-click `C:\Windows\System32\notepad.exe` and select `Run as administrator`. Once Notepad is open, click `File` > `Open`, open the file, and make the edit.
  - On OSX machines this is done by adding `votingapp.local <interlock-public-ip>` to `/private/etc/hosts`.
  - On most Linux machines this is done by adding `votingapp.local <interlock-public-ip>` to `/etc/hosts`.

  Be sure to replace `<interlock-public-ip>` with the public IP address of your `interlock` node. You can find the `interlock` node’s public IP by selecting your `interlock` EC2 instance from within the AWS EC2 console.
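
  For example, on OSX or Linux you could append the entry from a terminal. This is only a sketch; it assumes your `interlock` node’s public IP is 54.183.164.230, so substitute your own value. Note that the hosts file format puts the IP address first:

    # Append the votingapp.local mapping to the local hosts file (requires sudo)
    $ echo "54.183.164.230  votingapp.local" | sudo tee -a /etc/hosts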
- Verify the mapping worked with a `ping` command from your local machine.

    ping votingapp.local

    Pinging votingapp.local [54.183.164.230] with 32 bytes of data:
    Reply from 54.183.164.230: bytes=32 time=164ms TTL=42
    Reply from 54.183.164.230: bytes=32 time=163ms TTL=42
    Reply from 54.183.164.230: bytes=32 time=169ms TTL=42
- Point your web browser to http://votingapp.local

  Notice the text at the bottom of the web page. This shows which web container serviced the request. If you refresh your web browser you should see this change as the Interlock load balancer shares incoming requests across both web containers.

  To see more detailed load balancer data from the Interlock service, point your web browser to http://stats:[email protected]/haproxy?stats

  Cast your vote. It is recommended to choose “Dogs” ;-)
- To see the results of the poll, point your web browser at the public IP of the `store` node.
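
If you prefer checking from the command line, the following sketch exercises both endpoints with `curl`. Here `<store-public-ip>` is a placeholder for the `store` node’s public IP, which you can also find in the AWS EC2 console.

    # Confirm the voting app answers on port 80 via the Interlock load balancer
    $ curl -sSf http://votingapp.local/ > /dev/null && echo "voting app reachable"

    # Confirm the results app answers on port 80 of the store node
    $ curl -sSf http://<store-public-ip>/ > /dev/null && echo "results app reachable"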
Next steps
Congratulations. You have successfully walked through manually deploying a microservice-based application to a Swarm cluster. Of course, not every deployment goes smoothly. Now that you’ve learned how to successfully deploy an application at scale, you should learn what to consider when troubleshooting large applications running on a Swarm cluster.