Setup Docker environment on Node.js app
@lucjuggery
How do Docker components work together?
Agenda:
- The base application
- Quick introduction to Docker
- The runtime environment
- Build our application’s image
- Publish the image to a Docker Registry
- Link containers on a single Docker host
- Container networking on a single Docker host
- Container networking on multiple Docker hosts
- Deployment on a Docker Swarm
Details: HTTP REST API - Node.js (Sails.js) / MongoDB

Prerequisites:
- Node.js 4.4.5 (LTS) - https://nodejs.org/en/
- MongoDB 3.2 - https://docs.mongodb.org/manual/installation/
CRUD on a “Message” model

HTTP verb   URI           Action
GET         /message      list all messages
GET         /message/ID   get message with ID
POST        /message      create a new message
PUT         /message/ID   modify message with ID
DELETE      /message/ID   delete message with ID
Setup: usage of the Sails.js framework (the RoR of Node.js)
- install Sails.js: sudo npm install sails -g (should install 0.12.3)
- create the application: sails new messageApp && cd messageApp
- link with a local MongoDB using the sails-mongo ORM: npm install sails-mongo --save
- change the configuration
- create the API: sails generate api message
- run the application: sails lift
- API available on localhost:1337
config/models.js:
module.exports.models = {
  connection: 'mongo',
  migrate: 'safe'
};
config/connections.js:
module.exports.connections = {
  mongo: {
    adapter: 'sails-mongo',
    url: process.env.MONGO_URL || 'mongodb://localhost/messageApp'
  }
};
Examples:

$ curl http://localhost:1337/message
[ ]

$ curl -XPOST http://localhost:1337/message?text=hello
$ curl -XPOST http://localhost:1337/message?text=hola
$ curl http://localhost:1337/message
[
  { "text": "hello", "createdAt": "2015-11-08T13:15:15.363Z", "updatedAt": "2015-11-08T13:15:15.363Z", "id": "5638b363c5cd0825511690bd" },
  { "text": "hola", "createdAt": "2015-11-08T13:15:45.774Z", "updatedAt": "2015-11-08T13:15:45.774Z", "id": "5638b381c5cd0825511690be" }
]

$ curl -XPUT http://localhost:1337/message/5638b363c5cd0825511690bd?text=hey
$ curl -XDELETE http://localhost:1337/message/5638b381c5cd0825511690be
$ curl http://localhost:1337/message
[
  { "text": "hey", "createdAt": "2015-11-08T13:15:15.363Z", "updatedAt": "2015-11-08T13:19:40.179Z", "id": "5638b363c5cd0825511690bd" }
]
⇒ a CRUD API created in just a couple of commands with Sails.js
Quick introduction to Docker
Containers

[diagram: application code, libraries (Node.js runtime, Debian libraries, …) and processes (nginx, ...) running as containers on a Linux host, each isolated by cgroups + namespaces]

A container is a group of processes.
cgroups and namespaces are used to isolate the container from the outside:
- cgroups limit the resources (CPU, RAM, …)
- namespaces limit the visibility of the system (network, user, …)
Image: blueprint of a container

[diagram: an image bundles the application code, libraries (Node.js runtime, Debian libraries, …) and processes (nginx, ...); cgroups + namespaces are applied at run time]

Dockerfile: text file describing the processes that will run in the container
Image: built from the instructions of the Dockerfile; an image consists of multiple read-only layers
Container: instance of an image
The runtime environment
Docker host: physical or virtual host running the Docker Engine
- easily created with Docker Machine or using the Docker for Mac / Windows beta
- a lot of drivers available with Docker Machine: Oracle VirtualBox, DigitalOcean, Amazon Web Services, Microsoft Azure, Google Compute Engine, ...

Create one locally with the virtualbox driver:
docker-machine create --driver virtualbox node1

Set up the Docker host context:
eval "$(docker-machine env node1)"
then use regular Docker commands.

Get the IP of the newly created Docker host:
docker-machine ip node1 (⇒ 192.168.99.100)
Build our application’s image
One image for the application, one image for the database
- avoid adding too many services to a single image
- usage of 2 images to package the application: one for the database, one for the application
- application: several possibilities — extend an official Linux distribution image (Ubuntu, CentOS, ...) with the Node.js runtime, or use the official Node.js image (https://hub.docker.com/_/node/)
- database: usage of the official MongoDB image

Dockerfile: text file describing all the commands needed to create an image.

Dockerfile for our application:
- usage of the official node:4.4.5 (LTS) image
- copy the application sources
- install the dependencies
- expose a port to the outside of the Docker host
- default command run when instantiating the image

Create the image:
docker build -t message-app .

List all images available on the Docker host:
docker images
⇒ message-app image created
Dockerfile:

# Use node 4.4.5 LTS
FROM node:4.4.5
ENV LAST_UPDATED 20160605T165400

# Copy source code
COPY . /app

# Change working directory
WORKDIR /app

# Install dependencies
RUN npm install

# Expose API port to the outside
ENV PORT 80
EXPOSE 80

# Launch application
CMD ["npm", "start"]
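One detail worth noting next to this Dockerfile: since COPY . /app copies the whole build context, a .dockerignore file keeps local artifacts out of the image. A minimal sketch (the exact entries are assumptions about what exists locally):

```dockerfile
# .dockerignore — hypothetical sketch
node_modules
.git
.tmp
```

Excluding node_modules matters here because the dependencies are installed inside the image by RUN npm install; a node_modules folder built on the host OS could otherwise shadow it.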
Let’s instantiate a container:

$ docker run message-app
npm info it worked if it ends with ok
...
error: A hook (`orm`) failed to load!
error: Error: Failed to connect to MongoDB. Are you sure your configured Mongo instance is running?
Error details:
{ [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
  name: 'MongoError',
  message: 'connect ECONNREFUSED 127.0.0.1:27017',
  originalError:
   { [MongoError: connect ECONNREFUSED 127.0.0.1:27017]
     name: 'MongoError',
     message: 'connect ECONNREFUSED 127.0.0.1:27017' } }

The application cannot connect to a database, as we neither provided external db information nor a container running MongoDB.
Publish the image to a Docker Registry
Why is that needed? To provide access to the packaged application.
- public or private access
- possible to use tags to handle all the versions of the application
  format ⇒ username/image:tag (note: official images do not have the username prefix, e.g. mongo, redis, ...)
  mongo:3.2
  lucj/message-app (same as lucj/message-app:latest)
- a GitHub account can be linked to Docker Hub: a build can be automatically triggered on a git push command

Creation of a repository on the Docker public registry:
- hub.docker.com lists the user’s repositories
- repository details / repository created
⇒ the newly created repository will contain all the versions of the application’s image

Publish the image (it needs to be built using the username of the Docker Hub account):
- build: docker build -t lucj/message-app .
- identification: docker login
- publication: docker push lucj/message-app

The (public) image can now be used from any Docker host:
docker pull lucj/message-app
docker run -dP lucj/message-app (will start with an error as no database information is provided)
Link containers on a single Docker host
Deprecated: feature prior to Docker 1.9 (only here for information purposes)

docker run --link
- mongoDB container: docker run --name mongoDB -d mongo:3.0
- container linked to mongoDB: docker run -ti --link mongoDB:db busybox /bin/sh

What’s inside the second container?

/ # env
HOSTNAME=466ad6b628d1
DB_PORT=tcp://172.17.0.1:27017
DB_NAME=/furious_tesla/db
DB_PORT_27017_TCP_ADDR=172.17.0.1
DB_PORT_27017_TCP_PORT=27017
DB_PORT_27017_TCP_PROTO=tcp
DB_PORT_27017_TCP=tcp://172.17.0.1:27017
DB_ENV_MONGO_VERSION=3.0.7
...

Environment variables and /etc/hosts are automatically modified within the second container when the --link option is used.

/ # cat /etc/hosts
172.17.0.5  466ad6b628d1
127.0.0.1   localhost
::1         localhost ip6-localhost ip6-loopback
172.17.0.1  db c99a75a05c4a mongoDB
172.17.0.1  mongoDB
172.17.0.1  mongoDB.bridge
172.17.0.5  furious_tesla
172.17.0.5  furious_tesla.bridge
...

⇒ DB_PORT_27017_TCP_ADDR and DB_PORT_27017_TCP_PORT need to be used by the application to connect to the mongoDB container
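The lookup described above can be sketched in a few lines of Node.js: build a MongoDB connection string from the --link environment variables. The fallback values are illustrative, matching the example env output:

```javascript
// Sketch: derive the MongoDB URL from the legacy `--link` environment
// variables; the fallback values mirror the example output above.
const host = process.env.DB_PORT_27017_TCP_ADDR || '172.17.0.1';
const port = process.env.DB_PORT_27017_TCP_PORT || '27017';
const mongoUrl = `mongodb://${host}:${port}/messageApp`;
console.log(mongoUrl);
```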
Link the application with the database

[diagram: the message-app container (port 80, host 172.17.0.28) is linked to the mongo container (port 27017) on the same Docker host; DB_PORT_27017_TCP_ADDR and DB_PORT_27017_TCP_PORT are injected into the application container]

Modification of the config/connections.js file ⇒ connect to the mongoDB database using the environment variables imported into the application container:

module.exports.connections = {
  someMongodbServer: {
    adapter: 'sails-mongo',
    host: process.env.DB_PORT_27017_TCP_ADDR || 'localhost',
    port: process.env.DB_PORT_27017_TCP_PORT || 27017,
    database: 'messageApp'
  }
};
Update the application image:
- update the timestamp (LAST_UPDATED) within the application Dockerfile
  e.g.: ENV LAST_UPDATED 20151108T203800
  this invalidates the cache (the layers cached during the previous builds are not reused)
- create and publish the new image version:
  docker build -t lucj/message-app . && docker push lucj/message-app
- run the application container:
  docker run -p 8000:80 --link mongoDB:db lucj/message-app
⇒ application available on port 8000 of the Docker host (192.168.99.100)
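Bumping LAST_UPDATED forces every layer after it to be rebuilt, including npm install. A common refinement, not part of the original deck, is to copy package.json first so the dependency layer stays cached as long as package.json is unchanged:

```dockerfile
# Hypothetical refinement of the application Dockerfile
FROM node:4.4.5

# Install dependencies first: this layer is only rebuilt
# when package.json changes
COPY package.json /app/package.json
WORKDIR /app
RUN npm install

# Copy the rest of the sources; code changes only
# invalidate the layers from here on
COPY . /app

ENV PORT 80
EXPOSE 80
CMD ["npm", "start"]
```

With this ordering, rebuilding after a pure code change skips the npm install step entirely.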
[diagram: the message-app container (port 80, host 172.17.0.28) is linked to the mongo container (port 27017); port 80 of the application container is mapped to port 8000 on the Docker host]
Tests:

$ curl http://192.168.99.100:8000/message
[ ]

$ curl -XPOST http://192.168.99.100:8000/message?text=hello
$ curl http://192.168.99.100:8000/message
[
  { "text": "hello", "createdAt": "2015-11-08T21:07:23.363Z", "updatedAt": "2015-11-08T21:07:23.363Z", "id": "5638b363c5cd08255116456b" }
]

Note: 192.168.99.100 is the Docker host IP address.
Container networking on a single Docker host
New networking feature available since version 1.9
Default networks: 3 default networks on the node1 Docker host

$ docker network ls
NETWORK ID     NAME     DRIVER
d87b8fc4c466   bridge   bridge
efaf610f57a5   host     host
f7d0de539edd   none     null

By default, the Docker Engine attaches each container to the default bridge network.

$ docker run --name mongo -d mongo:3.2
$ docker run --name box -d busybox top

$ docker network inspect --format='{{json .Containers}}' d87b8fc4c466 | python -m json.tool
{
    "0b8fedf4613c7275d89861037ea1b23ad4d65ab10f16df67bf976d9cb5652311": {
        "EndpointID": "0cf0cd3b2e0438c6f68c6a1e2f7587b63c48bda74911af55d1040f0d2fb117d2",
        "IPv4Address": "172.17.0.3/16",
        "IPv6Address": "",
        "MacAddress": "02:42:ac:11:00:03",
        "Name": "mongo"
    },
    "6cb5e5f4a1bcc37925407b39f2dde41f2b370fc48a21f8289da91d17b3763a4c": {
        "EndpointID": "2a6412d3c3c25545a59ea148e317b2046965c0fe5c1eeae2c51f4f882aaa6b36",
        "IPv4Address": "172.17.0.2/16",
        "IPv6Address": "",
        "MacAddress": "02:42:ac:11:00:02",
        "Name": "box"
    }
}

$ docker run -ti busybox /bin/sh
/ # ping mongo
ping: bad address 'mongo'
/ # ping box
ping: bad address 'box'

A container cannot be addressed by its name :(
User-defined bridge network: create a bridge network with the Docker network commands, then run containers in the new network.

$ docker network create mongonet
ce9ea3b69d6ee2ecf56b40bd35b8a43f8505c8ca0473bc37bdede3711ecf60c1

$ docker network ls
NETWORK ID     NAME       DRIVER
d87b8fc4c466   bridge     bridge
efaf610f57a5   host       host
ce9ea3b69d6e   mongonet   bridge
f7d0de539edd   none       null

$ docker run --name mongo --net mongonet -d mongo:3.2
$ docker run --net mongonet -ti busybox /bin/sh
/ # ping -c 3 mongo
PING mongo (172.18.0.2): 56 data bytes
64 bytes from 172.18.0.2: seq=0 ttl=64 time=0.058 ms
64 bytes from 172.18.0.2: seq=1 ttl=64 time=0.085 ms
64 bytes from 172.18.0.2: seq=2 ttl=64 time=0.072 ms

--- mongo ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.071/0.085 ms

Containers can be addressed by their name through the DNS server embedded in Docker 1.10+.
Test our application: run the db and application containers in the new bridge network.

$ docker run --name mongo --net mongonet -d mongo:3.2
$ docker run --name app --net mongonet -p "8000:80" -d -e "MONGO_URL=mongodb://mongo/messageApp" message-app:v1

Use the mongo container’s name in the environment variable.

Test the HTTP REST API:

$ curl -XPOST http://192.168.99.100:8000/message?text=hello
{
  "text": "hello",
  "createdAt": "2016-06-06T14:01:05.764Z",
  "updatedAt": "2016-06-06T14:01:05.764Z",
  "id": "57558221a4461312009ce88c"
}

$ curl -XGET http://192.168.99.100:8000/message
[
  { "text": "hello", "createdAt": "2016-06-06T14:01:05.764Z", "updatedAt": "2016-06-06T14:01:05.764Z", "id": "57558221a4461312009ce88c" }
]

The application container is connected to the mongo container using the container name.
Packaging of the application with Docker Compose: package a multi-container application in a single file, docker-compose.yml.

version: '2'
services:
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
    expose:
      - "27017"
  app:
    image: message-app:v1
    ports:
      - "80"
    links:
      - mongo
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongodb://mongo/messageApp
volumes:
  mongo-data:

Notes:
- the internal port of the app container is mapped to a random port on the host
- a volume is used to mount the mongodb data folder
- the application container is connected to the mongo container using the container name
Lifecycle:
- docker-compose up (the -d option runs the application in the background)
- docker-compose ps
- docker-compose stop

Scalability:
- docker-compose scale app=3
- how are the new containers found?
⇒ we need a load balancer that is updated each time a container is created or removed
[diagram: three message-app containers (internal port 80) mapped to host ports 32768, 32769 and 32770, all linked to the mongo container (port 27017) on the Docker host]
Usage of the dockercloud/haproxy image:
- listens to all Docker Engine events (http://docs.docker.com/engine/reference/commandline/events/)
- automatic update of the load balancer configuration when a container is created or removed
[diagram: the dockercloud/haproxy container exposes port 8000 on the Docker host and forwards to three message-app containers (port 80, IPs 172.17.0.30 / 172.17.0.31 / 172.17.0.32), which are linked to the mongo container (port 27017)]
Adding the load balancer to docker-compose.yml:
- the load balancer exposes port 8000 to the outside
- the app container only exposes port 80 internally
- services communicate with each other through their name (using the DNS server embedded in the Docker Engine)

version: '2'
services:
  mongo:
    image: mongo:3.2
    volumes:
      - mongo-data:/data/db
    expose:
      - "27017"
  lbapp:
    image: dockercloud/haproxy
    links:
      - app
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - "8000:80"
  app:
    image: message-app
    expose:
      - "80"
    links:
      - mongo
    depends_on:
      - mongo
    environment:
      - MONGO_URL=mongodb://mongo/messageApp
volumes:
  mongo-data:
Test our application: run the new version of the compose file.

docker-compose up
docker-compose scale app=3

Test the HTTP REST API:

$ curl -XPOST http://192.168.99.100:8000/message?text=hola
{
  "text": "hola",
  "createdAt": "2016-06-08T13:30:18.298Z",
  "updatedAt": "2016-06-08T13:30:18.298Z",
  "id": "57581deacde05a1200877fa2"
}

$ curl -XGET http://192.168.99.100:8000/message
[
  { "text": "hola", "createdAt": "2016-06-08T13:30:18.298Z", "updatedAt": "2016-06-08T13:30:18.298Z", "id": "57581deacde05a1200877fa2" }
]
Container networking on multiple Docker hosts
Prerequisite: Docker 1.9+
- multihost networking available out of the box with libnetwork
- need to set up a key-value store (e.g. etcd / consul / zookeeper) that keeps all the information regarding networks / subnetworks and the IP addresses of Docker hosts / containers
Creation of a key-value store:
- create a Docker host:
  docker-machine create -d virtualbox consul
- switch to the context of the newly created machine:
  eval "$(docker-machine env consul)"
- run a container based on the Consul image:
  docker run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
Creation of the Docker hosts:

$ docker-machine create \
  -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  host1

$ docker-machine create \
  -d virtualbox \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  host2

Default networks available on each host: bridge / none / host

$ docker $(docker-machine config host1) network ls
NETWORK ID     NAME     DRIVER
14753b15c63e   bridge   bridge
2cc7d35a48e3   none     null
ad05eeca763a   host     host

$ docker $(docker-machine config host2) network ls
NETWORK ID     NAME     DRIVER
b7765c98adbf   bridge   bridge
48244d2fca3b   none     null
36a3858b68c8   host     host
Creation of an overlay network: create a network from host1

docker $(docker-machine config host1) network create -d overlay appnet

The new network is also visible from host2:

$ docker $(docker-machine config host1) network ls
NETWORK ID     NAME     DRIVER
acd47b4c062d   appnet   overlay
14753b15c63e   bridge   bridge
2cc7d35a48e3   none     null
ad05eeca763a   host     host

$ docker $(docker-machine config host2) network ls
NETWORK ID     NAME     DRIVER
acd47b4c062d   appnet   overlay
b7765c98adbf   bridge   bridge
48244d2fca3b   none     null
36a3858b68c8   host     host
Creation of the containers:
- run the mongo container on the appnet network from host1:
  docker $(docker-machine config host1) run -d --name mongo --net=appnet mongo:3.0
- run a busybox container on the appnet network from host2:
  docker $(docker-machine config host2) run -ti --name box --net=appnet busybox sh

The “box” container can communicate with the “mongo” container using its name, through the DNS server embedded in Docker 1.10+:

/ # ping mongo
PING mongo (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.553 ms
…
/ # ping mongo.appnet
PING mongo.appnet (10.0.0.2): 56 data bytes
64 bytes from 10.0.0.2: seq=0 ttl=64 time=0.474 ms
…
Deployment on a Docker Swarm
Docker Swarm: a cluster of Docker hosts
- one or several Swarm masters (for HA): orchestrator / scheduler, failover
- one Swarm agent per node
- easy to create with Docker Machine
- integration of Docker Machine / Docker Compose / Docker Swarm
Creation of a key-value store (as before):
- create a Docker host:
  docker-machine create -d virtualbox consul
- switch to the context of the newly created machine:
  eval "$(docker-machine env consul)"
- run a container based on the Consul image:
  docker run -d -p "8500:8500" -h "consul" progrium/consul -server -bootstrap
Creation of a swarm:

$ docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-master \
  --swarm-discovery="consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  demo0

$ docker-machine create \
  -d virtualbox \
  --swarm \
  --swarm-discovery="consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-store=consul://$(docker-machine ip consul):8500" \
  --engine-opt="cluster-advertise=eth1:2376" \
  demo1

demo0 is the Swarm master, demo1 a Swarm agent.

$ docker-machine ls
NAME     ACTIVE   DRIVER       STATE     URL                         SWARM
consul   *        virtualbox   Running   tcp://192.168.99.100:2376
demo0    -        virtualbox   Running   tcp://192.168.99.101:2376   demo0 (master)
demo1    -        virtualbox   Running   tcp://192.168.99.102:2376   demo1

⇒ 3 Docker hosts created (key-value store, Swarm master, Swarm node)
Create a DNS load balancer.

nginx.conf:

user nginx;
worker_processes 2;

events {
  worker_connections 1024;
}

http {
  access_log /var/log/nginx/access.log;
  error_log /var/log/nginx/error.log;

  # 127.0.0.11 is the address of the Docker embedded DNS server
  resolver 127.0.0.11 valid=1s;

  server {
    listen 80;

    # apps is the name of the network alias in Docker
    set $alias "apps";

    location / {
      proxy_pass http://$alias;
    }
  }
}

Dockerfile:

FROM nginx:1.9

# forward request and error logs to docker log collector
RUN ln -sf /dev/stdout /var/log/nginx/access.log
RUN ln -sf /dev/stderr /var/log/nginx/error.log

COPY nginx.conf /etc/nginx/nginx.conf

EXPOSE 80

CMD ["nginx", "-g", "daemon off;"]

# Create image
$ docker build -t lucj/lb-dns .

# Publish image
$ docker push lucj/lb-dns
Update docker-compose.yml:

version: '2'
services:
  mongo:
    image: mongo:3.2
    networks:
      - backend
    volumes:
      - mongo-data:/data/db
    expose:
      - "27017"
    environment:
      - "constraint:node==demo0"
  lbapp:
    image: lucj/lb-dns
    networks:
      - backend
    ports:
      - "8000:80"
    environment:
      - "constraint:node==demo0"
  app:
    image: lucj/message-app
    expose:
      - "80"
    environment:
      - MONGO_URL=mongodb://mongo/messageApp
      - "constraint:node==demo1"
    networks:
      backend:
        aliases:
          - apps
    depends_on:
      - lbapp
volumes:
  mongo-data:
networks:
  backend:
    driver: overlay

Notes:
- uses the lb-dns load balancer
- constraints are added to choose the nodes
- a new user-defined overlay network is created
- no need to use links between containers
- images are pulled from the Docker Hub
Deployment and scaling of the application:
- switch to the Swarm master context:
  eval $(docker-machine env --swarm demo0)
- run the application:
  docker-compose up
- scaling:
  docker-compose scale app=5

The messageApp API is available through http://192.168.99.101:8000/message
(192.168.99.101 is the IP of the Swarm master, 8000 the port of the load balancer)
In summary:
- set up Docker for a simple Node.js / MongoDB application
- created an image for the application, containing all the parts needed to run it (Node.js runtime, libraries, application code)
- portable image (dev / test / qa / prod) available through the Docker Hub
- scalability of the application (API) on a cluster of Docker hosts
- several Docker components well integrated together
Next:
- scalability of the database tier
- add a web front-end that uses the API
- add centralized log management: ELK stack (Elasticsearch / Logstash / Kibana)
- add a monitoring solution for all the running containers
- add a TLS termination (using https-portal)