
Docker at Adams State University

Randall Smith

CHECO Fall 2018


Once Upon a Time


Great stories begin at Adams State. Our Docker story started a few years ago.

In the Beginning

• Use Docker to package Perl web applications
• Two stand-alone servers
• No registry
• Migrating services between servers was problematic


I had a couple of Perl applications that I had to upgrade and migrate to new hosts. These were our help desk and account management systems. Like most Perl apps, they required a large number of CPAN libraries. When I learned about Docker, I decided that I wanted to use it to wrap those applications into nice, neat packages to make migration easier. Building Docker images also made testing the apps easier because I could use the same image in testing and production.

I started with two Ubuntu servers on which I manually managed the containers. The containers were started when the servers booted, using upstart scripts.

Docker has the concept of a registry to store images that can be used anywhere. I didn't have one, and I didn't want to upload the images publicly to Docker Hub. That meant I had to rebuild the applications on each server, which sometimes led to slightly different versions of libraries and broke the applications.

The lack of a registry made it difficult to migrate services to the other host. Plus, managing the upstart scripts was a fragile process that broke frequently.

We Needed an Orchestrator

Orchestration is needed to fully realize the power of Docker


Running standalone containers without orchestration can work on a small scale. To really realize the power of Docker, an orchestration platform is needed.

What is Orchestration

Orchestration is the process of managing and automating containers.


Orchestration is the process of managing and automating containers. The number of containers can scale quickly. Without good tooling, the number of containers can quickly become unmanageable.

What an Orchestrator Does

• Starts / stops services
• Scales replicas up / down
• Connects containers to storage
• Manages container replacements for updates


An orchestrator is in charge of ensuring that services are running. Services are spread across the cluster to attempt to balance the load. The orchestrator can also scale the number of replicas of your service up and down.

The orchestrator also manages the process of connecting running containers to storage. This lets you abstract your storage from the container, making it easier to switch clouds if desired.

Finally, the orchestrator schedules replacing containers when they are updated. For example, you can tell your orchestrator that a service should be running a new image, and it will stop the current image and start the new one automatically. If your service can run multiple replicas, this update can be transparent to your users.
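With Docker Swarm, for example, these operations are single commands. A minimal sketch; the service name and image tag here are hypothetical:

# Scale a service up to three replicas
docker service scale www_wordpress=3

# Roll the service to a new image; Swarm replaces the containers automatically
docker service update --image registry.example.edu/www/wordpress:4.9.8-r2 www_wordpress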

Orchestration Platforms

• Docker Swarm
• Kubernetes
• Mesosphere
• Fleet
• Cattle


I did a survey of the available orchestration tools that existed at the time. The big ones were Docker Swarm, Kubernetes, and Mesosphere. I also looked at Fleet and Rancher Cattle. For full details…

My Published Works


…see my published works, Docker Orchestration in particular.

The Findings: Kubernetes

• Ceph RBD support
• Easier migration of legacy applications
• Each service gets its own IP address
• Better feature set


In the end, I wrote up a review recommending that we use Kubernetes. Kubernetes was the new project on the block, but it was already gaining a large following in the community.

Ceph is a distributed storage system that we use to provide block devices to our VM stack. I wanted to be able to use it with Docker as well. Unfortunately, the lack of Ceph support in most of the products at the time limited the list to Kubernetes.

Kubernetes also provided the easiest migration path for legacy applications into a container environment. The pod structure allows multiple containers to be combined and to communicate with each other as if they were on the same server.

The Kubernetes service model makes deploying multiple applications which use the same port much easier. Since each service gets its own IP address, there are no potential port conflicts as there are with the other tools.

While RBD support has since been added to Swarm, Kubernetes also has a number of features that are unique. The most prominent are scheduled jobs and easy tools to run tasks in running containers.

Things Didn't Go as Planned

A new install of Snipe IT required persistent storage, which I could not support on my existing Docker servers.


I was unable to deploy Kubernetes immediately due to other priorities. Then we had a project come from our support services team to set up an inventory management system called Snipe IT. The admin on our team who was running the project wanted to use the official Snipe IT Docker container.

The problem is that container storage is not persistent. I couldn't handle it on my existing pair of servers because I didn't have a way to deal with the storage needs.

To solve this, I quickly stood up a small cluster running Docker Swarm. This is exactly the type of problem that orchestration tools help solve.

Why Swarm

• Built into Docker
• Encrypted overlay networks
• Storage driver for Ceph RBD


Swarm is built into Docker. It's almost trivial to set up. Service and container management is fairly simple, and it still offered powerful features such as zero-downtime updates.

Swarm also provides a native encrypted overlay network to connect containers running on separate hosts. This rocks. Communication between every container in the cluster is encrypted transparently at the network layer. That allowed us to encrypt traffic for services such as MySQL, for which enabling native encryption can be problematic.

Most importantly, I was able to configure the Rexray storage driver for Docker to mount Ceph RBDs for persistent storage.

In two days I was able to set up a fully orchestrated Docker cluster. Everything was coming up roses.
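A minimal sketch of that initial setup; the address and network name are placeholders:

# On the first manager node
docker swarm init --advertise-addr 192.168.1.10
# Worker nodes join with the token the command above prints.

# Create an encrypted overlay network for cross-host container traffic
docker network create --driver overlay --opt encrypted services_net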

The Real Win: GitLab CI


The real win for us came in the form of GitLab CI. GitLab became integral to our workflow.

We've been using GitLab for many years as a git server. It was just as I was starting to look into orchestration platforms that GitLab added a Docker registry. Since we didn't have one yet, this was a huge win.

Shortly thereafter, I started looking into GitLab's continuous integration features.

Automated Container Builds


It started early in the process, before I stood up the Swarm. I was pushing my image configurations into git in GitLab, and I started using GitLab CI to automatically build new Docker images.

When a commit is pushed to a repo with CI enabled, a new build is triggered. Once the build is complete, it's pushed into the Docker registry and is ready to use anywhere. Where before I was building new images on my desktop or on the servers themselves, now I could let GitLab do it for me.

What makes this especially cool is that we were able to make this available to our PR department to build new images for the main website. The new adams.edu runs on WordPress running in Docker. Our web developer can build and test new images automatically.

I also like this process because it makes it easier to stand up services in a test environment. We can test the image before it goes into production. Once our testing is done, the image that is deployed into production is the exact image that passed our tests. This eliminates nearly all of the anxiety that comes with deploying new changes.
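A minimal .gitlab-ci.yml build job in that spirit; this is a sketch, not our exact configuration, and it relies on GitLab's predefined CI_REGISTRY* variables:

build_image:
  stage: build
  image: docker:stable
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Tag the image with the branch name so each branch gets its own image
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_REF_SLUG"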

Automated Deployment of Services


Even better, we added steps to the CI process which allow him to deploy his updated WordPress images to the Swarm without root access or needing to talk to a member of our ops team. By choice, the deployment is triggered manually, especially to production. This helps to ensure that we are being deliberate in our processes.

GitLab keeps an audit trail of every deployment, so we can see every deployment that happens and who triggered it. It also makes it easy to roll back to previous images in the event of serious problems.

Most of our other Docker services have been migrated to this process as well. This allows us to fully audit any deployments of services. The consistent process also makes it easier for anyone on the team to make changes if needed.

Does it take longer to make small changes? Yes. Yes, it does. However, the deliberate process ensures that we are consistent. It also provides a record of what changed (thanks to git). That way, if we start getting calls on a service, we can see if there were any recent changes, what they were, and when they were deployed. We can also roll back updates in most cases if there are problems.

This is a very DevOps approach. We're treating servers and services as code.
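A manually triggered deploy job might look like the following sketch; the stack name and branch are assumptions:

deploy_production:
  stage: deploy
  script:
    - docker stack deploy -c docker-compose.yml www
  environment:
    name: production
  when: manual   # a human pulls the trigger, keeping deployments deliberate
  only:
    - master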

Building Images

FROM wordpress:4.9.8-apache
LABEL maintainer="Mike Henderson <mhenderson@adams.edu>"

RUN apt-get update && apt-get install -y curl zip unzip git libldap2-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu \
    && docker-php-ext-install ldap \
    && apt-get purge -y --auto-remove libldap2-dev \
    && php -r "readfile('http://getcomposer.org/installer');" \
       | php -- --install-dir=/usr/bin --filename=composer

# [git clone of themes and plugins removed]

# Pull in composer file and run
COPY composer.json .
RUN composer install --verbose --profile --prefer-dist --no-autoloader

# Make NFS mount point, set ownership, symlink into wordpress
RUN mkdir /uploads
RUN chown www-data:www-data /uploads
RUN ln -s /uploads /usr/src/wordpress/wp-content/uploads


Building an image is the first step to running a service in Docker. Using off-the-shelf images can be a great way to get started, but eventually you'll need to roll your own image. This is done via a Dockerfile.

The easiest place to start is from an existing image. The slide shows a snippet from the Dockerfile that we're using to build the image for what will be our main web site. In this case, we're expanding off the official WordPress image to build exactly what we need for adams.edu.

Our web guru, Mike Henderson, built and maintains the Dockerfile. Our CI process automates the build when he pushes a change into GitLab and allows him to deploy it to testing or production.

The build process makes it easier for others to review and audit an image. Unlike a server, everything that goes into a service is in the image. You don't have extra things hanging around that no one knows about because the person doing the install forgot about them.

Deploying Services

docker stack deploy -c docker-compose.yml www


So let's dig into how we deploy a service. This is done in Swarm with a docker-compose.yml file. In it, you specify all of the services, volumes, and networks that your application needs. All of this happens at run time. This is the compose file that we're using to test our WordPress deployment for our new website.
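The compose file itself did not survive in this transcript; what follows is a minimal sketch of the shape of such a file, with placeholder image tags, passwords, and names:

version: '3.3'
services:
  wordpress:
    image: registry.example.edu/www/wordpress:4.9.8-r1  # version tag allows rollback
    ports:
      - "8080:80"
    networks:
      - www_net
  db:
    image: mariadb:10.3  # each service gets its own dedicated DB
    environment:
      MYSQL_ROOT_PASSWORD: changeme  # placeholder; use Docker secrets in practice
    volumes:
      - db_data:/var/lib/mysql  # persistent storage for the database
    networks:
      - www_net
volumes:
  db_data:
networks:
  www_net:
    driver: overlay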

• Base images are usually very minimal.
• Version tags allow rollback: go back to a previous version.
• This DB approach means that every service gets its own dedicated DB.

[ back to slide ] When we're ready to deploy, we run docker stack deploy and Swarm tries to make reality match what we've defined in the compose file.

The combination of Dockerfiles and the compose file defines the entire configuration for a service. This fills the same role as server configuration management.

Zero-downtime Upgrades

• Requires the service to run multiple replicas
• Running containers are replaced one at a time until they are all replaced
• The load balancer serves requests from all running services


One of the great features of Swarm and Kubernetes is the ability to perform zero-downtime upgrades. This feature is based firstly on running multiple replicas of the image. Each running container is updated one at a time, or in groups of a configurable size.

The built-in load balancer will serve requests from each running container. As containers are shut down, they are removed from the load balancer. The new containers are added and will start serving requests. Eventually, every container will be replaced with ones running the new image. The service will remain up the entire time.

During the update process, some requests may go to old containers while others go to the new ones. As long as everything is backwards compatible, this works well.

Even for services that cannot run with multiple replicas, such as databases, they generally restart so quickly that downtime for upgrades is reduced to seconds.
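In a Swarm compose file (version 3.4 or later for the order key), this behavior lives under a service's deploy key. A fragment, with illustrative numbers:

    deploy:
      replicas: 3
      update_config:
        parallelism: 1      # replace one container at a time
        delay: 10s          # pause between replacements
        order: start-first  # start the new container before stopping the old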

Who Needs a Server

docker run -v /path/to/results:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host3 report-name


One of the great things about running services in containers is that we started to kill off servers. In some cases, it was what you would expect: the server that was running a service isn't needed anymore. In one case in particular, the change was even more drastic.

We use OpenVAS to run regular security scans against our servers. The OpenVAS server included the security scanner and a web interface for managing the scans and providing access to the reports.

Cameron started looking into how to move OpenVAS into a container and discovered a couple of pre-built images. One of them allows you to pass in a list of hosts to scan on startup. The container will go through the list, scan every host, and write the report to a volume. Even better, the container downloads all of the latest checks when it starts, so it is always up-to-date.

We were able to replace an entire server with a one-liner. We can schedule that to run as a cron job or run it on demand anytime we need it.
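Scheduling it is then an ordinary cron entry; the schedule, path, and host list below are placeholders:

# Run the scan every Sunday at 02:00
0 2 * * 0  docker run --rm -v /srv/openvas/results:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host3 weekly-report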

Who Needs a Cluster

• Use a single-node Docker Swarm for standalone services
• All of the CI tooling is still available
• Take advantage of rollback and the CI audit trail


We also found that in some cases it still might make sense to run a service on a standalone server. However, you can still take advantage of the managed deployment and rollback that are available with Docker Swarm on a single host.

All of the CI build and deployment options are available as they would be if the service were running on a full cluster. Instead, a specific gitlab-runner is used on the standalone host. Deployments are then configured to use that runner.

There are two big wins when taking this approach on standalone servers. First, you get rapid rollback in the event of failure. Second, you get the audit trail and accountability that come from the CI environment.
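Routing a deployment to the standalone host is done with runner tags; a sketch in which the tag name is an assumption:

deploy_standalone:
  stage: deploy
  tags:
    - standalone-host  # matches the gitlab-runner registered on that server
  script:
    - docker stack deploy -c docker-compose.yml app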

There Were Problems

• Swarm scheduler failed
• Rexray driver doesn't consistently unmap RBDs
• Overlay network randomly stopped working
• IPAM assigns too many IP addresses


As we started to use Swarm more, we started to see problems appear.

First of all, we ran into issues with the Swarm scheduler. There was a bug that triggered once in a while that prevented the Swarm from starting new services. Running containers were fine, but we couldn't start new ones. Eventually, this was solved in a later Docker release.

The other problem we have is that the Rexray driver doesn't always clean up after itself when a container is stopped. It can leave RBDs mounted or mapped on a node, preventing the service from starting elsewhere in the Swarm. This can cause a service to take longer to start or prevent it from starting entirely.

We had two major network issues. First of all, the overlay network would sometimes stop talking. Containers on the same nodes could continue to connect, but they couldn't talk to others in the Swarm. There turned out to be a conflict between the kernel network timeout settings and the IPVS settings, which led to the kernel dropping the overlay connections.

The second problem is one we're still working around. There's a bug in the IPAM module, which assigns the virtual IPs. If services are frequently updated, it can lead to multiple IPs being assigned to the service and the old IPs not being cleaned up. This can lead to strange connectivity problems.

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept.


In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and the workflow for deploying services.

Don't be fooled by the "proof of concept" label. Swarm is a solid product. So solid that we are using it to run production services.

Next Steps


The next step for us is to move into our new Kubernetes cluster.

I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture


The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm.

We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI).

The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the containers are running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems.

One of the best things about the native Docker overlay network is that all traffic between all of the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host links between all of the hosts with IPsec and Let's Encrypt. (Mostly automatically; I still have to manually request the certificate.)

Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.
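The haproxy piece is a conventional TLS-terminating frontend; a sketch with placeholder addresses and certificate path:

frontend www
    bind *:443 ssl crt /etc/haproxy/certs/adams.edu.pem
    default_backend swarm_web

backend swarm_web
    balance roundrobin
    server node1 10.0.0.11:8080 check
    server node2 10.0.0.12:8080 check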

Setup

• rke, kubespray, and kubeadm make installs easier
• Shared storage via Ceph and NFS
• Networking is the hard part


Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier. One of the things I like about rke is that it has an easy option to tear down a cluster. This is great for testing.

We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the volume at the same time.

The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster will use the load balancer from the cloud provider to provide access. Obviously, that's not the case when running it locally.

All of the subnets used by Kubernetes are designed to be internal to the cluster. That makes it hard to make services public. Metallb has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of the IPs. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
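Metallb's Layer 2 mode is configured with a small ConfigMap; a sketch in which the address range is a placeholder:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.100.240-192.168.100.250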

Migration

• Convert from Docker Compose to Pods and Services
• Add CI jobs to deploy to the k8s cluster
• Change DNS to point to the k8s service
• No image rebuild needed


One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker-compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can be deployed to both clusters and tested as needed.

For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image. No need to rebuild or re-install.
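Converted to Kubernetes, the WordPress piece of the earlier compose sketch might look like this; the names and image are the same placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: registry.example.edu/www/wordpress:4.9.8-r1  # the exact image tested in Swarm
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
spec:
  selector:
    app: wordpress
  ports:
  - port: 80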

How Can You Get Started

• Start with something simple
• Start with services that can tolerate downtime
• You may need to change your backup strategy
• CI, CI, CI


Don't do what I did. Start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started. This will happen a lot with your first services.

Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve. Don't let it bite you (or discourage you).

When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there.

CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy


You can build from scratch, but there are a number of pre-built commercial services.

Docker Enterprise is a simple on-prem install and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now.

You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers, or even use multiple providers at the same time.

Logging

• Containers generally log to standard out
• Logs can be shipped to a log aggregator such as Elasticsearch
• View logs and create dashboards in Kibana


Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write their logs to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator.

Generally, this is done by shipping those logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.

Monitoring

• Collect metrics with Prometheus
• View the metrics with Grafana
• Use InfluxDB for long-term storage


Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics.

Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
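Shipping metrics to InfluxDB is a remote_write/remote_read pair in prometheus.yml; a sketch in which the hostname and database are placeholders:

# prometheus.yml (fragment)
remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/read?db=prometheus"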

Security Scans

• No built-in way to see if an image has out-of-date packages
• clair and Docker Enterprise can scan images for vulnerable packages
• Doesn't work as well for 3rd-party packages


One of the things that you lose with containers is a built-in way to ensure that the software in an image is not running vulnerable packages. On a server you can use yum or apt to see if new updates need to be applied, but containers don't have that.

Tools such as clair can be used to scan images at build time as part of the CI process, or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source.

One way to keep images up-to-date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
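A scheduled rebuild can reuse the build job, restricted to pipeline schedules; a sketch:

rebuild_image:
  stage: build
  only:
    - schedules  # triggered by a GitLab pipeline schedule, not by pushes
  script:
    - docker build --pull --no-cache -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"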

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock


Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm.

To request access, go to the BANDOCK Google group and click the link that says "Apply for membership."

Questions

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker


Thank You


References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland
• ELK Stack image - https://www.elastic.co/elk-stack
• Docker Enterprise - https://www.docker.com/products/docker-enterprise
• Book cover generated with the dev.to book generator - https://dev.to/rly
• Double Facepalm meme created at imgflip.com
• GitLab - https://about.gitlab.com
• Kubernetes - https://kubernetes.io
• Calico - https://www.projectcalico.org
• Metallb - https://metallb.universe.tf

Page 2: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Once Upon a Time

Randall Smith Docker at Adams State University

Great stories begin at Adams State Our Docker story started a few yearsago

In the Beginning

I Use Docker to package Perl web applicationsI Two stand-alone serversI No registryI Migrating services between servers was problematic

Randall Smith Docker at Adams State University

I had a couple of Perl applications that I had to upgrade and migrate to newhosts These were our help desk and account management systems Likemost Perl apps they required a large number of CPAN libraries WhenI learned about Docker I decided that I wanted to use it wrap thoseapplications into nice neat packages to make migration easier BuildingDocker images also made testing the apps easier because I could use thesame image in testing and productionI started with two Ubuntu servers that I manually managed the containerson The containers were started when the servers booted with upstartscriptsDocker has the concept of a registry to store images that can be usedanywhere I didnrsquot have one and I didnrsquot want to uploaded the publicly toDocker Hub This means that I had to rebuild the applications on eachserver That sometimes lead to slightly different versions of libraries whichbroke the applicationsThe lack of a registry made it difficult to migrate services to the otherhost Plus managing the upstart scripts was a fragile process and brokefrequently

We Needed an Orchestrator

Orchestration is needed to fully realize the power of Docker

Randall Smith Docker at Adams State University

Running standalone containers without orchestration can work on a smallscale To really realize the power of Docker an orchestration platform isneeded

What is Orchestration

Orchestration is the process of managing and automatingcontainers

Randall Smith Docker at Adams State University

Orchestration is the process of managing and automating containersThe number of containers can scale quickly Without good tooling thenumber of containers can quickly become unmanageable

What an Orchestrator Does

I Starts stops servicesI Scales up down replicasI Connects containers to storageI Manages container replacements for updates

Randall Smith Docker at Adams State University

An orchestrator is in charge of ensuring that services are running Ser-vices are spread across the cluster to attempt to balance the load Theorchestrator can also scale up and down the number of replicas of yourserviceThe orchestrator also manages the process of connecting running contain-ers to storage This lets you abstract out your storage from the containermaking it easier to switch clouds if desiredFinally the orchestrator schedules replacing containers if they get updatedFor example you can tell your orchestrator that a service should be runninga new image and it will stop the current image and start the new oneautomatically If your service can run multiple replicas this update can betransparent to your users

Orchestration Platforms

I Docker SwarmI KubernetesI MesosphereI FleetI Cattle

Randall Smith Docker at Adams State University

I did a survey of the available orchestration tools that existed at the timeThe big ones were Docker Swarm Kubernetes and Mesosphere I alsolooked at Fleet and Rancher CattleFor full details

My Published Works

Randall Smith Docker at Adams State University

see my published works Docker Orchestration in particular

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 3: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

In the Beginning

I Use Docker to package Perl web applicationsI Two stand-alone serversI No registryI Migrating services between servers was problematic

Randall Smith Docker at Adams State University

I had a couple of Perl applications that I had to upgrade and migrate to newhosts These were our help desk and account management systems Likemost Perl apps they required a large number of CPAN libraries WhenI learned about Docker I decided that I wanted to use it wrap thoseapplications into nice neat packages to make migration easier BuildingDocker images also made testing the apps easier because I could use thesame image in testing and productionI started with two Ubuntu servers that I manually managed the containerson The containers were started when the servers booted with upstartscriptsDocker has the concept of a registry to store images that can be usedanywhere I didnrsquot have one and I didnrsquot want to uploaded the publicly toDocker Hub This means that I had to rebuild the applications on eachserver That sometimes lead to slightly different versions of libraries whichbroke the applicationsThe lack of a registry made it difficult to migrate services to the otherhost Plus managing the upstart scripts was a fragile process and brokefrequently

We Needed an Orchestrator

Orchestration is needed to fully realize the power of Docker

Randall Smith Docker at Adams State University

Running standalone containers without orchestration can work on a smallscale To really realize the power of Docker an orchestration platform isneeded

What is Orchestration

Orchestration is the process of managing and automatingcontainers

Randall Smith Docker at Adams State University

Orchestration is the process of managing and automating containersThe number of containers can scale quickly Without good tooling thenumber of containers can quickly become unmanageable

What an Orchestrator Does

I Starts stops servicesI Scales up down replicasI Connects containers to storageI Manages container replacements for updates

Randall Smith Docker at Adams State University

An orchestrator is in charge of ensuring that services are running Ser-vices are spread across the cluster to attempt to balance the load Theorchestrator can also scale up and down the number of replicas of yourserviceThe orchestrator also manages the process of connecting running contain-ers to storage This lets you abstract out your storage from the containermaking it easier to switch clouds if desiredFinally the orchestrator schedules replacing containers if they get updatedFor example you can tell your orchestrator that a service should be runninga new image and it will stop the current image and start the new oneautomatically If your service can run multiple replicas this update can betransparent to your users

Orchestration Platforms

I Docker SwarmI KubernetesI MesosphereI FleetI Cattle

Randall Smith Docker at Adams State University

I did a survey of the available orchestration tools that existed at the timeThe big ones were Docker Swarm Kubernetes and Mesosphere I alsolooked at Fleet and Rancher CattleFor full details

My Published Works

Randall Smith Docker at Adams State University

see my published works Docker Orchestration in particular

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other priorities. Then we had a project come from our support services team to set up an inventory management system called Snipe IT. The admin on our team who was running the project wanted to use the official Snipe IT Docker container. The problem is that container storage is not persistent. I couldn't handle it on my existing pair of servers because I didn't have a way to deal with the storage needs. To solve this, I quickly stood up a small cluster running Docker Swarm. This is exactly the type of problem that orchestration tools help solve.

Why Swarm?

• Built into Docker
• Encrypted overlay networks
• Storage driver for Ceph RBD


Swarm is built into Docker. It's almost trivial to set up. Service and container management is fairly simple, and it still offered powerful features such as zero-downtime updates. Swarm also provides a native encrypted overlay network to connect containers running on separate hosts. This rocks. Communication between every container in the cluster is encrypted transparently at the network layer. That allowed us to encrypt traffic for services such as MySQL, for which enabling native encryption can be problematic. Most importantly, I was able to configure the Rexray storage driver for Docker to mount Ceph RBDs for persistent storage. In two days, I was able to set up a fully orchestrated Docker cluster. Everything was coming up roses.
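For reference, creating such an encrypted overlay network is a one-liner; the network name here is illustrative:

# all container traffic crossing hosts on this network is encrypted
docker network create --driver overlay --opt encrypted backend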

The Real Win: GitLab CI


The real win for us came in the form of GitLab CI. GitLab became integral to our workflow. We've been using GitLab for many years as a git server. It was just as I was starting to look into orchestration platforms that GitLab added a Docker registry. Since we didn't have one yet, this was a huge win. Shortly thereafter, I started looking into GitLab's continuous integration features.

Automated Container Builds


It started early in the process, before I stood up the Swarm. I was pushing my image configurations into git in GitLab. I started using GitLab CI to automatically build new Docker images. When a commit is pushed to a repo with CI enabled, a new build is triggered. Once the build is complete, it's pushed into the Docker registry and is ready to use anywhere. Where before I was building new images on my desktop or on the servers themselves, now I could let GitLab do it for me. What makes this especially cool is that we were able to make this available to our PR department to build new images for the main website. The new adams.edu runs on Wordpress running in Docker. Our web developer can build and test new images automatically. I also like this process because it makes it easier to stand up services in a test environment. We can test the image before it goes into production. Once our testing is done, the image that is being deployed into production is the exact image that passed our tests. This eliminates nearly all of the anxiety that comes with deploying new changes.
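Our exact pipeline isn't shown in the slides, but a minimal .gitlab-ci.yml build job along these lines, using GitLab's predefined registry variables, is enough to get automated image builds:

build:
  stage: build
  image: docker:stable
  services:
    - docker:dind
  script:
    # log in to the GitLab registry with CI-provided credentials
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # tag with the commit SHA so every build is individually addressable
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"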

Automated Deployment of Services


Even better, we added steps to the CI process which allow him to deploy his updated Wordpress images to the Swarm without root access or needing to talk to a member of our ops team. By choice, the deployment is triggered manually, especially to production. This helps to ensure that we are being deliberate in our processes. GitLab keeps an audit trail of every deployment, so we can see every deployment that happens and who triggered it. It also makes it easy to roll back to previous images in the event of serious problems. Most of our other Docker services have been migrated to this process as well. This allows us to fully audit any deployments of services. The consistent process also makes it easier for anyone on the team to make changes if needed. Does it take longer to make small changes? Yes. Yes, it does. However, the deliberate process ensures that we are consistent. It also provides a record of what changed (thanks to git). That way, if we start getting calls on a service, we can see if there were any recent changes, what they were, and when they were deployed. We can also roll back updates in most cases if there are problems. This is a very DevOps approach. We're treating servers and services as code.
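A deploy job of this shape gives you the manual trigger and the per-environment audit trail. This is a sketch, not our literal configuration; it assumes the runner can reach a Swarm manager:

deploy_production:
  stage: deploy
  environment: production   # GitLab records every deployment to this environment
  when: manual               # a human pulls the trigger, deliberately
  script:
    - docker stack deploy -c docker-compose.yml www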

Building Images

FROM wordpress:4.9.8-apache
LABEL maintainer="Mike Henderson <mhenderson@adams.edu>"

RUN apt-get update && apt-get install -y curl zip unzip git libldap2-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu/ \
    && docker-php-ext-install ldap \
    && apt-get purge -y --auto-remove libldap2-dev \
    && php -r "readfile('http://getcomposer.org/installer');" \
       | php -- --install-dir=/usr/bin --filename=composer

# git clone of themes and plugins removed

# Pull in composer file and run
COPY composer.json .
RUN composer install --verbose --profile --prefer-dist --no-autoloader

# Make NFS mount point, set ownership, symlink into wordpress
RUN mkdir uploads
RUN chown www-data:www-data uploads
RUN ln -s uploads /usr/src/wordpress/wp-content/uploads


Building an image is the first step to running a service in Docker. Using off-the-shelf images can be a great way to get started, but eventually you'll need to roll your own image. This is done via a Dockerfile. The easiest place to start is from an existing image. The slide shows a snippet from the Dockerfile that we're using to build the image for what will be our main web site. In this case, we're expanding off the official Wordpress image to build exactly what we need for adams.edu. Our web guru, Mike Henderson, built and maintains the Dockerfile. Our CI process automates the build process when he pushes a change into GitLab and allows him to deploy it to testing or production. The build process makes it easier for others to review and audit an image. Unlike a server, everything that goes into a service is in the image. You don't have extra things hanging around that no one knows about because the person doing the install forgot about it.

Deploying Services

docker stack deploy -c docker-compose.yml www


So let's dig into how we deploy a service. This is done in Swarm with a docker-compose.yml file. In it, you specify all of the services, volumes, and networks that your application needs. All of this happens at run time. This is the compose file that we're using to test our Wordpress deployment for our new website.
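The compose file itself isn't captured in this transcript; as a stand-in, a minimal sketch of a stack definition of the same shape might look like this (the registry path, tags, and names are hypothetical):

version: "3.4"
services:
  www:
    image: registry.example.edu/web/www:1.0.3   # the version tag is what makes rollback possible
    ports:
      - "8080:80"
    volumes:
      - uploads:/usr/src/wordpress/wp-content/uploads
    networks:
      - frontend
  db:
    image: mariadb:10.3   # a database dedicated to this one service
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - frontend
volumes:
  uploads:
  dbdata:
networks:
  frontend:
    driver: overlay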

• Base images are usually very minimal.
• Version tags allow rollback: go back to the previous version.
• This DB approach means that every service gets its own dedicated DB.

[ back to slide ] When we're ready to deploy, we run docker stack deploy, and Swarm tries to make reality match what we've defined in the compose file. The combination of Dockerfiles and the compose file define the entire configuration for a service. This fills the same role as server configuration management.

Zero-downtime Upgrades

• Requires the service to run multiple replicas
• Running containers are replaced one at a time until they are all replaced
• The load balancer serves requests from all running services


One of the great features of Swarm and Kubernetes is the ability to perform zero-downtime upgrades. This feature is based firstly on running multiple replicas of the image. Each running container is updated one at a time, or in a configurable number of groups. The built-in load balancer will serve requests from each running container. As containers are shut down, they are removed from the load balancer. The new containers are added and will start serving requests. Eventually, every container will be replaced with ones running the new image. The service will remain up the entire time. During the update process, some requests may go to old containers while others go to the new ones. As long as everything is backwards compatible, this works well. Even for services that cannot run with multiple replicas, such as databases, they generally restart so quickly that downtime for upgrades is reduced to seconds.
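In a Swarm compose file, that rolling-update behavior lives in the deploy section. A sketch, using version 3.4 syntax and hypothetical names:

services:
  www:
    image: registry.example.edu/web/www:1.0.4
    deploy:
      replicas: 3
      update_config:
        parallelism: 1      # replace one container at a time
        delay: 10s          # pause between replacements
        order: start-first  # start the new container before stopping the old one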

Who Needs a Server?

docker run -v /path/to/results:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host3 report-name


One of the great things about running services in containers is that we started to kill off servers. In some cases, it was what you would expect: the server that was running a service isn't needed anymore. In one case in particular, the change was even more drastic. We use OpenVAS to run regular security scans against our servers. The OpenVAS server included the security scanner and a web interface for managing the scans and providing access to the reports. Cameron started looking into how to move OpenVAS into a container and discovered a couple of pre-built images. One of them allows you to pass in a list of hosts to scan on startup. The container will go through the list, scan every host, and write the report to a volume. Even better, the container downloads all of the latest checks when it starts, so it is always up-to-date. We were able to replace an entire server with a one-liner. We can schedule that to run as a cron job or run it on demand anytime we need it.
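Scheduling it is just a crontab entry; the path and host names below are hypothetical:

# run the scan every Sunday at 02:00 and drop the report on the volume
0 2 * * 0 docker run --rm -v /srv/openvas/reports:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host3 weekly-scan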

Who Needs a Cluster?

• Use single-node Docker Swarm for standalone services
• All of the CI tooling is still available
• Take advantage of rollback and the CI audit trail


We also found that in some cases it still might make sense to run a service on a standalone server. However, you can still take advantage of the managed deployment and rollback that is available with Docker Swarm on a single host. All of the CI build and deployment options are available as they would be if the service were running on a full cluster. Instead, a specific gitlab-runner is used on the standalone host. Deployments are then configured to use that runner. There are two big wins when taking this approach on standalone servers. First, you get the rapid rollback in the event of failure. Second, you get the audit trail and accountability that comes from the CI environment.
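Turning a standalone host into a one-node swarm takes a single command, after which the stack workflow is identical; the stack name here is illustrative:

docker swarm init                                   # the host becomes manager and only node
docker stack deploy -c docker-compose.yml helpdesk  # same deploy command as on the full cluster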

There Were Problems

• Swarm scheduler failed
• Rexray driver doesn't consistently unmap RBDs
• Overlay network randomly stopped working
• IPAM assigns too many IP addresses


As we started to use Swarm more, we started to see problems appear. First of all, we ran into issues with the Swarm scheduler. There was a bug that triggered once in a while that prevented the Swarm from starting new services. Running containers were fine, but we couldn't start new ones. Eventually, this was solved in a later Docker release. The other problem we have is that the Rexray driver doesn't always clean up after itself when a container is stopped. It can leave RBDs mounted or mapped on a node, preventing the service from starting elsewhere in the Swarm. This can cause a service to take longer to start or prevent it from starting entirely. We had two major network issues. First of all, the overlay network would sometimes stop talking. Containers on the same nodes could continue to connect, but they couldn't talk to others in the Swarm. There turned out to be a conflict between the kernel network timeout settings and the IPVS settings, which led to the kernel dropping the overlay connections. The second problem is one we're still working around. There's a bug in the IPAM module, which assigns the virtual IPs. If services are frequently updated, it can lead to multiple IPs being assigned to the service and the old IPs not being cleaned up. This can lead to strange connectivity problems.

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept.


In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and the workflow for deploying services. Don't be fooled by the "proof of concept" label. Swarm is a solid product. So solid that we are using it to run production services.

Next Steps


The next step for us is to move into our new Kubernetes cluster. I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture


The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm. We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI). The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the containers are running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems. One of the best things about the native Docker overlay network is that all traffic between all of the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host links between all of the hosts with IPSec and Let's Encrypt. (Mostly automatically; I still have to manually request the certificate.) Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.

Setup

• rke, kubespray, and kubeadm make installs easier
• Shared storage via Ceph and NFS
• Networking is the hard part


Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier. One of the things I like about rke is that it has an easy option to tear down a cluster. This is great for testing. We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the volume at the same time. The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster use the load balancer from the cloud provider to provide access. Obviously, that's not the case when running it locally. All of the subnets used by Kubernetes are designed to be internal to the cluster. That makes it hard to make services public. Metallb has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of the IPs. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
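For reference, a Metallb Layer 2 pool is configured with a small ConfigMap; this is a sketch, and the address range is hypothetical:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.0.2.240-192.0.2.250   # public-facing IPs Metallb may hand out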

Migration

• Convert from Docker Compose to Pods and Services
• Add CI jobs to deploy to the k8s cluster
• Change DNS to point to the k8s service
• No image rebuild needed


One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker-compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can be deployed to both clouds and tests run as needed. For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image. No need to rebuild or re-install.
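The conversion itself is fairly mechanical; a compose service of the kind sketched earlier maps onto a Deployment plus a Service roughly like this (names and image path hypothetical):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 3
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: www
        image: registry.example.edu/web/www:1.0.4  # the exact image already running in Swarm
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  selector:
    app: www
  ports:
  - port: 80
    targetPort: 80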

How Can You Get Started?

• Start with something simple
• Start with services that can tolerate downtime
• You may need to change your backup strategy
• CI, CI, CI


Don't do what I did. Start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started. This will happen a lot with your first services. Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve. Don't let it bite you (or discourage you). When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there. CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy


You can build from scratch, but there are a number of pre-built commercial services. Docker Enterprise is a simple install on-prem and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now. You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers or even use multiple providers at the same time.

Logging

• Containers generally log to standard out
• Logs can be shipped to a log aggregator such as Elasticsearch
• View logs and create dashboards in Kibana


Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write their logs to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator. Generally, this is done by shipping those logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.
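Shipping is usually a logging-driver setting rather than anything in the image; for example, with a GELF-speaking collector sitting in front of Elasticsearch (the endpoint and image name here are hypothetical):

# send this container's stdout to the log collector instead of local json files
docker run --log-driver gelf --log-opt gelf-address=udp://logs.example.edu:12201 myimage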

Monitoring

• Collect metrics with Prometheus
• View the metrics with Grafana
• Use InfluxDB for long-term storage


Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics. Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
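Shipping to InfluxDB is a remote_write entry in prometheus.yml; a sketch, assuming a hypothetical InfluxDB host and InfluxDB 1.x's Prometheus write endpoint:

# prometheus.yml (fragment)
remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"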

Security Scans

• No built-in way to see if an image has out-of-date packages
• clair and Docker Enterprise can scan images for vulnerable packages
• Doesn't work as well for 3rd-party packages


One of the things that you lose with containers is a built-in way to ensure that software in an image is not running vulnerable packages. You can use yum or apt on a server to see if new updates need to be applied, but containers don't have that. Tools such as clair can be used to scan images at build time as part of the CI process or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source. One way to keep images up-to-date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
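A scheduled rebuild job can be a sketch like the following; the scan step depends entirely on your clair tooling, so it is shown only as a placeholder script:

rebuild:
  only:
    - schedules   # triggered by a GitLab scheduled pipeline, not by pushes
  script:
    - ./scan-image.sh "$CI_REGISTRY_IMAGE:latest"            # placeholder for your clair scan
    - docker build --pull -t "$CI_REGISTRY_IMAGE:latest" .   # --pull refreshes the base image
    - docker push "$CI_REGISTRY_IMAGE:latest"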

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock


Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm. To request access, go to the BANDOCK Google Group and click the link that says "Apply for membership".

Questions

Feel free to ask questions after the conference.

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker


Thank You


References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland
• ELK Stack image - https://www.elastic.co/elk-stack
• Docker Enterprise - https://www.docker.com/products/docker-enterprise
• Book cover generated with the dev.to book generator - https://dev.to/rly
• Double Facepalm meme created at imgflip.com
• GitLab - https://about.gitlab.com
• Kubernetes - https://kubernetes.io
• Calico - https://www.projectcalico.org
• Metallb - https://metallb.universe.tf

Page 4: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

We Needed an Orchestrator

Orchestration is needed to fully realize the power of Docker

Randall Smith Docker at Adams State University

Running standalone containers without orchestration can work on a smallscale To really realize the power of Docker an orchestration platform isneeded

What is Orchestration

Orchestration is the process of managing and automatingcontainers

Randall Smith Docker at Adams State University

Orchestration is the process of managing and automating containersThe number of containers can scale quickly Without good tooling thenumber of containers can quickly become unmanageable

What an Orchestrator Does

I Starts stops servicesI Scales up down replicasI Connects containers to storageI Manages container replacements for updates

Randall Smith Docker at Adams State University

An orchestrator is in charge of ensuring that services are running Ser-vices are spread across the cluster to attempt to balance the load Theorchestrator can also scale up and down the number of replicas of yourserviceThe orchestrator also manages the process of connecting running contain-ers to storage This lets you abstract out your storage from the containermaking it easier to switch clouds if desiredFinally the orchestrator schedules replacing containers if they get updatedFor example you can tell your orchestrator that a service should be runninga new image and it will stop the current image and start the new oneautomatically If your service can run multiple replicas this update can betransparent to your users

Orchestration Platforms

I Docker SwarmI KubernetesI MesosphereI FleetI Cattle

Randall Smith Docker at Adams State University

I did a survey of the available orchestration tools that existed at the timeThe big ones were Docker Swarm Kubernetes and Mesosphere I alsolooked at Fleet and Rancher CattleFor full details

My Published Works

Randall Smith Docker at Adams State University

see my published works Docker Orchestration in particular

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 5: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

What is Orchestration

Orchestration is the process of managing and automatingcontainers

Randall Smith Docker at Adams State University

Orchestration is the process of managing and automating containersThe number of containers can scale quickly Without good tooling thenumber of containers can quickly become unmanageable

What an Orchestrator Does

I Starts stops servicesI Scales up down replicasI Connects containers to storageI Manages container replacements for updates

Randall Smith Docker at Adams State University

An orchestrator is in charge of ensuring that services are running Ser-vices are spread across the cluster to attempt to balance the load Theorchestrator can also scale up and down the number of replicas of yourserviceThe orchestrator also manages the process of connecting running contain-ers to storage This lets you abstract out your storage from the containermaking it easier to switch clouds if desiredFinally the orchestrator schedules replacing containers if they get updatedFor example you can tell your orchestrator that a service should be runninga new image and it will stop the current image and start the new oneautomatically If your service can run multiple replicas this update can betransparent to your users

Orchestration Platforms

I Docker SwarmI KubernetesI MesosphereI FleetI Cattle

Randall Smith Docker at Adams State University

I did a survey of the available orchestration tools that existed at the timeThe big ones were Docker Swarm Kubernetes and Mesosphere I alsolooked at Fleet and Rancher CattleFor full details

My Published Works

Randall Smith Docker at Adams State University

see my published works Docker Orchestration in particular

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster?

• Use single-node Docker Swarm for standalone services
• All of the CI tooling is still available
• Take advantage of rollback and the CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run a service on a standalone server. However, you can still take advantage of the managed deployment and rollback that is available with Docker Swarm on a single host.

All of the CI build and deployment options are available just as they would be if the service were running on a full cluster. Instead, a specific gitlab-runner is used on the standalone host, and deployments are then configured to use that runner.

There are two big wins when taking this approach on standalone servers. First, you get rapid rollback in the event of failure. Second, you get the audit trail and accountability that comes from the CI environment.
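
Standing one of these up is a short exercise, assuming Docker and gitlab-runner are already installed; the URL, token, and tag below are placeholders:

# Turn this single host into a one-node Swarm (it is its own manager)
docker swarm init

# Register a runner dedicated to this host; CI jobs tagged "host1" deploy here
gitlab-runner register --non-interactive \
  --url https://gitlab.example.edu/ \
  --registration-token TOKEN \
  --executor shell \
  --tag-list host1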

There Were Problems

• Swarm scheduler failed
• Rexray driver doesn't consistently unmap RBDs
• Overlay network randomly stopped working
• IPAM assigns too many IP addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more, we started to see problems appear.

First of all, we ran into issues with the Swarm scheduler. There was a bug that triggered once in a while that prevented the Swarm from starting new services. Running containers were fine, but we couldn't start new ones. Eventually this was solved in a later Docker release.

The other problem we have is that the Rexray driver doesn't always clean up after itself when a container is stopped. It can leave RBDs mounted or mapped on a node, preventing the container from starting elsewhere in the Swarm. This can cause a service to take longer to start or prevent it from starting entirely.

We had two major network issues. First of all, the overlay network would sometimes stop talking. Containers on the same node could continue to connect, but they couldn't talk to others in the Swarm. There turned out to be a conflict between the kernel network timeout settings and the IPVS settings, which led to the kernel dropping the overlay connections.

The second problem is one we're still working around. There's a bug in the IPAM module which assigns the virtual IPs. If services are frequently updated, it can lead to multiple IPs being assigned to the service and the old IPs not being cleaned up. This can lead to strange connectivity problems.
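
When a volume is stuck this way, the fix is manual cleanup on the node still holding the device. A sketch using standard Ceph tooling (the device name is an example):

rbd showmapped          # list RBD images still mapped on this node
umount /dev/rbd0        # unmount the filesystem if it is still mounted
rbd unmap /dev/rbd0     # release the mapping so the volume can attach elsewhere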

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept

Randall Smith Docker at Adams State University

In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and the workflow for deploying services.

Don't be fooled by the "proof of concept" label. Swarm is a solid product. So solid that we are using it to run production services.

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes cluster.

I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm.

We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI).

The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the containers are running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems.

One of the best things about the native Docker overlay network is that all traffic between all of the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host links between all of the hosts with IPSec and Let's Encrypt. (Mostly automatically; I still have to manually request the certificate.)

Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.

Setup

• rke, kubespray, and kubeadm make installs easier
• Shared storage via Ceph and NFS
• Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier. One of the things I like about rke is that it has an easy option to tear down a cluster, which is great for testing.

We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the same volume at the same time.

The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster will use the load balancer from the cloud provider to provide access. Obviously that's not the case when running it locally.

All of the subnets used by Kubernetes are designed to be internal to the cluster. That makes it hard to make services public. MetalLB has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of the IPs. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
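
For reference, Layer 2 mode is driven by a small ConfigMap; a sketch, with a placeholder address range:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: public
      protocol: layer2
      addresses:
      - 192.0.2.240-192.0.2.250   # IPs MetalLB may announce via ARP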

Migration

• Convert from Docker Compose to Pods and Services
• Add CI jobs to deploy to the k8s cluster
• Change DNS to point to the k8s service
• No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker-compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can be deployed to both clusters and tested as needed.

For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image. No need to rebuild or re-install.
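
To give a feel for the conversion, the web half of the earlier compose sketch might land in Kubernetes as a Deployment and Service along these lines (names and counts are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 2
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: wordpress
        image: registry.example.edu/web/wordpress:1.2.0   # the exact image from Swarm, no rebuild
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  selector:
    app: www
  ports:
  - port: 80
    targetPort: 80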

How Can You Get Started?

• Start with something simple
• Start with services that can tolerate downtime
• You may need to change your backup strategy
• CI, CI, CI

Randall Smith Docker at Adams State University

Don't do what I did: start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started, which will happen a lot with your first services.

Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve. Don't let it bite you (or discourage you).

When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there.

CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch, but there are a number of pre-built commercial options.

Docker Enterprise is a simple install on-prem and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now.

You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers, or even to use multiple providers at the same time.

Logging

• Containers generally log to standard out

• Logs can be shipped to a log aggregator such as Elasticsearch

• View logs and create dashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator.

Generally this is done by shipping the logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.
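
Docker's logging drivers make the shipping side configuration rather than code. A sketch using the gelf driver to forward a service's stdout toward an ELK stack; the endpoint is a placeholder:

services:
  www:
    image: registry.example.edu/web/wordpress:1.2.0
    logging:
      driver: gelf                                    # ship stdout as GELF messages
      options:
        gelf-address: "udp://logs.example.edu:12201"  # e.g. a Logstash GELF input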

Monitoring

• Collect metrics with Prometheus
• View the metrics with Grafana
• Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics.

Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
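
The hand-off to InfluxDB is a couple of lines of Prometheus configuration; InfluxDB 1.x exposes a Prometheus-compatible endpoint (the host and database here are placeholders). In prometheus.yml:

remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/read?db=prometheus"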

Security Scans

• No built-in way to see if an image has out-of-date packages
• clair and Docker Enterprise can scan images for vulnerable packages
• Doesn't work as well for 3rd-party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensure that the software in an image is not running vulnerable packages. On a server you can use yum or apt to see if new updates need to be applied, but containers don't have that.

Tools such as clair can be used to scan images at build time as part of the CI process, or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source.

One way to keep images up-to-date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
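
A sketch of such a scheduled job, using the community clair-scanner tool against a running Clair service; the addresses and image tag are assumptions:

scan:
  stage: test
  only:
    - schedules                      # run from a GitLab CI schedule, not on every push
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest
    # clair-scanner asks the Clair server which packages in the image are vulnerable
    - ./clair-scanner --clair=http://clair.example.edu:6060 --ip=$(hostname -i) $CI_REGISTRY_IMAGE:latest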

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm.

To request access, go to the BANDOCK Google Group and click the link that says "Apply for membership".

Questions

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland

• ELK Stack image - https://www.elastic.co/elk-stack

• Docker Enterprise - https://www.docker.com/products/docker-enterprise

• Book cover generated with the dev.to book generator - https://dev.to/rly

• Double Facepalm meme created at imgflip.com

• GitLab - https://about.gitlab.com

• Kubernetes - https://kubernetes.io

• Calico - https://www.projectcalico.org

• MetalLB - https://metallb.universe.tf

Page 6: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

What an Orchestrator Does

I Starts stops servicesI Scales up down replicasI Connects containers to storageI Manages container replacements for updates

Randall Smith Docker at Adams State University

An orchestrator is in charge of ensuring that services are running Ser-vices are spread across the cluster to attempt to balance the load Theorchestrator can also scale up and down the number of replicas of yourserviceThe orchestrator also manages the process of connecting running contain-ers to storage This lets you abstract out your storage from the containermaking it easier to switch clouds if desiredFinally the orchestrator schedules replacing containers if they get updatedFor example you can tell your orchestrator that a service should be runninga new image and it will stop the current image and start the new oneautomatically If your service can run multiple replicas this update can betransparent to your users

Orchestration Platforms

I Docker SwarmI KubernetesI MesosphereI FleetI Cattle

Randall Smith Docker at Adams State University

I did a survey of the available orchestration tools that existed at the timeThe big ones were Docker Swarm Kubernetes and Mesosphere I alsolooked at Fleet and Rancher CattleFor full details

My Published Works

Randall Smith Docker at Adams State University

see my published works Docker Orchestration in particular

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 7: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Orchestration Platforms

I Docker SwarmI KubernetesI MesosphereI FleetI Cattle

Randall Smith Docker at Adams State University

I did a survey of the available orchestration tools that existed at the timeThe big ones were Docker Swarm Kubernetes and Mesosphere I alsolooked at Fleet and Rancher CattleFor full details

My Published Works

Randall Smith Docker at Adams State University

see my published works Docker Orchestration in particular

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

My Published Works

Randall Smith Docker at Adams State University

See my published works, Docker Orchestration in particular.

The Findings: Kubernetes

• Ceph RBD support
• Easier migration of legacy applications
• Each service gets its own IP address
• Better feature set

Randall Smith Docker at Adams State University

In the end, I wrote up a peer review recommending that we use Kubernetes. Kubernetes was the new project on the block, but it was already gaining a large following in the community.
Ceph is a distributed storage system that we use to provide block devices to our VM stack. I wanted to be able to use it with Docker as well. Unfortunately, the lack of Ceph support in most of the products at the time limited the list to Kubernetes.
Kubernetes also provided the easiest migration path for legacy applications into a container environment. The pod structure allows multiple containers to be combined and to communicate with each other as if they were on the same server.
The Kubernetes service model makes it much easier to deploy multiple applications which use the same port. Since each service gets its own IP address, there are no potential port conflicts as there are with the other tools.
While RBD support has since been added to Swarm, Kubernetes also has a number of features that are unique. The most prominent are scheduled jobs and easy tools to run tasks in running containers.

Things Didn't Go as Planned

A new install of Snipe IT required persistent storage, which I could not support on my existing Docker servers.

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other priorities. Then we had a project come from our support services team to set up an inventory management system called Snipe IT. The admin on our team who was running with the project wanted to use the official Snipe IT Docker container.
The problem is that container storage is not persistent. I couldn't handle it on my existing pair of servers because I didn't have a way to deal with the storage needs.
To solve this, I quickly stood up a small cluster running Docker Swarm. This is exactly the type of problem that orchestration tools help solve.

Why Swarm?

• Built into Docker
• Encrypted overlay networks
• Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker. It's almost trivial to set up. Service and container management is fairly simple, and it still offered powerful features such as zero-downtime updates.
Swarm also provides a native encrypted overlay network to connect containers running on separate hosts. This rocks. Communication between every container in the cluster is encrypted transparently at the network layer. That allowed us to encrypt traffic for services such as MySQL, for which enabling native encryption can be problematic.
Most importantly, I was able to configure the Rexray storage driver for Docker to mount Ceph RBDs for persistent storage.
In two days, I was able to set up a fully orchestrated Docker cluster. Everything was coming up roses.
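For context, standing up a Swarm with an encrypted overlay really is only a few commands. A minimal sketch (addresses and the network name are illustrative, not our actual values):

# On the first manager node
docker swarm init --advertise-addr 192.168.10.11

# On each additional node, using the token printed by 'swarm init'
docker swarm join --token <worker-token> 192.168.10.11:2377

# Create an encrypted overlay network for services to share
docker network create --driver overlay --opt encrypted appnet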

The Real Win: GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI. GitLab became integral to our workflow.
We've been using GitLab for many years as a git server. It was just as I was starting to look into orchestration platforms that GitLab added a Docker registry. Since we didn't have one yet, this was a huge win.
Shortly thereafter, I started looking into GitLab's continuous integration features.

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process, before I stood up the Swarm. I was pushing my image configurations into git in GitLab, and I started using GitLab CI to automatically build new Docker images.
When a commit is pushed to a repo with CI enabled, a new build is triggered. Once the build is complete, it's pushed into the Docker registry and is ready to use anywhere. Where before I was building new images on my desktop or on the servers themselves, now I could let GitLab do it for me.
What makes this especially cool is that we were able to make this available to our PR department to build new images for the main website. The new adams.edu runs on WordPress running in Docker. Our web developer can build and test new images automatically.
I also like this process because it makes it easier to stand up services in a test environment. We can test the image before it goes into production. Once our testing is done, the image that is being deployed into production is the exact image that passed our tests. This eliminates nearly all of the anxiety that comes with deploying new changes.
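A minimal sketch of what such a build job can look like in .gitlab-ci.yml (the job name is illustrative; the variables are GitLab's predefined registry variables):

build_image:
  stage: build
  script:
    # Authenticate against the GitLab-provided registry
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Build and push an image tagged with the commit SHA
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"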

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better, we added steps to the CI process which allow him to deploy his updated WordPress images to the Swarm without root access or needing to talk to a member of our ops team. By choice, the deployment is triggered manually, especially to production. This helps to ensure that we are being deliberate in our processes.
GitLab keeps an audit trail of every deployment, so we can see every deployment that happens and who triggered it. It also makes it easy to roll back to previous images in the event of serious problems.
Most of our other Docker services have been migrated to this process as well. This allows us to fully audit any deployments of services. The consistent process also makes it easier for anyone on the team to make changes if needed.
Does it take longer to make small changes? Yes. Yes, it does. However, the deliberate process ensures that we are consistent. It also provides a record of what changed (thanks to git). That way, if we start getting calls on a service, we can see if there were any recent changes, what they were, and when they were deployed. We can also roll back updates in most cases if there are problems.
This is a very DevOps approach. We're treating servers and services as code.
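A manually gated deploy job is just one more stanza in .gitlab-ci.yml. A sketch, assuming the stack name from the deploy slide later on; the rest is illustrative:

deploy_production:
  stage: deploy
  environment: production
  when: manual          # a human clicks the deploy button in GitLab
  script:
    - docker stack deploy -c docker-compose.yml www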

Building Images

FROM wordpress:4.9.8-apache
LABEL maintainer="Mike Henderson mhenderson@adams.edu"

RUN apt-get update && apt-get install -y curl zip unzip git libldap2-dev \
    && rm -rf /var/lib/apt/lists/* \
    && docker-php-ext-configure ldap --with-libdir=lib/x86_64-linux-gnu \
    && docker-php-ext-install ldap \
    && apt-get purge -y --auto-remove libldap2-dev \
    && php -r "readfile('http://getcomposer.org/installer');" \
       | php -- --install-dir=/usr/bin --filename=composer

# git clone of themes and plugins removed

# Pull in composer file and run
COPY composer.json .
RUN composer install --verbose --profile --prefer-dist --no-autoloader

# Make NFS mount point, set ownership, symlink into wordpress
RUN mkdir /uploads
RUN chown www-data:www-data /uploads
RUN ln -s /uploads /usr/src/wordpress/wp-content/uploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker. Using off-the-shelf images can be a great way to get started, but eventually you'll need to roll your own image. This is done via a Dockerfile.
The easiest place to start is from an existing image. The slide shows a snippet from the Dockerfile that we're using to build the image for what will be our main web site. In this case, we're expanding off the official WordPress image to build exactly what we need for adams.edu.
Our web guru, Mike Henderson, built and maintains the Dockerfile. Our CI process automates the build when he pushes a change into GitLab and allows him to deploy it to testing or production.
The build process makes it easier for others to review and audit an image. Unlike a server, everything that goes into a service is in the image. You don't have extra things hanging around that no one knows about because the person doing the install forgot about it.
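Building and smoke-testing an image like this locally is one command each way (the registry name and tag here are illustrative placeholders):

# Build the image from the Dockerfile in the current directory
docker build -t registry.example.edu/web/wordpress:test .

# Run it locally and poke at it on http://localhost:8080
docker run --rm -p 8080:80 registry.example.edu/web/wordpress:test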

Deploying Services

docker stack deploy -c docker-compose.yml www

Randall Smith Docker at Adams State University

So let's dig into how we deploy a service. This is done in Swarm with a docker-compose.yml file. In it, you specify all of the services, volumes, and networks that your application needs. All of this happens at run time.
This is the compose file that we're using to test our WordPress deployment for our new website.

• base images are usually very minimal
• version tags allow rollback; go back to a previous version
• this DB approach means that every service gets its own dedicated DB

[back to slide] When we're ready to deploy, we run docker stack deploy, and Swarm tries to make reality match what we've defined in the compose file.
The combination of Dockerfiles and the compose file defines the entire configuration for a service. This fills the same role as server configuration management.
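The actual compose file isn't reproduced in these notes, so here is a minimal sketch of the shape such a file takes. The service names, image tags, and password handling are assumptions for illustration, not our production config (use Docker secrets for real credentials):

version: "3.4"
services:
  web:
    image: registry.example.edu/web/wordpress:4.9.8-1
    ports:
      - "8000:80"
    networks:
      - appnet
    deploy:
      replicas: 2
  db:
    image: mariadb:10.3
    environment:
      MYSQL_ROOT_PASSWORD: changeme   # placeholder only
    volumes:
      - dbdata:/var/lib/mysql
    networks:
      - appnet
networks:
  appnet:
    driver: overlay
volumes:
  dbdata: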

Zero-downtime Upgrades

• Requires the service to run multiple replicas
• Running containers are replaced one at a time until they are all replaced
• The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to perform zero-downtime upgrades. This feature is based, first, on running multiple replicas of the image. Each running container is updated one at a time, or in a configurable number of groups.
The built-in load balancer will serve requests from each running container. As containers are shut down, they are removed from the load balancer. The new containers are added and will start serving requests. Eventually every container is replaced with one running the new image. The service will remain up the entire time.
During the update process, some requests may go to old containers while others go to the new ones. As long as everything is backwards compatible, this works well.
Even for services that cannot run with multiple replicas, such as databases, they generally restart so quickly that downtime for upgrades is reduced to seconds.
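The rollout behavior is tunable per service in the compose file. A sketch of a deploy section, extending the compose example above (the values are illustrative; order: start-first needs compose file version 3.4 or later):

    deploy:
      replicas: 4
      update_config:
        parallelism: 1      # replace one container at a time
        delay: 10s          # wait between batches
        order: start-first  # start the new container before stopping the old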

Who Needs a Server?

docker run -v /path/to/results:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host3 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that we started to kill off servers. In some cases, it was what you would expect: the server that was running a service isn't needed anymore. In one case in particular, the change was even more drastic.
We use OpenVAS to run regular security scans against our servers. The OpenVAS server included the security scanning and a web interface for managing the scans and providing access to the reports.
Cameron started looking into how to move OpenVAS into a container and discovered a couple of pre-built images. One of them allows you to pass in a list of hosts to scan on startup. The container will go through the list, scan every host, and write the report to a volume. Even better, the container downloads all of the latest checks when it starts, so it is always up-to-date.
We were able to replace an entire server with a one-liner. We can schedule that to run as a cron job or run it on demand anytime we need it.
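Scheduling it is then just a crontab entry. A sketch reusing the command from the slide; the schedule, host list, and paths are illustrative:

# Run the scan every Saturday at 02:00 and drop the report on the host
0 2 * * 6  docker run --rm -v /srv/scans:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host3 weekly-report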

Who Needs a Cluster?

• Use single-node Docker Swarm for standalone services
• All of the CI tooling is still available
• Take advantage of rollback and the CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run a service on a standalone server. However, you can still take advantage of the managed deployment and rollback that are available with Docker Swarm on a single host.
All of the CI build and deployment options are available as they would be if the service were running on a full cluster. Instead, a specific gitlab-runner is used on the standalone host. Deployments are then configured to use that runner.
There are two big wins when taking this approach on standalone servers. First, you get the rapid rollback in the event of failure. Second, you get the audit trail and accountability that comes from the CI environment.
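Registering a host-specific runner is a one-time step on the standalone box. A sketch; the URL, token, and tag are placeholders:

gitlab-runner register \
  --url https://gitlab.example.edu/ \
  --registration-token <project-token> \
  --executor shell \
  --tag-list standalone-host

Jobs that set "tags: [standalone-host]" in .gitlab-ci.yml then run only on that server.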

There Were Problems

• Swarm scheduler failed
• Rexray driver doesn't consistently unmap RBDs
• Overlay network randomly stopped working
• IPAM assigns too many IP addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more, we started to see problems appear.
First of all, we ran into issues with the Swarm scheduler. There was a bug that triggered once in a while that prevented the Swarm from starting new services. Running containers were fine, but we couldn't start new ones. Eventually this was solved in a later Docker release.
The other problem we have is that the Rexray driver doesn't always clean up after itself when a container is stopped. It can leave RBDs mounted or mapped on a node, preventing the container from starting elsewhere in the Swarm. This can cause a service to take longer to start or prevent it from starting entirely.
We had two major network issues. First, the overlay network would sometimes stop talking. Containers on the same nodes could continue to connect, but they couldn't talk to others in the Swarm. There turned out to be a conflict between the kernel network timeout settings and the IPVS settings, which led to the kernel dropping the overlay connections.
The second problem is one we're still working around. There's a bug in the IPAM module, which assigns the virtual IPs. If services are frequently updated, it can lead to multiple IPs being assigned to the service and the old IPs not being cleaned up. This can lead to strange connectivity problems.

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept.

Randall Smith Docker at Adams State University

In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and the workflow for deploying services.
Don't be fooled by the "proof of concept" label. Swarm is a solid product. So solid, in fact, that we are using it to run production services.

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes cluster.
I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm.
We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI).
The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the containers are running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems.
One of the best things about the native Docker overlay network is that all traffic between all of the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host links between all of the hosts with IPSec and Let's Encrypt. (Mostly automatically; I still have to manually request the certificate.)
Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.

Setup

• rke, kubespray, and kubeadm make installs easier
• Shared storage via Ceph and NFS
• Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier.
One of the things I like about rke is that it has an easy option to tear down a cluster. This is great for testing.
We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the volume at the same time.
The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster use the load balancer from the cloud provider to provide access. Obviously, that's not the case when running it locally.
All of the subnets used by Kubernetes are designed to be internal to the cluster. That makes it hard to make services public. Metallb has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of the IPs. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
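For reference, Metallb's Layer 2 mode was configured at the time with a ConfigMap along these lines (the address range here is made up):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.20.240-192.168.20.250

Services of type LoadBalancer are then handed an IP from this pool.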

Migration

• Convert from Docker Compose to Pods and Services
• Add CI jobs to deploy to the k8s cluster
• Change DNS to point to the k8s service
• No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker-compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can be deployed to both clusters, with tests run as needed.
For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image. No need to rebuild or re-install.
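To give a feel for the conversion, the compose service sketched earlier maps to roughly this pair of Kubernetes objects (again a sketch; names, counts, and the image tag are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 2
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: wordpress
        image: registry.example.edu/web/wordpress:4.9.8-1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  type: LoadBalancer   # Metallb assigns the external IP
  selector:
    app: www
  ports:
  - port: 80
    targetPort: 80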

How Can You Get Started?

• Start with something simple
• Start with services that can tolerate downtime
• You may need to change your backup strategy
• CI, CI, CI

Randall Smith Docker at Adams State University

Don't do what I did. Start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started. This will happen a lot with your first services.
Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve. Don't let it bite you (or discourage you).
When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there.
CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch, but there are a number of pre-built commercial services.
Docker Enterprise is a simple install on-prem and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now.
You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers, or even use multiple providers at the same time.

Logging

• Containers generally log to standard out
• Logs can be shipped to a log aggregator such as Elasticsearch
• View logs and create dashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write their logs to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator.
Generally, this is done by shipping those logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.
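One common way to wire up the shipping, not necessarily our exact pipeline, is Docker's gelf logging driver pointed at a Logstash or Graylog endpoint in /etc/docker/daemon.json (the address is made up):

{
  "log-driver": "gelf",
  "log-opts": {
    "gelf-address": "udp://logs.example.edu:12201"
  }
}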

Monitoring

• Collect metrics with Prometheus
• View the metrics with Grafana
• Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods.
Grafana is frequently used to build dashboards to view the collected metrics.
Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
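The Prometheus-to-InfluxDB handoff is a couple of lines in prometheus.yml. A sketch assuming InfluxDB 1.x's Prometheus endpoints; the hostname and database name are placeholders:

remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/read?db=prometheus"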

Security Scans

• No built-in way to see if an image has out-of-date packages
• clair and Docker Enterprise can scan images for vulnerable packages
• Doesn't work as well for 3rd-party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensure that software in an image is not running vulnerable packages. You can use yum or apt to see if new updates need to be applied, but containers don't have that.
Tools such as clair can be used to scan images at build time as part of the CI process, or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source.
One way to keep images up-to-date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
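A sketch of such a scheduled rebuild job in .gitlab-ci.yml, driven by a pipeline schedule configured in the GitLab UI (the job name and tag are illustrative):

rebuild_latest:
  stage: build
  only:
    - schedules          # run from a pipeline schedule, not on push
  script:
    # --pull and --no-cache force fresh base layers and package updates
    - docker build --pull --no-cache -t "$CI_REGISTRY_IMAGE:latest" .
    - docker push "$CI_REGISTRY_IMAGE:latest"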

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm.
To request access, go to the BANDOCK Google group and click the link that says "Apply for membership."

Questions

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland
• ELK Stack image - https://www.elastic.co/elk-stack
• Docker Enterprise - https://www.docker.com/products/docker-enterprise
• Book cover generated with the dev.to book generator - https://dev.to/rly
• Double Facepalm meme created at imgflip.com
• GitLab - https://about.gitlab.com
• Kubernetes - https://kubernetes.io
• Calico - https://www.projectcalico.org
• Metallb - https://metallb.universe.tf

Page 9: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

The Findings Kubernetes

I Ceph RBD supportI Easier migration of legacy applicationsI Each service gets its own IP addressI Better feature set

Randall Smith Docker at Adams State University

In the end I wrote up peer review recommending that we use KubernetesKubernetes was the new project on the block but it was already gaining alarge following in the communityCeph is a distributed storage system that we use to provide block devicesto our VM stack I wanted to be able to use it with Docker as wellUnfortunately the lack of ceph support in most of the products at thetime limited the list to KubernetesKubernetes also provided the easiest migration path for legacy applicationinto a container environment The pod structure allows for multiple con-tainers to be combined and communicate with each other as if they wereon the same serverThe Kubernetes service model makes deploying multiple applications whichuse the same port much easier Since each service gets its own IP addressthere are no potential port conflicts as there are with the other toolsWhile RBD support has since been added to Swarm Kubernetes also has anumber features that are unique The most prominent are scheduled jobsand easy tools to run tasks in running containers

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 10: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Things Didnrsquot Go as Planned

A new install of Snipe IT requiredpersistent storage which I couldnot support on my existingDocker servers

Randall Smith Docker at Adams State University

I was unable to deploy Kubernetes immediately due to other prioritiesThen we had a project come from our support services team to setupthe an inventory management system called Snipe IT The admin on ourteam that was running with the project wanted to use the official Snipe ITDocker containerThe problem is that container storage is not persistent I couldnrsquot handleit on my existing pair of servers because I didnrsquot have a way to deal withthe storage needsTo solve this I quickly stood up a small cluster running Docker SwarmThis is exactly the type of problem that orchestration tools help solve

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept.


In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and about the workflow for deploying services.

Don't be fooled by the "proof of concept" label: Swarm is a solid product. So solid that we are using it to run production services.

Next Steps


The next step for us is to move into our new Kubernetes cluster.

I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture


The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm.

We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI).

The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the containers are running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems.

One of the best things about the native Docker overlay network is that all traffic between the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host IPSec links between all of the hosts, with certificates from Let's Encrypt. (Mostly automatically; I still have to request the certificate manually.)

Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.
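To illustrate the gateway role, a minimal haproxy frontend that terminates TLS and forwards to the cluster. The addresses, port, and certificate path are assumptions for the sketch, not our actual configuration:

frontend www-in
    bind *:443 ssl crt /etc/haproxy/certs/site.pem   # TLS terminates at the gateway
    default_backend k8s_nodes

backend k8s_nodes
    balance roundrobin
    server node1 10.0.0.11:30080 check   # e.g. a NodePort exposed on each node
    server node2 10.0.0.12:30080 check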

Setup

• rke, kubespray, and kubeadm make installs easier
• Shared storage via Ceph and NFS
• Networking is the hard part


Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier.

One of the things I like about rke is that it has an easy option to tear down a cluster. This is great for testing.

We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the same volume at the same time.

The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster will use the cloud provider's load balancer to provide access. Obviously that's not the case when running locally.

All of the subnets used by Kubernetes are designed to be internal to the cluster, which makes it hard to make services public. Metallb has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of them. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
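For flavor, a minimal MetalLB Layer 2 configuration of the kind described above. The pool name is assumed, and the address range is a documentation example; substitute addresses that are routable on your LAN:

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2            # announce service IPs via ARP from the nodes
      addresses:
      - 192.0.2.240-192.0.2.250   # placeholder range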

Migration

• Convert from Docker Compose to Pods and Services
• Add CI jobs to deploy to the k8s cluster
• Change DNS to point to the k8s service
• No image rebuild needed


One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker-compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can even be deployed to both clusters, with tests run as needed.

For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image, and no need to rebuild or re-install.
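As a sketch of the one-time conversion, a single compose service roughly becomes a Deployment plus a Service. The names, image, and port below are placeholders:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 2
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: www
        image: registry.example.edu/www/wordpress:4.9.8   # the same image used in Swarm
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  selector:
    app: www
  ports:
  - port: 80
    targetPort: 80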

How Can You Get Started

• Start with something simple
• Start with services that can tolerate downtime
• You may need to change your backup strategy
• CI, CI, CI


Don't do what I did: start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started, which will happen a lot with your first services.

Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve; don't let it bite you (or discourage you).

When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there.

CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy


You can build from scratch, but there are a number of pre-built commercial services.

Docker Enterprise is a simple on-prem install and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now.

You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers, or even to use multiple providers at the same time.

Logging

• Containers generally log to standard out
• Logs can be shipped to a log aggregator such as Elasticsearch
• View logs and create dashboards in Kibana


Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write to standard out. The logs are then available via Docker, which also allows for a standard way to ship them to an aggregator.

Generally this is done by shipping the logs to Elasticsearch. One can then use a tool such as Kibana to access the logs, build dashboards, and even trigger alerts based on their contents.
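One way to wire this up is per service, through the compose logging options; a hedged sketch shipping a service's output to a GELF-speaking collector such as Logstash (the image and address are placeholders):

services:
  www:
    image: registry.example.edu/www/wordpress:4.9.8   # hypothetical image
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.edu:12201"  # GELF input on the collector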

Monitoring

• Collect metrics with Prometheus
• View the metrics with Grafana
• Use InfluxDB for long-term storage


Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics.

Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
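The hand-off to long-term storage can use Prometheus remote write. A minimal sketch for prometheus.yml, assuming an InfluxDB 1.x endpoint; the URL and database name are illustrative:

remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/read?db=prometheus"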

Security Scans

• No built-in way to see if an image has out-of-date packages
• clair and Docker Enterprise can scan images for vulnerable packages
• Doesn't work as well for 3rd-party packages


One of the things that you lose with containers is a built-in way to ensure that the software in an image is not running vulnerable packages. On a server you can use yum or apt to see if new updates need to be applied, but containers don't have that.

Tools such as clair can be used to scan images at build time as part of the CI process, or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. It works well for packages installed with standard tools (such as apt) but not as well for software installed from source.

One way to keep images up to date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
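A sketch of such a scheduled job, here using the community clair-scanner tool against a Clair server; the server address, stage name, and image variable are assumptions:

scan_image:
  stage: test
  only:
    - schedules            # triggered by a GitLab scheduled pipeline
  script:
    - docker pull $CI_REGISTRY_IMAGE:latest
    - clair-scanner --clair=http://clair.example.edu:6060 --ip=$(hostname -i) $CI_REGISTRY_IMAGE:latest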

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock


Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm.

To request access, go to the BANDOCK Google group and click the link that says "Apply for membership".

Questions?

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker


Thank You


References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland
• ELK Stack image - https://www.elastic.co/elk-stack
• Docker Enterprise - https://www.docker.com/products/docker-enterprise
• Book cover generated with the dev.to book generator - https://dev.to/rly
• Double Facepalm meme created at imgflip.com
• GitLab - https://about.gitlab.com
• Kubernetes - https://kubernetes.io
• Calico - https://www.projectcalico.org
• Metallb - https://metallb.universe.tf

Page 11: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Why Swarm

I Built into DockerI Encrypted overlay networksI Storage driver for Ceph RBD

Randall Smith Docker at Adams State University

Swarm is built into Docker Itrsquos almost trivial to setup Service andcontainer management is fairly simple and it still offered powerful featuressuch as zero-downtime updatesSwarm also provides a native encrypted overlay network to connect con-tainers running on separate hosts This rocks Communication betweenevery container in the cluster is encrypted transparently at the networklayer That allowed us to encrypt traffic for services such as MySQL whichcan be problematic to enable native encryptionMost importantly I was able to configure the Rexray storage driver forDocker to mount Ceph RBDs for persistent storageIn two days I was able to setup a fully orchestrated Docker cluster Ev-erything was coming up roses

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 12: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

The Real Win GitLab CI

Randall Smith Docker at Adams State University

The real win for us came in the form of GitLab CI GitLab became integralto our workflowWersquove been using GitLab for many years as a git server It was just as I wasstarting to look into orchestration platforms that GitLab added a Dockerregistry Since we didnrsquot have one yet this was a huge winShortly thereafter I started looking into GitLabrsquos continuous integrationfeatures

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 13: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Automated Container Builds

Randall Smith Docker at Adams State University

It started early in the process before I stood up the Swarm I was pushingmy image configurations into git in GitLab I started using GitLab CI toautomatically build new Docker imagesWhen a commit is pushed to a repo with CI enabled a new build is trig-gered Once the the build is complete itrsquos pushed into the Docker registryand is ready to use anywhere Where before I was building new images onmy desktop or on the servers themselves now I could let GitLab do it formeWhat makes this especially cool is that we were able to make this availableto our PR department to build new images for the main website The newadamsedu runs on Wordpress running in Docker Our web developer canbuild and test new images automaticallyI also like this process because it makes it easier to stand up services in atest environment We can test the image before it goes into productionOnce our testing is done the image that is being deployed into productionis the exact image that passed our tests This eliminates nearly all of theanxiety that comes with deploying new changes

Automated Deployment of Services

Randall Smith Docker at Adams State University

Even better we added steps to the CI process which allows him to deployhis updated Wordpress images to the Swarm without root access or needingto talk to a member of our ops team By choice the deployment istriggered manually especially to production This helps to ensure that weare being deliberate in our processesGitLab keeps an audit trail of every deployment so we can see every de-ployment that happens and who triggered it It also makes it easy to rollback to previous images in the event of serious problemsMost of our other Docker services have been migrated to this processas well This allows us to fully audit any deployments of services Theconsistent process also makes it easier for anyone on the team to makechanges if neededDoes it take longer to make small changes Yes Yes it does Howeverthe deliberate process ensures that we are consistent It also provides arecord of what changed (thanks to git) That way if we start getting callson a service we can see if there were any recent changes what they wereand when they were deployed We can also roll back updates in mostcases if there are problemsThis is a very DevOps approach Wersquore treating servers and services ascode

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

• Swarm scheduler failed
• Rexray driver doesn't consistently unmap RBDs
• Overlay network randomly stopped working
• IPAM assigns too many IP addresses


As we started to use Swarm more, we started to see problems appear. First of all, we ran into issues with the Swarm scheduler. There was a bug that triggered once in a while that prevented the Swarm from starting new services. Running containers were fine, but we couldn't start new ones. Eventually this was solved in a later Docker release. The other problem we have is that the Rexray driver doesn't always clean up after itself when a container is stopped. It can leave RBDs mounted or mapped on a node, preventing the container from starting elsewhere in the Swarm. This can cause a service to take longer to start, or prevent it from starting entirely. We had two major network issues. First of all, the overlay network would sometimes stop talking. Containers on the same node could continue to connect, but they couldn't talk to others in the Swarm. There turned out to be a conflict between the kernel network timeout settings and the IPVS settings, which led to the kernel dropping the overlay connections. The second problem is one we're still working around. There's a bug in the IPAM module, which assigns the virtual IPs. If services are frequently updated, it can lead to multiple IPs being assigned to the service and the old IPs not being cleaned up. This can lead to strange connectivity problems.

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept.


In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and the workflow for deploying services. Don't be fooled by the "proof of concept" label: Swarm is a solid product. So solid that we are using it to run production services.

Next Steps


The next step for us is to move into our new Kubernetes cluster. I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture


The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm. We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI). The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the container is running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems. One of the best things about the native Docker overlay network is that all traffic between all of the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host links between all of the hosts with IPsec and Let's Encrypt. (Mostly automatically: I still have to manually request the certificate.) Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.
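A minimal sketch of the gateway haproxy role described above, assuming TLS termination on port 443, hypothetical node addresses, and an assumed NodePort into the cluster:

frontend www_https
    bind *:443 ssl crt /etc/haproxy/certs/
    mode http
    default_backend k8s_nodes

backend k8s_nodes
    mode http
    balance roundrobin
    server node1 10.10.0.11:30080 check
    server node2 10.10.0.12:30080 check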

Setup

• rke, kubespray, and kubeadm make installs easier
• Shared storage via Ceph and NFS
• Networking is the hard part


Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier. One of the things I like about rke is that it has an easy option to tear down a cluster. This is great for testing. We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the volume at the same time. The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster use the load balancer from the cloud provider to provide access. Obviously, that's not the case when running it locally. All of the subnets used by Kubernetes are designed to be internal to the cluster. That makes it hard to make services public. Metallb has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of the IPs. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
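For reference, a Layer 2 Metallb configuration from that era is a ConfigMap along these lines (the address pool is an illustrative range, not ours):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.100.240-192.168.100.250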

Migration

• Convert from Docker Compose to Pods and Services
• Add CI jobs to deploy to the k8s cluster
• Change DNS to point to the k8s service
• No image rebuild needed


One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker-compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can be deployed to both environments and tested as needed. For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image. No need to rebuild or re-install.
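As a sketch of that one-time conversion, a single compose service typically becomes a Deployment plus a Service; the names and image below are illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 2
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: wordpress
        image: registry.example.edu/www/wordpress:1.4.2  # the exact image already running in Swarm
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: www
spec:
  selector:
    app: www
  ports:
  - port: 80
    targetPort: 80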

How Can You Get Started?

• Start with something simple
• Start with services that can tolerate downtime
• You may need to change your backup strategy
• CI, CI, CI


Don't do what I did: start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started, which will happen a lot with your first services. Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve. Don't let it bite you (or discourage you). When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there. CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy


You can build from scratch, but there are a number of pre-built commercial services. Docker Enterprise is a simple install on-prem and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now. You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers, or even use multiple providers at the same time.

Logging

• Containers generally log to standard out
• Logs can be shipped to a log aggregator such as Elasticsearch
• View logs and create dashboards in Kibana


Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write their logs to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator. Generally this is done by shipping those logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.
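One common way to wire this up (an assumption here, not necessarily our exact pipeline) is a per-service logging stanza in the compose file that ships logs to a GELF endpoint, such as Logstash, sitting in front of Elasticsearch:

services:
  web:
    image: registry.example.edu/www/wordpress:1.4.2
    logging:
      driver: gelf
      options:
        gelf-address: "udp://logstash.example.edu:12201"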

Monitoring

• Collect metrics with Prometheus
• View the metrics with Grafana
• Use InfluxDB for long-term storage


Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics. Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
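InfluxDB 1.x exposes Prometheus-compatible endpoints, so the hand-off can be a remote_write / remote_read pair in prometheus.yml (host and database name are placeholders):

remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"
remote_read:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/read?db=prometheus"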

Security Scans

• No built-in way to see if an image has out-of-date packages
• clair and Docker Enterprise can scan images for vulnerable packages
• Doesn't work as well for 3rd-party packages


One of the things that you lose with containers is a built-in way to ensure that the software in an image is not running vulnerable packages. You can use yum or apt on a server to see if new updates need to be applied, but containers don't have that. Tools such as clair can be used to scan images at build time as part of the CI process, or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source. One way to keep images up-to-date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
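A sketch of such a scheduled job in .gitlab-ci.yml, using GitLab's standard predefined variables (the scan step itself is omitted for brevity):

rebuild-image:
  stage: build
  only:
    - schedules
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE:latest" .  # --pull forces the freshest base image
    - docker push "$CI_REGISTRY_IMAGE:latest"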

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock


Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm. To request access, go to the BANDOCK Google group and click the link that says "Apply for membership".

Questions

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker


Thank You


References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland

• ELK Stack image - https://www.elastic.co/elk-stack

• Docker Enterprise - https://www.docker.com/products/docker-enterprise

• Book cover generated with the dev.to book generator - https://dev.to/rly

• Double Facepalm meme created at imgflip.com

• GitLab - https://about.gitlab.com

• Kubernetes - https://kubernetes.io

• Calico - https://www.projectcalico.org

• Metallb - https://metallb.universe.tf


Page 15: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Building Images

FROM wordpress498-apacheLABEL maintainer=Mike Henderson mhendersonadamsedu

RUN apt-get update ampamp apt-get install -y curl zip unzip git libldap2-dev ampamp rm -rf varlibaptlists ampamp docker-php-ext-configure ldap --with-libdir=libx86_64-linux-gnu ampamp docker-php-ext-install ldap ampamp apt-get purge -y --auto-remove libldap2-dev ampamp php -r readfile(rsquohttpgetcomposerorginstallerrsquo)

| php -- --install-dir=usrbin --filename=composer

git clone of themes and plugins removed

Pull in composer file and runCOPY composerjson RUN composer install --verbose --profile --prefer-dist --no-autoloader

Make NFS mount point set ownership symlink into wordpressRUN mkdir uploadsRUN chown www-datawww-data uploadsRUN ln -s uploads usrsrcwordpresswp-contentuploads

Randall Smith Docker at Adams State University

Building an image is the first step to running a service in Docker Using offthe shelf images can be a great way to get started but eventually yoursquollneed to roll your own image This is done via a DockerfileThe easiest place to start is from an existing image The slide shows asnippet from the Dockerfile that wersquore using to build the image for whatwill be our main web site In this case wersquore expanding off the officialWordpress image to build exactly what we need for adamseduOur web guru Mike Henderson built and maintains the Dockerfile Our CIprocess automates the build process when he pushes a change into GitLaband allows him to deploy it to testing or productionThe build process makes it easier for others to review and audit an imageUnlike a server everything that goes into a service is in the image Youdonrsquot have extra things hanging around that no one knows about becausethe person doing the install forgot about it

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 16: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Deploying Services

docker stack deploy -c docker-composeyml www

Randall Smith Docker at Adams State University

So letrsquos dig into how we deploy a service This is done in Swarm witha docker-composeyml file In it you specify all of the services volumesand networks that your application needs All of this happens at run timeThis is the compose file that wersquore using for to test our wordpress deploy-ment for our new website

bull base images are usually very minimal

bull version tags allow rollback Go back to previous version

bull this db approach means that every service gets own dedicated DB

[ back to slide ] When wersquore ready to deploy we run docker stackdeploy and Swarm tries to make reality match what wersquove defined inthe compose fileThe combination of Dockerfiles and the compose file define the entireconfiguration for a service This fills the same role as server configurationmanagement

Zero-downtime Upgrades

I Requires the service to run multiple replicasI Running containers are replaced one at a time until they are all

replacedI The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to per-form zero-downtime upgrades This feature is based firstly on runningmultiple replicas of the image Each running container is updated one ata time or in a configurable number of groupsThe built-in load balancer will serve requests from each running containerAs containers are shutdown they are removed from the load balancer Thenew containers are added and will start serving requests Eventually everycontainer will be replaced with ones running the new image The servicewill remain up the entire timeDuring the update process some requests may go to old containers whileothers go to the new ones As long as everything is backwards compatiblethis works wellEven for services that cannot run with multiple replicas such as databasesthey generally restart so quickly that downtime for upgrades is reduced toseconds

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University


Zero-downtime Upgrades

- Requires the service to run multiple replicas
- Running containers are replaced one at a time until they are all replaced
- The load balancer serves requests from all running services

Randall Smith Docker at Adams State University

One of the great features of Swarm and Kubernetes is the ability to perform zero-downtime upgrades. This feature is based firstly on running multiple replicas of the image. Each running container is updated one at a time, or in a configurable number of groups.

The built-in load balancer will serve requests from each running container. As containers are shut down, they are removed from the load balancer. The new containers are added and will start serving requests. Eventually every container will be replaced with ones running the new image. The service will remain up the entire time.

During the update process, some requests may go to old containers while others go to the new ones. As long as everything is backwards compatible, this works well.

Even services that cannot run with multiple replicas, such as databases, generally restart so quickly that downtime for upgrades is reduced to seconds.
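
As a concrete illustration (the registry, image, and service names are placeholders, not our actual setup), Swarm exposes these rollout knobs directly on docker service update:

docker service update \
  --image registry.example.edu/helpdesk:2.1 \
  --update-parallelism 1 \
  --update-delay 10s \
  helpdesk

With --update-parallelism 1 the containers are replaced one at a time; a larger value replaces them in groups, and --update-delay adds a pause between groups.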

Who Needs a Server?

docker run -v /path/to/results:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2,host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that we started to kill off servers. In some cases it was what you would expect: the server that was running a service isn't needed anymore. In one case in particular the change was even more drastic.

We use OpenVAS to run regular security scans against our servers. The OpenVAS server included the security scanning and a web interface for managing the scans and providing access to the reports.

Cameron started looking into how to move OpenVAS into a container and discovered a couple of pre-built images. One of them allows you to pass in a list of hosts to scan on startup. The container will go through the list, scan every host, and write the report to a volume. Even better, the container downloads all of the latest checks when it starts, so it is always up-to-date.

We were able to replace an entire server with a one-liner. We can schedule that to run as a cron job or run it on demand anytime we need it.
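
For example (the report path, host list, and schedule here are illustrative), the scan can be driven by an ordinary cron entry on any Docker host:

# Run the OpenVAS scan container every Sunday at 02:00
0 2 * * 0 docker run --rm -v /srv/openvas/reports:/openvas/reports ictu/openvas-docker /openvas/run_scan.py host1,host2 weekly-scan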

Who Needs a Cluster?

- Use single-node Docker Swarm for standalone services
- All of the CI tooling is still available
- Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run a service on a standalone server. However, you can still take advantage of the managed deployment and rollback that is available with Docker Swarm on a single host.

All of the CI build and deployment options are available as they would be if the service were running on a full cluster. Instead, a specific gitlab-runner is used on the standalone host. Deployments are then configured to use that runner.

There are two big wins when taking this approach on standalone servers. First, you get the rapid rollback in the event of failure. Second, you get the audit trail and accountability that comes from the CI environment.
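
A minimal sketch of the pattern, assuming a gitlab-runner registered on the standalone host under a hypothetical "standalone" tag (the stack and file names are placeholders):

# On the standalone server: make it a one-node Swarm
docker swarm init

# .gitlab-ci.yml: pin the deploy job to that host's runner
deploy:
  stage: deploy
  tags:
    - standalone
  script:
    - docker stack deploy --with-registry-auth -c docker-compose.yml myservice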

There Were Problems

- Swarm scheduler failed
- Rexray driver doesn't consistently unmap RBDs
- Overlay network randomly stopped working
- IPAM assigns too many IP addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more, we started to see problems appear.

First of all, we ran into issues with the Swarm scheduler. There was a bug that triggered once in a while that prevented the Swarm from starting new services. Running containers were fine, but we couldn't start new ones. Eventually this was solved in a later Docker release.

The other problem we have is that the Rexray driver doesn't always clean up after itself when a container is stopped. It can leave RBDs mounted or mapped on a node, preventing it from starting elsewhere in the Swarm. This can cause a service to take longer to start or prevent it from starting entirely.

We had two major network issues. First of all, the overlay network would sometimes stop talking. Containers on the same nodes could continue to connect, but they couldn't talk to others in the Swarm. There turned out to be a conflict between the kernel network timeout settings and the IPVS settings, which led to the kernel dropping the overlay connections.

The second problem is one we're still working around. There's a bug in the IPAM module which assigns the virtual IPs. If services are frequently updated, it can lead to multiple IPs being assigned to the service and the old IPs not being cleaned up. This can lead to strange connectivity problems.
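
For reference, the commonly cited workaround for that kernel/IPVS conflict is to drop the kernel's TCP keepalive below the IPVS idle timeout (900 seconds by default). The values below are illustrative of that workaround, not necessarily our exact settings:

# /etc/sysctl.d/90-swarm-overlay.conf (illustrative values)
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.tcp_keepalive_probes = 10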

Not A Total Loss

In the end, Docker Swarm has proven to be a solid proof of concept.

Randall Smith Docker at Adams State University

In the end, Docker Swarm has proven to be a solid proof of concept. We learned a lot about how to handle containers and the workflow for deploying services.

Don't be fooled by the "proof of concept" label. Swarm is a solid product. So solid that we are using it to run production services.

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes cluster.

I wish I could say that we were there now. Unfortunately, little things like moving to Banner 9 took up a lot of time. Here's where we are now and what our plans are moving forward.

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what we're doing with Swarm.

We're using haproxy on the gateway nodes to provide public access to services running in the cluster. It also provides TLS termination for web services. The nodes are managed by puppet (and GitLab CI).

The Kubernetes cluster can use a number of different drivers to provide the network overlay and enforce network access controls. The Calico driver, for example, uses BGP to route requests for services to the host where the containers are running. Using BGP means that standard routing and networking tools can be used to troubleshoot problems.

One of the best things about the native Docker overlay network is that all traffic between all of the containers on that network in the Swarm can be encrypted. I didn't want to lose that when moving to Kubernetes, so I use puppet to automatically configure libreswan to build host-to-host links between all of the hosts with IPSec and Let's Encrypt. (Mostly automatically; I still have to manually request the certificate.)

Cluster management goes through kubeadmin01. A gitlab-runner on kubeadmin01 handles service updates coming from GitLab CI.
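
As a sketch of the gateway layer (the hostnames, addresses, ports, and certificate path are assumptions, not our actual config), haproxy terminates TLS and forwards to service ports inside the cluster:

frontend https_in
    bind *:443 ssl crt /etc/haproxy/certs/example.pem
    default_backend k8s_nodes

backend k8s_nodes
    balance roundrobin
    server node1 10.10.0.11:30080 check
    server node2 10.10.0.12:30080 check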

Setup

- rke, kubespray, and kubeadm make installs easier
- Shared storage via Ceph and NFS
- Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud. Unfortunately, that means that deploying locally is not as easy as it could be. Tools like rke, kubespray, and kubeadm make it easier.

One of the things I like about rke is that it has an easy option to tear down a cluster. This is great for testing.

We use Ceph RBD to provide block devices to our Linux KVM cluster. We are also able to leverage that for Docker, but the way it's configured doesn't work well if volumes need to be shared. We use NFS for those cases where multiple services need to access the volume at the same time.

The cloud focus really shows up when it comes time to set up networking. There's an expectation that the cluster use the load balancer from the cloud provider to provide access. Obviously, that's not the case when running it locally.

All of the subnets used by Kubernetes are designed to be internal to the cluster. That makes it hard to make services public. Metallb has a couple of options to do that. The easiest is Layer 2 mode, which publishes IPs directly on the nodes and uses ARP to make other servers aware of the IPs. This is what we're using. The other option is BGP, which publishes the IPs and their location in the cluster to other routers.
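
To make Layer 2 mode concrete, MetalLB of that era was configured with a small ConfigMap handing it a pool of routable addresses to announce over ARP (the address range below is a placeholder):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.10.240-192.168.10.250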

Migration

- Convert from Docker Compose to Pods and Services
- Add a CI job to deploy to the k8s cluster
- Change DNS to point to the k8s service
- No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manage service deployments is that very little needs to change. There is a one-time conversion of the docker compose file to the Kubernetes config format. The CI configuration also needs to be updated so that deployments go to Kubernetes. After that, deployment through GitLab remains the same as it was with Swarm. Services that do not have external volumes can be deployed to both clusters and tests run as needed.

For many services, migration can be as easy as changing a DNS record. Remember, the exact same image that we're running in the Swarm will be running in Kubernetes. There are no surprises with the image. No need to rebuild or re-install.
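
The talk doesn't prescribe a tool for that one-time conversion; as one hedged example, kompose can generate Kubernetes manifests from an existing Compose file (the helpdesk file names are hypothetical and depend on the service name):

# Generate Deployment/Service manifests from the Compose file
kompose convert -f docker-compose.yml

# Deploy the generated manifests
kubectl apply -f helpdesk-deployment.yaml -f helpdesk-service.yaml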

How Can You Get Started?

- Start with something simple
- Start with services that can tolerate downtime
- You may need to change your backup strategy
- CI, CI, CI

Randall Smith Docker at Adams State University

Don't do what I did. Start with something simple. Web services and tasks that don't require permanent local storage are a great start. These are usually easier to build and more resilient to being frequently stopped and started. This will happen a lot with your first services.

Start with services that can tolerate downtime. Test services are great, but you'll eventually want to move something useful into the cluster. There's a learning curve. Don't let it bite you (or discourage you).

When you move to containers, how you back up your data may need to change. You don't have a server to log into and run backup scripts. Know where your data lives and back it up there.

CI, CI, CI. Also, CI. Having a consistent workflow will make it easier for everyone to start using the new cluster.

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch, but there are a number of pre-built commercial services.

Docker Enterprise is a simple on-prem install and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now.

You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers or even use multiple providers at the same time.

Logging

- Containers generally log to standard out
- Logs can be shipped to a log aggregator such as Elasticsearch
- View logs and create dashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write their logs to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator.

Generally this is done by shipping those logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.
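
One common pattern (an assumption here, not necessarily our exact setup) routes container logs through a Fluentd logging driver on each Docker host and on into Elasticsearch; the aggregator address is a placeholder. In /etc/docker/daemon.json:

{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "logs.example.edu:24224"
  }
}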

Monitoring

- Collect metrics with Prometheus
- View the metrics with Grafana
- Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics.

Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
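
A minimal sketch of that hand-off, assuming InfluxDB 1.x's Prometheus-compatible write endpoint (the host name and database are placeholders), in prometheus.yml:

remote_write:
  - url: "http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus"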

Security Scans

- No built-in way to see if an image has out-of-date packages
- clair and Docker Enterprise can scan images for vulnerable packages
- Doesn't work as well for 3rd-party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensure that software in an image is not running vulnerable packages. You can use yum or apt to see if new updates need to be applied, but containers don't have that.

Tools such as clair can be used to scan images at build time as part of the CI process or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source.

One way to keep images up-to-date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
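
A sketch of that scheduled job, assuming clair-scanner and a GitLab pipeline schedule (the clair endpoint and image name are placeholders):

scan:
  stage: test
  only:
    - schedules
  script:
    - clair-scanner --clair=http://clair.example.edu:6060 --ip "$(hostname -i)" registry.example.edu/helpdesk:latest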

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm.

To request access, go to the BANDOCK Google group and click the link that says "Apply for membership".

Questions

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

- Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland
- ELK Stack image - https://www.elastic.co/elk-stack
- Docker Enterprise - https://www.docker.com/products/docker-enterprise
- Book cover generated with the dev.to book generator - https://dev.to/rly
- Double Facepalm meme created at imgflip.com
- GitLab - https://about.gitlab.com
- Kubernetes - https://kubernetes.io
- Calico - https://www.projectcalico.org
- Metallb - https://metallb.universe.tf

Page 18: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Who Needs a Server

docker run ictuopenvas-docker openvasrun_scanpy -v pathtoresultsopenvasreports host1host2host2 report-name

Randall Smith Docker at Adams State University

One of the great things about running services in containers is that westarted to kill off servers In some cases is was what you would expectThe server that was running a service isnrsquot needed anymore In one casein particular the change was even more drasticWe use OpenVAS to run regular security scans against our servers TheOpenVAS server included the security scanning and a web interface formanaging the scans and providing access to the reportsCameron started looking into how to move OpenVAS into a container anddiscovered a couple of pre-built images One of them allows you to passin a list of hosts to scan on startup The container will go through thelist scan every host and write the report to a volume Even better thecontainer downloads all of the latest checks when it starts so it is alwaysup-to-dateWe were able to replace an entire server with a one-liner We can schedulethat to run as a cron job or run it on demand anytime we need it

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 19: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Who Needs a Cluster

I Use single node Docker Swarm for standalone servicesI All of the CI tooling is still availableI Take advantage of rollback and CI audit trail

Randall Smith Docker at Adams State University

We also found that in some cases it still might make sense to run aservice on a standalone server However you can still take advantage ofthe managed deployment and rollback that is available with Docker Swarmon a single hostAll of the CI build and deployment options are available as they wouldbe if the service were running on a full cluster Instead a specificgitlab-runner is used on the standalone host Deployments are thenconfigured to use that runnerThere are two big wins when taking this approach on standalone serversFirst you get the rapid rollback in the event of failure Second you getthe audit trail and accountability that comes from the CI environment

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 20: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

There Were Problems

I Swarm scheduler failedI Rexray driver doesnrsquot

consistently unmap RBDsI Overlay randomly network

stopped workingI IPAM assigns too many IP

addresses

Randall Smith Docker at Adams State University

As we started to use Swarm more we started to see problems appearFirst of all we ran into issues with the Swarm scheduler There was a bugthat triggered once in a while that prevented the Swarm from starting newservices Running containers were fine but we couldnrsquot start new onesEventually this was solved in a later Docker releaseThe other problem we have is the Rexray driver doesnrsquot always cleanupafter itself when a container is stopped It can leave RBDs mounted ormapped on a node preventing it from starting elsewhere in the SwarmThis can cause a service to take longer to start or prevent it from startingentirelyWe had two major network issues First of all the overlay network wouldsometimes stop talking Containers on the same nodes could continue toconnect but they couldnrsquot talk to others in the Swarm There turned outto be a conflict between the Kernel network timeout settings and the IPVSsettings which led to the Kernel dropping the overlay connectionsThe second problem is one wersquore still working around Therersquos a bug inthe IPAM module which assigns the virtual IPs If services are frequentlyupdated it can lead to multiple IPs being assigned to the service andthe old IPs not being cleaned up This can lead to strange connectivityproblems

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy

Randall Smith Docker at Adams State University

You can build from scratch but there are a number of pre-built commercialservicesDocker Enterprise is a simple install on-prem and provides both Swarmand Kubernetes services at the same time It also provides a registry androle-based access control We are testing this approach nowYou can also go straight to the cloud with GKE AWS or Azure Infact using Kubernetes makes it much easier to transition between cloudproviders or even use multiple providers at the same time

Logging

I Containers generally log tostandard out

I Logs can be shipped to a logaggreagator such asElasticsearch

I View logs and createdashboards in Kibana

Randall Smith Docker at Adams State University

Moving to Docker changes a number of things not the least of which islogging The official accepted way for containers to log is to write theirlogs to standard out The logs are then available via Docker This alsoallows for a standard way to ship those logs to an aggregatorGenerally this done by shipping those logs to Elasticsearch One can thenuse a tool such as Kibana to access those logs build dashboards and eventrigger alerts based on the contents of those logs

Monitoring

I Collect metrics with PrometheusI View the metrics with GrafanaI Use InfluxDB for long-term storage

Randall Smith Docker at Adams State University

Prometheus has become the premier way to collect metrics on Kubernetesclusters It collects host stats as well as information on running podsGrafana is frequently used to build dashboards to view the collected met-ricsPrometheus only keeps metrics for a short amount of time Usually lessthan two weeks The metrics can be shipped to InfluxDB for long termstorage and analysis

Security Scans

I No built-in way to see if an image has out-of-date packagesI clair and Docker Enterprise can scan images for vulnerable

packagesI Doesnrsquot work as well for 3rd party packages

Randall Smith Docker at Adams State University

One of the things that you lose with containers is a built-in way to ensurethat software in an image is not running vulnerable packages You can useyum or apt to see if new updates need to be applied but containers donrsquothave thatTools such as clair can be used to scan images at build time as part ofthe CI process or to regularly scan all images in a registry This feature isalso included in Docker Enterprise This works well for packages installedwith standard tools (such as apt) but doesnrsquot work as well for softwareinstalled from sourceOne way to keep images up-to-date is run scheduled tasks in GitLab CIthat scans the image and rebuilds is with the latest updates installed ifsomething is found Remember that base images are generally very strippeddown so they will generally have fewer updates

BANDOCK Google Group

httpsgroupsgooglecomforumforumbandock

Randall Smith Docker at Adams State University

Virginia Tech started a Google Group for schools looking to run Banner inDocker VT is running Banner 9 in Docker SwarmTo request access go to the BANDOCK Google group and click the linkthat says Apply for membership

Questions

Feel free to ask questions after the conference

Email rbsmithadamseduPhone 719-587-7836Twitter PerlStalker

Randall Smith Docker at Adams State University

Thank You

Randall Smith Docker at Adams State University

References

bull Two Five and Seven painting the rosebush from AlicersquosAdventures in Wonderland

bull ELK Stack image - httpswwwelasticcoelk-stack

bull Docker Enterprise -httpswwwdockercomproductsdocker-enterprise

bull Book cover generated with the devto book generator -httpsdevtorly

bull Double Facepalm meme created at imgflipcom

bull GitLab - httpsaboutgitlabcom

bull Kubernetes - httpskubernetesio

bull Calico - httpswwwprojectcalicoorg

bull Metallb - httpsmetallbuniversetf

Page 21: Docker at Adams State UniversityRandall Smith Docker at Adams State University One of the best things about using Docker and GitLab CI to manage service deployments is that very little

Not A Total Loss

In the end Docker Swarm has proven to be a solid proof ofconcept

Randall Smith Docker at Adams State University

In the end Docker Swarm has proven to be a solid proof of conceptWe learned a lot about how to handle containers and the workflow fordeploying servicesDonrsquot be fooled with the proof of concept label Swarm is a solid prod-uct So solid that we are using this to run production services

Next Steps

Randall Smith Docker at Adams State University

The next step for us is to move into our new Kubernetes clusterI wish I could say that we were there now Unfortunately little things likemoving to Banner 9 took up a lot of time Herersquos where we are now andwhat our plans are moving forward

Architecture

Randall Smith Docker at Adams State University

The architecture for the new Kubernetes cluster mirrors what wersquore doingwith SwarmWersquore using haproxy on the gateway nodes to provide public access toservices running in the cluster It also provides TLS termination for webservices The nodes are managed by puppet (and GitLab CI)The Kubernetes cluster can use a number of different drivers to provide thenetwork overlay and enforce network access controls The Calico driverfor example uses BGP to route requests for services to the host where thecontainers running Using BGP means that standard routingnetworkingtools can be used to troubleshoot problemsOne of the best things about the native Docker overlay network is thatall traffic between all of the containers on that network in the Swarm canbe encrypted I didnrsquot want to lose that when moving to Kubernetes so Iuse puppet to automatically configure libreswan to build host-to-host linksbetween all of the hosts with IPSec and letsencrypt (Mostly automaticallyI still have to manually request the the certificate)Cluster management goes through kubeadmin01 A gitlab-runner onkubeadmin01 handles service updates coming from GitLab CI

Setup

I rke kubespray and kubeadm make installs easierI Shared storage via Ceph and NFSI Networking is the hard part

Randall Smith Docker at Adams State University

Kubernetes development is focused on the cloud Unfortunately thatmeans that deploying locally is not as easy as it could be Tools likerke kubespray and kubeadm make it easierOne of the things I like about rke is that it has an easy option to teardown a cluster This is great for testingWe use Ceph RBD to provide block devices to our Linux KVM clusterWe are also able to leverage that for Docker but the way itrsquos configureddoesnrsquot work well if volumes need to be shared We use NFS for thosecases where multiple services need to access the volume at the same timeThe cloud focus really shows up when it comes time to setup networkingTherersquos an expectation that the cluster use the load balancer from thecloud provider to provide access Obviously thatrsquos not the case whenrunning it locallyAll of the subnets used by Kubernetes are designed to be internal to thecluster That makes it hard to make services public Metallb has a coupleof options to do that The easiest is Layer 2 mode which publishes IPsdirectly on the nodes and use ARPs to make other servers aware of theIPs This is what wersquore using The other option is BGP with publishesthe IPs and their location in the cluster to other routers

Migration

I Convert from Docker Compose to Pods and ServicesI Add a CI jobs to deploy to the k8s clusterI Change DNS to point to the k8s serviceI No image rebuild needed

Randall Smith Docker at Adams State University

One of the best things about using Docker and GitLab CI to manageservice deployments is that very little needs to change There is a one-time conversion of the docker compose file to the Kubernetes config formatThe CI configuration also needs to be updated so that deployments go toKubernetes After that deployment through GitLab remains the same asit was with Swarm For services that do not have external volumes theycan be deployed to both clouds and tests run as neededFor many services migration can be as easy as changing a DNS recordRemember the exact same image that wersquore running in the Swarm will berunning in Kubernetes There are no surprises with the image No needto rebuild or re-install

How Can You Get Started

I Start with something simpleI Start with services that can tolerate downtimeI You may need to change your backup strategyI CI CI CI

Randall Smith Docker at Adams State University

Donrsquot do what I did Start with something simple Web services and tasksthat donrsquot require permanent local storage are a great start These areusually easier to build and more resilient to being frequently stopped andstarted This will happen a lot with your first servicesStart with services that can tolerate downtime Test services are great butyoursquoll eventually want to move something useful into the cluster Therersquosa learning curve Donrsquot let it bite you (or discourage you)When you move to containers how you back up your data may need tochange You donrsquot have a server to log into and run backup scripts Knowwhere your data lives and back it up thereCI CI CI Also CI Having a consistent workflow will make it easier foreveryone to start using the new cluster

Build vs Buy


You can build from scratch, but there are a number of pre-built commercial options.
Docker Enterprise is a simple on-prem install and provides both Swarm and Kubernetes services at the same time. It also provides a registry and role-based access control. We are testing this approach now.
You can also go straight to the cloud with GKE, AWS, or Azure. In fact, using Kubernetes makes it much easier to transition between cloud providers, or even to use multiple providers at the same time.

Logging

- Containers generally log to standard out
- Logs can be shipped to a log aggregator such as Elasticsearch
- View logs and create dashboards in Kibana


Moving to Docker changes a number of things, not the least of which is logging. The officially accepted way for containers to log is to write their logs to standard out. The logs are then available via Docker. This also allows for a standard way to ship those logs to an aggregator.
Generally this is done by shipping those logs to Elasticsearch. One can then use a tool such as Kibana to access those logs, build dashboards, and even trigger alerts based on the contents of those logs.
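
One common way to do that shipping is to run a log forwarder on every node as a DaemonSet. A trimmed sketch using Fluent Bit (not named in the talk); the image tag is hypothetical, and the input/output configuration that tails container logs and points at Elasticsearch would live in a ConfigMap that is elided here:

    apiVersion: apps/v1
    kind: DaemonSet
    metadata:
      name: fluent-bit
      namespace: logging            # hypothetical namespace
    spec:
      selector:
        matchLabels:
          app: fluent-bit
      template:
        metadata:
          labels:
            app: fluent-bit
        spec:
          containers:
            - name: fluent-bit
              image: fluent/fluent-bit:1.0    # hypothetical tag
              volumeMounts:
                - name: varlog
                  mountPath: /var/log         # where Docker's JSON log files land
                  readOnly: true
          volumes:
            - name: varlog
              hostPath:
                path: /var/log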

Monitoring

- Collect metrics with Prometheus
- View the metrics with Grafana
- Use InfluxDB for long-term storage


Prometheus has become the premier way to collect metrics on Kubernetes clusters. It collects host stats as well as information on running pods. Grafana is frequently used to build dashboards to view the collected metrics.
Prometheus only keeps metrics for a short amount of time, usually less than two weeks. The metrics can be shipped to InfluxDB for long-term storage and analysis.
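
That hand-off can use Prometheus's remote-write support. A minimal prometheus.yml sketch, assuming InfluxDB 1.x's Prometheus write endpoint, a database named prometheus, and a hypothetical node_exporter target:

    global:
      scrape_interval: 30s

    scrape_configs:
      - job_name: nodes             # host stats via node_exporter (assumed)
        static_configs:
          - targets: ['node01:9100']

    remote_write:
      - url: http://influxdb.example.edu:8086/api/v1/prom/write?db=prometheus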

Security Scans

- No built-in way to see if an image has out-of-date packages
- clair and Docker Enterprise can scan images for vulnerable packages
- Doesn't work as well for 3rd-party packages


One of the things that you lose with containers is a built-in way to ensure that the software in an image is not running vulnerable packages. On a traditional server you can use yum or apt to see if new updates need to be applied, but containers don't have that.
Tools such as clair can be used to scan images at build time as part of the CI process, or to regularly scan all images in a registry. This feature is also included in Docker Enterprise. This works well for packages installed with standard tools (such as apt) but doesn't work as well for software installed from source.
One way to keep images up to date is to run scheduled tasks in GitLab CI that scan the image and rebuild it with the latest updates installed if something is found. Remember that base images are generally very stripped down, so they will generally have fewer updates.
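
A sketch of that scheduled-rebuild idea as a GitLab CI job; the schedule itself is created in the GitLab UI, and the vulnerability-scan step is elided:

    rebuild:
      script:
        - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY
        # --pull grabs the latest base image so package updates land in the rebuild
        - docker build --pull -t $CI_REGISTRY_IMAGE:latest .
        - docker push $CI_REGISTRY_IMAGE:latest
      only:
        - schedules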

BANDOCK Google Group

https://groups.google.com/forum/#!forum/bandock


Virginia Tech started a Google Group for schools looking to run Banner in Docker. VT is running Banner 9 in Docker Swarm.
To request access, go to the BANDOCK Google Group and click the link that says "Apply for membership".

Questions

Feel free to ask questions after the conference

Email: rbsmith@adams.edu
Phone: 719-587-7836
Twitter: @PerlStalker


Thank You


References

• Two, Five, and Seven painting the rosebush, from Alice's Adventures in Wonderland
• ELK Stack image - https://www.elastic.co/elk-stack
• Docker Enterprise - https://www.docker.com/products/docker-enterprise
• Book cover generated with the dev.to book generator - https://dev.to/rly
• Double Facepalm meme created at imgflip.com
• GitLab - https://about.gitlab.com
• Kubernetes - https://kubernetes.io
• Calico - https://www.projectcalico.org
• MetalLB - https://metallb.universe.tf
