Post on 06-Jan-2017

Zero downtime Java deployments with Docker & Kubernetes

@pbakker @arjanschaaf

Why care about containers

Don’t worry about environment setup

Easy to replicate to dev/test/prod

More compact than VMs

Why care about Kubernetes

Docker is about containers on a single host

How to deploy on a cluster?

What about failover of nodes?

How to network between nodes?

Automated, production ready Kubernetes cluster in 8 steps

Step 0 - Understanding Kubernetes

Terminology and concepts to build upon

Nodes, Pods, Controllers

[Slide diagram: the Master schedules Pods onto Nodes; each Pod groups one or more Docker containers, and a Replication Controller keeps the desired number of Pods running.]

Deployment 101

Push your Docker image

Create a new replication controller JSON file

kubectl create -f mycontroller.json

Replication Controller creates Pods

mycontroller.json:

"spec": {
  "replicas": 3,
  "selector": { "name": "frontend" },
  "template": {
    "metadata": { "labels": { "name": "frontend" } },
    "spec": {
      "containers": [{
        "name": "php-redis",
        "image": "kubernetes/example-guestbook-php-redis:v2",
        "ports": [{ "containerPort": 80 }]
      }]
    }
  }
}

Scaling

kubectl scale --replicas=10 myreplication-controller

Updating my app

Create a new Replication Controller JSON file

kubectl create -f my-new-rc.json

Scale down and delete old RC

Step 1 - Automated deployment (simplistic)

This kubectl stuff doesn’t really feel like automation…

The simplest Automated deployment

Don’t use kubectl, use the API!

Build server creates Replication Controller using REST

Build server destroys old cluster using REST

[Slide diagram: the Build Server pushes an image to the Docker registry and creates a Replication Controller through the Kubernetes API on the Master, which schedules the Pods onto the Nodes.]

Curl example

curl -X POST \
  http://k8-master:8080/api/v1/namespaces/default/replicationcontrollers \
  -d '{
    # Pod definition
  }'
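The curl call above can be fleshed out into a small build-server script. A minimal Python sketch (stdlib only); the API address, controller name and image echo the slides, everything else is an assumption:

```python
import json
import urllib.request

def rc_manifest(name, image, replicas):
    """Build a minimal ReplicationController manifest (v1 API)."""
    return {
        "apiVersion": "v1",
        "kind": "ReplicationController",
        "metadata": {"name": name},
        "spec": {
            "replicas": replicas,
            "selector": {"name": name},
            "template": {
                "metadata": {"labels": {"name": name}},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": 80}],
                }]},
            },
        },
    }

def create_rc(api_base, manifest):
    """POST the manifest to the replicationcontrollers endpoint."""
    req = urllib.request.Request(
        api_base + "/api/v1/namespaces/default/replicationcontrollers",
        data=json.dumps(manifest).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

# create_rc("http://k8-master:8080",
#           rc_manifest("frontend", "kubernetes/example-guestbook-php-redis:v2", 3))
```

Destroying the old cluster is the mirror image: a DELETE against the old controller's URL.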

What about downtime?

Not quite there yet

Step 2 - Load balancing

Our containers are running, but how do we access them!?

Pods come and go

Pods have dynamic IP addresses

First try - Kubernetes Services

A service is a proxy to your Pods

Fixed IP

[Slide diagram: a Service ("MyService") exposes a fixed virtual IP and proxies HTTP traffic to the Pods on each Node.]

What about SSL offloading?

… better load balancing?

… redirects, rewrites, etc?

… and that “fixed” IP can’t be reached!?

Services - Not quite right

Services are for communication within the k8 network (inter Pod communication)

Services - A Hammer and screws…

[Slide diagram: HAProxy on a fixed IP terminates HTTPS and forwards traffic to the Pods' virtual IPs on each Node.]

Custom load balancer

[Slide diagram: a Proxy Registrator watches the Kubernetes API and writes backend configuration to etcd; confd watches etcd and updates the HAProxy config file.]

Choosing a load balancer

Vulcand uses etcd for all its config

Can use Nginx / HAProxy with templating ⇒ confd
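The confd route boils down to two files: a template resource that tells confd which etcd keys to watch and how to reload HAProxy, and a Go-template fragment that renders the backends. A hypothetical sketch; the paths and key layout are assumptions, not from the talk (values under /frontends/myapp are assumed to hold ip:port strings):

```
# /etc/confd/conf.d/myapp.toml (template resource)
[template]
src        = "haproxy.cfg.tmpl"
dest       = "/etc/haproxy/haproxy.cfg"
keys       = ["/frontends/myapp"]
reload_cmd = "service haproxy reload"

# /etc/confd/templates/haproxy.cfg.tmpl (rendered backends)
backend myapp
{{range gets "/frontends/myapp/*"}}    server {{base .Key}} {{.Value}} check
{{end}}
```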

So you’re telling me…

--link doesn’t work!?

And now you’re telling me…

I can’t see my Pods!?

Step 3 - Software Defined Network

Each Pod gets its own IP

Access Pods from outside k8 on the flannel network

[Slide diagrams: HAProxy reaches the Pods directly over HTTP because the Kubernetes/flannel network is a publicly addressable network segment.]

SDN - loads of options

Flannel: easy to set up & fast (on CoreOS)

Weave: userspace implementation is slow, but loads of features

Project Calico: promising integration with Kubernetes

Docker libnetwork: batteries included but swappable

Step 4 - Blue / Green deployment

Auto deploy is great, but downtime not so much

Step 4 - Blue / Green

Scale up new cluster

Wait until healthy

Switch backend in Load Balancer

Dispose old cluster
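The four steps above can be sketched as a single deployer loop. A Python sketch; the injected callables stand in for the real Kubernetes API, health-check and proxy calls, and the rollback-on-timeout behaviour is an assumption:

```python
import time

def blue_green_deploy(create_rc, pod_ips, healthy, switch_backend, delete_rc,
                      old_rc, new_rc, timeout=300):
    """Blue/green rollout: start the new cluster, wait until every Pod
    reports healthy, flip the load balancer, then dispose of the old RC.

    The five callables are injected so the deployer can talk to the
    Kubernetes API, the Pods and the proxy however it likes."""
    create_rc(new_rc)
    deadline = time.time() + timeout
    while True:
        ips = pod_ips(new_rc)
        if ips and all(healthy(ip) for ip in ips):
            break
        if time.time() > deadline:
            delete_rc(new_rc)  # roll back: keep serving from the old RC
            raise RuntimeError("new cluster never became healthy")
        time.sleep(2)
    switch_backend(new_rc)     # traffic now hits the new Pods only
    delete_rc(old_rc)
```

Because the switch happens only after all health checks pass, clients never see a half-started app.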

How do we know a Pod is healthy?

Its RUNNING status is not sufficient…

Is the app fully started?

Introduce App level health checks
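An app-level health check can be as simple as an HTTP endpoint that only reports healthy once startup has finished and dependencies respond. A Python sketch; the /health route matches the slides, while the individual checks are placeholders:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def health_report(started, db_ok):
    """App-level health: RUNNING isn't enough; report whether the app
    has actually finished starting and can reach its dependencies."""
    healthy = started and db_ok
    return healthy, {"healthy": healthy, "started": started, "database": db_ok}

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path != "/health":
            self.send_error(404)
            return
        ok, body = health_report(started=True, db_ok=True)  # wire real checks here
        self.send_response(200 if ok else 503)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(json.dumps(body).encode())

# HTTPServer(("", 8080), HealthHandler).serve_forever()
```

The deployer only needs the status code: 200 means route traffic, anything else means keep waiting.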

[Slide diagram: a Deployer, started by the Deploy Server, issues GET /health requests to the Pods on each Node.]

Running a Deployer

Blue/Green deployment requires lots of coordination

Our build server can’t access the Pods

… how do we health check?

[Slide diagrams: the Build Server starts a deployment; the Deployer creates the Replication Controller through the Kubernetes API, polls GET /health on the Pods, and switches the load balancer backend. The Proxy Registrator watches the API and creates backends in etcd; confd watches etcd, reads the config and updates HAProxy.]

Deployment descriptor

{ "useHealthCheck": true, "newVersion": "${bamboo.deploy.version}", "appName": "todo", "replicas": 2, "frontend": "rti-todo.amdatu.com", “podspec": {

…. }

}

"podspec": { "containers": [{ "image": “amdatu/mycontainer", "name": "todo", "ports": [{ "containerPort": 8080 }], "env": [ { "name": "version", "value": "${bamboo.deploy.version}" } ]}] }

Deployment demo

Step 5 - Canary deployment

Canary deployments

Different strategy for the Deployer

Add Replication Controller

But don’t change the running cluster
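One way to add a canary RC without touching the running cluster is to derive it from the production manifest: keep the routing label, give it its own RC name and a version label. A hypothetical Python sketch; the label scheme is an assumption, and the production RC's selector must itself be pinned to its own version so it doesn't adopt the canary Pods:

```python
import copy

def canary_rc(prod_rc, canary_image, replicas=1):
    """Derive a canary RC from the production RC manifest: same 'name'
    label (so the load balancer keeps routing to both sets of Pods),
    but its own RC name, a 'version: canary' label and the new image."""
    rc = copy.deepcopy(prod_rc)          # never mutate the running spec
    name = rc["metadata"]["name"]
    rc["metadata"]["name"] = name + "-canary"
    rc["spec"]["replicas"] = replicas
    # Distinct selector so this RC only manages the canary Pods
    rc["spec"]["selector"]["version"] = "canary"
    rc["spec"]["template"]["metadata"]["labels"]["version"] = "canary"
    # Assumes a single-container Pod, as in the talk's examples
    rc["spec"]["template"]["spec"]["containers"][0]["image"] = canary_image
    return rc
```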

[Slide diagram: the main Replication Controller runs the prod Pods across the K8 Nodes, a separate Canary Replication Controller runs the canary Pod, and HAProxy balances traffic across both.]

Step 6 - Persistent data

How to deploy Mongo/MySQL/ElasticSearch in Kubernetes?

You don’t

Kubernetes is great for…

Stateless containers

Running lots of containers together

Moving containers around

Datastore scaling mechanics

Reactive scaling makes less sense

Cluster should be tuned

Scaling is expensive

Cluster topology

[Slide diagram: the K8 Master, HAProxy and the Deployer run on infra server(s); application Pods run on the K8 Nodes; the Mongo, ElasticSearch and other datastore clusters run outside Kubernetes.]

Step 7 - Logging

kubectl logs mypod?

Logging

Centralised application logging is key in a dynamic environment

Assume you can’t access a pod

ElasticSearch / LogStash / Kibana or Graylog are very useful for this
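Shipping logs as structured JSON makes them trivial for LogStash or Graylog to index without grok parsing. A Python sketch of such a formatter; the field names are an assumption:

```python
import json
import logging
import socket

class JsonFormatter(logging.Formatter):
    """Emit one JSON object per log line so the log pipeline can index
    host, level and logger as fields instead of parsing free text."""
    def format(self, record):
        return json.dumps({
            "timestamp": self.formatTime(record),
            "host": socket.gethostname(),   # which Pod/Node produced this
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
log = logging.getLogger("todo")
log.addHandler(handler)
log.setLevel(logging.INFO)
log.info("app started")
```

In a Java app the equivalent would be a JSON layout on the SLF4J/Logback appender; the principle is the same.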

[Slide diagram: Docker containers ship their logs to LogStash and on into ElasticSearch or Graylog for search and dashboards.]

Logging example

[Slide diagram: an OSGi app logs through SLF4J to the OSGi LogService; logs are shipped via Kafka to Graylog, and the developer views them on the Graylog Dashboard.]

Step 8 - Configuration

Passing config to containers

Approach 1 - Use environment variables

myconfig.cfg:

dbName=todo-app
host=${mongo}

Deployment descriptor:

"podspec": { "env": [{ "name": "mongo", "value": "10.100.2.4" },
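The ${mongo} substitution can be sketched in a few lines; this is a hypothetical helper, and real setups often use envsubst or the app's own config loader instead:

```python
import os
import re

def expand_env(text, env=os.environ):
    """Replace ${name} placeholders with environment variable values,
    e.g. host=${mongo} becomes host=10.100.2.4 when the Pod spec sets
    the 'mongo' env var. Unknown names are left untouched."""
    return re.sub(r"\$\{(\w+)\}",
                  lambda m: env.get(m.group(1), m.group(0)), text)

cfg = "dbName=todo-app\nhost=${mongo}\n"
print(expand_env(cfg, {"mongo": "10.100.2.4"}))
```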

Approach 2 - Use etcd

myconfig.cfg:

etcd=[etcdnode]:2379

etcd key /apps/config/demo-app:

[{ "name": "mongo", "value": "10.100.2.4" }]

What to learn from all this?

Docker and Kubernetes are awesome

They are building blocks, not solutions

Use the API!

And if you don’t want to do all this yourself….

Fully managed Kubernetes based clusters

Logging and Monitoring

Automated deployments

Thank you!

Blog: http://paulbakker.io | https://arjanschaaf.github.io

Twitter: @pbakker | @arjanschaaf

Mail: paul.bakker@luminis.eu | arjan.schaaf@luminis.eu
