Getting started with Kubernetes on AWS



Abby Fuller, Sr. Technical Evangelist, AWS | @abbyfuller

© 2017, Amazon Web Services, Inc. or its Affiliates. All rights reserved.

Kubernetes

• Container orchestration platform that manages containers across your infrastructure in logical groups
• Rich API to integrate 3rd parties
• Open source

What are orchestration tools and why should I care?

Containers are lots of work (and moving pieces)! Orchestration tools help you manage, scale, and deploy your containers.

What platform is right for me?

Bottom line: use the tool that’s right for you.

That means you should choose whatever makes the most sense for you and your architecture, that you’re comfortable with, and that you can scale, maintain, and manage.

Bottom line: we want to be the best place to run your containers, however you want to do it.

Getting started with Kubernetes

Initial setup

I’m using a CloudFormation stack provided by AWS and Heptio for my initial cluster setup. To see the stack in full, you can look here:

https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template

This will download the full template.
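If you want a quick sanity check before launching, the CLI can validate the template (a minimal sketch; this checks the template’s structure, not whether the stack will actually succeed):

$ aws cloudformation validate-template \
    --template-url https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template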

Choosing my parameters

The setup template takes a few parameters:

STACK=k8s-demo
TEMPLATEPATH=https://s3.amazonaws.com/quickstart-reference/heptio/latest/templates/kubernetes-cluster-with-new-vpc.template
AZ=us-east-1b
INGRESS=0.0.0.0/0
KEYNAME=demo

Running the stack

abbyfull$ aws cloudformation create-stack --stack-name $STACK \
    --template-url $TEMPLATEPATH \
    --capabilities CAPABILITY_NAMED_IAM \
    --parameters ParameterKey=AvailabilityZone,ParameterValue=$AZ \
        ParameterKey=AdminIngressLocation,ParameterValue=$INGRESS \
        ParameterKey=KeyName,ParameterValue=$KEYNAME

This should return the stack ARN:

{"StackId": "arn:aws:cloudformation:us-east-

1:<accountID>:stack/k8s-demo/a8ec95d0-c47e-11e7-b1fb-50a686e4bb1e"}

An ARN is an Amazon Resource Name: a unique identifier that can be used within AWS.

Checking the values for my cluster

To see more information about my cluster, I can look at the CloudFormation stack like this:

abbyfull$ aws cloudformation describe-stacks --stack-name $STACK

This will return the values the stack was created with, and some current information.

I have a cluster created. Now what?

You can ssh to your instance like this:

Run:

$ aws cloudformation describe-stacks --query 'Stacks[*].Outputs[?OutputKey == `GetKubeConfigCommand`].OutputValue' --output text --stack-name $STACK

And use this output to SSH:

SSH_KEY="demo.pem"; scp -i $SSH_KEY -o ProxyCommand="ssh -i \"${SSH_KEY}\" ubuntu@52.90.105.146 nc %h %p" ubuntu@10.0.31.91:~/kubeconfig ./kubeconfig
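Once the kubeconfig is local, point kubectl at it so the commands that follow talk to this cluster (a minimal sketch, assuming the file landed in the current directory):

$ export KUBECONFIG=$(pwd)/kubeconfig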

There are some tools available to help manage your K8s infrastructure. In this demo, we’re using kubectl: https://kubernetes.io/docs/user-guide/kubectl/

There are some other good options out there, like:

kubicorn: https://github.com/kris-nova/kubicorn
kubeadm: https://kubernetes.io/docs/setup/independent/install-kubeadm/

Or you can find a list of tools here: https://kubernetes.io/docs/tools/

Download and test kubectl

I installed kubectl with Homebrew:

$ brew install kubectl

Next, test it against your cluster:

$ kubectl get nodes

NAME                   STATUS   ROLES    AGE   VERSION
ip-blah.ec2.internal   Ready    <none>   3h    v1.8.2

I probably don’t want a cluster with just one node

This gets our cluster token:

$ aws cloudformation describe-stacks --stack-name $STACK | grep -A 2 -B 2 JoinNodes

This returns a token:

$ kubeadm join --token=<token>

Next, run join on the new node:

CLUSTERTOKEN=xxxxxx.xxxxxxxxxxxxxxxx
PRIVATEIP=10.0.0.0

$ kubeadm join --token=$CLUSTERTOKEN $PRIVATEIP

You don’t have to add your nodes manually, though. You can add capacity through the Auto Scaling group in AWS.
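For example, here’s a sketch of growing the node group from the CLI (the Auto Scaling group name is hypothetical; find yours with describe-auto-scaling-groups first):

$ aws autoscaling set-desired-capacity \
    --auto-scaling-group-name k8s-demo-NodeGroup \
    --desired-capacity 3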

How about some content?

We probably want to actually install things. You can run applications on Kubernetes clusters a couple of different ways: you can install with Helm (helm.sh), a package manager for Kubernetes, like this:

$ brew install kubernetes-helm
==> Downloading https://homebrew.bintray.com/bottles/kubernetes-helm-2.7.0.el_capitan.bottle.tar.gz
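Once Helm is installed, a typical Helm 2.x flow looks something like this (a sketch; the chart name is just an example):

$ helm init                  # installs Tiller, Helm 2.x's server-side component, into the cluster
$ helm install stable/mysql  # installs a chart from the stable repository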

Or, use a YAML file. Here’s a YAML file for an Nginx deployment:

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template: # create pods using pod definition in this template
    metadata:
      # unlike pod-nginx.yaml, the name is not included in the metadata as a unique name is
      # generated from the deployment name
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80

I can run my deployment like this:

$ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml

deployment "nginx-deployment" created

I can get more information by running:

$ kubectl describe deployment nginx-deployment

Check for running pods from my deployment

A pod is a group of containers (like an ECS service) with shared network/storage. I can check for pods related to a deployment like this:

$ kubectl get pods -l app=nginx

For my Nginx example, it returns this (names abbreviated):

NAME                   READY   STATUS    RESTARTS   AGE
nginx-deployment-568   1/1     Running   0          13m
nginx-deployment-569   1/1     Running   0          13m

Scaling up and down

Earlier, we covered how to scale our underlying infrastructure with nodes or autoscaling groups. We can also scale our deployments!

Remember our YAML file? I can update the value of replicas to scale my deployment up or down. Then, I just reapply the deployment.

replicas: 2

$ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
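Alternatively (not covered in the deck), kubectl can scale a deployment directly, without editing the YAML:

$ kubectl scale deployment nginx-deployment --replicas=4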

Updating my content

I can update my content the same way: by changing the YAML file, and re-running my apply command:

$ kubectl apply -f https://k8s.io/docs/tasks/run-application/deployment.yaml
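For a simple image bump, kubectl can also patch the deployment in place (a sketch; the new tag here is just an example):

$ kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1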

You’ll also need a Load Balancer

We can run the kubectl command for this:

$ kubectl expose --namespace=nginx deployment echoheaders --type=LoadBalancer --port=80 --target-port=8080 --name=echoheaders-public

Just like with non-containerized apps, Load Balancers help distribute traffic. In a containerized app, the Load Balancer distributes traffic across the pods backing a service.
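To find the address of the ELB that Kubernetes provisioned, you can inspect the service we just exposed (a sketch):

$ kubectl get service echoheaders-public --namespace=nginx

The EXTERNAL-IP column shows the ELB’s DNS name.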

High Availability in Kubernetes

• Generally in AWS, the best practice is to run highly available apps. This means that your app is designed to keep working in the event of an Availability Zone or Region failure. If one AZ went down, your application would still function.
• This is not quite the same in Kubernetes: rather than running one cluster that spans multiple AZs, you run one cluster per AZ.
• You can learn more about high availability in Kubernetes here.
• You can manage multiple clusters in Kubernetes with something called “federation”.

Kubernetes and the master node

• An important difference between Kubernetes and ECS is the master node: a Kubernetes cluster has a master node, which hosts the control plane. This is responsible for deployments, for updates, and for rescheduling work if a node is lost.
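One way to peek at control plane health from kubectl (a sketch; componentstatuses was available in the Kubernetes versions of this era):

$ kubectl get componentstatuses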

In case of master node emergency

• So what happens if the master node goes down?
• For AWS, you can use EC2 Auto Recovery.
• In a lot of cases, it’s not necessary to have a highly available master: as long as Auto Recovery can replace the node fast enough, the only impact on your cluster will be that you can’t deploy new versions or update the cluster until the master is back online.
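A sketch of wiring that up with a CloudWatch alarm (the instance ID is a placeholder for your master node):

$ aws cloudwatch put-metric-alarm \
    --alarm-name recover-k8s-master \
    --namespace AWS/EC2 \
    --metric-name StatusCheckFailed_System \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --statistic Minimum \
    --period 60 \
    --evaluation-periods 2 \
    --threshold 1 \
    --comparison-operator GreaterThanOrEqualToThreshold \
    --alarm-actions arn:aws:automate:us-east-1:ec2:recover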

Cluster setup with kops

Kubernetes setup with kops

In real life, it’s probably best to stick with tools. A popular one is kops, which is maintained by the Kubernetes community and used in production by companies like Ticketmaster.

Kops will help you out with things like service discovery, high availability, and provisioning.

Download kops

First, let’s download kops:

$ wget https://github.com/kubernetes/kops/releases/download/1.7.0/kops-darwin-amd64
$ chmod +x kops-darwin-amd64
$ mv kops-darwin-amd64 /usr/local/bin/kops

Some kops-specific setup

kops is built on DNS, so we need some specific setup in AWS before we get rolling:

First, you’ll need a hosted zone in Route 53. This is something like kops.abby.com.

You can do this with the CLI (assuming you own the domain!):

$ aws route53 create-hosted-zone --name kops.abby.com --caller-reference 1
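Before moving on, it’s worth verifying that the zone actually resolves (a sketch):

$ dig ns kops.abby.com

You should see the NS records that Route 53 assigned to the hosted zone.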

Next, I’ll need an S3 bucket to store cluster info. Create the bucket like this:

$ aws s3 mb s3://config.kops.abby.com

And then:

$ export KOPS_STATE_STORE=s3://config.kops.abby.com
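It’s also worth enabling versioning on the state bucket, so earlier cluster state can be recovered (a sketch):

$ aws s3api put-bucket-versioning \
    --bucket config.kops.abby.com \
    --versioning-configuration Status=Enabled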

Create a cluster configuration with kops

To create the config:

$ kops create cluster --zones=us-east-1c useast1.kops.abby.com

To create the cluster resources:

$ kops update cluster useast1.kops.abby.com --yes
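Once the resources have come up (this can take a few minutes), you can check that the cluster is healthy (a sketch):

$ kops validate cluster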

So let’s recap.

• VPC with nodes in private subnets (only the ELB in public)
• Limit ports, access, and security groups
• For production workloads, run multiple clusters in different AZs for fault tolerance and high availability
• Kubernetes clusters can involve a fair amount of setup and maintenance: highly recommend taking advantage of tools for both setup (CloudFormation or Terraform) and updates/deployments (like kubectl or kubicorn or kops)
• Kubernetes has a rich community. Take advantage of it!
