
You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

If you have comments about this documentation, submit your feedback to [email protected]

VMware, Inc. 3401 Hillview Ave. Palo Alto, CA 94304 www.vmware.com

Copyright © 2020 VMware, Inc. All rights reserved. Copyright and trademark information.

Table of Contents

1.1 Introduction
1.2 VMware Tanzu Kubernetes Grid 1.0 Documentation
1.3 Tanzu Kubernetes Grid Concepts
1.4 Installing Tanzu Kubernetes Grid
  1.4.1 Set Up the Bootstrap Environment for Tanzu Kubernetes Grid
  1.4.2 Prepare to Deploy the Management Cluster to vSphere
    1.4.2.1 Deploy Tanzu Kubernetes Grid to vSphere in an Air-Gapped Environment
    1.4.2.2 Deploy the Management Cluster to vSphere with the Installer Interface
    1.4.2.3 Deploy the Management Cluster to vSphere with the CLI
  1.4.3 Prepare to Deploy the Management Cluster to Amazon EC2
    1.4.3.1 Deploy the Management Cluster to Amazon EC2 with the Installer Interface
    1.4.3.2 Deploy the Management Cluster to Amazon EC2 with the CLI
  1.4.4 Examine the Management Cluster Deployment
    1.4.4.1 Create Namespaces in the Management Cluster
      1.4.4.1.1 Manage Multiple Management Clusters
1.5 Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle
  1.5.1 Create Tanzu Kubernetes Clusters
  1.5.2 Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster
  1.5.3 Create Tanzu Kubernetes Cluster Configuration Files
  1.5.4 Connect to and Examine Tanzu Kubernetes Clusters
  1.5.5 Scale Tanzu Kubernetes Clusters
  1.5.6 Delete Tanzu Kubernetes Clusters
1.6 Configuring and Managing the Tanzu Kubernetes Grid Instance
  1.6.1 Create Clusters with User Authentication
    1.6.1.1 Deploy Dex with LDAP to a Management Cluster Running on vSphere
    1.6.1.2 Deploy Dex with OIDC to a Management Cluster Running on vSphere
    1.6.1.3 Deploy Dex to a Management Cluster Running on Amazon EC2
    1.6.1.4 Deploy an OIDC-Enabled Cluster
    1.6.1.5 Enable Gangway on Clusters on vSphere
    1.6.1.6 Enable Gangway on Clusters on Amazon EC2
    1.6.1.7 Access Clusters with Your IDP Credentials
  1.6.2 Implementing Log Forwarding with Fluent Bit
    1.6.2.1 Create Namespace and RBAC Components
    1.6.2.2 Deploy Fluent Bit with an Elastic Search Output Plugin
    1.6.2.3 Deploy Fluent Bit with a Kafka Output Plugin
    1.6.2.4 Deploy Fluent Bit with a Splunk Output Plugin
    1.6.2.5 Deploy Fluent Bit with an HTTP Endpoint Output Plugin
  1.6.3 Implementing Ingress Control on Tanzu Kubernetes Clusters with Contour
    1.6.3.1 Deploy Contour on Tanzu Kubernetes Clusters Running on vSphere
    1.6.3.2 Deploy Contour on Tanzu Kubernetes Clusters Running on Amazon EC2
    1.6.3.3 View Data from Your Contour Deployment
  1.6.4 Delete Management Clusters
1.7 Troubleshooting Tanzu Kubernetes Grid
  1.7.1 Troubleshooting Tanzu Kubernetes Clusters with Crash Diagnostics
1.8 Tanzu Kubernetes Grid CLI Reference

VMware Tanzu Kubernetes Grid 1.0 Documentation

The documentation for the standalone, multi-cloud version of VMware Tanzu Kubernetes Grid provides information about how to install, configure, and use VMware Tanzu Kubernetes Grid.

Tanzu Kubernetes Grid Concepts introduces the key components of Tanzu Kubernetes Grid and describes how you use them and what they do.

Installing Tanzu Kubernetes Grid describes the prerequisites for installing Tanzu Kubernetes Grid on vSphere and on Amazon EC2, and how to deploy the Tanzu Kubernetes Grid management cluster to both vSphere and Amazon EC2.

Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle describes how to use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters from your management cluster, and how to manage the lifecycle of those clusters.

Configuring and Managing the Tanzu Kubernetes Grid Instance describes how to set up local shared services for your Tanzu Kubernetes clusters, such as authentication and authorization, logging, networking, and ingress control.

Troubleshooting Tanzu Kubernetes Grid includes tips to help you to troubleshoot common problems that you might encounter when installing Tanzu Kubernetes Grid and deploying Tanzu Kubernetes clusters.

Tanzu Kubernetes Grid CLI Reference lists all of the commands and options of the Tanzu Kubernetes Grid CLI, and provides links to the sections in which they are documented.


Tanzu Kubernetes Grid Concepts

This topic describes the key elements and concepts of a Tanzu Kubernetes Grid deployment.

Management Cluster

The management cluster is the first element that you deploy when you create a Tanzu Kubernetes Grid instance. The management cluster is a Kubernetes cluster that performs the role of the primary management and operational center for the Tanzu Kubernetes Grid instance. This is where Cluster API runs to create Tanzu Kubernetes clusters, and where you configure the shared and in-cluster services that the clusters use.

When you deploy a management cluster, networking with Calico is automatically enabled in the management cluster. The management cluster is purpose-built for operating the platform and managing the lifecycle of Tanzu Kubernetes clusters. As such, the management cluster should not be used as a general-purpose compute environment for end-user workloads.

Tanzu Kubernetes Clusters

Tanzu Kubernetes clusters are the clusters that you deploy from the management cluster by using the Tanzu Kubernetes Grid CLI. Tanzu Kubernetes clusters can run different versions of Kubernetes, depending on the needs of the applications they run. You can manage the entire lifecycle of Tanzu Kubernetes clusters by using the Tanzu Kubernetes Grid CLI. Tanzu Kubernetes clusters implement Calico for pod-to-pod networking by default.

Tanzu Kubernetes Cluster Plans

A cluster plan is the blueprint that describes the configuration with which to deploy a Tanzu Kubernetes cluster. It provides a set of configurable values that describe settings like the number of control plane machines, worker machines, VM types, and so on.

This release of Tanzu Kubernetes Grid provides two default plans, dev and prod. If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to develop your own custom plans by following the Cluster API provider specs.
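For example, you select one of these plans with the --plan option when you later create clusters from the management cluster. A minimal illustration, with a hypothetical cluster name:

tkg create cluster my-cluster --plan=dev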

Shared and In-Cluster Services

Shared and in-cluster services are services that run in the Tanzu Kubernetes Grid instance to provide authentication and authorization of Tanzu Kubernetes clusters, logging, and ingress control.

Tanzu Kubernetes Grid Instance


A Tanzu Kubernetes Grid instance is a full deployment of Tanzu Kubernetes Grid, including the management cluster, the deployed Tanzu Kubernetes clusters, and the shared and in-cluster services that you configure. You can operate many instances of Tanzu Kubernetes Grid, for different environments, such as production, staging, and test; for different IaaS providers, such as vSphere and Amazon EC2; and for different failure domains, for example Datacenter-1, AWS us-east-2, or AWS us-west-2.

Bootstrap Environment

The bootstrap environment is the laptop, host, or server on which you download and run the Tanzu Kubernetes Grid CLI. This is where the initial bootstrapping of a management cluster occurs, before it is pushed to the platform where it will run.

Tanzu Kubernetes Grid Installer

The Tanzu Kubernetes Grid installer is a graphical wizard that you start up by running the tkg init --ui command. The installer wizard runs locally on the bootstrap environment machine, and provides a user interface to guide you through the process of deploying a management cluster.


Installing Tanzu Kubernetes Grid

To install Tanzu Kubernetes Grid, you download the Tanzu Kubernetes Grid installer bundle to your local machine. You then use the Tanzu Kubernetes Grid command line interface (CLI) to deploy the Tanzu Kubernetes Grid management cluster and Tanzu Kubernetes clusters, and use the tools that Tanzu Kubernetes Grid provides to configure the Tanzu Kubernetes Grid instance.

Deploying the Tanzu Kubernetes Grid Management Cluster
Deploying Tanzu Kubernetes Clusters
Configuring the Tanzu Kubernetes Grid Instance

Deploying the Tanzu Kubernetes Grid Management Cluster

You can deploy the Tanzu Kubernetes Grid management cluster in two ways:

By starting a local instance of the Tanzu Kubernetes Grid installer interface, which provides a graphical installer to guide you through the deployment process. This is the recommended method.
By using CLI commands to deploy the management cluster from a configuration that you provide in a YAML template file.

The Tanzu Kubernetes Grid CLI allows you to provision and manage the Tanzu Kubernetes Grid management cluster on the following platforms:

vSphere 6.7u3
vSphere 7.0 (see below)
Amazon Elastic Compute Cloud (Amazon EC2)

You can provision the management cluster both in a single-node configuration for development and in a highly available, multi-node configuration for production environments.

For information about where to download Tanzu Kubernetes Grid and the initial steps to perform, see Set Up the Bootstrap Environment for Tanzu Kubernetes Grid.

After you have performed the general setup of your bootstrap environment, there are further steps to perform depending on whether you are deploying to vSphere or to Amazon EC2.

For information about deploying the management cluster to vSphere, see Prepare to Deploy the Management Cluster to vSphere.
For information about deploying the management cluster to Amazon EC2, see Prepare to Deploy the Management Cluster to Amazon EC2.
After you have deployed management clusters, see Examine the Management Cluster Deployment and Manage Multiple Management Clusters.

Deploying Management Clusters on vSphere 7.0


If you have vSphere 7.0 and the vSphere with Kubernetes feature is enabled, the built-in Supervisor Cluster that vSphere with Kubernetes provides performs the same role as the Tanzu Kubernetes Grid management cluster. You do not need to deploy a management cluster in this case, and the Tanzu Kubernetes Grid installer prevents you from doing so. You can use the Tanzu Kubernetes Grid CLI to connect to the Supervisor Cluster and deploy and manage Tanzu Kubernetes clusters in vSphere 7.0. For more information, see Deploying Tanzu Kubernetes Clusters on vSphere 7.0 below.

If the vSphere with Kubernetes feature is not enabled, deploying a Tanzu Kubernetes Grid management cluster to vSphere 7.0 is possible, but it is not supported. For the best experience of Kubernetes on vSphere 7.0, you should enable the vSphere with Kubernetes feature and use the built-in Supervisor Cluster, rather than a Tanzu Kubernetes Grid management cluster. For information about the vSphere with Kubernetes feature in vSphere 7.0, see vSphere with Kubernetes Configuration and Management in the vSphere 7.0 documentation.

Deploying Tanzu Kubernetes Clusters

After you have deployed and configured the Tanzu Kubernetes Grid management cluster, you use the Tanzu Kubernetes Grid CLI to deploy CNCF-conformant Kubernetes clusters and manage their lifecycle.

You can use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters to the following platforms:

vSphere 6.7u3
vSphere 7.0 (see below)
Amazon EC2

For information about how to deploy Tanzu Kubernetes clusters to your chosen platform, see Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.

Deploying Tanzu Kubernetes Clusters on vSphere 7.0

You do not need to deploy Tanzu Kubernetes Grid management clusters to vSphere 7.0 when the vSphere with Kubernetes feature is enabled, because you can connect the Tanzu Kubernetes Grid CLI to a vSphere with Kubernetes Supervisor Cluster. You can then use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters to vSphere with Kubernetes. For information about how to deploy Tanzu Kubernetes clusters to vSphere 7.0, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster.

If you have vSphere 7.0 and the vSphere with Kubernetes feature is not enabled, it is possible to deploy a management cluster to vSphere 7.0 and use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters, in the same way as for vSphere 6.7u3. However, this configuration is not supported.

Configuring the Tanzu Kubernetes Grid Instance

Tanzu Kubernetes Grid provides container images and deployment manifests of additional open source tools that you can use to configure the Tanzu Kubernetes Grid instance in which your Tanzu Kubernetes clusters run. For information about how to set up these tools, see Configuring and Managing the Tanzu Kubernetes Grid Instance.


Set Up the Bootstrap Environment for Tanzu Kubernetes Grid

The Tanzu Kubernetes Grid bundle includes the Tanzu Kubernetes Grid CLI binaries and cluster plans from which Tanzu Kubernetes Grid deploys clusters. Tanzu Kubernetes Grid also provides the base OS images from which node VMs are created, which include the supported versions of Kubernetes and Cluster API.

To use Tanzu Kubernetes Grid, you download and run the Tanzu Kubernetes Grid CLI on a local system, known as the bootstrap environment. The bootstrap environment is the laptop, host, or server on which the initial bootstrapping of a management cluster is performed. This is where you run Tanzu Kubernetes Grid CLI commands. Tanzu Kubernetes Grid creates a temporary management cluster using a Kubernetes in Docker (kind) cluster on the bootstrap environment. After creating the temporary management cluster locally, Tanzu Kubernetes Grid uses it to provision the final management cluster on the platform of your choice.
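While tkg init runs, you can observe this temporary cluster directly on the bootstrap machine. For example, if you happen to have the kind CLI installed, the following commands list the temporary cluster and the Docker containers that back it; the exact names vary by deployment and no interaction with the temporary cluster is required:

kind get clusters
docker ps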

Prerequisites
Download and Unpack the Tanzu Kubernetes Grid Bundle
Install the Tanzu Kubernetes Grid CLI Binary
CLI Short Names and Aliases
Common Tanzu Kubernetes Grid Options
What to Do Next

Prerequisites

Tanzu Kubernetes Grid provides CLI binaries for Linux and Mac OS systems.

The bootstrap environment on which you run the Tanzu Kubernetes Grid CLI must meet the following requirements:

If you intend to use the Tanzu Kubernetes Grid installer interface, a browser is available. You can use the Tanzu Kubernetes Grid CLI without a browser, but for first deployments it is strongly recommended to use the installer interface.
kubectl is installed.
Docker is installed and running, if you are installing Tanzu Kubernetes Grid on Linux platforms.
Docker Desktop is installed and running, if you are installing Tanzu Kubernetes Grid on Mac OS platforms.
System time is synchronized with a Network Time Protocol (NTP) server.

If you are running Docker Desktop on Mac OS, the kind container requires at least 6 GB of RAM. For information about how to configure Docker Desktop so that it can run kind, see Settings for Docker Desktop in the kind documentation.

NOTE: If you have previously used Cluster API on the machine that you are using as your bootstrap environment, you must delete the ~/.cluster-api folder from that machine. This folder contains configuration files that might interfere with the correct interoperation of Cluster API and Tanzu Kubernetes Grid.
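For example, on Linux or Mac OS:

rm -rf ~/.cluster-api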

Download and Unpack the Tanzu Kubernetes Grid Bundle

1. Go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.

2. Download the Tanzu Kubernetes Grid CLI to the machine to use as the bootstrap environment.

For Linux platforms, download tkg-linux-amd64-v1.0.0_vmware.1.gz.
For Mac OS platforms, download tkg-darwin-amd64-v1.0.0_vmware.1.gz.

3. Use either the gunzip command or the extraction tool of your choice to unpack the binaries. Run either one of the following commands:

gunzip tkg-linux-amd64-v1.0.0_vmware.1.gz

gunzip tkg-darwin-amd64-v1.0.0_vmware.1.gz

The unpacked files are tkg-linux-amd64 or tkg-darwin-amd64.

Install the Tanzu Kubernetes Grid CLI Binary

After you have downloaded and unpacked the Tanzu Kubernetes Grid CLI binary to your bootstrap environment, you must make it available to the system.

1. Navigate to the executable for the Tanzu Kubernetes Grid CLI.

cd download_location/tkg-linux-amd64-v1.0.0_vmware.1

cd download_location/tkg-darwin-amd64-v1.0.0_vmware.1

2. Rename the CLI binary for your platform to tkg, make sure that it is executable, and add it to your PATH.

i. Move the binary into the /usr/local/bin folder and rename it to tkg.

mv ./tkg-linux-amd64 /usr/local/bin/tkg

mv ./tkg-darwin-amd64 /usr/local/bin/tkg

ii. Make the file executable.

chmod +x /usr/local/bin/tkg

3. At the command line in a new terminal, run tkg to see help for the full list of commands and options that the Tanzu Kubernetes Grid CLI provides.

4. Run tkg version to check that the correct version of the binary is properly installed.

You should see information about the installed Tanzu Kubernetes Grid CLI version.

13

Page 14: You can find the most up-to-date technical documentation ...€¦ · Tanzu Kubernetes clusters implement Calico for pod-to-pod networking by default. Tanzu Kubernetes Cluster Plans

Client:
  Version: v1.0.0
  Git commit: 60f6fd5f40101d6b78e95a33334498ecca86176e

5. Run tkg --help to see the list of commands that the Tanzu Kubernetes Grid CLI provides.

You can run any command with the --help option to see information about that specific command or sub-command. For example, tkg init --help or tkg create cluster --help.

If you are running on Mac OS, you might encounter the following error:

"tkg" cannot be opened because the developer cannot be verified.

If this happens, you need to create a security exception for the tkg executable. Locate the tkg app in Finder, control-click the app, and select Open.

CLI Short Names and Aliases

Most of the Tanzu Kubernetes Grid CLI commands and options have short names or aliases, so that you do not have to type the full command and option names each time you run tkg. For example, -h for --help, and mc for management-cluster. For increased clarity, this documentation always uses the full command and option names. To see the short names and aliases for commands and options, run CLI commands with the --help option.

Common Tanzu Kubernetes Grid Options

The Tanzu Kubernetes Grid CLI provides common options that can be used with all of the CLI commands.

--config
The path to the management cluster configuration file, if it is not stored in the default location, $HOME/.tkg/config.yaml. For example, tkg init --ui --config=/path/my-config.yaml.

--help
Show help for the current command. For example, tkg create cluster --help.

--kubeconfig
The path to the kubeconfig file for the management cluster, if it is not stored in the default location.

--log_file
Specify a file in which to save the logs for the current command. For example, tkg scale cluster my-cluster --worker-machine-count=9 --log_file=my-cluster-scale-logs.

--quiet
Mute all output for the current command.

--v
Set the logging verbosity level for the command.
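As an illustration, these common options combine freely with any command. The following hypothetical invocation deploys a management cluster from a non-default configuration file while writing logs to a file; both file paths are examples only:

tkg init --infrastructure=vsphere --config=/path/my-config.yaml --log_file=tkg-init.log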

What to Do Next

The Tanzu Kubernetes Grid CLI is ready to use. You can use the Tanzu Kubernetes Grid installer interface or CLI to deploy a management cluster to either vSphere or Amazon EC2.

Prepare to Deploy the Management Cluster to vSphere


Prepare to Deploy the Management Cluster to Amazon EC2

If you have vSphere 7.0 and the vSphere with Kubernetes feature is enabled, you can directly use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters to vSphere with Kubernetes. For information about how to connect the Tanzu Kubernetes Grid CLI to a vSphere with Kubernetes Supervisor Cluster, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster.


Prepare to Deploy the Management Cluster to vSphere

Before you can use the Tanzu Kubernetes Grid CLI or installer interface to deploy the management cluster, you must prepare your vSphere environment. You must make sure that vSphere meets the general requirements and import the base OS templates from which Tanzu Kubernetes Grid creates node VMs.

General Requirements
Create an SSH Key Pair
Import the Base OS Image Template into vSphere
Import the API Server Load Balancer into vSphere
What to Do Next

General Requirements

Perform the steps described in Set Up the Bootstrap Environment for Tanzu Kubernetes Grid.
You have a vSphere 6.7u3 instance with an Enterprise Plus license.

NOTE: Deployment to vSphere 7.0 instances on which the vSphere with Kubernetes feature is not enabled is possible but is not supported.

Your vSphere instance has the following objects in place:
A vSphere cluster with at least two hosts, on which vSphere DRS is enabled
A resource pool in which to deploy the Tanzu Kubernetes Grid instance
A VM folder in which to collect the Tanzu Kubernetes Grid VMs
A datastore with sufficient capacity for the control plane and worker node VM files
A network with DHCP to connect the VMs
The Network Time Protocol (NTP) service is running on all hosts

NOTE: If you intend to deploy multiple Tanzu Kubernetes Grid instances to this vSphere instance, you must create a dedicated resource pool, VM folder, and network for each instance that you deploy.

Create an SSH Key Pair

In order for Tanzu Kubernetes Grid VMs to run tasks in vSphere, you must provide the public key part of an SSH key pair to Tanzu Kubernetes Grid when you deploy the management cluster. You can use a tool such as ssh-keygen to generate a key pair.

1. On the machine on which you will run the Tanzu Kubernetes Grid CLI, run the following ssh-keygen command.

ssh-keygen -t rsa -b 4096 -C "[email protected]"

2. At the prompt Enter file in which to save the key (/root/.ssh/id_rsa): press Enter to accept the default.

3. Enter and repeat a password for the key pair.


4. Add the private key to the SSH agent running on your machine, and enter the password you created in the previous step.

ssh-add ~/.ssh/id_rsa

5. Open the file .ssh/id_rsa.pub in a text editor so that you can easily copy and paste it when you deploy the management cluster.
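Alternatively, you can copy the public key straight to the clipboard instead of opening an editor. For example, on Mac OS (xclip is one equivalent on Linux):

pbcopy < ~/.ssh/id_rsa.pub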

Import the Base OS Image Template into vSphere

Before you can deploy a Tanzu Kubernetes Grid management cluster or Tanzu Kubernetes clusters to vSphere, you must provide a base OS image template to vSphere. Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from this template. Tanzu Kubernetes Grid provides a base OS image template in OVA format for you to import into vSphere. After importing the OVA, you must convert the resulting VM into a VM template.

1. Go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.

2. Download both of the Tanzu Kubernetes Grid OVA files:

OVA for node VMs: photon-3-v1.17.3_vmware.2.ova
OVA for load balancer VMs: photon-3-capv-haproxy-v0.6.3_vmware.1.ova

3. In the vSphere Client, right-click an object in the vCenter Server inventory and select Deploy OVF template.

4. Select Local file, click the button to upload files, and navigate to the photon-3-v1.17.3_vmware.2.ova file on your local machine.

5. Follow the installer prompts to deploy a VM from the OVA template.

Accept or modify the appliance name
Select the destination datacenter or folder
Select the destination host, cluster, or resource pool
Accept the end user license agreements (EULA)
Select the disk format and destination datastore
Select the network for the VM to connect to

6. Click Finish to deploy the VM.

7. Right-click the VM and select Template > Convert to Template.

If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to build custom images with different operating systems.

Import the API Server Load Balancer into vSphere

You must also provide an API server load balancer to vSphere as a VM template. The API server load balancer is provided as an OVA file, photon-3-capv-haproxy-v0.6.3_vmware.1.ova.

The procedure to upload the API server load balancer OVA to vSphere is identical to that for base OS image OVA files. Import the photon-3-capv-haproxy-v0.6.3_vmware.1.ova file into vSphere, and convert the resulting VM to a VM template.


What to Do Next

Your environment is now ready for you to deploy the Tanzu Kubernetes Grid management cluster to vSphere.

Deploy the Management Cluster to vSphere with the Installer Interface. This is the preferred option for first deployments.
Deploy the Management Cluster to vSphere with the CLI. This is the more complicated method, which allows greater flexibility of configuration.

If you are installing Tanzu Kubernetes Grid in an internet-restricted environment, see Deploy Tanzu Kubernetes Grid to vSphere in an Air-Gapped Environment for the additional steps to perform.


Deploy Tanzu Kubernetes Grid to vSphere in an Air-Gapped Environment

This topic describes how to install Tanzu Kubernetes Grid in air-gapped environments, namely environments that are not connected to the Internet. The procedures described here only apply to deployments to vSphere.

If you are installing Tanzu Kubernetes Grid in a connected environment that can pull images over an external internet connection, you do not need to perform this procedure.

Prerequisites
Procedure
What to Do Next

Prerequisites

To deploy Tanzu Kubernetes Grid in an air-gapped environment, you require the following.

Within your firewall, install and configure a private Docker registry. For example, install Harbor, which is the registry against which this procedure has been tested. For information about how to install Harbor, see Harbor Installation and Configuration.
A valid SSL certificate for the Docker registry. For information about how to obtain the Harbor registry certificate, see the Harbor documentation. Alternatively, you can obtain a DNS wildcard SSL certificate by using a service such as Let's Encrypt.
A system with an external internet connection to perform the initial downloads and the mirroring of the required images.
The internet-connected machine must have Docker installed and running.
You can connect to the private registry from the internet-connected machine. A quick connectivity check is shown below.
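To confirm the last prerequisite before going offline, one option is to query the registry's Docker Registry v2 API endpoint from the internet-connected machine; the registry address here is hypothetical:

curl -k https://my.harbor.example.com/v2/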

Procedure

1. On a machine with an internet connection, follow the instructions in Set Up the Bootstrap Environment for Tanzu Kubernetes Grid to download, unpack, and install the Tanzu Kubernetes Grid CLI binary on your internet-connected system.

2. Follow the instructions in Prepare to Deploy the Management Cluster to vSphere to create SSH keys and to import into vSphere the OVAs from which node and load balancer VMs are created.

3. Pull the following images into your local Docker image store.

Copy and run the following command without changing it.

xargs -n1 docker pull << 'EOF'
registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2
registry.tkg.vmware.run/calico-all/cni-plugin:v3.11.2_vmware.1
registry.tkg.vmware.run/calico-all/kube-controllers:v3.11.2_vmware.1
registry.tkg.vmware.run/calico-all/node:v3.11.2_vmware.1
registry.tkg.vmware.run/calico-all/pod2daemon:v3.11.2_vmware.1
registry.tkg.vmware.run/ccm/manager:v1.1.0_vmware.2
registry.tkg.vmware.run/cluster-api/cluster-api-aws-controller:v0.5.2_vmware.1
registry.tkg.vmware.run/cluster-api/cluster-api-controller:v0.3.3_vmware.1
registry.tkg.vmware.run/cluster-api/cluster-api-vsphere-controller:v0.6.3_vmware.1
registry.tkg.vmware.run/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2
registry.tkg.vmware.run/cluster-api/kubeadm-bootstrap-controller:v0.3.3_vmware.1
registry.tkg.vmware.run/cluster-api/kubeadm-control-plane-controller:v0.3.3_vmware.1
registry.tkg.vmware.run/csi/csi-attacher:v1.1.1_vmware.7
registry.tkg.vmware.run/csi/csi-livenessprobe:v1.1.0_vmware.7
registry.tkg.vmware.run/csi/csi-node-driver-registrar:v1.1.0_vmware.7
registry.tkg.vmware.run/csi/csi-provisioner:v1.4.0_vmware.2
registry.tkg.vmware.run/csi/volume-metadata-syncer:v1.0.2_vmware.1
registry.tkg.vmware.run/csi/vsphere-block-csi-driver:v1.0.2_vmware.1
registry.tkg.vmware.run/cert-manager/cert-manager-controller:v0.11.0_vmware.1
registry.tkg.vmware.run/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1
registry.tkg.vmware.run/cert-manager/cert-manager-webhook:v0.11.0_vmware.1
EOF

4. Set the IP address or FQDN of your local registry as an environment variable.

For example, replace <local-registry-address> with my.harbor.example.com.

LOCAL_REGISTRY=<local-registry-address>

5. Log in to your local private registry.

docker login ${LOCAL_REGISTRY}

6. Tag all of the images in your image store so that you can push them to the local registry.

Copy and run the following command without changing it.

xargs -n2 docker tag << EOF
registry.tkg.vmware.run/kind/node:v1.17.3_vmware.2 ${LOCAL_REGISTRY}/kind/node:v1.17.3_vmware.2
registry.tkg.vmware.run/calico-all/cni-plugin:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/cni-plugin:v3.11.2_vmware.1
registry.tkg.vmware.run/calico-all/kube-controllers:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/kube-controllers:v3.11.2_vmware.1
registry.tkg.vmware.run/calico-all/node:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/node:v3.11.2_vmware.1
registry.tkg.vmware.run/calico-all/pod2daemon:v3.11.2_vmware.1 ${LOCAL_REGISTRY}/calico-all/pod2daemon:v3.11.2_vmware.1
registry.tkg.vmware.run/ccm/manager:v1.1.0_vmware.2 ${LOCAL_REGISTRY}/ccm/manager:v1.1.0_vmware.2
registry.tkg.vmware.run/cluster-api/cluster-api-aws-controller:v0.5.2_vmware.1 ${LOCAL_REGISTRY}/cluster-api/cluster-api-aws-controller:v0.5.2_vmware.1
registry.tkg.vmware.run/cluster-api/cluster-api-controller:v0.3.3_vmware.1 ${LOCAL_REGISTRY}/cluster-api/cluster-api-controller:v0.3.3_vmware.1
registry.tkg.vmware.run/cluster-api/cluster-api-vsphere-controller:v0.6.3_vmware.1 ${LOCAL_REGISTRY}/cluster-api/cluster-api-vsphere-controller:v0.6.3_vmware.1
registry.tkg.vmware.run/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2 ${LOCAL_REGISTRY}/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2
registry.tkg.vmware.run/cluster-api/kubeadm-bootstrap-controller:v0.3.3_vmware.1 ${LOCAL_REGISTRY}/cluster-api/kubeadm-bootstrap-controller:v0.3.3_vmware.1
registry.tkg.vmware.run/cluster-api/kubeadm-control-plane-controller:v0.3.3_vmware.1 ${LOCAL_REGISTRY}/cluster-api/kubeadm-control-plane-controller:v0.3.3_vmware.1
registry.tkg.vmware.run/csi/csi-attacher:v1.1.1_vmware.7 ${LOCAL_REGISTRY}/csi/csi-attacher:v1.1.1_vmware.7
registry.tkg.vmware.run/csi/csi-livenessprobe:v1.1.0_vmware.7 ${LOCAL_REGISTRY}/csi/csi-livenessprobe:v1.1.0_vmware.7
registry.tkg.vmware.run/csi/csi-node-driver-registrar:v1.1.0_vmware.7 ${LOCAL_REGISTRY}/csi/csi-node-driver-registrar:v1.1.0_vmware.7
registry.tkg.vmware.run/csi/csi-provisioner:v1.4.0_vmware.2 ${LOCAL_REGISTRY}/csi/csi-provisioner:v1.4.0_vmware.2
registry.tkg.vmware.run/csi/volume-metadata-syncer:v1.0.2_vmware.1 ${LOCAL_REGISTRY}/csi/volume-metadata-syncer:v1.0.2_vmware.1
registry.tkg.vmware.run/csi/vsphere-block-csi-driver:v1.0.2_vmware.1 ${LOCAL_REGISTRY}/csi/vsphere-block-csi-driver:v1.0.2_vmware.1
registry.tkg.vmware.run/cert-manager/cert-manager-controller:v0.11.0_vmware.1 ${LOCAL_REGISTRY}/cert-manager/cert-manager-controller:v0.11.0_vmware.1
registry.tkg.vmware.run/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1 ${LOCAL_REGISTRY}/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1
registry.tkg.vmware.run/cert-manager/cert-manager-webhook:v0.11.0_vmware.1 ${LOCAL_REGISTRY}/cert-manager/cert-manager-webhook:v0.11.0_vmware.1
EOF

7. Push all of the images from your image store into the local registry.

Copy and run the following command without changing it.

xargs -n1 docker push << EOF
${LOCAL_REGISTRY}/kind/node:v1.17.3_vmware.2
${LOCAL_REGISTRY}/calico-all/cni-plugin:v3.11.2_vmware.1
${LOCAL_REGISTRY}/calico-all/kube-controllers:v3.11.2_vmware.1
${LOCAL_REGISTRY}/calico-all/node:v3.11.2_vmware.1
${LOCAL_REGISTRY}/calico-all/pod2daemon:v3.11.2_vmware.1
${LOCAL_REGISTRY}/ccm/manager:v1.1.0_vmware.2
${LOCAL_REGISTRY}/cluster-api/cluster-api-aws-controller:v0.5.2_vmware.1
${LOCAL_REGISTRY}/cluster-api/cluster-api-controller:v0.3.3_vmware.1
${LOCAL_REGISTRY}/cluster-api/cluster-api-vsphere-controller:v0.6.3_vmware.1
${LOCAL_REGISTRY}/cluster-api/kube-rbac-proxy:v0.4.1_vmware.2
${LOCAL_REGISTRY}/cluster-api/kubeadm-bootstrap-controller:v0.3.3_vmware.1
${LOCAL_REGISTRY}/cluster-api/kubeadm-control-plane-controller:v0.3.3_vmware.1
${LOCAL_REGISTRY}/csi/csi-attacher:v1.1.1_vmware.7
${LOCAL_REGISTRY}/csi/csi-livenessprobe:v1.1.0_vmware.7
${LOCAL_REGISTRY}/csi/csi-node-driver-registrar:v1.1.0_vmware.7
${LOCAL_REGISTRY}/csi/csi-provisioner:v1.4.0_vmware.2
${LOCAL_REGISTRY}/csi/volume-metadata-syncer:v1.0.2_vmware.1
${LOCAL_REGISTRY}/csi/vsphere-block-csi-driver:v1.0.2_vmware.1
${LOCAL_REGISTRY}/cert-manager/cert-manager-controller:v0.11.0_vmware.1
${LOCAL_REGISTRY}/cert-manager/cert-manager-cainjector:v0.11.0_vmware.1
${LOCAL_REGISTRY}/cert-manager/cert-manager-webhook:v0.11.0_vmware.1
EOF

8. Run tkg get management-cluster to populate your local ~/.tkg folder.

9. Use the search and replace utility of your choice to replace registry.tkg.vmware.run with <local-registry-address> recursively throughout the ~/.tkg folder.

Make sure that the search and replace operation includes the ~/.tkg/providers folder and the ~/.tkg/config.yaml file.
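A minimal sketch of step 9 using grep and GNU sed (on Mac OS the in-place flag differs: sed -i ''), with LOCAL_REGISTRY set as in step 4:

grep -rl 'registry.tkg.vmware.run' ~/.tkg | xargs sed -i "s|registry.tkg.vmware.run|${LOCAL_REGISTRY}|g"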

10. Turn off your internet connection.

11. Run any Tanzu Kubernetes Grid CLI command, for example tkg init --ui.

The Tanzu Kubernetes Grid installer interface should open.


What to Do Next

Your air-gapped environment is now ready for you to deploy Tanzu Kubernetes Grid management clusters to vSphere.

Deploy the Management Cluster to vSphere with the Installer Interface. This is the preferred option for first deployments.
Deploy the Management Cluster to vSphere with the CLI. This is the more complicated method, which allows greater flexibility of configuration.


Deploy the Management Cluster to vSphere with the Installer Interface

This topic describes how to use the Tanzu Kubernetes Grid installer interface to deploy a management cluster to a vSphere instance. The Tanzu Kubernetes Grid installer interface guides you through the deployment of the management cluster, and provides different configurations for you to select or reconfigure.

Prerequisites
Procedure
What to Do Next

Prerequisites

Make sure that you have met all of the requirements listed in Set Up the Bootstrap Environment for Tanzu Kubernetes Grid and Prepare to Deploy the Management Cluster to vSphere.

Procedure

IMPORTANT:

Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time.
Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

1. On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the tkg init command with the --ui option.

tkg init --ui

By default, Tanzu Kubernetes Grid creates a folder called $HOME/.tkg and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. If you specify the --config option, Tanzu Kubernetes Grid only creates the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.

tkg init --ui --config=/path/my-config.yaml

When you run the tkg init --ui command, it validates that your system meets the prerequisites, then opens http://127.0.0.1:8080 in your default browser to display the Tanzu Kubernetes Grid installer interface.


2. Click the Deploy On vSphere button.

3. In the IaaS Provider section, enter the IP address or FQDN for the vCenter Server instance on which to deploy the management cluster.

4. Enter the vCenter Single Sign On username and password for a user account that has vSphere administrator permissions, and click Connect.

NOTES:

If you connect to a vSphere 7.0 instance and the vSphere with Kubernetes feature is enabled, the installer informs you that deploying a Tanzu Kubernetes Grid management cluster is not possible and exits.
If you connect to a vSphere 7.0 instance and the vSphere with Kubernetes feature is not enabled, the installer informs you that deploying a Tanzu Kubernetes Grid management cluster is possible but not recommended. You can either exit the installer and enable the vSphere with Kubernetes feature, or you can choose to continue with this unsupported installation configuration.


5. Select the datacenter in which to deploy the management cluster from the Datacenter drop-down menu.

6. Paste the contents of your SSH public key into the text box and click Next.

7. In the Control Plane Settings section, select the Development or Production tile.

If you select Development, the installer deploys a management cluster with a single control plane node.
If you select Production, the installer deploys a highly available management cluster with three control plane nodes.

8. In either of the Development or Production tiles, use the Instance type drop-down menu to select from different combinations of CPU, RAM, and storage for the control plane node VM or VMs.

Choose the configuration for the control plane node VMs depending on the expected CPU, memory, and storage consumption of the workloads that it will run. For example, some workloads might require a large compute capacity but relatively little storage, while others might require a large amount of storage and less compute capacity. The instance type that you select applies to the management cluster itself and to the Tanzu Kubernetes clusters that you deploy from it.


9. Use the API Server Load Balancer drop-down menu to select the VM template for the API Server Load Balancer.

The drop-down menu includes VM templates that are present in your vSphere instance that meet the criteria for use as API Server Load Balancer VMs. If you have not already imported a suitable VM template to vSphere, you can do so now without quitting the installer, and then use the Refresh button to make it available in the drop-down menu.


10. Optionally enter a name for your management cluster and click Next.

If you do not specify a name, Tanzu Kubernetes Grid automatically generates a unique name.

11. In the Resources section, select vSphere resources for the management cluster to use, and click Next.

Select the resource pool in which to place the management cluster.
Select the VM folder in which to place the management cluster VMs.
Select a vSphere datastore for the management cluster to use.

If appropriate resources do not already exist in vSphere, without quitting the Tanzu Kubernetes Grid installer, go to vSphere to create them. Then click the refresh button so that the new resources appear in the drop-down menus.

12. In the Kubernetes Network section, configure the networking for Kubernetes services, and click Next.

Network Name: Select a vSphere network to use as the Kubernetes service network.
Cluster Service CIDR: If the recommended CIDR range of 100.64.0.0/13 is unavailable, enter a different CIDR range to use for the Kubernetes services.
Cluster Pod CIDR: If the recommended CIDR range of 100.96.0.0/11 is unavailable, enter a different CIDR range to use for pods.

13. In the OS Image section, use the drop-down menu to select the OS image template to use for deploying Tanzu Kubernetes Grid VMs, and click Next.

The drop-down menu includes all of the OS image templates that are present in your vSphere instance that meet the criteria for use as Tanzu Kubernetes Grid base OS images. The OS image template must include the correct version of Kubernetes for this release of Tanzu Kubernetes Grid. If you have not already imported a suitable OS image template to vSphere, you can do so now without quitting the Tanzu Kubernetes Grid installer. After you import it, use the Refresh button to make it available in the drop-down menu.


14. Click Review Configuration to see the details of the management cluster that you have configured.

15. (Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.

16. Click Deploy Management Cluster.

Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap environment. Subsequent runs do not require this step, so are faster. You can follow the progress of the deployment of the management cluster in the installer interface or in the terminal in which you ran tkg init --ui.

What to Do Next

For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, and how to create namespaces, see Examine the Management Cluster Deployment.


Deploy the Management Cluster to vSphere with the CLI

This topic describes how to use the Tanzu Kubernetes Grid CLI to deploy a management cluster to vSphere from a YAML file.

Prerequisites
Procedure
What to Do Next

Prerequisites

Make sure that you have met all of the requirements listed in Set Up the Bootstrap Environment for Tanzu Kubernetes Grid and Prepare to Deploy the Management Cluster to vSphere.
It is strongly recommended to use the Tanzu Kubernetes Grid installer interface rather than the CLI to deploy your first management cluster to vSphere. When you deploy a management cluster by using the installer interface, it populates the config.yaml file for the management cluster with the required parameters. You can use the created config.yaml as a model for future deployments from the CLI.

If this is the first time that you are running Tanzu Kubernetes Grid commands on this machine, and you have not already deployed a management cluster to vSphere by using the Tanzu Kubernetes Grid installer interface, open a terminal and run the tkg get management-cluster command.

tkg get management-cluster

Running a tkg command for the first time creates the $HOME/.tkg folder, which contains the management cluster configuration file config.yaml.

Procedure

IMPORTANT:

Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time.
Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

1. Open the .tkg/config.yaml file in a text editor.

If you have already deployed a management cluster to vSphere from the installer interface, you will see variables that describe your previous deployment.

If you have not already deployed a management cluster to vSphere from the installer interface, copy and paste the following rows into the configuration file, after the end of the images section.


VSPHERE_SERVER:
VSPHERE_USERNAME:
VSPHERE_PASSWORD:
VSPHERE_DATACENTER:
VSPHERE_DATASTORE:
VSPHERE_NETWORK:
VSPHERE_RESOURCE_POOL:
VSPHERE_FOLDER:
VSPHERE_TEMPLATE:
VSPHERE_HAPROXY_TEMPLATE:
VSPHERE_DISK_GIB:
VSPHERE_NUM_CPUS:
VSPHERE_MEM_MIB:
VSPHERE_SSH_AUTHORIZED_KEY:
SERVICE_CIDR:
CLUSTER_CIDR:

2. Edit the configuration file to update the information about the target vSphere environment and the configuration of the management cluster to deploy.

The list below describes all of the configuration options that you must provide for deployment of the management cluster to vSphere. Leave a space between the colon (:) and the variable value. For example:

VSPHERE_USERNAME: [email protected]

VSPHERE_SERVER: vCenter_Server_address
The IP address or FQDN of the vCenter Server instance on which to deploy the management cluster.

VSPHERE_USERNAME: [email protected]
A vSphere user account with administrator privileges.

VSPHERE_PASSWORD: My_P@ssword!
The password for the vSphere user account. This value is base64-encoded when you run tkg init.

VSPHERE_DATACENTER: datacenter_name
The name of the datacenter in which to deploy the management cluster, as it appears in the vSphere inventory.

VSPHERE_DATASTORE: datastore_name
The name of the vSphere datastore for the management cluster to use, as it appears in the vSphere inventory.

VSPHERE_NETWORK: VM Network
The name of an existing vSphere network to use as the Kubernetes service network, as it appears in the vSphere inventory.

VSPHERE_RESOURCE_POOL: resource_pool_name
The name of an existing resource pool in which to place this Tanzu Kubernetes Grid instance, as it appears in the vSphere inventory. To use the root resource pool for a cluster, enter the full path. For example, for a cluster named cluster0 in datacenter dc0, the full path is /dc0/host/cluster0/Resources.

VSPHERE_FOLDER: VM_folder_name
The name of an existing VM folder in which to place Tanzu Kubernetes Grid VMs, as it appears in the vSphere inventory.

VSPHERE_TEMPLATE: photon-3-v1.17.3_vmware.2
The VM template in the vSphere inventory from which to bootstrap management cluster VMs. In this release, this is the template that you created from the photon-3-v1.17.3_vmware.2.ova file.

VSPHERE_HAPROXY_TEMPLATE: photon-3-capv-haproxy-v0.6.3_vmware.1
The VM template in the vSphere inventory from which to bootstrap API server load balancer VMs. In this release, this is the template that you created from the photon-3-capv-haproxy-v0.6.3_vmware.1.ova file.

VSPHERE_DISK_GIB: "30"
The size in gigabytes of the disk for the control plane node VMs. Include the quotes ("").

VSPHERE_NUM_CPUS: "1"
The number of CPUs for the control plane node VMs. Include the quotes ("").

VSPHERE_MEM_MIB: "2048"
The amount of memory in megabytes for the control plane node VMs. Include the quotes ("").

VSPHERE_SSH_AUTHORIZED_KEY: "ssh-rsa AAAAB3NzaC1yc2EAA[...] lYImkx21vUu58cj"
Paste in the contents of the SSH public key that you created in Prepare to Deploy the Management Cluster to vSphere.

SERVICE_CIDR: 100.64.0.0/13
The CIDR range to use for the Kubernetes services. The recommended range is 100.64.0.0/13. Change this value only if the recommended range is unavailable.

CLUSTER_CIDR: 100.96.0.0/11
The CIDR range to use for pods. The recommended range is 100.96.0.0/11. Change this value only if the recommended range is unavailable.

3. Save the configuration file.

4. Run the tkg init command.

You must specify at least the --infrastructure=vsphere option. If you do not specify a name, Tanzu Kubernetes Grid automatically generates a unique name for the cluster.

tkg init --infrastructure=vsphere

You can optionally specify a name for the management cluster in the --name option.

tkg init --infrastructure=vsphere --name=management_cluster_name

To deploy a management cluster with a single control plane node, add the --plan=dev option. If you do not specify --plan, the dev plan is used by default.

tkg init --infrastructure=vsphere --name management_cluster_name --plan=dev


To deploy a highly available management cluster with three control plane nodes, specify the --plan=prod option.

tkg init --infrastructure=vsphere --name=management_cluster_name --plan=prod

By default, Tanzu Kubernetes Grid creates $HOME/.tkg and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. If you specify the --config option, Tanzu Kubernetes Grid only creates the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.

tkg init --infrastructure=vsphere --name=management_cluster_name --config path_to_file/my-config.yaml

NOTES:

If you connect to a vSphere 7.0 instance and the vSphere with Kubernetes feature is enabled, the CLI informs you that deploying a Tanzu Kubernetes Grid management cluster is not possible and exits.
If you connect to a vSphere 7.0 instance and the vSphere with Kubernetes feature is not enabled, the CLI informs you that deploying a Tanzu Kubernetes Grid management cluster is possible but not recommended. You can either quit the installation and enable the vSphere with Kubernetes feature, or you can choose to continue with this unsupported installation configuration.

5. Follow the progress of the deployment of the management cluster in the terminal.

Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap environment. Subsequent runs do not require this step, so are faster.

What to Do Next

For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, and how to create namespaces, see Examine the Management Cluster Deployment.


Prepare to Deploy the Management Cluster to Amazon EC2

Before you can use the Tanzu Kubernetes Grid CLI or installer interface to deploy the management cluster, you must prepare the machine on which you run the Tanzu Kubernetes Grid CLI and your Amazon EC2 account.

General Requirements
Resource Usage in Your Amazon Web Services Account
Install the clusterawsadm Utility and Set Up a CloudFormation Stack
Register an SSH Public Key with Your AWS Account
Set Your AWS Credentials as Environment Variables for Use by Cluster API
Identify the Tanzu Kubernetes Grid Amazon Machine Image for Your Region
What to Do Next

General Requirements

Perform the steps described in Set Up the Bootstrap Environment for Tanzu Kubernetes Grid.
You have the access key and access key secret for an active Amazon Web Services account.
Your AWS account must have Administrator privileges.
Your AWS account has a sufficient quota of Virtual Private Cloud (VPC) instances. Each management cluster that you deploy creates one VPC. The default VPC quota is 5 instances. For more information, see Amazon VPC Quotas in the AWS documentation.
Install the AWS CLI.
Install jq.

The AWS CLI uses jq to process JSON when creating SSH key pairs. It is also used to prepare the environment or configuration variables when you deploy Tanzu Kubernetes Grid by using the CLI.
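For example, you can quickly verify that both tools are installed and on your PATH:

aws --version
jq --version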

Resource Usage in Your Amazon Web Services Account

For each cluster that you create, Tanzu Kubernetes Grid provisions a set of resources in your Amazon Web Servicesaccount.

For development clusters that are not configured for high availability, Tanzu Kubernetes Grid provisions the followingresources:

3 VMs

The VMs include a control plane node, a worker node (to run the cluster agent extensions), and a bastion host. Ifyou specify additional VMs in your node pool, those are provisioned as well.

4 security groups (one for the load balancer and one for each of the initial VMs)

1 private subnet and 1 public subnet in the specified availability zone
1 public and 1 private route table in the specified availability zone


1 classic load balancer
1 internet gateway
1 NAT gateway in the specified availability zone
2 VPC Elastic IPs, one for the NAT gateway, and one for the Elastic Load Balancer

For production clusters that are configured for high availability, Tanzu Kubernetes Grid provisions the resources listed above and the following additional resources to support replication in two additional availability zones:

2 additional control plane VMs
2 additional private and public subnets
2 additional private and public route tables
2 additional NAT gateways
2 additional VPC Elastic IPs

Amazon Web Services implements a set of default limits or quotas on these types of resources, and allows you to modify the limits. Typically, the default limits are sufficient to get started creating clusters from Tanzu Kubernetes Grid. However, as you increase the number of clusters that you are running or the workloads on your clusters, you will encroach on these limits. When you reach the limits imposed by Amazon Web Services, any attempts to provision that type of resource fail. As a result, Tanzu Kubernetes Grid is unable to create a new cluster, or you might be unable to create additional deployments on your existing clusters.

For example, if your quota on internet gateways is set to five and you already have five in use, Tanzu Kubernetes Grid is unable to provision the necessary resources when you attempt to create a new cluster.

Therefore, regularly assess the limits that you have specified in your Amazon Web Services account, and adjust them as necessary to fit your business needs.
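As a simple starting point for such an assessment, you can count the VPCs and internet gateways that are currently in use in your chosen region with standard AWS CLI queries, and compare the counts against your account quotas:

aws ec2 describe-vpcs --query 'length(Vpcs)'
aws ec2 describe-internet-gateways --query 'length(InternetGateways)'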

Install the clusterawsadm Utility and Set Up a CloudFormation Stack

Tanzu Kubernetes Grid uses Cluster API Provider AWS to deploy clusters to Amazon EC2. Cluster API Provider AWS requires the clusterawsadm command line utility to be present on your system.

The clusterawsadm command line utility assists with identity and access management (IAM) for Cluster API Provider AWS.

The clusterawsadm utility takes the credentials that you set as environment variables and uses them to create a CloudFormation stack in your AWS account with the correct IAM resources. Tanzu Kubernetes Grid uses the resources of the CloudFormation stack to create management and Tanzu Kubernetes clusters. The IAM resources are added to the control plane and node roles when they are created during cluster deployment. For more information about CloudFormation stacks, see Working with Stacks in the AWS documentation.

1. Create the following environment variables for your AWS account.

Your AWS access key:

export AWS_ACCESS_KEY_ID=aws_access_key

35

Page 36: You can find the most up-to-date technical documentation ...€¦ · Tanzu Kubernetes clusters implement Calico for pod-to-pod networking by default. Tanzu Kubernetes Cluster Plans

Your AWS access key secret:

export AWS_SECRET_ACCESS_KEY=aws_access_key_secret

If you use multi-factor authentication, your AWS session token:

export AWS_SESSION_TOKEN=aws_session_token

The AWS region in which to deploy the cluster.

For example, set the region to us-west-2 .

export AWS_REGION=us-west-2

For the full list of AWS regions, see AWS Service Endpoints.
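With these variables set, you can optionally confirm that the credentials are valid by asking AWS which identity they resolve to. The command returns the account ID and ARN of the IAM user or role:

aws sts get-caller-identity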

2. Go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.
3. Download the executable for clusterawsadm for your platform.

Linux platforms: clusterawsadm-linux-amd64-v0.5.2_vmware.1.gz
Mac OS platforms: clusterawsadm-darwin-amd64-v0.5.2_vmware.1.gz

4. Use either the gunzip command or the extraction tool of your choice to unpack the binary that corresponds to the OS of your bootstrap environment:

gunzip clusterawsadm-linux-amd64-v0.5.2_vmware.1.gz

gunzip clusterawsadm-darwin-amd64-v0.5.2_vmware.1.gz

The resulting files are clusterawsadm-linux-amd64-v0.5.2_vmware.1 or clusterawsadm-darwin-amd64-v0.5.2_vmware.1.

5. Rename the binary for your platform to clusterawsadm, make sure that it is executable, and add it to your PATH.

i. Move the binary into the /usr/local/bin folder and rename it to clusterawsadm.

Linux:

mv ./clusterawsadm-linux-amd64-v0.5.2_vmware.1 /usr/local/bin/clusterawsadm

Mac OS:

mv ./clusterawsadm-darwin-amd64-v0.5.2_vmware.1 /usr/local/bin/clusterawsadm

ii. Make the file executable.

chmod +x /usr/local/bin/clusterawsadm
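To confirm that the utility is correctly installed and executable, you can print its version. This assumes that this build of clusterawsadm includes the standard version subcommand:

clusterawsadm version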

6. Run the following clusterawsadm command to create a CloudFormation stack.


clusterawsadm alpha bootstrap create-stack

You only need to run clusterawsadm once per account. The CloudFormation stack that is created is not specific to any region.
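If you want to verify that the stack was created successfully, you can query CloudFormation with the AWS CLI. The stack name shown here, cluster-api-provider-aws-sigs-k8s-io, is the name that clusterawsadm uses by default; adjust it if your stack is named differently:

aws cloudformation describe-stacks --stack-name cluster-api-provider-aws-sigs-k8s-io --query 'Stacks[0].StackStatus'

A status of CREATE_COMPLETE indicates that the IAM resources are in place.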

Register an SSH Public Key with Your AWS Account

In order for Tanzu Kubernetes Grid VMs to launch on Amazon EC2, you must provide the public key part of an SSH key pair to Amazon EC2 for every region in which you would like to deploy a management cluster.

NOTE: AWS only supports RSA keys. The keys required by AWS are in a different format from those required by vSphere. You cannot use the same key pair for both vSphere and AWS deployments.

If you do not already have an SSH key pair, you can use the AWS CLI to create one by performing the steps below.

1. Create a key pair named default and save it as default.pem .

aws ec2 create-key-pair --key-name default --output json | jq .KeyMaterial -r > default.pem

2. Log in to your Amazon EC2 dashboard, and go to Network & Security > Key Pairs to verify that the created key pair is registered with your account.
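Alternatively, you can verify the key pair from the command line without logging in to the dashboard. The command returns the key name and fingerprint if the key is registered in the current region:

aws ec2 describe-key-pairs --key-name default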

Set Your AWS Credentials as Environment Variables for Use by Cluster API

After you have created the CloudFormation stack, you must set your AWS credentials as environment variables. Cluster API Provider AWS needs these variables so that it can write the credentials into cluster manifests when it creates clusters. You must perform the steps in Install the clusterawsadm Utility and Set Up a CloudFormation Stack before you perform these steps.

1. Set a new environment variable for your AWS credentials.

export AWS_CREDENTIALS=$(aws iam create-access-key --user-name bootstrapper.cluster-api-provider-aws.sigs.k8s.io --output json)

2. Replace the environment variable that you created for your AWS access key ID.

export AWS_ACCESS_KEY_ID=$(echo $AWS_CREDENTIALS | jq .AccessKey.AccessKeyId -r)

3. Replace the environment variable that you created for your secret access key.

export AWS_SECRET_ACCESS_KEY=$(echo $AWS_CREDENTIALS | jq .AccessKey.SecretAccessKey -r)

4. Set a new environment variable to encode your AWS credentials.


export AWS_B64ENCODED_CREDENTIALS=$(clusterawsadm alpha bootstrap encode-aws-credentials)
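You can confirm that the variable is populated before you proceed. Avoid echoing the full value in shared terminals or logs, because it contains your base64-encoded credentials:

[ -n "$AWS_B64ENCODED_CREDENTIALS" ] && echo "credentials encoded" || echo "encoding failed"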

Identify the Tanzu Kubernetes Grid Amazon Machine Image for Your Region

Tanzu Kubernetes Grid creates the management cluster and Tanzu Kubernetes cluster node VMs from standard Amazon Linux 2 Amazon Machine Images (AMI). A Tanzu Kubernetes Grid AMI is publicly available in every AWS region, to all Amazon EC2 users.

To direct Tanzu Kubernetes Grid to the correct AMI, you must create an environment variable with the AMI ID of the AMI for your designated region. For example, the AMI ID for the us-west-2 region is ami-074a82cfc610da035.

export AWS_AMI_ID=ami-074a82cfc610da035

For the full list of AMI IDs for each AWS region in this release of Tanzu Kubernetes Grid, see the Tanzu Kubernetes Grid 1.0 Release Notes.

If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to build custom Amazon images.

What to Do Next

Your environment is now ready for you to deploy the Tanzu Kubernetes Grid management cluster to Amazon EC2.

Deploy the Management Cluster to Amazon EC2 with the Installer Interface. This is the preferred option for first deployments.
Deploy the Management Cluster to Amazon EC2 with the CLI. This is the more complicated method, which allows greater flexibility of configuration.


Deploy the Management Cluster to Amazon EC2 with the Installer Interface

This topic describes how to use the Tanzu Kubernetes Grid installer interface to deploy a management cluster to Amazon Elastic Compute Cloud (Amazon EC2). The Tanzu Kubernetes Grid installer interface guides you through the deployment of the management cluster, and provides different configurations for you to choose from.

Prerequisites
Procedure
What to Do Next

Prerequisites

Make sure that you have met all of the requirements listed in Set Up Tanzu Kubernetes Grid and Prepare to Deploy the Management Cluster to Amazon EC2.

Procedure

The values that you set as environment variables in Prepare to Deploy the Management Cluster to Amazon EC2 are prepopulated in the relevant fields of the installer interface.

IMPORTANT:

Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time.
Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

1. On the machine on which you downloaded and installed the Tanzu Kubernetes Grid CLI, run the tkg init command with the --ui option.

tkg init --ui

By default, Tanzu Kubernetes Grid creates a folder called $HOME/.tkg and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. If you specify the --config option, Tanzu Kubernetes Grid only creates the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.

tkg init --ui --config=/path/my-config.yaml

When you run the tkg init --ui command, it opens http://127.0.0.1:8080 in your default browser and displays the Tanzu Kubernetes Grid installer interface.


2. Click the Deploy On AWS EC2 button.

3. In the IaaS Provider section, enter the access key ID and secret access key for your Amazon EC2 account, and the name of an SSH key that is already registered with your Amazon EC2 account.

4. Select the AWS region in which to deploy the management cluster and click Connect.
5. If the connection is successful, click Next.

6. In the VPC for AWS section, check that the pre-filled network ranges are available, check that the availability zone is set to the correct region, and click Next.

If the recommended CIDR ranges are not available, enter new IP ranges in CIDR format for the management cluster to use. The recommended ranges are as follows:

VPC CIDR: 10.0.0.0/16
Public Node CIDR: 10.0.1.0/24
Private Node CIDR: 10.0.0.0/24

7. In the Control Plane Settings section, select the Development or Production tile.

If you select Development, the installer deploys a single control plane node.
If you select Production, the installer deploys three control plane nodes.


8. In either of the Development or Production tiles, use the Instance type drop-down menu to select the configuration for the control plane node VM or VMs.

Select a small, medium, large, or xlarge AWS T3 instance for the control plane node VMs, depending on the expected workloads that you will run in the cluster. For information about the configuration of the different sizes of T3 instances, see Amazon EC2 Instance Types. The instance type that you select applies to the management cluster itself and to the Tanzu Kubernetes clusters that you deploy from it.

9. Optionally enter a name for your management cluster and click Next.

If you do not specify a name, Tanzu Kubernetes Grid generates one automatically.

10. In the Kubernetes Network section, if the recommended CIDR range of 100.96.0.0/11 for the Cluster Pod CIDR is unavailable, enter a different CIDR range to use for pods and click Next.

11. Click Review Configuration to see the details of the management cluster that you have configured.


12. (Optional) Click Edit Configuration to return to the installer wizard to modify your configuration.
13. Click Deploy Management Cluster and follow the progress of the deployment of the management cluster in the installer interface.

Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap environment. Subsequent runs do not require this step, so are faster. You can follow the progress of the deployment of the management cluster in the installer interface or in the terminal in which you ran tkg init --ui.

What to Do Next

For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, and how to create namespaces, see Examine the Management Cluster Deployment.


Deploy the Management Cluster to Amazon EC2 with the CLI

This topic describes how to use the Tanzu Kubernetes Grid CLI to deploy a management cluster to Amazon Elastic Compute Cloud (Amazon EC2).

Prerequisites
Procedure
What to Do Next

Prerequisites

Make sure that you have met all of the requirements listed in Set Up Tanzu Kubernetes Grid and Prepare to Deploy the Management Cluster to Amazon EC2.
It is strongly recommended to use the Tanzu Kubernetes Grid installer interface rather than the CLI to deploy your first management cluster to Amazon EC2. When you deploy a management cluster by using the installer interface, it populates the config.yaml file for the management cluster with the required parameters. You can use the created config.yaml as a model for future deployments from the CLI.

If this is the first time that you are running Tanzu Kubernetes Grid commands on this machine, and you have not already deployed a management cluster to Amazon EC2 by using the Tanzu Kubernetes Grid installer interface, open a terminal and run the tkg get management-cluster command.

tkg get management-cluster

Running a tkg command for the first time creates the $HOME/.tkg folder, which contains the management cluster configuration file config.yaml.

Procedure

IMPORTANT:

Do not run multiple management cluster deployments on the same bootstrap environment machine at the same time.
Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

1. Open the .tkg/config.yaml file in a text editor.

If you have already deployed a management cluster to Amazon EC2 from the installer interface, you will see variables that describe your previous deployment.

If you have not already deployed a management cluster to Amazon EC2 from the installer interface, copy and paste the following rows into the configuration file, after the end of the images section.


AWS_REGION:
AWS_NODE_AZ:
AWS_PUBLIC_NODE_CIDR:
AWS_PRIVATE_NODE_CIDR:
AWS_VPC_CIDR:
CLUSTER_CIDR:
AWS_SSH_KEY_NAME:
CONTROL_PLANE_MACHINE_TYPE:
NODE_MACHINE_TYPE:

The table below describes all of the variables that you must set for deployment to Amazon EC2. Leave a space between the colon ( : ) and the variable value. For example:

AWS_NODE_AZ: us-west-2a

IMPORTANT: Any environment variables that you have set that have the same key as the variables that you set in config.yaml will override the values that you set in config.yaml. You must unset those variables before you deploy the management cluster from the CLI.

| Option | Value | Description |
|--------|-------|-------------|
| AWS_REGION | us-west-2, ap-northeast-2, etc. | The name of the AWS region in which to deploy the management cluster. If you have already set a different region as an environment variable, for example in [Prepare to Deploy the Management Cluster to Amazon EC2](aws.md), you must unset that environment variable. |
| AWS_NODE_AZ | us-west-2a, ap-northeast-2b, etc. | The name of the AWS availability zone in your chosen region, to use as the availability zone for nodes of this management cluster. Availability zone names are the same as the AWS region name, with a single lower-case letter suffix, such as `a`, `b`, `c`. |
| AWS_PUBLIC_NODE_CIDR | 10.0.1.0/24 | If the recommended range of 10.0.1.0/24 is not available, enter a different IP range in CIDR format for public nodes to use. |
| AWS_PRIVATE_NODE_CIDR | 10.0.0.0/24 | If the recommended range of 10.0.0.0/24 is not available, enter a different IP range in CIDR format for private nodes to use. |
| AWS_VPC_CIDR | 10.0.0.0/16 | If the recommended range of 10.0.0.0/16 is not available, enter a different IP range in CIDR format for the management cluster to use. |
| CLUSTER_CIDR | 100.96.0.0/11 | If the recommended range of 100.96.0.0/11 is not available, enter a different IP range in CIDR format for pods to use. |
| AWS_SSH_KEY_NAME | Your SSH key name | Enter the name of the SSH key pair that you registered with your Amazon EC2 account in [Register an SSH Public Key with Your AWS Account](aws.md#register-ssh). |
| CONTROL_PLANE_MACHINE_TYPE | t3.size | Enter t3.small, t3.medium, t3.large, or t3.xlarge for the control plane node VMs, depending on the expected workloads that you will run in the cluster. For information about the configuration of the different sizes of T3 instances, see [Amazon EC2 Instance Types](https://aws.amazon.com/ec2/instance-types/). |
| NODE_MACHINE_TYPE | t3.size | Enter t3.small, t3.medium, t3.large, or t3.xlarge for the worker node VMs, depending on the expected workloads that you will run in the cluster. |

2. Run the tkg init command.

Running tkg init for the first time creates the $HOME/.tkg folder, which contains the template configuration file config.yaml from which the management cluster is deployed.

You must specify at least the --infrastructure=aws option.

tkg init --infrastructure=aws

You can optionally specify a name for the management cluster in the --name option.

tkg init --infrastructure=aws --name=management_cluster_name

To deploy a management cluster with a single control plane node, add the --plan=dev option. If you do not specify --plan, the dev plan is used by default.

tkg init --infrastructure=aws --name=management_cluster_name --plan=dev

To deploy a highly available management cluster with three control plane nodes, specify the --plan=prod option.

tkg init --infrastructure=aws --name=management_cluster_name --plan=prod

By default, Tanzu Kubernetes Grid creates $HOME/.tkg and creates the cluster configuration file, config.yaml, in that folder. To create config.yaml in a different location or with a different name, specify the --config option. If you specify the --config option, Tanzu Kubernetes Grid only creates the YAML file in the specified location. Other files are still created in the $HOME/.tkg folder.

tkg init --infrastructure=aws --name=management_cluster_name --config path_to_file/my-config.yaml

3. Follow the progress of the deployment of the management cluster in the terminal.

Deployment of the management cluster can take several minutes. The first run of tkg init takes longer than subsequent runs because it has to pull the required Docker images into the image store on your bootstrap environment. Subsequent runs do not require this step, so are faster.

What to Do Next


For information about what happened during the deployment of the management cluster, how to connect kubectl to the management cluster, and how to create namespaces, see Examine the Management Cluster Deployment.


Examine the Management Cluster Deployment

During the deployment of the management cluster, either from the installer interface or the CLI, Tanzu Kubernetes Grid creates a temporary management cluster using a Kubernetes in Docker (kind) cluster on the bootstrap environment. After creating the temporary management cluster locally, Tanzu Kubernetes Grid uses it to provision the final management cluster in the platform of your choice, depending on whether you are deploying to vSphere or Amazon EC2. After the deployment of the management cluster finishes successfully, Tanzu Kubernetes Grid deletes the temporary kind cluster.

Tanzu Kubernetes Grid saves the configuration of your management cluster in the ~/.tkg/config.yaml file. Tanzu Kubernetes Grid also creates a folder named ~/.tkg/providers, which contains all of the files required by Cluster API to create the management cluster.

IMPORTANT: By default, unless you specify the --config option to save the kubeconfig for a cluster to a specific file, all clusters that you deploy from the Tanzu Kubernetes Grid CLI are added to a shared kubeconfig file. If you delete the shared kubeconfig file, all management clusters become orphaned and thus unusable.

Management Cluster Networking
Verify the Deployment of the Management Cluster
Connect tkg and kubectl to Management Clusters
What to Do Next

Management Cluster Networking

When you deploy a Tanzu Kubernetes Grid management cluster, pod-to-pod networking with Calico is automatically enabled in the management cluster.

Verify the Deployment of the Management Cluster

After the deployment of the management cluster completes successfully, a control plane VM and one or more worker node VMs are present in your vSphere inventory or Amazon EC2 instances. You can obtain information about your management cluster by running the tkg get management-cluster command, and by locating the created artifacts in either vSphere or your Amazon EC2 dashboard.

Unless you manually change the Kubernetes context to another cluster by using the tkg set management-cluster command, by default Tanzu Kubernetes Grid uses the context of the most recently deployed management cluster.

1. To facilitate the deployment of similar management clusters in the future, make a copy of the ~/.tkg/config.yaml file.

2. View the management cluster objects in either vSphere or Amazon EC2.

If you deployed the management cluster to vSphere, go to the resource pool that you designated when you deployed the management cluster.
If you deployed the management cluster to Amazon EC2, go to the Instances view of your EC2 dashboard.


You should see the following VMs or instances. If you did not specify a name for the management cluster, cluster_name is something similar to tkg-mgmt-vsphere-20200323121503 or tkg-mgmt-aws-20200323140554.

vSphere with a development control plane:
A control plane VM with a name similar to cluster_name-control-plane-sx5rp
A worker node VM with a name similar to cluster_name-md-0-6b8db6b59d-kbnk4
A load balancer VM with the name cluster_name-tkg-system-lb

vSphere with a production control plane:
Three control plane VMs with names similar to cluster_name-control-plane-9tzxl
A load balancer VM with the name cluster_name-tkg-system-lb
A worker node VM with a name similar to cluster_name-md-0-787f688d8b-djhsz

Amazon EC2 with a development control plane:
A control plane instance with a name similar to cluster_name-control-plane-bcpfp
A worker node instance with a name similar to cluster_name-md-0-dwfnm
An EC2 bastion host instance with the name cluster_name-bastion

Amazon EC2 with a production control plane:
Three control plane instances with names similar to cluster_name-control-plane-xfbt9
A worker node instance with a name similar to cluster_name-md-0-b8dch
An EC2 bastion host instance with the name cluster_name-bastion

Connect tkg and kubectl to Management Clusters

The Tanzu Kubernetes Grid CLI provides commands that facilitate many of the operations that you can perform with your management cluster. However, for certain operations, you still need to use kubectl.

1. On the bootstrap environment machine on which you ran tkg init, run the tkg get management-cluster command to see the context of the management cluster that you have deployed.

+-------------------------+----------------------------------------------------+
| MANAGEMENT CLUSTER NAME | CONTEXT NAME                                       |
+-------------------------+----------------------------------------------------+
| my-management-cluster * | my-management-cluster-admin@my-management-cluster  |
+-------------------------+----------------------------------------------------+

2. To instruct kubectl to use the context of the management cluster, so that you can examine its resources, run kubectl config use-context .

kubectl config use-context my-management-cluster-admin@my-management-cluster

3. Use kubectl commands to examine the resources of the management cluster.

For example, run kubectl get nodes, kubectl get pods, or kubectl get namespaces to see the nodes, pods, and namespaces running in the management cluster.
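For a management cluster with a development control plane, kubectl get nodes returns output similar to the following. The node names, ages, and version string shown here are illustrative:

kubectl get nodes

NAME                                          STATUS   ROLES    AGE   VERSION
my-management-cluster-control-plane-sx5rp     Ready    master   12m   v1.17.x+vmware.2
my-management-cluster-md-0-6b8db6b59d-kbnk4   Ready    <none>   10m   v1.17.x+vmware.2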

What to Do Next


You can now use Tanzu Kubernetes Grid to start deploying Tanzu Kubernetes clusters. For information, see Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.

You can also deploy more management clusters from the Tanzu Kubernetes Grid CLI. For information about how to manage multiple management clusters from the same bootstrap environment, see Manage Multiple Management Clusters.


Create Namespaces in the Management Cluster

To help you to organize and manage your development projects, you can optionally divide the management cluster into Kubernetes namespaces. You can then use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters to specific namespaces in your management cluster. For example, you might want to create different types of clusters in dedicated namespaces. If you do not create additional namespaces, Tanzu Kubernetes Grid creates all Tanzu Kubernetes clusters in the default namespace. For information about Kubernetes namespaces, see the Kubernetes documentation.

1. Make sure that kubectl is connected to the management cluster context.
2. List the namespaces that are currently present in the management cluster.

kubectl get namespaces

You will see that the management cluster already includes several namespaces for the different services that it provides:

capi-kubeadm-bootstrap-system       Active   4m7s
capi-kubeadm-control-plane-system   Active   4m5s
capi-system                         Active   4m11s
capi-webhook-system                 Active   4m13s
capv-system                         Active   3m59s
cert-manager                        Active   6m56s
default                             Active   7m11s
kube-node-lease                     Active   7m12s
kube-public                         Active   7m12s
kube-system                         Active   7m12s
tkg-system                          Active   3m57s

3. Use kubectl create -f to create new namespaces, for example for development and production.

These examples use the production and development namespaces from the Kubernetes documentation.

kubectl create -f https://k8s.io/examples/admin/namespace-dev.json

kubectl create -f https://k8s.io/examples/admin/namespace-prod.json

4. Run kubectl get namespaces --show-labels to see the new namespaces.

development   Active   22m   name=development
production    Active   22m   name=production
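If you do not need the labels or other metadata from the example files, you can also create a namespace directly from the command line. The namespace name staging used here is an arbitrary example:

kubectl create namespace staging

Any namespace that you create this way can then be used as the target of the --namespace option when you deploy Tanzu Kubernetes clusters.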


Manage Multiple Management Clusters

You can use Tanzu Kubernetes Grid to deploy multiple management clusters from the same bootstrap environment, to both vSphere and Amazon EC2.

You can manage all of the management clusters from the same Tanzu Kubernetes Grid CLI instance, by switching the focus of the CLI between the contexts of the different management clusters. You can also add management clusters that someone else created to your kubeconfig, so that you can manage those clusters from your Tanzu Kubernetes Grid CLI instance.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have deployed at least two management clusters, to either vSphere or Amazon EC2, or both.

Procedure

1. On the bootstrap environment machine on which you ran tkg init, run the tkg get management-cluster command to see the list of management clusters that you have deployed.

tkg get management-cluster

If you deployed two management clusters, named my-vsphere-cluster and my-aws-cluster, you will see the following output:

+-------------------------+----------------------------------------------+
| MANAGEMENT CLUSTER NAME | CONTEXT NAME                                 |
+-------------------------+----------------------------------------------+
| my-vsphere-cluster *    | my-vsphere-cluster-admin@my-vsphere-cluster  |
| my-aws-cluster          | my-aws-cluster-admin@my-aws-cluster          |
+-------------------------+----------------------------------------------+

The management cluster context that is the current focus of the Tanzu Kubernetes Grid CLI is marked with an asterisk ( * ). By default, the management cluster that you deployed most recently is the focus of the Tanzu Kubernetes Grid CLI.

2. To change the focus of the Tanzu Kubernetes Grid CLI to a different management cluster context, run the tkg set management-cluster command.

tkg set management-cluster my-aws-cluster


3. To add a management cluster that someone else created to your list of managed management clusters, use the tkg add management-cluster command.

i. Obtain the kubeconfig of the management cluster that you want to add, and save it on your bootstrap environment.

ii. Set an environment variable to point to the kubeconfig file of the new management cluster.

export KUBECONFIG=~/<path>/mgmt-cluster.kubeconfig

iii. Set the focus of kubectl to the context of the new management cluster.

kubectl config use-context mgmt-cluster-admin@mgmt-cluster

iv. Add the new cluster to your Tanzu Kubernetes Grid instance.

tkg add management-cluster

v. Run tkg get management-cluster to see the newly added cluster in your list of management clusters.
vi. Set the focus of the Tanzu Kubernetes Grid CLI to the new cluster.

tkg set management-cluster new-cluster

What to Do Next

You can use Tanzu Kubernetes Grid to start deploying Tanzu Kubernetes clusters to different Tanzu Kubernetes Grid instances. For information, see Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.

If you have vSphere 7.0, you can also deploy and manage Tanzu Kubernetes clusters in vSphere with Kubernetes. For information, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster.


Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle

This section describes how you use Tanzu Kubernetes Grid to deploy and manage Tanzu Kubernetes clusters.

Before you can create Tanzu Kubernetes clusters, you must deploy a Tanzu Kubernetes Grid management cluster. For information about deploying the management cluster, see Installing Tanzu Kubernetes Grid.

You can use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters from your management cluster to vSphere and Amazon EC2. If you have vSphere 7.0 and you have enabled the vSphere with Kubernetes feature, you can also use the Tanzu Kubernetes Grid CLI to interact with the vSphere with Kubernetes Supervisor Cluster, and to deploy Tanzu Kubernetes clusters from the Supervisor Cluster.

NOTE: You cannot provision Tanzu Kubernetes clusters across providers. A management cluster that is running on vSphere cannot create a Tanzu Kubernetes cluster in Amazon EC2, and vice versa. It is not possible to use shared services between the different providers because, for example, vSphere clusters are reliant on sharing vSphere networks and storage, while Amazon EC2 uses its own systems.

The Tanzu Kubernetes Grid CLI provides commands and options to perform the following common cluster creation and lifecycle management operations:

Create Tanzu Kubernetes Clusters
Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster
Create Tanzu Kubernetes Cluster Configuration Files
Connect to and Examine Tanzu Kubernetes Clusters
Scale Tanzu Kubernetes Clusters
Delete Tanzu Kubernetes Clusters


Create Tanzu Kubernetes Clusters

After you have deployed the management cluster to either vSphere or to Amazon EC2, you can use the Tanzu Kubernetes Grid CLI to deploy Tanzu Kubernetes clusters. In Tanzu Kubernetes Grid, Tanzu Kubernetes clusters are the Kubernetes clusters in which your applications run.

To deploy Tanzu Kubernetes clusters, you run the tkg create cluster command, specifying different options to deploy the Tanzu Kubernetes clusters with different configurations.

Tanzu Kubernetes Grid automatically deploys clusters to the platform on which you deployed the management cluster. You cannot deploy clusters to Amazon EC2 from a management cluster that is running in vSphere, or the reverse. Tanzu Kubernetes Grid automatically deploys clusters from whichever management cluster you have set as the focus for the CLI by using the tkg set management-cluster command. For information about tkg set management-cluster, see Connect tkg and kubectl to Management Clusters.

To deploy clusters to a vSphere 7.0 instance on which the vSphere with Kubernetes feature is enabled, you must connect the Tanzu Kubernetes Grid CLI to the vSphere with Kubernetes Supervisor Cluster. For information about how to do this, see Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster.

Prerequisites for Cluster Deployment
Tanzu Kubernetes Cluster Networking
Tanzu Kubernetes Cluster Plans
Deploy a Default Tanzu Kubernetes Cluster
Preview the YAML for a Tanzu Kubernetes Cluster
Deploy a Cluster with Multiple Worker Nodes
Deploy a Cluster with a Highly Available Control Plane
Deploy a Cluster in a Specific Namespace
Deploy a Cluster that Runs a Different Version of Kubernetes

IMPORTANT:

Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.
By default, unless you specify the --config option to save the kubeconfig for a cluster to a specific file, all clusters that you deploy from the Tanzu Kubernetes Grid CLI are added to a shared kubeconfig file. If you delete the shared kubeconfig file, all clusters become unusable.

Prerequisites for Cluster Deployment

You have followed the procedures in Installing Tanzu Kubernetes Grid to deploy a management cluster to either vSphere or Amazon EC2, or you have a vSphere 7.0 instance on which a vSphere with Kubernetes Supervisor Cluster is running.

Tanzu Kubernetes Cluster Networking


When you use the Tanzu Kubernetes Grid CLI to deploy a Tanzu Kubernetes cluster, by default Calico networking is automatically enabled in the cluster. If you have Tanzu Kubernetes Grid Plus support, other container networking interfaces (CNI) are supported.

Tanzu Kubernetes Cluster Plans

Tanzu Kubernetes Grid provides standard templates for clusters, known as plans. In this release, there are two plans for Tanzu Kubernetes clusters:

By default, the dev plan deploys a cluster with one control plane node and one worker node.
By default, the prod plan deploys a cluster with three control plane nodes and one worker node.

You can specify options to deploy clusters with different numbers of control plane and worker nodes. If you deploy a cluster with multiple control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane. You can also change the number of nodes in a cluster after deployment by running the tkg scale cluster command on the cluster. For more information, see Scale Tanzu Kubernetes Clusters.

Tanzu Kubernetes Grid creates the individual nodes of the clusters according to the settings that you provided when you deployed the management cluster.

If you deployed the management cluster from the Tanzu Kubernetes Grid installer interface, nodes are created with the configuration that you set in the Control Plane Settings > Instance Type drop-down menu.
If you deployed the management cluster from the Tanzu Kubernetes Grid CLI, nodes are created with the configuration that you set in the following options:

vSphere: VSPHERE_DISK_GIB, VSPHERE_NUM_CPUS, and VSPHERE_MEM_MIB
Amazon EC2: CONTROL_PLANE_MACHINE_TYPE and NODE_MACHINE_TYPE

If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to develop your own custom plans by following the Cluster API provider specs.

Deploy a Default Tanzu Kubernetes Cluster

To deploy a Tanzu Kubernetes cluster with the minimum default configuration, you run tkg create cluster, specifying the cluster name and the --plan=dev option.

tkg create cluster my-cluster --plan=dev

This command deploys a Tanzu Kubernetes cluster that runs the default version of Kubernetes for this Tanzu Kubernetes Grid release. The cluster consists of the following VMs or instances:

vSphere:
One control plane VM, with a name similar to my-cluster-control-plane-nj4z6.
One load balancer VM, with a name similar to my-cluster-default-lb, where default is the name of the namespace in which the cluster is running.
One worker node, with a name similar to my-cluster-md-0-6ff9f5cffb-jhcrh.


Amazon EC2:
One control plane instance, with a name similar to my-cluster-control-plane-d78t5.
One EC2 bastion instance, with the name my-cluster-bastion.
One worker node, with a name similar to my-cluster-md-0-2vsr4.

Preview the YAML for a Tanzu Kubernetes Cluster

To see a preview of the YAML file that Tanzu Kubernetes Grid will create when it deploys a Tanzu Kubernetes cluster, you can run the tkg create cluster command with the --dry-run option. If you specify --dry-run, Tanzu Kubernetes Grid displays the full YAML file for the cluster, but does not create the cluster.

tkg create cluster my-cluster --plan=dev --dry-run

You can add --dry-run to any of the commands described in the following sections, to see a preview of the YAML before you deploy the cluster. If you are satisfied with the displayed configuration file, run the command again without the --dry-run option, to create the cluster.

If you specify --dry-run, Tanzu Kubernetes Grid sends the YAML file for the cluster to stdout, so that you can save it for repeated future use.

tkg create cluster my-cluster --plan=dev --dry-run > my-cluster-config.yaml

NOTE: Running tkg create cluster with the --dry-run option works in the same way as running the tkg config cluster command. You can save the output of --dry-run and use it to create clusters by using kubectl apply. For information about how to deploy clusters from saved YAML files and enable Calico, see Create Tanzu Kubernetes Cluster Configuration Files.

Deploy a Cluster with Multiple Worker Nodes

By default, a Tanzu Kubernetes cluster has one control plane node and one worker node. To deploy clusters with multiple worker nodes, specify the --worker-machine-count option.

tkg create cluster my-dev-cluster --plan=dev --worker-machine-count 3

This command deploys a Tanzu Kubernetes cluster that consists of the following VMs or instances:

vSphere:
One control plane VM, with a name similar to my-dev-cluster-control-plane-nj4z6.
One load balancer VM, with a name similar to my-dev-cluster-default-lb, where default is the name of the namespace in which the cluster is running.
Three worker nodes, with names similar to my-dev-cluster-md-0-6ff9f5cffb-jhcrh.

Amazon EC2:
One control plane instance, with a name similar to my-dev-cluster-control-plane-d78t5.
One EC2 bastion instance, with the name my-dev-cluster-bastion.
Three worker nodes, with names similar to my-dev-cluster-md-0-2vsr4.

Deploy a Cluster with a Highly Available Control Plane

If you specify --plan=prod, Tanzu Kubernetes Grid deploys a cluster with three control plane nodes and automatically implements stacked control plane HA for the cluster.

tkg create cluster my-prod-cluster --plan=prod

This command deploys a Tanzu Kubernetes cluster that consists of the following VMs or instances:

vSphere:
Three control plane VMs, with names similar to my-prod-cluster-control-plane-nj4z6.
One load balancer VM, with a name similar to my-prod-cluster-default-lb, where default is the name of the namespace in which the cluster is running.
Three worker nodes, with names similar to my-prod-cluster-md-0-6ff9f5cffb-jhcrh.

Amazon EC2:
Three control plane instances, with names similar to my-prod-cluster-control-plane-d78t5.
One EC2 bastion instance, with the name my-prod-cluster-bastion.
Three worker nodes, with names similar to my-prod-cluster-md-0-2vsr4.

You can deploy a Tanzu Kubernetes cluster with more control plane nodes by specifying the --controlplane-machine-count option. The number of control plane nodes that you specify in --controlplane-machine-count must be an odd number.

tkg create cluster cluster_name --plan=prod --controlplane-machine-count 5 --worker-machine-count 10

Deploy a Cluster in a Specific Namespace

If you have created namespaces in your Tanzu Kubernetes Grid instance, you can deploy Tanzu Kubernetes clusters to those namespaces by using the --namespace option. If you do not specify the --namespace option, Tanzu Kubernetes Grid places clusters in the default namespace. Any namespace that you identify in the --namespace option must exist in the management cluster before you run the command. For example, you might want to create different types of clusters in dedicated namespaces. For information about creating namespaces in the management cluster, see Create Namespaces in the Management Cluster.

tkg create cluster my-cluster --plan=dev --namespace my_namespace

This command deploys a default Tanzu Kubernetes cluster and places it in the designated namespace. For example, if you specify --namespace production, Tanzu Kubernetes Grid creates the Tanzu Kubernetes cluster in an existing namespace named production.


NOTE: If you have created namespaces, you must provide a unique name for all Tanzu Kubernetes clusters across all namespaces. If you provide a cluster name that is in use in another namespace in the same instance, the deployment fails with an error.

Deploy a Cluster that Runs a Different Version of Kubernetes

Each release of Tanzu Kubernetes Grid provides a default version of Kubernetes. As upstream Kubernetes releases patches or new versions, VMware makes these patches and versions available in Tanzu Kubernetes Grid patch and update releases. Each Tanzu Kubernetes Grid release supports a defined set of Kubernetes versions. For the list of Kubernetes versions that the Tanzu Kubernetes Grid 1.0.x releases support, see the Tanzu Kubernetes Grid 1.0 Release Notes.

To deploy a Tanzu Kubernetes cluster with a version of Kubernetes that is not the default for your Tanzu Kubernetes Grid release, specify the version in the --kubernetes-version option.

NOTES:

If you are deploying to vSphere, before you can deploy clusters that use a non-default version of Kubernetes for your version of Tanzu Kubernetes Grid, you must import the appropriate base OS OVA into vSphere and convert it to a VM template. For information about importing base OVA files into vSphere, see Import the Base Image Template into vSphere.
If you are deploying to AWS, the updated Amazon Linux 2 Amazon Machine Image (AMI) that includes the new Kubernetes version will be made publicly available to all Amazon EC2 users, in all supported AWS regions.
You can only specify a version of Kubernetes that is provided with and supported by a Tanzu Kubernetes Grid release.

tkg create cluster my-cluster --plan=dev --kubernetes-version=v1.17.y

This command deploys a Tanzu Kubernetes cluster that runs Kubernetes 1.17.y, even though by default this version of Tanzu Kubernetes Grid runs Kubernetes 1.17.x.


Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster

You can connect the Tanzu Kubernetes Grid CLI to a vSphere with Kubernetes Supervisor Cluster that is running in a vSphere 7.0 instance. In this way, you can deploy Tanzu Kubernetes clusters to vSphere with Kubernetes and manage their lifecycle directly from the Tanzu Kubernetes Grid CLI.

vSphere with Kubernetes provides a vSphere Plugin for kubectl. The vSphere Plugin for kubectl extends the standard kubectl commands so that you can connect to the Supervisor Cluster from kubectl by using vCenter Single Sign-On credentials. Once you have installed the vSphere Plugin for kubectl, you can connect the Tanzu Kubernetes Grid CLI to the Supervisor Cluster. Then, you can use the Tanzu Kubernetes Grid CLI to deploy and manage Tanzu Kubernetes clusters running in vSphere.

Prerequisites
Procedure
What to Do Next

Prerequisites

Perform the steps described in Set Up the Bootstrap Environment for Tanzu Kubernetes Grid.
You have access to a vSphere 7.0 instance on which the vSphere with Kubernetes feature is enabled.
Download and install the kubectl vsphere CLI utility on the bootstrap environment machine on which you run Tanzu Kubernetes Grid CLI commands.

For information about how to obtain and install the vSphere Plugin for kubectl, see Download and Install the Kubernetes CLI Tools for vSphere in the vSphere with Kubernetes documentation.

Procedure

1. On the bootstrap environment machine, run the kubectl vsphere login command to log in to your vSphere 7.0 instance.

Specify a vCenter Single Sign-On user account with vSphere Administrator privileges to use to log in to vSphere, and the virtual IP (VIP) address for the control plane of the Supervisor Cluster. For example:

kubectl vsphere login --vsphere-username administrator@vsphere.local --server=control_Plane_VIP --insecure-skip-tls-verify=true

2. Enter the password for the vSphere Administrator account.

When you have successfully logged in, kubectl vsphere displays all of the contexts to which you have access. The list of contexts should include the vSphere with Kubernetes Supervisor Cluster.

3. Set the focus of kubectl to the context of the Supervisor Cluster.


kubectl config use-context <Supervisor_Cluster_context>

4. Add the Supervisor Cluster to your Tanzu Kubernetes Grid instance.

tkg add management-cluster

5. Run tkg get management-cluster to see the list of management clusters that your Tanzu Kubernetes Grid CLI can access.

tkg get management-cluster

The output should show the vSphere with Kubernetes Supervisor Cluster in the list.

+-------------------------+-------------------------------+
| MANAGEMENT CLUSTER NAME | CONTEXT NAME                  |
+-------------------------+-------------------------------+
| vsphere-mc              | vsphere-mc-admin@vsphere-mc   |
| aws-mc *                | aws-mc-admin@aws-mc           |
| <Supervisor_Cluster_IP> | <Supervisor_Cluster_context>  |
+-------------------------+-------------------------------+

6. Set the focus of the Tanzu Kubernetes Grid CLI to the Supervisor Cluster.

tkg set management-cluster <Supervisor_Cluster_IP>

7. Obtain information about the storage classes that are defined in the Supervisor Cluster.

kubectl get storageclasses

8. Set variables to define the storage classes, VM classes, and service domain with which to create your cluster.

For information about all of the configuration parameters that you can set when deploying Tanzu Kubernetes clusters to vSphere with Kubernetes, see Configuration Parameters for Provisioning Tanzu Kubernetes Clusters in the vSphere with Kubernetes documentation.

You can either set these variables as environment variables by specifying export <variable>=<value> at the command line, or set them by updating the ~/.tkg/config file.

For the *_STORAGE_CLASS variables, specify one of the storage classes that you obtained in the previous step.

CONTROL_PLANE_STORAGE_CLASS=<storage_class_name>

WORKER_STORAGE_CLASS=<storage_class_name>

For the DEFAULT_STORAGE_CLASS variable, specify a storage class. Alternatively, you can specify DEFAULT_STORAGE_CLASS="" , in which case no default storage class is set.


DEFAULT_STORAGE_CLASS=<storage_class_name>

Or

DEFAULT_STORAGE_CLASS=""

For the STORAGE_CLASSES variable, enter a comma-separated list of storage classes for the cluster to use. Alternatively, you can specify STORAGE_CLASSES="" so that all storage classes that are available to the namespace are made available to clusters that you create. For example, set either of the following:

STORAGE_CLASSES="<storageclass1>,<storageclass2>,<storageclass3>"

Or

STORAGE_CLASSES=""

For the SERVICE_DOMAIN variable, enter a service domain name for the cluster. If you are going to assign FQDNs to the nodes, DNS lookup is required.

SERVICE_DOMAIN=my.domain.com

For the *_VM_CLASS variables, specify one of the standard VM classes for vSphere with Kubernetes. For information about the VM classes that vSphere with Kubernetes provides, see Virtual Machine Class Types for Tanzu Kubernetes Clusters in the vSphere with Kubernetes documentation. For example, set the following variables.

CONTROL_PLANE_VM_CLASS=guaranteed-large

WORKER_VM_CLASS=guaranteed-xlarge

For the SERVICE_CIDR variable, specify the CIDR range to use for the Kubernetes services. The recommended range is 100.64.0.0/13. Use a different range if the recommended range is unavailable.

SERVICE_CIDR=100.64.0.0/13

For the CLUSTER_CIDR variable, specify the CIDR range to use for pods. The recommended range is 100.96.0.0/11. Use a different range if the recommended range is unavailable.

CLUSTER_CIDR=100.96.0.0/11
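Taken together, and assuming a hypothetical storage class named gold-storage-policy that was returned in the previous step, the full set of variables might be exported as follows:

export CONTROL_PLANE_STORAGE_CLASS=gold-storage-policy
export WORKER_STORAGE_CLASS=gold-storage-policy
export DEFAULT_STORAGE_CLASS=gold-storage-policy
export STORAGE_CLASSES=""
export SERVICE_DOMAIN=my.domain.com
export CONTROL_PLANE_VM_CLASS=guaranteed-large
export WORKER_VM_CLASS=guaranteed-xlarge
export SERVICE_CIDR=100.64.0.0/13
export CLUSTER_CIDR=100.96.0.0/11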

9. Obtain the list of Kubernetes versions that are available in the Supervisor Cluster.

kubectl get virtualmachineimages


10. Obtain the list of namespaces that are available in the Supervisor Cluster.

kubectl get namespaces

11. Run tkg create cluster to create a cluster in vSphere with Kubernetes.

When deploying clusters to vSphere with Kubernetes, you must provide the namespace when you run tkg create cluster. If the available versions of Kubernetes differ from the one expected by Tanzu Kubernetes Grid, you must also specify the Kubernetes version. You must also specify which of the Tanzu Kubernetes Grid plans to use, dev or prod.

tkg create cluster my-vsphere7-cluster --plan=dev --namespace=<namespace> --kubernetes-version=v1.16.8+vmware.1-tkg.3.60d2ffd

What to Do Next

You can now use the Tanzu Kubernetes Grid CLI to deploy more Tanzu Kubernetes clusters to the vSphere with Kubernetes Supervisor Cluster. You can also use the Tanzu Kubernetes Grid CLI to manage the lifecycles of clusters that are already running there. For information about how to manage the lifecycle of clusters, see the other topics in Deploying Tanzu Kubernetes Clusters and Managing their Lifecycle.


Create Tanzu Kubernetes Cluster Configuration Files

You can use the Tanzu Kubernetes Grid CLI to create YAML files for Tanzu Kubernetes clusters without actually creating the clusters. If you are experienced with creating and editing YAML files to use for deploying Kubernetes clusters, you can modify the generated YAML files to customize your clusters.

To generate a YAML file for a cluster configuration, run the tkg config cluster command with the same options as you would specify when running tkg create cluster. For more information about the tkg create cluster options, see Create Tanzu Kubernetes Clusters.

For example, the following command uses all of the possible tkg config cluster options, including identifying the namespace in which to run the cluster.

tkg config cluster my-cluster --plan=dev --controlplane-machine-count 3 --worker-machine-count 10 --namespace=my_namespace

If you are satisfied with the displayed configuration file, run the tkg create cluster command with the same options, to create the cluster.

tkg create cluster my-cluster --plan=dev --controlplane-machine-count 3 --worker-machine-count 10

NOTE: Running the tkg config cluster command works in the same way as specifying the --dry-run option with tkg create cluster. For more information about the --dry-run option, see Preview the YAML for a Tanzu Kubernetes Cluster.

Deploy a Cluster from a Saved YAML File

When you run tkg config cluster, Tanzu Kubernetes Grid sends the YAML file for the cluster to stdout, so that you can save it for repeated future use.

To deploy a cluster from a saved YAML file, you must use kubectl rather than the Tanzu Kubernetes Grid CLI. When you use kubectl to deploy a cluster from a saved YAML file, the post-creation steps to automatically enable Calico in the cluster are not run. Consequently, there are additional steps that you must perform manually, to enable Calico networking in the cluster.

1. Run tkg config cluster to create a cluster configuration and save it to a YAML file.

tkg config cluster my-cluster --plan=dev --controlplane-machine-count 3 --worker-machine-count 10 > my-cluster.yaml

2. Use a tool such as yq to extract the Calico YAML information from the generated configuration file, and save it as a new YAML file.


cat my-cluster.yaml | yq r -d7 - stringData.calicoYaml > postcreation_steps.yaml
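As an optional sanity check, you can confirm that the generated file contains multiple YAML documents and that the extracted file contains the Calico manifests. This is a sketch only; the document index (-d7) in the command above assumes the Calico data is in the eighth YAML document of the generated file.

grep -c '^---' my-cluster.yaml

head -n 20 postcreation_steps.yaml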

3. Make sure that kubectl is set to the context of the management cluster on which you ran tkg config cluster .

kubectl config use-context my-mgmt-cluster-admin@my-mgmt-cluster

4. Use kubectl apply to deploy the Tanzu Kubernetes cluster from the YAML that you generated with tkg config cluster.

kubectl apply -f my-cluster.yaml

5. When the cluster is ready, set kubectl to the context of the newly created cluster.

kubectl config use-context my-cluster-admin@my-cluster

6. Run kubectl apply on the YAML that you created with yq , to enable Calico in the cluster.

kubectl apply -f postcreation_steps.yaml

The Tanzu Kubernetes cluster is running, and Calico is enabled in the cluster.


Connect to and Examine Tanzu Kubernetes Clusters

After you have deployed Tanzu Kubernetes clusters, you use the tkg get cluster and tkg get credentials commands to obtain the list of running clusters and their credentials. Then, you can connect to the clusters by using kubectl and start working with your clusters.

Obtain the List of Deployed Tanzu Kubernetes Clusters
Obtain Tanzu Kubernetes Cluster Credentials
Examine the Deployed Cluster

Obtain the List of Deployed Tanzu Kubernetes Clusters

To see the list of Tanzu Kubernetes Grid management clusters and the Tanzu Kubernetes clusters that they are managing, use the tkg get command.

1. If you have deployed more than one management cluster, run tkg get management-cluster to see the list of management clusters.

tkg get management-cluster

If you deployed two management clusters, named vsphere-mgmt-cluster and aws-mgmt-cluster, you will see the following output:

+-------------------------+-------------------------------------------+
| MANAGEMENT CLUSTER NAME | CONTEXT NAME                              |
+-------------------------+-------------------------------------------+
| vsphere-mgmt-cluster *  | vsphere-mgmt-cluster-admin@my-cluster     |
| aws-mgmt-cluster        | aws-mgmt-cluster-admin@my-other-cluster   |
+-------------------------+-------------------------------------------+

The management cluster context that is the current focus of the Tanzu Kubernetes Grid CLI and kubectl is marked with an asterisk ( * ).

2. To change the focus of the Tanzu Kubernetes Grid CLI to a different management cluster context, run the tkg set management-cluster command.

tkg set management-cluster aws-mgmt-cluster

3. To list all of the Tanzu Kubernetes clusters that are running in the default namespace of this management cluster, run the tkg get cluster command.

tkg get cluster

The output lists all of the Tanzu Kubernetes clusters that are currently running in the default namespace, and provides their contexts.
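For example, the output might look similar to the following. The cluster names and context names shown here are illustrative values, not output from a real deployment, and the exact columns may differ.

+-----------------+---------------------------------------+
| CLUSTER NAME    | CONTEXT NAME                          |
+-----------------+---------------------------------------+
| my-cluster      | my-cluster-admin@my-cluster           |
| my-prod-cluster | my-prod-cluster-admin@my-prod-cluster |
+-----------------+---------------------------------------+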


4. If the management cluster has clusters running in namespaces other than default , you must specify the --namespace option to list the clusters running in a given namespace.

tkg get cluster --namespace=my-namespace

Obtain Tanzu Kubernetes Cluster Credentials

After you create a Tanzu Kubernetes cluster, you obtain the kubeconfig of the deployed cluster by running the tkg get credentials command.

1. To automatically add the credentials of a cluster to your kubeconfig file, specify the name of the cluster when you run tkg get credentials.

tkg get credentials my-cluster

You should see the following output:

Credentials of workload cluster my-cluster have been saved
You can now access the cluster by switching the context to my-cluster-admin@my-cluster under /root/.kube/config

If the cluster is running in a namespace other than the default namespace, you must specify the --namespace option to get the credentials of that cluster.

tkg get credentials my-cluster --namespace=my-namespace

To save the credentials in a separate kubeconfig file, for example to distribute them to developers, specify the --export-file option.

tkg get credentials my-cluster --export-file my-cluster-credentials

IMPORTANT: By default, unless you specify the --export-file option to save the kubeconfig for a cluster to a specific file, the credentials for all clusters that you deploy from the Tanzu Kubernetes Grid CLI are added to a shared kubeconfig file. If you delete the shared kubeconfig file, all clusters become unusable.
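For example, a developer who receives the exported file can run kubectl against the cluster without merging the credentials into their own kubeconfig. This is a sketch; the file name below matches the --export-file example above.

kubectl get pods --kubeconfig=my-cluster-credentials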

Examine the Deployed Cluster

1. After you have added the credentials to your kubeconfig, you can connect to the cluster by using kubectl.

kubectl config use-context my-cluster-admin@my-cluster

2. Use kubectl to see the status of the nodes in the cluster.

kubectl get nodes


For example, if you deployed the my-prod-cluster in Deploy a Cluster with a Highly Available Control Plane with the prod plan and the default 3 control plane nodes and worker nodes, you see the following output.

NAME                                    STATUS   ROLES    AGE     VERSION
my-prod-cluster-gp4rl                   Ready    master   8m51s   v1.17.3+vmware.1
my-prod-cluster-md-0-6946bcb48b-dk7m6   Ready    <none>   6m45s   v1.17.3+vmware.1
my-prod-cluster-md-0-6946bcb48b-dq8s9   Ready    <none>   7m23s   v1.17.3+vmware.1
my-prod-cluster-md-0-6946bcb48b-nrdlp   Ready    <none>   7m8s    v1.17.3+vmware.1
my-prod-cluster-n8bh7                   Ready    master   5m58s   v1.17.3+vmware.1
my-prod-cluster-xflrg                   Ready    master   3m39s   v1.17.3+vmware.1

Because networking with Calico is enabled by default in Tanzu Kubernetes clusters, all of the nodes are in the Ready state without requiring any additional configuration.

3. Use kubectl to see the status of the pods running in the cluster.

kubectl get pods -A

If you deployed the my-prod-cluster to vSphere, you see the following pods running in the kube-system namespace in the cluster.

NAMESPACE     NAME                                            READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-7986b8994b-kph5f        1/1     Running   0          18m
kube-system   calico-node-96xkq                               1/1     Running   0          17m
kube-system   calico-node-dp887                               1/1     Running   0          18m
kube-system   calico-node-gvh5b                               1/1     Running   0          16m
kube-system   calico-node-m6xgw                               1/1     Running   0          16m
kube-system   calico-node-pbz5h                               1/1     Running   0          17m
kube-system   calico-node-q6zh8                               1/1     Running   0          13m
kube-system   coredns-5c4f46bfcb-dhm7s                        1/1     Running   0          18m
kube-system   coredns-5c4f46bfcb-hlkks                        1/1     Running   0          18m
kube-system   etcd-my-prod-cluster-gp4rl                      1/1     Running   0          18m
kube-system   etcd-my-prod-cluster-n8bh7                      1/1     Running   0          15m
kube-system   etcd-my-prod-cluster-xflrg                      1/1     Running   0          13m
kube-system   kube-apiserver-my-prod-cluster-gp4rl            1/1     Running   0          18m
kube-system   kube-apiserver-my-prod-cluster-n8bh7            1/1     Running   0          16m
kube-system   kube-apiserver-my-prod-cluster-xflrg            1/1     Running   0          13m
kube-system   kube-controller-manager-my-prod-cluster-gp4rl   1/1     Running   1          18m
kube-system   kube-controller-manager-my-prod-cluster-n8bh7   1/1     Running   2          16m
kube-system   kube-controller-manager-my-prod-cluster-xflrg   1/1     Running   0          13m
kube-system   kube-proxy-68fkt                                1/1     Running   0          16m
kube-system   kube-proxy-dc4kf                                1/1     Running   0          17m
kube-system   kube-proxy-fnjkg                                1/1     Running   0          13m
kube-system   kube-proxy-g2kq6                                1/1     Running   0          16m
kube-system   kube-proxy-r48c8                                1/1     Running   0          17m
kube-system   kube-proxy-x55vb                                1/1     Running   0          18m
kube-system   kube-scheduler-my-prod-cluster-gp4rl            1/1     Running   2          18m
kube-system   kube-scheduler-my-prod-cluster-n8bh7            1/1     Running   1          15m
kube-system   kube-scheduler-my-prod-cluster-xflrg            1/1     Running   0          13m
kube-system   vsphere-cloud-controller-manager-6x98w          1/1     Running   3          18m
kube-system   vsphere-cloud-controller-manager-gzmmd          1/1     Running   0          15m
kube-system   vsphere-cloud-controller-manager-rmtmq          1/1     Running   0          13m
kube-system   vsphere-csi-controller-0                        5/5     Running   2          18m
kube-system   vsphere-csi-node-6r64z                          3/3     Running   1          17m
kube-system   vsphere-csi-node-bt78l                          3/3     Running   0          17m
kube-system   vsphere-csi-node-l8t5n                          3/3     Running   0          16m
kube-system   vsphere-csi-node-qwr4w                          3/3     Running   0          15m
kube-system   vsphere-csi-node-rp9qd                          3/3     Running   0          16m
kube-system   vsphere-csi-node-vjqsh                          3/3     Running   0          12m

You can see from the list above that the following services are running in the cluster:

Calico, the container networking interface
coredns, for DNS
etcd, for key-value storage
kube-apiserver, the Kubernetes API server
kube-proxy, the Kubernetes network proxy
kube-scheduler, for scheduling and availability
vsphere-cloud-controller-manager, the Kubernetes cloud provider for vSphere
vsphere-csi-controller and vsphere-csi-node, the container storage interface for vSphere


Scale Tanzu Kubernetes Clusters

After you create a Tanzu Kubernetes cluster, you can scale it up or down by increasing or reducing the number of node VMs that it contains. To scale a cluster, use the tkg scale cluster command. You change the number of control plane nodes by specifying the --controlplane-machine-count option. You change the number of worker nodes by specifying the --worker-machine-count option.

NOTE: If you deployed Tanzu Kubernetes clusters to vSphere 7.0 with Kubernetes, you can only scale the number of worker nodes upwards. You cannot scale down the number of worker nodes on vSphere 7 with Kubernetes. You cannot scale the number of control plane nodes either up or down on clusters that run in vSphere 7 with Kubernetes.

To scale a cluster that you originally deployed with 3 control plane nodes and 5 worker nodes to 5 and 10 nodes respectively, run the following command:

tkg scale cluster cluster_name --controlplane-machine-count 5 --worker-machine-count 10

If you initially deployed a cluster with --controlplane-machine-count 1 and then you scale it up to 3 control plane nodes, Tanzu Kubernetes Grid automatically enables stacked HA on the control plane.
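For example, the following command, a sketch using a hypothetical cluster name, scales a single-node control plane up to 3 nodes and thereby enables stacked HA:

tkg scale cluster my-dev-cluster --controlplane-machine-count 3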

If the cluster is running in a namespace other than the default namespace, you must specify the --namespace option to scale that cluster.

tkg scale cluster cluster_name --controlplane-machine-count 5 --worker-machine-count 10 --namespace=my-namespace

IMPORTANT: Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.


Delete Tanzu Kubernetes Clusters

To delete a Tanzu Kubernetes cluster, run the tkg delete cluster command.

1. To list all of the Tanzu Kubernetes clusters that a management cluster is managing, run the tkg get cluster command.

tkg get cluster

2. If there are clusters that you no longer require, run tkg delete cluster .

tkg delete cluster my-cluster

If the cluster is running in a namespace other than the default namespace, you must specify the --namespace option to delete that cluster.

tkg delete cluster my-cluster --namespace=my-namespace

To skip the yes/no verification step when you run tkg delete cluster , specify the --yes option.

tkg delete cluster my-cluster --namespace=my-namespace --yes

IMPORTANT: Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.


Configuring and Managing the Tanzu Kubernetes Grid Instance

Tanzu Kubernetes Grid includes binaries for tools that help you to provide in-cluster and shared services to your Tanzu Kubernetes Grid instance. All of the provided binaries and container images are built by VMware.

Tanzu Kubernetes Grid includes extensions for authentication, log forwarding, and ingress control.

Create Clusters with User Authentication
Implementing Log Forwarding with Fluent Bit
Implementing Ingress Control on Tanzu Kubernetes Clusters with Contour
Delete Management Clusters

Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle

1. On the system that you use as the bootstrap environment, go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.

2. Download the Tanzu Kubernetes Grid extension manifests bundle, tkg-extensions-manifests-v1.0.0_vmware.1.tar.gz .

3. Use either the tar command or the extraction tool of your choice to unpack the bundle of YAML manifest files for the Tanzu Kubernetes Grid extensions.

tar -xzf tkg-extensions-manifests-v1.0.0_vmware.1.tar.gz

For convenience, unpack the bundle in the same location as the one from which you run tkg and kubectl commands.
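As a quick check, listing the unpacked folder should show the extension subfolders that the procedures in this section reference, such as authentication, cert-manager, and logging:

ls tkg-extensions-v1.0.0/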


Create Clusters with User Authentication

You can deploy Tanzu Kubernetes clusters that implement authentication and authorization, so that only users with the correct permissions can access those clusters. This release of Tanzu Kubernetes Grid provides user authentication of clusters by implementing the open source Dex and Gangway projects.

Tanzu Kubernetes Grid includes signed binaries for Dex, so that you can enable authentication services in your clusters. Dex is an OpenID Connect (OIDC) provider that enables authentication for Kubernetes clusters by connecting to an external identity provider (IDP), such as an LDAP server, or to OIDC providers like Okta. Tanzu Kubernetes Grid uses the NodePort service type to expose the Dex service when running on vSphere and the LoadBalancer service type to expose the Dex service on Amazon EC2.

IMPORTANT:

In this release of Tanzu Kubernetes Grid, the provided authentication implementation assumes that you use self-signed certificates. If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to implement authentication with your own certificates.
Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this section.

The process to set up authentication with Dex and Gangway involves several stages:

1. Deploy Dex on your management cluster.

If your management cluster runs on vSphere, Tanzu Kubernetes Grid supports using Dex with either LDAP or OIDC. If your management cluster runs on Amazon EC2, only OIDC is supported. The identity provider that you configure Dex to use is used by all of the clusters in your Tanzu Kubernetes Grid instance.

Perform one of the following procedures to deploy Dex:

Deploy Dex with LDAP to a Management Cluster Running on vSphere
Deploy Dex with OIDC to a Management Cluster Running on vSphere
Deploy Dex with OIDC to a Management Cluster Running on Amazon EC2

2. Deploy an OIDC-enabled Tanzu Kubernetes cluster.

To use the Dex service, you must deploy Tanzu Kubernetes clusters with an embedded OIDC endpoint. The OIDC endpoint allows the cluster to connect to your LDAP or OIDC server. The procedure to Deploy an Authentication-Enabled Cluster is the same for both vSphere and Amazon EC2 deployments.

3. Enable Gangway on the OIDC-enabled cluster.

Gangway is a Kubernetes authentication client that you install on each workload cluster for which you want to implement authentication. Gangway generates a kubeconfig that allows clusters to use Dex to connect to your identity provider.

Perform one of the following procedures to enable Gangway:

Enable Gangway on Clusters on vSphere


Enable Gangway on Clusters on Amazon EC2

4. Access the cluster with your IDP credentials.

Gangway exposes a Web-based endpoint on workload clusters, to which end users can connect with their IDP credentials, in order to access the application that runs in the cluster.

Access Clusters with Your IDP Credentials

If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to implement other authentication providers with Tanzu Kubernetes Grid.


Deploy Dex with LDAP to a Management Cluster Running on vSphere

The procedure in this topic describes how to deploy Dex on a Tanzu Kubernetes Grid management cluster that is running in vSphere, and connect it to an LDAP server.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have deployed a management cluster.
You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions. For information about where to obtain the bundle, see Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle.

Procedure

1. Navigate to the bundle of Tanzu Kubernetes Grid extension manifests and open the file tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml

2. Modify 02-certs-selfsigned.yaml with information about your management cluster.

Replace <MGMT_CLUSTER_IP1> and <MGMT_CLUSTER_IP2> with the IP addresses of the control plane nodes for your management cluster. Remove the row for <MGMT_CLUSTER_IP2> if your management cluster has a single node control plane. Add more rows if your control plane has more than two nodes.

3. Open the Dex configuration map file tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/03-cm.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/03-cm.yaml

4. Modify 03-cm.yaml with information about your management cluster and LDAP server.

Replace <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of your management cluster.
Replace <LDAP_HOST> with the IP or DNS address of your LDAP server.
Update the userSearch parameters with your LDAP server configuration.
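As an illustration only, the LDAP connector section of 03-cm.yaml might look similar to the following after you edit it. The exact keys and layout come from the file in the extensions bundle; the host, baseDN, and filter values here are hypothetical examples, not defaults.

connectors:
- type: ldap
  id: ldap
  name: LDAP
  config:
    # Hypothetical LDAP server address; replaces <LDAP_HOST>
    host: ldap.example.com:636
    userSearch:
      # Hypothetical search base and filter for your directory
      baseDN: ou=people,dc=example,dc=com
      filter: "(objectClass=person)"
      username: uid
      idAttr: uid
      emailAttr: mail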


5. Set the focus of kubectl to the context of your management cluster.

For example, if your cluster is named my-cluster , run the following command.

kubectl config use-context my-cluster-admin@my-cluster

6. Apply all of the YAML files in the tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap folder to your management cluster, in order.

i. Create a namespace named tanzu-system-auth in your management cluster for the authentication service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/01-namespace.yaml

ii. Generate a self-signed certificate.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/02-certs-selfsigned.yaml

iii. Deploy your LDAP configuration.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/03-cm.yaml

iv. Configure Role-Based Access Control (RBAC):

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/04-rbac.yaml

v. Deploy Dex.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/05-deployment.yaml

vi. Create the Dex NodePort service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/06-service.yaml

7. Run kubectl get pods -A to list all of the pods running in the management cluster.

You should see the Dex service running in a pod with a name similar to dex-6849555c67-bqmpd .

What to Do Next

Deploy an Authentication-Enabled Cluster, which has an embedded OIDC endpoint that can connect to your LDAP server.


Deploy Dex with OIDC to a Management Cluster Running on vSphere

The procedure in this topic describes how to deploy Dex on a Tanzu Kubernetes Grid management cluster that is running in vSphere, if your identity provider is OIDC.

Prerequisites
Procedure
Deploy a Tanzu Kubernetes Cluster with OIDC Authentication
What to Do Next

Prerequisites

You have deployed a management cluster.
You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions. For information about where to obtain the bundle, see Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle.

Procedure

1. Open the Dex configuration map file tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/03-cm.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/03-cm.yaml

2. Update tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/03-cm.yaml as follows:

Replace two instances of <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of your management cluster.
Replace one instance of <OIDC_IDP_URL> with the IP or DNS address of the OIDC server.

3. Create a secret file from the provided example.

cp tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/05-0-secret.example tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/05-0-secret.yaml

4. Open the OIDC secret file in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/05-0-secret.yaml


5. Replace <CLIENT_ID> and <CLIENT_SECRET> with the Base64-encoded values of the client_id and secret that you obtain from your OIDC provider.

For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options in order to get a client_id and secret.

6. Set the focus of kubectl to the context of your management cluster.

For example, if your cluster is named my-cluster , run the following command.

kubectl config use-context my-cluster-admin@my-cluster

7. Apply all of the YAML files that you created to your management cluster.

i. Create a namespace in your management cluster for the authentication service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/01-namespace.yaml

ii. Generate a self-signed certificate.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/02-certs-selfsigned.yaml

iii. Connect to your OIDC server.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/03-cm.yaml

iv. Configure Role-Based Access Control (RBAC).

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/04-rbac.yaml

v. Deploy Dex.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/05-deployment.yaml

vi. Start the service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/vsphere/oidc/06-service.yaml

8. Run kubectl get pods --namespace tanzu-system-auth to see the pod that is running the Dex service.

The service is running in a pod with a name similar to dex-6849555c67-bqmpd .

What to Do Next

Deploy an Authentication-Enabled Cluster, which has an embedded OIDC endpoint that can connect to your OIDC provider.


Deploy Dex to a Management Cluster Running on Amazon EC2

This topic describes how to deploy Dex on a Tanzu Kubernetes Grid management cluster that is running in Amazon EC2.

This release of Tanzu Kubernetes Grid supports using Dex with OIDC in management clusters that you deploy to Amazon EC2. LDAP is not supported for Amazon EC2 deployments.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have deployed a management cluster.
You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions, tkg-extensions-manifests-v1.0.0_vmware.1.tar.gz.

Procedure

1. Set the focus of kubectl to the context of your management cluster.

For example, if your cluster is named my-management-cluster , run the following command.

kubectl config use-context my-management-cluster-admin@my-management-cluster

2. Create a namespace named tanzu-system-auth in your management cluster for the authentication service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/01-namespace.yaml

3. Create the Dex service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/02-service.yaml

4. Get the hostname of the load balancer of the Dex service.

kubectl get svc dexsvc -n tanzu-system-auth -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
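Optionally, you can capture the hostname in a shell variable so that you can paste it into the files that you edit in the following steps. This is a convenience sketch; the variable name is arbitrary.

DEX_SVC_LB_HOSTNAME=$(kubectl get svc dexsvc -n tanzu-system-auth -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

echo $DEX_SVC_LB_HOSTNAME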

5. Open 03-certs-selfsigned.yaml in a text editor.

For example, use vi to edit the file.


vi tkg-extensions-v1.0.0/authentication/dex/aws/oidc/03-certs-selfsigned.yaml

6. Replace <DEX_SVC_LB_HOSTNAME> with the hostname of the load balancer of the Dex service, from the previous step.

7. Create the self-signed certificate.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/03-certs-selfsigned.yaml

8. Open the Dex configuration map file, 04-cm.yaml, in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/aws/oidc/04-cm.yaml

9. Update 04-cm.yaml with information about your load balancer and OIDC provider.

Replace <DEX_SVC_LB_HOSTNAME> with the hostname of the load balancer of the Dex service.
Replace <OIDC_IDP_URL> with the IP or DNS address of your OIDC provider, for example an Okta server.

10. Apply the configuration map to the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/04-cm.yaml

11. Configure Role-Based Access Control (RBAC).

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/05-rbac.yaml

12. Create a secret file from the provided example.

cp tkg-extensions-v1.0.0/authentication/dex/aws/oidc/06-0-secret.example tkg-extensions-v1.0.0/authentication/dex/aws/oidc/06-0-secret.yaml

13. Open the OIDC secret file in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/aws/oidc/06-0-secret.yaml

14. Replace <CLIENT_ID> and <CLIENT_SECRET> with the Base64-encoded values of the client_id and secret that you obtain from your OIDC provider.

For example, if your provider is Okta, log in to Okta, create a Web application, and select the Client Credentials options in order to get a client_id and secret.

15. Pass the secret to the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/06-0-secret.yaml

16. Create the Dex deployment.


kubectl apply -f tkg-extensions-v1.0.0/authentication/dex/aws/oidc/06-deployment.yaml

17. Run kubectl get pods --namespace tanzu-system-auth to see the pod that is running the Dex service.

The service is running in a pod with a name similar to dex-6849555c67-bqmpd .

What to Do Next

Deploy an Authentication-Enabled Cluster, which has an embedded OIDC endpoint that can connect to your OIDC provider.


Deploy an Authentication-Enabled Cluster

To use the Dex authentication service, you must deploy a Tanzu Kubernetes cluster that is configured to use Dex. The Tanzu Kubernetes Grid extensions bundle provides a special plan from which to deploy a cluster that is enabled for Dex. The cluster includes a Dex OIDC endpoint that allows it to connect to your LDAP or OIDC Identity Provider.

This procedure applies to both vSphere and Amazon EC2 deployments.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have completed the steps in one of the following procedures.

Deploy Dex with LDAP to a Management Cluster Running on vSphere
Deploy Dex with OIDC to a Management Cluster Running on vSphere
Deploy Dex with OIDC to a Management Cluster Running on Amazon EC2

Procedure

1. Move the file cluster-template-oidc.yaml into the folder for the Cluster API provider for either vSphere or Amazon EC2.

In the following commands, replace <version> with the current version of the Cluster API provider. For example, Tanzu Kubernetes Grid 1.0.0 uses Cluster API v0.6.3 for vSphere and v0.5.2 for Amazon EC2.

vSphere:

mv tkg-extensions-v1.0.0/authentication/dex/vsphere/cluster-template-oidc.yaml .tkg/providers/infrastructure-vsphere/v0.6.3/cluster-template-oidc.yaml

Amazon EC2:

mv tkg-extensions-v1.0.0/authentication/dex/aws/cluster-template-oidc.yaml .tkg/providers/infrastructure-aws/v0.5.2/cluster-template-oidc.yaml

2. Make sure that the focus of kubectl is set to the context of your management cluster.

kubectl config use-context my-management-cluster-admin@my-management-cluster

3. Set the following environment variables on your bootstrap environment, depending on whether you are deploying the cluster to vSphere or Amazon EC2.


vSphere: Replace <MGMT_CLUSTER_IP> with the IP address of the management cluster control plane node that you identified in 03-cm.yaml in the previous procedure.

export OIDC_ISSUER_URL=https://<MGMT_CLUSTER_IP>:30167

export OIDC_USERNAME_CLAIM=email

export OIDC_GROUPS_CLAIM=groups

export DEX_CA=$(kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -D | gzip | base64)

Amazon EC2: Replace <DEX_SVC_LB_HOSTNAME> with the hostname of the load balancer that you identified in 04-cm.yaml in the previous procedure.

export OIDC_ISSUER_URL=https://<DEX_SVC_LB_HOSTNAME>

export OIDC_USERNAME_CLAIM=email

export OIDC_GROUPS_CLAIM=groups

export DEX_CA=$(kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -D | gzip | base64)

NOTE: On Linux systems, replace base64 -D with base64 -d in the DEX_CA variable. On Mac OS systems, use -D.

4. Make sure that the context of your management cluster is the focus of the Tanzu Kubernetes Grid CLI.

tkg set management-cluster my-management-cluster

5. Use the Tanzu Kubernetes Grid CLI to create a Tanzu Kubernetes cluster from the plan that you copied into the providers folder.

tkg create cluster my-oidc-cluster --plan=oidc

6. When the cluster deployment finishes, get the credentials of the created cluster.

tkg get credentials my-oidc-cluster


7. Set the focus of kubectl to the OIDC-enabled cluster.

kubectl config use-context my-oidc-cluster-admin@my-oidc-cluster

8. Install cert-manager on the Tanzu Kubernetes cluster.

kubectl apply -f tkg-extensions-v1.0.0/cert-manager/

9. Create a namespace named tanzu-system-auth on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/vsphere/01-namespace.yaml

What to Do Next

Now that you have deployed Dex on the management cluster and deployed an OIDC-enabled cluster, you must enable Gangway on the cluster and connect it to the Dex service. The procedure is different depending on whether your cluster is running on vSphere or Amazon EC2.

Enable Gangway on Clusters on vSphere
Enable Gangway on Clusters on Amazon EC2


Enable Gangway on Clusters on vSphere

This procedure describes how to enable Gangway on OIDC-enabled clusters that you have deployed to vSphere.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have completed the steps in either Deploy Dex with LDAP to a Management Cluster Running on vSphere or Deploy Dex with OIDC to a Management Cluster Running on vSphere.
You have deployed a Tanzu Kubernetes cluster from the cluster-template-oidc.yaml file, as described in Deploy an Authentication-Enabled Cluster.

Procedure

1. Set the focus of kubectl to the OIDC-enabled cluster.

kubectl config use-context my-oidc-cluster-admin@my-oidc-cluster

2. Edit the file tkg-extensions-v1.0.0/authentication/gangway/vsphere/02-config.yaml with information about your Tanzu Kubernetes Grid instance.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/gangway/vsphere/02-config.yaml

Replace all instances of <WORKLOAD_CLUSTER_NAME> with the name of the OIDC-enabled cluster.
Replace all instances of <MGMT_CLUSTER_IP> with the IP address of one of the control plane nodes of the management cluster.
Replace <WORKLOAD_CLUSTER_IP> with the IP address of the control plane node or nodes of the workload cluster.
Replace <APISERVER_URL> with the IP or DNS address of the Kubernetes API Server endpoint for the workload cluster. This is the address of the load balancer VM that is running in the cluster, which has a name similar to my-cluster-default-lb.

3. Apply the configuration to the OIDC-enabled cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/vsphere/02-config.yaml

4. Create an openssl client secret file from the provided example.

cp tkg-extensions-v1.0.0/authentication/gangway/vsphere/03-secret.example tkg-extensions-v1.0.0/authentication/gangway/vsphere/03-secret.yaml


5. Open 03-secret.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/gangway/vsphere/03-secret.yaml

6. At the command line, use openssl to create a session key.

The following command uses pbcopy to copy the output to your clipboard.

openssl rand -base64 32 | pbcopy
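The pbcopy utility is specific to Mac OS. On Linux systems you can, for example, print the key and copy it manually, or pipe it to xclip if that utility is installed. This is an optional sketch:

openssl rand -base64 32

openssl rand -base64 32 | xclip -selection clipboard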

7. In 03-secret.yaml, update the sessionKey value by pasting in the output of the previous command.

8. Create a client secret.

Run the following commands and copy the output of the echo command.

clientSecret=$(openssl rand -base64 32)

echo -n "$clientSecret" | base64

9. In 03-secret.yaml, update the clientSecret value by pasting in the output of the previous command.

10. Pass the secret to the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/vsphere/03-secret.yaml

11. Open tkg-extensions-v1.0.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml

Replace <WORKLOAD_CLUSTER_IP1> and <WORKLOAD_CLUSTER_IP2> with the IP addresses of the control plane node VM or VMs. Remove the row for <WORKLOAD_CLUSTER_IP2> if your workload cluster has a single node control plane. Add more rows if your control plane has more than two nodes.

12. Create a self-signed certificate by applying 04-cert-selfsigned.yaml to the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/vsphere/04-cert-selfsigned.yaml

13. Provide the CA for the Dex service running on the management cluster to the Gangway service running in the Tanzu Kubernetes cluster.

i. Set the focus of kubectl to the context of the management cluster.

kubectl config use-context my-management-cluster-admin@my-management-cluster


ii. Get the CA from the management cluster.

kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -D > dex-ca.crt

NOTE: On Linux systems, replace base64 -D with base64 -d .

iii. Set the focus of kubectl back to the context of the OIDC-enabled cluster.

kubectl config use-context my-oidc-cluster-admin@my-oidc-cluster

iv. Create a ConfigMap file with the CA certificate.

kubectl create cm dex-ca -n tanzu-system-auth --from-file=dex-ca.crt=dex-ca.crt

14. Create the deployment.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/vsphere/05-deployment.yaml

15. Create the Gangway service in the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/vsphere/06-service.yaml

16. Open the ConfigMap for the Dex service that is running in the management cluster.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/vsphere/ldap/03-cm.yaml

17. Add a new entry for the OIDC-enabled cluster to the staticClients list, to inform Dex that the Gangway application is a client of the Dex service.

staticClients:
...
- id: <WORKLOAD_CLUSTER_NAME>
  redirectURIs:
  - 'https://<WORKLOAD_CLUSTER_IP>:30166/callback'
  name: '<WORKLOAD_CLUSTER_NAME>'
  # echo -n '<clientSecret>'
  secret: <clientSecret>

Replace <WORKLOAD_CLUSTER_NAME>, <WORKLOAD_CLUSTER_IP>, and <clientSecret> with the values that you used in the previous steps.

18. Set the focus of kubectl to the context of the management cluster.

kubectl config use-context my-management-cluster-admin@my-management-cluster


19. Get the ID of the Dex service pod that is running in the management cluster.

kubectl get pods --namespace tanzu-system-auth

NAME                   READY   STATUS    RESTARTS   AGE
dex-6849555c67-bqmpd   1/1     Running   0          2d5h

20. Bounce the Dex pod by deleting it.

kubectl delete pod dex-6849555c67-bqmpd -n tanzu-system-auth

What to Do Next

Dex and Gangway are now running on your management cluster and Tanzu Kubernetes cluster respectively. You can now use the credentials from your external identity provider (IDP) to connect to the cluster, as described in Access Clusters with Your IDP Credentials.


Enable Gangway on Clusters on Amazon EC2

This procedure describes how to enable Gangway on OIDC-enabled clusters that you have deployed to Amazon EC2.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have completed the steps in Deploy Dex to a Management Cluster Running on Amazon EC2.
You have deployed a Tanzu Kubernetes cluster from the cluster-template-oidc.yaml file, as described in Deploy an Authentication-Enabled Cluster.

Procedure

1. Set the focus of kubectl to the OIDC-enabled cluster.

kubectl config use-context my-oidc-cluster-admin@my-oidc-cluster

2. Create the Gangway service.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/aws/02-service.yaml

3. Get the host name of the Gangway service load balancer.

kubectl get svc gangwaysvc -n tanzu-system-auth -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

4. Edit the file tkg-extensions-v1.0.0/authentication/gangway/aws/03-config.yaml with information about your Tanzu Kubernetes Grid instance.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/gangway/aws/03-config.yaml

Replace all instances of <WORKLOAD_CLUSTER_NAME> with the name of the OIDC-enabled cluster.
Replace <DEX_SVC_LB_HOSTNAME> with the host name of the Dex service load balancer that is running in the management cluster, which you identified in the previous procedure.
Replace <GANGWAY_SVC_LB_HOSTNAME> with the host name of the Gangway service load balancer that you obtained in the preceding step.
Replace <APISERVER_URL> with the host name of the Kubernetes API Server endpoint for the workload cluster.


This is the bastion VM that is running in the cluster, which has a name like my-oidc-cluster-bastion.

5. Apply the configuration to the OIDC-enabled cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/aws/03-config.yaml

6. Create an openssl client secret file from the provided example.

cp tkg-extensions-v1.0.0/authentication/gangway/aws/04-secret.example tkg-extensions-v1.0.0/authentication/gangway/aws/04-secret.yaml

7. Open 04-secret.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/gangway/aws/04-secret.yaml

8. At the command line, use openssl to create a session key.

The following command uses pbcopy to copy the output to your clipboard.

openssl rand -base64 32 | pbcopy

9. In 04-secret.yaml, update the sessionKey value by pasting in the output of the previous command.

10. Create a client secret.

Run the following commands and copy the output of the echo command.

clientSecret=$(openssl rand -base64 32)

echo -n "$clientSecret" | base64

11. In 04-secret.yaml, update the clientSecret value by pasting in the output of the previous command.

12. Pass the secret to the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/aws/04-secret.yaml

13. Open tkg-extensions-v1.0.0/authentication/gangway/aws/05-cert-selfsigned.yaml in a text editor.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/gangway/aws/05-cert-selfsigned.yaml

Replace <GANGWAY_SVC_LB_HOSTNAME> in 05-cert-selfsigned.yaml with the host name of the Gangway service load balancer.

14. Create a self-signed certificate by applying 05-cert-selfsigned.yaml to the cluster.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/aws/05-cert-selfsigned.yaml


15. Provide the CA for the Dex service running on the management cluster to the Gangway service running in the Tanzu Kubernetes cluster.

i. Set the focus of kubectl to the context of the management cluster.

kubectl config use-context my-management-cluster-admin@my-management-cluster

ii. Get the CA from the management cluster.

kubectl get secret dex-cert-tls -n tanzu-system-auth -o 'go-template={{ index .data "ca.crt" }}' | base64 -D > dex-ca.crt

NOTE: On Linux systems, replace base64 -D with base64 -d . On Mac OS, use -D .

iii. Set the focus of kubectl back to the context of the OIDC-enabled cluster.

kubectl config use-context my-oidc-cluster-admin@my-oidc-cluster

iv. Create a ConfigMap file with the CA certificate.

kubectl create cm dex-ca -n tanzu-system-auth --from-file=dex-ca.crt=dex-ca.crt

16. Create the deployment.

kubectl apply -f tkg-extensions-v1.0.0/authentication/gangway/aws/06-deployment.yaml

17. Open the ConfigMap for the Dex service that is running in the management cluster.

For example, use vi to edit the file.

vi tkg-extensions-v1.0.0/authentication/dex/aws/oidc/04-cm.yaml

18. Add a new entry for the OIDC-enabled cluster to the staticClients list.

staticClients:
...
- id: <WORKLOAD_CLUSTER_NAME>
  redirectURIs:
  - 'https://<GANGWAY_SVC_LB_HOSTNAME>/callback'
  name: '<WORKLOAD_CLUSTER_NAME>'
  secret: <clientSecret>

Replace <WORKLOAD_CLUSTER_NAME>, <GANGWAY_SVC_LB_HOSTNAME>, and <clientSecret> with the values that you used in the previous steps.

19. Set the focus of kubectl to the context of the management cluster.

kubectl config use-context my-management-cluster-admin@my-management-cluster


20. Get the ID of the Dex service pod that is running in the management cluster.

kubectl get pods --namespace tanzu-system-auth

NAME                   READY   STATUS    RESTARTS   AGE
dex-6849555c67-bqmpd   1/1     Running   0          2d5h

21. Bounce the Dex pod by deleting it.

kubectl delete pod dex-6849555c67-bqmpd -n tanzu-system-auth

What to Do Next

Dex and Gangway are now running on your management cluster and Tanzu Kubernetes cluster respectively. You can now use the credentials from your external identity provider (IDP) to connect to the cluster, as described in Access Clusters with Your IDP Credentials.


Access the OIDC-Enabled Cluster with Your IDP Credentials

After you have deployed the OIDC-enabled cluster and configured Dex and Gangway, you can use your credentials from your external identity provider (IDP) to connect to the cluster.

Prerequisites
Procedure
Configuring Role-Based Access Control (RBAC)

Prerequisites

1. You have deployed Dex on your management cluster by completing the steps in one of the following procedures:

Deploy Dex with LDAP to a Management Cluster Running on vSphere
Deploy Dex with OIDC to a Management Cluster Running on vSphere
Deploy Dex with OIDC to a Management Cluster Running on Amazon EC2

2. You have deployed an OIDC-enabled cluster by completing the steps in Deploy an Authentication-Enabled Cluster.

3. You have enabled Gangway on your OIDC-enabled cluster by completing the steps in either of the following procedures:

Enable Gangway on Clusters on vSphere
Enable Gangway on Clusters on Amazon EC2

Procedure

1. If you have not already done so, add the credentials of the OIDC-enabled cluster to your kubeconfig.

tkg get credentials my-oidc-cluster

2. Set the focus of kubectl to the context of the OIDC-enabled cluster.

kubectl config use-context my-oidc-cluster-admin@my-oidc-cluster

3. Get the list of nodes that are running in the OIDC-enabled cluster.

kubectl get nodes -owide

For a cluster that is running in vSphere, you will see output similar to the following:

NAME                                    STATUS   ROLES    AGE   VERSION            INTERNAL-IP      EXTERNAL-IP      OS-IMAGE                 KERNEL-VERSION   CONTAINER-RUNTIME
my-oidc-cluster-control-plane-7dv69     Ready    master   47h   v1.17.3+vmware.1   10.184.111.216   10.184.111.216   VMware Photon OS/Linux   4.19.112-1.ph3   containerd://1.3.3
my-oidc-cluster-md-0-6654f7958f-t877d   Ready    <none>   47h   v1.17.3+vmware.1   10.184.99.196    10.184.99.196    VMware Photon OS/Linux   4.19.112-1.ph3   containerd://1.3.3

4. Copy the IP address of the control plane node, under EXTERNAL-IP.

5. To access the OIDC endpoint address, go to https://<WORKLOAD_CLUSTER_IP>:30166 in a browser, where <WORKLOAD_CLUSTER_IP> is the address that you copied in the previous step.

You can distribute this URL to any users who need to access this cluster. Users can access the cluster provided that their IDP account has the correct permissions set. For information about how to set permissions, see Configuring Role-Based Access Control (RBAC) below.

NOTE: Because the example in this section uses a self-signed certificate, follow the browser prompts to accept the certificates from Dex and Gangway.

i. On the Tanzu Kubernetes Grid Authentication page at https://<WORKLOAD_CLUSTER_IP>:30166, users click the Sign In button.
ii. At the Log in to Your Account page, users enter their credentials from your IDP.
iii. Once they have logged in, they can download the kubeconfig file and access the cluster by using kubectl.

Configuring Role-Based Access Control (RBAC)

The kubeconfig that Gangway provides enables user authentication to clusters. For a user to be able to perform any type of create, read, update, or delete (CRUD) actions against the cluster, the appropriate cluster roles and role bindings must be defined. For information about configuring RBAC on clusters, see Using RBAC Authorization in the Kubernetes documentation.

The following example shows a cluster role binding that gives any user in an example group cluster-admin access.

kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: <name>
subjects:
- kind: Group
  name: <group-name>
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole # this must be Role or ClusterRole
  name: cluster-admin # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
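To use the example, save it to a file, replace <name> and <group-name> with your own values, and apply it to the workload cluster. The file name here is arbitrary:

kubectl apply -f cluster-admin-group-binding.yaml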


Implementing Log Forwarding with Fluent Bit

Fluent Bit is a lightweight log processor and forwarder that allows you to collect data and logs from different sources, unify them, and send them to multiple destinations. Tanzu Kubernetes Grid includes signed binaries for Fluent Bit that you can deploy on the management cluster and on workload clusters to provide a log-forwarding service.

The Fluent Bit implementation provided in this release of Tanzu Kubernetes Grid allows you to gather logs from management clusters or Tanzu Kubernetes clusters running in vSphere or Amazon EC2, and to forward them to Elastic Search, Kafka, Splunk, or an HTTP endpoint.

First, you deploy Fluent Bit on the cluster from which you want to gather logs. Then, you configure an output plugin on the cluster, depending on whether you use Elastic Search, Kafka, Splunk, or an HTTP endpoint.

Create Namespace and RBAC Components
Deploy Fluent Bit with an Elastic Search Output Plugin
Deploy Fluent Bit with a Kafka Output Plugin
Deploy Fluent Bit with a Splunk Output Plugin
Deploy Fluent Bit with an HTTP Endpoint Output Plugin

IMPORTANT: Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this section.


Create Namespace and RBAC Components

You implement log forwarding with Fluent Bit at the individual cluster level. This applies to both management clusters and Tanzu Kubernetes clusters that you deploy from the management cluster.

The procedures in this topic describe how to deploy Fluent Bit on management clusters and Tanzu Kubernetes clusters that are running on either vSphere or Amazon EC2.

You deploy Fluent Bit by applying YAML files from the Tanzu Kubernetes Grid extensions bundle onto the clusters.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have deployed a management cluster to either vSphere or Amazon EC2 and optionally one or more Tanzu Kubernetes clusters.
You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions. For information about where to obtain the bundle, see Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle.
You have deployed one of the following logging management backends for storing and analyzing logs.

Elastic Search
Kafka
Splunk
HTTP

Procedure

Perform this procedure on all clusters from which you want to collect logs. You can apply this procedure to either management clusters or Tanzu Kubernetes clusters that are running on either vSphere or Amazon EC2. The instructions in this procedure assume that you unpacked the bundle of Tanzu Kubernetes Grid extensions in the location in which you are running the commands.

1. Get the contexts of the clusters from which to gather logs.

To see the contexts of all of your management clusters, run tkg get management-cluster.
To see the contexts of all of the clusters that a management cluster manages, run tkg set management-cluster my-management-cluster and then tkg get cluster.

2. Set the focus of kubectl to the context of the management cluster or Tanzu Kubernetes cluster from which to gather logs.

kubectl config use-context my-cluster-admin@my-cluster


3. Create a namespace on the cluster for Fluent Bit.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/00-fluent-bit-namespace.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/00-fluent-bit-namespace.yaml

4. Create role-based access control (RBAC) resources for Fluent Bit.

This procedure creates a cluster role that grants get, list, and watch permissions on pods and namespace objects. The ClusterRoleBinding binds the ClusterRole to the ServiceAccount within the logging namespace.

i. Create a service account.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/01-fluent-bit-service-account.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/01-fluent-bit-service-account.yaml

ii. Create a cluster role.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/02-fluent-bit-role.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/02-fluent-bit-role.yaml

iii. Create a cluster role binding.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/03-fluent-bit-role-binding.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/03-fluent-bit-role-binding.yaml

What to Do Next

Depending on whether you use Elastic Search, Kafka, Splunk, or HTTP, configure an output plugin on your cluster.

Deploy Fluent Bit with an Elastic Search Output Plugin
Deploy Fluent Bit with a Kafka Output Plugin


Deploy Fluent Bit with a Splunk Output Plugin
Deploy Fluent Bit with an HTTP Endpoint Output Plugin


Deploy Fluent Bit with an Elastic Search Output Plugin

This procedure describes how to configure Elastic Search as the output plugin on a cluster on which you have deployed Fluent Bit as the log forwarder.

Prerequisites
Procedure
What to Do Next

Prerequisites

You performed the steps in Create Namespace and RBAC Components.
You have deployed Elastic Search as the logging management backend for storing and analyzing logs.

Procedure

1. Open the file 04-fluent-bit-configmap.yaml in a text editor.

For example, use vi to edit the file.

vSphere:

vi tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/elasticsearch/04-fluent-bit-configmap.yaml

Amazon EC2:

vi tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/elasticsearch/04-fluent-bit-configmap.yaml

2. Update 04-fluent-bit-configmap.yaml to set the following environment variables:

<TKG_CLUSTER_NAME>: The name of the Tanzu Kubernetes Grid cluster.
<TKG_INSTANCE_NAME>: The name of the Tanzu Kubernetes Grid instance. This name should be the same for the management cluster and all of the workload clusters that make up the Tanzu Kubernetes Grid deployment.
<FLUENT_ELASTICSEARCH_HOST>: The service name of Elastic Search within your cluster.
<FLUENT_ELASTICSEARCH_PORT>: The port on which the Elastic Search server is listening.
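For reference, these values feed the Elastic Search output section of the Fluent Bit configuration. A minimal sketch of such an [OUTPUT] section is shown below; the exact section in 04-fluent-bit-configmap.yaml may differ, and the host and port values here are hypothetical.

[OUTPUT]
  # es is Fluent Bit's Elasticsearch output plugin
  Name  es
  Match *
  # Hypothetical in-cluster Elastic Search service and port
  Host  elasticsearch.monitoring.svc
  Port  9200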

3. Apply the configuration map to the cluster.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/elasticsearch/04-fluent-bit-configmap.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/elasticsearch/04-fluent-bit-configmap.yaml


4. Create Fluent Bit as a DaemonSet.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/elasticsearch/05-fluent-bit-ds.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/elasticsearch/05-fluent-bit-ds.yaml
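After the DaemonSet is created, you can confirm that a Fluent Bit pod is running on each node and inspect its logs. This is a sketch: the tanzu-system-logging namespace and the k8s-app=fluent-bit label are assumptions, so check 00-fluent-bit-namespace.yaml and 05-fluent-bit-ds.yaml for the exact values.

# One pod per node indicates a healthy DaemonSet rollout.
kubectl get daemonsets,pods -n tanzu-system-logging
# Tail the forwarder logs to check for output plugin errors.
kubectl logs -n tanzu-system-logging -l k8s-app=fluent-bit --tail=20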


Deploy Fluent Bit with a Kafka Output Plugin

This procedure describes how to configure Kafka as the output plugin on a cluster on which you have deployed Fluent Bit as the log forwarder.

Prerequisites
Procedure
What to Do Next

Prerequisites

You performed the steps in Create Namespace and RBAC Components.
You have deployed Kafka as the logging management backend for storing and analyzing logs.

Procedure

1. Open the file 04-fluent-bit-configmap.yaml in a text editor. For example, use vi to edit the file.

vSphere:

vi tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/kafka/04-fluent-bit-configmap.yaml

Amazon EC2:

vi tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/kafka/04-fluent-bit-configmap.yaml

2. Update 04-fluent-bit-configmap.yaml to set the following environment variables:

<TKG_CLUSTER_NAME>: The name of the Tanzu Kubernetes Grid cluster.
<TKG_INSTANCE_NAME>: The name of the Tanzu Kubernetes Grid instance. This name should be the same for the management cluster and all of the workload clusters that make up the Tanzu Kubernetes Grid deployment.
<KAFKA_BROKER_SERVICE_NAME>: The name of the Kafka broker service.
<KAFKA_TOPIC_NAME>: The name of the topic that ingests the logs in Kafka.
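After you complete the deployment, you can confirm that logs reach Kafka by reading a few messages back from the topic. This is a sketch that assumes the standard Kafka console tools are installed and uses hypothetical broker and topic names; substitute the values that you set in the configuration map.

# Read the first five messages from the log topic.
kafka-console-consumer.sh \
  --bootstrap-server my-kafka-broker:9092 \
  --topic tkg-logs \
  --from-beginning \
  --max-messages 5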

3. Apply the configuration map to the cluster.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/kafka/04-fluent-bit-configmap.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/kafka/04-fluent-bit-configmap.yaml


4. Create Fluent Bit as a DaemonSet.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/kafka/05-fluent-bit-ds.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/kafka/05-fluent-bit-ds.yaml


Deploy Fluent Bit with a Splunk Output Plugin

This procedure describes how to configure Splunk as the output plugin on a cluster on which you have deployed Fluent Bit as the log forwarder.

Prerequisites
Procedure
What to Do Next

Prerequisites

You performed the steps in Create Namespace and RBAC Components.
You have deployed Splunk as the logging management backend for storing and analyzing logs.

Procedure

1. Open the file 04-fluent-bit-configmap.yaml in a text editor.

For example, use vi to edit the file.

vSphere:

vi tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml

Amazon EC2:

vi tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/splunk/04-fluent-bit-configmap.yaml

2. Update 04-fluent-bit-configmap.yaml to set the following environment variables:

<TKG_CLUSTER_NAME>: The name of the Tanzu Kubernetes Grid cluster.
<TKG_INSTANCE_NAME>: The name of the Tanzu Kubernetes Grid instance. This name should be the same for the management cluster and all of the workload clusters that make up the Tanzu Kubernetes Grid deployment.
<SPLUNK_HOST>: The IP address or host name of the target Splunk server.
<SPLUNK_PORT>: The TCP port of the target Splunk server.
<SPLUNK_TOKEN>: The authentication token for the HTTP event collector interface.
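Before you apply the configuration map, you can check that the Splunk HTTP event collector accepts your token by sending a test event. This is a sketch: it assumes the default collector endpoint path /services/collector/event, and -k skips certificate verification if the server uses a self-signed certificate.

# A correctly configured collector typically responds with {"text":"Success","code":0}.
curl -k "https://<SPLUNK_HOST>:<SPLUNK_PORT>/services/collector/event" \
  -H "Authorization: Splunk <SPLUNK_TOKEN>" \
  -d '{"event": "fluent-bit connectivity test"}'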

3. Apply the configuration map to the cluster.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/splunk/04-fluent-bit-configmap.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/splunk/04-fluent-bit-configmap.yaml

4. Create Fluent Bit as a DaemonSet.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/splunk/05-fluent-bit-ds.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/splunk/05-fluent-bit-ds.yaml


Deploy Fluent Bit with an HTTP Endpoint Output Plugin

This procedure describes how to configure an HTTP endpoint as the output plugin on a cluster on which you have deployed Fluent Bit as the log forwarder.

Prerequisites
Procedure
What to Do Next

Prerequisites

You performed the steps in Create Namespace and RBAC Components.
You use an HTTP endpoint as the logging management backend for storing and analyzing logs.

Procedure

1. Open the file 04-fluent-bit-configmap.yaml in a text editor.

For example, use vi to edit the file.

vSphere:

vi tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/http/04-fluent-bit-configmap.yaml

Amazon EC2:

vi tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/http/04-fluent-bit-configmap.yaml

2. Update 04-fluent-bit-configmap.yaml to set the following environment variables:

<TKG_CLUSTER_NAME>: The name of the Tanzu Kubernetes Grid cluster.
<TKG_INSTANCE_NAME>: The name of the Tanzu Kubernetes Grid instance. This name should be the same for the management cluster and all of the workload clusters that make up the Tanzu Kubernetes Grid deployment.
<HTTP_HOST>: The IP address or host name of the target HTTP server.
<HTTP_PORT>: The TCP port of the target HTTP server.
<HTTP_URI>: The HTTP URI for the target web server.
<HTTP_HEADER_KEY_VALUE>: The HTTP header key/value pair. For example, for VMware vRealize Log Insight Cloud, specify the key as Authorization Bearer and the value as the API token.
<HTTP_FORMAT>: The data format to use in the HTTP request body. For example, for vRealize Log Insight Cloud, set this value to json.
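As an illustration, a filled-in set of values for a generic JSON ingestion endpoint might look like the following. All of these values are hypothetical; substitute the host, port, URI, header, and format that your logging backend expects.

<HTTP_HOST>: logs.example.com
<HTTP_PORT>: 443
<HTTP_URI>: /v1/ingest
<HTTP_HEADER_KEY_VALUE>: Authorization Bearer MY_API_TOKEN
<HTTP_FORMAT>: json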

3. Apply the configuration map to the cluster.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/http/04-fluent-bit-configmap.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/http/04-fluent-bit-configmap.yaml

4. Create Fluent Bit as a DaemonSet.

vSphere:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/vsphere/output/http/05-fluent-bit-ds.yaml

Amazon EC2:

kubectl apply -f tkg-extensions-v1.0.0/logging/fluent-bit/aws/output/http/05-fluent-bit-ds.yaml


Implementing Ingress Control on Tanzu Kubernetes Clusters with Contour

Ingress control is a core Kubernetes concept that is implemented by a third-party proxy. Contour is a Kubernetes ingress controller that uses the Envoy edge and service proxy. Tanzu Kubernetes Grid includes signed binaries for Contour and Envoy that you can deploy on Tanzu Kubernetes clusters to provide ingress control services on those clusters.

For general information about ingress control, see Ingress Controllers in the Kubernetes documentation.

Deploy Contour on Tanzu Kubernetes Clusters Running on vSphere
Deploy Contour on Tanzu Kubernetes Clusters Running on Amazon EC2
View Data from Your Contour Deployment


Deploy Contour on Tanzu Kubernetes Clusters Running on vSphere

You deploy Contour and Envoy directly on Tanzu Kubernetes clusters. You do not need to deploy Contour on management clusters.

In this release of Tanzu Kubernetes Grid, the provided implementation of Contour and Envoy assumes that you use self-signed certificates. If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to configure Contour and Envoy with your own certificates.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have deployed a management cluster to vSphere and one or more Tanzu Kubernetes clusters.
You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions. For information about where to obtain the bundle, see Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle.

Procedure

The instructions in this procedure assume that you unpacked the bundle of Tanzu Kubernetes Grid extensions in the location in which you are running the commands.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

1. Set the focus of kubectl to the Tanzu Kubernetes cluster on which to deploy Contour.

kubectl config use-context my-cluster-admin@my-cluster

2. Install cert-manager on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/cert-manager/

3. Deploy Contour and Envoy on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/ingress/contour/vsphere/

4. Deploy some test pods and services on the cluster.


kubectl apply -f tkg-extensions-v1.0.0/ingress/contour/examples/common/

5. Deploy the Kubernetes ingress resource on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/ingress/contour/examples/https-ingress/

6. Add an /etc/hosts entry to map one of the worker node IP addresses to foo.bar.com .

Replace <WORKER_NODE_IP> in the following command with the IP address of one of the worker nodes in your Tanzu Kubernetes cluster.

echo '<WORKER_NODE_IP> foo.bar.com' | sudo tee -a /etc/hosts > /dev/null

7. Get the HTTPS node port of the Envoy service.

kubectl get service envoy -n tanzu-system-ingress -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}'

8. Verify that the following URLs work by going to the following addresses in a browser.

Replace <ENVOY_SERVICE_HTTPS_NODE_PORT> in the following URLs with the output of the kubectl get service envoy command that you ran in the preceding step.

https://foo.bar.com:<ENVOY_SERVICE_HTTPS_NODE_PORT>/foo
https://foo.bar.com:<ENVOY_SERVICE_HTTPS_NODE_PORT>/bar

You should see output similar to the following:

Hello, world!
Version: 1.0.0
Hostname: helloweb-7cd97b9cb8-vmnbj
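You can run the same check from the command line instead of a browser. This is a sketch; the -k option is needed because the provided implementation uses self-signed certificates.

# Capture the HTTPS node port from step 7 and request both test routes.
ENVOY_HTTPS_NODE_PORT=$(kubectl get service envoy -n tanzu-system-ingress \
  -o jsonpath='{.spec.ports[?(@.name=="https")].nodePort}')
curl -k "https://foo.bar.com:${ENVOY_HTTPS_NODE_PORT}/foo"
curl -k "https://foo.bar.com:${ENVOY_HTTPS_NODE_PORT}/bar"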

What to Do Next

With Contour and Envoy running in your cluster, you can View Data from Your Contour Deployment.


Deploy Contour on Tanzu Kubernetes Clusters Running on Amazon EC2

You deploy Contour and Envoy directly on Tanzu Kubernetes clusters. You do not need to deploy Contour on management clusters.

In this release of Tanzu Kubernetes Grid, the provided implementation of Contour and Envoy assumes that you use self-signed certificates. If you have Tanzu Kubernetes Grid Plus support, you can engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers, who can help you to configure Contour and Envoy with your own certificates.

Prerequisites
Procedure
What to Do Next

Prerequisites

You have deployed a management cluster to Amazon EC2 and one or more Tanzu Kubernetes clusters.
You have downloaded and unpacked the bundle of Tanzu Kubernetes Grid extensions. For information about where to obtain the bundle, see Download and Unpack the Tanzu Kubernetes Grid Extensions Bundle.

Procedure

The instructions in this procedure assume that you unpacked the bundle of Tanzu Kubernetes Grid extensions in the location in which you are running the commands.

Tanzu Kubernetes Grid does not support IPv6 addresses. This is because upstream Kubernetes only provides alpha support for IPv6. Always provide IPv4 addresses in the procedures in this topic.

1. Set the focus of kubectl to the Tanzu Kubernetes cluster on which to deploy Contour.

kubectl config use-context my-cluster-admin@my-cluster

2. Install cert-manager on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/cert-manager/

3. Deploy Contour and Envoy on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/ingress/contour/aws/

4. Deploy some test pods and services on the cluster.


kubectl apply -f tkg-extensions-v1.0.0/ingress/contour/examples/common/

5. Deploy the Kubernetes ingress resource on the cluster.

kubectl apply -f tkg-extensions-v1.0.0/ingress/contour/examples/https-ingress/

6. Get the host name of the Envoy service load balancer.

kubectl get service envoy -n tanzu-system-ingress -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'

7. Get the IP address of the Envoy service load balancer.

Replace <ENVOY_SERVICE_LB_HOSTNAME> in the following command with the output of the kubectl get service envoy command that you ran in the preceding step.

nslookup <ENVOY_SERVICE_LB_HOSTNAME>
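Steps 6 and 7 can also be combined into a single snippet. This is a sketch that assumes a Bash-compatible shell and the dig utility; adjust the lookup if your environment returns multiple addresses.

# Resolve the load balancer host name to an IP address in one pass.
ENVOY_LB_HOSTNAME=$(kubectl get service envoy -n tanzu-system-ingress \
  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
ENVOY_LB_IP=$(dig +short "$ENVOY_LB_HOSTNAME" | head -1)
echo "$ENVOY_LB_IP"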

8. Add an /etc/hosts entry to map the IP address of the Envoy service load balancer to foo.bar.com .

Replace <ENVOY_SERVICE_LB_IP> in the following command with the output of the nslookup command that you ran in the preceding step.

echo '<ENVOY_SERVICE_LB_IP> foo.bar.com' | sudo tee -a /etc/hosts > /dev/null

9. Verify that the following URLs work by going to the following addresses in a browser.

https://foo.bar.com/foo
https://foo.bar.com/bar

You should see output similar to the following:

Hello, world!
Version: 1.0.0
Hostname: helloweb-7cd97b9cb8-vmnbj

What to Do Next

With Contour and Envoy running in your cluster, you can View Data from Your Contour Deployment.


View Data from Your Contour Deployment

After you have deployed Contour and Envoy on clusters, you can use those services to view data about your deployments.

Prerequisites
Access the Envoy Administration Interface Remotely
Visualize the Internal Contour Directed Acyclic Graph

Prerequisites

You have deployed Contour and Envoy onto a Tanzu Kubernetes cluster by performing the steps in either Deploy Contour on Tanzu Kubernetes Clusters Running on vSphere or Deploy Contour on Tanzu Kubernetes Clusters Running on Amazon EC2.

Access the Envoy Administration Interface Remotely

1. Get an Envoy pod that matches the Envoy daemonset.

ENVOY_POD=$(kubectl -n tanzu-system-ingress get pod -l app=envoy -o name | head -1)

2. Forward port 9001 on the Envoy pod.

kubectl -n tanzu-system-ingress port-forward $ENVOY_POD 9001

3. In a browser, navigate to http://127.0.0.1:9001/.

You should see the Envoy administration interface.
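While the port-forward from step 2 is running, you can also query the administration interface with curl. These are standard Envoy admin endpoints:

# Runtime statistics.
curl -s http://127.0.0.1:9001/stats | head -20
# Upstream cluster membership and health.
curl -s http://127.0.0.1:9001/clusters | head -20
# Full configuration snapshot, saved for offline inspection.
curl -s http://127.0.0.1:9001/config_dump > envoy-config.json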


4. Click the links in the Envoy administration interface to see information about the operations in your cluster.

Visualize the Internal Contour Directed Acyclic Graph (DAG)

1. Get a Contour pod that matches the Contour daemonset.

CONTOUR_POD=$(kubectl -n tanzu-system-ingress get pod -l app=contour -o name | head -1)

2. Forward port 6060 on the Contour pod.

kubectl -n tanzu-system-ingress port-forward $CONTOUR_POD 6060

3. Open a new terminal window, then download and store the DAG as a *.png file. The dot command used here is part of Graphviz, which must be installed on your system.

curl localhost:6060/debug/dag | dot -T png > contour-dag.png

4. Open contour-dag.png to view the graph.


Delete Management Clusters

To delete a Tanzu Kubernetes Grid management cluster, run the tkg delete management-cluster command.

When you run tkg delete management-cluster, Tanzu Kubernetes Grid creates a temporary kind cleanup cluster on your bootstrap environment to manage the deletion process. The kind cluster is removed when the deletion process completes.

1. To list all of the management clusters that are running, run the tkg get management-cluster command.

tkg get management-cluster

The management cluster context that is the current focus of the Tanzu Kubernetes Grid CLI and kubectl is marked with an asterisk (*). By default, the management cluster that you deployed most recently is the focus of the Tanzu Kubernetes Grid CLI.

+-------------------------+-----------------------------------------+
| MANAGEMENT CLUSTER NAME | CONTEXT NAME                            |
+-------------------------+-----------------------------------------+
| my-cluster *            | my-cluster-admin@my-cluster             |
| my-other-cluster        | my-other-cluster-admin@my-other-cluster |
+-------------------------+-----------------------------------------+

2. If there are management clusters that you no longer require, run tkg delete management-cluster.

IMPORTANT: The tkg delete management-cluster command deletes the management cluster that is the current focus of the Tanzu Kubernetes Grid CLI. Make sure that you are connected to the correct management cluster by running tkg set management-cluster <cluster-name> before you run tkg delete management-cluster.

tkg delete management-cluster

To skip the yes/no verification step when you run tkg delete management-cluster , specify the --yes option.

tkg delete management-cluster --yes
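For example, a safe deletion sequence that uses the cluster names from the table above looks like this:

# Confirm which management cluster currently has the asterisk.
tkg get management-cluster
# Switch focus to the cluster that you intend to delete.
tkg set management-cluster my-other-cluster
# Delete the cluster that is now in focus, skipping the confirmation prompt.
tkg delete management-cluster --yes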

3. If there are Tanzu Kubernetes clusters running in the management cluster, the delete operation is not performed.

In this case, you can delete the management cluster in two ways:

Run tkg delete cluster to delete all of the running clusters and then run tkg delete management-cluster again.
Run tkg delete management-cluster with the --force option.

tkg delete management-cluster --force

IMPORTANT: Do not change context or edit the kubeconfig file while Tanzu Kubernetes Grid operations are running.


Troubleshooting Tips for Tanzu Kubernetes Grid

This section includes tips to help you to troubleshoot common problems that you might encounter when installing Tanzu Kubernetes Grid and deploying Tanzu Kubernetes clusters.

Clean Up Docker After Unsuccessful Management Cluster Deployments
Clean Up Amazon EC2 After Unsuccessful Management Cluster Deployments
Kind Cluster Remains after Deleting Management Cluster
Deploying a Tanzu Kubernetes Cluster Times Out, but the Cluster is Created

Clean Up Docker After Unsuccessful Management Cluster Deployments

Problem

Unsuccessful attempts to deploy Tanzu Kubernetes Grid can leave Docker objects in your system, which consume resources.

Solution

To clean up after attempts to deploy the management cluster fail, remove the Docker containers, images, and volumes that are left behind.

CAUTION: These steps remove all Docker containers, images, and volumes from your system. If you are running Docker processes that are not related to Tanzu Kubernetes Grid on this system, do not run these commands. Remove unneeded containers, images, and volumes individually.

1. Remove all kind clusters.

kind get clusters | xargs -n1 kind delete cluster --name

2. Remove all containers.

docker rm -f $(docker ps -aq)

3. Remove all container images.

docker rmi -f $(docker images -aq)

4. Remove all orphaned Docker volumes.

docker system prune --all --volumes -f


Clean Up After Unsuccessful Management Cluster Deployments

Problem

Unsuccessful attempts to deploy a Tanzu Kubernetes Grid management cluster leave orphaned objects in your vSphere instance or AWS account.

Solution

There are different ways to clean up unsuccessful deployments. Attempt these methods in the following order of preference.

1. Run kubectl delete to delete the cluster manually.

kubectl delete cluster.cluster.x-k8s.io/cluster_name -n tkg-system

If the kind bootstrap cluster is still present, remove its container as well.

docker rm -v tkg-kind-unique_ID-control-plane -f

2. In vSphere, locate the VMs created, power them off, and delete them from your system.
3. In AWS, use a tool such as aws-nuke with a specific configuration to delete the objects from Amazon EC2.
4. In AWS, log in to your Amazon EC2 dashboard and delete the objects manually in the console.

Kind Cluster Remains after Deleting Management Cluster

Problem

Running tkg delete management-cluster removes the management cluster, but fails to delete the local kind cluster from the bootstrap environment.

Solution

1. List all running kind clusters and remove the one whose name looks like tkg-kind-unique_ID.

kind get clusters
kind delete cluster --name tkg-kind-unique_ID

2. List all running clusters and identify the kind cluster.

docker ps -a

3. Copy the container ID of the kind cluster and remove it.

docker kill container_ID


Deploying a Tanzu Kubernetes Cluster Times Out, but the Cluster is Created

Problem

Running tkg create cluster fails with a timeout error similar to the following:

I0317 11:11:16.658433 clusterclient.go:341] Waiting for resource my-cluster of type *v1alpha3.Cluster to be up and running
E0317 11:26:16.932833 common.go:29] Error: unable to wait for cluster and get the cluster kubeconfig: error waiting for cluster to be provisioned (this may take a few minutes): cluster control plane is still being initialized
E0317 11:26:16.933251 common.go:33] Detailed log about the failure can be found at: /var/folders/_9/qrf26vd5629_y5vgxc1vjk440000gp/T/tkg-20200317T111108811762517.log

However, if you run tkg get cluster , the cluster appears to have been created.

+------------+-------------+
| NAME       | STATUS      |
+------------+-------------+
| my-cluster | Provisioned |
+------------+-------------+

Solution

1. Use the tkg get credentials command to add the cluster credentials to your kubeconfig .

tkg get credentials my-cluster

2. Set kubectl to the cluster's context.

kubectl config use-context my-cluster-admin@my-cluster

3. Check whether the cluster nodes are all in the ready state.

kubectl get nodes

4. Check whether all of the pods are up and running.

kubectl get pods -A

5. If all of the nodes and pods are running correctly, your Tanzu Kubernetes cluster has been created successfully and you can ignore the error.

6. If the nodes and pods are not running correctly, attempt to delete the cluster.


tkg delete cluster my-cluster

7. If tkg delete cluster fails, use kubectl to delete the cluster manually.


Troubleshooting Tanzu Kubernetes Clusters with Crash Recovery and Diagnostics

Crash Recovery and Diagnostics for Kubernetes is an open source project that helps you to investigate and troubleshoot unhealthy or unresponsive Kubernetes clusters. It automates the diagnosis of problem clusters that might be in an unstable state, or even inoperable. Crash Recovery and Diagnostics provides you with the ability to automatically collect machine states and other information from each node in a cluster.

To specify the resources to collect from cluster machines, you declare a series of commands in a diagnostics file. Like a Dockerfile, the diagnostics file is a collection of line-by-line directives with commands that are executed on each specified cluster machine. The output of the commands is then added to a tar file and saved for further analysis.

Tanzu Kubernetes Grid includes signed binaries for Crash Recovery and Diagnostics and a default diagnostics file for Photon OS Tanzu Kubernetes clusters.

Install the Crash Recovery and Diagnostics Binary
Run Crash Recovery and Diagnostics on Photon OS Tanzu Kubernetes Grid Clusters
Crash Recovery and Diagnostics Options

Install the Crash Recovery and Diagnostics Binary

1. Go to https://www.vmware.com/go/get-tkg and log in with your My VMware credentials.

2. Download the bundle for Crash Recovery and Diagnostics, crash-diagnostics-v0.2.2.tar.gz.

3. Use either the tar command or the extraction tool of your choice to unpack the bundle:

tar -xzf crash-diagnostics-v0.2.2.tar.gz

4. Use either the tar command or the extraction tool of your choice to unpack the crash-diagnostics-v0.2.2.tar bundle.

5. Use the gunzip command to unpack the binary for your platform.

Linux platforms:

gunzip crash-diagnostics-linux-v0.2.2.gz

Mac OS platforms:

gunzip crash-diagnostics-darwin-v0.2.2.gz

6. Move the binary into the /usr/local/bin folder, renaming it to crash-diagnostics. For example, on Linux:

mv ./crash-diagnostics-linux-v0.2.2 /usr/local/bin/crash-diagnostics


7. Make the file executable.

chmod +x /usr/local/bin/crash-diagnostics

Run Crash Recovery and Diagnostics on Photon OS Tanzu Kubernetes Grid Clusters

The Crash Recovery and Diagnostics bundle that Tanzu Kubernetes Grid provides includes a default Crash Recovery and Diagnostics file, Diagnostics.file, that you can use to diagnose problems on Photon OS management clusters and Tanzu Kubernetes clusters that you deploy on vSphere from Tanzu Kubernetes Grid.

Prerequisites

Crash Recovery and Diagnostics requires an SSH private/public key pair.
Ensure your Tanzu Kubernetes Grid VMs are configured to use your SSH public key.
Collect the IP addresses of your workload Tanzu Kubernetes cluster VMs.
Extract the kubeconfig file from the management cluster by running tkg get credentials <management-cluster-name>.

Procedure

1. Navigate to the location in which you downloaded and unpacked the Crash Diagnostic bundle, and open Diagnostics.file in a text editor.

For example, use vi to edit the file.

vi Diagnostics.file

The file contains a series of commands that run sequentially on the cluster VMs:

# FROM specifies a space-separated host list used to get diagnostics data.
# Retries is the maximum number of connection attempts made to connect to hosts.
FROM hosts:"<space-separated list of ip:port>" retries:"20"

# Environment variables used later in the file
ENV SSH_USER=<ssh-user>
ENV SSH_KEY=<private-ssh-key>

# AUTHCONFIG configures authentication for remote connection to cluster machines
# specified in the FROM declaration above. Each remote connection
# will use the specified username and private-key.
AUTHCONFIG username:$SSH_USER private-key:$SSH_KEY

# WORKDIR specifies a location on disk where the tool stages files
# before they are bundled.
WORKDIR <directory/path/>

# OUTPUT specifies a path for the generated tar.gz file output bundle
OUTPUT <file/path/name>.tar.gz

# Capture run time info from each cluster machine
CAPTURE sudo df
CAPTURE sudo df -i
CAPTURE sudo ifconfig -a
CAPTURE sudo rpm -qa
CAPTURE sudo netstat -anp
CAPTURE sudo netstat -aens
CAPTURE sudo route
CAPTURE sudo mount
CAPTURE sudo dmesg
CAPTURE sudo free
CAPTURE sudo uptime
CAPTURE sudo date
CAPTURE sudo ps auwwx --sort -rss
CAPTURE sudo bash -c "ulimit -a"
CAPTURE sudo bash -c "umask"
CAPTURE sudo cat /proc/meminfo
CAPTURE sudo cat /proc/cpuinfo
CAPTURE sudo cat /proc/vmstat
CAPTURE sudo cat /proc/swaps
CAPTURE sudo cat /proc/mounts
CAPTURE sudo arp -a
CAPTURE sudo env
CAPTURE sudo top -d 5 -n 5 -b
CAPTURE sudo docker ps -a
CAPTURE sudo iptables -L -n
CAPTURE sudo systemctl status kubelet
CAPTURE sudo systemctl status docker
CAPTURE sudo journalctl -xeu kubelet
CAPTURE sudo cat /var/log/cloud-init-output.log
CAPTURE sudo cat /var/log/cloud-init.log

# KUBECONFIG specifies the location of a kubeconfig file
# used by subsequent Kubernetes commands below.
KUBECONFIG <file/path/to>/kubeconfig

# Retrieve API objects and logs
KUBEGET objects kinds:"pods"
KUBEGET logs

2. Update the following elements with information about your cluster.

FROM: Add a space-separated list of cluster node VM addresses, in ip:port form.
ENV SSH_USER: The SSH user name for the cluster. For clusters running on vSphere, the user name is capv.
ENV SSH_KEY: For information about creating the SSH key pairs, see Create an SSH Key Pair in Prepare to Deploy the Management Cluster to vSphere.
WORKDIR: The location in which to prepare files before they are bundled into the tar output file.
OUTPUT: The location and name of the tar output file.
KUBECONFIG: The path to the configuration file for the cluster or clusters.
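A filled-in header might look like the following sketch. The IP addresses, ports, and paths are hypothetical examples; replace them with the values for your own environment.

FROM hosts:"10.0.0.10:22 10.0.0.11:22" retries:"20"
ENV SSH_USER=capv
ENV SSH_KEY=/home/user/.ssh/id_rsa
AUTHCONFIG username:$SSH_USER private-key:$SSH_KEY
WORKDIR /tmp/crashdir
OUTPUT /tmp/my-cluster-diagnostics.tar.gz
KUBECONFIG /home/user/.kube/config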

3. Run Crash Recovery and Diagnostics.

Run the crash-diagnostics run command from the location in which the Diagnostics.file is located.

crash-diagnostics run

Crash Recovery and Diagnostics Options


When you run the crash-diagnostics run command, by default Crash Recovery and Diagnostics searches for and executes a diagnostics script file named ./Diagnostics.file.

crash-diagnostics run

You can specify --file to run Crash Recovery and Diagnostics with a different diagnostics file. For example, if you have Tanzu Kubernetes Grid Plus support and engage with Tanzu Kubernetes Grid Plus Customer Reliability Engineers to resolve a problem, they might provide you with a custom diagnostics file to run.

crash-diagnostics run --file my-diagnostics.file

You can specify the output file to be generated by the tool by using --output , which overrides the default value.

crash-diagnostics run --file my-diagnostics.file --output my-cluster.tar.gz

If you specify the --debug flag, you see log messages on the screen similar to the following:

$> crash-diagnostics run --debug

DEBU[0000] Parsing script file
DEBU[0000] Parsing [1: FROM local]
DEBU[0000] FROM parsed OK
DEBU[0000] Parsing [2: WORKDIR /tmp/crasdir]
...
DEBU[0000] Archiving [/tmp/crashdir] in out.tar.gz
DEBU[0000] Archived /tmp/crashdir/local/df_-i.txt
DEBU[0000] Archived /tmp/crashdir/local/lsof_-i.txt
DEBU[0000] Archived /tmp/crashdir/local/netstat_-an.txt
DEBU[0000] Archived /tmp/crashdir/local/ps_-ef.txt
DEBU[0000] Archived /tmp/crashdir/local/var/log/syslog
INFO[0000] Created archive out.tar.gz
INFO[0002] Created archive out.tar.gz
INFO[0002] Output done


Tanzu Kubernetes Grid CLI Reference

The table below lists all of the commands and options of the Tanzu Kubernetes Grid CLI, and provides links to the sections in which they are documented.

Command | Options | Documented In
------- | ------- | -------------
tkg * | --config | Common Tanzu Kubernetes Grid Options; Deploy the Management Cluster to Amazon EC2 with the CLI; Deploy the Management Cluster to Amazon EC2 with the Installer Interface; Deploy the Management Cluster to vSphere with the CLI; Deploy the Management Cluster to vSphere with the Installer Interface
tkg * | --kubeconfig, --log_file, --quiet, --v | Common Tanzu Kubernetes Grid Options
tkg add management-cluster | | Use the Tanzu Kubernetes Grid CLI with a vSphere with Kubernetes Supervisor Cluster; Manage Multiple Management Clusters
tkg config cluster | --controlplane-machine-count, --kubernetes-version, --namespace, --plan, --worker-machine-count | Create Tanzu Kubernetes Cluster Configuration Files
tkg create cluster | --controlplane-machine-count | Deploy a Cluster with a Highly Available Control Plane
tkg create cluster | --dry-run | Preview the YAML for a Tanzu Kubernetes Cluster
tkg create cluster | --kubernetes-version | Deploy a Cluster that Runs a Different Version of Kubernetes
tkg create cluster | --namespace | Deploy a Cluster in a Specific Namespace
tkg create cluster | --plan | Tanzu Kubernetes Cluster Plans; Deploy a Default Tanzu Kubernetes Cluster
tkg create cluster | --worker-machine-count | Deploy a Cluster with Multiple Worker Nodes
tkg delete cluster | | Delete Tanzu Kubernetes Clusters
tkg delete management-cluster | | Delete Management Clusters
tkg get cluster | | Connect to and Examine Tanzu Kubernetes Clusters
tkg get credentials | | Connect to and Examine Tanzu Kubernetes Clusters
tkg get management-cluster | | Connect to and Examine Tanzu Kubernetes Clusters; Examine the Management Cluster Deployment; Manage Multiple Management Clusters
tkg init | --infrastructure=aws | Deploy the Management Cluster to Amazon EC2 with the CLI
tkg init | --infrastructure=vsphere | Deploy the Management Cluster to vSphere with the CLI
tkg init | --name, --plan | Deploy the Management Cluster to vSphere with the CLI; Deploy the Management Cluster to Amazon EC2 with the CLI
tkg init | --ui | Deploy the Management Cluster to vSphere with the Installer Interface; Deploy the Management Cluster to Amazon EC2 with the Installer Interface
tkg init | --use-existing-bootstrap-cluster | See Use an Existing Bootstrap Cluster below
tkg scale cluster | --controlplane-machine-count, --namespace, --worker-machine-count | Scale Tanzu Kubernetes Clusters
tkg set management-cluster | | Connect to and Examine Tanzu Kubernetes Clusters; Manage Multiple Management Clusters
tkg version | | Set Up the Bootstrap Environment for Tanzu Kubernetes Grid

Use an Existing Bootstrap Cluster

By default, when you deploy a management cluster by running tkg init, Tanzu Kubernetes Grid creates a temporary kind cluster on the bootstrap environment that it uses to provision the final management cluster. This temporary cluster is removed after the deployment of the final management cluster to vSphere or Amazon EC2 completes successfully. The same process of creating a temporary kind cluster also applies when you run tkg delete management-cluster to remove a management cluster.

In some circumstances, it might be desirable to keep the local bootstrap cluster after deploying or deleting a management cluster. For example, you might want to examine the objects in the cluster or review its logs. In this case, you can skip the creation of the kind cluster and use any Kubernetes cluster that already exists on your bootstrap environment as the local bootstrap cluster.

IMPORTANT:

Using an existing bootstrap cluster is an advanced use case that is for experienced Kubernetes users. It is strongly recommended to use the default kind cluster that Tanzu Kubernetes Grid provides to bootstrap your management clusters.
If you have used an existing cluster to bootstrap a management cluster, you cannot use that same cluster to bootstrap another management cluster. The same applies to deleting management clusters.


Procedure
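If you do not already have a suitable local cluster, you can create one with kind before you begin. This is a sketch; it assumes kind v0.6.0 or later, which names the kubectl context kind-<cluster-name>.

# Create a local cluster to reuse as the bootstrap cluster.
kind create cluster --name my-bootstrap-cluster
kubectl config use-context kind-my-bootstrap-cluster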

1. Set the focus of kubectl to the context of the local Kubernetes cluster that you want to use as a bootstrap cluster.

kubectl config use-context my-bootstrap-cluster-admin@my-bootstrap-cluster

2. To create a management cluster, run the tkg init command and specify the --use-existing-bootstrap-cluster option.

tkg init --infrastructure=vsphere --use-existing-bootstrap-cluster my-bootstrap-cluster

3. To delete a management cluster, run the tkg delete management-cluster command and specify the --use-existing-bootstrap-cluster option.

If you are deleting a management cluster, first run tkg get management-cluster and tkg set management-cluster to make sure that the focus of the Tanzu Kubernetes Grid CLI is set to the management cluster to delete.

tkg delete management-cluster --use-existing-bootstrap-cluster my-bootstrap-cluster
