Cloud Native Environment on Oracle Private Cloud Appliance
Deployment of an HA Kubernetes cluster on Oracle Private Cloud
Appliance followed by Deployment of Oracle WebLogic Server
August 25, 2020 | Version 1.01
Copyright © 2020, Oracle and/or its affiliates
Confidential – Public
2 TECHNICAL BRIEF | Cloud Native Environment on Oracle Private Cloud Appliance/Private Cloud at Customer | Version 1.01
Copyright © 2020, Oracle and/or its affiliates | Confidential – Public
PURPOSE STATEMENT
This document provides an overview of features and enhancements included in Oracle Private Cloud Appliance and Oracle
Private Cloud at Customer release 2.4.3. It is intended solely to help you assess the business benefits of upgrading to 2.4.3
and to plan your IT projects around application modernization by adopting the cloud native deployment model.
DISCLAIMER
This document in any form, software or printed matter, contains proprietary information that is the exclusive property of
Oracle. Your access to and use of this confidential material is subject to the terms and conditions of your Oracle software
license and service agreement, which has been executed and with which you agree to comply. This document and
information contained herein may not be disclosed, copied, reproduced or distributed to anyone outside Oracle without
prior written consent of Oracle. This document is not part of your license agreement nor can it be incorporated into any
contractual agreement with Oracle or its subsidiaries or affiliates.
This document is for informational purposes only and is intended solely to assist you in planning for the implementation
and upgrade of the product features described. It is not a commitment to deliver any material, code, or functionality, and
should not be relied upon in making purchasing decisions. The development, release, and timing of any features or
functionality described in this document remains at the sole discretion of Oracle.
Due to the nature of the product architecture, it may not be possible to safely include all features described in this document
without risking significant destabilization of the code.
TABLE OF CONTENTS
Purpose Statement 2
Disclaimer 2
Introduction 4
Integrated production-ready cloud native environment 4
Kubernetes Cluster Lifecycle Operations 5
Displaying Kubernetes Clusters 5
Creating a Kubernetes Cluster 6
Scaling a Kubernetes Cluster 14
Deleting a Kubernetes Cluster 15
Deploy Oracle WebLogic Server on Kubernetes Cluster 16
Prerequisites 16
Deploy Oracle WebLogic Server Kubernetes Operator 16
Deploy Traefik Load Balancer 19
Deploy a WebLogic Server Domain 20
Scale the WebLogic Server Domain 23
Conclusion 24
Resources 24
INTRODUCTION
Oracle Private Cloud Appliance and Oracle Private Cloud at Customer are on-premises, cloud native converged
infrastructure systems that allow customers to efficiently consolidate business-critical middleware and application workloads. Oracle
Private Cloud Appliance is cost effective, easy to manage, and delivers better performance than disparate build-your-
own solutions. Oracle Private Cloud Appliance together with Oracle Exadata provides a powerful, single-vendor application
and database platform for today's data-driven enterprise.
Oracle Private Cloud Appliance runs enterprise workloads alongside cloud-native applications to support a variety of
application requirements. Its built-in secure multitenancy, zero-downtime upgradability, capacity on demand, and single-
pane-of-glass management make it the ideal infrastructure for rapid deployment of mission-critical workloads. Oracle Private
Cloud Appliance together with Oracle Cloud Infrastructure provides customers with a complete solution to
securely maintain workloads on both private and public clouds.
INTEGRATED PRODUCTION-READY CLOUD NATIVE ENVIRONMENT
Oracle Private Cloud Appliance and Oracle Private Cloud at Customer come fully integrated with a production-ready Oracle
Linux Cloud Native Environment that simplifies and automates the lifecycle of Kubernetes workloads. Oracle Linux Cloud
Native Environment is a curated set of open source Cloud Native Computing Foundation (CNCF) projects that can be easily
deployed, have been tested for interoperability, and for which enterprise-grade support is offered. With Oracle Linux
Cloud Native Environment, Oracle provides the features customers need to develop microservices-based applications that can
be deployed in environments that support open standards and specifications.
Oracle Private Cloud Appliance and Oracle Private Cloud at Customer offer you the most optimized platform to consolidate
your enterprise mission-critical workloads and your modern cloud-native containerized workloads. It provides you the
simplest path to modernize your workloads and helps you accelerate the digital transformation to meet your changing
business needs.
Oracle Private Cloud Appliance allows you to easily manage the creation, deletion, and scaling of highly available Kubernetes
clusters with a few clicks in the Enterprise Manager Self Service Portal or using the pca-admin CLI. The integrated Kubernetes
dashboard offers single-pane GUI management for clusters.
Every Kubernetes cluster created this way is highly available (HA), with:
- 3 Kubernetes master nodes
- A variable number of Kubernetes worker nodes
Image Caption 1. Architecture Overview of HA Kubernetes clusters on Oracle Private Cloud Appliance/Private Cloud at Customer
KUBERNETES CLUSTER LIFECYCLE OPERATIONS
Image Caption 2. Kubernetes cluster operations workflow on Oracle Private Cloud Appliance/Private Cloud at Customer
With Oracle Private Cloud Appliance Release 2.4.3, Kubernetes lifecycle operations (creation/deletion/scaling) have been
integrated into the pca-admin CLI as well as the Enterprise Manager Self Service portal for easy GUI-based management. On
Oracle Private Cloud at Customer, Kubernetes clusters can be created, deleted, and scaled by a Customer_User in the Oracle
Enterprise Manager Self Service portal.
The primary states of a Kubernetes cluster are shown in Image 2:
CONFIGURED – The Kubernetes cluster configuration exists and is either valid or invalid
SUBMITTED – The job has been queued for execution in Oracle VM
BUILDING – The Kubernetes cluster is being created, with sub-states specifying what is being built: Network,
Master VMs, Load Balancer, Control Plane, Worker VMs
RECOVERING – The Kubernetes cluster is being stopped and master/worker VMs are being removed
STOPPING – Nodes in a node pool of the Kubernetes cluster are being stopped
AVAILABLE – The Kubernetes cluster is ready to use
ERROR – The Kubernetes cluster needs to be stopped and possibly requires manual intervention
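These states can also be checked from a script, for example to wait for a cluster to reach AVAILABLE. A minimal sketch, assuming the tabular `list kube-cluster` output format shown in the next section (the helper name is illustrative, and column positions may differ between releases):

```shell
# cluster_state NAME: print the State column for one cluster, reading
# `pca-admin list kube-cluster` output on stdin. Column positions are
# taken from the sample output later in this document.
cluster_state() {
  awk -v c="$1" '$1 == c { print $3 }'
}

# Example against one captured row of list kube-cluster output:
printf '%s\n' 'sonit-2 Rack1_ServerPool AVAILABLE None 10.147.37.233 5 3 4' |
  cluster_state sonit-2   # prints AVAILABLE
```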
Displaying Kubernetes Clusters
To access the pca-admin CLI, log in to the master management node of Oracle Private Cloud Appliance using its virtual IP.
stayal$ ssh [email protected]
[email protected]'s password:
Last login: Sun Jul 12 20:27:17 2020
[root@ovcamn06r1 ~]# pca-admin
Welcome to PCA! Release: 2.4.3
To display a list of all Kubernetes clusters that exist on Oracle Private Cloud Appliance, use the list kube-cluster command
in pca-admin.
PCA> list kube-cluster
Cluster Tenant_Group State Sub_State Load_Balancer Vrrp_ID Masters Workers
------- ------------ ----- --------- ------------- ------- ------- -------
foo4011-cluster Rack1_ServerPool CONFIGURED VALID 10.147.37.221 None 3 3
intern-cluster Rack1_ServerPool AVAILABLE None 10.147.37.203 6 3 4
sonit-2 Rack1_ServerPool AVAILABLE None 10.147.37.233 5 3 4
sonit-demo Rack1_ServerPool AVAILABLE VALID 10.147.37.20 2 3 6
zebra Rack1_ServerPool CONFIGURED VALID 10.147.37.151 7 3 2
----------------
5 rows displayed
Status: Success
Creating a Kubernetes Cluster
Creating an HA Kubernetes cluster on Oracle Private Cloud Appliance requires the following steps:
Create a Kubernetes cluster ‘definition’
Create the Kubernetes cluster resources including Master and Worker VMs, networks and Load Balancer
In this section, we will see the creation of a Kubernetes cluster that is attached to a DHCP network. For a complete list of
commands to create a cluster with static network addresses, refer to the PCA 2.4.3 official product documentation.
Create Cluster Configuration
The first step is to create a cluster configuration for the Kubernetes cluster using the create kube-cluster command.
PCA> create kube-cluster <name_of_cluster> <Tenant_Group> <external_network> <load_balancer_ip> <Repository_for_VM_disks> <name_of_virtual_appliance>

PCA> create kube-cluster sonit-3-cluster Rack1_ServerPool vm_public_313 10.147.37.166 Rack1-Repository pca-virtual-appliance
Kubernetes cluster configuration (sonit-3-cluster) created
Status: Success
After this step, a configuration file is created for the specified Kubernetes cluster. This configuration can be viewed using the
show kube-cluster command.
PCA> show kube-cluster <name_of_cluster>
PCA> show kube-cluster sonit-3-cluster
----------------------------------------
Cluster              sonit-3-cluster
Tenant_Group         Rack1_ServerPool
State                AVAILABLE
Sub_State            None
Ops_Required         None
Load_Balancer        10.147.37.166
Vrrp_ID              4
External_Network     vm_public_313
Cluster_Network_Type dhcp
Gateway              None
Netmask              None
Name_Servers         None
Search_Domains       None
Repository           Rack1-Repository-NFS
Assembly             pca-virtual-appliance
Masters              3
Workers              2
Cluster_Start_Time   2020-06-15 22:58:03.333948
Cluster_Stop_Time    None
Job_ID               None
Error_Code           None
Error_Message        None
----------------------------------------
Status: Success
This cluster configuration contains details such as the cluster name, the load balancer virtual IP, the number of master and
worker nodes, and the repository for virtual disks. You can modify this cluster configuration to change the defaults by using set commands.
At this point, no resources have actually been created in Oracle VM.
Modify Cluster Configuration
To set the number of worker nodes in the Kubernetes cluster, use the set kube-worker-pool command.
PCA> set kube-worker-pool sonit-3-cluster 3
Kubernetes cluster configuration (sonit-3-cluster) updated
Status: Success
In addition, you can update the shape of the master/worker VMs and the network properties. Once you are satisfied with the
details of the cluster, you can commit the configuration to create the cluster resources in Oracle VM using the start kube-cluster
command.
Start the Kubernetes Cluster
Once the cluster configuration is finalized, the cluster resources can be committed as Oracle VM jobs to create the master and
worker nodes, the networking between cluster nodes, and the load balancer. This is done using start kube-cluster.
PCA> start kube-cluster <name_of_cluster>
PCA> start kube-cluster sonit-3-cluster
Cluster sonit-3-cluster submitted to job subsystem for starting. Job ID is 38xxxxx.
Status: Success
At this point, list kube-cluster can be used to see the exact component being built at the moment.
PCA> list kube-cluster
Cluster Tenant_Group State Sub_State Load_Balancer Vrrp_ID Masters Workers
------- ------------ ----- --------- ------------- ------- ------- -------
foo4011-cluster Rack1_ServerPool CONFIGURED VALID 10.147.37.221 None 3 3
intern-cluster Rack1_ServerPool AVAILABLE None 10.147.37.203 6 3 4
sonit-2 Rack1_ServerPool AVAILABLE None 10.147.37.233 5 3 4
sonit-3-cluster Rack1_ServerPool BUILDING Network 10.147.37.166 4 3 3
sonit-demo Rack1_ServerPool AVAILABLE VALID 10.147.37.20 2 3 6
zebra Rack1_ServerPool CONFIGURED VALID 10.147.37.151 7 3 2
----------------
6 rows displayed
Status: Success
You can login to Oracle VM GUI on Oracle Private Cloud Appliance to see that various jobs have been submitted and are in
progress in response to the start kube-cluster command.
The cluster is built at a rate of approximately 3 minutes per node, after which it becomes available. In this
case, we are building 6 cluster nodes (3 master VMs and 3 worker VMs), and it takes about 20 minutes to complete.
Note: The cluster creation time depends on the type of repository that you are using to create the virtual disks for your VMs.
It will be longer if the repository used is NFS.
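As a rough worked example of the per-node estimate above (an approximation only; actual times vary with the repository type):

```shell
# ~3 minutes per cluster node (see the note above); 3 masters + 3 workers:
masters=3
workers=3
echo "$(( (masters + workers) * 3 )) minutes (approx.)"   # prints: 18 minutes (approx.)
```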
PCA> list kube-cluster
Cluster Tenant_Group State Sub_State Load_Balancer Vrrp_ID Masters Workers
------- ------------ ----- --------- ------------- ------- ------- -------
foo4011-cluster Rack1_ServerPool CONFIGURED VALID 10.147.37.221 None 3 3
intern-cluster Rack1_ServerPool AVAILABLE None 10.147.37.203 6 3 4
sonit-2 Rack1_ServerPool AVAILABLE None 10.147.37.233 5 3 4
sonit-3-cluster Rack1_ServerPool AVAILABLE None 10.147.37.166 4 3 3
sonit-demo Rack1_ServerPool AVAILABLE VALID 10.147.37.20 2 3 6
zebra Rack1_ServerPool CONFIGURED VALID 10.147.37.151 7 3 2
----------------
6 rows displayed
Status: Success
You can see the cluster nodes by logging in to the Oracle VM GUI.
Image Caption 3. Kubernetes cluster ‘sonit-3-cluster’ nodes can be seen in Oracle VM GUI on Oracle Private Cloud Appliance
Manage the Cluster from your local machine
The Kubernetes command-line tool, kubectl, allows you to run commands against Kubernetes clusters. You can use kubectl
to deploy applications, inspect and manage cluster resources, and view logs.
You can manage your Kubernetes cluster deployed on Oracle Private Cloud Appliance from your local desktop or laptop.
This requires you to install kubectl on your local machine. Depending on your local machine, follow the steps in Kubernetes
documentation to install kubectl: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Installing kubectl locally allows you to manage all your Kubernetes clusters from a single central machine, thereby removing
the need to be on the master management node of Oracle Private Cloud Appliance. This simplifies operations for
Kubernetes cluster management.
Here are the steps to install kubectl 1.17.4 on macOS:
stayal$ curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.17.4/bin/darwin/amd64/kubectl
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 47.2M  100 47.2M    0     0  2049k      0  0:00:23  0:00:23 --:--:-- 1617k
stayal$ chmod +x ./kubectl
stayal$ sudo mv ./kubectl /usr/local/bin/kubectl
stayal$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.4", GitCommit:"8d8aa39598534325ad77120c120a22b3a990b5ea", GitTreeState:"clean", BuildDate:"2020-03-12T21:03:42Z", GoVersion:"go1.13.8", Compiler:"gc", Platform:"darwin/amd64"}
Once you have kubectl deployed locally, you can copy the configuration file of the Kubernetes cluster (accessible using the
Load Balancer IP of the Kubernetes cluster) that you want to manage locally. This will allow you to run kubectl commands
against the Kubernetes cluster from your local machine.
stayal$ scp [email protected]:~/.kube/config ~/config-sonit-3-cluster
[email protected]'s password:
config                                   100% 5448   199.5KB/s   00:00
stayal$ export KUBECONFIG=~/config-sonit-3-cluster
Once the Kubernetes configuration is exported as shown above, you can run kubectl commands to manage ‘sonit-3-cluster’.
stayal$ kubectl get nodes
NAME                  STATUS   ROLES    AGE   VERSION
sonit-3-cluster-m-1   Ready    master   27d   v1.17.4+1.0.1.el7
sonit-3-cluster-m-2   Ready    master   27d   v1.17.4+1.0.1.el7
sonit-3-cluster-m-3   Ready    master   27d   v1.17.4+1.0.1.el7
sonit-3-cluster-w-0   Ready    <none>   27d   v1.17.4+1.0.1.el7
sonit-3-cluster-w-1   Ready    <none>   27d   v1.17.4+1.0.1.el7
sonit-3-cluster-w-2   Ready    <none>   27d   v1.17.4+1.0.1.el7
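This output also lends itself to simple scripted health checks, for example counting the nodes that report Ready (an illustrative helper, assuming the `kubectl get nodes --no-headers` column layout):

```shell
# ready_nodes: count nodes whose STATUS column is Ready, reading
# `kubectl get nodes --no-headers` output on stdin.
ready_nodes() {
  awk '$2 == "Ready" { n++ } END { print n + 0 }'
}

# Example with two captured rows:
printf '%s\n' \
  'sonit-3-cluster-m-1 Ready master 27d v1.17.4+1.0.1.el7' \
  'sonit-3-cluster-w-0 Ready <none> 27d v1.17.4+1.0.1.el7' |
  ready_nodes   # prints 2
```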
All the pods deployed in the ‘kube-system’ namespace as a result of the creation of ‘sonit-3-cluster’
can be viewed as follows:
stayal$ kubectl get pods -n kube-system
NAME                                          READY   STATUS    RESTARTS   AGE
coredns-d6c8c99d8-ft5hs                       1/1     Running   0          28d
coredns-d6c8c99d8-tknbm                       1/1     Running   0          28d
etcd-sonit-3-cluster-m-1                      1/1     Running   0          28d
etcd-sonit-3-cluster-m-2                      1/1     Running   0          28d
etcd-sonit-3-cluster-m-3                      1/1     Running   0          28d
kube-apiserver-sonit-3-cluster-m-1            1/1     Running   3          28d
kube-apiserver-sonit-3-cluster-m-2            1/1     Running   0          28d
kube-apiserver-sonit-3-cluster-m-3            1/1     Running   0          28d
kube-controller-manager-sonit-3-cluster-m-1   1/1     Running   1          28d
kube-controller-manager-sonit-3-cluster-m-2   1/1     Running   2          28d
kube-controller-manager-sonit-3-cluster-m-3   1/1     Running   4          28d
kube-flannel-ds-amd64-5xffg                   1/1     Running   6          28d
kube-flannel-ds-amd64-grhtq                   1/1     Running   0          27d
kube-flannel-ds-amd64-hq7n5                   1/1     Running   0          28d
kube-flannel-ds-amd64-jqbpg                   1/1     Running   0          27d
kube-flannel-ds-amd64-lm7n4                   1/1     Running   4          28d
kube-flannel-ds-amd64-nlpxl                   1/1     Running   0          27d
kube-proxy-2p8cg                              1/1     Running   0          28d
kube-proxy-49k6m                              1/1     Running   0          28d
kube-proxy-4xsjc                              1/1     Running   0          27d
kube-proxy-74w9w                              1/1     Running   0          28d
kube-proxy-lt4pg                              1/1     Running   0          27d
kube-proxy-z2v4d                              1/1     Running   0          27d
kube-scheduler-sonit-3-cluster-m-1            1/1     Running   2          28d
kube-scheduler-sonit-3-cluster-m-2            1/1     Running   0          28d
kube-scheduler-sonit-3-cluster-m-3            1/1     Running   1          28d
Display the Kubernetes Dashboard for monitoring
Dashboard is a web-based Kubernetes user interface. You can use Dashboard to deploy containerized applications to a
Kubernetes cluster, troubleshoot your containerized application, and manage the cluster resources. You can use Dashboard
to get an overview of applications running on your cluster, as well as for creating or modifying individual Kubernetes
resources (such as Deployments, Jobs).
For every Kubernetes cluster that is deployed using the automation built into Oracle Private Cloud Appliance Release 2.4.3, a
Kubernetes dashboard is deployed by default. This pod can be seen running in the ‘kubernetes-dashboard’ namespace.
stayal$ kubectl get pods -n kubernetes-dashboard
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-74f8fcbc74-88697   1/1     Running   0          28d
To actually display and log in to the web-based dashboard, we need to create a new user using the Service Account
mechanism of Kubernetes, grant this user cluster-admin permissions, and log in to the Dashboard using the bearer token tied to this
user. To do this, we use the following file, ‘dashboard.yml’:
stayal$ cat dashboard.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
The resources in this ‘dashboard.yml’ file can be created by simply running the kubectl apply command:
stayal$ kubectl apply -f dashboard.yml
serviceaccount/admin-user created
clusterrolebinding.rbac.authorization.k8s.io/admin-user created
At this point, we need a token for admin-user to log in to the dashboard.
stayal$ kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}')
Name:         admin-user-token-b25l2
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: admin-user
              kubernetes.io/service-account.uid: 1b6d5b46-9a92-4f05-a97c-7895192311a8
Type:  kubernetes.io/service-account-token
Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IlIzQWdzdFo3anRrNktBdW9LeWU3bjJ4ZnZvMFRUVDJGNFVvOEtHTDJnenMifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZiZS1zeXN0ZW06YWRtaW4tdXNlciJ9.rZHj4VVparF6gU-6nYJu6YAOc2co9FSjHR7pBdT-e3DWQn4kwPBRv9NpJBPr9CoxNPe5Ukfc9qjgFswmC0VKrsBwRl9G8jLyjKGlXHoF4TetcZVgYkL7G5A1UCDhEZ2On3AucLY4RZGxSgFTsPUfj-ITcq9_6Pif4dj0Tzuus-vvQgH4mqYsvxSv29hKTQDGwC90JFls_8rKDXd_7gHY21DX3jD2rpgHgwmUjZmoWBS-3WmVpQNhjTHqhhmoEwtemndBxM1onzgRtQEuZSTpr1ufOGAQRPSl84ElCZivNpTPXEF-RhhGNfoWbUIMkDxTD-WvEYpL04FeTga9_xnBLw
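Instead of copying the token out of the describe output by hand, it can be extracted with a small helper (a sketch that assumes the Data section layout shown above; the function name is illustrative):

```shell
# extract_token: print the value of the "token:" line from
# `kubectl describe secret` output read on stdin.
extract_token() {
  awk '$1 == "token:" { print $2 }'
}

# Usage against the same secret as above:
#   kubectl -n kube-system describe secret admin-user-token-b25l2 | extract_token
```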
Now you can access the Dashboard using kubectl command line by running the following command:
stayal$ kubectl proxy
Starting to serve on 127.0.0.1:8001
On the system where this kubectl proxy command is executed, the dashboard can be accessed by going to:
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#/login
Then copy the authentication token generated above and paste it into the ‘Enter Token’ field to access the Dashboard, as
shown in Image 4.
Image Caption 4. Kubernetes Dashboard for cluster ‘sonit-3-cluster’
You can click on each node to view details of pods deployed on the worker node along with general health and usage
reports for the node as shown in Image 5.
Image Caption 5. Kubernetes Dashboard showing details of a worker node in cluster ‘sonit-3-cluster’
Deploy applications using Kubernetes Dashboard
The first step is to add the environment variables/proxies (if needed) to all the nodes in the Kubernetes cluster so that they
can pull the desired container images from a container registry. This step is not required if you have a local container registry
and do not need a proxy to reach it from the worker nodes in your cluster.
To add proxies, find the IP address of each node in your cluster using the OVM Manager GUI for Oracle Private Cloud
Appliance or from the EM IaaS Self Service portal GUI. Then log in to each node of your cluster using ssh and create a new
file, /etc/systemd/system/crio.service.d/http-proxy.conf. The contents of this file are shown here for one of the
worker nodes in ‘sonit-3-cluster’.
[root@sonit-3-cluster-w-0 ~]# cat /etc/systemd/system/crio.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://www-proxy-hqdc.us.oracle.com:80"
Environment="HTTPS_PROXY=http://www-proxy-hqdc.us.oracle.com:80"
Environment="NO_PROXY=localhost, 127.0.0.1"
After creating the http-proxy.conf file, reload systemd and then restart the cri-o service for these settings to take effect.

[root@sonit-3-cluster-w-0 ~]# systemctl daemon-reload
[root@sonit-3-cluster-w-0 ~]# systemctl restart crio
Now your Kubernetes cluster environment is ready to deploy container application workloads using ‘yaml’ declarations of
Kubernetes resources.
Let’s see a simple deployment of an nginx application using the Kubernetes Dashboard. Click the ‘+’ button in the top right
corner of the Kubernetes Dashboard window and import the YAML content specifying the resources to be created, as shown in
Image 6 below.
Image Caption 6. Deploying a simple ‘nginx-deployment’ using Kubernetes Dashboard
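The YAML itself is not reproduced in this brief; a minimal Deployment along the lines of the one imported in Image 6 (the names, labels, and image tag below are illustrative assumptions) could look like this:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2          # two pods, as shown running in Image 7
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.19
        ports:
        - containerPort: 80
```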
Clicking ‘Upload’ after importing the YAML should start creating the desired pods for nginx-deployment. In a few seconds,
you should be able to see 2 pods running for this deployment in the dashboard as shown in Image 7 below.
Image Caption 7. ‘nginx-deployment’ completed showing 2 running pods as specified in our YAML file.
Scaling a Kubernetes Cluster
When a Kubernetes cluster is created using Oracle Private Cloud Appliance Release 2.4.3, it has 2 node pools – MASTER and
WORKER. The scaling operations for a Kubernetes cluster can be performed using pca-admin CLI or through the EM IaaS
Self-Service portal.
A Kubernetes cluster can be easily scaled up or down by scaling up/down an existing node pool. The nodes added to an
existing node pool inherit the same memory and CPU as all the other nodes in that node pool.
You can also scale a cluster by adding or removing a node pool. This helps when a cluster needs worker nodes with more (or
less) CPU and memory, or when the boot disks should be created in an alternate repository.
Here is an example of scaling a Kubernetes cluster up by adding a new node pool using pca-admin CLI:
PCA> add node-pool <name_of_cluster> <name_of_node_pool> <cpus> <memory> <repository_for_VM_disks>* <name_of_virtual_appliance>*
PCA> add node-pool sonit-demo-cluster np0 8 32768
Nodepool (np0) added to cluster (sonit-demo-cluster)
Status: Success
In the add node-pool command, the repository and virtual appliance arguments (marked * above) are optional. If not specified, the new
node pool inherits these parameters from those used in the create kube-cluster command.
To add nodes to this newly created node pool, use the add node-pool-node command. The following command creates a
new node in the node pool ‘np0’ of cluster ‘sonit-demo-cluster’:
PCA> add node-pool-node <name_of_cluster> <name_of_node_pool>
PCA> add node-pool-node sonit-demo-cluster np0
Cluster sonit-demo-cluster submitted to job subsystem for starting, job id is 38xxxx.
Status: Success
For more information and examples of scaling Kubernetes clusters, refer to the official documentation for PCA 2.4.3.
Deleting a Kubernetes Cluster
A Kubernetes cluster needs to be in ‘CONFIGURED’ state for successful deletion. Thus, to delete a Kubernetes cluster that is
AVAILABLE and running, we need the following steps:
- Stop the Running Kubernetes Cluster
- Delete the cluster resources including cluster nodes, networks and disks
To delete the Kubernetes cluster ‘sonit-demo-cluster’, run the following commands:
PCA> stop kube-cluster <name_of_cluster>
PCA> stop kube-cluster sonit-demo-cluster
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Status: Success
PCA> delete kube-cluster <name_of_cluster>
PCA> delete kube-cluster sonit-demo-cluster
************************************************************
WARNING !!! THIS IS A DESTRUCTIVE OPERATION.
************************************************************
Are you sure [y/N]:y
Kubernetes cluster configuration (sonit-demo-cluster) deleted
Status: Success
DEPLOY ORACLE WEBLOGIC SERVER ON KUBERNETES CLUSTER
This section describes the process of deploying WebLogic Server applications on Kubernetes clusters on Oracle
Private Cloud Appliance.
In this paper, we leverage the newer versions of the tools in the WebLogic Kubernetes Toolkit, which are fully
certified to work with Oracle Linux Cloud Native Environment 1.1 (please refer to the supported versions of Kubernetes).
Prerequisites
Install Helm 3 on your local machine
Helm is an application package manager for Kubernetes. It describes an application's structure through
convenient Helm charts and manages it with simple commands. The WebLogic Kubernetes Operator project provides a
Helm chart to install the WebLogic Server Kubernetes Operator in a Kubernetes cluster.
To install Helm on your kubectl host:
stayal$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
stayal$ chmod +x get_helm.sh
stayal$ ./get_helm.sh
Downloading https://get.helm.sh/helm-v3.2.1-darwin-amd64.tar.gz
Preparing to install helm into /usr/local/bin
Password:
helm installed into /usr/local/bin/helm
Then you need to ensure that the service account used by Helm has the cluster-admin role. To do this, we use the following
‘helm_svcaccount.yml’ file:
stayal$ cat helm_svcaccount.yml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: helm-user-cluster-admin-role
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
To apply this file, simply run:
stayal$ kubectl apply -f helm_svcaccount.yml
clusterrolebinding.rbac.authorization.k8s.io/helm-user-cluster-admin-role created
Deploy Oracle WebLogic Server Kubernetes Operator
You need to clone the Operator repository to your local machine so that you have access to the various sample files
mentioned throughout this paper. First create a directory for running your commands that contains the necessary files to
run Oracle WebLogic Server on Oracle Private Cloud Appliance and then obtain the WebLogic Server Kubernetes Operator
3.0 from GitHub.
$ mkdir -p ~/WLS_K8S_PCA
$ cd ~/WLS_K8S_PCA/
$ git clone https://github.com/oracle/weblogic-kubernetes-operator.git -b v3.0.0
Cloning into 'weblogic-kubernetes-operator'...
remote: Enumerating objects: 150, done.
remote: Counting objects: 100% (150/150), done.
remote: Compressing objects: 100% (113/113), done.
remote: Total 138478 (delta 34), reused 95 (delta 10), pack-reused 138328
Receiving objects: 100% (138478/138478), 99.82 MiB | 1.71 MiB/s, done.
Resolving deltas: 100% (82249/82249), done.
Note: checking out '8eabb9cfe78885353e3cbd0aed0b2bc60af3b04d'.
You are in 'detached HEAD' state. You can look around, make experimental changes
and commit them, and you can discard any commits you make in this state without
impacting any branches by performing another checkout.
If you want to create a new branch to retain commits you create, you may do so
(now or later) by using -b with the checkout command again. Example:
  git checkout -b <new-branch-name>
Checking out files: 100% (8696/8696), done.
Create the namespace and service account to be used by the WebLogic Server Kubernetes Operator:
$ cd weblogic-kubernetes-operator/
$ kubectl create namespace sample-weblogic-operator-ns
namespace/sample-weblogic-operator-ns created
$ kubectl create serviceaccount -n sample-weblogic-operator-ns sample-weblogic-operator-sa
serviceaccount/sample-weblogic-operator-sa created
The WebLogic Server Kubernetes Operator project provides a Helm chart to easily deploy the operator. Helm can install the
operator by specifying the newly created namespace and service account to be used. Start by adding the stable Helm
repository:
$ helm repo add stable https://kubernetes-charts.storage.googleapis.com/
"stable" has been added to your repositories
$ helm install sample-weblogic-operator kubernetes/charts/weblogic-operator \
  --namespace sample-weblogic-operator-ns \
  --set image=oracle/weblogic-kubernetes-operator:3.0.0 \
  --set serviceAccount=sample-weblogic-operator-sa \
  --set "domainNamespaces={}"
NAME: sample-weblogic-operator
LAST DEPLOYED: Wed Jun  3 16:22:12 2020
NAMESPACE: sample-weblogic-operator-ns
STATUS: deployed
REVISION: 1
TEST SUITE: None
Check the operator pod in the sample-weblogic-operator-ns namespace:
$ kubectl get pods -n sample-weblogic-operator-ns
NAME                                 READY   STATUS    RESTARTS   AGE
weblogic-operator-7646874bfb-tcwn8   1/1     Running   102        23h
Check the Helm chart for the WebLogic Server Kubernetes Operator:

$ helm list -n sample-weblogic-operator-ns
NAME                       NAMESPACE                     REVISION   UPDATED                                STATUS     CHART                     APP VERSION
sample-weblogic-operator   sample-weblogic-operator-ns   13         2020-07-06 14:53:57.452847 -0700 PDT   deployed   weblogic-operator-3.0.0

You can see the operator pod running in the sample-weblogic-operator-ns namespace in the Kubernetes dashboard, as shown in Image 8.

Image Caption 8. ‘weblogic-operator’ pod running in ‘sample-weblogic-operator-ns’ as seen in Kubernetes dashboard
Deploy Traefik Load Balancer
The Oracle WebLogic Server Kubernetes Operator supports four load balancers: Traefik, Voyager, NGINX, and Apache. Samples are provided in the documentation.
This tutorial demonstrates how to install the Traefik Ingress controller to provide load balancing for WebLogic clusters.
Create a namespace for Traefik:
$ kubectl create namespace traefik
namespace/traefik created
Install the Traefik operator in this namespace:
$ helm install traefik-operator stable/traefik \
  --namespace traefik \
  --values kubernetes/samples/charts/traefik/values.yaml \
  --set "kubernetes.namespaces={traefik}" \
  --set "serviceType=LoadBalancer"
NAME: traefik-operator
LAST DEPLOYED: Wed Jun 10 12:28:53 2020
NAMESPACE: traefik
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
1. Get Traefik's load balancer IP/hostname:
   NOTE: It may take a few minutes for this to become available.
   You can watch the status by running:
   $ kubectl get svc traefik-operator --namespace traefik -w
   Once 'EXTERNAL-IP' is no longer '<pending>':
   $ kubectl describe svc traefik-operator --namespace traefik | grep Ingress | awk '{print $3}'
2. Configure DNS records corresponding to Kubernetes ingress resources to point to the load balancer IP/hostname found in step 1
Check the running pod in the traefik namespace:
$ kubectl get pods -n traefik
NAME                               READY   STATUS    RESTARTS   AGE
traefik-operator-b4f698477-scw5v   1/1     Running   0          8d
Check the Helm chart for the Traefik operator:
$ helm list -n traefik
NAME               NAMESPACE   REVISION   UPDATED                                STATUS     CHART            APP VERSION
traefik-operator   traefik     13         2020-07-06 14:54:12.953756 -0700 PDT   deployed   traefik-1.87.1   1.7.23
See the Traefik operator pod running in the traefik namespace in the Kubernetes dashboard, as shown in Image 9.
Image Caption 9. ‘traefik-operator’ pod running in ‘traefik’ namespace as seen in Kubernetes dashboard
Deploy a WebLogic Server Domain
To deploy the domain, we create a Kubernetes Custom Resource (CR) that represents the domain object in Kubernetes and lets the WebLogic Server Kubernetes Operator orchestrate the Oracle WebLogic Server domain. The Domain CR is defined and created by applying the domain.yaml file.
Create the namespace to deploy the domain:
$ kubectl create namespace sample-domain1-ns
namespace/sample-domain1-ns created
Create a Kubernetes secret for the Administration Server boot credentials:
$ kubectl -n sample-domain1-ns create secret generic sample-domain1-weblogic-credentials \
  --from-literal=username=weblogic --from-literal=password=welcome1
secret/sample-domain1-weblogic-credentials created
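Secret values are stored base64-encoded. As a quick sanity check (a sketch, assuming the secret created above), you can read a value back and decode it:

```shell
# Read the 'username' key back from the secret and base64-decode it;
# for the secret created above this prints 'weblogic'.
kubectl get secret sample-domain1-weblogic-credentials -n sample-domain1-ns \
  -o jsonpath='{.data.username}' | base64 -d
```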
Update the WebLogic Server Kubernetes Operator to manage the domain in the domain’s namespace:
$ helm upgrade sample-weblogic-operator kubernetes/charts/weblogic-operator \
  --namespace sample-weblogic-operator-ns \
  --reuse-values \
  --set "domainNamespaces={sample-domain1-ns}" \
  --wait
Release "sample-weblogic-operator" has been upgraded. Happy Helming!
NAME: sample-weblogic-operator
LAST DEPLOYED: Wed Jul 7 15:36:24 2020
NAMESPACE: sample-weblogic-operator-ns
STATUS: deployed
REVISION: 14
TEST SUITE: None
Update Traefik to manage ingress in this namespace:
$ helm upgrade traefik-operator stable/traefik \
  --namespace traefik \
  --reuse-values \
  --set "kubernetes.namespaces={traefik,sample-domain1-ns}" \
  --wait
Release "traefik-operator" has been upgraded. Happy Helming!
NAME: traefik-operator
LAST DEPLOYED: Thu Jul 8 10:49:54 2020
NAMESPACE: traefik
STATUS: deployed
REVISION: 14
TEST SUITE: None
NOTES:
1. Traefik is listening on the following ports on the host machine:
   http - 30305
   https - 30443
2. Configure DNS records corresponding to Kubernetes ingress resources to point to the NODE_IP/NODE_HOST
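To confirm that both releases picked up the new namespace list, you can inspect the values Helm is now using (a sketch using standard Helm 3 commands):

```shell
# Show the user-supplied values for each release; 'kubernetes.namespaces'
# should now include sample-domain1-ns for the Traefik release, and
# 'domainNamespaces' should include it for the operator release.
helm get values traefik-operator -n traefik
helm get values sample-weblogic-operator -n sample-weblogic-operator-ns
```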
To deploy the WebLogic Server domain, we need to create a Domain CR that contains the parameters the operator needs to start the WebLogic domain properly. The domain.yaml file contains a YAML representation of the custom resource object. Copy the file locally from the GitHub link, then edit the information in the domain.yaml file:
$ vi domain.yml

apiVersion: "weblogic.oracle/v2"
kind: Domain
metadata:
  # Update this with the `domainUID` of your domain:
  name: sample-domain1
  # Update this with the namespace your domain will run in:
  namespace: sample-domain1-ns
  labels:
    weblogic.resourceVersion: domain-v2
    # Update this with the `domainUID` of your domain:
    weblogic.domainUID: sample-domain1
spec:
  # This parameter provides the location of the WebLogic Domain Home (from the container's point of view).
  # Note that this might be in the image itself or in a mounted volume or network storage.
  domainHome: /u01/oracle/user_projects/domains/sample-domain1
  # If the domain home is inside the Docker image, set this to `true`, otherwise set `false`:
  domainHomeInImage: true
  # Update this with the name of the Docker image that will be used to run your domain:
  #image: "YOUR_OCI_REGION_CODE.ocir.io/YOUR_TENANCY_NAME/weblogic-operator-tutorial:latest"
  image: "iad.ocir.io/weblogick8s/weblogic-operator-tutorial-store:1.0"
  # imagePullPolicy defaults to "Always" if image version is :latest
  imagePullPolicy: "IfNotPresent"
  # If credentials are needed to pull the image, uncomment this section and identify which
  # Secret contains the credentials for pulling an image:
  #imagePullSecrets:
  #- name: ocirsecret
  # Identify which Secret contains the WebLogic Admin credentials (note that there is an example of
  # how to create that Secret at the end of this file)
  webLogicCredentialsSecret:
    # Update this with the name of the secret containing your WebLogic server boot credentials:
    name: sample-domain1-weblogic-credentials
  # adminServer is used to configure the desired behavior for starting the administration server.
  adminServer:
    # serverStartState legal values are "RUNNING" or "ADMIN"
    # "RUNNING" means the listed server will be started up to "RUNNING" mode
    # "ADMIN" means the listed server will be started up to "ADMIN" mode
    serverStartState: "RUNNING"
    adminService:
      channels:
        # Update this to set the NodePort to use for the Admin Server's default channel (where the
        # admin console will be available):
        - channelName: default
          nodePort: 30701
  # clusters is used to configure the desired behavior for starting member servers of a cluster.
  # If you use this entry, then the rules will be applied to ALL servers that are members of the named clusters.
  clusters:
  - clusterName: cluster-1
    serverStartState: "RUNNING"
    replicas: 2

Create the domain custom resource object by using the kubectl apply command:

$ kubectl apply -f domain.yml
domain.weblogic.oracle/sample-domain1 configured

Check the introspector job running in the sample-domain1-ns namespace. For a description of the introspector job, refer to the documentation.

$ kubectl get pod -n sample-domain1-ns
NAME                                         READY   STATUS              RESTARTS   AGE
sample-domain1-introspect-domain-job-ql9lk   0/1     ContainerCreating   0          26s

Soon you will see the server pods start: first the admin server, followed by the managed servers.

$ kubectl get pod -n sample-domain1-ns
NAME                             READY   STATUS              RESTARTS   AGE
sample-domain1-admin-server      1/1     Running             0          43s
sample-domain1-managed-server1   0/1     ContainerCreating   0          1s
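Beyond watching the pods, the Domain resource itself reports status. The commands below are a sketch; 'domain' is the custom resource kind installed by the operator's CRD:

```shell
# List the Domain resources the operator is managing in this namespace.
kubectl get domain -n sample-domain1-ns

# Inspect detailed, per-server status reported back by the operator.
kubectl describe domain sample-domain1 -n sample-domain1-ns
```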
After a couple of minutes, we can see the three pods in our sample-domain1-ns namespace in the Kubernetes dashboard.
Image Caption 10. 1 admin server and 2 managed servers running in ‘sample-domain1-ns’ namespace as seen in Kubernetes dashboard
Scale the WebLogic Server Domain
To scale the WebLogic Server domain, edit the domain.yaml file to change the number of replicas:
clusters:
- clusterName: cluster-1
  serverStartState: "RUNNING"
  replicas: 3
Then apply the updated file:

$ kubectl apply -f domain.yml
domain.weblogic.oracle/sample-domain1 configured
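The edit itself can be scripted. The sketch below operates on a minimal stand-in file; the file path is hypothetical, and it assumes your domain.yml contains a 'replicas:' line matching the pattern:

```shell
# Write a minimal stand-in for the clusters section of domain.yml
# (hypothetical path; substitute your real file).
cat > /tmp/domain-snippet.yml <<'EOF'
clusters:
- clusterName: cluster-1
  serverStartState: "RUNNING"
  replicas: 2
EOF

# Bump the replica count from 2 to 3 in place.
sed -i 's/replicas: 2/replicas: 3/' /tmp/domain-snippet.yml

# Verify the change took effect.
grep 'replicas:' /tmp/domain-snippet.yml
```

After editing the real file, re-apply it with kubectl apply -f domain.yml as shown above.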
To see other methods of scaling a WebLogic Server domain, including setting up automatic scaling by defining rules and policies, refer to the whitepaper ‘Oracle WebLogic Server on Oracle Private Cloud Appliance and Kubernetes’.
CONCLUSION
Oracle Private Cloud Appliance and Oracle Private Cloud at Customer offer a highly optimized infrastructure for hosting middleware and applications. They are ideal for consolidating enterprise mission-critical workloads alongside modern cloud native containerized workloads, and for managing them from a single pane of glass. Release 2.4.3 comes fully integrated with a production-ready Cloud Native Environment (based on Kubernetes 1.17).
You can simplify and automate the lifecycle of Kubernetes workloads using the pca-admin CLI and the Enterprise Manager Self-Service portal. This lets you create fully HA Kubernetes clusters and scale them in minutes, simplifying your journey to digital transformation.
The integrated cloud native environment allows you to modernize your WebLogic Server applications while delivering the best price/performance. WebLogic Server Kubernetes Operator 3.0 is fully tested and supported on Oracle Private Cloud Appliance and Oracle Private Cloud at Customer.
RESOURCES
Oracle Private Cloud Appliance website
Oracle Private Cloud Appliance documentation
Oracle Linux Cloud Native Environment documentation
WebLogic Server Kubernetes Operator User Guide
CONNECT WITH US
Call +1.800.ORACLE1 or visit oracle.com.
Outside North America, find your local office at oracle.com/contact.
blogs.oracle.com
facebook.com/oracle
twitter.com/oracle
Copyright © 2020, Oracle and/or its affiliates. All rights reserved. This document is provided for information purposes only, and the contents hereof are subject to change without
notice. This document is not warranted to be error-free, nor subject to any other warranties or conditions, whether expressed orally or implied in law, including implied warranties
and conditions of merchantability or fitness for a particular purpose. We specifically disclaim any liability with respect to this document, and no contractual obligations are formed
either directly or indirectly by this document. This document may not be reproduced or transmitted in any form or by any means, electronic or mechanical, for any purpose, without
our prior written permission.
This device has not been authorized as required by the rules of the Federal Communications Commission. This device is not, and may not be, offered for sale or lease, or sold or
leased, until authorization is obtained.
Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners.
Intel and Intel Xeon are trademarks or registered trademarks of Intel Corporation. All SPARC trademarks are used under license and are trademarks or registered trademarks of
SPARC International, Inc. AMD, Opteron, the AMD logo, and the AMD Opteron logo are trademarks or registered trademarks of Advanced Micro Devices. UNIX is a registered
trademark of The Open Group. 0120
Cloud Native Environment on Oracle Private Cloud Appliance
September 2020
Author: Sonit Tayal