
Lab AAI-2822
Liberty Elastic Clusters and Centralized Administration using Scripting and Admin Center
Michael C Thompson <[email protected]>
Chris Vignola <[email protected]>

February 2015


Please Note

IBM’s statements regarding its plans, directions, and intent are subject to change or withdrawal without notice at IBM’s sole discretion.

Information regarding potential future products is intended to outline our general product direction and it should not be relied on in making a purchasing decision.

The information mentioned regarding potential future products is not a commitment, promise, or legal obligation to deliver any material, code or functionality. Information about potential future products may not be incorporated into any contract. The development, release, and timing of any future features or functionality described for our products remains at our sole discretion.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon many factors, including considerations such as the amount of multiprogramming in the user’s job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve results similar to those stated here.

Acknowledgements and Disclaimers

Availability. References in this presentation to IBM products, programs, or services do not imply that they will be available in all countries in which IBM operates.

The workshops, sessions and materials have been prepared by IBM or the session speakers and reflect their own views. They are provided for informational purposes only, and are neither intended to, nor shall have the effect of being, legal or other guidance or advice to any participant. While efforts were made to verify the completeness and accuracy of the information contained in this presentation, it is provided AS-IS without warranty of any kind, express or implied. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this presentation or any other materials. Nothing contained in this presentation is intended to, nor shall have the effect of, creating any warranties or representations from IBM or its suppliers or licensors, or altering the terms and conditions of the applicable license agreement governing the use of IBM software.

All customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer. Nothing contained in these materials is intended to, nor shall have the effect of, stating or implying that any activities undertaken by you will result in any specific sales, revenue growth or other results.

© Copyright IBM Corporation 2015. All rights reserved.

• U.S. Government Users Restricted Rights - Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.

IBM, the IBM logo, ibm.com, Bluemix, Blueworks Live, CICS, Clearcase, DOORS®, Enterprise Document Management System™, Global Business Services ®, Global Technology Services ®, Information on Demand, ILOG, Maximo®, MQIntegrator®, MQSeries®, Netcool®, OMEGAMON, OpenPower, PureAnalytics™, PureApplication®, pureCluster™, PureCoverage®, PureData®, PureExperience®, PureFlex®, pureQuery®, pureScale®, PureSystems®, QRadar®, Rational®, Rhapsody®, SoDA, SPSS, StoredIQ, Tivoli®, Trusteer®, urban{code}®, Watson, WebSphere®, Worklight®, X-Force® and System z® Z/OS, are trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at: www.ibm.com/legal/copytrade.shtml

Other company, product, or service names may be trademarks or service marks of others.


Contents

Lab Overview
    Objectives
    Prerequisite Knowledge
    What's New
    Lab Setup
    Production vs Lab Scenario
    Key Reference Notes for Lab 2822

Step by Step Instructions
    1. Introduction
        1.1 Start and access demoServer
        1.2 Start and access IBM HTTP Server
        1.3 Stop demoServer
    2. Creating your first server
        2.1 Create and start a server
        2.2 Deploy an application
        2.3 Dynamically change the HTTP configuration
        2.4 Configure IHS to route to the new server
    3. Creating a collective
        3.1 Create the collective controller
        3.2 Introducing the Admin Center
        3.3 Add the first collective member
    4. Defining a cluster
        4.1 Configure the first cluster member
        4.2 Adding another cluster member
        4.3 Update IHS to route to the cluster
    5. Configuring dynamic routing
        5.1 Setting up dynamic routing
        5.2 Expanding the cluster
    6. Auto scaled clusters
        6.1 Enabling the scaling controller and scaling members
        6.2 Changing the scaling policy
    7. Tagging and Searching
        7.1 Setting tags for a server and cluster
        7.2 Searching in Admin Center

Reference Material


Lab Overview

Objectives

In this hands-on lab, you build a real Liberty application cluster and monitor it using the Liberty Admin Center, the web-based administrative interface. Both application clusters and the Admin Center are new capabilities in the IBM WebSphere Application Server V8.5.5 release. You will learn what a Liberty collective is, how to define clusters, and how to configure IBM HTTP Server (IHS) with the WebSphere plugin to route traffic to a server or cluster.

In the lab, attendees set up a collective, create a cluster, deploy and verify applications on the cluster, and perform basic operational tasks on the cluster. After completing this lab, participants will be prepared to set up and operate their own Liberty application clusters.

In this lab, you gain:

• An understanding of the concepts and operations of a Liberty collective and of clustering with the WebSphere Application Server Liberty profile

• Hands-on experience creating, configuring and performing operations on a collective and cluster

• Hands-on experience configuring IHS with the WebSphere plugin

• Hands-on experience with the Admin Center, the web-based administrative interface for the Liberty profile

Prerequisite Knowledge

This lab assumes no prior knowledge of the WebSphere Application Server Liberty profile, IBM HTTP Server or the WebSphere plugin. This lab uses a Linux operating system as the host OS. The instructions provide complete details for any command-line actions or configuration changes that must be made. Even so, the following basic knowledge is assumed:

• Basic Linux command line knowledge

• Linux editors. This lab uses gedit as the editor of choice in the command examples. You are free to use any editor you wish (the VM image also has vi and emacs available).

When large amounts of text are displayed, such as in command output or a screen shot, the important portions are highlighted in red.


What's New

New for InterConnect 2015, this lab has been updated to highlight new Liberty capabilities and to follow a more production-like configuration scenario.

New capabilities include:

• Dynamic routing

• Auto scaled clusters

• Tagging servers and clusters

• Collective search in Admin Center

• Viewing server metrics in Admin Center

To better reflect real world scenarios, the lab has been updated to include configuration of IBM HTTP Server (IHS) and the WebSphere plugin. The capabilities of the WebSphere plugin have been enhanced with the introduction of dynamic routing, which is covered in this lab.

Lab Setup

The lab's virtual machine image comes pre-installed with a Liberty installation (January 2015 beta), an IBM HTTP Server and the WebSphere Plugin. No installation steps are required as part of this lab. The documentation for installing this software is available on wasdev.net and in the IBM Knowledge Center; links are provided at the end of this lab in the Reference Material section.

Production vs Lab Scenario

This lab reflects a real world topology where possible. However, because the lab must fit into a single virtual machine, certain things are changed; most notably, the number of host operating systems is reduced to one. For a production environment, it is recommended that IHS, the collective controller, and the collective members be deployed to separate OS instances, preferably across separate physical hardware, to ensure high availability.

To achieve highly available application workloads, separation across different physical hardware is required. All IHS instances and the servers within a cluster should be deployed to different physical hosts. This separation mitigates the effects of potential system failures. For mission critical applications, the physical separation should also span data centers and geographic areas. Establishing these kinds of environments is out of scope for this lab.


In the lab, all Liberty servers and IHS are located on the same host.

Final lab topology

In a production environment, it is recommended that IHS, the collective controller and clustered servers be on different hosts. The following diagram is one such example.

Example production topology deployed across physical hosts


Key Reference Notes for Lab 2822

The locations, names, ports, users and passwords in this lab are listed in the tables below. All instructions include the necessary information from these tables. These tables are provided for reference.

Important: Passwords and resources

Password information
    VMWare user: was                          Password: password
    WAS Liberty Administrator user: admin     Password: adminpwd

Product and lab file locations
    Liberty             /home/was/IBM/wlp/
    IBM HTTP Server     /home/was/IBM/HTTPServer/
    WebSphere Plugin    /home/was/IBM/WebSphere/Plugins/
    Lab materials       /home/was/lab-materials

Important: Names and ports used in this lab
    Type       Name          Port (HTTP / HTTPS)    Cluster
    IHS        webserver1    8080 / 8443            n/a
    Liberty    demoServer    9080 / 9443            n/a
    Liberty    controller1   9080 / 9443            n/a
    Liberty    server1       9081 / 9444            defaultCluster
    Liberty    server2       9082 / 9445            defaultCluster
    Liberty    server3       9083 / 9446            defaultCluster
    Liberty    autoScaled1   9090 / 9053            elastic
    Liberty    autoScaled2   9091 / 9054            elastic
    Liberty    autoScaled3   9092 / 9055            elastic


Step by Step Instructions

1. Introduction

As part of this lab, you will create and configure multiple Liberty servers and clusters. At each major step, you will configure the WebSphere plugin for IHS to route traffic to the newly created servers. To demonstrate the most basic configuration of a load balanced application server, the lab virtual machine comes with a pre-configured Liberty application server called demoServer, which is load balanced through IHS with the WebSphere plugin.

In this brief introduction, you will start the demoServer and IHS. The Liberty server, IHS and WebSphere Plugin are pre-configured for this step.

Important locations:
    Liberty             /home/was/IBM/wlp/
    IBM HTTP Server     /home/was/IBM/HTTPServer/
    WebSphere Plugin    /home/was/IBM/WebSphere/Plugins/

1.1 Start and access demoServer

Log into the virtual machine using the password "password".

Open a terminal by clicking the terminal icon in the application launcher:

Execute the following commands to start the server demoServer.

# cd ~/IBM/wlp
# bin/server start demoServer

was@localhost:~$ cd ~/IBM/wlp
was@localhost:~/IBM/wlp$ bin/server start demoServer
Starting server demoServer.
Server demoServer started with process ID 6330.

Access the lab demo application. Launch Firefox from the application launcher:


Go to the URL http://localhost:9080/lab

Lab web page accessed directly from the demo server

The demo server is configured to listen for HTTP requests on port 9080. This is the default HTTP port for the Liberty profile server. The demo server has the lab.war application deployed. This application will be used in this lab to demonstrate the load balancing behaviour provided by IHS and the WebSphere plugin.

Next, IHS will be started. For this introduction, IHS is configured to route all requests which match /lab/* to the demo server. This routing configuration is set by the WebSphere plugin's plugin-cfg.xml. Later in this lab, the WebSphere plugin's configuration will be updated to route to the servers which will be created. No changes to IHS will be necessary as part of this lab.
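For orientation, the routing definition in plugin-cfg.xml has roughly the shape sketched below. This is a minimal sketch, not the exact generated file: the Name values are illustrative, and the real file contains additional elements and attributes, but the ServerCluster, UriGroup, Route and VirtualHostGroup elements shown are the standard plugin-cfg.xml building blocks.

<ServerCluster Name="demoServer_Cluster">
    <Server Name="demoServer">
        <Transport Hostname="localhost" Port="9080" Protocol="http"/>
    </Server>
</ServerCluster>
<UriGroup Name="demoServer_URIs">
    <Uri Name="/lab/*"/>
</UriGroup>
<Route ServerCluster="demoServer_Cluster" UriGroup="demoServer_URIs" VirtualHostGroup="default_host"/>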


1.2 Start and access IBM HTTP Server

Execute the following commands to start IHS.

# cd ~/IBM/HTTPServer/
# bin/apachectl start

was@localhost:~/IBM/wlp$ cd ~/IBM/HTTPServer
was@localhost:~/IBM/HTTPServer$ bin/apachectl start

Go to the URL http://localhost:8080/lab

Lab web page accessed through IHS

IHS is configured to listen for HTTP requests on port 8080. The default HTTP port for IHS is port 80 but because the process is running as a non-root user it has been changed to 8080.

Note: Only root-privileged processes can open ports below 1024.


1.3 Stop demoServer

Important: As the last step, stop the demo server. The demo server is using port 9080, which will be used by a different server in the next step of the lab.

Execute the following commands to stop the demo server.

# cd ~/IBM/wlp/
# bin/server stop demoServer

was@localhost:~/IBM/wlp$ cd ~/IBM/wlp
was@localhost:~/IBM/wlp$ bin/server stop demoServer
Stopping server demoServer.
Server demoServer stopped.

The demo server is now stopped. The next steps of this lab are to create a new Liberty server and to configure the WebSphere plugin to update the IHS routing information. Those steps will reproduce the environment demonstrated by this introduction.

Note: The IHS process does not need to be stopped. IHS handles most changes to the plugin-cfg.xml dynamically without the need to restart the IHS process.


2. Creating your first server

These steps take you through the most basic operations supported by the Liberty profile: creating a new server, deploying an application and dynamically changing the server's configuration. As part of these steps, the WebSphere plugin will be updated so IHS will route requests to the new server.

2.1 Create and start a server

Execute the following commands to create and start a new server named server1.

Note: The name server1 will be used in later instructions in this lab. Please follow all steps exactly as each step is used by subsequent steps.

# bin/server create server1
# bin/server start server1

was@localhost:~/IBM/wlp$ bin/server create server1
Server server1 created.
was@localhost:~/IBM/wlp$ bin/server start server1
Starting server server1.
Server server1 started with process ID 6336.

You can validate the server is up and running by accessing the default landing page.

Access the landing page at the URL http://localhost:9080/

The Liberty default landing page


2.2 Deploy an application

The lab materials directory contains the lab.war application which will be used in this lab. The lab.war application shows information about the server on which it is running, which will be useful when clusters are created.

An application archive placed in the dropins directory will be automatically deployed and started. Liberty supports two ways to deploy applications: either using the dropins directory demonstrated here, or through explicit configuration in the server.xml. A configuration-based deployment will be used later in this lab.
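For reference, a configuration-based deployment is a single webApplication element in server.xml, as in the following sketch (the context root value shown is illustrative). This step uses the dropins approach instead.

<webApplication location="lab.war" contextRoot="/lab" />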

Copy the lab.war into the server's dropins directory.

# cp ~/lab-materials/apps/lab.war usr/servers/server1/dropins

Go to the URL http://localhost:9080/lab

Application lab.war deployed to server1 on port 9080


2.3 Dynamically change the HTTP configuration

The Liberty profile is designed to provide a world-class application infrastructure platform as well as a compelling developer experience. The Liberty configuration is contained in the server.xml, which follows a simple, easy-to-use, configuration-by-exception format. This means that the runtime environment operates from a set of built-in configuration default settings, and you only need to specify configuration that overrides those default settings. You do this by editing either the server.xml file or another XML file that is included in server.xml at run time. Nearly all configuration properties of the Liberty server can be changed without requiring a server restart.

Edit the server's HTTP configuration by changing the HTTP port from 9080 to 9081 and the HTTPS port from 9443 to 9444.

Attention: The new port value will be used in subsequent steps in this lab. Be sure to complete this step correctly.

# gedit usr/servers/server1/server.xml

Updated server1 configuration to use port 9081
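The updated httpEndpoint element in server.xml should look similar to the following sketch (the id value shown is the usual default generated for a new server; only the port attributes need to change):

<httpEndpoint id="defaultHttpEndpoint"
              httpPort="9081"
              httpsPort="9444" />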

Save and close the file.


Access lab.war at the new URL http://localhost:9081/lab

Application lab.war deployed to server1 on port 9081

Note: The lab.war application is now being hosted by server1 on port 9081. It may take a few seconds for the old port to be stopped and the new port to start. If the app is not available, confirm the server.xml was updated correctly.

You have successfully created your first Liberty server and deployed an application. In the coming steps, this server will be added to the Liberty management domain, known as a collective. This server will also be used as the basis for the first cluster. This server will not need to be stopped in order to complete these steps because of Liberty's support for dynamic configuration changes.

Note: The dynamic configuration of Liberty can be disabled. Details are listed in the Reference Material section.

Next, IHS will be updated to route web traffic to the newly created server.


2.4 Configure IHS to route to the new server

In order to update IHS to route to the new server, a new plugin-cfg.xml will need to be generated. The WebSphere:name=com.ibm.ws.jmx.mbeans.generatePluginConfig MBean provides a method to generate the plugin-cfg.xml. Because IHS is configured to listen on non-default ports, some configuration must be specified in the server.xml in order to ensure the generated file is correct.

Add the following line to the server.xml:

<pluginConfiguration webserverPort="8080" webserverSecurePort="8443" />

# gedit usr/servers/server1/server.xml

Updated server1 configuration to specify the IHS ports

Save and close the file.


You will use jconsole to invoke the GeneratePluginConfigMBean operation to generate the plugin-cfg.xml file. jconsole is a graphical JMX tool packaged with the JDK.

Launch jconsole.

# jconsole

On the jconsole New Connection page, select "ws-server.jar server1" and click Connect.

jconsole New Connection window

If prompted to connect insecurely, click the Insecure button. (Normally this should be secured, but for simplicity in this lab we have not done so)

jconsole insecure connection prompt


Navigate to the MBeans tab and find the com.ibm.ws.jmx.mbeans.generatePluginConfig MBean under the WebSphere domain.

Expand the twisty next to the MBean and select Operations.

Click the "generateDefaultPluginConfig" button to execute the operation.

jconsole MBean tab, GeneratePluginConfigMBean operation

After the operation has completed, a plugin-cfg.xml file will be created in the server's configuration directory. This file will be copied into the WebSphere plugin configuration directory for the IHS server.

Close jconsole by clicking on the red X in the upper left corner (or press alt-F4).


The WebSphere plugin configuration directory for the IHS server is called webserver1. This is the default name used when creating a WebSphere plugin for IHS. Multiple IHS installations can share the same WebSphere plugin installation by creating multiple configuration directories.

Copy the plugin-cfg.xml into the WebSphere plugin configuration directory for the IHS server.

# cp usr/servers/server1/plugin-cfg.xml ~/IBM/WebSphere/Plugins/config/webserver1/

The WebSphere plugin will automatically load the new plugin-cfg.xml and IHS will establish a route to the new server.

Access lab.war through IHS with URL http://localhost:8080/lab

IHS routing application lab.war to server1

Note: lab.war is being hosted by server1 on port 9081 but was accessed via port 8080.


IHS has been updated to route requests to the newly created server1. If a new application were added to server1, the plugin-cfg.xml would need to be regenerated to update the routing information.

Now server1 is hosting the lab.war application and is being routed to by IHS. Next, server1 will be added to a Liberty collective.

Problem Determination

If accessing the http://localhost:8080/lab URL did not display the expected page, check the following:

1. Ensure IHS is running. If IHS is already running, the apachectl command will indicate the process has already been started.

# cd ~/IBM/HTTPServer/
# bin/apachectl start

was@localhost:~/IBM/HTTPServer$ cd ~/IBM/HTTPServer/
was@localhost:~/IBM/HTTPServer$ bin/apachectl start
httpd (pid 31189) already running

2. Inspect the plugin-cfg.xml file. The plugin-cfg.xml may have incorrect port values for IHS if the server.xml update for server1 did not take effect. This could be due to a typo in the server.xml.

# gedit ~/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml

Look for the following lines which define the VirtualHostGroup "default_host".
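The entry should look similar to this sketch (assuming the pluginConfiguration ports were picked up correctly when the file was generated):

<VirtualHostGroup Name="default_host">
    <VirtualHost Name="*:8080"/>
    <VirtualHost Name="*:8443"/>
</VirtualHostGroup>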

If the port values do not match 8080 and 8443, update the values and save the file. Also correct the configuration in the server.xml for server1 as indicated at the start of this section.

Access the http://localhost:8080/lab URL again to confirm routing now works.


3. Creating a collective

A collective is the set of Liberty servers in a single administrative domain. A collective consists of (at least) one "collective controller", a server with the collectiveController-1.0 feature enabled, and many "collective members", servers with the collectiveMember-1.0 feature enabled. A collective may be configured to have multiple collective controllers, called a replica set.

The following illustration shows an example collective topology with a replica set of 3 controllers and 5 collective members. Configuration of the replica set is not covered in this lab, but documentation is available on wasdev.net.

Example Topology: 3 controller and 5 member collective

In this lab section, a simple one controller and one member collective will be created.

Topology created by this step

The servers within a collective communicate over SSL. SSL provides the basis for the collective security model. The servers in the collective communicate with each other using signed SSL certificates. The signer is the collective root certificate. The 'collective create' command creates the initial SSL configuration, including the collective root certificate.


3.1 Create the collective controller

The collective controller is a Liberty server with the collectiveController-1.0 feature enabled. To create the collective controller, create and configure a new Liberty server called controller1.

# bin/server create controller1

was@localhost:~/IBM/wlp$ bin/server create controller1
Server controller1 created.

Execute the 'collective create' command to create the initial SSL configuration for the collective.

# bin/collective create controller1 --keystorePassword=InterConnect --createConfigFile

was@localhost:~/IBM/wlp$ bin/collective create controller1 --keystorePassword=InterConnect --createConfigFile
Creating required certificates to establish a collective...
This may take a while.
Successfully generated the controller root certificate.
Successfully generated the member root certificate.
Successfully generated the server identity certificate.
Successfully generated the HTTPS certificate.

Successfully set up collective controller configuration for controller1

Add the following lines to the server.xml to enable:

<include location="${server.config.dir}/collective-create-include.xml" />

Please ensure administrative security is configured for the server.
An administrative user is required to join members to the collective.

Note: The Liberty profile does not ship with default passwords. As such, the create command requires a keystore password. In this lab, all keystore passwords will be 'InterConnect'. Each keystore password can be different, but to keep the lab simple all passwords will be the same.


Update controller1's server.xml to include the generated XML file.

# gedit usr/servers/controller1/server.xml

Update the controller's server.xml to include the collective create include file

Save and close the file.

Note: The Liberty profile does not ship with any default administrative users. Therefore, the administrator user name and password must be specified. For the purposes of this lab, use the user name 'admin' and the password 'adminpwd' to configure the <quickStartSecurity> element, which will establish a user with the Administrator role.


Define the administrator user name and password in collective-create-include.xml. Set the user and password to admin / adminpwd.

# gedit usr/servers/controller1/collective-create-include.xml

Update the collective create include file to define the Administrator user name and password
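The added element should look like the following (quickStartSecurity defines a single user in the Administrator role):

<quickStartSecurity userName="admin" userPassword="adminpwd" />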

Save and close the file.

Now that the collective controller is fully configured, start the server.

# bin/server start controller1

was@localhost:~/IBM/wlp$ bin/server start controller1
Starting server controller1.
Server controller1 started with process ID 9099.

Hint: You can search for CWWKX9003I in the usr/servers/controller1/logs/messages.log to show the controller has successfully started.


3.2 Introducing the Admin Center

The Admin Center is the web-based user interface for Liberty. It is enabled with the adminCenter-1.0 feature. The Admin Center requires an SSL configuration and a configured Administrator user, which have already been established in the previous step.

Update the controller's configuration. The adminCenter feature could be added to either server.xml or collective-create-include.xml. In this step, the server.xml will be updated to include the adminCenter-1.0 feature.

# gedit usr/servers/controller1/server.xml

Update controller1's server.xml to include the adminCenter-1.0 feature
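A sketch of the change: add the feature to the featureManager element in server.xml. This assumes the collectiveController-1.0 feature is already enabled through the generated collective-create-include.xml, so only the new feature is shown here.

<featureManager>
    <feature>adminCenter-1.0</feature>
</featureManager>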

Save and close the file.

Log in to the Admin Center by accessing http://localhost:9080/adminCenter

You will need to trust the SSL certificate that was just created by the 'collective create' command.

Hint: If the page can not be accessed, make sure controller1 is running.


First, click "I Understand the Risks", then click “Add Exception...”

Add an Exception for the newly created SSL certificate

Next, confirm the security exception by clicking “Confirm Security Exception”.

Confirm the Security Exception

You will then be prompted with the Admin Center login page.


To log in, enter the user name 'admin' and the password 'adminpwd'. Click Log In.

Admin Center login page

Once you have logged in, you will be presented with the Toolbox. The Toolbox is a collection of tools customized for each user.

By default, the Toolbox is populated with the initial set of tools that are present in the catalog. The catalog is the set of all tools installed into the Liberty profile. Users can also add links to commonly used pages by adding a bookmark.


Admin Center toolbox

From the Toolbox, launch the Explore tool by clicking on the Explore icon.


The Explore tool provides management capabilities for servers, applications and clusters in the collective. The tool's initial view is a dashboard which displays high-level information about the elements of the topology.

The Explore tool dashboard

Currently, the collective is relatively empty. It contains one server (the collective controller) and one host (localhost). As servers and applications are added to the collective, the dashboard will dynamically update to show the current totals.

Admin Center is designed to reflect the current status of the resources in the collective in real time, without the need to refresh the page. As you progress through the lab, you will see the dashboard and other views within the Admin Center Explore tool update dynamically.

Hint: Leave the Admin Center page up and in view after these steps.

In order to see information about a server, click on the Servers section of the dashboard.


Each server in the collective will be represented by a 'card' in the All Servers view. To see the details about a server, click on that server's card.

All Servers view in Explore

Every application, server, cluster, host and runtime in the collective is represented in the Explore tool. The Server view shows key information about the server, such as its name, state, user directory, its host and the set of applications deployed to it.

The Server view in Explore

There are no applications deployed to the server, but monitoring stats are available.


Click the Monitor button to show the server's stats:

The Monitor view is available for servers and applications, and shows stats and metrics about the resource. By default, a server will have graphs displayed for the JVM's heap usage, class loader, thread count and CPU usage.

Server monitor view in Explore

Return to the Dashboard using the breadcrumb button:

Attention: Leave the Admin Center page up and in view. As you progress through the lab, you will see it update dynamically.


3.3 Add the first collective member

In this step, server1 will be added to the collective. Both new and existing Liberty servers can be easily added to a collective. The collective is designed to allow for quick and easy addition and removal of members, in order to support highly dynamic workloads.

Join server1 to the collective by executing the 'collective join' command.

Note: You will need to accept the certificate when prompted. Hit 'y' then Enter.

# bin/collective join server1 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile

was@localhost:~/IBM/wlp$ bin/collective join server1 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile
Joining the collective with target controller localhost:9443...
This may take a while.

SSL trust has not been established with the target server.

Certificate chain information:...

Do you want to accept the above certificate chain? (y/n) y (hit Enter)
Successfully completed MBean request to the controller.

Successfully joined the collective for server server1

Add the following lines to the server.xml to enable:

<include location="${server.config.dir}/collective-join-include.xml" />

Note: The host, port and user credentials specified in the join command are for the collective controller.

Hint: If you look in the Admin Center, you will see that the number of servers in the collective is now two. One server is stopped, and one server is running. The stopped server is the server that was just joined. All newly joined servers are considered stopped until they are connected to the controller, even if the server was joined while it was running.


Update the configuration for server1. Update the server.xml to include the generated XML file.

# gedit usr/servers/server1/server.xml

Update server1 configuration to include the collective join include XML.

Save and close the file.

Note: Recall that server1 is still running from earlier. This is another example of Liberty's dynamic configuration at work. As soon as the include is added to the configuration, the extra configuration is processed and server1 can now communicate with the controller.

Hint: The CWWKX8112I, CWWKX8114I and CWWKX8116I messages in the server logs indicate that the collective member is successfully communicating with the controller. The messages log file for server1 is usr/servers/server1/logs/messages.log


You will see in the Admin Center that there are now two running servers in the collective and a running application.

Updated Explore dashboard after server1 is configured as a collective member

All collective members publish information about themselves to their collective controller. The controller caches this published information so that it can be queried directly from the controller without need of forwarding the request down to each collective member. It is this cached information that is used by the Admin Center to display information about the collective.

Note: The collective controller acts as an operational and status cache. This is done so that the collective can achieve very large scale.

You now have a basic collective created, where server1 is a collective member and controller1 is a collective controller. Next, server1 and a new server, server2, will be grouped into a cluster, and IHS will be updated to route to the newly defined cluster.


4. Defining a cluster

A Liberty cluster is the set of collective members hosting the same application(s) within the same collective. A Liberty profile server can only be a member of one cluster at any given time. By default, collective members do not belong to a cluster. A member must opt in to a cluster by enabling the clusterMember-1.0 feature. By default, cluster members are part of the "defaultCluster". A specific cluster name can be specified by defining the cluster name in the server's configuration.

4.1 Configure the first cluster member

To define a cluster, add the clusterMember-1.0 feature to the server.xml of server1.

# gedit usr/servers/server1/server.xml

Updated server.xml for server1, including clusterMember-1.0 feature
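A sketch of the addition. The commented-out clusterMember element shows how a non-default cluster name could be set; the element and name shown are illustrative and are not needed here because defaultCluster is used.

<featureManager>
    <feature>clusterMember-1.0</feature>
</featureManager>

<!-- Optional: override the default cluster name -->
<!-- <clusterMember name="myCluster" /> -->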

Hint: The Admin Center detects the new cluster and shows the number of clusters to be one.

A cluster with one member is not really a cluster, so next another cluster member will be added.


4.2 Adding another cluster member

A cluster should minimally consist of two servers, but it is recommended to have three or more. The optimal size of the cluster is determined by the amount of expected application workload. In this lab, the cluster will consist of three servers.

To add a new server to the cluster, we will clone the configuration of server1. Clusters are supposed to be homogeneous; that is, every server in the cluster should have the same configuration as the others, otherwise operational issues can arise from differences between cluster member configurations. While the Liberty cluster model does not enforce homogeneity, it is recommended that you maintain it when creating and updating cluster configurations.

To clone server1, copy the server.xml and applications of server1 into server2. Note that the collective-join-include.xml file is not copied. The collective configuration for server2 will be generated when server2 is joined to the collective.

Create server2 and copy over the required files from server1.

# bin/server create server2
# cp usr/servers/server1/server.xml usr/servers/server2
# cp usr/servers/server1/dropins/lab.war usr/servers/server2/dropins/

was@localhost:~/IBM/wlp$ bin/server create server2
Server server2 created.
was@localhost:~/IBM/wlp$ cp usr/servers/server1/server.xml usr/servers/server2
was@localhost:~/IBM/wlp$ cp usr/servers/server1/dropins/lab.war usr/servers/server2/dropins/

server2 is now a clone of server1. However, because the lab has all of the servers on the same host, the ports for server2 must be updated. Before that is done, create a copy of server2 by zipping it up. This zip will be used later to quickly create a third cluster member.

# cd usr/servers/server2/
# zip -r ../server1Clone.zip *
# cd -

was@localhost:~/IBM/wlp$ cd usr/servers/server2/
was@localhost:~/IBM/wlp/usr/servers/server2$ zip -r ../server1Clone.zip *
  adding: ...
was@localhost:~/IBM/wlp/usr/servers/server2$ cd -
/home/was/IBM/wlp


Now that there is a template to clone from, change the ports in server2's server.xml to have HTTP port 9082 and HTTPS port 9445.

# gedit usr/servers/server2/server.xml

Updated server.xml for server2, assigned ports 9082 and 9445

Save and close the file.

The server is now ready to be joined to the collective. These collective operations can be scripted by setting a JVM property that auto-accepts certificates on the client side. SSL certificates can be automatically trusted by setting the JVM property -Dcom.ibm.websphere.collective.utility.autoAcceptCertificates to true.

Set the environment variable JVM_ARGS.

# export JVM_ARGS=-Dcom.ibm.websphere.collective.utility.autoAcceptCertificates=true


Join server2 to the collective. Because the server.xml for server2 is copied from server1 it already contains the include statement for collective-join-include.xml.

# bin/collective join server2 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile

was@localhost:~/IBM/wlp$ bin/collective join server2 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile
Joining the collective with target controller localhost:9443...
This may take a while.

Auto-accepting the certificate chain for target server.
Certificate subject DN: CN=localhost, OU=controller1, O=ibm, C=us

Successfully completed MBean request to the controller.

Successfully joined the collective for server server2

Add the following lines to the server.xml to enable:

<include location="${server.config.dir}/collective-join-include.xml" />

Note: If the command prompts you to accept a certificate, accept it; the prompt means the environment variable from the previous step was not set correctly. Repeat the step to export the JVM_ARGS environment variable. The join does not need to be repeated.

Hint: The Admin Center now shows 3 servers: 2 running and 1 stopped.

Start server2.

# bin/server start server2

was@localhost:~/IBM/wlp$ bin/server start server2
Starting server server2.
Server server2 started with process ID 47171.

Hint: The Admin Center now shows all 3 servers are running.


Click on the Servers panel in the Explore dashboard. You will now see cards for all 3 servers.

All Servers view in Explore, after joining and starting server2

Click on the server2 'card' to open the details page for server2.

The server view for server2

Notice that the cluster to which the server belongs is linked through the "defaultCluster" button. Click the "defaultCluster" button.


The Cluster view in Explore shows information about the cluster. You can click on the Servers and Apps button on the side panel to see the set of servers and applications provided by this cluster.

The Cluster view in Explore, defaultCluster

Return to the Dashboard using the breadcrumb button:

The cluster defaultCluster is now defined and consists of server1 and server2, each hosting a copy of lab.war. In order for IHS to route to both servers in the cluster, the plugin-cfg.xml needs to be regenerated. The plugin-cfg.xml for multiple servers can be created either by manually merging the plugin-cfg.xml of each server or, in the case of a cluster, by using the available Liberty MBeans to create a single merged plugin-cfg.xml.


4.3 Update IHS to route to the cluster

To create a new plugin-cfg.xml for the entire cluster, you will use the ClusterManager MBean on the collective controller. This MBean creates a single merged plugin-cfg.xml file for all members of the cluster. The merge is done automatically by the controller process.

Launch jconsole and connect to the controller1 process (follow the steps from before).

# jconsole

Navigate to the MBeans tab.
Find the ClusterManager MBean in the WebSphere domain.
Expand the twisty and select its Operations.
Specify the value "defaultCluster" as the argument to the "generateClusterPluginConfig" operation.
Click the "generateClusterPluginConfig" operation button. It may take a few seconds.

jconsole MBeans tab, ClusterManager operations

The return value of the operation is the path to the newly created plugin-cfg.xml file.

Close jconsole.


Copy the generated defaultCluster-plugin-cfg.xml into the WebSphere plugin configuration.

# cp usr/servers/controller1/pluginConfig/defaultCluster-plugin-cfg.xml ~/IBM/WebSphere/Plugins/config/webserver1/plugin-cfg.xml

Note: As before, IHS will automatically update its configuration.

Access the lab.war application on IHS at the URL http://localhost:8080/lab/. You will notice that the underlying server processing the request changes. You may need to refresh the page a few times to see the server name change. IHS alternates routing the requests in round-robin fashion.

The lab.war application page being hosted by the defaultCluster

Every time a new application or cluster is added to the collective, the plugin-cfg.xml will need to be regenerated. This can be arduous, if not impossible, to maintain in a highly dynamic collective. In order to provide more dynamic, intelligent routing the WebSphere plugin has been updated and a new feature has been added to Liberty: dynamicRouting-1.0.


5. Configuring dynamic routing

The new dynamic routing capability links IHS to the collective controller through the WebSphere plugin. The WebSphere plugin updates the IHS routing table automatically when new servers, clusters or applications are deployed in the collective.

5.1 Setting up dynamic routing

Enable dynamic routing in the collective controller by adding the dynamicRouting-1.0 feature to the server.xml of controller1.

# gedit usr/servers/controller1/server.xml

controller1 server.xml updated with dynamicRouting-1.0 feature
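A sketch of controller1's featureManager after this change. This assumes adminCenter-1.0 was added earlier in the lab and that the collective controller feature is still enabled through the generated include file.

<featureManager>
    <feature>adminCenter-1.0</feature>
    <feature>dynamicRouting-1.0</feature>
</featureManager>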

Save and close the file.

The dynamicRouting-1.0 feature enables RESTful APIs which the WebSphere plugin uses to get the current routing information. Like collective members, the WebSphere plugin communicates with the collective controller over an authenticated SSL connection. The plugin-cfg.xml and SSL certificates for the WebSphere plugin are created using the dynamicRouting command line utility.

Execute the "dynamicRouting setup" command to create the plugin-cfg.xml and SSL certificates.

# bin/dynamicRouting setup --port=9443 --host=localhost --user=admin --password=adminpwd --keystorePassword=InterConnect --pluginInstallRoot=/home/was/IBM/WebSphere/Plugins --webServerNames=webserver1

was@localhost:~/IBM/wlp$ bin/dynamicRouting setup --port=9443 --host=localhost --user=admin --password=adminpwd --keystorePassword=InterConnect --pluginInstallRoot=/home/was/IBM/WebSphere/Plugins --webServerNames=webserver1

Generating WebSphere plug-in configuration file for web server webserver1

Auto-accepting the certificate chain for target server.
Certificate subject DN: CN=localhost, OU=controller1, O=ibm, C=us

Successfully completed MBean request to the controller.
Successfully generated WebSphere plug-in configuration file plugin-cfg.xml
Generating keystore for web server webserver1
Successfully completed MBean request to the controller.
Successfully generated keystore plugin-key.jks.

Generated WebSphere plug-in configuration file plugin-cfg.xml for web server webserver1.
…

For example:

gskcmd -keydb -convert -pw <<password>> -db /tmp/plugin-key.jks -old_format jks -target /tmp/plugin-key.kdb -new_format cms -stash
gskcmd -cert -setdefault -pw <<password>> -db /tmp/plugin-key.kdb -label default

Copy resulting /tmp/plugin-key.kdb, .sth, .rdb files to the directory /home/was/IBM/WebSphere/Plugins/config/webserver1/

The output of the dynamicRouting command indicates the necessary steps to be performed. Those steps are covered by the instructions in this lab.


Executing the "dynamicRouting setup" operation creates two files in the current directory: plugin-cfg.xml and plugin-key.jks. The SSL certificates the WebSphere plugin will use are stored in the plugin-key.jks, but before they can be used by the plugin they need to be converted into different file types, as the WebSphere plugin requires the SSL certificates be stored in the CMS format. The conversion can be performed by using the gskcmd available in the IHS install.

Use the gskcmd to convert the JKS keys into a format the WebSphere plugin can use.

# ~/IBM/HTTPServer/bin/gskcmd -keydb -convert -pw InterConnect -db plugin-key.jks -old_format jks -target plugin-key.kdb -new_format cms -stash -expire 365
# ~/IBM/HTTPServer/bin/gskcmd -cert -setdefault -pw InterConnect -db plugin-key.kdb -label default

was@localhost:~/IBM/wlp$ ~/IBM/HTTPServer/bin/gskcmd -keydb -convert -pw InterConnect -db plugin-key.jks -old_format jks -target plugin-key.kdb -new_format cms -stash -expire 365
was@localhost:~/IBM/wlp$ ~/IBM/HTTPServer/bin/gskcmd -cert -setdefault -pw InterConnect -db plugin-key.kdb -label default

The commands convert the plugin-key.jks into three new files: plugin-key.kdb, plugin-key.rdb, and plugin-key.sth. Move the created plugin files to the WebSphere plugin configuration directory.

# mv plugin* ~/IBM/WebSphere/Plugins/config/webserver1/

Important: For the beta, there are some restrictions for dynamic routing. One restriction is the need to restart IHS for dynamic routing to take effect.

# cd ~/IBM/HTTPServer/
# bin/apachectl stop
# bin/apachectl start
# cd ~/IBM/wlp

You can now access the lab.war application through IHS at the URL http://localhost:8080/lab/.

The cluster can be expanded and new applications can be added to the collective without needing to update the plugin-cfg.xml.


5.2 Expanding the cluster

In this step, a third cluster member will be added to the defaultCluster. No changes will need to be made to the WebSphere plugin configuration as dynamic routing has now been configured.

In order to create a third cluster member, unzip the server1Clone.zip created earlier as server3.

# unzip usr/servers/server1Clone.zip -d usr/servers/server3

Edit server3's server.xml and change the HTTP port to 9083 and the HTTPS port to 9446.

# gedit usr/servers/server3/server.xml

Updated server3 configuration

Save and close the file.


Join server3 to the collective.

Note: The output of the following commands is not shown for brevity. All of these commands have already been covered in this lab.

# bin/collective join server3 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile

The configuration of server3 does not need to be changed as a result of running the join command because the server configuration already includes the collective-join-include.xml file.

Start server3.

# bin/server start server3

Note: server3 is now available. It is listed in Admin Center and is included in the IHS routing table.

Access the URL http://localhost:8080/lab/ and refresh the page a few times to see the routing balance to server3. Dynamic routing uses a "least outstanding request" algorithm, which sends each new request to the server with the fewest requests that have not yet completed. Because there are three servers in the cluster, it may be difficult to see server3 being routed to.

In order to see server3 in the routing, stop one of the servers.

Stop server2.

# bin/server stop server2

Refresh the URL http://localhost:8080/lab/ a few times to see IHS load balance between server1 and server3.

Attention: If the application is not being hosted from server3, confirm server3 is running and that it is configured for HTTP port 9083.

Finally, start server2 again.

# bin/server start server2

Now that dynamic routing is successfully configured, new applications and clusters can be added to the collective without regenerating the WebSphere plugin configuration. In the next step, a new elastic cluster will be added to the collective.


6. Auto scaled clusters

An auto scaled cluster, also known as an elastic cluster or a dynamic cluster, is a cluster whose members' life cycles are automatically controlled in response to a scaling policy. The process which controls the members' life cycles is called the scaling controller, which is enabled with the scalingController-1.0 feature. A cluster can be made auto scaled by enabling the scalingMember-1.0 feature in all of its cluster members.

6.1 Enabling the scaling controller and scaling members

The collective controller can become a scaling controller by including the scalingController-1.0 feature. Update the collective controller's server.xml.

# gedit usr/servers/controller1/server.xml

Updated controller1 server.xml with scalingController-1.0 feature
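A sketch of the cumulative featureManager for controller1 at this point in the lab (same assumptions as in the earlier sketches):

<featureManager>
    <feature>adminCenter-1.0</feature>
    <feature>dynamicRouting-1.0</feature>
    <feature>scalingController-1.0</feature>
</featureManager>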

Save and close the file.


The scaling members will be created from pre-configured servers that are provided in the lab materials. Copy the pre-configured servers into the Liberty servers directory.

# cp -r ~/lab-materials/servers/autoScaled* usr/servers/

These pre-configured servers enable the scalingMember-1.0 feature, are assigned to the "elastic" cluster, are assigned unique port values and deploy the lab.war application. The lab.war application is deployed using configuration in the server.xml and is bound to the "autoscaledlab" context root.

Pre-configured auto scaled cluster member server.xml
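A rough sketch of what such a server.xml contains. The web feature, variable names and the exact cluster element shown here are illustrative assumptions; the actual files shipped in the lab materials may differ.

<server description="auto scaled cluster member">
    <featureManager>
        <feature>servlet-3.1</feature>
        <feature>scalingMember-1.0</feature>
    </featureManager>

    <!-- Cluster assignment: all three auto scaled servers use the "elastic" cluster -->
    <clusterMember name="elastic" />

    <!-- Ports are parameterized; the values come from bootstrap.properties -->
    <httpEndpoint id="defaultHttpEndpoint"
                  httpPort="${httpPort}" httpsPort="${httpsPort}" />

    <!-- Configuration-based deployment bound to the autoscaledlab context root -->
    <webApplication location="lab.war" contextRoot="/autoscaledlab" />
</server>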

The server's ports are set using variable substitution. The variables are defined in the bootstrap.properties file. See the Reference Material for more information on variable substitution.


The bootstrap.properties file supports name-value pairs. Each server is assigned a unique set of ports by changing the values in its bootstrap.properties file. Variable substitution allows the server.xml to be parameterized, so that all servers can have identical copies of the server.xml while the unique values for each server are set in its bootstrap.properties file.

The bootstrap.properties file for autoScaled1
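
The actual property names and port values are defined in the lab materials and shown in the screen shot above. Purely as a hypothetical illustration of the name-value format (these names and ports are not taken from the lab files), autoScaled1's file could look like:

# Ports unique to autoScaled1; referenced from server.xml as ${defaultHttpPort} and ${defaultHttpsPort}
defaultHttpPort=9084
defaultHttpsPort=9447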

Join and start all 3 auto scaled servers to the collective.

Note: The output of the following commands is not shown for brevity. All of these commands have already been covered in this lab.

# bin/collective join autoScaled1 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile
# bin/collective join autoScaled2 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile
# bin/collective join autoScaled3 --host=localhost --port=9443 --user=admin --password=adminpwd --keystorePassword=InterConnect --createConfigFile

# bin/server start autoScaled1
# bin/server start autoScaled2
# bin/server start autoScaled3

Note: The auto scaled servers are now listed in Admin Center.

Because dynamic routing was enabled in the previous step, this new cluster and new application are immediately available through IHS.


Access the URL http://localhost:8080/autoscaledlab/ and refresh the page a few times to see the routing balance between the auto scaled servers.

Load balanced lab.war application at context root autoscaledlab

This is another example of the benefit of dynamic routing. In an environment which uses dynamic routing, servers, clusters and applications can be rapidly added to the collective and made available to support application workloads.

The new servers are now listed in Admin Center. From the dashboard, you will see that there are now 3 more servers for a total of 7, and a total of 2 clusters and 2 applications.

Navigate to the All Servers view.


All Servers view in Explore, with the started auto scaled servers

Notice that the server icon is different for the auto scaled servers: The arrow indicates that the server is auto scaled. You will see this arrow on any resource that has an auto scaling policy in place.

Another important difference is that life cycle operations (start, stop, and restart) for servers, clusters and applications are disabled for auto scaled resources. This is because the scaling controller starts and stops servers based on the configured policy. Servers can still be started and stopped from the command line, however.

To demonstrate the behaviour of the scaling controller, stop all of the auto scaled servers.

# bin/server stop autoScaled1
# bin/server stop autoScaled2
# bin/server stop autoScaled3

Note: The servers will show as stopped in the Admin Center.


The scaling controller will detect that the auto scaled cluster is out of policy. The default behaviour of the scaling policy is to ensure that two servers within the cluster are running at any time. In response to the cluster being out of policy, the scaling controller will automatically start two of the servers.

Note: The servers that are started are chosen by the scaling controller; they could be any two of the three servers.

The scaling policy can be changed to meet different demands and criteria. The next step will cover modifications to the scaling policy.

6.2 Changing the scaling policy

The scaling policy can specify the minimum and maximum number of cluster members to be running at any given time, as well as metrics which the scaling controller uses to decide whether the auto scaled cluster needs to be "scaled out" to increase capacity, or whether it can be "scaled in" to reduce resource utilization.

The lab materials include a pre-configured scaling policy configuration file, defined below:

<server description="Default scaling policy configuration">
    <scalingDefinitions>
        <defaultScalingPolicy min="1" max="3" enabled="true">
            <metric name="memory" min="0" max="90"/>
            <metric name="cpu" min="0" max="90"/>
            <metric name="heap" min="0" max="90"/>
            <in amount="1" units="instance" minInterval="1s"/>
            <out amount="1" units="instance" minInterval="1s"/>
        </defaultScalingPolicy>
    </scalingDefinitions>
</server>

This policy sets the minimum number of instances to one, sets the maximum number of instances to three, and defines additional criteria for the scaling metrics of the server. These health metrics are used by the scaling controller to decide whether the cluster can be scaled in or out. If the running cluster members are in violation of the bounds set by the metrics, then the cluster will be scaled out to increase capacity. If all of the metrics are satisfied, then the cluster can be scaled in to reduce resource utilization. These metrics include CPU utilization, heap utilization, and host memory utilization, all specified as percentages of the maximum value.

The scaling policy can also indicate how frequently decisions should be made to scale a cluster out or in. To demonstrate this capability in the lab, the policy is set to be evaluated every second. In production, this value should be much higher (on the order of minutes) so the scaling controller does not respond to temporary, short-lived spikes in workload.
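
For instance, a more production-oriented policy might raise the minInterval on the scale-in and scale-out actions. The duration below is a hypothetical value, assuming the usual Liberty duration syntax ("5m" for five minutes), and would be tuned to the workload:

<in amount="1" units="instance" minInterval="5m"/>
<out amount="1" units="instance" minInterval="5m"/>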


The provided scaling policy will be included in the controller's server.xml.

Copy the scalingPolicy.xml to controller1's directory.

# cp ~/lab-materials/config/scalingPolicy.xml usr/servers/controller1/

Then include this file in controller1's configuration.

# gedit usr/servers/controller1/server.xml

Updated controller1 server.xml including scalingPolicy.xml
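
The screen shot above shows the complete updated file; the essential change is an include element added alongside the existing controller configuration, along these lines (the relative location assumes the file sits next to server.xml, as copied above):

<include location="scalingPolicy.xml" />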

In response to this policy change, the scaling controller will stop one of the two running auto scaled servers. The minimum and maximum number of instances can be modified to define the lower and upper bounds of the size of the cluster.

Note: The maximum bound is also limited by the number of servers in the cluster available for the scaling controller to start.


The policy defines that a server JVM is allowed to be between 0% and 90% for its memory usage, CPU usage and heap usage. When a server JVM exceeds the defined policy metrics, a new server will be started in order to spread the workload.

To trigger the scaling policy to scale out the cluster, modify the scaling policy to set the maximum allowable heap size to 1%. This will simulate a JVM exceeding normal heap usage.

# gedit usr/servers/controller1/scalingPolicy.xml

Updated controller1 scalingPolicy.xml with a maximum allowable heap size of 1%
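
The edit shown in the screen shot amounts to lowering the heap metric's upper bound in the policy file copied earlier, roughly as follows:

<metric name="heap" min="0" max="1"/>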

Save and close the file.

In response to the policy requirement that all server JVMs have no greater than 1% heap utilization, the scaling controller will start all available servers in order to best satisfy the policy constraints.

For more information on the scaling policy configuration, see the Reference Material.


7. Tagging and Searching

New for InterConnect 2015, the collective now supports setting administrative metadata for servers and clusters. This metadata includes tags, owner and contact information, and a free-form note field. The administrative metadata for a server or cluster is set using the new admin-metadata.xml file.

Tags allow you to create relationships and groups of resources. Tags are text values, are stored as lower-case, and do not support spaces. The owner, contact and note attributes preserve case and support spaces. Tags, along with other criteria, can now be searched on in Admin Center, allowing you to quickly find what you are looking for.

7.1 Setting tags for a server and cluster

In this step, you will set the initial admin-metadata.xml for the servers in the defaultCluster. A pre-defined admin-metadata.xml file is available in the lab materials directory.

<admin-metadata>
    <server owner="Your Name">
        <tag>lab</tag>
        <tag>interconnect</tag>
        <contact>Name a friend</contact>
        <note>This is a generic note for the server</note>
    </server>
    <cluster owner="Your Name">
        <tag>lab</tag>
        <tag>static</tag>
        <contact>Name a friend</contact>
        <note>This is a generic note for the cluster</note>
    </cluster>
</admin-metadata>

Multiple tags and contacts can be specified per resource. Only one owner and one note field can be specified per resource. The cluster tags, owner, contacts and note are defined in each server which belongs to the cluster. If the cluster is not homogeneous, the effective values will be those of the admin-metadata.xml of the last server to be started. Remember, it is recommended that clusters be homogeneous.

Copy the admin-metadata.xml in the lab materials to server1's configuration directory:

# cp ~/lab-materials/config/admin-metadata.xml usr/servers/server1


Open the admin-metadata.xml and set the server and cluster owner to be your name, and set the contact to be a friend's name.

# gedit usr/servers/server1/admin-metadata.xml

admin-metadata.xml updated to include your name and a friend's name
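
As a sketch, with "Jane Doe" and "John Smith" as arbitrary placeholder names rather than values from the lab, the edited server element might read:

<server owner="Jane Doe">
    <tag>lab</tag>
    <tag>interconnect</tag>
    <contact>John Smith</contact>
    <note>This is a generic note for the server</note>
</server>

The cluster element is updated in the same way.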

Note: The screen shot includes example names; use any names you wish.

Save and close the file.

Copy the updated admin-metadata.xml file to the other servers in the defaultCluster.

# cp usr/servers/server1/admin-metadata.xml usr/servers/server2
# cp usr/servers/server1/admin-metadata.xml usr/servers/server3

Note: As a beta restriction, setting the initial tags requires a server restart and the Admin Center page to be refreshed.


Navigate to the All Clusters view and restart the cluster.

Restart defaultCluster

Note: The restart operation may take some time as it is restarting 3 servers.

defaultCluster is restarting


Once the cluster has restarted, click on the defaultCluster card to open the cluster details view. By default, the metadata is not displayed. Click on the button to see the tags and other metadata.

Cluster view for defaultCluster, after tags have been assigned

Note: If the button is not displayed, refresh the page. You may need to refresh the page a few times until the metadata is published.


The cluster now has its tags, owner, contacts and note assigned.

Cluster view for defaultCluster, with tags displayed

By default, the contacts list is collapsed. To expand the contacts, click on the "…" icon:

Setting metadata is typically done at deployment time, but the metadata can be changed while Liberty is running.

Note: As a beta restriction, Admin Center does not dynamically update the tags it displays. To see the updated tags, refresh the Admin Center page.

Tagging is particularly useful for grouping like resources. The auto scaled servers created in the previous step included an admin-metadata.xml file, which set similar tags.

To find other clusters with the lab tag, click on the button. This will launch the search view.


7.2 Searching in Admin Center

The search view provides a search bar which enables construction of search queries.

Search view, matching all clusters with the tag "lab"

By clicking on the lab tag, the search view was launched with the following search criteria: find all clusters with the tag "lab".

The search view enables searching for applications, clusters, servers and hosts in the collective. The search bar is comprised of 'pills'. Additional criteria can be added by clicking the button. To execute the search, hit Enter or click the button. To clear the search, click the button.

Thank you! This concludes the lab.

Please take some time to experiment and play around with Liberty!


Reference Material

WASDev.net

http://wasdev.net

IBM HTTP Server

http://www-01.ibm.com/support/knowledgecenter/SSEQTJ_8.5.5/as_ditamaps/was855_welcome_ihs.html

Downloading & Installing Liberty

https://developer.ibm.com/wasdev/downloads/

Installing IBM HTTP Server

http://www-01.ibm.com/support/knowledgecenter/SSEQTJ_8.5.5/com.ibm.websphere.ihs.doc/ihs/welc6miginstallihsdist.html

Configuring IHS and the WebSphere plugin for use with Liberty

http://www14.software.ibm.com/webapp/wsbroker/redirect?version=phil&product=was-base-dist&topic=twlp_admin_webserver_plugin

Dynamic Routing configuration

http://www-01.ibm.com/support/knowledgecenter/was_beta_liberty/com.ibm.websphere.wlp.nd.multiplatform.doc/ae/twlp_wve_enabledynrout.html

Disabling dynamic configuration updates

The dynamic configuration update behaviour for Liberty can be disabled with the following configuration property:

<config updateTrigger="disabled" />
