
Draft Document for Review July 17, 2020 5:53 pm SG24-8482-00

Redbooks

Front cover

IBM Power Systems for SAS Viya 3.5 Deployment Guide

Dino Quintero

Beth L. Hoffman

Sandy Kao

Abhijit Mane

IBM Redbooks

IBM Power Systems for SAS Viya 3.5 Deployment Guide

July 2020


SG24-8482-00

© Copyright International Business Machines Corporation 2020. All rights reserved.
Note to U.S. Government Users Restricted Rights -- Use, duplication or disclosure restricted by GSA ADP Schedule Contract with IBM Corp.


First Edition (July 2020)

This edition applies to:
SAS Viya 3.5
Red Hat Enterprise Linux for IBM Power Systems LE (POWER9) 7.6

This document was created or updated on July 10, 2020.

Note: Before using this information and the product it supports, read the information in “Notices” on page v.


Contents

Notices
Trademarks

Preface
Authors
Now you can become a published author, too!
Comments welcome
Stay connected to IBM Redbooks

Chapter 1. Planning your deployment
1.1 Planning your deployment
1.2 Summary
1.3 Get more information

Chapter 2. IBM POWER9 deployment options
2.1 POWER9 deployment options
2.1.1 SMP mode deployment options
2.1.2 MPP mode deployment options
2.2 Network configuration options
2.3 POWER9 hardware minimum configuration options
2.3.1 Single-server hardware configurations
2.3.2 Single-partition hardware configurations
2.3.3 Multi-server minimum hardware configuration
2.3.4 Multi-partition minimum hardware configuration
2.4 Storage considerations
2.4.1 IBM FlashSystem family
2.4.2 NVMe protocol
2.4.3 IBM Spectrum Virtualize for Public Cloud
2.4.4 IBM SAN Volume Controller
2.4.5 Additional information
2.5 Sizing considerations
2.6 High availability considerations
2.7 Disaster recovery considerations

Chapter 3. SAS Viya installation on POWER9
3.1 Installing SAS Viya on POWER9
3.2 Tuning suggested practices
3.2.1 Best frequency for performance critical workload
3.2.2 Simultaneous multithreading
3.2.3 Minimize NUMA effects in multi-partition configuration with PowerVM
3.2.4 Network tuning in a MPP environment
3.2.5 Large memory capacity and high data bandwidth
3.2.6 Fast storage devices for optimal performance

Related publications
IBM Redbooks
Online resources
Help from IBM


Notices

This information was developed for products and services offered in the US. This material might be available from IBM in other languages. However, you may be required to own a copy of the product or product version in that language in order to access it.

IBM may not offer the products, services, or features discussed in this document in other countries. Consult your local IBM representative for information on the products and services currently available in your area. Any reference to an IBM product, program, or service is not intended to state or imply that only that IBM product, program, or service may be used. Any functionally equivalent product, program, or service that does not infringe any IBM intellectual property right may be used instead. However, it is the user’s responsibility to evaluate and verify the operation of any non-IBM product, program, or service.

IBM may have patents or pending patent applications covering subject matter described in this document. The furnishing of this document does not grant you any license to these patents. You can send license inquiries, in writing, to: IBM Director of Licensing, IBM Corporation, North Castle Drive, MD-NC119, Armonk, NY 10504-1785, US

INTERNATIONAL BUSINESS MACHINES CORPORATION PROVIDES THIS PUBLICATION “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESS OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE. Some jurisdictions do not allow disclaimer of express or implied warranties in certain transactions, therefore, this statement may not apply to you.

This information could include technical inaccuracies or typographical errors. Changes are periodically made to the information herein; these changes will be incorporated in new editions of the publication. IBM may make improvements and/or changes in the product(s) and/or the program(s) described in this publication at any time without notice.

Any references in this information to non-IBM websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this IBM product and use of those websites is at your own risk.

IBM may use or distribute any of the information you provide in any way it believes appropriate without incurring any obligation to you.

The performance data and client examples cited are presented for illustrative purposes only. Actual performance results may vary depending on specific configurations and operating conditions.

Information concerning non-IBM products was obtained from the suppliers of those products, their published announcements or other publicly available sources. IBM has not tested those products and cannot confirm the accuracy of performance, compatibility or any other claims related to non-IBM products. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

Statements regarding IBM’s future direction or intent are subject to change or withdrawal without notice, and represent goals and objectives only.

This information contains examples of data and reports used in daily business operations. To illustrate them as completely as possible, the examples include the names of individuals, companies, brands, and products. All of these names are fictitious and any similarity to actual people or business enterprises is entirely coincidental.

COPYRIGHT LICENSE:

This information contains sample application programs in source language, which illustrate programming techniques on various operating platforms. You may copy, modify, and distribute these sample programs in any form without payment to IBM, for the purposes of developing, using, marketing or distributing application programs conforming to the application programming interface for the operating platform for which the sample programs are written. These examples have not been thoroughly tested under all conditions. IBM, therefore, cannot guarantee or imply reliability, serviceability, or function of these programs. The sample programs are provided “AS IS”, without warranty of any kind. IBM shall not be liable for any damages arising out of your use of the sample programs.


Trademarks

IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation, registered in many jurisdictions worldwide. Other product and service names might be trademarks of IBM or other companies. A current list of IBM trademarks is available on the web at “Copyright and trademark information” at http://www.ibm.com/legal/copytrade.shtml

The following terms are trademarks or registered trademarks of International Business Machines Corporation, and might also be trademarks or registered trademarks in other countries.

Redbooks (logo)®
AIX®
FlashCopy®
HyperSwap®
IBM®
IBM Cloud®
IBM Elastic Storage®
IBM FlashCore®
IBM FlashSystem®
IBM Spectrum®
POWER®
POWER9™
PowerVM®
Redbooks®

The following terms are trademarks of other companies:

Intel, Intel logo, Intel Inside logo, and Intel Centrino logo are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

The registered trademark Linux® is used pursuant to a sublicense from the Linux Foundation, the exclusive licensee of Linus Torvalds, owner of the mark on a worldwide basis.

Red Hat is a trademark or registered trademark of Red Hat, Inc. or its subsidiaries in the United States and other countries.

Other company, product, or service names may be trademarks or service marks of others.


Preface

This IBM® Redbooks® publication provides options and best practices for deploying SAS Viya 3.5 on IBM POWER9™ servers.

SAS Viya is a complex set of artificial intelligence (AI) and analytics solutions that require a properly planned infrastructure to meet the needs of the data scientists, business analysts, and application developers who will use Viya capabilities in their daily work activities. Regardless of the user role, the underlying infrastructure matters to ensure performance expectations and service level agreement (SLA) requirements are met or exceeded. Although the general planning process is similar for deploying SAS Viya on any platform, key IBM POWER9 differentiators must be considered to ensure that an optimized infrastructure deployment is achieved.

This guide provides useful information needed during the planning, sizing, ordering, installing, configuring, and tuning phases of your SAS Viya deployment on POWER9 processor-based servers.

This book addresses topics for IT architects, IT specialists, developers, sellers, and anyone looking to implement SAS Viya 3.5 on IBM POWER9 servers. Moreover, this publication provides documentation to transfer the how-to skills to the technical teams, and solution guidance to the sales teams. This book complements the documentation available at the IBM Knowledge Center and aligns with the educational materials provided by the IBM Systems Software Education (SSE).

Authors

This book was produced by a team of specialists from around the world working at IBM Redbooks, Poughkeepsie Center.

Dino Quintero is an IT Management Consultant and an IBM Level 3 Senior Certified IT Specialist with IBM Redbooks in Poughkeepsie, New York. He has 24 years of experience with IBM Power Systems technologies and solutions. Dino shares his technical computing passion and expertise by leading teams developing technical content in the areas of enterprise continuous availability, enterprise systems management, high-performance computing, cloud computing, artificial intelligence including machine and deep learning, and cognitive solutions. He also is a Certified Open Group Distinguished IT Specialist. Dino is formerly from the province of Chiriqui in Panama. Dino holds a Master of Computing Information Systems degree and a Bachelor of Science degree in Computer Science from Marist College.

Beth L. Hoffman is an Executive IT Specialist and Solution Architect in the IBM Cognitive Systems ISV Ecosystem Technical Development organization. She has led the enablement of AI and analytics solutions on IBM Power Systems for the past 10 years. Beth has more than 15 years of experience consulting and collaborating with key software solution companies.

Sandy Kao is an Open Source Ecosystem Enablement and Optimization Engineer in the IBM Cognitive Systems ISV Ecosystem Technical Development organization. Sandy has more than 15 years of experience specializing in software performance and optimization. Most recently, she has led the optimization of key AI and open source data solutions on IBM Power Systems.


Abhijit Mane is a Technical Consultant in the IBM Cognitive Systems ISV Ecosystem Technical Development organization. He leads the enablement of AI and analytics solutions on IBM Power Systems.

Thanks to the following people for their contributions to this project:

Wade Wallace
IBM Redbooks, Poughkeepsie Center

Axel Westphal
IBM Germany

John Power
IBM USA

Reinaldo Tetsuo Katahira
IBM Brazil

Now you can become a published author, too!

Here’s an opportunity to spotlight your skills, grow your career, and become a published author—all at the same time! Join an IBM Redbooks residency project and help write a book in your area of expertise, while honing your experience using leading-edge technologies. Your efforts will help to increase product acceptance and customer satisfaction, as you expand your network of technical contacts and relationships. Residencies run from two to six weeks in length, and you can participate either in person or as a remote resident working from your home base.

Find out more about the residency program, browse the residency index, and apply online at:

ibm.com/redbooks/residencies.html

Comments welcome

Your comments are important to us!

We want our books to be as helpful as possible. Send us your comments about this book or other IBM Redbooks publications in one of the following ways:

• Use the online Contact us review Redbooks form found at:
  ibm.com/redbooks

• Send your comments in an email to:
  [email protected]

• Mail your comments to:
  IBM Corporation, IBM Redbooks
  Dept. HYTD Mail Station P099
  2455 South Road
  Poughkeepsie, NY 12601-5400


Stay connected to IBM Redbooks

• Find us on Facebook:
  http://www.facebook.com/IBMRedbooks

• Follow us on Twitter:
  http://twitter.com/ibmredbooks

• Look for us on LinkedIn:
  http://www.linkedin.com/groups?home=&gid=2130806

• Explore new Redbooks publications, residencies, and workshops with the IBM Redbooks weekly newsletter:
  https://www.redbooks.ibm.com/Redbooks.nsf/subscribe?OpenForm

• Stay current on recent Redbooks publications with RSS Feeds:
  http://www.redbooks.ibm.com/rss.html


Chapter 1. Planning your deployment

This chapter provides the initial steps for planning your deployment of SAS Viya on IBM POWER®.

SAS Viya is a complex set of artificial intelligence (AI) and analytics solutions that require a properly planned infrastructure to meet the needs of the data scientists, business analysts, and application developers who will use Viya capabilities in their daily work activities. Regardless of the user role, the underlying infrastructure matters to ensure performance expectations and service level agreement (SLA) requirements are met or exceeded. Although the general planning process is similar for deploying SAS Viya on any platform, key IBM POWER9 differentiators must be considered to ensure that an optimized infrastructure deployment is achieved.

This guide provides useful information needed during the planning, sizing, ordering, installing, configuring, and tuning phases of your SAS Viya deployment on POWER9 processor-based servers.

This chapter contains the following:

• Planning your deployment
• Summary
• Get more information


1.1 Planning your deployment

The first step in planning your deployment is to understand your end users' requirements. It is important to know which SAS Viya capabilities will be used, who will use them, and what data will be used. These fundamentals determine the optimal infrastructure environment to support the users. Another important factor in the decision is whether the SAS Viya environment needs to integrate with an existing SAS 9.4 environment.

In addition, this information, along with the users' performance expectations, helps determine the size and configuration of each server, and how many servers are needed for an initial SAS Viya deployment.

SAS provides a deployment guide for SAS Viya on Linux. This guide must be read and used during the planning phase to understand the Viya deployment modes and other software and environmental requirements. You can find the latest guide at:

https://support.sas.com/en/documentation/install-center/viya/deployment-guides.html

Beyond this, IBM Power Systems offer several additional flexible deployment options for SAS Viya, which are outlined in Chapter 2, “IBM POWER9 deployment options” on page 3. Although the deployment options include example server configurations for the SAS Viya workload, clients must engage their SAS account team to request workload sizing assistance from the SAS Enterprise Excellence Center (EEC) sizing team. This ensures that the hardware environment is properly sized for the client's specific SAS workload.

1.2 Summary

SAS Viya is a complex set of AI and analytics solutions that require a properly planned infrastructure to meet user requirements. The underlying infrastructure matters to ensure performance expectations and SLA requirements are met. Key POWER9 configurations must be considered to ensure that an optimized infrastructure deployment is achieved. Contact your IBM and SAS sales representatives for any questions or assistance with selecting the correct POWER9 deployment and configuration for your needs.

1.3 Get more information

To learn more about SAS Viya on IBM Power Systems, contact your IBM representative or IBM Business Partner, or visit the following website:

https://www.ibm.com/it-infrastructure/power/capabilities/sas-viya


Chapter 2. IBM POWER9 deployment options

This chapter describes SAS Viya deployment options on POWER9.

The IBM Power System portfolio of servers enables flexible deployment options for running SAS Viya. The IBM portfolio offers ultra-flexible systems with the highest reliability1. In addition, the Enterprise and scale-out servers can run Linux in addition to IBM AIX® and IBM i and allow for the consolidation of multiple workloads on a single system which can reduce data center footprint. An example of this is deploying multiple SAS workloads on the same system. Customers can also take advantage of advanced virtualization features, such as capacity on demand (CoD), to dynamically activate processor capacity and memory resources for ease of scalability and flexibility in adapting to growing demands on your infrastructure.

This chapter contains the following:

• POWER9 deployment options
• Network configuration options
• POWER9 hardware minimum configuration options
• Storage considerations
• Sizing considerations
• High availability considerations
• Disaster recovery considerations


1 ITIC Global Server Hardware, Server OS Reliability Survey: https://www.ibm.com/downloads/cas/YGLRKEEK


2.1 POWER9 deployment options

SAS Viya can run in two modes, symmetric multiprocessing (SMP) and massively parallel processing (MPP).

The SMP mode uses multiple cores or processors on just one node or server. SMP deployment options are appropriate when doing a proof-of-concept (PoC) or when the Viya workload requirements do not initially require a cluster. SMP mode deployments are also great options for programming-only environments and application development. There are two SMP mode deployment options for IBM Power Systems.

The MPP mode uses a cluster of servers or partitions to run SAS Viya. In the MPP mode, SAS Cloud Analytics Services (CAS) Workers offload the analytics and spread the data out to allow parallel loading of data and processing on multiple worker nodes. MPP offers a growth path and maximizes scaling.
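SAS delivers Viya 3.5 for Linux as an Ansible playbook, and the choice between SMP and MPP is expressed through the CAS host groups in the playbook's inventory file. The fragment below is a hypothetical sketch: the host names are invented, and the group names follow the conventions of the SAS-supplied sas_viya_playbook, so verify both against the inventory template shipped with your software order.

```ini
# Hypothetical inventory fragment; host names are invented and group names
# should be verified against your downloaded sas_viya_playbook.

# CAS controller (present in both SMP and MPP deployments)
[sas_casserver_primary]
viya-cas01

# CAS workers: list hosts here for MPP mode; leave empty for SMP mode
[sas_casserver_worker]
viya-worker01
viya-worker02
viya-worker03
```

Adding a worker later is then a matter of appending a host to the worker group and rerunning the playbook, which is what makes the MPP options below a natural growth path.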

2.1.1 SMP mode deployment options

Figure 2-1 and Figure 2-2 on page 5 show the simplest options for deploying SAS Viya on Power Systems. These are SMP mode deployments. There are two variations that use different types of Power Systems.

Figure 2-1 shows a scale-out single-server deployment, where SAS Viya and all its complementary software are installed on one POWER9 server. An IBM Power System AC922 server model is shown as an example. The Power AC922 server is a powerful system built for analytics that can run both CPU and GPU workloads. This server supports NVIDIA GPUs connected through NVLink 2.0, an NVIDIA technology that is unique to POWER9 processor-based servers and is embedded within the NVIDIA GPUs. NVLink 2.0 provides 5.6 times more data bandwidth for data movement between the CPU and GPUs.

Figure 2-1 SMP mode: Scale-out single-server


Figure 2-2 shows a scale-up single-partition deployment, where SAS Viya including CAS and all its complementary software are installed on one POWER9 partition. In the POWER9 processor-based server family there are several scale-up server models to choose from, the IBM Power System E950 being one of them. An IBM Power System E980 server can also be used. These enterprise-class servers are large; you can use IBM PowerVM® virtualization to create a separate Linux partition, or a logical partition (LPAR), for installing SAS Viya. This SMP deployment option does not require the entire large server, so it is ideal for customers who currently run SAS 9.4 in an IBM AIX partition where they can use untapped resources (such as cores and memory) for deploying SAS Viya.

Figure 2-2 SMP mode: Scale-up single partition
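Before carving a Viya LPAR out of an existing enterprise server, it helps to confirm what an existing Linux partition currently sees. A quick check using standard Linux tools (the figures reported reflect only that partition's allocation, not the full capacity of the underlying server):

```shell
# Show the architecture and core/thread topology visible to this partition.
# On a PowerVM LPAR these reflect the LPAR's allocation, not the whole
# E950/E980 server.
lscpu | grep -E '^(Architecture|CPU\(s\)|Thread|Core|Socket)'

# Memory allocated to this partition
grep MemTotal /proc/meminfo
```

Comparing these numbers with the partition's entitlement on the HMC shows how much untapped capacity is available for a new Viya partition.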

2.1.2 MPP mode deployment options

In MPP mode, CAS supports workers to help offload the analytics and spread the data out to allow for parallel processing on multiple workers. There are three MPP options.


Figure 2-3 shows a scale-out multi-server deployment, and is where SAS Viya, including the CAS controller and CAS workers, and all its complementary software are installed across a cluster of POWER9 scale-out servers. Figure 2-3 shows a cluster of Power AC922 servers. As growth is experienced, you can scale out by adding more servers to the cluster.

Figure 2-3 MPP mode: Scale-out multi-server


Figure 2-4 shows another MPP mode deployment, where the analytics and data are spread out to allow for parallel processing on multiple worker nodes. This example is a scale-up multi-partition deployment, where all the same SAS Viya software is installed across a virtual cluster of POWER9 partitions. Figure 2-4 shows the set of PowerVM partitions on a Power E980 server that run the Viya software (in green). These partitions can be added to an existing server where other workloads are already running, such as SAS 9.4 in an AIX partition, SAP HANA in a Linux partition, or a partition that runs system backups. The new Viya partitions are separate from, and coexist with, the other partitions. The advantage of this scale-up environment is that you can grow without necessarily ordering more hardware: you can activate unused cores and memory, and either add resources to an existing LPAR or add more LPARs, such as additional worker nodes. You also benefit from a simplified network configuration between the nodes because all nodes are on the same server. This scale-up MPP deployment option is applicable to CPU-based workloads.

Figure 2-4 MPP mode: Scale-up multi-partition


Figure 2-5 shows an MPP SAS Viya deployment in a mixed environment. SAS Viya is installed across a set of partitions on a scale-up server and one or more scale-out servers. In this case, the CAS controller and management-type services run in the partitions on the left, and CAS worker nodes run either in partitions (for non-GPU workloads) or on scale-out servers (for CPU or GPU workloads). Again, the SAS Viya components are shown in green and non-Viya software in the other colors. This environment combines the benefits of the other options: parallel loading and processing of data, a growth path that scales by using unused resources (adding more partitions, adding resources to existing partitions, or adding worker nodes), simplified networking between LPAR nodes, support for GPU-enabled workloads as they are added, and the performance benefits of NVLink 2.0 technology.

Figure 2-5 MPP mode: Mixed environment

Table 2-1 shows a summary of the different deployment options along with a summary of the key differences and the use cases that are good fits.

Table 2-1 Summary of POWER9 deployment options for SAS Viya

POWER9 deployment options Description

Single-server on accelerated compute server. � This is ideal for PoCs, small workloads that do not require a cluster, or when a few application developers use a SAS Viya programming-only environment.

� Provides options for bare metal, and CPU and GPU.

8 IBM Power Systems for SAS Viya 3.5 Deployment Guide

Draft Document for Review July 17, 2020 5:48 pm 8482ch02.fm

2.2 Network configuration options

Each deployment option shown in 2.1, “POWER9 deployment options” on page 4 can be configured using either internal storage devices or external storage solutions for the workload source data. In either case, the servers used for a SAS Viya deployment need to be networked together. Figure 2-6 on page 10, Figure 2-7 on page 11, Figure 2-8 on page 13, Figure 2-9 on page 14, Figure 2-10 on page 15, Figure 2-11 on page 16, Figure 2-12 on page 17, Figure 2-13 on page 18 and Figure 2-14 on page 19 provide a network configuration example for each deployment option, illustrating how the servers and storage can be networked and connected. The following types of networks can be considered in your network design:

� Basic network: This provides users access to the system and provides a path for administrators to perform systems management.

Single-partition on a PowerVM-based server. � This is ideal for PoCs, small workloads that do not require a cluster, or when a few application developers use a SAS Viya programming-only environment.

� This is also ideal for clients already running SAS 9.4 on AIX who want to integrate with SAS Viya and take advantage of unused POWER9 system resources.

� Supports PowerVM virtualization and CPUs only.

Multi-servers on accelerated compute servers. � This is ideal for SAS Viya full deployment environments of any size.

� Offers a growth path and maximizes scaling.� Supports parallel loading of data and

processing. Options for bare metal, and CPU and GPU.

Multi-partitions on a PowerVM-based server. � This is ideal for SAS Viya full deployment environments.

� Offers a growth path and maximizes scaling leveraging existing unused server resources.

� This is also ideal for clients already running SAS 9.4 on AIX who want to integrate with SAS Viya.

� Has simplified networking between nodes.� Supports parallel loading of data and

processing.� Supports PowerVM virtualization and CPUs

only.

Mixed environment multi-servers with both PowerVM-based and accelerated compute servers.

� This is ideal for SAS Viya full deployment environments of any size.

� Offers a growth path and maximizes scaling by leveraging existing unused server resources and adding more servers as needed (for example, when GPU-enabled workloads are needed).

POWER9 deployment options Description

Chapter 2. IBM POWER9 deployment options 9

8482ch02.fm Draft Document for Review July 17, 2020 5:48 pm

� Node network: This is typically a private network that is used to provide high-speed, high-bandwidth, and low-latency communication between the nodes in the cluster. This needs to be a minimum of 10 Gb network.

- External data network: This provides a separate connection for data transmission between the nodes and the external data source.

With the single-server and single-partition deployments, the network design is simpler because it does not require internode connectivity. With the multi-partition deployments, the internode connectivity is simplified because all partitions are on the same physical server.
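As a rough sanity check on the 10 Gb minimum for the node network, the raw link-rate conversion can be sketched as follows (the function name is illustrative; usable payload bandwidth is lower after protocol overhead):

```python
# Convert an Ethernet link speed in gigabits per second to gigabytes per second.
# This is the theoretical maximum; TCP/IP and Ethernet framing reduce it further.
def gbps_to_gigabytes_per_s(gbps: float) -> float:
    return gbps / 8.0

print(gbps_to_gigabytes_per_s(10))    # 1.25  -> a 10 GbE node network tops out near 1.25 GB/s
print(gbps_to_gigabytes_per_s(100))   # 12.5  -> a 100 GbE option raises that ceiling tenfold
```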

Figure 2-6 shows an example of a single-server deployment with a bare metal Power AC922 server running Red Hat Enterprise Linux 7.6. For internal storage, solid-state drive (SSD) or NVMe devices (shown in green) can be used in the POWER processor-based server. Figure 2-6 also illustrates the use of a two-port 32 Gb Fibre Channel card (shown in orange) if workload source data will be accessed from an external storage solution such as IBM FlashSystem®. The on-board 1 GbE Ethernet adapter can be used for basic network access.

Figure 2-6 Single-server (internal and external storage)

10 IBM Power Systems for SAS Viya 3.5 Deployment Guide


Figure 2-7 illustrates an example of a single-partition deployment using one Red Hat Enterprise Linux LPAR on an IBM Power E980 server. For internal storage, SSD or NVMe devices can be used in the POWER processor-based server. Figure 2-7 illustrates the use of a two-port 32 Gb Fibre Channel card, dedicated to the LPAR, if workload source data will be accessed from an external storage solution such as IBM FlashSystem. This example uses VIOS for virtualized network and internal storage. A 1 GbE or 10 GbE Ethernet adapter can be used for basic network access.

Figure 2-7 Single-partition (internal and external storage)


Figure 2-8 on page 13 is an example of a multi-server deployment with a set of bare metal Power AC922 servers. This example shows the use of two NVMe adapters (shown in green) in the POWER processor-based server. One 10 GbE or 100 GbE two-port adapter on each server can be used for internode communications. The on-board 1 GbE Ethernet adapters can be used for basic network access.


Figure 2-8 Multi-server with internal storage


Figure 2-9 shows that you can add two-port 32 Gb Fibre Channel cards (shown in orange) if workload source data will be accessed from an external storage solution such as IBM FlashSystem.

Figure 2-9 Multi-server with external storage


Figure 2-10 illustrates an example of a multi-partition deployment using a set of Red Hat Enterprise Linux LPARs on an IBM Power E980 server. This example shows the use of one NVMe adapter (shown in green) in each LPAR. One 10 GbE or 100 GbE two-port adapter can be configured for each LPAR for internode communications. The VIOS is used for virtualized network and internal storage. 10 GbE Ethernet adapters can be used for basic network access.

Figure 2-10 Multi-partition with internal storage


Figure 2-11 illustrates the use of two-port 32 Gb Fibre Channel cards (shown in orange) if workload source data will be accessed from an external storage solution such as IBM FlashSystem.

Figure 2-11 Multi-partition with external storage


Figure 2-12 illustrates an example of a mixed server deployment using a set of Red Hat Enterprise Linux LPARs on an IBM Power E980 server and a set of bare metal Power AC922 servers. Each LPAR and server uses one or more NVMe adapters (shown in green) for internal storage. One 10 GbE two-port adapter is configured for each LPAR and each server for internode communications. The VIOS is used for virtualized network and internal storage on the Power E980 server. Either 1 GbE or 10 GbE Ethernet adapters can be used for basic network access.

Figure 2-12 Mixed-environment with internal storage


Figure 2-13 illustrates the use of two-port 32 Gb Fibre Channel cards (shown in orange) if workload source data will be accessed from an external storage solution such as IBM FlashSystem.

Figure 2-13 Mixed-environment with external storage


Figure 2-14 illustrates the use of two-port 10 GbE or 100 GbE Ethernet adapters (shown in orange) if workload source data will be accessed from an external storage solution such as IBM Elastic Storage® System (ESS) 3000.

Figure 2-14 Mixed-environment with network attached storage appliance

2.3 POWER9 hardware minimum configuration options

This section describes the minimum hardware configurations for the various deployment options. When ordering POWER9 servers for a SAS Viya deployment, the SAS section in the IBM POWER eConfigurator tool reflects the minimum configuration options documented here.

There are three types of storage required. The Viya management services (non-worker node functions) need two drives, and solid-state drives (SSDs) are sufficient. The CAS_Disk_Cache (CDC) requires internal temporary storage. The workload source data requires permanent storage, and it is best to have at least two drives for striping the data. For a configuration that allows for growth, consider a larger number of cores per processor and more memory. With the AC922 8335-GTH type and model, a maximum of two CPUs can be ordered (with up to 20 cores each) and up to four GPUs if the client’s Viya workloads can make use of GPUs. If six GPUs are needed in one server, the server must be the water-cooled model. Note that Viya 3.5 cannot take advantage of more than four GPUs.
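The configuration tables in this section use a minimum workload-data throughput goal of 25 MBps per core. A minimal sketch of that arithmetic (names invented for illustration, not SAS tooling):

```python
# Aggregate storage bandwidth the workload-data devices should sustain,
# based on the 25 MBps per core minimum goal cited in the configuration tables.
def required_storage_mbps(total_cores: int, mbps_per_core: float = 25.0) -> float:
    return total_cores * mbps_per_core

# A two-socket, 16-core-per-socket AC922 has 32 cores:
print(required_storage_mbps(32))   # 800.0 MBps aggregate goal
```

This is why striping workload data across two or more drives is preferred: a single device often cannot sustain the aggregate goal by itself.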

For more information about storage, refer to 2.4, “Storage considerations” on page 23. For more information about networking options, refer to 2.2, “Network configuration options” on page 9.


2.3.1 Single-server hardware configurations

Table 2-2 shows the minimum configuration for an IBM Power AC922 server when it is used in a single-server deployment. The minimum configuration is dictated by a combination of the hardware configuration minimum of the server components and the SAS Viya software needs. Memory needs to be at least 16 GB per core unless more is needed to load the data into memory.
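The 16 GB per core rule can be sketched as a small helper (hypothetical names; not a SAS sizing tool):

```python
# Minimum memory estimate: 16 GB per core, or more if the working data set
# must be loaded into memory. Integer GB values keep the arithmetic exact.
def min_memory_gb(total_cores: int, data_set_gb: int = 0) -> int:
    return max(16 * total_cores, data_set_gb)

# Two 16-core POWER9 processors = 32 cores:
print(min_memory_gb(32))        # 512 -> matches the 512 GB minimum in Table 2-2
print(min_memory_gb(32, 800))   # 800 -> the data set exceeds the per-core floor
```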

Table 2-2   Use cases: PoCs, small projects, programming-only, application development

IBM Power AC922 | Minimum configuration | Growth configuration | Notes
POWER9 server, type-model | IBM Power AC922, 8335-GTH, air-cooled | IBM Power AC922, 8335-GTH, air-cooled | N/A
Processors, cores per processor | Two, 16 cores | Two, 20 cores | N/A
Memory | 512 GB | Up to 2 TB | 16 GB per core minimum
Storage: permanent, management services (internal); temporary, CDC (internal); permanent, workload data (internal or external) | Two SSDs; one SSD or NVMe; two SSD or NVMe | Two SSDs; one SSD or NVMe; two SSD or NVMe | Minimum goal is 25 MBps per core; prefer striping across more than one drive
Network adapter: basic access; external data (optional) | One 1 GbE; one 32 Gb HBA or 100 GbE NIC card | One 1 GbE; up to two 32 Gb HBA or 100 GbE NIC cards | NIC card bandwidth is important; the external data adapter is optional
GPUs: NVIDIA Tesla V100 GPU accelerators with NVLink | Two | Four | Six GPUs require the water-cooled model; Viya 3.5 cannot take advantage of more than four GPUs
GPU cache | 16 GB | 32 GB | N/A

2.3.2 Single-partition hardware configurations

Table 2-3 on page 21 shows the minimum configuration for an enterprise-class server when it is used in a single-partition deployment. This configuration requires the use of PowerVM to create the partition where SAS Viya will be deployed. The minimum number of cores required by SAS is 16. The enterprise-class server models offer large configuration sizes (for cores and memory) that allow for growth and leave room for other workloads in other partitions on the same server.

Table 2-3   Use cases: PoCs, small projects, programming-only, application development

IBM POWER9 enterprise-class server | Minimum configuration | Growth configuration
Processors, cores per processor | Two, 8 cores | Up to two, 96 cores
Memory | 512 GB | Up to 64 TB
Storage: permanent, management services (internal); temporary, CDC (internal); permanent, workload data (internal or external) | Two SSDs; one SSD or NVMe; two SSD or NVMe | Two SSDs; up to 16 NVMe cards
Network adapter: basic access; external data (optional) | One 1 GbE; one 32 Gb HBA or 100 GbE NIC card | One 1 GbE; one 32 Gb HBA or 100 GbE NIC card
Virtualization hypervisor | IBM PowerVM | IBM PowerVM

2.3.3 Multi-server minimum hardware configuration

Table 2-4 shows the minimum configuration for a cluster of IBM Power AC922 servers. The top row shows the different types of SAS Viya nodes. The three management-type nodes require less memory than the worker nodes and need internal drives for management-related data. There are three different types of worker nodes. One worker node must be designated as a secondary CAS controller. One type of worker node supports CPU-based workloads, and the third type supports GPU-based workloads such as Visual Data Mining and Machine Learning (VDMML). The GPU-based worker nodes are optional and are currently used only if VDMML is deployed.

Table 2-4   Use cases: Visual analytics [CPU-enabled, VDMML (GPU-enabled)]

SAS Viya node types (table columns):
(1) CAS controller (primary)
(2) Viya microservices and other processes
(3) SAS Programming Runtime
(4) CAS worker, CPU-enabled workloads, plus CAS controller (secondary)
(5) CAS worker, CPU-enabled workloads
(6) VDMML/GPU-enabled workloads

POWER9 server, type-model | AC922, 8335-GTH, air-cooled, for all node types
Minimum quantity | 1 for node types 1-5; 0 for node type 6
Processors, cores per processor | Two, 16 cores, for all node types
Memory | 512 GB for all node types


Table 2-4 (continued). Node types: (1) CAS controller (primary); (2) Viya microservices and other processes; (3) SAS Programming Runtime; (4) CAS worker, CPU-enabled, plus CAS controller (secondary); (5) CAS worker, CPU-enabled; (6) VDMML/GPU-enabled.

Storage for management services (a) | Two SSD for node types 1-3; N/A for node types 4-6
Storage (a): CDC (temporary); workload data (internal or external) | Node type 1: one SSD or NVMe; two SSD or NVMe. Node types 2-3: N/A. Node types 4-6: one SSD or NVMe; two SSD or NVMe
Network adapter: basic access; node interconnectivity; external data (optional) | Node types 1-3: one 1 GbE; one 10 GbE. Node types 4-6: one 1 GbE; one 10 GbE; one 32 Gb HBA or 100 GbE NIC card
GPUs: NVIDIA Tesla V100 GPUs with NVLink | N/A for node types 1-5; two for node type 6
GPU cache | N/A for node types 1-5; 16 GB for node type 6

a. The storage is assumed to be internal storage for permanent purposes unless otherwise noted.

2.3.4 Multi-partition minimum hardware configuration

Table 2-5 shows the minimum configuration for a cluster of enterprise-class server partitions. Because of the flexibility of partition configurations, the number of cores and amount of memory can be smaller to match the minimum required by the SAS Viya software. Because GPUs are not available on enterprise-class servers, there is no GPU-based worker node type. VDMML workloads can still run on enterprise-class servers using CPUs for processing.

Table 2-5   Use cases: Visual analytics, VDMML

SAS Viya node types for the IBM POWER9 enterprise-class server (table columns):
(1) CAS controller (primary)
(2) Viya microservices and other processes
(3) SAS Programming Runtime
(4) CAS worker, CPU-enabled workload, plus CAS controller (secondary)
(5) CAS worker, CPU-enabled workload

Minimum quantity of partitions | 1 for each node type
Processors, cores per processor | Two, 8 cores, for each node type


For mixed environment deployments where the management nodes are installed on a set of enterprise-class server partitions and worker nodes are deployed on enterprise-class server partitions and Power AC922 servers, use Table 2-4 on page 21 and Table 2-5 on page 22 for configuration guidance.

2.4 Storage considerations

SAS Viya requires storage for proper functioning. It allows some flexibility in the type and location of the storage drives or devices, while recommending internal drives for specific elements. SAS documents several key storage considerations in the CAS storage performance considerations paper2, with a focus on making the best storage choice to ensure that optimal Viya performance is achieved. It is important to read and apply those recommendations.

IBM Power Systems offer a wide variety of SSD and NVMe devices that can be used in SAS Viya deployments. The device sizes vary depending on the specific server type and model and provide the performance and capacity required by SAS Viya.

Table 2-5 (continued). Node types: (1) CAS controller (primary); (2) Viya microservices and other processes; (3) SAS Programming Runtime; (4) CAS worker, CPU-enabled, plus CAS controller (secondary); (5) CAS worker, CPU-enabled.

Memory | 512 GB for each node type
Storage for management services (a) | Two SSD for node types 1-3; N/A for node types 4-5
Storage (a): CDC (temporary); workload data (internal or external) | Node type 1: one SSD or NVMe; two SSD or NVMe. Node types 2-3: N/A. Node types 4-5: one SSD or NVMe; two SSD or NVMe
Network adapter: basic access; node interconnectivity; external data (optional) | Node types 1-3: one 1 GbE; one 10 GbE. Node types 4-5: one 1 GbE; one 10 GbE; one 32 Gb HBA or 100 GbE NIC card
Virtualization hypervisor | PowerVM for each node type

a. The storage is assumed to be internal storage for permanent purposes unless otherwise noted.

There are three primary types of Viya data-related storage entities. These types are mentioned in Table 2-2 on page 20, Table 2-3 on page 21, Table 2-4 on page 21, Table 2-5 on page 22, and 2.3, “POWER9 hardware minimum configuration options” on page 19. Data and software associated with Viya-related management services, such as the CAS controller and microservices, must reside on internal storage (that is, front-facing devices). Generally, two or more write-intensive, high-performance SSDs are sufficient. The CDC is an on-device extension of memory-mapped files and also acts as a backing store for memory-mapped file segments. The CDC requires high bandwidth and low latency. It is best provisioned as multiple write-intensive, high-performance SSDs or NVMe devices. Enterprise-class storage devices with the largest drive-writes-per-day (DWPD) metric can be considered. The number and architecture of the devices (file system, parity, and striping choices) must be guided by the SAS Infrastructure Assessment activities to fit the EEC sizing or hardware estimate workload guidelines.

2 Engineering CAS Performance: Hardware Network and Storage Considerations for CAS servers: https://www.sas.com/content/dam/SAS/support/en/sas-global-forum-proceedings/2019/3351-2019.pdf
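The DWPD guidance for CDC devices can be illustrated with a small endurance calculation (all input numbers are hypothetical examples, not recommendations):

```python
# Drive-writes-per-day (DWPD) each device must sustain when the daily CDC write
# volume is spread evenly across a set of identical drives.
def required_dwpd(daily_writes_tb: float, drive_capacity_tb: float, drives: int) -> float:
    return daily_writes_tb / (drive_capacity_tb * drives)

# Example: 30 TB of CDC writes per day across four 3.84 TB NVMe devices.
print(round(required_dwpd(30.0, 3.84, 4), 2))   # 1.95
```

A result near 2 suggests choosing write-intensive (higher-DWPD) enterprise devices rather than read-optimized ones.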

The SAS Programming Run-time (SPRE) node needs persistent storage for files that must persist between runs, and it needs a SAS WORK file system, similar to SAS 9.4.

Persistent storage is where all Viya workload source and result data persistently resides. Although this can be internal (collocated) storage, it is often external storage due to the volume of data. A high-performance Fibre Channel or Ethernet attached storage solution can be used for CASLIB PATH serial or distributed NFS (DNFS) parallel access to CAS nodes for processing.

IBM offers several storage solutions that are great choices for SAS Viya environments. The IBM FlashSystem family is a strong choice for Fibre Channel attached storage with high bandwidth and low latency. You can see example network configurations for a SAS Viya deployment that leverages IBM POWER9 servers and the FlashSystem 9200 in 2.2, “Network configuration options” on page 9 of this document.

2.4.1 IBM FlashSystem family

The IBM FlashSystem family combines the performance of flash and end-to-end Non-Volatile Memory Express (NVMe) with the following benefits for SAS Viya environments:

- Reliability and innovation of IBM FlashCore® technology
- Ultra-low latency of Storage Class Memory (SCM)
- Rich features of IBM Spectrum Virtualize
- AI predictive storage management
- Proactive support by Storage Insights

The IBM FlashSystem 9200, shown in Figure 2-15, is a powerful 2U enterprise-class all-flash array. Providing intensive data-driven multicloud storage capacity, FlashSystem helps you easily add the multicloud solutions that best support your most critical demands.

Figure 2-15 IBM FlashSystem 9200 control enclosure

Note: IBM Spectrum® Scale can support HDFS via the Hadoop Connector.


2.4.2 NVMe protocol

NVM Express (NVMe) is an optimized, high-performance scalable host controller interface designed to address the needs of systems that utilize PCI Express-based solid-state storage. The NVMe protocol is an interface specification for communicating with storage devices. It is functionally analogous to other protocols, such as SAS. However, the NVMe interface has been designed for extremely fast storage media, such as flash-based solid-state drives (SSDs) and low-latency non-volatile storage technologies.

NVMe storage devices are typically directly attached to a host system over a PCI Express (PCIe) bus. That is, the NVMe controller is contained in the storage device itself, alleviating the need for an additional I/O controller between the CPU and the storage device. This architecture results in lower latency, throughput scalability, and simpler system designs. The NVMe protocol supports multiple I/O queues, whereas traditional SAS and SATA protocols use only a single queue.
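The benefit of many deep queues can be approximated with Little's law: sustained IOPS is roughly the number of outstanding I/Os divided by the per-I/O latency. A sketch with example numbers (illustrative only, not measured figures):

```python
# Little's-law estimate of the IOPS ceiling for a given number of outstanding
# I/Os and a fixed per-I/O service latency (in seconds).
def max_iops(outstanding_ios: int, latency_s: float) -> float:
    return outstanding_ios / latency_s

# A single queue of depth 32 at 100 microseconds per I/O (NAND-class latency):
print(round(max_iops(32, 100e-6)))     # 320000
# NVMe's multiple queues can keep far more I/Os outstanding, for example 1024:
print(round(max_iops(1024, 100e-6)))   # 10240000
```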

The PCIe bus, however, presents certain challenges for storage systems at scale. Practical challenges limit the number of NVMe drives that can be attached to a host over PCIe to a few tens of devices. NVMe over Fabrics (NVMe-oF) overcomes the limitations of the PCIe bus by extending the benefits of low latency and high efficiency of the NVMe technology across network fabrics to support sharing of NVMe storage at a large scale (100s or 1000s of devices) and over distance. Refer to Figure 2-16.

Figure 2-16 The traditional I/O stack and the new paradigm3

According to Fibre Channel Industry Association (FCIA) publications4, as a result of the simplicity, parallelism, and efficiency of NVMe, it delivers significant performance gains over the SCSI protocol.

3 Storage and the NVM Express Revolution: http://www.redbooks.ibm.com/abstracts/redp5437.html?Open
4 Shared Storage with NVMe: https://fibrechannel.org/shared-storage-with-nvme/


2.4.3 IBM Spectrum Virtualize for Public Cloud

Available in IBM Cloud® and AWS, IBM Spectrum Virtualize is the leading software-defined storage that has been proven for years in IBM SAN Volume Controller (SVC) and FlashSystem families of storage systems with more than 180,000 systems running IBM Spectrum Virtualize and managing more than 11 exabytes of data. IBM Spectrum Virtualize for Public Cloud provides the following features:

- Migrates SAS Viya data between on-premises and public cloud data centers or between public cloud data centers.

- Implements disaster recovery strategies between on-premises and public cloud data centers.

- Enables cloud-based DevOps with easy replication of data from on-premises sources.

- Improves cyber resiliency with SAS Viya data backups on AWS using air gap snapshots to S3.

These all-flash systems include IBM Spectrum Virtualize software and introduce remarkable new features in comparison to the predecessor models:

- End-to-end NVMe support: NVMe is a logical device interface standard from 2011 for accessing non-volatile storage media that is attached by way of a PCI Express bus.

- Lower latencies through RDMA: Direct memory access from the memory of one node into that of another without involving either one's operating system.

- Data reduction pools (DRP) represent a significant enhancement to the storage pool concept. With the introduction of data reduction technology, compression, and deduplication, an uncomplicated way to stay thin has become more of a requirement.

- FlashCore Modules (FCMs) or industry-standard NVMe drives can be used for the IBM FlashSystems. If the FCM option is chosen, the user can take advantage of the built-in hardware compression, which automatically tries to compress the stored data when it is written to the drives.

- Thin-provisioned IBM FlashCopy® uses disk space only when updates are made to the source or target data, and not for the entire capacity of a volume copy.

- HyperSwap® capability enables each volume to be presented by two IBM FlashSystems. This high-availability configuration tolerates combinations of node and site failures, using a host multipathing driver based on the one that is available for the regular IBM FlashSystem.

- The IBM FlashSystem 9200 supports the new low-latency, high-speed Storage Class Memory (SCM). SCM is a non-volatile memory device that performs faster (~10 µs) than traditional NAND SSDs (~100 µs), but slower than DRAM (~100 ns).

- IBM Storage Insights is an additional part of the monitoring capability of the IBM FlashSystem 9200 system and supplements the views available in the GUI.
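The latency figures quoted for the storage tiers can be put side by side (approximate values from the text above):

```python
# Approximate per-access latency for each tier, in nanoseconds:
# DRAM ~100 ns, Storage Class Memory ~10 us, NAND SSD ~100 us.
latency_ns = {"DRAM": 100, "SCM": 10_000, "NAND SSD": 100_000}

print(latency_ns["SCM"] // latency_ns["DRAM"])      # 100 -> SCM is ~100x slower than DRAM
print(latency_ns["NAND SSD"] // latency_ns["SCM"])  # 10  -> NAND SSDs are ~10x slower than SCM
```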

For more information about the IBM FlashSystem family, refer to the following data sheet:

https://www.ibm.com/downloads/cas/AR09E0LV

2.4.4 IBM SAN Volume Controller

Built with IBM Spectrum Virtualize software, IBM SAN Volume Controller is a hardware and software storage virtualization solution through which diverse storage devices can be grouped into a common pool that is available to the SAN. IBM SAN Volume Controller also assists SAS Viya customers, who are seeking to simplify storage management and reduce storage costs with SAN installations.

IBM SAN Volume Controller offers the following highlights:


- Transparent migration of back-end storage for performance tuning and disk migration without needing to stop or modify the application.

- 25 Gbps iSCSI server attachment that supports high performance at a lower cost.

- Takes control of existing storage, retaining all existing information, which helps to speed and simplify implementation in addition to minimizing the need for additional storage.

- When combined with IBM Spectrum Control, allows the user to tune storage and improve performance and utilization.

- Provides information about the amount of available storage, the amount of storage being used at which location, the hosts that are attached to which storage, and potential or actual outages, performance reporting, and trending.

- Scales from small configurations (1 TB) to large enterprises (500 TB and larger). New IBM SAN Volume Controller engines deliver dramatically better throughput, supporting larger and more I/O intensive environments.

- IBM SAN Volume Controller supports more than 500 flash, hybrid, and disk storage systems from IBM and other storage vendors5.

IBM SAN Volume Controller provides in-band storage virtualization by creating a pool of managed disks from attached back-end disk storage systems. These managed disks are then mapped to a set of virtual disks for use by various host computer systems. IBM SAN Volume Controller is flexible and can manage all host storage requirements. Additionally, it also offers an alternative for FlashCopy, VolumeCopy, and Enhanced Remote Mirroring for disaster recovery, maintenance, and high availability.

Disk storage systems with a lower internal cache have performance value based on the amount of cache that they can offer. Introducing IBM SAN Volume Controller in front of these storage systems improves the overall performance. Cache also reduces the latency of write commands by completing them without sending the write blocks to disks.

IBM SAN Volume Controller, as shown in Figure 2-17, consists of two nodes for high availability. All write blocks are stored in both nodes. If a disk failure occurs, written blocks are stored in a cache. The blocks are then marked as a pinned memory block. For read cache, all IBM SAN Volume Controller memory is available, minus the pinned blocks if they exist.

Figure 2-17 IBM SAN Volume Controller nodes

5 IBM SVC data sheet: https://www.ibm.com/downloads/cas/JOWZXNMW


2.4.5 Additional information

For more information about IBM SAN Volume Controller, see the following resources:

- Implementing the IBM SAN Volume Controller with IBM Spectrum Virtualize V8.3.1:

http://www.redbooks.ibm.com/redpieces/abstracts/sg248465.html?Open

- IBM Knowledge Center: IBM SAN Volume Controller product overview:

https://www.ibm.com/support/knowledgecenter/en/STPVGU_8.3.1/com.ibm.storage.svc.console.831.doc/svc_svcovr_1bcfiq.html

IBM ESS 3000 is a network storage appliance and comes with high performance IBM Spectrum Scale software. An example network configuration for a SAS Viya deployment that leverages IBM POWER9 server and ESS 3000 can be found in Figure 2-14 on page 19.

SAS Viya can be used with a variety of file systems. For example, on Red Hat Enterprise Linux V7, XFS is a high performance, scalable file system and is the default. ext4 is also supported. Both have extent-based allocation schemes which help in overall file system throughput.

When deploying CAS on an MPP system with a secondary CAS controller, a shared file system is needed. It is also needed when using a Direct Network File System (DNFS) type of data source (introduced for accessing data from conventional storage solutions) where CAS nodes have concurrent access to the same data to support parallel loading.

IBM Spectrum Scale clustered file system is a great choice for this purpose. GFS2 can also be used.

When a Hadoop cluster and SASHDAT (SAS proprietary file format) files are used with SAS Viya, Hadoop Distributed File System (HDFS) is the natural choice especially when Hadoop data nodes are co-located with the SAS Viya CAS nodes.

With the CAS_disk_cache, only local file systems such as XFS or ext4 are supported. Cluster file systems are not supported.
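A minimal illustration (not SAS tooling) of this constraint expressed as a validation check; the function and set names are invented:

```python
# File system types acceptable for CAS_disk_cache: local file systems only.
LOCAL_FS = {"xfs", "ext4"}

def valid_cdc_fs(fstype: str) -> bool:
    """Return True if a file system type is suitable for CAS_disk_cache."""
    return fstype.lower() in LOCAL_FS

print(valid_cdc_fs("XFS"))    # True  -> local file system, supported
print(valid_cdc_fs("gpfs"))   # False -> clustered (Spectrum Scale), not supported for CDC
```

In practice the file system type of a candidate mount point can be read from the output of `df -T` or `/proc/mounts` on the CAS node before assigning it to the cache.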

To learn more about file system requirements, refer to the SAS Viya Deployment Guide at:

https://support.sas.com/en/documentation/install-center/viya/deployment-guides.html

2.5 Sizing considerations

The client requirements for analytics workloads and usage will influence the decision for which POWER9 deployment option and hardware configuration is best to deploy. The sizing process is critical for assessing the requirements and making that determination. The sizing process involves experts within SAS and IBM. Key inputs are needed from the client to start the sizing process. Table 2-6 shows a list of the types of input that must be gathered before starting the sizing process. After the key inputs are known, engage with your SAS account team to submit a sizing request to the SAS EEC Sizing team.

Table 2-6   Examples of sizing input needed

Input required | Description | Server specifications influenced
Workload type | Type of applications; analytics CPU-enabled or GPU-enabled | Server model; number of servers; number of GPUs
SAS products to be installed | Type of workloads allowed to be processed | Server model; number of servers
Number of users/concurrent users | Estimated number of authorized SAS users by workload type, if known | Number of servers; overall size of server
Data set size | GB, TB, or PB of data to be ingested and analyzed | Amount of memory; type of storage (for example, internal versus external); speed of storage (for example, HDD, SSD, NVMe)
Data set growth rate in 1-3 years | Helps select the correct initial configuration that will enable easy scaling and growth | Number of servers; amount of memory; type of storage (for example, internal versus external)
Location of data | Where will workload data be saved | Type and number of adapters; type of storage (for example, internal versus external)
Interested in using available space on an existing enterprise server | Helps determine whether available resources are sufficient for planned usage | Server model

2.6 High availability considerations

It is important to read and understand the HA options for SAS Viya deployments documented by SAS Institute. Here are three important HA documents written by SAS experts:

- Proper Planning Prevents Possible Problems: SAS Viya High-Availability Considerations

https://www.sas.com/content/dam/SAS/support/en/sas-global-forum-proceedings/2019/3481-2019.pdf

- SAS Viya: Architect for High Availability Now and Users Will Thank You Later

https://www.sas.com/content/dam/SAS/support/en/sas-global-forum-proceedings/2018/1835-2018.pdf

- Optimize SAS Viya: Dividing Load through the Use of Multiple SAS Cloud Analytic Services Servers

https://www.sas.com/content/dam/SAS/support/en/sas-global-forum-proceedings/2020/4582-2020.pdf

High availability of SAS Viya is achieved through the load balancing of system components. If you plan to use high availability, you must understand your options and decide how and when to set it up. The SAS Viya components to be load balanced are divided into four categories:

1. SAS Cloud Analytic Services (CAS).

2. SAS Viya Programming Run-Time Servers.

3. Microservices.

4. Infrastructure servers (often run on the same server as Microservices).

HA is essentially accomplished by having a secondary or backup server for each of these four components. For some components, HA is achieved through automatic failover by the SAS component; for others, failover is manual. Refer to the SAS documents previously mentioned for more details.
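The secondary-server idea can be sketched as a toy failover selector (hostnames and the health model are invented for illustration; real failover is handled by the SAS components or by an administrator):

```python
# Each HA-relevant component category pairs a primary host with a secondary host.
components = {
    "CAS": {"primary": "cas1", "secondary": "cas2"},
    "programming-runtime": {"primary": "spre1", "secondary": "spre2"},
}

def active_host(component: str, healthy_hosts: set) -> str:
    """Route to the primary while it is healthy; otherwise fail over."""
    pair = components[component]
    return pair["primary"] if pair["primary"] in healthy_hosts else pair["secondary"]

print(active_host("CAS", {"cas1", "cas2"}))   # cas1
print(active_host("CAS", {"cas2"}))           # cas2  (primary down, failover)
```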

When implementing HA for SAS Viya in a POWER environment that uses only POWER9 accelerated servers (for example, AC922 and IC922 servers), the HA techniques and methods remain the same as for other platforms: you essentially add a second server for each component. When implementing HA in a SAS Viya environment installed across PowerVM LPARs on a POWER9 enterprise server, a second POWER9 enterprise server is required for installing each of the secondary or backup SAS Viya components, each in its own separate LPAR. The second enterprise server's LPARs can be configured with shared resources so that the majority of cores and memory are allocated to the secondary LPARs only when the primary enterprise server or any of its LPARs becomes unusable and HA is activated. This allows the hardware resources on the second enterprise server to be used primarily by other mission-critical solutions running in other LPARs on that same server. For guidance on configuring shared resources with PowerVM, read the resources available at this website:

https://www.ibm.com/us-en/marketplace/ibm-powervm

Alternatively, high availability can also be achieved using front-end load-balancing across multiple SAS Viya environments. You can read more in this document:

� High Availability with SAS Viya 3.4: Front-end Load-Balancing Considerations

https://communities.sas.com/t5/SAS-Communities-Library/High-Availability-with-SAS-Viya-3-4-Front-end-Load-Balancing/ta-p/595753

2.7 Disaster recovery considerations

For mission-critical business operations that take advantage of SAS Viya software, it is important to consider a deployment architecture that provides continued operations and sufficient recovery in the case of a disaster. Common disaster recovery (DR) methods must be evaluated against the specific SAS Viya SLA and usage requirements. Instead of splitting a cluster of SAS Viya services across multiple sites, the approach recommended by SAS Institute is to deploy separate, independent instances of the SAS solution at each site. With that approach, the challenges become keeping the SAS sites synchronized and determining when the DR site must become operational. Synchronization requires a process that copies updates from the primary site to the DR site on a regular basis.

For more information, read Contemplating Disaster Recovery for SAS at the following website:

https://communities.sas.com/t5/SAS-Communities-Library/Contemplating-disaster-recovery-for-SAS/ta-p/479700


Chapter 3. SAS Viya installation on POWER9

This chapter discusses SAS Viya deployment guidelines and tuning suggested practices on POWER9.

This chapter contains the following:

� Installing SAS Viya on POWER9
� Tuning suggested practices


3.1 Installing SAS Viya on POWER9

SAS Viya 3.5 must be installed following the instructions documented in the SAS Viya 3.5 for Linux Deployment Guide. You can find this guide at:

https://support.sas.com/en/documentation/install-center/viya/deployment-guides.html

The steps for installing it on an IBM POWER9 processor-based system running Red Hat Enterprise Linux are identical. If Viya will be installed in one or more PowerVM partitions, those partitions must be created before the Viya software installation process begins.

SAS Viya is supported only on Red Hat Enterprise Linux for IBM Power Systems LE (POWER9) 7.6 (sometimes referred to as Red Hat Enterprise Linux 7.6alt). SAS Viya cannot be installed on Red Hat Enterprise Linux 7.6.3.10 classic because that release does not fully support the POWER9 instruction set.
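Before running the deployment playbook, the platform requirement above can be checked with a small shell helper. This is a sketch, not part of the SAS documentation; the helper name and the accepted version string are assumptions:

```shell
# Preflight sketch: the host must be ppc64le and RHEL 7.6 (the POWER9
# "alt" stream). The accepted version string is an assumption for this
# SAS Viya release; adjust it as SAS support broadens.
viya_platform_ok() {
  arch=$1
  version=$2
  if [ "$arch" = "ppc64le" ] && [ "$version" = "7.6" ]; then
    echo "platform ok"
  else
    echo "unsupported: $arch RHEL $version"
  fi
}

# On a real host: viya_platform_ok "$(uname -m)" "$(. /etc/os-release; echo $VERSION_ID)"
viya_platform_ok ppc64le 7.6   # -> platform ok
viya_platform_ok x86_64 7.6    # -> unsupported: x86_64 RHEL 7.6
```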

3.2 Tuning suggested practices

There are several recommendations for running SAS Viya on POWER9 servers.

3.2.1 Best frequency for performance-critical workloads

On a bare metal server, the CPU energy governor is controlled by the operating system. Set the CPUfreq governor to performance mode for best performance. Use the cpupower command to verify the performance policy and set the CPUfreq governor as follows:

cpupower -c all frequency-info
cpupower -c all frequency-set -g performance
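To confirm that every CPU picked up the change, the frequency-info output can be scanned for any governor other than performance. A minimal sketch, assuming the common `The governor "..."` line format emitted by cpupower (verify against your cpupower version):

```shell
# Count CPUs whose current governor is not "performance" in the output of
# `cpupower -c all frequency-info` piped on stdin. The governor line
# format is an assumption; adjust the pattern for your cpupower version.
count_non_performance() {
  awk '
    /The governor "/ {
      total++
      if ($0 !~ /"performance"/) bad++
    }
    END { printf "%d of %d CPUs not in performance mode\n", bad + 0, total + 0 }
  '
}

# Demo with captured sample output (two CPUs, one still on "powersave"):
sample='analyzing CPU 0:
  The governor "performance" may decide which speed to use
analyzing CPU 1:
  The governor "powersave" may decide which speed to use'
printf '%s\n' "$sample" | count_non_performance
# -> 1 of 2 CPUs not in performance mode
```

On a live system this would be run as `cpupower -c all frequency-info | count_non_performance`.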

On a PowerVM server, the controls for all power saver modes are available in the Advanced System Management Interface (ASMI) and can be modified dynamically as shown in Figure 3-1 on page 33. The default performance mode depends on the POWER9 processor-based server model. For Power E980 systems, Maximum Performance mode is enabled by default. More detailed information about POWER9 power and performance management is available in section 2.1.10 of the following publication:

http://www.redbooks.ibm.com/redpapers/pdfs/redp5510.pdf


Figure 3-1 Advanced System Management Interface: Power and Performance Mode Setup

3.2.2 Simultaneous multithreading

Simultaneous multithreading (SMT) enables a single physical processor core to simultaneously dispatch instructions from more than one hardware thread context. With SMT, each POWER9 core can present up to eight hardware threads, so more instructions can run at the same time. SMT8 is the default for the IBM Power System E980, and SMT4 is the default for the IBM Power System AC922. POWER9 processor-based servers offer up to eight threads per core, whereas Intel processor-based servers provide only two threads per core.

However, SAS Viya automatically determines parallelization based on the number of available hardware threads, and the application can over-parallelize, with more than 1000 threads running concurrently. In certain workloads, large numbers of runnable threads can cause lock contention (Example 3-2 on page 34) and longer runtimes. The lock contention is most


prominent when the NUMA assignment is not optimal for the LPARs. In such situations, reducing SMT to four or two can alleviate the lock contention and improve runtime. Use the ppc64_cpu command to set the SMT mode to 4 or 2, as shown in Example 3-1.

Example 3-1 Setting the SMT mode; vmstat shows a high number of runnable threads in the first column (r)

ppc64_cpu --smt=4
ppc64_cpu --smt=2

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
   r  b swpd      free buff     cache si so      bi     bo     in     cs us sy id wa st
1303  2    0 131319808  512 248346368  0  0 1890659     57 106992 136739 83 17  0  0  0
1180  1    0 127116992  512 258590144  0  0 1944883    258 104255 143950 88 12  0  0  0
1073  4    0 118337728  512 266641664  0  0 1553706 319264 101604 105751 79 21  0  0  0
1506  5    0 107629248  512 275541312  0  0 1631779 182636 119852 192827 78 21  1  0  0

Example 3-2 shows output of the perf tool with lock contention.

Example 3-2 Profile from the perf tool showing lock contention at the top of the list

# Overhead  Command  Shared Object      Symbol
# ........  .......  .................  ............................
    39.16%  cas      [kernel.kallsyms]  [k] _raw_spin_lock
     5.93%  cas      airlev.so          [.] IPRA.$GetClasVals
     5.58%  cas      libessl.so.1       [.] datb17v
     5.24%  cas      airlev.so          [.] IPRA.$MakeOneEffectXBuf
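A quick way to spot sustained over-parallelization is to count how many vmstat samples show more runnable threads than the partition has hardware threads. A rough sketch (the helper name is illustrative; in practice the thread count comes from `nproc`):

```shell
# Sketch: count vmstat samples whose runnable count (first column)
# exceeds the number of online hardware threads. The thread count is a
# parameter here so the filter is testable; normally it comes from nproc.
count_overloaded_samples() {
  threads=$1
  # Skip the two vmstat header lines, then compare column 1 numerically.
  awk -v t="$threads" 'NR > 2 && $1 + 0 > t + 0 { n++ } END { print n + 0 }'
}

# Demo: two of the four samples from Example 3-1 exceed 1200 threads.
printf 'procs\n r b\n1303 2\n1180 1\n1073 4\n1506 5\n' | count_overloaded_samples 1200
# -> 2
```

On a live system this would be fed by `vmstat 5 12 | count_overloaded_samples "$(nproc)"`.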

3.2.3 Minimize NUMA effects in multi-partition configuration with PowerVM

Most multi-socket servers experience nonuniform memory access (NUMA) effects that can affect performance. The optimal partition assignment keeps the partition within a single NUMA node on a single socket. If the required core count exceeds a single NUMA node, try to contain the partition within two NUMA nodes in the same Central Electronics Complex (CEC) to preserve memory affinity.

When a partition is created or deployed, the PowerVM hypervisor chooses the physical resources to assign to the partition. The hypervisor has information about the dual inline memory modules (DIMMs), processor sockets, I/O devices, and so on, and uses this location information to optimize the resources assigned to partitions. Over time, however, when multiple partitions are defined on a server, especially with the use of functions such as dynamic LPAR (DLPAR), the assignment of resources can become less than optimal.

It is a good practice to use the numactl Linux command to check the NUMA node assignment in each partition to ensure optimal memory placement:

numactl --hardware

If a partition is unnecessarily assigned across multiple NUMA nodes, reconfiguring the memory assignment can improve the performance of the partition.
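A small filter over the `numactl --hardware` output can make the node assignment easy to eyeball across many partitions. A sketch, assuming the typical `available:` and `node N size:` line formats:

```shell
# Sketch: summarize `numactl --hardware` output (on stdin): report each
# node's memory size and the node count. The "node N size: M MB" line
# format is an assumption based on typical numactl output.
numa_summary() {
  awk '
    /^available:/ { nodes = $2 }
    /^node [0-9]+ size:/ { printf "node %s: %s MB\n", $2, $4 }
    END { printf "nodes: %s\n", nodes }
  '
}

# Demo with captured sample output from a two-node partition:
sample='available: 2 nodes (0,8)
node 0 size: 130000 MB
node 8 size: 130000 MB'
printf '%s\n' "$sample" | numa_summary
# -> node 0: 130000 MB
# -> node 8: 130000 MB
# -> nodes: 2
```

On a live partition this would be run as `numactl --hardware | numa_summary`; a node count above 1 or a lopsided memory split flags the partition for reconfiguration.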

In a non-production development environment where the server can be powered off, the following steps reconfigure the memory assignment of all the partitions on the server for the most optimal placement:

1. Make any profile updates that you need to make to the partitions.

2. Power off all the defined partitions.


3. Activate (power on) all the partitions specified in the current configuration with the necessary attributes. Each partition needs to be activated at least to the SMS prompt.

4. After all partitions are simultaneously activated with the updated profiles, power off all the partitions.

5. Power off the server.

6. Power on the server.

7. Activate the partitions from the management console. Order of activation is not important.

In a production environment, other workloads and users often run on the same server, which makes powering off the server impossible. The Dynamic Platform Optimizer (DPO) is a PowerVM function that dynamically optimizes the resources assigned to active and inactive partitions to minimize these NUMA effects without interrupting the availability of the existing workloads.

One way to determine whether a DPO operation should be initiated is to use the lsmemopt Hardware Management Console (HMC) command-line interface (CLI) command. Its output score represents how close a partition, or the server as a whole, is to the optimal assignment of resources. The optmem HMC CLI command initiates a DPO operation on the server. Both commands also accept partition exclusion and partition ordering parameters. Detailed DPO documentation is available at the following link:

https://community.ibm.com/community/user/power/blogs/pete-heyrman1/2020/06/16/powervm-dynamic-platform-optimizer
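The lsmemopt score can feed a simple go/no-go check before invoking optmem. This is a sketch only: the `curr_sys_score` field name is an assumption, so verify it against the lsmemopt output on your HMC release. Scores run from 0 to 100, where 100 means fully optimal placement:

```shell
# Sketch: turn an lsmemopt affinity score into a go/no-go decision for
# optmem. The "curr_sys_score=NN" field name is an assumption - check
# the lsmemopt output format on your HMC release before relying on it.
dpo_decision() {
  threshold=$1
  sed -n 's/.*curr_sys_score=\([0-9][0-9]*\).*/\1/p' |
    awk -v t="$threshold" '{ print (($1 + 0) < (t + 0) ? "run optmem" : "affinity ok") }'
}

# Demo with captured sample lines (field names assumed, not verified):
echo 'curr_sys_score=72,predicted_sys_score=98' | dpo_decision 90   # -> run optmem
echo 'curr_sys_score=97,predicted_sys_score=99' | dpo_decision 90   # -> affinity ok
```

On a live system the input would come from the lsmemopt command run on the HMC; the threshold is a site policy choice, not a SAS or IBM recommendation.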

3.2.4 Network tuning in an MPP environment

When there is significant internode network activity in an MPP environment, increasing the maximum transmission unit (MTU) size from the default of 1500 to 9000 (jumbo frames) can reduce the RX error count and improve network performance. Enabling the larger MTU per port on the network switch might also be required.

To change the MTU size, use the /usr/sbin/ifconfig command as follows on each node:

ifconfig <Interface> mtu 9000 up

For example:

ifconfig enp1s0f0 mtu 9000 up

Test MTU JumboFrames support between the nodes in the internode network as follows:

ping -M do -s 8972 <destination IP>

If the MTU is supported and enabled on both the client and the destination, the ping output is as follows:

PING xxx.xxx.xxx.xxx (xxx.xxx.xxx.xxx): 8184 data bytes
8192 bytes from xxx.xxx.xxx.xxx: icmp_seq=0 ttl=128 time=0.714 ms

More MTU test information is available at the following link:

https://access.redhat.com/solutions/2440411
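The 8972-byte payload in the ping test above is not arbitrary: it is the MTU minus the 20-byte IPv4 header and the 8-byte ICMP header. A one-line helper makes the arithmetic explicit:

```shell
# The ping payload that exactly fills a given MTU: subtract the 20-byte
# IPv4 header and the 8-byte ICMP header from the frame's IP MTU.
ping_payload() {
  echo $(( $1 - 20 - 8 ))
}

ping_payload 9000   # -> 8972 (jumbo frames)
ping_payload 1500   # -> 1472 (default MTU)
```

This is why `ping -M do -s 8972` is the largest non-fragmenting probe on a 9000-byte MTU path; a payload of 8973 would fail with "message too long".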

3.2.5 Large memory capacity and high data bandwidth

The POWER9 processor has exceptional cache, memory capacity, and interconnect bandwidth that make it an excellent environment for SAS Viya's in-memory analytics.


The IBM Power System E980 server supports up to 64 TB of memory, and the IBM Power System AC922 server supports up to 2 TB. When equipped with a large memory capacity, these servers can support a much larger in-memory database than other platforms that are limited by memory capacity.

The POWER9 processor, with its enhanced cache hierarchy, has a total of 120 MB of L3 cache organized as twelve 20-way associative regions with advanced placement policies. The L3 caches are fed by 7 TBps of on-chip bandwidth. Along with its peak I/O bandwidth of 80 GBps, powered by PCIe Gen4 (with Gen3 compatibility), POWER9 provides a high data bandwidth environment.

3.2.6 Fast storage devices for optimal performance

The recommendation is to use fast storage options, such as SSD and NVMe devices, which provide I/O speeds that match the computing power of POWER9 processor-based servers. Installing multiple SSD or NVMe devices allows the devices to be accessed concurrently, increasing total I/O read and write throughput. Creating a striped logical volume is a good method for balancing I/O across multiple SSD or NVMe devices, as the following Linux command examples show:

� Linux command to create LVM physical volumes:

pvcreate /dev/nvme0n1 /dev/nvme1n1

� Linux command to create volume group:

vgcreate <striped_vol_group> /dev/nvme0n1 /dev/nvme1n1

� Linux command to create striped logical volume:

lvcreate -i2 -L<volume size> -n <striped_logical_volume> <striped_vol_group>

Refer to the following Red Hat documentation on striped logical volume implementation:

https://red.ht/2JuoyOQ
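The three commands above can be assembled into a single helper that derives the stripe count from the number of devices. A sketch only: the volume group and logical volume names are examples, and the commands are printed for review rather than executed:

```shell
# Sketch: assemble the striped-volume command sequence for a list of NVMe
# devices. The stripe count (-i) is derived from the number of devices.
# Names here are examples; echo is used so the sequence can be reviewed
# before it is run against real devices.
striped_lv_commands() {
  vg=$1; lv=$2; size=$3; shift 3
  echo "pvcreate $*"
  echo "vgcreate $vg $*"
  echo "lvcreate -i$# -L$size -n $lv $vg"
}

striped_lv_commands sasdata_vg sasdata_lv 500G /dev/nvme0n1 /dev/nvme1n1
# -> pvcreate /dev/nvme0n1 /dev/nvme1n1
# -> vgcreate sasdata_vg /dev/nvme0n1 /dev/nvme1n1
# -> lvcreate -i2 -L500G -n sasdata_lv sasdata_vg
```

Piping the output through `sh` (after review) would execute the sequence; adding a third device automatically raises the stripe count to -i3.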


Related publications

The publications listed in this section are considered particularly suitable for a more detailed discussion of the topics covered in this book.

IBM Redbooks

The following IBM Redbooks publications provide additional information about the topics in this document. Note that some publications referenced in this list might be available in softcopy only.

� IBM Power System AC922 Introduction and Technical Overview, REDP-5472

� IBM Power System E980: Technical Overview and Introduction, REDP-5510

� IBM Power System E950: Technical Overview and Introduction, REDP-5509

� IBM FlashSystem 9200 and 9100 Best Practices and Performance Guidelines, SG24-8448

� Introduction Guide to the IBM Elastic Storage System, REDP-5253

You can search for, view, download, or order these documents and other Redbooks, Redpapers, web docs, drafts, and additional materials at the following website:

ibm.com/redbooks

Online resources

These websites are also relevant as further information sources:

� SAS Viya Deployment Guides

https://support.sas.com/en/documentation/install-center/viya/deployment-guides.html

� SAS Viya on IBM Power Systems

https://www.ibm.com/it-infrastructure/power/capabilities/sas-viya

� IBM Redbooks highlighting POWER9 processor-based technology

http://www.redbooks.ibm.com/Redbooks.nsf/pages/power9?Open

Help from IBM

IBM Support and downloads

ibm.com/support

IBM Global Services

ibm.com/services

