
EMC® Service Assurance Suite® 9.5
SolutionPack Performance and Scalability Guidelines

September 2017

Abstract

The performance and scalability guidelines outlined here are based on tests run on EMC M&R configured in a production-like environment. Both live and simulated configuration items were used to measure the performance and scalability of EMC M&R.


Copyright © 2008-2017 Dell Inc. and its subsidiaries. All rights reserved. Published in the USA.

Published September 2017

Dell believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

The information in this publication is provided "as is." Dell makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose. Use, copying, and distribution of any Dell software described in this publication requires an applicable software license.

Dell, EMC, and other trademarks are trademarks of Dell Inc. or its subsidiaries. Other trademarks may be the property of their respective owners.

EMC Corporation
Hopkinton, Massachusetts 01748-9103
1-508-435-1000    In North America 1-866-464-7381
www.EMC.com

302-003-649 Rev 01


Table of Contents

1 Overview
2 Audience
3 Purpose
4 Infrastructure planning guidelines
5 Deployment Planner tool
    I. Resource description and specification of deployment types
    II. Guideline P2 - Increase default disk space
6 SAS SolutionPacks Supported for Deployment
    6.1 SolutionPack for EMC Smarts
        I. Deployment Planner Inputs
        II. Steps to obtain Managed Ports and Interfaces
        III. Comparison of 9.4.2 vs 9.5
        IV. Reference
    6.2 SolutionPack for Optical Wavelength Services
        I. Deployment Planner Inputs
        II. Comparison of 9.4.2 vs 9.5
    6.3 SolutionPack for Huawei iManager M2000
        I. Deployment Planner Inputs
        II. Comparison of 9.4.2 vs 9.5
        III. Reference
    6.4 SolutionPack for EMC Network Configuration Manager
        I. Sizing Inputs
        II. Comparison of 9.4.2 vs 9.5
        III. Reference
    6.5 SolutionPack for Transaction
        I. Deployment Planner Inputs
        II. Comparison of 9.4.2 vs 9.5
    6.6 SolutionPack for Traffic Flows
        I. Sizing Inputs
    6.7 SolutionPack for CUCM
        I. Test Bed Details
        II. CDR/CMR report processing and report rendering results


1 Overview

The guidelines for optimized performance and scalability cover planning of the hardware and virtual environment, disk space, network latency, and deployment and configuration.

Planning guidelines help determine the right ESX hardware and the number of virtual machines needed for a specific data center environment.

Disk space guidelines help prepare and configure datastore capacity before EMC M&R deployment.

Network latency guidelines help identify the proximity and placement of EMC M&R virtual machines with respect to the configuration items.

Deployment and configuration guidelines assist with deployment and post-deployment.

EMC M&R is fully deployable on non-ESX hosts. EMC M&R Performance and Scalability Guidelines do not address non-ESX host deployment scenarios.

EMC M&R Performance and Scalability Guidelines do not address all of the SolutionPacks contained in EMC M&R.

2 Audience

This article is for EMC M&R installers, administrators, and anyone else who manages the EMC M&R application.

3 Purpose

After reading this article, you will have guidelines for configuring EMC M&R for optimal performance based on the scale of your environment.

4 Infrastructure planning guidelines

Use these guidelines to plan the infrastructure, such as the right ESX hardware and the number of virtual machines needed, along with the other details described in the tables, to ensure optimal performance and scalability.

Note: Any SolutionPack that generates more than 5M metrics should have an exclusive EMC M&R deployment.


5 Deployment Planner tool

The EMC M&R Deployment Planner tool can be used in conjunction with these guidelines to help scope specific environment requirements. Using known data about the environment, the tool estimates EMC M&R hardware needs, provides metric counts (with a growth factor), and includes pointers to performance and scalability guidelines specific to the environment.

Access the Deployment Planner tool on the Downloads for EMC M&R page at:

https://support.emc.com/products/31946_Service-Assurance-Suite/Tools/

I. Resource description and specification of deployment types

Deployment Type                           Purpose of VM                    vCPUs   Memory (GB)   Disk Space (GB)   VM Count
All-in-one (POC – Not for Production)     All components in a single VM    4       32            600               1
Standard Deployment (SA Suite 4-VM)       Frontend                         4       16            120               1
                                          Backend-Primary                  4       24            600 - 1000        1
                                          Backend-Secondary                4       16            1,200 - 2,650     1
                                          Collector                        4       16            120               1
Scale-out Deployment                      Additional Backend               4       16            1,200 - 2,650     1
(Requires Standard Deployment)            Additional Collector             4       8 - 16        120               1

Notes:

o It is recommended to remove the Generic RSC and Generic SNMP Collector Managers from the Additional Collectors if no hosts or SNMP devices are being discovered, to reduce the Collector footprint.

o The Frontend has been tested for 10 concurrent users.

o The disk space specification in this table assumes default retention policies.
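For scripted sizing checks, the specifications above can be captured as data. The following Python sketch is illustrative only (the structure and field names are ours, not product terminology); it encodes the standard deployment row values from the table and applies the 5M-metric note from section 4:

    # Illustrative sketch: standard deployment specifications encoded as data,
    # plus the 5M-metric rule from section 4. Values copied from the table above.
    STANDARD_DEPLOYMENT = {
        "Frontend":          {"vcpus": 4, "memory_gb": 16, "disk_gb": (120, 120)},
        "Backend-Primary":   {"vcpus": 4, "memory_gb": 24, "disk_gb": (600, 1000)},
        "Backend-Secondary": {"vcpus": 4, "memory_gb": 16, "disk_gb": (1200, 2650)},
        "Collector":         {"vcpus": 4, "memory_gb": 16, "disk_gb": (120, 120)},
    }

    def needs_exclusive_deployment(metrics):
        # Any SolutionPack generating more than 5M metrics should have an
        # exclusive EMC M&R deployment (section 4 note).
        return metrics > 5_000_000

    print(STANDARD_DEPLOYMENT["Backend-Primary"])
    print(needs_exclusive_deployment(6_200_000))  # True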

II. Guideline P2 - Increase default disk space

After deploying the EMC M&R standard deployment (SA Suite 4-VM), the disk space must be increased on both the primary and secondary backends. The default disk space on the primary and secondary backends is 150 GB.


The default disk space on the Frontend is 150 GB. Increase the disk space on the Frontend as well.

6 SAS SolutionPacks Supported for Deployment

The following SAS SolutionPacks are supported by the Deployment Planner:

o SolutionPack for EMC Smarts

o SolutionPack for Optical Wavelength Services

o SolutionPack for Huawei iManager M2000

o SolutionPack for Network Configuration Manager (NCM)

o SolutionPack for Transaction

o SolutionPack for Traffic Flows

o SolutionPack for Cisco Unified Communication Manager (CUCM)

6.1 SolutionPack for EMC Smarts

I. Deployment Planner Inputs

Input Field                    Description
Devices                        Total number of Devices
Performance Instrumentation    Total number of Managed Ports and Interfaces with Performance Instrumentation
Fault Instrumentation          Total number of Managed Ports and Interfaces with Fault Instrumentation

II. Steps to obtain Managed Ports and Interfaces

Execute the following command to obtain the total number of Fault and Performance Instrumentation per IP Domain Manager. Repeat this for all the IP Domain Managers and provide the total across domains as the input.

# ./sm_tpmgr -s INCHARGE-AM-PM --sizes | grep "Total Number of NetworkAdapters"

Output: Total Number of NetworkAdapters: 20000 [20000/20000]

Note: In the square brackets, the first number is the Fault Instrumentation count and the second is the Performance Instrumentation count.
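When there are several IP Domain Managers, the per-domain outputs can be summed with a short script. The following Python sketch is illustrative only: it assumes it is run from the directory containing sm_tpmgr (as in the command above) and that the domain manager names in the list are replaced with your own.

    # Illustrative sketch: sum Fault and Performance Instrumentation across IP Domain Managers.
    import re
    import subprocess

    domain_managers = ["INCHARGE-AM-PM"]  # example name from the command above; replace with your own

    total_fault = 0
    total_perf = 0
    for dm in domain_managers:
        out = subprocess.run(["./sm_tpmgr", "-s", dm, "--sizes"],
                             capture_output=True, text=True, check=True).stdout
        # Expected line: "Total Number of NetworkAdapters: 20000 [20000/20000]"
        m = re.search(r"Total Number of NetworkAdapters:\s*\d+\s*\[(\d+)/(\d+)\]", out)
        if m:
            total_fault += int(m.group(1))  # first bracketed number: Fault
            total_perf += int(m.group(2))   # second bracketed number: Performance

    print("Fault Instrumentation total:", total_fault)
    print("Performance Instrumentation total:", total_perf)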


III. Comparison of 9.4.2 vs 9.5

Smarts SP                        9.4.2    9.5
Maximum Collector Instances      2        2
Maximum Metrics per collector    2M       2M

IV. Reference

The following sizing sheets should be used to size the Smarts Domain Managers for your environment:

EMC Smarts IP Version 9.5 Performance Benchmarking and Sizing Guidelines

EMC Smarts SAM Version 9.5 Performance Benchmarking and Sizing Guidelines

EMC Smarts ESM Version 9.5 Performance Benchmarking and Sizing Guidelines

6.2 SolutionPack for Optical Wavelength Services

I. Deployment Planner Inputs

Input Field                    Description
Optical Network Elements       Total number of Optical Network Elements
Equipments                     Average number of Equipments per Optical Network Element
Physical Termination Points    Average number of Physical Termination Points per Optical Network Element
Subnetwork Connections         Total number of Subnetwork Connections (SNC)
Topological Links              Total number of Topological Links

Example: An Optical Network Element could have ~600 Physical Termination Points; the input here is #OpticalNetworkElements * Avg. #PhysicalTerminationPoints. An Optical Network Element could have ~50 Equipments; the input here is #OpticalNetworkElements * Avg. #Equipments.
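As a rough illustration of how these inputs are derived, the following Python sketch multiplies the element count by the per-element averages from the example above; the element count used here is hypothetical:

    # Illustrative sketch: derive the Deployment Planner inputs for Optical Wavelength Services.
    # Replace the values below with the counts from your environment.
    optical_network_elements = 100          # hypothetical total Optical Network Elements
    avg_physical_termination_points = 600   # ~600 per element (example above)
    avg_equipments = 50                     # ~50 per element (example above)

    physical_termination_points_input = optical_network_elements * avg_physical_termination_points
    equipments_input = optical_network_elements * avg_equipments

    print("Physical Termination Points input:", physical_termination_points_input)  # 60000
    print("Equipments input:", equipments_input)                                    # 5000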


II. Comparison of 9.4.2 vs 9.5

Optical Wavelength Services SP    9.4.2    9.5
Maximum Collector Instances       3        3
Maximum Metrics per collector     2M       2M

6.3 SolutionPack for Huawei iManager M2000

I. Deployment Planner Inputs

Input Field    Description
Devices        Total number of Devices = Cell + Station + Controller
Counters       Average number of Counters per Counter-group

Example: "67109376" is one of the Counter-groups, and "VS.RAB.AbnormRel.CS" and "VS.RAB.NormRel.CS" are 2 Counters that are part of this group. Compute the average number of Counters across all the Counter-groups and provide this average as the input to "# Counters" in the Deployment Planner.
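A minimal Python sketch of the averaging described above, assuming the Counters have already been listed per Counter-group (the group ID and Counter names shown are the ones from the example; the structure itself is ours):

    # Illustrative sketch: average number of Counters per Counter-group.
    counter_groups = {
        "67109376": ["VS.RAB.AbnormRel.CS", "VS.RAB.NormRel.CS"],  # example from the text
        # "other-group-id": [...its counters...],
    }

    avg_counters = sum(len(counters) for counters in counter_groups.values()) / len(counter_groups)
    print("Average Counters per Counter-group (input to '# Counters'):", avg_counters)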

II. Comparison of 9.4.2 vs 9.5

Huawei iManager M2000 SP         9.4.2    9.5
Maximum Collector Instances      2        2
Maximum Metrics per collector    2M       2M

III. Reference

1. For hard disk space usage, refer to SolutionPack for Huawei iManager M2000 > Performing post-installation tasks > point 1.

6.4 SolutionPack for EMC Network Configuration Manager

I. Sizing Inputs

Input Field    Description
Devices        Total number of Devices
Tests          Total number of Tests = Total number of policies * Standards per policy * Tests per standard.
               Example: 10 policies, with 10 Standards associated with each policy and 5 Tests associated with each Standard.
Jobs           Total number of Jobs data
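Applying the formula to the example above gives 10 * 10 * 5 = 500 Tests. A minimal Python sketch of the calculation, using the numbers from the example:

    # Illustrative sketch: Total Tests = policies * standards per policy * tests per standard.
    policies = 10              # example values from the table above
    standards_per_policy = 10
    tests_per_standard = 5

    total_tests = policies * standards_per_policy * tests_per_standard
    print("Total number of Tests input:", total_tests)  # 500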


II. Comparison of 9.4.2 vs 9.5

Network Configuration Manager SP    9.4.2    9.5
Maximum Collector Instances         2*       2*
Maximum Metrics per collector       1.7M     1.7M

* The two collector manager instances are for Compliance and Jobs (jobs + inventory), respectively.

The following are the deployment recommendations for the NCM SolutionPack v2.4:

Option 1 (recommended for <3.5M Compliance and Jobs data):

o M&R Platform + NCM SolutionPack -> Single VM (32 GB, 4 vCPU)

o NCM Distributed/Combo

o Heap size on the collector manager instance increased to 8 GB

o Hard disk space increased to 600 GB

o System instability was observed beyond 4M with the single-VM configuration (high memory/CPU utilization, backend and UI unresponsiveness)

Option 2 (recommended for >3.5M and <7.8M Compliance and Jobs data):

o M&R Platform + NCM SolutionPack -> Distributed (4 VMs):

   1 VM for Frontend (16 GB, 4 vCPU)

   1 VM for Collector (24 GB, 4 vCPU)

   1 VM for Backend Database (24 GB, 4 vCPU)

   1 VM for Additional Backend Database (16 GB, 4 vCPU)

o NCM Distributed

o Heap size on the collector manager instances, load balancer, Frontend, and APG backends (apg, apg1, apg2, apg3, apg4) increased to 8 GB

o Hard disk space on all 4 VMs increased to 600 GB

o System instability was observed beyond 7.8M with the distributed VM configuration (high memory/CPU utilization, backend and UI unresponsiveness)

III. Reference

The following sizing sheet should be used to size NCM Core for your environment:

EMC Network Configuration Manager Version 9.5 Performance Benchmarking and Sizing Guidelines


6.5 SolutionPack for Transaction

I. Deployment Planner Inputs

Input Field               Description
HTTP Servers              Total number of HTTP & HTTP HEAD Servers
Authentication Servers    Total number of LDAP Servers
Other Servers             Total number of DNS, FTP, SFTP, TCP CONNECT, RADIUS, SQL, and SCRIPT Servers

II. Comparison of 9.4.2 vs 9.5

Transaction SP                   9.4.2    9.5
Maximum Collector Instances      3        3
Maximum Metrics per collector    40000    40000

6.6 SolutionPack for Traffic Flows

I. Sizing Inputs

Input Field           Description
Devices               Total number of Devices
Ports & Interfaces    Total number of Ports & Interfaces
Flows per second      Total number of Flows per second

The following are the deployment recommendations for the Traffic Flows SolutionPack v3.1:

Option 1: Combo/Single-Server Deployment

o M&R Platform + Traffic Flows SolutionPack -> Single VM (32 GB, 4 vCPU)

o Heap size on the event processing manager instance increased to 8 GB

o 3 GHz dual-processor server

o 600 GB (OS and application) disk


o Separate 1 TB Flow storage

Up to 10K Flows per second (per CPU core) are processed in the single-VM configuration.

Option 2: Distributed Deployment

o M&R Platform + Traffic Flows SolutionPack -> Distributed (3 VMs):

   1 VM for Frontend (32 GB, 4 vCPU)

   1 VM for Collector (32 GB, 4 vCPU)

   1 VM for Backend Database (32 GB, 4 vCPU)

o Heap size on the event processing manager instances, Frontend, and APG backend increased to 8 GB

o 3 GHz dual-processor server

o 600 GB (OS and application) disk

o Separate 1 TB Flow storage

Up to 10K Flows per second (per CPU core) per event processing manager are processed on a collector VM. There can be only one event processing manager per collector VM.
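Based on the stated rate of up to 10K Flows per second per CPU core and one event processing manager per collector VM, the number of collector VMs for a given flow rate can be roughly estimated. The following Python sketch is illustrative only; the target flow rate is hypothetical and the 4-core figure matches the collector VM specification above:

    # Illustrative sketch: estimate collector VMs for a target flow rate.
    # Assumes ~10K flows/sec per CPU core and one event processing manager per collector VM.
    import math

    flows_per_second = 150_000     # hypothetical environment total
    flows_per_core = 10_000        # stated processing rate per core
    cores_per_collector_vm = 4     # collector VM specification above (4 vCPU)

    collector_vms = math.ceil(flows_per_second / (flows_per_core * cores_per_collector_vm))
    print("Estimated collector VMs:", collector_vms)  # 4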

6.7 SolutionPack for CUCM

I. Test Bed Details:

The setup is a single VM with the CentOS 6.8 operating system, 200 GB of disk space, 32 GB of dedicated RAM, and 4 CPUs per host. The readings below show the CDR/CMR records that were processed and their report rendering times.

II. CDR/CMR report processing and report rendering results



Scenarios                                                        CUCM SP v1.1.1 (6.5u4)   CUCM SP v2.1 (6.7u1)   CUCM SP v3.0 (6.8u2)
Time taken for Cisco-VoIP-CUCM Collector to process 10K events   2.750s                   2.75s                  8s
Number of events                                                 170K                     170K                   170K
Time taken for rendering Call OverView Report                    0.303s                   0.356s                 0.550s
Time taken for rendering Call Duration OverView Report           0.795s                   0.549s                 0.470s
Time taken for rendering QoS Report                              0.393s                   0.409s                 1.018s
Time taken for rendering Voice Quality by Streams                2.897s                   0.377s                 1.760s
Time taken for rendering Calls Statistics                        2.897s                   1.167s                 2.5s

Note: The average number of calls that are incrementally updated on the front end is 7200 calls per hour.

Note: With SolutionPack for CUCM v1.1.1 on 6.5u4, high CPU utilization and backend and UI unresponsiveness were observed beyond 250K events.