
HP Intelligent Management Center Deployment and Hardware Configuration Schemes

Document version: V5.10


Legal and notice information

© Copyright 2011 Hewlett-Packard Development Company, L.P.

No part of this documentation may be reproduced or transmitted in any form or by any means without prior written consent of Hewlett-Packard Development Company, L.P.

The information contained herein is subject to change without notice.

HEWLETT-PACKARD COMPANY MAKES NO WARRANTY OF ANY KIND WITH REGARD TO THIS MATERIAL, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. Hewlett-Packard shall not be liable for errors contained herein or for incidental or consequential damages in connection with the furnishing, performance, or use of this material.

The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.


Contents

Legal and notice information
Hardware requirements
    Overall deployment requirements
    iMC platform deployment scheme
    NTA/UBA deployment scheme
        NTA deployment scheme (for processing NetStreamV5 or NetFlowV5 logs)
        NTA deployment scheme (for processing sFlow logs)
        UBA deployment scheme (for processing NetStreamV5, NetFlowV5, NAT, or flow logs)
        Deploying the NTA and UBA on one server (for processing NetStreamV5 or NetFlowV5 logs)
        DIG collectors deployment scheme
        NTA deployment scheme (in cooperation with DIG collectors)
        UBA deployment scheme (in cooperation with DIG collectors)
        Deploying the NTA and UBA on one server (working with DIG collectors)
    SHM deployment scheme
    APM deployment scheme (for Windows only)
    SOM deployment scheme
    UAM/CAMS/EAD/DAM deployment scheme
    TAM deployment scheme
    MVM deployment scheme
    WSM deployment scheme
    EPM deployment scheme
    EoCM deployment scheme
    VSM deployment scheme
    IVM deployment scheme
    BIMS deployment scheme
Software requirements
    Operating system (x86-64-bit operating systems preferred)
        Windows
        Linux
    Databases
        Databases embedded in the iMC
        Oracle 11g release 1 and release 2 (on Linux only)
        SQL server 2005 SP3
        SQL server 2008 SP2
        SQL server 2008 R2
    Web browser


iMC Deployment and Hardware Configuration Schemes

Hardware requirements

Overall deployment requirements

Intelligent Management Center (iMC) provides an intelligent management platform that includes resource management, alarm management, performance management, ACLM, iCC, syslog management, and VLAN management, together with service components such as user access management, EAD security policy, CAMS, TAM, MPLS VPN management, and WSM wireless service management. Depending on service requirements, users can deploy the desired service components on top of the iMC platform.

The iNode intelligent client includes the iNode management center software and the iNode intelligent client software. They are installed independently of the iMC platform.

To ensure normal iMC system operation and service processing that meets the basic running environment requirements, iMC distributed deployment and convergence management have the following server requirements:

| iMC component | Server requirements |
| --- | --- |
| iMC Platform | Standalone server (primary server). |
| iMC SHM | Standalone server is recommended, or the same server as the iMC platform if the SHM module manages fewer than 5000 NQA instances. An NQA instance is a test instance between the NQA client and NQA server. |
| iMC UAM | Standalone server is recommended. If the iMC platform manages fewer than 100 devices, UAM can be deployed on the same server as the iMC platform and can manage the same number of users as when it is deployed on a standalone server. You can deploy the user self-service component on the same server as UAM when UAM manages no more than 10000 users; if UAM manages more, deploy the user self-service component on another computer. |
| iMC EAD | HP recommends installing EAD and UAM on the same server. |
| iMC DAM | HP recommends installing DAM and EAD on the same server. |
| iMC CAMS | HP recommends installing CAMS and UAM on the same server. |
| iMC TAM | Standalone server is recommended. If TAM manages fewer than 5000 devices, TAM can be deployed on the same server as the iMC platform or UAM/EAD. |
| iMC MVM | Standalone server is recommended, or the same server as the iMC platform if the number of MVM audit link instances to be managed is less than 1000. |
| iMC NTA | Standalone server is recommended. If the NTA processes more than 200 Mbps of traffic, deploy the NTA on a separate server. Multiple NTA components can be deployed on separate servers. |
| iMC UBA | Standalone server is recommended. If the UBA processes more than 200 Mbps of traffic, deploy the UBA on a separate server. Multiple UBA components can be deployed on separate servers. |
| iMC WSM | Standalone server is recommended, or the same server as the iMC platform if the WSM manages fewer than 100 APs. |
| iMC EPM | Can be installed on the same server as the iMC platform. When the iMC platform manages more than 200 devices, use a higher-level server configuration than the recommended one. |
| iMC EoCM | Standalone server is recommended, or the same server as the iMC platform if the EoCM manages fewer than 2000 CNUs. |
| iMC VSM | Standalone server is recommended, or the same server as the iMC platform if the VSM manages fewer than 5000 devices. |
| iMC IVM | Standalone server is recommended, or the same server as the iMC platform if the IVM manages fewer than 1000 devices. |
| iMC BIMS | Standalone server is recommended, or the same server as the iMC platform if BIMS manages fewer than 100 CPEs. |
| iMC QoSM | Can be installed on the same server as the iMC platform. |
| iMC APM | Standalone server is recommended, or the same server as the iMC platform if APM manages fewer than 100 monitors. |
| iMC SOM | Standalone server is recommended, or the same server as the iMC platform if the number of online SOM users is less than 50 and the number of equivalent CI nodes (calculated from the number of devices) is less than 500. |
| iNode Management Center | Standalone server. |
| iNode Intelligent Client | Authentication client installed on the desktop terminal. |

If the user does not need the iMC platform to provide network management but only needs service management through the UAM/EAD, NTA, and UBA, any of these components can be installed on the same server as the iMC platform. For a system in which the UAM/EAD manages more than 20000 users, HP recommends installing the user self-service component and Portal Web component on another server in distributed mode to reduce the burden on the primary server.

The following sections detail the server configuration requirements of the iMC platform and each component.

Caution: Some 32-bit operating systems support less than 4 GB of memory. Therefore, select memory sizes based on operating system requirements. The following table lists the maximum memory sizes supported by some common Windows operating systems.

The data provided in the following table is for reference only. For the memory sizes supported by different Windows operating systems, see the latest Microsoft documentation.


| Operating system | Architecture | Edition | Maximum memory |
| --- | --- | --- | --- |
| Windows Server 2003 SP2 / Windows Server 2003 R2 SP2 | 32-bit | Standard Edition | 4 GB |
| Windows Server 2003 SP2 / Windows Server 2003 R2 SP2 | 32-bit | Enterprise Edition | 32 GB |
| Windows Server 2003 SP2 / Windows Server 2003 R2 SP2 | 32-bit | Datacenter Edition | 64 GB |
| Windows Server 2003 SP2 / Windows Server 2003 R2 SP2 | 64-bit | Standard Edition | 32 GB |
| Windows Server 2003 SP2 / Windows Server 2003 R2 SP2 | 64-bit | Enterprise Edition | 1 TB |
| Windows Server 2003 SP2 / Windows Server 2003 R2 SP2 | 64-bit | Datacenter Edition | 1 TB |
| Windows Server 2008 SP2 / Windows Server 2008 R2 SP1 | 32-bit | Standard Edition | 4 GB |
| Windows Server 2008 SP2 / Windows Server 2008 R2 SP1 | 32-bit | Enterprise Edition | 64 GB |
| Windows Server 2008 SP2 / Windows Server 2008 R2 SP1 | 32-bit | Datacenter Edition | 64 GB |
| Windows Server 2008 SP2 / Windows Server 2008 R2 SP1 | 64-bit | Standard Edition | 32 GB |
| Windows Server 2008 SP2 / Windows Server 2008 R2 SP1 | 64-bit | Enterprise Edition | 2 TB |
| Windows Server 2008 SP2 / Windows Server 2008 R2 SP1 | 64-bit | Datacenter Edition | 2 TB |

iMC platform deployment scheme

Before selecting one of the deployment schemes described in this section, understand the following:

1. In iMC resource components, device states are polled every minute by default and device configurations are polled every two hours by default. If the system manages more than 3000 devices, increase the state polling interval and the configuration polling interval in proportion to the device count: allow one minute of state polling and two hours of configuration polling for every 3000 devices. For example, if the system manages 6000 devices, increase the state polling interval to two minutes and the configuration polling interval to four hours. Alternatively, you can configure different polling intervals depending on the importance of the managed devices. The iMC provides an interface for configuring state and configuration polling intervals in batches: log on to the iMC, click Batch Operation under the Resource tab, and set the device polling parameters. (A sizing sketch follows this list.)

2. Collection unit: A collection unit is the equivalent of one performance collection instance polled every five minutes; each collection instance contributes (5 minutes / collection interval in minutes) collection units. A collection instance is one performance index collected on one object. For example, collecting the CPU utilization ratio of a device is one instance, and so is collecting the input rate of one interface. Suppose a device has one CPU and one memory module, the input and output rates of ten interfaces must be monitored, and the device unreachability ratio and response time must also be monitored. The CPU accounts for one instance, as does the memory module; the input rates of ten interfaces account for ten instances, as do the output rates; unreachability ratio and response time account for one instance each. The device therefore occupies 24 collection instances. If every instance is collected at a 5-minute interval, the device occupies 24 collection units; at a 10-minute interval, it occupies 12 collection units. (A worked example follows this list.)

3. To improve the input/output (I/O) performance, follow these guidelines:


If the number of collection units is from 100 K to 200 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of collection units is from 200 K to 300 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of collection units is from 300 K to 400 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. When you use more than four disks, HP recommends setting the RAID level to 0+1.

4. imcInstallDir: Installation directory of the iMC.

5. imcDataDir: Directory that stores data when the iMC is deployed.

6. Java heap size: Maximum memory size to be used by Java processes on the iMC web server. On 32-bit operating systems it can be set to at most 1 GB; if a Java heap size larger than 1 GB is needed, use a 64-bit operating system. Versions earlier than iMC PLAT 3.20R2606P09 support only 32-bit operating systems. For iMC PLAT 3.20R2606P09 or later, the default Java heap size is 2 GB on a 64-bit Windows operating system and 512 MB on other operating systems. Follow these steps to change the Java heap size:

a. Stop the iMC through the intelligent deployment monitoring agent.

b. Run the script installation directory\client\bin\setmem.bat 1024 in a Windows environment or installation directory/client/bin/setmem.sh 1024 in a Linux environment to set the maximum Java heap size to 1024 MB (valid values range from 256 to 1024). Make sure the configured Java heap size is between 1/4 and 1/3 of the physical server memory.

c. Restart the iMC on the Monitor tab of the intelligent deployment monitoring agent.

7. Modify the memory option of SQL Server so that the maximum SQL Server memory is half of the physical server memory.

8. When multiple components are deployed in distributed mode, the CPU/memory configuration of the primary server should meet the highest requirement of these components.

9. The resource management and alarm management modules of the iMC platform are the foundation of iMC, and many components depend on these two modules. HP recommends deploying them on the primary server.

10. The number of cores mentioned below refers to the number of physical cores, not the number of virtual cores created by hyper-threading.
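The polling-interval rule in guideline 1 amounts to a simple proportional calculation. The following is a minimal sketch of that rule, not part of iMC; the function name and the baseline constants (3000 devices, 1-minute state polling, 2-hour configuration polling) simply restate the guideline above.

```python
import math

def recommended_polling_intervals(device_count):
    """Scale the default polling intervals by one step per 3000 managed devices,
    following the rule of thumb above (1 min state / 2 h config per 3000 devices)."""
    steps = max(1, math.ceil(device_count / 3000))
    state_poll_minutes = 1 * steps
    config_poll_hours = 2 * steps
    return state_poll_minutes, config_poll_hours

# 6000 devices -> 2-minute state polling and 4-hour configuration polling,
# matching the example in guideline 1.
print(recommended_polling_intervals(6000))  # (2, 4)
```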
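The collection-unit formula in guideline 2 (each instance contributes 5 / interval-in-minutes collection units) can be checked with a short sketch. This is an illustrative calculation only; the helper name is made up for the example.

```python
def collection_units(instance_count, interval_minutes):
    """Each instance collected every `interval_minutes` counts as
    5 / interval_minutes collection units (one unit = one instance at 5 min)."""
    return instance_count * 5 / interval_minutes

# The example device from guideline 2: 1 CPU + 1 memory module
# + 10 interface input rates + 10 interface output rates
# + unreachability ratio + response time = 24 instances.
instances = 1 + 1 + 10 + 10 + 1 + 1
print(collection_units(instances, 5))   # 24.0 units at a 5-minute interval
print(collection_units(instances, 10))  # 12.0 units at a 10-minute interval
```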

32-bit Windows environment

| Node count | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 4 GB | 512 MB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 4 GB | 512 MB | 3 GB | 60 GB |
| 200 to 500 | 0 to 10 K | 30 | 4-core CPU | 6 GB | 1 GB | 3 GB | 50 GB |
| 200 to 500 | 10 K to 100 K | 10 | 4-core CPU | 6 GB | 1 GB | 3 GB | 100 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started. This explanation of collection units applies to all of the following environments.

64-bit Windows environment (recommended)

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 4 GB | 2 GB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 4 GB | 2 GB | 3 GB | 60 GB |
| 200 to 1 K | 0 to 10 K | 30 | 4-core CPU | 8 GB | 2 GB | 3 GB | 50 GB |
| 200 to 1 K | 10 K to 100 K | 10 | 4-core CPU | 8 GB | 2 GB | 3 GB | 100 GB |
| 1 K to 2 K | 0 to 20 K | 30 | 6-core CPU | 12 GB | 4 GB | 4 GB | 60 GB |
| 1 K to 2 K | 20 K to 200 K | 10 | 6-core CPU | 12 GB | 4 GB | 4 GB | 200 GB |
| 2 K to 5 K | 0 to 30 K | 40 | 8-core CPU | 24 GB | 8 GB | 5 GB | 80 GB |
| 2 K to 5 K | 30 K to 300 K | 20 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 10 K | 0 to 40 K | 50 | 16-core CPU | 32 GB | 12 GB | 7 GB | 100 GB |
| 5 K to 10 K | 40 K to 400 K | 20 | 16-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |
| 10 K to 15 K | 0 to 40 K | 50 | 24-core CPU | 64 GB | 16 GB | 10 GB | 200 GB |
| 10 K to 15 K | 40 K to 400 K | 20 | 24-core CPU | 64 GB | 16 GB | 10 GB | 600 GB |

CAUTION:

To improve the I/O performance, follow these guidelines:

If the number of collection units is from 100 K to 200 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of collection units is from 200 K to 300 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of collection units is from 300 K to 400 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. If you use more than four disks, HP recommends setting the RAID level to 0+1.

32-bit Linux environment

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 6 GB | 512 MB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 6 GB | 512 MB | 3 GB | 60 GB |
| 200 to 500 | 0 to 10 K | 30 | 4-core CPU | 8 GB | 1 GB | 3 GB | 50 GB |
| 200 to 500 | 10 K to 100 K | 10 | 4-core CPU | 8 GB | 1 GB | 3 GB | 100 GB |

64-bit Linux environment (recommended)

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 6 GB | 2 GB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 6 GB | 2 GB | 3 GB | 60 GB |
| 200 to 1 K | 0 to 10 K | 30 | 4-core CPU | 12 GB | 4 GB | 3 GB | 50 GB |
| 200 to 1 K | 10 K to 100 K | 10 | 4-core CPU | 12 GB | 4 GB | 3 GB | 100 GB |
| 1 K to 2 K | 0 to 20 K | 30 | 6-core CPU | 16 GB | 6 GB | 4 GB | 60 GB |
| 1 K to 2 K | 20 K to 200 K | 10 | 6-core CPU | 16 GB | 6 GB | 4 GB | 200 GB |
| 2 K to 5 K | 0 to 30 K | 40 | 8-core CPU | 24 GB | 8 GB | 5 GB | 80 GB |
| 2 K to 5 K | 30 K to 300 K | 20 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 10 K | 0 to 40 K | 50 | 16-core CPU | 32 GB | 12 GB | 7 GB | 100 GB |
| 5 K to 10 K | 40 K to 400 K | 20 | 16-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |
| 10 K to 15 K | 0 to 40 K | 50 | 24-core CPU | 64 GB | 16 GB | 10 GB | 200 GB |
| 10 K to 15 K | 40 K to 400 K | 20 | 24-core CPU | 64 GB | 16 GB | 10 GB | 600 GB |

CAUTION:

To improve the I/O performance, follow these guidelines:

If the number of collection units is from 100 K to 200 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of collection units is from 200 K to 300 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of collection units is from 300 K to 400 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. If you use more than four disks, HP recommends setting the RAID level to 0+1.

GSM modem (optional)

The following models have been tested and verified to operate with iMC. For more specifications of the GSM modems, see the related product manuals:

WaveCom M2306B

WaveCom TS-WGC1 (Q2403A)

Wanxiang serial port GSM modem (DG-C1A)

Wanxiang USB GSM modem (DG-U1A)


Wanxiang USB mini GSM modem (DG-MINI)

WaveCom M1206B GSM modem (chip: 24PL)

WaveCom USB M1206B GSM modem (chip: Q24PL, Q2403A)

NTA/UBA deployment scheme

The specifications and system requirements provided in this section are for deploying the NTA or UBA on a single server. To improve performance, especially in a network where the amount of data is large, you can deploy the NTA or UBA on multiple servers for data analysis. The NTAs or UBAs then work in distributed mode.

The NTA can process NetStream, NetFlow, and sFlow data, and log data generated by DIG collectors. The UBA can process logs in NetStream, NetFlow, NAT, or Flow formats and logs generated by DIG collectors.

The amount of data processed by the NTA/UBA is large, and disk input/output (I/O) operations affect the writing of data into the databases. If the required disk space is more than 200 GB, install two or more disks and a RAID card with a cache of 256 MB or more. If the required disk space is more than 500 GB, install four or more disks and a RAID card with a cache of 512 MB or more. If the required disk space is more than 1 TB, install four or more disks and a RAID card with a cache of 1 GB or more. HP recommends setting the RAID level to 5, which requires three or more disks; when you use more than four disks, HP recommends setting the RAID level to 0+1.
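The disk and RAID-cache thresholds above can be summarized in a small lookup. The sketch below is illustrative only (the function and its return strings are not an HP tool), assuming the thresholds stated in the preceding paragraph.

```python
def raid_recommendation(required_disk_gb):
    """Map the required NTA/UBA disk space to the disk/RAID-cache guideline above."""
    if required_disk_gb > 1024:        # more than 1 TB
        return "4+ disks, RAID card cache >= 1 GB"
    if required_disk_gb > 500:
        return "4+ disks, RAID card cache >= 512 MB"
    if required_disk_gb > 200:
        return "2+ disks, RAID card cache >= 256 MB"
    return "no special RAID requirement from this guideline"

# RAID 5 requires three or more disks; with more than four disks HP suggests RAID 0+1.
print(raid_recommendation(700))  # 4+ disks, RAID card cache >= 512 MB
```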

NTA deployment scheme (for processing NetStreamV5 or NetFlowV5 logs)

| Number of managed interfaces | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 1 GE port or 10 FE ports | Windows | 2-core CPU | 4 GB | 50 GB |
| 3 GE ports or 30 FE ports | Windows | 4-core CPU | 6 GB | 100 GB |
| 6 GE ports or 60 FE ports | Windows | 6-core CPU | 8 GB | 200 GB |
| 10 GE ports or 100 FE ports | Windows | 8-core CPU | 12 GB | 300 GB |
| 1 GE port or 10 FE ports | Linux | 2-core CPU | 6 GB | 50 GB |
| 3 GE ports or 30 FE ports | Linux | 4-core CPU | 8 GB | 100 GB |
| 6 GE ports or 60 FE ports | Linux | 6-core CPU | 12 GB | 200 GB |
| 10 GE ports or 100 FE ports | Linux | 8-core CPU | 16 GB | 300 GB |

NTA deployment scheme (for processing sFlow logs)

The data in the following table assumes a sampling ratio of 20000:1 or higher. The management scale should decrease in proportion to the sampling ratio (a scaling example follows the table).


| Number of managed interfaces | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 100 | Windows | 2-core CPU | 4 GB | 100 GB |
| 200 | Windows | 4-core CPU | 6 GB | 200 GB |
| 100 | Linux | 2-core CPU | 6 GB | 100 GB |
| 200 | Linux | 4-core CPU | 8 GB | 200 GB |
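As a rough reading of the proportionality rule above (illustrative only; the function name is made up for the example): if the table supports 200 interfaces at a 20000:1 sampling ratio, a lower ratio supports proportionally fewer interfaces.

```python
def scaled_interface_capacity(table_capacity, sampling_ratio, base_ratio=20000):
    """Scale the tabulated interface count down in proportion to the sampling ratio."""
    return table_capacity * min(1.0, sampling_ratio / base_ratio)

# At a 10000:1 sampling ratio, a server sized for 200 interfaces handles about 100.
print(scaled_interface_capacity(200, 10000))  # 100.0
```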

UBA deployment scheme (for processing NetStreamV5, NetFlowV5, NAT, or flow logs)

| Number of managed interfaces | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 2 FE ports | Windows | 2-core CPU | 4 GB | 700 GB |
| 5 FE ports | Windows | 4-core CPU | 6 GB | 1.5 TB |
| 1 GE port or 10 FE ports | Windows | 6-core CPU | 8 GB | 3 TB |
| 2 GE ports or 20 FE ports | Windows | 8-core CPU | 12 GB | 6 TB |
| 2 FE ports | Linux | 2-core CPU | 6 GB | 700 GB |
| 5 FE ports | Linux | 4-core CPU | 8 GB | 1.5 TB |
| 1 GE port or 10 FE ports | Linux | 6-core CPU | 12 GB | 3 TB |
| 2 GE ports or 20 FE ports | Linux | 8-core CPU | 16 GB | 6 TB |

Deploying the NTA and UBA on one server (for processing NetStreamV5 or NetFlowV5 logs)

| Number of managed interfaces | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 2 FE ports | Windows | 2-core CPU | 4 GB | 700 GB |
| 5 FE ports | Windows | 4-core CPU | 6 GB | 1.5 TB |
| 1 GE port or 10 FE ports | Windows | 6-core CPU | 8 GB | 3 TB |
| 2 GE ports or 20 FE ports | Windows | 8-core CPU | 12 GB | 6 TB |
| 2 FE ports | Linux | 2-core CPU | 6 GB | 700 GB |
| 5 FE ports | Linux | 4-core CPU | 8 GB | 1.5 TB |
| 1 GE port or 10 FE ports | Linux | 6-core CPU | 12 GB | 3 TB |
| 2 GE ports or 20 FE ports | Linux | 8-core CPU | 16 GB | 6 TB |


DIG collectors deployment scheme

| Collected traffic | CPU (in a Linux Red Hat ES3/AS5 environment, main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- |
| 0 MB to 300 MB | 1-core CPU | 2 GB | 100 GB |
| 300 MB to 1 GB | 2-core CPU | 4 GB | 100 GB |

DIG collectors are capable of sampled statistics: they can perform dynamic sampling according to the system load, or a fixed sampling ratio can be set. When the amount of traffic sent to the probe exceeds 1 GB, DIG collectors sample the traffic.

NTA deployment scheme (in cooperation with DIG collectors)

| Corresponding DIG collected traffic | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 0 MB to 500 MB | Windows | 1-core CPU | 4 GB | 50 GB |
| 500 MB to 1 GB | Windows | 2-core CPU | 6 GB | 100 GB |
| 1 GB to 3 GB | Windows | 4-core CPU | 8 GB | 200 GB |
| 0 MB to 500 MB | Linux | 1-core CPU | 6 GB | 50 GB |
| 500 MB to 1 GB | Linux | 2-core CPU | 8 GB | 100 GB |
| 1 GB to 3 GB | Linux | 4-core CPU | 8 GB | 200 GB |

UBA deployment scheme (in cooperation with DIG collectors)

| Corresponding DIG collected traffic | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 0 MB to 300 MB | Windows | 1-core CPU | 4 GB | 700 GB |
| 300 MB to 500 MB | Windows | 2-core CPU | 6 GB | 1.5 TB |
| 500 MB to 1 GB | Windows | 4-core CPU | 8 GB | 3 TB |
| 0 MB to 300 MB | Linux | 1-core CPU | 6 GB | 700 GB |
| 300 MB to 500 MB | Linux | 2-core CPU | 8 GB | 1.5 TB |
| 500 MB to 1 GB | Linux | 4-core CPU | 8 GB | 3 TB |


Deploying the NTA and UBA on one server (working with DIG collectors)

| Corresponding DIG collected traffic | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 0 MB to 300 MB | Windows | 1-core CPU | 4 GB | 700 GB |
| 300 MB to 500 MB | Windows | 2-core CPU | 6 GB | 1.5 TB |
| 500 MB to 1 GB | Windows | 4-core CPU | 8 GB | 3 TB |
| 0 MB to 300 MB | Linux | 1-core CPU | 6 GB | 700 GB |
| 300 MB to 500 MB | Linux | 2-core CPU | 8 GB | 1.5 TB |
| 500 MB to 1 GB | Linux | 4-core CPU | 8 GB | 3 TB |

SHM deployment scheme

The following table shows different management scales based on the number of NQA instances.

| Number of NQA instances | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 10000 | Windows/Linux | 2-core CPU | 2 GB | 100 GB |
| 20000 | Windows/Linux | 4-core CPU | 4 GB | 200 GB |

APM deployment scheme (for Windows only)

The default collection interval for APM is 5 minutes, and the following deployment scheme is based on that default. If you change the collection interval, adjust the management scale accordingly; for example, if you change the collection interval to 10 minutes, you can double the management scale (a scaling example follows the table).

| Management scale | Operating system | CPU (main frequency ≥ 2.5 GHz) | Memory | Disk space |
| --- | --- | --- | --- | --- |
| 0 to 100 monitors | Windows/Linux | 2-core CPU | 4 GB | 10 GB |
| 100 to 500 monitors | Windows/Linux | 4-core CPU | 6 GB | 30 GB |
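The interval-scaling rule above is linear. A minimal sketch of it, with an illustrative function name:

```python
def apm_scaled_monitors(table_monitors, interval_minutes, default_interval=5):
    """Scale the tabulated monitor count in proportion to the collection interval."""
    return table_monitors * interval_minutes / default_interval

# Doubling the interval to 10 minutes doubles the supported management scale.
print(apm_scaled_monitors(500, 10))  # 1000.0
```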


SOM deployment scheme

32-bit Windows/Linux environment

| Max. online SOM users | CPU (main frequency ≥ 3 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- |
| 50 | 1-core CPU | 4 GB | 512 MB | 1 GB | 4 GB |
| 100 | 1-core CPU | 6 GB | 1 GB | 1 GB | 6 GB |

64-bit Windows/Linux environment

| Max. online SOM users | CPU (main frequency ≥ 3 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- |
| 100 | 1-core CPU | 4 GB | 1 GB | 1 GB | 4 GB |
| 200 | 1-core CPU | 4 GB | 1.5 GB | 1 GB | 6 GB |
| 500 | 2-core CPU | 6 GB | 2 GB | 1 GB | 8 GB |

UAM/CAMS/EAD/DAM deployment scheme

To get an appropriate UAM/CAMS/EAD/DAM deployment scheme for your network, use the tool "iMC UAM and EAD Deployment Scheme Recommender."

TAM deployment scheme

64-bit operating system (recommended)

| Number of managed devices | CPU (main frequency ≥ 2.6 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- |
| ≤ 5000 | 4-core CPU | 8 GB | 2 GB | 3 GB | 160 GB |
| ≤ 20000 | 8-core CPU | 16 GB | 4 GB | 3 GB | 320 GB |
| ≤ 100000 | 12-core CPU | 32 GB | 8 GB | 3 GB | 500 GB |


CAUTION:

To improve the input/output (I/O) performance, use a dual channel Ultra 320 SCSI controller or higher with 256 MB cache or higher, and set the RAID level to 0, 1, 1+0, or 5.

MVM deployment scheme

Follow these guidelines when deploying MVM:

1. Make sure that the number of VPN devices does not exceed the iMC platform license size. When the number of VPN devices is larger than the license size, manage the CEs as virtual CEs.

2. The MVM performance depends on the number of VPNs, in addition to the number of VPN devices and the number of CE-CE connections. It is a good practice to manage no more than 10 VPNs in one MVM. If you need to manage more VPNs, upgrade the hardware configuration to a higher level.

3. Deploy the MVM component on a different server from the iMC platform in distributed mode.

32-bit Windows environment

| Node count | CE-CE connections per VPN | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 500 | 0 to 200 | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 500 | 200 to 500 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

64-bit Windows environment (recommended)

| Node count | CE-CE connections per VPN | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 500 | 0 to 200 | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 500 | 200 to 500 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 500 to 1 K | 0 to 500 | 20 | 4-core CPU | 8 GB | 4 GB | 3 GB | 50 GB |
| 500 to 1 K | 500 to 1000 | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| 1 K to 2 K | 0 to 1000 | 20 | 6-core CPU | 16 GB | 6 GB | 4 GB | 60 GB |
| 1 K to 2 K | 1000 to 2000 | 10 | 6-core CPU | 16 GB | 6 GB | 4 GB | 200 GB |
| 2 K to 5 K | 0 to 2000 | 20 | 8-core CPU | 24 GB | 8 GB | 5 GB | 80 GB |
| 2 K to 5 K | 2000 to 5000 | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 10 K | 0 to 5000 | 20 | 12-core CPU | 32 GB | 12 GB | 7 GB | 100 GB |
| 5 K to 10 K | 5000 to 10000 | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |


CAUTION:

To improve the I/O performance, follow these guidelines:

If the number of CE-CE connections per VPN is from 1 K to 2 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of CE-CE connections per VPN is from 2 K to 5 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of CE-CE connections per VPN is from 5 K to 10 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. When you use more than four disks, HP recommends setting the RAID level to 0+1.

32-bit Linux environment

| Node count | CE-CE connections per VPN | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 500 | 0 to 200 | 20 | 2-core CPU | 6 GB | 512 MB | 3 GB | 30 GB |
| 0 to 500 | 200 to 500 | 10 | 2-core CPU | 6 GB | 512 MB | 3 GB | 60 GB |

64-bit Linux environment (recommended)

| Node count | CE-CE connections per VPN | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 500 | 0 to 200 | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 500 | 200 to 500 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 500 to 1 K | 0 to 500 | 20 | 4-core CPU | 8 GB | 4 GB | 3 GB | 50 GB |
| 500 to 1 K | 500 to 1000 | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| 1 K to 2 K | 0 to 1000 | 20 | 2 × 6-core CPUs | 16 GB | 6 GB | 3 GB | 60 GB |
| 1 K to 2 K | 1000 to 2000 | 10 | 2 × 6-core CPUs | 16 GB | 6 GB | 3 GB | 200 GB |
| 2 K to 5 K | 0 to 2000 | 20 | 8-core CPU | 24 GB | 8 GB | 5 GB | 80 GB |
| 2 K to 5 K | 2000 to 5000 | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 10 K | 0 to 5000 | 20 | 12-core CPU | 32 GB | 12 GB | 7 GB | 100 GB |
| 5 K to 10 K | 5000 to 10000 | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |


CAUTION:

To improve the I/O performance, follow these guidelines:

If the number of CE-CE connections per VPN is from 1 K to 2 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of CE-CE connections per VPN is from 2 K to 5 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of CE-CE connections per VPN is from 5 K to 10 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. When you use more than four disks, HP recommends setting the RAID level to 0+1.

WSM deployment scheme

Follow these guidelines when deploying WSM:

1. If an AP supports switching between fit and fat AP mode, count the AP as a fat AP and choose the hardware configuration according to the number of fat APs.

2. In a network configured with AC backup (1+1 backup or N+1 backup), choose the hardware configuration based on twice the number of fit APs (see the sizing sketch after these guidelines).

3. When the number of managed APs exceeds 100, deploy the WSM component in distributed mode. When more than 8000 fit APs or 5000 fat APs exist in the service provider network, or when more than 10000 fit APs or 5000 fat APs exist in the enterprise network, you must deploy multiple WSM components.

4. It is a good practice to allow no more than 10 online operators. To allow more, select a hardware configuration of a higher level.
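Guidelines 1 and 2 above amount to a simple counting rule when sizing a WSM server. The sketch below only illustrates that rule; the function and its parameter names are hypothetical, not an HP sizing tool.

```python
def wsm_sizing_counts(fit_aps, fat_aps, switchable_aps=0, ac_backup=False):
    """Count APs for WSM hardware sizing: mode-switchable APs count as fat APs
    (guideline 1), and AC backup (1+1 or N+1) doubles the effective number
    of fit APs (guideline 2)."""
    effective_fat = fat_aps + switchable_aps
    effective_fit = fit_aps * 2 if ac_backup else fit_aps
    return effective_fit, effective_fat

# 400 fit APs with AC backup plus 50 mode-switchable APs:
# size the server for 800 fit APs and 50 fat APs.
print(wsm_sizing_counts(fit_aps=400, fat_aps=0, switchable_aps=50, ac_backup=True))
```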

32-bit Windows environment

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fit AP: 0 to 500 or fat AP: 0 to 300 | 0 to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

64-bit Windows environment (recommended)

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fit AP: 0 to 500 or fat AP: 0 to 300 | 0 to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| Fit AP: 500 to 1000 or fat AP: 300 to 700 | 16 K to 90 K | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| Fit AP: 1000 to 3000 or fat AP: 700 to 2000 | 32 K to 150 K | 10 | 6-core CPU | 16 GB | 6 GB | 4 GB | 200 GB |
| Fit AP: 3000 to 5000 or fat AP: 2000 to 3000 | 100 K to 250 K | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| Fit AP: 5000 to 10000 or fat AP: 3000 to 5000 in an enterprise network; fit AP: 5000 to 8000 or fat AP: 3000 to 5000 in a service provider network | 160 K to 400 K | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |

CAUTION:

To improve the I/O performance, follow these guidelines:

If the number of collection units is from 100 K to 200 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of collection units is from 200 K to 300 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of collection units is from 300 K to 400 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. When you use more than four disks, HP recommends setting the RAID level to 0+1.


32-bit Linux environment

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fit AP: 0 to 500 or fat AP: 0 to 300 | 0 to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

64-bit Linux environment (recommended)

| Node count | Collection units | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Fit AP: 0 to 500 or fat AP: 0 to 300 | 0 to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| Fit AP: 500 to 1000 or fat AP: 300 to 700 | 16 K to 90 K | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| Fit AP: 1000 to 3000 or fat AP: 700 to 2000 | 32 K to 150 K | 10 | 6-core CPU | 16 GB | 6 GB | 4 GB | 200 GB |
| Fit AP: 3000 to 5000 or fat AP: 2000 to 3000 | 100 K to 250 K | 10 | 12-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| Fit AP: 5000 to 10000 or fat AP: 3000 to 5000 in an enterprise network; fit AP: 5000 to 8000 or fat AP: 3000 to 5000 in a service provider network | 160 K to 400 K | 10 | 16-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |


CAUTION:

To improve the I/O performance, follow these guidelines:

If the number of collection units is from 100 K to 200 K, install two or more disks and a RAID card with a cache of 256 MB or more.

If the number of collection units is from 200 K to 300 K, install two or more disks and a RAID card with a cache of 512 MB or more.

If the number of collection units is from 300 K to 400 K, install four or more disks and a RAID card with a cache of 1 GB or more.

HP recommends setting the RAID level to 5, which requires three or more disks. When you use more than four disks, HP recommends setting the RAID level to 0+1.

EPM deployment scheme

32-bit Windows environment

| Node count (ONUs) | ONUs per OLT | Platform node count | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 1000 | 0 to 200 | 50 | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 1000 | 200 to 500 | 50 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

64-bit Windows environment (recommended)

| Node count (ONUs) | ONUs per OLT | Platform node count | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 1000 | 0 to 200 | 50 | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 1000 | 200 to 500 | 50 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 1 K to 2 K | 0 to 500 | 50 | 20 | 4-core CPU | 8 GB | 2 GB | 3 GB | 50 GB |
| 1 K to 2 K | 500 to 1000 | 50 | 10 | 4-core CPU | 8 GB | 2 GB | 3 GB | 100 GB |
| 2 K to 5 K | 0 to 500 | 100 | 20 | 6-core CPU | 16 GB | 4 GB | 4 GB | 60 GB |
| 2 K to 5 K | 500 to 1000 | 100 | 10 | 6-core CPU | 16 GB | 4 GB | 4 GB | 200 GB |
| 5 K to 10 K | 0 to 1000 | 200 | 20 | 8-core CPU | 24 GB | 8 GB | 5 GB | 80 GB |
| 5 K to 10 K | 1000 to 2000 | 200 | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |


CAUTION:

If more than 2000 optical network units (ONUs) are connected to an optical line terminal (OLT), the MIB interface performance of the OLT is poor. In this case, enhancing server performance does not solve the problem.

EoCM deployment scheme

Before you deploy EoCM, HP recommends configuring coaxial-cable line terminals (CLTs) and coaxial-cable network units (CNUs) at a ratio of 1:20. If the ratio is smaller, choose a server of a higher level. If the iMC platform manages fewer than 50 devices, you can install EoCM on the same server as the iMC platform. Otherwise, you must deploy EoCM on a secondary server.

32-bit Windows environment

| Node count (CNUs) | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 5 K | 0 to 5 K | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 5 K | 5 K to 200 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started.

64-bit Windows environment (recommended)

| Node count (CNUs) | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 5 K | 0 to 5 K | 20 | 2-core CPU | 4 GB | 2 GB | 3 GB | 30 GB |
| 0 to 5 K | 5 K to 200 K | 10 | 2-core CPU | 4 GB | 2 GB | 3 GB | 60 GB |
| 5 K to 20 K | 0 to 5 K | 20 | 4-core CPU | 8 GB | 2 GB | 3 GB | 50 GB |
| 5 K to 20 K | 5 K to 300 K | 10 | 4-core CPU | 8 GB | 2 GB | 3 GB | 100 GB |
| 20 K to 50 K | 0 to 5 K | 20 | 6-core CPU | 16 GB | 4 GB | 4 GB | 60 GB |
| 20 K to 50 K | 5 K to 400 K | 10 | 6-core CPU | 16 GB | 4 GB | 4 GB | 200 GB |
| 50 K to 80 K | 0 to 5 K | 20 | 8-core CPU | 24 GB | 8 GB | 5 GB | 80 GB |
| 50 K to 80 K | 5 K to 400 K | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started.


VSM deployment scheme

Follow these guidelines when deploying VSM:

1. MSR voice gateways consume the iMC platform license size. For how to deploy the VSM server in a network where only MSR voice gateways exist, see the iMC platform configuration requirements.

2. When no IP phones are in the network but software terminal applications exist, see the IP phone configuration requirements for traffic statistics.

32-bit Windows environment

| Node count (IP phones) | Number of calls | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 500 | 100 K to 200 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

64-bit Windows environment (recommended)

| Node count (IP phones) | Number of calls | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 500 | 100 K to 200 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 500 to 2 K | 200 K to 500 K | 10 | 4-core CPU | 8 GB | 2 GB | 3 GB | 100 GB |
| 2 K to 5 K | 500 K to 1 M | 10 | 6-core CPU | 16 GB | 4 GB | 4 GB | 200 GB |
| 5 K to 10 K | 1 M to 2 M | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |

IVM deployment scheme

The IPsec VPN uses the hub-spoke mode, so the number of spoke devices determines the number of nodes, and one hub device uses one device node from the license size. For IVM server deployment, see the iMC platform configuration requirements.


32-bit Windows environment

| Node count (spoke devices) | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started.

64-bit Windows environment (recommended)

| Node count (spoke devices) | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 200 to 500 | 0 to 5 K | 20 | 4-core CPU | 8 GB | 4 GB | 3 GB | 50 GB |
| 200 to 500 | 5 K to 50 K | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| 500 to 2 K | 0 to 5 K | 20 | 6-core CPU | 12 GB | 6 GB | 4 GB | 60 GB |
| 500 to 2 K | 5 K to 100 K | 10 | 6-core CPU | 12 GB | 6 GB | 4 GB | 200 GB |
| 2 K to 5 K | 0 to 5 K | 20 | 8-core CPU | 16 GB | 8 GB | 5 GB | 80 GB |
| 2 K to 5 K | 5 K to 100 K | 10 | 8-core CPU | 16 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 10 K | 0 to 5 K | 20 | 12-core CPU | 32 GB | 12 GB | 7 GB | 100 GB |
| 5 K to 10 K | 5 K to 100 K | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started.

32-bit Linux environment

| Node count (spoke devices) | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started.

64-bit Linux environment (recommended)

| Node count (spoke devices) | Collection units(1) | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 0 to 5 K | 20 | 2-core CPU | 4 GB | 1 GB | 3 GB | 30 GB |
| 0 to 200 | 5 K to 50 K | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 200 to 500 | 0 to 5 K | 20 | 4-core CPU | 8 GB | 4 GB | 3 GB | 50 GB |
| 200 to 500 | 5 K to 50 K | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| 500 to 2 K | 0 to 5 K | 20 | 6-core CPU | 12 GB | 6 GB | 4 GB | 60 GB |
| 500 to 2 K | 5 K to 100 K | 10 | 6-core CPU | 12 GB | 6 GB | 4 GB | 200 GB |
| 2 K to 5 K | 0 to 5 K | 20 | 8-core CPU | 16 GB | 8 GB | 5 GB | 80 GB |
| 2 K to 5 K | 5 K to 100 K | 10 | 8-core CPU | 16 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 10 K | 0 to 5 K | 20 | 12-core CPU | 32 GB | 12 GB | 7 GB | 100 GB |
| 5 K to 10 K | 5 K to 100 K | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |

(1) A value in the range 0 to 5 K means that performance monitoring is not started or only minor performance monitoring is started.

BIMS deployment scheme

Follow these guidelines when deploying BIMS:

1. When the number of managed CPEs is more than 100, you must deploy BIMS and iMC on different servers.

2. When the number of managed CPEs is more than 5000, HP recommends you to deploy multiple ACSs separately.

32-bit Windows environment

| Node count | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |


64-bit Windows environment (recommended)

| Node count | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 200 to 1 K | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| 1 K to 2 K | 10 | 6-core CPU | 16 GB | 6 GB | 4 GB | 200 GB |
| 2 K to 5 K | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 7 K | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |

32-bit Linux environment

| Node count | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |

64-bit Linux environment (recommended)

| Node count | Max. online operators | CPU (main frequency ≥ 2.5 GHz) | Memory | Java heap size | Disk space for installation (imcInstallDir) | Disk space for data storage (imcDataDir) |
| --- | --- | --- | --- | --- | --- | --- |
| 0 to 200 | 10 | 2-core CPU | 4 GB | 1 GB | 3 GB | 60 GB |
| 200 to 1 K | 10 | 4-core CPU | 8 GB | 4 GB | 3 GB | 100 GB |
| 1 K to 2 K | 10 | 6-core CPU | 16 GB | 6 GB | 3 GB | 200 GB |
| 2 K to 5 K | 10 | 8-core CPU | 24 GB | 8 GB | 5 GB | 250 GB |
| 5 K to 7 K | 10 | 12-core CPU | 32 GB | 12 GB | 7 GB | 300 GB |


Software requirements

Operating system (x86-64-bit operating systems preferred)

Windows

Windows Server 2003 with Service Pack 2 (32-bit)

Windows Server 2003 with Service Pack 2 (64-bit) and patch KB942288

Windows Server 2003 R2 with Service Pack 2 (32-bit)

Windows Server 2003 R2 with Service Pack 2 (64-bit) and patch KB942288

Windows Server 2008 with Service Pack 2 (32-bit)

Windows Server 2008 with Service Pack 2 (64-bit)

Windows Server 2008 R2 with Service Pack 1 (64-bit)

Linux

Red Hat Enterprise Linux Server 5 (32-bit)

Red Hat Enterprise Linux Server 5 (64-bit)

Red Hat Enterprise Linux Server 5.5 (32-bit)

Red Hat Enterprise Linux Server 5.5 (64-bit)

Red Hat Enterprise Linux Server 6.1 (64-bit)

Databases

Databases embedded in the iMC

The maximum capacity of an SQL Server 2008 R2 Express database embedded in the iMC is 10 GB. If the total number of collection units of the platform components is less than 20 K, the total number of alarms saved in the database is less than 100000, and the number of managed device nodes is less than 1000, you can use SQL Server Express databases. Otherwise, use external databases.
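The three thresholds above can be combined into a single eligibility check. The following is a minimal sketch under the stated assumptions; the function name is illustrative, not part of iMC.

```python
def embedded_db_is_sufficient(collection_units, saved_alarms, device_nodes):
    """Return True if the embedded SQL Server 2008 R2 Express database may be used,
    per the thresholds above (< 20 K collection units, < 100000 saved alarms,
    < 1000 managed device nodes)."""
    return (collection_units < 20_000
            and saved_alarms < 100_000
            and device_nodes < 1_000)

# A network with 15 K collection units, 80000 saved alarms, and 600 device nodes
# can use the embedded database.
print(embedded_db_is_sufficient(15_000, 80_000, 600))  # True
```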

For the NTA/UBA component, the volume of collected data is generally very large and therefore embedded databases are not recommended.

For the UAM, EAD, and CAMS components, the volume of data increases rapidly. Many factors contribute to the increase in database sizes, which often exceed the 4 GB embedded database size limit. Therefore, embedded databases are not recommended.


For the MVM, which runs on the core network, stability is of vital importance and therefore embedded databases are not recommended.

The WSM needs to summarize and analyze large volumes of report data at the operator's office. Because the volume of data processed is large, embedded databases become a performance bottleneck and are not recommended. If the number of APs on an enterprise network exceeds 200, embedded databases are also not recommended.

For the EoCM component, if the number of CNUs exceeds 1000, embedded databases are not recommended.

For the EPM component, if the number of ONUs exceeds 1000, embedded databases are not recommended.

Oracle 11g release 1 and release 2 (on Linux only)

Each Oracle database edition supports a limited number of CPUs. When choosing a deployment scheme, see the following table. (The data listed is for reference only; for the actual CPU support of different Oracle 11g Release 1 editions, see the Oracle documentation.) Oracle 11g Release 1 applies to Linux systems only.

| | Standard Edition One | Standard Edition | Enterprise Edition |
| --- | --- | --- | --- |
| Number of supported CPUs | 2 | 4 | No limit |

Remarks: Multi-core processors are supported.

SQL server 2005 SP3

SQL server 2008 SP2

SQL server 2008 R2

Each SQL Server edition supports a limited number of CPUs and a limited RAM size. When choosing a deployment scheme, see the following table. (The table shows the CPU and memory support of SQL Server 2008 editions; the support of SQL Server 2005 editions is similar. The data is for reference only; for the actual CPU and RAM support of SQL Server 2008 editions, see the Microsoft documentation.)

| | Express | Workgroup | Standard | Enterprise |
| --- | --- | --- | --- | --- |
| Number of supported CPUs | 1 | 2 | 4 | No limit |
| Supported memory | 1 GB | 4 GB | OS maximum | OS maximum |

Remarks: Multi-core processors are supported. The memory size cannot exceed the maximum value supported by the operating system.


Web browser

Turn off all window blocking settings of the browser.

Enable cookies on the browser.

The client screen resolution should be at least 1024 × 768.

IE 8 or later.

Firefox 3.6 or later.