

Page 1: ASM EVA Best Practices White Paper

Best practices for Oracle 10g with Automatic Storage Management (ASM) and HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) white paper

Environment: Oracle 10g with ASM on both a single instance database and Real Application Clusters (RAC) using HP ProLiant DL580 servers running Windows 2003 Enterprise Edition with the HP StorageWorks EVA8000

Executive summary
Key findings
Overview
Components
Configuring the hardware
    Configuring the EVA8000 storage array
    Configuring the HP ProLiant DL580 server
Configuring the software
    Configuring the MPIO driver
    Configuring Windows for ASM
    Using 32-bit Windows
    Modifying ASM hidden parameters
    Working with Benchmark Factory
ASM performance results
    Batch test results
    OLTP test results
    DSS test results
Conclusions
    Summary of results
    Project learnings
Best practices
    Best practices for Storage Administrators
    Best practices for Server Administrators
    Best practices for Database Administrators
Appendix A. Bill of Materials
Appendix B. Examples
For more information
    HP technical references
    HP solutions and training
    HP product sites
    Oracle
    Quest Software


Executive summary

Improve profitability, manage risk, and lower TCO with HP StorageWorks Application-Based Solutions. In this project, the HP StorageWorks Customer Focused Test team developed best practices for configuring an Oracle® 10g database with Automatic Storage Management (ASM) on the HP StorageWorks 8000 Enterprise Virtual Array (EVA8000) for optimal performance. A close look at the ASM disk group and EVA disk group configurations is the critical emphasis of this project. Results of this testing include the optimal EVA8000 configuration with respect to disk groups, database file layout, and mapping to corresponding ASM disk groups.

The following questions were specifically addressed in this project:

• What is the best method to map Oracle/ASM data extents to EVA disk groups?
• Is there an advantage to "double-striping," in which the hardware and software stripe the data across the same physical disks?
• What is the recommended configuration for an Oracle/ASM database on the EVA?
• Does this recommendation change based on workload type, that is, batch, Online Transaction Processing (OLTP), and Decision Support Systems (DSS)?
• Does this recommendation change based on single instance database or Real Application Clusters (RAC)?

Key findings

Testing successfully provided the following high-level results:

• ASM striping can improve database performance in conjunction with EVA virtualization.
• The EVA configuration had no significant effect on batch loading of 500 GB. All configurations performed equally well.
• The EVA configuration for OLTP testing shows an advantage for a configuration that uses both ASM and EVA striping (double-striping). The advantages were more pronounced as the database size increased (2.5-TB RAC database with 1,000 concurrent users).
• The greatest benefit from double-striping was most noticeable with DSS workloads. Obvious differences between the configurations were observed, especially with the 2.5-TB RAC database with nine concurrent users running a portion of the DSS power test.
• Even greater performance on DSS workloads can be achieved by modifying one of the ASM hidden parameters, allocation unit size, to maximize data placement on the EVA.

Important findings uncovered during the tests are documented in the Best practices section.

Overview

The main purpose of the project was to test various EVA disk group configurations and determine the best configuration when running an Oracle 10g database with ASM. The introduction of Oracle ASM capabilities in Oracle 10g provided alternative storage management options to those available in the storage hardware alone. ASM provides some functionality similar to that of the HP StorageWorks EVA. Customers want to understand where the functionality overlaps, whether the two can co-exist, and, if so, how best to configure HP StorageWorks EVA storage to optimize application performance under various database workloads.


Several test scenarios were evaluated with regard to their impact on database performance:

• EVA configurations—Three EVA configurations were tested to see how Oracle performed during the benchmark testing.

• Benchmarks—Three industry-standard benchmarks (batch, OLTP, and DSS) were run against the EVA configurations. The intent was to see if the best practices changed with differing workloads, that is, did a particular EVA configuration perform better with DSS workload than with an OLTP workload?

• Database type—Both single instance and RAC databases were tested to see if clustering had any effect on the overall best practice.

Components

To run these tests, HP configured the systems illustrated in Figure 1 and Figure 2. These environments were based on input from customers and are representative of typical Oracle database environments. The key components include the following:

• Oracle 10g—Quest Software’s Benchmark Factory for Databases was used to generate standard Oracle workloads against the Oracle database in a variety of workload types (batch, OLTP, and DSS).

• HP ProLiant DL580 server—This server was used to host the Oracle database. Two configurations were used—a single instance database (Figure 1) and a two-node RAC instance (Figure 2).

• HP ProLiant BL20p server—This server was used to host the necessary storage management software (HP StorageWorks Command View EVA) for monitoring and configuring the storage array. Also, the storage performance metrics from EVAPerf were managed from this server.

• EVA8000—SAN-based storage array that stored the Oracle database, logs, and so on.
• Microsoft® Windows® 2003 Enterprise Edition operating system—The database servers, benchmark servers, and storage management server all ran on Windows 2003 EE.


Figure 1. Environment for the Single Instance database


Figure 2. Environment for the RAC database

Configuring the hardware

For the complete list of all hardware and software, see Appendix A. Bill of Materials.

HP constructed three different disk layouts for the EVA configurations (Figure 3). These configurations meet both HP and Oracle best practices of having two separate disk groups: one for online/system data and the other for backup/recovery data. This best practice is not for performance purposes, but to protect recovery efforts by separating these files. These are by no means the only possible configurations; they are merely a guideline for best practices. Database Administrators (DBAs) can modify these configurations to suit their particular environments.

• Configuration #1—Both ASM and EVA striping ("double-striping") across the same physical disks
This configuration highlights the concept of double-striping across the same physical disks using ASM software-based striping and EVA hardware-based striping algorithms. This is the expected optimal performing configuration.

• Configuration #2—Single-striping (EVA striping only)
In this scenario, the virtual disk is subject to "single-striping," performed only by the EVA. Because only one LUN is presented to the ASM disk group, ASM cannot stripe. There are obvious scalability issues with this configuration for large databases since the EVA has a current maximum LUN size of 2 TB.

• Configuration #2a—Variation
This variation of Configuration #2 is necessary to scale databases larger than 2 TB. This configuration slowly evolves into Configuration #1 as more LUNs are added.


• Configuration #3—Double-striping across different physical disks
In this scenario, the virtual disk is still subject to double-striping, performed by both the EVA and ASM. This differs from Configuration #1 because the disks that ASM is striping across do not belong to the same EVA disk group. Because of the smaller spindle count in the EVA disk groups, there is a reduced advantage to EVA striping. Therefore, this configuration is not expected to perform well, nor is it recommended in general.

Figure 3. EVA configurations (logical view). Each configuration uses two EVA disk groups, each presenting LUNs to a corresponding ASM disk group: one disk group holds the user files, system files, control files, and online redo logs; the other holds the archive logs, flashback area, and RMAN backups. The original figure diagrams Configuration 1 and Configuration 2a.


Configuring the EVA8000 storage array

The EVA8000 configuration included the following:

• EVA XCS firmware: v5.100
• 1 EVA8000 controller pair
• 12 EVA disk shelves
• 168 146-GB 15K FC disks

The Fibre Channel connections were made to Brocade SilkWorm 3800 switches and configured in dual 2-Gb fabrics.

All virtual disks in the EVA configurations used VRAID1. The virtual disks were manually balanced across both EVA controllers for optimal performance. The physical drives were divided across the two disk groups with more spindles in the database disk group, for example, 96 drives in the database disk group and 64 drives in the backup disk group. The disk groups were configured for single disk failure protection. The virtual disks were presented only to the host ports that were connected to the master controller for the virtual disk. This was done, in part, to eliminate any possible impact to performance from traffic crossing on the mirror ports of the EVA controllers. It is recognized that such a setup prevents access to the virtual disk in the event of a controller failover and would typically not be utilized in a customer environment.

Configuring the HP ProLiant DL580 server

An HP ProLiant DL580 server was used for each database server, with four Intel® Xeon™ processors (hyper-threaded to eight logical processors) and 32 GB of RAM. The ProLiant Support Pack 7.5.1 was used to configure the server and install the Windows 2003 EE operating system. Fibre Channel connectivity was provided by four Emulex LP1050 dual-channel host bus adapters (HBAs) with the native Windows MPIO driver used for host connectivity, for a total of eight physical paths to the fabric. Because of strict zoning to control access to the master controller for each virtual disk, a total of 32 paths (instead of the typical 64 paths) was available for each host I/O.
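The path counts above follow from simple multiplication; as a sketch (our reading, not spelled out in the paper), assume every host HBA port can reach every zoned array host port, so the path count is the product of the two:

```python
# Back-of-the-envelope path count for the fabric described above.
# Assumption (ours, not stated in the paper): every host HBA port can reach
# every zoned array host port, so paths = host ports x array ports.

hba_count = 4                 # Emulex LP1050 dual-channel HBAs
ports_per_hba = 2
host_ports = hba_count * ports_per_hba        # 8 physical paths to the fabric

array_ports_total = 8         # EVA8000: two controllers, 4 host ports each
array_ports_zoned = 4         # strict zoning: master controller's ports only

typical_paths = host_ports * array_ports_total   # without the strict zoning
zoned_paths = host_ports * array_ports_zoned     # as configured in this test

print(typical_paths, zoned_paths)                # 64 32
```

This reproduces the 32-versus-64 figure quoted in the text.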

Configuring the software

Configuring the MPIO driver

The Windows MPIO/HP DSM driver provides multipathing on Windows servers and supports Active-Active storage arrays. The EVA8000 storage array was configured to load balance across HBAs and controllers with Active-Active enabled. The latest Windows MPIO/HP DSM driver can be obtained from the HP website under "support and downloads." The default load-balancing policy, SQST (Table 1), was used in this testing for load balancing across the HBA ports. The MPIO/DSM version was 2.00.01.

Table 1. MPIO multipathing options

Type Policy Description

NLB No_Load_Balance All I/O requests are routed through a chosen active path.

RR Round_Robin All I/O requests are distributed across all active paths to the device in a round robin manner.

SQR Shortest_Queue_Requests Each I/O request is routed to the active path with the least number of outstanding requests.

SQB Shortest_Queue_Bytes Each I/O request is routed to the active path with the least number of outstanding data bytes.

SQST Shortest_Queue_ServiceTime Each I/O request is routed to the active path where the total outstanding time for pending I/O requests is the least.
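The SQST policy in Table 1 can be illustrated with a small sketch. This is not the DSM implementation, only a model of the selection rule: route each I/O to the path whose pending requests have the least total outstanding service time. The path names and timings are hypothetical:

```python
# Illustrative model of the SQST (Shortest_Queue_ServiceTime) policy from
# Table 1: pick the active path with the least total outstanding I/O time.

def pick_path_sqst(paths):
    """paths: dict mapping path name -> list of outstanding I/O times (s)."""
    return min(paths, key=lambda p: sum(paths[p]))

# Hypothetical snapshot of three active paths
pending = {
    "hba0-port0": [0.004, 0.006],   # 10 ms outstanding
    "hba0-port1": [0.002],          # 2 ms outstanding
    "hba1-port0": [0.005, 0.005],   # 10 ms outstanding
}
print(pick_path_sqst(pending))      # hba0-port1
```

Under the same model, Round_Robin would ignore the queue state entirely, which is why queue-aware policies such as SQST tend to behave better when path latencies diverge.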


Configuring Windows for ASM

The diskpart command was run to enable auto-mounting of volumes; this allows asmtoolg to see the volume. In Windows 2003 server editions, this setting is off by default. Next, the virtual disk was presented to the server, the disk was initialized in Computer Management as a basic disk, and the volume was left as a raw device, that is, no file system was installed (Figure 4).

DISKPART> automount enable
Automatic mounting of new volumes enabled.

Figure 4. Unformatted volume

Using the asmtoolg tool, the disk was stamped with the ASM signature for use by the Oracle database. For more information, see Appendix B. Examples.

The stamped ASM disk was verified with the asmtool -list command.

C:\>asmtool -list
\Device\Harddisk0\Partition1   69445M   ORCLDISKDATA0
\Device\Harddisk1\Partition1    1019M
\Device\Harddisk2\Partition1 2096121M
\Device\Harddisk3\Partition1 1048570M

In the RAC environment, the virtual disk was presented to both servers. One of the servers was used to create the ASM volume. The other node must be rebooted before it recognizes the volume and before the database can be installed on it.

Using 32-bit Windows

If you are running 32-bit Windows with more than 4 GB of RAM, several extra steps are necessary to install and configure the Oracle database.

• Add /3GB and /PAE to the Windows boot.ini file.
– On a 4-GB RAM server, /3GB allows applications to use 3 GB of RAM (instead of the default 2 GB).
– On servers with more than 4 GB of RAM, /PAE allows applications to access the higher memory.
– If used in conjunction with /PAE for higher memory amounts, /3GB limits PAE to 16 GB of RAM.

Note: The database buffer cache is the only component that can utilize memory greater than 3 GB, by way of DB_BLOCK_SIZE x DB_BLOCK_BUFFERS. Other SGA components, the PGA, other memory parameters, and Oracle processes must all share 3 GB of memory. For more information, refer to Metalink Note 225349.1.


• Adjust the registry setting AWE_WINDOW_MEMORY.
– This parameter creates a window into the memory above 4 GB when /PAE is in effect.
• Assign the Oracle user the Windows "Lock Pages in Memory" system privilege in Administrative Tools, Local Security Policy, Local Policies, User Rights Assignment.

• Set the initialization parameter USE_INDIRECT_DATA_BUFFERS to TRUE.
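The memory ceilings above can be put into numbers. In this sketch the block size and buffer count are hypothetical example values, not recommendations; the point is only that the buffer cache (block size times buffer count) is the one component that can sit above the 3-GB line, and that /3GB caps PAE-addressable memory at 16 GB:

```python
# Arithmetic sketch of the 32-bit /3GB + /PAE limits described above.
# Example values only; block size and buffer count are hypothetical.

GB = 2**30
db_block_size = 8 * 1024        # hypothetical 8-KB database block size
buffer_count = 1_310_720        # hypothetical number of data buffers

# Only the buffer cache can live above the 3-GB line; with /3GB set,
# /PAE tops out at 16 GB, so the cache must fit in the remaining 13 GB.
buffer_cache_gb = db_block_size * buffer_count / GB
fits_under_pae_ceiling = buffer_cache_gb <= 16 - 3

print(buffer_cache_gb, fits_under_pae_ceiling)   # 10.0 True
```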

Modifying ASM hidden parameters

There are a few hidden ASM parameters that can be modified to change the ASM stripe size and allocation unit. These ASM parameters were originally intended for use in very large databases (multiple-TB to PB range) and are used to reduce the amount of metadata overhead on the ASM instance. Working with recommendations from Oracle engineering, HP evaluated variations of these parameters on smaller databases to see if they would affect or improve overall database performance.

Note that the following ASM parameters must start with an "_" (underscore):

• _asm_ausize (allocation unit)
• _asm_stripesize
• fine grain/coarse grain striping templates

For more information on setting and configuring these parameters, refer to Oracle Metalink Note 368055.1.

ASM allows for the modification of two key parameters that affect the way data is striped across an ASM disk group: allocation unit size and stripe size. ASM Allocation Unit Size (_asm_ausize) determines the extent size for all database files. For example, a 1-GB data file using the default ausize of 1 MB will consist of 1,024 extents of 1 MB each (Figure 5).
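The extent arithmetic above can be sketched directly; a data file is carved into extents of the allocation unit size, so the extent count is just a ceiling division:

```python
# Extent-count arithmetic from the paragraph above: a data file is divided
# into extents of _asm_ausize bytes, distributed across the ASM disk group.

MB = 1024 * 1024

def extent_count(file_bytes, au_bytes):
    # ceiling division: a partial extent still occupies a full AU
    return -(-file_bytes // au_bytes)

file_size = 1024 * MB                   # 1-GB data file
print(extent_count(file_size, 1 * MB))  # 1024 extents with the 1-MB default
print(extent_count(file_size, 8 * MB))  # 128 extents with an 8-MB AU
```

Fewer, larger extents mean less ASM metadata to manage for the same file, which is why these parameters were aimed at very large databases in the first place.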

Figure 5. ASM striping with 1 MB


The same data file using an ausize of 8 MB will consist of 128 extents of 8 MB each (Figure 6). Data is striped across extents, which are distributed equally across the disks in the ASM disk group. ASM Stripe Size (_asm_stripesize) determines the I/O size for fine grain templates (coarse grain templates are always 1 MB—the Oracle maximum I/O size). The default _asm_stripesize is 128 KB.

Figure 6. ASM striping with 8 MB

In our testing, the _asm_ausize showed the most promise for improving performance. To review the entire project, refer to the following knowledge brief at www.hp.com/go/hpcft. The benchmarks used in this testing were random read/write (OLTP) and large sequential reads (DSS). The OLTP benchmark did not show significant performance differences when applying the ASM parameter changes. However, the large sequential reads benchmark did improve when changing the _asm_ausize value. These DSS results are shown in Figure 7.


Figure 7. DSS results with _asm_ausize = 8 MB. The chart plots sequential-read performance as a percentage against the baseline: ASM defaults (baseline) = 100%; 8-MB AU size = 116%.

In this test, the DSS scripts completed much faster when the ASM allocation unit size was set to 8 MB, outperforming the baseline by completing in less time. Sequential reads appear to perform better when the ASM allocation unit is set to 8 MB. In this test, setting the ASM AU size improved overall DSS query performance by 16%. HP recommends that customers evaluate this parameter in a pre-production environment to see if database performance improves when running against their own data and workloads.
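Because the DSS benchmark is measured in completion time, the 116% figure can be read as the inverse ratio of completion times. The times below are hypothetical (the paper reports only the percentage), so this is just the conversion:

```python
# Our reading of the 116% figure: performance relative to baseline is the
# inverse ratio of completion times for a time-to-completion benchmark.

def perf_vs_baseline(baseline_minutes, test_minutes):
    return baseline_minutes / test_minutes * 100

# Hypothetical completion times chosen to illustrate the ratio; the paper
# does not publish the raw times.
baseline, with_8mb_au = 420.0, 362.0
print(round(perf_vs_baseline(baseline, with_8mb_au)))   # 116
```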

Working with Benchmark Factory

Quest Software's Benchmark Factory for Databases 5.0 (Beta 4) was used to conduct the workload generation. Benchmark Factory is a generic workload generation utility that can perform DSS and OLTP workloads. It can also perform a custom workload based on any trace generated by a database or on any custom SQL script.

The workloads used for benchmark testing in accordance with the industry standard are defined in Table 2. Each test scenario was executed a minimum of three times, and averages were taken from all test results.

Table 2. Benchmark scale factors

Workload Size User load

Batch—single instance 500 GB 1

Batch—RAC N/A N/A

OLTP—single instance 500 GB 500

OLTP—RAC 2.5 TB 1,000

DSS—single instance 500 GB 3

DSS—RAC 2.5 TB 9


Benchmark Factory scale factors are approximate and should not be used as absolute guides. The following example shows a scale factor of 1000 for the DSS benchmark. While this appears to create a 934-GB database, that value does not include indexes or stored procedures, which further increase the overall database size. As stated in Figure 8, the scale factor shown is only an estimate.

Figure 8. Benchmark scale factors


Using advanced features in Benchmark Factory (Figure 9), a user can create the tables in different tablespaces, use the parallel parameter, turn on or off logging, and so on.

Figure 9. Benchmark Factory advanced features


Benchmark Factory can also load balance the workload across all nodes in the RAC environment. Figure 10 is a screenshot of 20 virtual users, each connecting to the database concurrently. Notice that Benchmark Factory has assigned the first 10 users to RAC node 1 and the second 10 users to RAC node 2. The same held true when user loads of 500 and 1,000 were used for the OLTP testing.

Figure 10. Benchmark Factory load balancing


While the benchmark test is running, the user can monitor the progress from the benchmark agent shown in Figure 11. This screenshot shows the 20 concurrent virtual users running the OLTP benchmark and the exact query each is currently processing. It also shows transactions per second (TPS), min and max transaction times, and errors. This screenshot was taken at the beginning of a test, so some of these values are not yet populated. In addition, the complete display is not shown due to space constraints (the min and max transaction times are cut off in the screenshot).

Figure 11. Benchmark Factory agent

ASM performance results

The testing effort provided the following results:

• Batch testing—No discernible performance differences between configurations
• OLTP testing, single instance—Configuration #1 offered the best performance
• OLTP testing, RAC—Configuration #1 offered the best performance
• DSS testing, single instance—Configuration #1 offered the best performance
• DSS testing, RAC—Configuration #1 offered the best performance


Batch test results

Batch testing consisted of a single user issuing sequential writes by way of Oracle Data Pump in parallel mode into two related tables (Table 3). These tables provided the basis for the OLTP benchmark. Total size was approximately 500 GB, representing 4 billion rows of data. The benchmark is measured in completion time (minutes:seconds). The test iteration was approximately 3 hours.

Table 3. OLTP benchmark

Table Rows

Product 777,000,000

Part 3,080,000,000

The database was set up with a single bigfile tablespace. The user schema contained the tables that were to be loaded during the benchmark. This is diagrammed in Figure 12.

Figure 12. Batch database configuration


The test results did not show any major performance difference related to configuration, as shown in Figure 13. This was expected: as long as there is no bottleneck in the system, sequential writes should not be influenced by the disk configuration.

Figure 13. Batch results on single instance. The chart plots performance as a percentage against the baseline: Configurations #1, #2, and #3 all measured 100%.

The batch testing on RAC was not completed due to a known issue in Oracle's Data Pump load tool, specifically its ability to utilize the parallel parameter in a RAC environment. Work-arounds included shutting down all but one RAC node or running without the parallel parameter. In either case, the result would be a "single instance" configuration, which was already covered in the previous test. This is Oracle bug #5472417 and is currently being tracked by Oracle.

OLTP test results

The OLTP test consisted of a random read/write load on two related tables consisting of approximately 4 billion rows (Table 4). This test is measured in transactions per second (TPS). The test iteration was run for exactly 3 hours.

Table 4. OLTP benchmark 500 GB

Table Rows

Product 777,000,000

Part 3,080,000,000


The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary for the benchmark execution. This is diagrammed in Figure 14.

Figure 14. OLTP 500-GB database configuration

Five user schemas (each 500 GB) were used to scale up to the 2.5-TB database size used in the RAC testing, consisting of approximately 20 billion rows (Figure 15).

Figure 15. OLTP 2.5-TB database configuration


Transaction ratio "weights" are shown in Figure 16. The transaction types can be weighted to control the percentage of time each occurs during the random transaction mix of the OLTP benchmark. These weights were left at the industry-standard defaults.

Figure 16. OLTP transaction mix

The results for single instance testing on the 500-GB database with 500 concurrent users do not show a significant trend (Figure 17). Keep in mind that an OLTP workload is typically a server-intensive benchmark. Migrating to a 2-node RAC environment, with more users and a larger database size, increases the impact of the various EVA configurations.

Figure 17. OLTP results on single instance

[Bar chart: OLTP Results on Single Instance, Performance % Against Baseline. Config #1: 100.00% (baseline); Config #2: 99.99%; Config #3: 99.99%]


On the 2.5-TB RAC database with 1,000 concurrent virtual users, the workload on the storage array has increased, and the divergence among the three EVA configurations becomes visible (Figure 18). Configuration #1 performed the best (highest throughput measured in TPS). The other configurations performed at 97% and 96% of the baseline (Configuration #1); in other words, they sustained slightly lower TPS during the three-hour benchmark.

Figure 18. OLTP results on RAC

[Bar chart: OLTP Results on RAC, Performance % Against Baseline. Config #1: 100% (baseline); Configs #2 and #3: 97% and 96%]

DSS test results

The DSS test consists of large sequential reads from eight related tables totaling approximately 2.6 billion rows. It is measured in time (minutes:seconds) to completion, and each test iteration takes approximately 7 hours. The row counts for the 300 scale of the DSS benchmark are shown in Table 5.

Table 5. DSS benchmark, 300 scale

Table      Rows
Part       60,000,000
Supplier   3,000,000
Partsupp   240,000,000
Customer   45,000,000
Order      450,000,000
Lineitem   1,800,364,500
Nation     25
Region     5


To create the 2.5-TB database, a combination of DSS scales (Table 6) was used across three user schemas (1000, 1000, 300). This is approximately 8.7 billion rows. Again, this benchmark is measured in time (minutes:seconds) to completion; the test iteration is approximately 6 hours.

Table 6. DSS benchmark, 1000 scale

Table      Rows
Part       200,000,000
Supplier   10,000,000
Partsupp   800,000,000
Customer   150,000,000
Order      1,500,000,000
Lineitem   6,001,215,000
Nation     25
Region     5

The database was set up with a single bigfile tablespace. The user schema contained the tables and data necessary to run the benchmark. This is diagrammed in Figure 19.

Figure 19. DSS 500-GB database configuration


For the 2.5-TB database, each of the three user schemas (1000, 1000, 300 scale) had three concurrent users issuing the Power Test benchmark (Figure 20).

Figure 20. DSS 2.5-TB database configuration

There are 22 queries in the DSS Power Test. A selection of 15 queries was used for our benchmark to keep the testing time manageable; this had no effect on the test outcome. The queries were selected based on their impact on the storage array and their average time to completion. The queries marked with an asterisk were dropped from the RAC testing, again only in the interest of time.

Power Test—measured in time to completion. Queries:

• Minimum Cost Supplier Query
• Customer Distribution Query*
• Order Priority Checking Query
• Promotion Effect Query
• Local Supplier Volume Query
• Top Supplier Query
• Forecasting Revenue Change Query
• Parts/Supplier Relationship Query
• Volume Shipping Query*
• Small-Quantity-Order Revenue Query
• National Market Share Query*
• Discounted Revenue Query
• Important Stock Identification Query*
• Global Sales Opportunity Query
• Shipping Modes and Order Priority Query*
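To give a flavor of the workload, a DSS query of this type performs a large sequential scan with an aggregate. The sketch below is written in the style of the Forecasting Revenue Change Query; it is illustrative, not the exact benchmark text:

```sql
-- Illustrative: revenue change from a one-year scan of LINEITEM
SELECT SUM(l_extendedprice * l_discount) AS revenue_change
FROM   lineitem
WHERE  l_shipdate >= DATE '1994-01-01'
  AND  l_shipdate <  DATE '1995-01-01'
  AND  l_discount BETWEEN 0.05 AND 0.07
  AND  l_quantity < 24;
```

Queries like this read hundreds of millions of rows sequentially, which is why the EVA virtual disk configuration matters most for this workload.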


The results of the 500-GB database with three concurrent users (Figure 21) start to show the difference in database performance based on the configuration of the EVA virtual disks. Configuration #1 performed the best, completing the 15 queries of the Power Test in the fastest time. Configurations #2 and #3 performed at 91% and 93% of the baseline (Configuration #1); in other words, they took longer to complete the Power Test.

Figure 21. DSS results on single instance

[Bar chart: DSS Results on Single Instance, Performance % Against Baseline. Config #1: 100% (baseline); Config #2: 91%; Config #3: 93%]


When the workload is increased to nine concurrent users on the 2.5-TB RAC database, the greatest divergence is seen among the configurations (Figure 22). This particular benchmark stressed the storage array more than any other workload. Configuration #1 has a substantial advantage over the other configurations, which performed at 88% and 60% of the baseline. Note that Configuration #3 actually had fewer overall spindles than Configurations #1 and #2 (84 versus 80). To create a fair test, Configuration #3 was re-run with an increased spindle count (Configuration #3a). This raised its performance from 60% to 74% of the baseline, but it remained the lowest-performing configuration in this benchmark.

Figure 22. DSS results on RAC

[Bar chart: DSS Results on RAC, Performance % Against Baseline. Config #1: 100% (baseline); Config #2: 88%; Config #3: 60%; Config #3a (increased spindle count): 74%]


Conclusions

The following results were observed during this testing.

Summary of results

• Single instance testing
– Batch—No discernible difference
– OLTP—Configuration #1
– DSS—Configuration #1

• RAC testing
– Batch—Not applicable (Oracle bug #5472417)
– OLTP—Configuration #1
– DSS—Configuration #1

Project learnings

• Does it matter how Oracle/Automatic Storage Management (ASM) maps data extents to the Enterprise Virtual Array (EVA) disk groups?
– Yes, especially in DSS environments (consider modifying the ASM hidden parameter _asm_ausize for optimal performance).

• Is there a penalty for double-striping?
– No. Configuration #1 (double-striping) was the best-performing configuration in testing. In addition, Configuration #1 also complies with both ASM and EVA configuration best practices.

• What is the general recommended configuration for an Oracle/ASM database on the EVA?
– Configuration #1

• Does this recommendation change based on workload type?
– No. Configuration #1 was the best performing in both OLTP and DSS workloads.

• Does this recommendation change based on single instance or Real Application Clusters (RAC)?
– No. Configuration #1 was the best performing in both single instance and RAC testing. It was also noted that as the configuration grows (additional RAC nodes, larger database size, more users), the performance advantage of Configuration #1 increases.


Best practices

During testing, several best practices were developed to improve database performance for each scenario.

Best practices for Storage Administrators
• Create at least two EVA disk groups (one for data files, one for archive/backup files).
• Create multiple LUNs from each disk group, as needed, for the database.
• Use VRAID1 for the database disk group (online files).
• Use VRAID1 or VRAID5 for the archive log file disk group (backup files).
• Place a greater number of high-performance disks into the first disk group, when possible, to improve database performance.
• If necessary, use lower-performance disks in the second disk group. Also, consider using fewer disks.
• Balance the LUNs across EVA controllers.

Best practices for Server Administrators
• Use multiple HBAs for path availability and optimal performance.
• Manage host access to the virtual disk by way of paths to the master controller where possible.
• Create OCR and voting disks on logical drives (Microsoft Windows).
• Create and stamp ASM disks on raw partitions.
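The stamping step can also be scripted with asmtool, the command-line companion to the asmtoolg GUI shown in Appendix B; the device path and label below are illustrative, not from the test environment:

```
REM Hypothetical raw partition and label - adjust to the actual environment
C:\> asmtool -add \Device\Harddisk2\Partition1 ORCLDISKDATA0
C:\> asmtool -list
```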

Best practices for Database Administrators
• Create at least two ASM disk groups.
• Use external redundancy, leaving the RAID functionality to the storage array.
• Place the online files into the first disk group.
• Place backup files and the Flashback Area into the second disk group.
– If using the Flashback Area, be aware that a mirror copy of the online redo logs is placed in this area by default.
• Present multiple LUNs, as needed, to each of the ASM disk groups.
• Consider changing _asm_ausize to 8 MB for DSS-like workloads.
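The last two items can be sketched as follows; the disk group name and disk label are illustrative. Note that the allocation unit size is fixed when the disk group is created, so _asm_ausize must be set in the ASM instance beforehand:

```sql
-- In the ASM instance parameter file (illustrative):
--   _asm_ausize=8388608        -- 8-MB allocation units, value in bytes
-- Then create the disk group with external redundancy, leaving
-- VRAID protection to the EVA:
CREATE DISKGROUP data EXTERNAL REDUNDANCY
  DISK '\\.\ORCLDISKDATA0';
```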


Appendix A. Bill of Materials

Item                                               Quantity   Version
Storage
EVA8000 (2C12D)                                    2          V5.100
HP StorageWorks 146-GB 15K FC HDD                  168        HP01
Servers
HP ProLiant DL580 server                           2          G3
– HP 36-GB 15K U320 Pluggable Hard Drive           2
– Intel Xeon 3.00-GHz Processor                    4
– HP DL580R03 Memory Expansion Board               2
– HP 4-GB REG PC2-3200 2x2-GB DDR Memory           8
– Emulex LP1050DC HBA                              4          7.1.11x1 / 1.91a1
HP ProLiant DL580 server                           8          G2
– HP 36-GB 15K U320 Pluggable Hard Drive           2
– Intel Xeon MP 2.85-GHz/2-MB Processor            4
– HP Hot Plug Memory Expansion Board               2
– HP 4096-MB PC1600 Reg SDRAM Memory               8
– Emulex LP1050DC HBA                              4          5.1.11x1 / 1.91a1
HP ProLiant BL20p server                           2          G3
– Intel Xeon 3.06-GHz Processor                    2
– HP 72-GB 15K U320 Pluggable Hard Drive           1
– QLogic QLA2312 HBA                               2          9.0.2.16 / 1.45
SAN Infrastructure
Brocade SilkWorm 3800 SAN switches                 8          V3.2.1
Software
Oracle 10g Enterprise Edition                                 10.2.0.2
Microsoft Windows 2003 Enterprise Edition                     RC2, SP1, SP2 beta
Quest Software Benchmark Factory for Databases                5.0 beta 4
HP MPIO Full Featured DSM for EVA Disk Arrays                 2.00.01
HP MPIO DSM Manager                                           1.02.00
HP ProLiant Support Pack                                      7.51
HP StorageWorks Command View EVA                              5.1


Appendix B. Examples

This section shows screenshots from the Oracle asmtoolg utility, used to stamp a volume for ASM. The tool is launched from the command line:

C:\>asmtoolg

Figure 23. asmtoolg


For more information

HP technical references
• Configuring Oracle ASM hidden parameters for EVA8000
• HP StorageWorks Command View EVA installation guide
• HP StorageWorks Command View EVA user guide
• HP StorageWorks Enterprise Virtual Array configuration best practices — white paper
• HP StorageWorks SAN design reference guide

HP solutions and training
• Customer Focused Testing
• HP & Oracle alliance
• Network Storage Services
• HP Storage and SAN Education Courses

HP product sites
• Enterprise Class Storage Portfolio
• HP StorageWorks Command View EVA
• HP ProLiant DL Servers
• HP BladeSystem
• B-Series SAN Switches
• Multi-Path Options for HP Arrays
• Fibre Channel Host Bus Adapters

Oracle
• Oracle® Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (x64)
• Oracle® Database Installation Guide 10g Release 2 (10.2) for Microsoft Windows (32-Bit)
• Oracle® Database Oracle Clusterware and Oracle Real Application Clusters Installation Guide 10g Release 2 (10.2) for Microsoft Windows
• Oracle® Database 10g Release 2 Automatic Storage Management Overview and Technical Best Practices

Quest Software
• Benchmark Factory® for Databases (Database Performance and Scalability Testing)

© 2006 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.

Intel and Xeon are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Microsoft and Windows are U.S. registered trademarks of Microsoft Corporation.

4AA0-9728ENW, December 2006