Oracle Best Practices on Compellent Storage Center
Dell | Compellent Technical Best Practices
© 2011 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without
the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.
Trademarks used in this text: DellTM, the DELLTM logo, and CompellentTM are trademarks of Dell Inc.
Other trademarks and trade names may be used in this document to refer to either the entities
claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in
trademarks and trade names other than its own.
March 2011 Rev. A
Contents
Introduction ............................................................................................................... 2
Customer Support ...................................................................................................... 2
General Syntax .......................................................................................................... 3
Document Revision ..................................................................................................... 3
Audience ................................................................................................................. 3
Overview of Compellent Storage Center ............................................................................ 4
Data Instant Replay (DIR) ............................................................................................. 4
Data Progression ........................................................................................................ 4
Dynamic Capacity (Thin Provisioning) ............................................................................... 4
Consistency Group ...................................................................................................... 5
Storage Setup and Configuration...................................................................................... 6
Disk Drives & RAID Recommendations .............................................................................. 6
RAID levels and Data Progression for Oracle databases .......................................................... 7
Recommended Disk Configuration Layouts ......................................................................... 8
ASM Disk Group Devices ............................................................................................... 11
Filesystem ................................................................................................................ 11
Database Setup and Configuration .................................................................................. 12
Putting it all Together ................................................................................................. 12
Using Compellent Data Progression and Data Instant Replay Features ..................................... 13
Table 4A. SSD & Data Progression & Data Instant Replay ..................................................... 14
Table 4B. Without SSD & Data Progression & Data Instant Replay ........................................... 14
Oracle RAC Tested Configuration ................................................................................... 16
Network Setup ........................................................................................................ 16
Network Configuration ............................................................................................... 16
Conclusion ............................................................................................................... 17
Appendix 1: Example of ASM installation on Linux ............................................................ 18
Appendix 2: 11g R2 with Multipath Setting....................................................................... 19
Introduction
This white paper describes the best practices for running Oracle Databases (single instance or RAC) on
a Compellent Storage Center.
Oracle performance tuning is beyond the scope of this paper. Please visit www.oracle.com for the
Oracle Database Performance Tuning Guide for more in-depth information on tuning your database.
Table 1. Benefits of Oracle on Compellent Storage Center
Benefit | Details
Lower total cost of ownership (TCO) | Reduces acquisition, administration, and maintenance costs
Greater manageability | Ease of use, implementation, provisioning, and management
Simplified RAC implementation | Provides shared storage (raw or filesystems)
High availability and scalability | Clustering provides higher levels of data availability and the combined processing power of multiple servers for greater throughput and scalability
Compellent Information Lifecycle Management (ILM) benefits | Provides tiered storage, dynamic capacity, data progression, thin provisioning, instant replay (snapshot) and more
Customer Support
Compellent provides live support at 1-866-EZSTORE (866.397.8673), 24 hours a day, 7 days a week, 365
days a year. For additional support, email Compellent at [email protected]. Compellent
responds to emails during normal business hours.
General Syntax
Table 1. Conventions
Item | Convention
Menu items, dialog box titles, field names, keys | Bold
Mouse click required | Click
User input | Monospace font
User typing required | Type:
System response to commands | Blue
Output omitted for brevity | <…snipped…>
Website addresses | http://www.dell.com
Email addresses | [email protected]
Document Revision
Table 2. Revision History
Date | Revision | Description
5/26/2011 | A | Initial draft
Audience
This paper is intended for database administrators, system administrators, and storage administrators who need to understand how to configure Oracle databases on Dell Compellent Storage Center. Readers should be familiar with Compellent Storage Center and have prior experience in configuring and operating the following:
Oracle Database 10g and 11g
Real Application Clusters (RAC)
General understanding of SAN technologies
Automatic Storage Management (ASM)
Overview of Compellent Storage Center
Data Instant Replay (DIR)
Data Instant Replay is most often compared to "snapshot" technology. It provides the ability to create point-in-time copies of a volume: further changes to the volume are journaled in a way that allows the volume to be rolled back to its state at the moment the point-in-time copy was taken. These point-in-time copies can be mounted as volumes for partial data restores as well as full volume restores. Point-in-time copies have many other uses in an organization, but for the purposes of this paper we focus on backup and recovery.
Data Instant Replay is one of many features that Compellent offers. This feature gives you the ability to take a point-in-time snapshot of your volumes (filesystems). There is no limit on the number of Instant Replays that can be taken.
Data Progression
Data Progression is another feature that Compellent offers. This feature allows your least frequently used data to migrate to lower tier (cheaper SATA) disks, thereby saving space on the higher tier (more expensive FC/SAS) disks.
With the Data Progression feature enabled on the Compellent system, you do not have to worry about disk space being taken up by data that has not been accessed. You can change the number of days of inactivity on a volume that triggers Data Progression to move the data; the default is 12 days. Conversely, if the data has been accessed for a certain number of cycles, that data migrates back to higher tier disks for performance. This behavior can also be adjusted based on your preference.
Dynamic Capacity (Thin Provisioning)
With traditional storage systems, administrators must purchase, allocate and manage capacity upfront, speculating where to place storage resources and creating large, underutilized volumes with long term growth built in. This practice leaves the majority of disk space allocated yet unused, and only available to specific applications.
Compellent Dynamic Capacity eliminates the allocated but unused capacity that is an unfortunate by-product of traditional storage allocation methods.
Compellent’s Thin Provisioning, called Dynamic Capacity, delivers the highest storage utilization possible by eliminating allocated but unused capacity. Dynamic Capacity completely separates storage allocation from utilization, enabling users to create any size virtual volume upfront, yet only consume actual physical capacity when data is written by the application.
Consistency Group
The Compellent Consistency Group feature allows storage administrators to take a snapshot of an Oracle database atomically. When creating a snapshot of a running Oracle database using storage functionality, you must ensure that all storage volumes (LUNs) that make up your database are snapped atomically because of multiplexed control files and redo log files. Oracle writes to multiplexed control files and redo log files concurrently, so without a consistency group you cannot create a usable snapshot of a running database. Without a consistency group, in order to create a usable snapshot of an online database, the database must be configured with all control files in one volume and all redo log files in the same volume or another single volume; neither can be spread across volumes, whether a file system or Oracle ASM is used.
The Consistency Group feature gives you the ability to create a usable snapshot of an online database with control files and redo log files spread across volumes, which safeguards against a single point of failure. You can also create a restartable copy of an Oracle database with Consistency Group without having to put the database in hot backup mode. This scenario is similar to a power outage on the database server: at restart, Oracle performs crash recovery, rolling forward any changes that did not make it to the data files and rolling back changes that had not committed. However, roll-forward recovery using archive logs to a point in time after the restartable copy is created is NOT supported.
When creating a volume on Compellent Storage Center, you guarantee data redundancy at the disk level whether or not the control files and redo log files are multiplexed at different locations. However, if the control files and the redo log files are not spread across mount points and the operating system cannot access the mount point that holds them for whatever reason, your Oracle database will stop functioning. If you spread those files across mount points without the Consistency Group feature, then you cannot create a functional snapshot of your database online.
A Consistency Group replay profile should be created for all respective volumes of a database.
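As a quick check before building a Consistency Group replay profile, the following SQL*Plus queries (a minimal sketch using standard Oracle dynamic views, not Compellent-specific tooling) list where the control files and online redo log members reside so you can confirm which volumes must be included in the group:
SQL> SELECT name FROM v$controlfile;
SQL> SELECT group#, member FROM v$logfile ORDER BY group#;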
Storage Setup and Configuration
Disk Drives & RAID Recommendations
Table 2. Disk Drives & RAID Recommendations
Description | RAID10 SSD | RAID10 FC/SAS 15K rpm | RAID10 FC/SAS 10K rpm | RAID5 FC/SAS 15K rpm | RAID5 FC/SAS 10K rpm | All RAID FC/SAS/SATA
Data files | OK (W) | Recommended (W) | OK (W) | DP | DP | DP
Control files | Recommended (W) | OK (W) | OK (W) | Not Required | Not Required | Not Required
Online Redo Logs | Recommended (W) | OK (W) | OK (W) | Avoid | Avoid | Avoid
Archived Redo Logs | Not Required | Recommended (W) | OK (W) | Not Required | Not Required | DP
Flashback Recovery Area | Not Required | Recommended (W) | OK (W) | Not Required | Not Required | DP
OCR files / Voting Disk | Not Required | Recommended (W) | OK (W) | Avoid | Avoid | Avoid
Abbreviations:
W = Writes
DP = Data Progression
If Fast Track is licensed, then it is enabled by default and will be utilized behind the scenes. No manual configuration is required.
Drives with higher RPM provide higher overall random-access throughput and shorter response times than drives with lower RPM.
Because of better performance, SAS or Fibre Channel drives with 15K rpm are always recommended for storing Oracle datafiles and online redo logs.
Serial ATA and lower cost Fibre Channel drives have slower rotational speeds and are therefore recommended for Oracle archived redo logs and the flashback recovery area with Data Progression.
RAID levels and Data Progression for Oracle databases
Solid State Disks (SSD)
Before implementing SSD on Compellent Storage Center for Oracle databases, you must determine whether the database in question warrants the high performance that SSD provides. Since the price per gigabyte of SSD is higher than that of SAS or Fibre Channel, you need to carefully evaluate your database's performance requirements.
Compellent recommends using SSD RAID10 for database online redo logs if the database is transactional and I/O constrained.
Data warehouse databases should not use SSD for online redo logs, as there will be no performance gain unless the whole database resides on SSD, which may be very costly depending on the size of the database.
If using SSD for online redo logs, do not configure Data Progression on this volume as you will not gain any space back due to the nature of online redo logs.
SAS or Fibre Channel Disks
Compellent highly recommends using RAID10 with 15K rpm disks for datafile volumes, online redo log volumes (if SSD is not available), and archived log volumes for all production databases.
For non-production systems, the use of RAID10 with 10K rpm disks is sufficient.
Whether you are using 15K rpm disks or 10K rpm disks, the same rules apply to Data Progression and Data Instant Replay.
Please refer to Tables 4A and 4B below, under the Using Compellent Data Progression and Data Instant Replay Features section, for more information on how to configure the settings appropriately.
Recommended Disk Configuration Layouts
Three recommended layouts are shown below: Oracle with cooked file systems, Oracle with SSD, and Oracle with Automatic Storage Management (ASM). In each layout, the Storage Profile keeps writeable data on RAID 10 and replay (snapshot) data on RAID 5/5, spread across Tier 1 (SSD or FC/SAS 15K), Tier 2 (FC/SAS), and Tier 3 (SATA) drives.
Oracle with cooked file systems:
o VOL1 (FC/SAS 15K) - frequently accessed data files, control files, online redo logs, and the System, Sysaux, Temp, and Undo tablespaces
o VOL2 (FC/SAS 15K) - other tablespaces (X, Y, Z), frequently accessed data files, multiplexed control files, and multiplexed online redo logs
o VOL3 (SATA) - archived redo logs, flashback logs, and RMAN backup sets
Oracle with SSD:
o VOL1 (SSD) - online redo logs and control files
o VOL2 and VOL3 (FC/SAS 15K) - multiplexed online redo logs and multiplexed control files
o VOL4 (FC/SAS 15K) - the System, Sysaux, Temp, and Undo tablespaces, other tablespaces (X, Y, Z), and frequently accessed data files
o VOL5 (SATA) - archived redo logs, flashback logs, and RMAN backup sets
Oracle with Automatic Storage Management (ASM):
o ASM DATA disk group on VOL1 and VOL2 (FC/SAS 15K) - frequently accessed data files, control files, and online redo logs
o ASM FRA disk group on VOL1 and VOL2 (SATA) - archived redo logs, flashback logs, multiplexed control files, multiplexed online redo log files, and RMAN backup sets
Note: If your Storage Center is on 4.x code, the consistency group feature is not available when using Data
Instant Replay for snapshots. In order to take a snapshot of your database while it is running, you need
to set up your Oracle database online redo log files and control files on the same volume; you cannot
multiplex your online redo logs and control files across volumes. If your Storage Center is on 5.x code,
there is no such restriction. Also note that you need to test to determine the number of LUNs required
for your database for an optimal configuration in terms of performance, since every operating system is
different.
Table 3. ASM Disk Group Layout
ASM Disk Group | Contents | RAW Device Support | ASMLib Support
+DATA | Oracle Datafiles | Yes | Yes
+FLASH | Flashback Recovery Area | Yes | Yes
ASM Disk Group Devices
Compellent Storage Center supports RAW devices with or without the ASMLib service.
Oracle recommends that generally no more than two disk groups be maintained and managed per RAC cluster or single ASM instance.
Select External Redundancy when creating an ASM disk group.
When creating LUNs for an ASM disk group, make sure the LUNs share the same disk characteristics (e.g., FC 15K rpm or 10K rpm, and 146GB or 300GB). LUNs should not contain mixed-speed drives.
When creating multiple LUNs for an ASM disk group, create LUNs of the same size to avoid imbalance.
Create larger LUNs to reduce LUN management overhead and allow for future growth. Since the Compellent Storage Center provides thin provisioning, disk space is only consumed when data is written to it.
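As an illustrative sketch (the disk group name and ASMLib disk labels below are hypothetical), an ASM disk group with external redundancy can be created from ASMLib-labeled LUNs while connected to the ASM instance:
SQL> CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
     DISK 'ORCL:DATA1', 'ORCL:DATA2';
With external redundancy, ASM does not mirror the data; protection is provided by the RAID level of the underlying Compellent volumes.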
Filesystem
Compellent Storage Center supports raw devices and various filesystems, including:
o JFS & JFS2
o UFS
o VxFS
o EXT2, EXT3 & EXT4
o ReiserFS
o NTFS
o OCFS
o Etc.
Database Setup and Configuration
Direct I/O and Async I/O
Oracle recommends using direct I/O and async I/O. Direct I/O bypasses the filesystem buffer cache, thereby reducing CPU overhead on your server.
Direct I/O is very beneficial to Oracle's log writer, both in terms of throughput and latency. Async I/O is beneficial for datafile I/O.
When mounting filesystems for your Oracle database, make sure you understand how your operating system supports direct I/O. Some operating systems require certain mount options to enable direct I/O or async I/O.
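For example (a hedged sketch; the devices and mount points are hypothetical, and the exact options depend on your platform and filesystem version), Solaris UFS and AIX JFS2 expose direct I/O through mount options, while on Linux ext3 Oracle opens files with O_DIRECT once the filesystemio_options parameter is set:
## Solaris UFS: force direct I/O on the datafile filesystem
# mount -o forcedirectio /dev/dsk/c1t1d0s0 /u02/oradata
## AIX JFS2: enable direct I/O (cio enables concurrent I/O)
# mount -o dio /dev/oradatalv /u02/oradata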
Putting it all Together
When provisioning storage for your Oracle database server, you need to take the following into consideration before deciding how to set up and configure the Compellent Storage Center:
o Operating System Types (UNIX, Linux, Windows, etc.)
o File System Types (NTFS, VxFS, JFS, Ext3, UFS, etc.)
o Number of Oracle databases per server
o Database Types (OLTP, Data Warehouse, Reporting, etc.)
o Database Usage (Heavy, Medium, Light)
o Archived Log Mode or No Archived Log Mode
o Database Block Size
For a small-workload OLTP database, you would create one volume dedicated to your data files, one volume dedicated to your online redo logs, and one volume dedicated to your archived redo logs. Of course, you also need to create other volumes for your Oracle software and any non-Oracle data. Refer to Table 2 of this document to decide which RAID level to use for each volume.
When creating volumes for your Oracle server, you do not have to use any software striping at the operating system level. The data in a Compellent volume is automatically striped according to the RAID level you have selected for the volume. However, you should create multiple LUNs for your larger workload databases and use an operating system striping mechanism (e.g., LVM, VxVM) to get better performance from multiple disk queues at the OS level, as shown in the sketch below.
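A minimal LVM striping sketch on Linux (the multipath device names, volume group name, stripe count, and stripe size are hypothetical and should match the number of LUNs presented to the server):
## Initialize the Compellent LUNs (multipath devices) as physical volumes
# pvcreate /dev/mapper/oradata1 /dev/mapper/oradata2 /dev/mapper/oradata3 /dev/mapper/oradata4
## Group them into one volume group
# vgcreate vg_oradata /dev/mapper/oradata1 /dev/mapper/oradata2 /dev/mapper/oradata3 /dev/mapper/oradata4
## Create a logical volume striped across all four LUNs with a 64KB stripe size
# lvcreate -n lv_oradata -i 4 -I 64 -l 100%FREE vg_oradata
## Create a filesystem and mount it for the Oracle data files
# mkfs -t ext3 /dev/vg_oradata/lv_oradata
# mount /dev/vg_oradata/lv_oradata /u02/oradata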
For a more complex database, you might have to create multiple volumes dedicated to your data files, assuming you separate your data tablespaces from your index tablespaces.
Again, when creating these volumes, refer to your operating system manual on how to mount them with direct I/O and async I/O options; this will benefit your database performance. Also remember to set the filesystemio_options initialization parameter to SETALL or DIRECTIO.
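A minimal sketch of setting the parameter from SQL*Plus (the parameter is not dynamic, so it is written to the spfile and takes effect after an instance restart; use of an spfile is an assumption):
SQL> ALTER SYSTEM SET filesystemio_options = SETALL SCOPE=SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP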
When creating volumes for your databases, it is recommended that you initially create the volumes (datafiles, archived redo logs, flash recovery area) larger than needed to allow for future database growth. Since the Compellent Storage Center dynamically provisions storage, disk space is not taken up until actual data has been written to it. This way you can create your tablespaces with the AUTOEXTEND parameter so you do not have to worry about running out of disk space in that volume.
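For example (a sketch; the tablespace name, datafile path, and sizes are hypothetical), a tablespace created on a thin-provisioned volume can safely use AUTOEXTEND:
SQL> CREATE TABLESPACE app_data
     DATAFILE '/u02/oradata/orcl/app_data01.dbf' SIZE 10G
     AUTOEXTEND ON NEXT 1G MAXSIZE UNLIMITED;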
Compellent Data Instant Replay is perfect for Oracle backup and recovery. With Data Instant Replay there is no limit on the number of replays (snapshots) taken on the Compellent Storage Center. To perform an online backup (your database must run in archive log mode in order to take an online backup), put your database in online backup mode, take a replay of your data file volumes, redo log volumes, and archived log volumes, and then end the online backup. You can then mount these replays on your backup server and send the data to tape, or simply leave the replays in place and not expire them until you have copied them to tape for offsite storage. If you use export and import frequently, you should create a separate volume dedicated to your export dump files.
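The database side of that sequence can be sketched as follows (the replay itself is taken from Storage Center between the BEGIN and END commands; this is an outline, not Compellent replay syntax):
SQL> ALTER DATABASE BEGIN BACKUP;
     -- Take a Data Instant Replay (Consistency Group) of the datafile,
     -- redo log, and archived log volumes from Storage Center here.
SQL> ALTER DATABASE END BACKUP;
SQL> ALTER SYSTEM ARCHIVE LOG CURRENT;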
* For more information on Oracle backup and recovery, please refer to the document Oracle Backup & Recovery for Compellent Storage Center.
Using Compellent Data Progression and Data Instant Replay Features
In order to utilize Data Progression effectively, you need to use Data Instant Replay whether or not backup and recovery is required. For Data Progression to work effectively, a replay (snapshot) should be taken at least once a week on all volumes. Below are the recommended Data Progression settings for your Oracle database volumes:
There are two different configurations: one with SSD and one without.
In an SSD configuration, SSD becomes Tier 1, 15K and 10K rpm disks become Tier 2, and SATA is Tier 3.
Table 4A. SSD & Data Progression & Data Instant Replay
Tier | Writeable data (RAID 10) | Replay data (RAID 5/5)
Tier 1 - SSD | Online Redo Logs / Control Files | -
Tier 2 - 15K/10K FC/SAS | Datafiles / Archived Logs / FRA / OCR & VOTE | Data Files
Tier 3 - SATA | - | Archived Logs / FRA
In a non-SSD configuration, 15K rpm disks are in Tier 1, 10K rpm disks become Tier 2, and SATA is still Tier 3.
Table 4B. Without SSD & Data Progression & Data Instant Replay
Tier | Writeable data (RAID 10) | Replay data (RAID 5/5)
Tier 1 - 15K FC/SAS | Datafiles / Archived Logs / FRA / OCR & VOTE | Data Files
Tier 2 - 10K FC/SAS | - | Data Files
Tier 3 - SATA | - | Archived Logs / FRA
Based on the above two tables, for optimal database performance you should create volumes with the following settings:
For SSD:
o Datafile volumes: Refer to the Recommended Disk Configuration Layouts.
o Online redo log volumes: Refer to the Recommended Disk Configuration Layouts.
o Archived redo log volumes: Refer to the Recommended Disk Configuration Layouts.
o Flash recovery area: Refer to the Recommended Disk Configuration Layouts.
o OCR & voting disk: Create a new Storage Profile, select RAID10 only in the RAID Levels Used section, select Tier 2 in the Storage Tiers Used section, and apply the profile to the volume.
For Non-SSD:
o Datafile volumes: Refer to the Recommended Disk Configuration Layouts.
o Online redo log volumes: Refer to the Recommended Disk Configuration Layouts.
o Archived redo log volumes: Refer to the Recommended Disk Configuration Layouts.
o Flash recovery area: Refer to the Recommended Disk Configuration Layouts.
o OCR & voting disk: Create a new Storage Profile, select RAID10 only in the RAID Levels Used section, select Tier 1 in the Storage Tiers Used section, and apply the profile to the volume.
Oracle RAC Tested Configuration
One configuration was tested, consisting of a two-node cluster running Oracle RAC 11g Release 2 with ASM on Oracle Enterprise Linux.
The tested topology: two Oracle RAC nodes (Node 1 and Node 2), each with Fibre Channel connections through a Cisco MDS9124 FC switch to the Compellent Storage Center, a private IP network for the RAC interconnect through a dedicated interconnect switch, and a public IP network for clients through a Cisco Catalyst 2970 network switch.
Network Setup
Table 5. Network Setup
VLAN ID or Separate Switches | Description | CRS Setting
1 or switch A | Client Network | Public
2 or switch B | RAC Interconnect | Private
Network Configuration
If possible, configure jumbo frames for the private network. Note: when configuring jumbo frames, you need to configure them on all legs of the RAC interconnect network (i.e. servers, switches, etc.).
If configuring jumbo frames is not possible, you need to configure the interconnect network with at least a 1 Gbps link.
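A minimal sketch for a Linux interconnect interface (the interface name eth1, the address, and the use of RHEL-style ifcfg files are assumptions; the switch ports on the interconnect must also be configured for a 9000-byte MTU):
## /etc/sysconfig/network-scripts/ifcfg-eth1 (RAC interconnect interface)
DEVICE=eth1
BOOTPROTO=static
IPADDR=192.168.10.1
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes
## Restart the interface and verify the MTU
# ifdown eth1 && ifup eth1
# ip link show eth1 | grep mtu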
Conclusion
Compellent SAN technology combined with Oracle software makes for a very attractive storage
platform for Oracle databases. Running Oracle Database (single instance or RAC) on Compellent
Storage Center provides strong availability, scalability, manageability, and performance for your
database applications.
Appendix 1: Example of ASM installation on Linux
Step 1: Make sure the following RPMs are installed.
# rpm -qa | grep asm
oracleasmlib-2.0.4-1.el5
oracleasm-2.6.18-194.0.0.0.3.el5-2.0.5-1.el5
oracleasm-support-2.1.3-1.el5
Step 2: Configure oracleasm.
# cd /etc/init.d
# ./oracleasm configure
Configuring the Oracle ASM library driver.
This will configure the on-boot properties of the Oracle ASM library
driver. The following questions will determine whether the driver is
loaded on boot and what permissions it will have. The current values
will be shown in brackets ('[]'). Hitting <ENTER> without typing an
answer will keep that current value. Ctrl-C will abort.
Default user to own the driver interface [grid]:
Default group to own the driver interface [oinstall]:
Start Oracle ASM library driver on boot (y/n) [y]:
Scan for Oracle ASM disks on boot (y/n) [y]:
Writing Oracle ASM library driver configuration: done
Initializing the Oracle ASMLib driver: [ OK ]
Scanning the system for Oracle ASMLib disks:
[ OK ]
Step 3: Modify /etc/sysconfig/oracleasm for multipathing
Change ORACLEASM_SCANORDER to "dm" and ORACLEASM_SCANEXCLUDE to "sd", i.e.:
# ORACLEASM_SCANORDER: Matching patterns to order disk scanning
ORACLEASM_SCANORDER="dm"
# ORACLEASM_SCANEXCLUDE: Matching patterns to exclude disks from scan
ORACLEASM_SCANEXCLUDE="sd"
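After saving the file, the ASMLib driver can be restarted so the new scan order takes effect (a sketch; on a cluster node, do this while the ASM instance is down):
# cd /etc/init.d
## Restart ASMLib so the multipath scan order is used
# ./oracleasm restart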
Step 4: In Compellent Storage Center, create two new volumes for the ASM diskgroups DATA and FRA
and map to server.
Step 5: Scan new LUNs and configure multipathing.
# vi /etc/multipath.conf
Add entries for the new volumes to the multipaths section:
multipaths {
        multipath {
                wwid 36000d3100003d0000000000000001f10
                alias asm_data
        }
        multipath {
                wwid 36000d3100003d0000000000000001f11
                alias asm_fra
        }
}
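The LUN scan itself can be done without a reboot; a minimal sketch (FC HBA host numbers vary by server, so host1 and host2 are assumptions):
## Rescan each Fibre Channel HBA for the newly mapped LUNs
# echo "- - -" > /sys/class/scsi_host/host1/scan
# echo "- - -" > /sys/class/scsi_host/host2/scan
## Reload the multipath maps and verify the new aliases
# multipath -r
# multipath -ll | grep asm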
Step 6: Label ASM disks via oracleasm
# cd /etc/init.d
# ./oracleasm createdisk DATA /dev/mapper/asm_data
Marking disk "DATA" as an ASM disk: [ OK ]
# ./oracleasm createdisk FRA /dev/mapper/asm_fra
Marking disk "FRA" as an ASM disk: [ OK ]
Step 7: Proceed with the 10g or 11g Oracle software installation. Please refer to the installation
documentation at www.oracle.com/technetwork/indexes/documentation/index.html
Appendix 2: 11g R2 with Multipath Setting
#############################################
## ##
## Oracle 11gR2 RAC with Multipath Setting ##
## ##
#############################################
RAC NODES - Eldorado & Lynx
#############################
## /etc/modprobe.conf file ##
#############################
- Eldorado
[root@eldorado ~]# cat /etc/modprobe.conf
alias scsi_hostadapter ata_piix
alias scsi_hostadapter1 qla2xxx
alias eth0 bnx2
alias eth1 bnx2
options qla2xxx qlport_down_retry=5
- Lynx
[root@lynx ~]# cat /etc/modprobe.conf
alias scsi_hostadapter ata_piix
alias scsi_hostadapter1 qla2xxx
alias eth0 bnx2
alias eth1 bnx2
options qla2xxx qlport_down_retry=5
############################################################################
## After changing the modprobe.conf file, you need to recreate the initrd ##
############################################################################
- cd /boot
- cp -p initrd-2.6.18-194.0.0.0.3.el5.img initrd-2.6.18-194.0.0.0.3.el5.img.bak
- mkinitrd -f initrd-$(uname -r).img $(uname -r)
- reboot
## Verify
[root@eldorado ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
5
[root@lynx ~]# cat /sys/module/qla2xxx/parameters/qlport_down_retry
5
##############################
## /etc/multipath.conf file ##
##############################
- Eldorado
[root@eldorado ~]# cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout /bin/true
path_checker readsector0
rr_min_io 100
max_fds 8192
rr_weight priorities
failback immediate
no_path_retry queue
user_friendly_names yes
}
blacklist {
wwid 26353900f02796769
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
}
multipaths {
multipath {
wwid 36000d310000069000000000000000df8
alias grid
}
multipath {
wwid 36000d310000069000000000000000df7
alias ocr
}
}
- Lynx
[root@lynx ~]# cat /etc/multipath.conf
# This is a basic configuration file with some examples, for device mapper
# multipath.
# For a complete list of the default configuration values, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.defaults
# For a list of configuration options with descriptions, see
# /usr/share/doc/device-mapper-multipath-0.4.7/multipath.conf.annotated
defaults {
udev_dir /dev
polling_interval 10
selector "round-robin 0"
path_grouping_policy multibus
getuid_callout "/sbin/scsi_id -g -u -s /block/%n"
prio_callout /bin/true
path_checker readsector0
rr_min_io 100
max_fds 8192
rr_weight priorities
failback immediate
no_path_retry queue
user_friendly_names yes
}
blacklist {
wwid 26353900f02796769
devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
devnode "^hd[a-z]"
}
multipaths {
multipath {
wwid 36000d310000069000000000000000df9
alias grid
}
multipath {
wwid 36000d310000069000000000000000df7
alias ocr
}
}
For more information, please refer to the Dell Compellent Linux Best Practices document.