DESCRIPTION
11gR2 RAC/Grid Clusterware: Best Practices, Pitfalls, and Lessons Learned. Presented during the DOUG meeting held on 10/21/2010 at Dallas, TX, by Ramkumar Rajagopal. Introduction: DBARAC is a specialty database consulting firm based in Austin, Texas, with expertise in a variety of industries.
11gR2 RAC/Grid Clusterware: Best Practices, Pitfalls, and Lessons Learned
Presented during the DOUG meeting held on 10/21/2010 at Dallas, TX
Ramkumar Rajagopal
Introduction
DBARAC is a specialty database consulting firm based in Austin, Texas, with expertise in a variety of industries. Our people are experts in Oracle Real Application Clusters (RAC) database solutions for managing large database systems. We provide proactive database management services including, but not limited to, in-house and on-shore DBA support, remote DB support, database maintenance, and backup and recovery. Our DBA experts provide specialized services in the areas of:

• Root cause analysis
• Capacity planning
• Performance tuning
• Database migration and consolidation
• Broad industry expertise
• High-availability RAC database specialists
• End-to-end database support
11GR2 Grid clusterware
Introduction: Presenter
• Senior Database Consultant, DBARAC
• Oracle Database/Applications DBA since 1995
• Dell, JP Morgan Chase, Verizon
• Presenter @ Oracle OpenWorld 2007
• Author, Dell Power Solutions articles
AGENDA
• Introduction
• Node eviction issue in 10g
• What is “11gR2 Grid Clusterware”?
• The Challenges
• What’s different today?
• “We’ve seen this before, smart guy…”
• Architecture and Capacity Planning
• Upgrade Paths
• Pre-installation best practices
• Grid Clusterware Installation
• Clusterware Startup sequence
• Post-install steps
• RAC Database build steps
• Summary
• Q&A
Why is a node evicted?
• Split-brain condition
• I/O fencing
• CRS keeps the lowest-numbered node up
• Node eviction detection
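The “lowest-numbered node survives” rule can be sketched as a toy model. This is a simplified illustration of the tie-breaking idea only, not Oracle’s actual CSS algorithm, which also weighs sub-cluster size and voting-disk access:

```python
# Toy model of split-brain resolution: when the interconnect partitions the
# cluster, the sub-cluster containing the lowest-numbered node survives and
# the remaining nodes are evicted (I/O fenced). Simplified illustration only.

def surviving_subcluster(partitions):
    """Given sub-clusters (lists of node numbers), return the survivor:
    the sub-cluster holding the lowest node number."""
    return min(partitions, key=lambda sub: min(sub))

def evicted_nodes(partitions):
    """Nodes in every sub-cluster other than the survivor."""
    survivor = surviving_subcluster(partitions)
    return sorted(n for sub in partitions if sub is not survivor for n in sub)

# A 4-node cluster splits into {3, 4} and {1, 2}:
print(surviving_subcluster([[3, 4], [1, 2]]))  # -> [1, 2]
print(evicted_nodes([[3, 4], [1, 2]]))         # -> [3, 4]
```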
Root Causes of Node Eviction
• Network heartbeat lost
• Voting disk problems
• cssd is not healthy
• oprocd
• Hangcheck timer
• cssd and oclsomon race to suicide
11GR2 Grid Clusterware Improvements
• Node eviction algorithm is enhanced
• Prevent a split-brain problem without rebooting the node
• Oracle High Availability Services Daemon
• Will still reboot in some cases
• Faster relocation of services on node failure in 11GR2
9i/10g RAC Scenario
• Several separate versions of databases
• Several servers
• Space/resource issues
• Fewer resources
• Provisioning takes time
Top concerns
• How many are using 11gR2 Grid Clusterware?
• Do you have more than one mission-critical database within a single RAC cluster?
• Can you allocate resources dynamically to handle peak volumes of various application loads without downtime?
• Issues with using shared infrastructure
• Will my database availability and recovery suffer?
• Will my database performance suffer?
• How do you manage a large clustered environment to meet SLAs for several applications?
Why 11GR2 Grid CRS?
11GR2 Grid Clusterware is…
• An Architecture
• An IT Strategy
  – Clusterware & ASM storage deployed together
  – Many, many Oracle Database instances
  – Drives consolidation
Challenges
• Skilled resources
• Meeting SLAs
• End-to-end testing not possible
• Security Controls
• Capacity issues
• Higher short-term costs
What’s different today?
• 11gR2 Grid CRS & ASM supports 11gR2, 11gR1, 10gR1, and 10gR2 single instances
• Powerful servers, 64-bit OS
• Provisioning framework to deploy
• Grid Control
11GR2 RAC DB Architecture Planning
Capacity Planning
• What are the current requirements?
• What are the future growth requirements over the next 6-12 months?
• Estimate the hardware requirements needed to meet that demand
• Data retention requirements
• Archiving and purging
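As a first pass at the growth estimate, a simple compound-growth projection is often enough. A back-of-the-envelope sketch (all figures are hypothetical examples, not recommendations):

```python
# Back-of-the-envelope capacity projection for the 6-12 month horizon.
# All numbers below are hypothetical examples.

def projected_size_gb(current_gb, monthly_growth_rate, months):
    """Compound monthly growth: size * (1 + r) ** months."""
    return current_gb * (1 + monthly_growth_rate) ** months

current = 2048   # current database size in GB (example)
rate = 0.05      # 5% growth per month (example)

need_12mo = projected_size_gb(current, rate, 12)
print(round(need_12mo))        # projected size after 12 months, in GB
# Add headroom (e.g. 30%) for backups, FRA, and unexpected spikes:
print(round(need_12mo * 1.3))
```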
Capacity Planning metrics
• Database metrics for capacity planning:
  • CPU & memory utilization
  • I/O rates
  • Device utilization
  • Queue length
  • Storage utilization
  • Response time
  • Transaction rate
  • Network packet loss
  • Network bandwidth utilization
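Several of these metrics are tied together by Little's law: average queue length equals arrival rate times response time. A small sketch with illustrative numbers only:

```python
# Little's law relates three of the metrics above:
#   average queue length L = arrival rate (lambda) * response time (W).
# Utilization follows the same shape with service time instead of
# response time. Illustrative numbers only.

def queue_length(arrival_rate_per_sec, response_time_sec):
    """Average number of requests in flight (Little's law)."""
    return arrival_rate_per_sec * response_time_sec

def utilization(arrival_rate_per_sec, service_time_sec):
    """Device utilization rho = lambda * service time (must stay < 1)."""
    return arrival_rate_per_sec * service_time_sec

# 200 transactions/sec with a 25 ms average response time:
print(queue_length(200, 0.025))   # about 5 requests in flight
# An I/O device serving 150 IOPS at 5 ms per I/O:
print(utilization(150, 0.005))    # about 0.75, i.e. 75% busy
```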
Capacity Planning Strategy
• Examine existing engagement processes
• Examine existing capacity of servers/storage
• Define Hardware/database scalability
• Provisioning for adding capacity
• Integration testing
• Large clustered database
• SLA requirements
Comparison – 10g vs 11GR2
• Server consolidation
• Database consolidation
• Instance consolidation
• Storage consolidation
AGENDA so far…
• Introduction
• Node eviction issue in 10g
• What is “11gR2 Grid Clusterware”?
• The Challenges
• What’s different today?
• “We’ve seen this before, smart guy…”
• Architecture and Capacity Planning
• Upgrade Paths
• Pre-installation best practices
• Grid Clusterware Installation
• Clusterware Startup sequence
• Post-install steps
• RAC Database build steps
• Summary
• Q&A
Upgrade Paths
• Out-of-place clusterware upgrade
• Rolling Upgrade
• Oracle 10gR2 - from 10.2.0.3
• Oracle 11gR1 - from 11.1.0.6
Pre-installation best practices
• Network Requirements
• Cluster Hardware Requirements
• ASM Storage Requirements
• Verification Checks
Pre-Installation Best Practices: Network Configuration
• SCAN - Single Client Access Name
  • Failover - faster relocation of services
  • Better load balancing
• MTU packet size of the network adapter (NIC)
• Forwarder, zone entries, and reverse lookup
• Ping tests
• Two dedicated interconnect switches for redundant interconnects
• Run cluvfy
Pre install - Network - SCAN Configuration
Pre-Install Network: SCAN VIP Troubleshooting
• SCAN configuration:
  $GRID_HOME/bin/srvctl config scan
• SCAN listener configuration:
  $GRID_HOME/bin/srvctl config scan_listener
• SCAN listener resource status:
  $GRID_HOME/bin/crsctl stat res -w "TYPE = ora.scan_listener.type"
• listener.ora in $GRID_HOME
• Local and remote listener parameters
Pre-Install: Cluster Hardware Requirements
• OS/kernel same on all servers in the cluster
• Minimum 32 GB of RAM
• Minimum swap space: 16 GB
• Minimum Grid home free space: 16 GB
• For each Oracle home directory, allocate 32 GB of space (32 GB per database)
• Allocate adequate disk space for centralized backups
• Allocate adequate storage for ASM diskgroups - DATA and FRA
Cluster Hardware requirements continued…
• In most cases, use UDP over 1 Gigabit Ethernet
• For large databases: InfiniBand/IP or 10 Gigabit Ethernet
• Use OS bonding/teaming to “virtualize” the interconnect
• Set UDP send/receive buffers high enough
• Crossover cables are not supported
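For the UDP send/receive buffer sizing, the values below are commonly cited Linux starting points for 11gR2. This is an example /etc/sysctl.conf fragment; verify the exact minimums against the Oracle Grid Infrastructure installation guide for your platform before applying.

```conf
# Example socket buffer settings for the RAC interconnect (Linux).
# Treat as a starting point, not a tuned configuration.
net.core.rmem_default = 262144
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
```

Reload with `sysctl -p` and confirm with `sysctl net.core.rmem_max`.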
Pre-Install: ASM Storage Configuration
• In 11gR2, ASM diskgroups are used for:
  • Grid infrastructure - OCR, voting disks, and the ASM spfile
  • Database - DATA and FRA
• OCR and voting disks for Grid clusterware
  • OCR can now be stored in Automatic Storage Management (ASM)
• Add a second diskgroup for OCR using:
  ./ocrconfig -add +DATA02
• Change the compatibility attributes of the new diskgroup to 11.2 as follows:
  ALTER DISKGROUP DATA02 SET ATTRIBUTE 'compatible.asm' = '11.2';
  ALTER DISKGROUP DATA02 SET ATTRIBUTE 'compatible.rdbms' = '11.2';
AGENDA so far…
• Introduction
• Node eviction issue in 10g
• What is “11gR2 Grid Clusterware”?
• The Challenges
• What’s different today?
• “We’ve seen this before, smart guy…”
• Architecture and Capacity Planning
• Upgrade Paths
• Pre-installation best practices
• Grid Clusterware Installation
• Clusterware Startup sequence
• Post-install steps
• RAC Database build steps
• Q&A
Hardware/Software details
• 10gR2 architecture
  – 9 database servers, 25 TB storage
  – Original database version: 10.2.0.5
  – Original RAC cluster version: 10.2.0.1
  – Original operating system: Red Hat Linux 5 AS 64-bit
  – Storage type: ASM & raw storage
• 11gR2 Grid architecture
  – 4 database servers, 40 TB storage
  – New database version: 11.2.0.2
  – New Grid Clusterware/ASM version: 11.2.0.2
  – New operating system: Red Hat Linux 5 AS 64-bit
  – Data migration steps using RMAN backup and restore and Data Pump export dump files
11GR2 Migration Steps
• Install 11gR2 Grid clusterware and ASM
• Install 11gR2 database binaries for each database separately
• Create the 11gR2 database
• Add additional ASM diskgroups
• Install 11gR1/10gR2 database binaries
• Create 11gR1/10gR2 databases
• Take backup
• Restore the data
Pre-verification checks - cluvfy

• Before clusterware installation:
  ./cluvfy stage -pre crsinst -n node1,node2,node3 -verbose
• Before database installation:
  ./cluvfy stage -pre dbinst -n node1,node2,node3 -fixup -verbose
11gR2 Grid Clusterware Installation – Step 1
Step-2
Step-3
Step-4
Step 5
Step 6
Step 7
Step 8
Step 8 cont..
Step 9
Step 9 cont...
Step 10
Step 11
Step 11 cont..
Step 12
Step 12
Step 13
Step 14
Runfixup.sh
root> /tmp/runfixup.sh
Response file being used is: /tmp/CVU_11.2.0.1.0_grid/fixup.response
Enable file being used is: /tmp/CVU_11.2.0.1.0_grid/fixup.enable
Log file location: /tmp/CVU_11.2.0.1.0_grid/orarun.log
Setting Kernel Parameters...
fs.file-max = 327679
fs.file-max = 6815744
net.ipv4.ip_local_port_range = 9000 65500
net.core.wmem_max = 262144
net.core.wmem_max = 1048576
uid=501(grid) gid=502(oinstall) groups=502(oinstall),503(asmadmin),504(asmdba)
Step 15
Step 16
Step 16 cont.…
./orainstRoot.sh
cd /home/oracle/oraInventory
[root@oradb-grid1 oraInventory]# ./orainstRoot.sh
Changing permissions of /home/oracle/oraInventory.
Adding read, write permissions for group.
Removing read,write,execute permissions for world.
Changing groupname of /home/oracle/oraInventory to oinstall.
The execution of the script is complete.
[root@oradb-grid1 oraInventory]# cd /u01/app/oracle
[root@oradb-grid1 oracle]# ls
product scripts
[root@oradb-grid1 oracle]# cd product
[root@oradb-grid1 product]# ls
11.2.0
[root@oradb-grid1 product]# cd 11*
[root@oradb-grid1 11.2.0]# ls
grid
[root@oradb-grid1 11.2.0]# cd db*
[root@oradb-grid1 grid]# ls root*
root.sh
[root@oradb-grid1 grid]# ./root.sh
Running Oracle 11g root.sh script...
./orainstRoot.sh
The following environment variables are set as:
    ORACLE_OWNER= oracle
    ORACLE_HOME= /u01/app/oracle/product/11.2.0/grid
Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "oraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
The file "coraenv" already exists in /usr/local/bin. Overwrite it? (y/n) [n]:
Creating /etc/oratab file...
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root.sh script.
Now product-specific root actions will be performed.
2009-10-02 10:31:44: Parsing the host name
2009-10-02 10:31:44: Checking for super user privileges
2009-10-02 10:31:44: User has super user privileges
Using configuration parameter file: /u01/app/oracle/product/11.2.0/grid/crs/install/crsconfig_params
Creating trace directory
LOCAL ADD MODE
./orainstRoot.sh
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
root wallet
root wallet cert
root cert export
peer wallet
profile reader wallet
pa wallet
peer wallet keys
pa wallet keys
peer cert request
pa cert request
peer cert
pa cert
peer root cert TP
profile reader root cert TP
pa root cert TP
peer pa cert TP
pa peer cert TP
profile reader pa cert TP
profile reader peer cert TP
peer user cert
pa user cert
Adding daemon to inittab
./orainstRoot.sh
CRS-4123: Oracle High Availability Services has been started.
ohasd is starting
CRS-2672: Attempting to start 'ora.gipcd' on 'oradb-grid1'
CRS-2672: Attempting to start 'ora.mdnsd' on 'oradb-grid1'
CRS-2676: Start of 'ora.gipcd' on 'oradb-grid1' succeeded
CRS-2676: Start of 'ora.mdnsd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oradb-grid1'
CRS-2676: Start of 'ora.gpnpd' on 'oradb-gr1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oradb-grid1'
CRS-2676: Start of 'ora.cssdmonitor' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'oradb-grid1'
CRS-2672: Attempting to start 'ora.diskmon' on 'oradb-grid1'
CRS-2676: Start of 'ora.diskmon' on 'oradb-grid1' succeeded
CRS-2676: Start of 'ora.cssd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'oradb-grid1'
CRS-2676: Start of 'ora.ctssd' on 'oradb-grid1' succeeded
ASM created and started successfully.
DiskGroup DATA created successfully.
./orainstRoot.sh
clscfg: -install mode specified
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-2672: Attempting to start 'ora.crsd' on 'oradb-grid1'
CRS-2676: Start of 'ora.crsd' on 'oradb-grid1' succeeded
CRS-4256: Updating the profile
Successful addition of voting disk 659585bf3a834f39bf281fd47e9ed6db.
Successful addition of voting disk 762177cd6f844f25bfc677fb681a02ab.
Successful addition of voting disk 154c17a0de9c4ffdbff9a3b2f22b52f6.
Successfully replaced voting disk group with +DATA.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                File Name                    Disk group
--  -----    -----------------                ---------                    ----------
 1. ONLINE   659585bf3a834f39bf281fd47e9ed6db (/dev/oracleasm/disks/VOL1)  [DATA]
 2. ONLINE   762177cd6f844f25bfc677fb681a02ab (/dev/oracleasm/disks/VOL2)  [DATA]
 3. ONLINE   154c17a0de9c4ffdbff9a3b2f22b52f6 (/dev/oracleasm/disks/VOL4)  [DATA]
Located 3 voting disk(s).
./orainstRoot.sh
CRS-2673: Attempting to stop 'ora.crsd' on 'oradb-grid1'
CRS-2677: Stop of 'ora.crsd' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.asm' on 'oradb-grid1'
CRS-2677: Stop of 'ora.asm' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.ctssd' on 'oradb-grid1'
CRS-2677: Stop of 'ora.ctssd' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.cssdmonitor' on 'oradb-grid1'
CRS-2677: Stop of 'ora.cssdmonitor' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.cssd' on 'oradb-grid1'
CRS-2677: Stop of 'ora.cssd' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.gpnpd' on 'oradb-grid1'
CRS-2677: Stop of 'ora.gpnpd' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.gipcd' on 'oradb-grid1'
CRS-2677: Stop of 'ora.gipcd' on 'oradb-grid1' succeeded
CRS-2673: Attempting to stop 'ora.mdnsd' on 'oradb-grid1'
CRS-2677: Stop of 'ora.mdnsd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.mdnsd' on 'oradb-grid1'
CRS-2676: Start of 'ora.mdnsd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'oradb-grid1'
CRS-2676: Start of 'ora.gipcd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'oradb-grid1'
CRS-2676: Start of 'ora.gpnpd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'oradb-grid1'
CRS-2676: Start of 'ora.cssdmonitor' on 'oradb-grid1' succeeded
./orainstRoot.sh
CRS-2672: Attempting to start 'ora.cssd' on 'oradb-grid1'
CRS-2672: Attempting to start 'ora.diskmon' on 'oradb-grid1'
CRS-2676: Start of 'ora.diskmon' on 'oradb-grid1' succeeded
CRS-2676: Start of 'ora.cssd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.ctssd' on 'oradb-grid1'
CRS-2676: Start of 'ora.ctssd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'oradb-grid1'
CRS-2676: Start of 'ora.asm' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'oradb-grid1'
CRS-2676: Start of 'ora.crsd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.evmd' on 'oradb-grid1'
CRS-2676: Start of 'ora.evmd' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'oradb-grid1'
CRS-2676: Start of 'ora.asm' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.DATA.dg' on 'oradb-grid1'
CRS-2676: Start of 'ora.DATA.dg' on 'oradb-grid1' succeeded
CRS-2672: Attempting to start 'ora.registry.acfs' on 'oradb-grid1'
CRS-2676: Start of 'ora.registry.acfs' on 'oradb-grid1' succeeded
./root.sh output (cont.)
oradb-grid1 2009/10/02 10:37:08 /u01/app/oracle/product/11.2.0/grid/cdata/oradb-grid1/backup_20091002_103708.olr
Preparing packages for installation...
cvuqdisk-1.0.7-1
Configure Oracle Grid Infrastructure for a Cluster ... succeeded
Updating inventory properties for clusterware
Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB. Actual 39997 MB Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /home/oracle/oraInventory
'UpdateNodeList' was successful.
oradb-grid2 output:
[root@oradb-grid2 oraInventory]# pwd
/home/oracle/oraInventory
[root@oradb-grid2 oraInventory]# ./orainstRoot.sh
Changing permissions of /home/oracle/oraInventory.
Step 16
Step 16 (cont.)
Grid CRS Startup sequence
OHASD spawns:
• oraRootAgent → crsd, ctssd, diskmon, ACFS
• oraAgent → mdnsd, gipcd, gpnpd, evmd
• cssdAgent → cssd
• cssdMonitor

crsd spawns:
• crsdRootAgent → GNS VIP, networkResource, SCAN VIPs, Node VIPs, ACFS Reg
• crsdOraAgent → ASM, DB Resource, SCAN Listener, Services, ONS
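The two tiers above can be seen directly with crsctl. A minimal sketch for an existing 11gR2 node; the Grid home path is illustrative and must be adjusted for your site:

```shell
# Adjust to your actual Grid Infrastructure home (illustrative path)
GRID_HOME=/u01/app/11.2.0/grid

# Lower stack: the OHASD-managed resources (cssd, ctssd, diskmon, ...)
# that correspond to the agents in the startup tree above
$GRID_HOME/bin/crsctl status resource -t -init

# Upper stack: resources managed by crsd (VIPs, listeners, ASM, databases)
$GRID_HOME/bin/crsctl status resource -t
```

Comparing the two listings side by side is a quick way to confirm which daemon owns a misbehaving resource before digging into its log.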
Clusterware verification
Clusterware processes:
$ ps -ef | grep -v grep | grep d.bin
oracle  9824     1  0 Jul14 ?  00:00:00 /u01/app/grid11gR2/bin/oclskd.bin
root   22161     1  0 Jul13 ?  00:00:15 /u01/app/grid11gR2/bin/ohasd.bin reboot
oracle 24161     1  0 Jul13 ?  00:00:00 /u01/app/grid11gR2/bin/mdnsd.bin
oracle 24172     1  0 Jul13 ?  00:00:00 /u01/app/grid11gR2/bin/gipcd.bin
oracle 24183     1  0 Jul13 ?  00:00:03 /u01/app/grid11gR2/bin/gpnpd.bin
oracle 24257     1  0 Jul13 ?  00:01:26 /u01/app/grid11gR2/bin/ocssd.bin
root   24309     1  0 Jul13 ?  00:00:06 /u01/app/grid11gR2/bin/octssd.bin
root   24323     1  0 Jul13 ?  00:01:03 /u01/app/grid11gR2/bin/crsd.bin reboot
root   24346     1  0 Jul13 ?  00:00:00 /u01/app/grid11gR2/bin/oclskd.bin
oracle 24374     1  0 Jul13 ?  00:00:03 /u01/app/grid11gR2/bin/evmd.bin
Clusterware verification
Clusterware checks:
$ crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online

$ crsctl check has
CRS-4638: Oracle High Availability Services is online

$ crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.2.0.2.0]
Clusterware verification
Cluster-wide checks:
$ crsctl check cluster -all
**************************************************************
rat-rm2-ipfix006:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rat-rm2-ipfix007:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
rat-rm2-ipfix008:
CRS-4537: Cluster Ready Services is online
CRS-4529: Cluster Synchronization Services is online
CRS-4533: Event Manager is online
**************************************************************
Clusterware verification
$ crsctl status resource -t
NAME                TARGET  STATE   SERVER            STATE_DETAILS
-------------------------------------------------------------------
ora.DATA.dg (ASM disk group – new resource)
                    ONLINE  ONLINE  rat-rm2-ipfix006
                    ONLINE  ONLINE  rat-rm2-ipfix007
                    ONLINE  ONLINE  rat-rm2-ipfix008
ora.LISTENER.lsnr
                    ONLINE  ONLINE  rat-rm2-ipfix006
                    ONLINE  ONLINE  rat-rm2-ipfix007
                    ONLINE  ONLINE  rat-rm2-ipfix008
ora.asm
                    ONLINE  ONLINE  rat-rm2-ipfix006  Started
                    ONLINE  ONLINE  rat-rm2-ipfix007  Started
                    ONLINE  ONLINE  rat-rm2-ipfix008  Started
ora.eons (new resource)
                    ONLINE  ONLINE  rat-rm2-ipfix006
                    ONLINE  ONLINE  rat-rm2-ipfix007
                    ONLINE  ONLINE  rat-rm2-ipfix008
ora.gsd
Post Install Steps
• One-off patches – unlock the Grid home first (as root):
  # perl rootcrs.pl -unlock -crshome /u01/app/11.2.0/grid
• Download and install the latest patch updates
• Back up the root.sh script
• Install Cluster Health Monitor
• Install OS Watcher and RACDDT
• Check the backups of OCR and voting disks
• Lock the Grid home after patch installation
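The unlock/patch/lock cycle and the OCR/voting-disk checks above can be sketched as follows. This assumes a standard 11gR2 layout; the Grid home path is illustrative, and all commands run as root:

```shell
# Illustrative Grid home; adjust for your installation
GRID_HOME=/u01/app/11.2.0/grid

# Unlock the Grid home before applying a one-off patch
perl $GRID_HOME/crs/install/rootcrs.pl -unlock -crshome $GRID_HOME

# ... apply the one-off patch here ...

# Re-lock the Grid home and restart the stack after patching
perl $GRID_HOME/crs/install/rootcrs.pl -patch

# Confirm automatic OCR backups exist and voting disks are visible
$GRID_HOME/bin/ocrconfig -showbackup
$GRID_HOME/bin/crsctl query css votedisk
```

Checking ocrconfig -showbackup regularly matters because 11gR2 takes OCR backups automatically every four hours but only on one node; if that node is down, backups silently stop.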
Manageability
• Grid control
• Adding/dropping nodes
• Automatically discovers services
• Policy-based cluster management
• Automated Cluster Patching
• End-to-end management of the cluster
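Adding a node, mentioned above, can also be done from the command line without Grid Control. A minimal sketch; "node3" and "node3-vip" are placeholder names, and the Grid home path is illustrative:

```shell
# Illustrative Grid home; adjust for your installation
GRID_HOME=/u01/app/11.2.0/grid

# Verify the new node is ready to join the cluster
cluvfy stage -pre nodeadd -n node3

# Extend the Grid home to the new node (run from an existing node)
$GRID_HOME/oui/bin/addNode.sh -silent \
  "CLUSTER_NEW_NODES={node3}" \
  "CLUSTER_NEW_VIRTUAL_HOSTNAMES={node3-vip}"

# Then run root.sh on node3 when the installer prompts for it
```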
Scalability & Flexibility
• Server pools of hardware available
• Consolidation of hardware and storage
• Rapid provisioning of resources to add capacity where it's required
• Improved utilization of resources
• Better ROI
Summary
• Reduced hardware
• Improved availability – SLAs
• Shorter time to add an additional server or storage
• Higher security
• Data sharing & visibility
• Better application performance
• Centralized backup and archive
• Higher ROI – higher utilization
• Manageability
• Pride in ownership, eliminating the assembly line
• Bottom line = reduced TCO!
References
• Oracle® Clusterware Administration and Deployment Guide 11g Release 2
• Metalink Note 1054902.1 – Network configuration
• RAC Assurance Support Team: RAC Starter Kit and Best Practices (Linux)
• Metalink Note 887522.1 – 11gR2 Grid Infrastructure Single Client Access Name (SCAN)
• Metalink Note 946452.1 – DNS and DHCP Setup Example for Grid Infrastructure GNS
• Metalink Note 948456.1 – Pre 11.2 Database Issues in 11gR2 Grid Infrastructure Environment