
Page 1

IBM DB2 pureScale - Overview and Technical Deep Dive

March 2010, Regional User Groups (Minneapolis, Chicago, Milwaukee)

Aamer Sachedina, Senior Technical Staff Member, IBM, [email protected]

Page 2

Agenda

Technology Goals and Overview

Technology In-Depth: Key Concepts & Internals, Efficient scaling, Failure modes & recovery automation

Configuration, Monitoring, Installation: Cluster configuration and operational status, Client configuration and load balancing, Stealth Maintenance

Page 3

DB2 pureScale: Goals

Unlimited Capacity: any transaction processing or ERP workload; start small; grow easily, with your business.

Application Transparency: avoid the risk and cost of tuning your applications to the database topology.

Continuous Availability: maintain service across planned and unplanned events.

Page 4

DB2 pureScale: Technology Overview

[Diagram: clients see a single database view and connect to any of several members; the members share storage access to the database and their logs, are linked by a cluster interconnect, and are backed by primary and secondary (2nd-ary) PowerHA pureScale servers, each host running cluster services (CS)]

The DB2 engine runs on several host computers; the members co-operate with each other to provide coherent access to the database from any member.

Data sharing architecture: shared access to the database; members write to their own logs; logs are accessible from other hosts (used during recovery).

PowerHA pureScale technology from STG: efficient global locking and buffer management; synchronous duplexing to the secondary ensures availability.

Low latency, high speed interconnect: special optimizations provide significant advantages on RDMA-capable interconnects (e.g. InfiniBand).

Clients connect anywhere and see a single database: clients connect into any member; automatic load balancing and client reroute may change the underlying physical member to which a client is connected.

Integrated cluster services: failure detection, recovery automation, and a cluster file system, in partnership with STG (GPFS, RSCT) and Tivoli (SA MP).

Leverages IBM's System z Sysplex experience and know-how.

Page 5

Scale with Ease

Without changing applications: efficient coherency protocols designed to scale without application change; applications are automatically and transparently workload-balanced across members.

Without administrative complexity: no data redistribution required.

To 128 members in the initial release.

Page 6

Online Recovery


A key DB2 pureScale design point is to maximize availability during failure recovery processing

When a database member fails, only data in-flight on the failed member remains locked during the automated recovery

In-flight = data being updated on the member at the time it failed

[Chart: % of data available over time (~seconds) around a database member failure; only data with in-flight updates is locked during recovery]

Page 7

Stealth System Maintenance


Goal: allow DBAs to apply system maintenance without negotiating an outage window

Procedure:
1. Drain (aka quiesce)
2. Remove & maintain
3. Re-integrate
4. Repeat until done


Page 8

Agenda

Technology Goals and Overview

Technology In-Depth: Key Concepts & Internals, Efficient scaling, Failure modes & recovery automation, Stealth Maintenance

Configuration, Monitoring, Installation: Cluster configuration and operational status, Client configuration and load balancing, Stealth Maintenance

Page 9

What is a Member?

A DB2 engine address space, i.e. a db2sysc process and its threads.

Members share data: all members access the same shared database (aka "data sharing").

Each member has its own bufferpools, memory regions, and log files.

Members are logical. You can have 1 per machine or LPAR (recommended), or more than 1 per machine or LPAR (not recommended).

[Diagram: Member 0 and Member 1, each a db2sysc process with its own db2 agents and other threads, bufferpool(s), log buffer, dbheap and other heaps, and its own logs, all accessing one shared database (a single database partition)]
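Since a member is just a db2sysc process, it can be observed directly on its host. A minimal sketch, assuming the instance owner is named db2sdin1 (a hypothetical name):

# List the db2sysc engine process(es) running on this host
ps -ef | grep [d]b2sysc

# Show which members and CFs the instance defines, and on which hosts
cat ~db2sdin1/sqllib/db2nodes.cfg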

Page 10

What is a PowerHA pureScale?

Software technology that assists in global buffer coherency management and global locking.

Shared lineage with System z Parallel Sysplex; software based.

Services provided include the Group Bufferpool (GBP), Global Lock Management (GLM), and the Shared Communication Area (SCA).

Members duplex GBP, GLM, and SCA state to both a primary and a secondary. This is done synchronously; duplexing is optional (but recommended) and is set up automatically by default.

[Diagram: each member, with its own agents, bufferpool(s), log buffer, and heaps, duplexes GBP, GLM, and SCA state to the primary and secondary PowerHA pureScale servers while sharing a single-partition database and per-member logs]

Page 11

The Role of the GBP

The GBP acts as a fast disk cache: dirty pages are stored in the GBP and later written to disk, providing fast retrieval of such pages when they are needed by other members.

The GBP includes a "Page Registry" that keeps track of which pages are buffered in each member, and at what memory address. It is used for fast invalidation of those pages when new versions are written to the GBP.

Force-at-Commit (FAC) protocol ensures coherent access to data across members

DB2 “forces” (writes) updated pages to GBP at COMMIT (or before)

GBP synchronously invalidates any copies of such pages on other members

– New references to the page on other members will retrieve new copy from GBP

– In-progress references to page can continue

[Diagram: Client B updates T1 and commits on Member 1; the updated page is written to the GBP and recorded in the page registry, and copies on Member 2 are "silently" invalidated; Clients A and C, selecting from T1 on Member 2, read the new page from the GBP]
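The force-at-commit sequence in the diagram can be reproduced from two ordinary CLP sessions on different members. A minimal sketch, assuming a database named TESTDB (hypothetical) and the table T1 from the slide:

# Session 1 (Client B), connected via member 1
db2 connect to TESTDB
db2 +c "update T1 set C1 = 'X' where C2 = 'Y'"   # +c turns off autocommit for this statement
db2 commit                                        # force-at-commit writes the updated page(s) to the GBP

# Session 2 (Client A or C), connected via member 2
db2 connect to TESTDB
db2 "select * from T1 where C2 = 'Y'"             # reads the new page image from the GBP, no disk I/O required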

Page 12

The Role of the GLM


The GLM grants locks to members upon request, if the lock is not already held by another member or is held in a compatible mode.

It maintains the global lock state: which member holds which lock, and in what mode, plus an interest list of pending lock requests for each lock.

It grants pending lock requests when they become available, via asynchronous notification.

Notes: when a member owns a lock, it may grant it further, locally. "Lock avoidance": DB2 avoids lock requests when the log sequence number (LSN) in the page header indicates that no update on the page could be uncommitted.

[Diagram: Member 1 acquires an exclusive (M1-X) lock on row R33 from the GLM and later releases it; Member 2 then requests and is granted a share (M2-S) lock on R33; rows on pages whose LSN is old need no row lock (lock avoidance), while a recent page LSN means the row lock is needed]
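Lock state as seen by an individual member can be inspected with db2pd; the GLM holds the corresponding global view. A minimal sketch, assuming a database named TESTDB (hypothetical):

# Show the locks currently held or requested on this member
db2pd -db TESTDB -locks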

Page 13

Achieving Efficient Scaling: Key Design Points

Deep RDMA exploitation over a low latency fabric enables round-trip response times of ~10-15 microseconds.

Silent invalidation informs members of page updates and requires no CPU cycles on those members: no interrupt or other message processing is required. This becomes increasingly important as the cluster grows.

Hot pages are available from GBP memory without disk I/O: RDMA and dedicated threads enable read-page operations in ~10s of microseconds.

[Diagram: member lock managers and buffer managers exchange lock requests and grants ("Can I have this lock?" / "Yup, here you are.") and read-page / new-page-image operations with the GBP, GLM, and SCA]

Page 14

Scalability: Example

Transaction processing workload modeling a warehouse & ordering process. Write transaction rate of 20%, a typical read/write ratio for many OLTP workloads.

No cluster awareness in the application: no affinity, no partitioning, no routing of transactions to members. This tests the key DB2 pureScale design point.

Configuration:
12 8-core p550 members, 64 GB, 5 GHz each
Duplexed PowerHA pureScale across 2 additional 8-core p550s, 64 GB, 5 GHz each
DS8300 storage, 576 15K disks, two 4Gb FC switches
IBM 20Gb/s IB HCAs, 7874-024 IB switch
1Gb Ethernet client connectivity

[Diagram: clients (2-way x345) connect over 1Gb Ethernet to the p550 members, which use the 20Gb IB pureScale interconnect through a 7874-024 switch to reach the p550 PowerHA pureScale servers, and two 4Gb FC switches to reach the DS8300 storage]

Page 15

Scalability: Example

[Chart: throughput relative to 1 member, as a function of the number of members]

1.98x @ 2 members
3.9x @ 4 members
7.6x @ 8 members
10.4x @ 12 members

Page 16

DB2 Cluster Services Overview

DB2 Cluster Services comprises the Cluster File System (GPFS), the Cluster Manager (RSCT), and Cluster Automation (Tivoli SA MP).

Integrated DB2 component: a single install as part of the DB2 installation; upgrades and maintenance through DB2 fixpacks; designed to interact with the DBA.

No need to learn cluster management mantras!

Page 17

DB2 Cluster Services

Robust heart-beating and failure detection algorithm. Utilizes redundant networks (Ethernet and InfiniBand). Prevents false downs in cases of network congestion and heavy CPU utilization through a patented algorithm.

SCSI-3 Persistent Reserve I/O fencing for split-brain avoidance. Guarantees protection of shared data in the event that one or more errant hosts split from the network. Substantially more robust than the technology used by others (self-initiated-reboot-based algorithms or STONITH). Allows re-integration of a split host into the cluster when the network heals, without requiring a reboot.

Host failure detection and I/O fencing in ~3 seconds.

Page 18

DB2 pureScale HA Architecture

[Diagram: DB2 members, each with cluster services (CS), connected over the cluster interconnect to the primary and secondary (2nd-ary) PowerHA pureScale servers, with GPFS providing the shared cluster file system]

Page 19

DB2 pureScale Instance Status

[Diagram: clients see a single database view over four members on host0-host3 and the primary and secondary CFs on host4 and host5, all sharing data]

db2nodes.cfg:
0 host0 0 - MEMBER
1 host1 0 - MEMBER
2 host2 0 - MEMBER
3 host3 0 - MEMBER
4 host4 0 - CF
5 host5 0 - CF

> db2instance -list
ID  TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED  host0      host0         NO
1   MEMBER  STARTED  host1      host1         NO
2   MEMBER  STARTED  host2      host2         NO
3   MEMBER  STARTED  host3      host3         NO
4   CF      PRIMARY  host4      host4         NO
5   CF      PEER     host5      host5         NO

HOST_NAME  STATE   INSTANCE_STOPPED  ALERT
host0      ACTIVE  NO                NO
host1      ACTIVE  NO                NO
host2      ACTIVE  NO                NO
host3      ACTIVE  NO                NO
host4      ACTIVE  NO                NO
host5      ACTIVE  NO                NO

Page 20

Member SW Failure: "Member Restart on Home Host"

kill -9 erroneously issued to a member.

DB2 Cluster Services automatically detects the member's death: it informs the other members and the PowerHA pureScale servers, and initiates automated restart of the DB2 member on the same ("home") host. Member restart is like database crash recovery in a single-system database, but is much faster: redo is limited to in-flight transactions (due to FAC), and it benefits from the page cache in the CF.

In the meantime, client connections are transparently re-routed to healthy members, based on least load (by default) or a pre-designated failover member. Other members remain fully available throughout ("online failover"): the primary retains the update locks held by the member at the time of failure, and other members can continue to read and update data not locked for write access by the failed member.

Member restart completes: retained locks are released and all data is fully available; the full DB2 member is started and available for transaction processing.

[Diagram: the killed member is automatically restarted on its home host, replaying its log records and retrieving pages; updated pages and global locks remain duplexed at the primary and secondary. Automatic; ultra fast; online]

Page 21

Member SW Failure and Restart on Home Host

db2nodes.cfg:
0 host0 0 - MEMBER
1 host1 0 - MEMBER
2 host2 0 - MEMBER
3 host3 0 - MEMBER
4 host4 0 - CF
5 host5 0 - CF

Before the failure, db2instance -list shows all members STARTED on their home hosts and all hosts ACTIVE (as on page 19). A kill -9 is then issued to member 3.

While member 3 restarts on its home host:

> db2instance -list
ID  TYPE    STATE       HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED     host0      host0         NO
1   MEMBER  STARTED     host1      host1         NO
2   MEMBER  STARTED     host2      host2         NO
3   MEMBER  RESTARTING  host3      host3         NO
4   CF      PRIMARY     host4      host4         NO
5   CF      PEER        host5      host5         NO

HOST_NAME  STATE   INSTANCE_STOPPED  ALERT
host0      ACTIVE  NO                NO
host1      ACTIVE  NO                NO
host2      ACTIVE  NO                NO
host3      ACTIVE  NO                NO
host4      ACTIVE  NO                NO
host5      ACTIVE  NO                NO

Once the restart completes, member 3 returns to STARTED on host3 and all hosts remain ACTIVE with no alerts.

Page 22

Member HW Failure and Restart Light

Power cord tripped over accidentally.

DB2 Cluster Services loses the heartbeat and declares the member down: it informs the other members and the PowerHA pureScale servers, fences the member from its logs and data, and initiates automated member restart on another ("guest") host using a reduced, pre-allocated memory model. Member restart is like database crash recovery in a single-system database, but is much faster: redo is limited to in-flight transactions (due to FAC), and it benefits from the page cache in PowerHA pureScale.

In the meantime, client connections are automatically re-routed to healthy members, based on least load (by default) or a pre-designated failover member. Other members remain fully available throughout ("online failover"): the primary retains the update locks held by the member at the time of failure, and other members can continue to read and update data not locked for write access by the failed member.

Member restart completes: retained locks are released and all data is fully available.

[Diagram: the failed member is fenced from its logs and data and restarted light on a guest host, replaying its log records; updated pages and global locks remain duplexed at the primary and secondary. Automatic; ultra fast; online]

Page 23

Member Hardware Failure and Restart Light on Guest Host

db2nodes.cfg:
0 host0 0 - MEMBER
1 host1 0 - MEMBER
2 host2 0 - MEMBER
3 host3 0 - MEMBER
4 host4 0 - CF
5 host5 0 - CF

Before the failure, db2instance -list shows all members STARTED on their home hosts and all hosts ACTIVE.

While member 3 restarts light on guest host2 (its home host3 is down):

> db2instance -list
ID  TYPE    STATE       HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED     host0      host0         NO
1   MEMBER  STARTED     host1      host1         NO
2   MEMBER  STARTED     host2      host2         NO
3   MEMBER  RESTARTING  host3      host2         NO
4   CF      PRIMARY     host4      host4         NO
5   CF      PEER        host5      host5         NO

HOST_NAME  STATE     INSTANCE_STOPPED  ALERT
host0      ACTIVE    NO                NO
host1      ACTIVE    NO                NO
host2      ACTIVE    NO                NO
host3      INACTIVE  NO                YES
host4      ACTIVE    NO                NO
host5      ACTIVE    NO                NO

After the restart light completes, member 3 waits to fail back to its home host:

> db2instance -list
ID  TYPE    STATE                 HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED               host0      host0         NO
1   MEMBER  STARTED               host1      host1         NO
2   MEMBER  STARTED               host2      host2         NO
3   MEMBER  WAITING_FOR_FAILBACK  host3      host2         NO
4   CF      PRIMARY               host4      host4         NO
5   CF      PEER                  host5      host5         NO

HOST_NAME  STATE     INSTANCE_STOPPED  ALERT
host0      ACTIVE    NO                NO
host1      ACTIVE    NO                NO
host2      ACTIVE    NO                NO
host3      INACTIVE  NO                YES
host4      ACTIVE    NO                NO
host5      ACTIVE    NO                NO

Page 24

Member SW Failure and Restart Light on Guest Host

db2nodes.cfg:
0 host0 0 - MEMBER
1 host1 0 - MEMBER
2 host2 0 - MEMBER
3 host3 0 - MEMBER
4 host4 0 - CF
5 host5 0 - CF

A kill -9 is issued to member 3 and the member fails to start on its home host; it is restarted light on guest host2 and an alert is raised:

> db2instance -list
ID  TYPE    STATE                 HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED               host0      host0         NO
1   MEMBER  STARTED               host1      host1         NO
2   MEMBER  STARTED               host2      host2         NO
3   MEMBER  WAITING_FOR_FAILBACK  host3      host2         YES
4   CF      PRIMARY               host4      host4         NO
5   CF      PEER                  host5      host5         NO

HOST_NAME  STATE   INSTANCE_STOPPED  ALERT
host0      ACTIVE  NO                NO
host1      ACTIVE  NO                NO
host2      ACTIVE  NO                NO
host3      ACTIVE  NO                NO
host4      ACTIVE  NO                NO
host5      ACTIVE  NO                NO

> db2cluster -list -alert

1. Alert: The member 3 failed to start on its home host "host3". Check the db2diag.log for messages concerning failures on host "host3" for member 3. See the DB2 Information Center for more details.

Action: This alert must be cleared manually with the command: db2cluster -cm -clear -alert.

Impact: Member 3 will not be able to service requests until this alert has been cleared and the member returns to its home host.

> db2cluster -clear -alert

The alert(s) has been successfully cleared.

Page 25

Primary PowerHA pureScale Failure

Power cord tripped over accidentally.

DB2 Cluster Services loses the heartbeat and declares the primary down: it informs the members and the secondary. The PowerHA pureScale service is momentarily blocked, while all other database activity proceeds normally (e.g. accessing pages in the bufferpool, using existing locks, sorting, aggregation, etc.).

Members send missing data (e.g. read locks) to the secondary.

The secondary becomes the primary: the PowerHA pureScale service continues where it left off, and no errors are returned to DB2 members.

[Diagram: after the primary's host fails, the secondary takes over the primary role; members continue duplexing updated pages and global locks to it. Automatic; ultra fast; online]

Page 26

Primary PowerHA pureScale Failure

db2nodes.cfg:
0 host0 0 - MEMBER
1 host1 0 - MEMBER
2 host2 0 - MEMBER
3 host3 0 - MEMBER
4 host4 0 - CF
5 host5 0 - CF

While the secondary takes over (host4 is down):

> db2instance -list
ID  TYPE    STATE             HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED           host0      host0         NO
1   MEMBER  STARTED           host1      host1         NO
2   MEMBER  STARTED           host2      host2         NO
3   MEMBER  STARTED           host3      host3         NO
4   CF      ERROR             host4      host4         NO
5   CF      BECOMING_PRIMARY  host5      host5         NO

HOST_NAME  STATE     INSTANCE_STOPPED  ALERT
host0      ACTIVE    NO                NO
host1      ACTIVE    NO                NO
host2      ACTIVE    NO                NO
host3      ACTIVE    NO                NO
host4      INACTIVE  NO                YES
host5      ACTIVE    NO                NO

Once the takeover completes, the former secondary is the primary:

> db2instance -list
ID  TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED  host0      host0         NO
1   MEMBER  STARTED  host1      host1         NO
2   MEMBER  STARTED  host2      host2         NO
3   MEMBER  STARTED  host3      host3         NO
4   CF      ERROR    host4      host4         NO
5   CF      PRIMARY  host5      host5         NO

HOST_NAME  STATE     INSTANCE_STOPPED  ALERT
host0      ACTIVE    NO                NO
host1      ACTIVE    NO                NO
host2      ACTIVE    NO                NO
host3      ACTIVE    NO                NO
host4      INACTIVE  NO                YES
host5      ACTIVE    NO                NO

Page 27

PowerHA pureScale Re-integration

Power is restored and the system re-booted.

DB2 Cluster Services automatically detects the system's availability and informs the members and the primary. The new system assumes the secondary role in 'catchup' state.

Members resume duplexing: they asynchronously send lock and other state information to the secondary, and asynchronously cast out pages from the primary to disk.

While the re-integrated CF is in catchup state:

> db2instance -list
ID  TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED  host0      host0         NO
1   MEMBER  STARTED  host1      host1         NO
2   MEMBER  STARTED  host2      host2         NO
3   MEMBER  STARTED  host3      host3         NO
4   CF      CATCHUP  host4      host4         NO
5   CF      PRIMARY  host5      host5         NO

HOST_NAME  STATE   INSTANCE_STOPPED  ALERT
host0      ACTIVE  NO                NO
host1      ACTIVE  NO                NO
host2      ACTIVE  NO                NO
host3      ACTIVE  NO                NO
host4      ACTIVE  NO                NO
host5      ACTIVE  NO                NO

Once caught up, it becomes a peer of the primary:

> db2instance -list
ID  TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED  host0      host0         NO
1   MEMBER  STARTED  host1      host1         NO
2   MEMBER  STARTED  host2      host2         NO
3   MEMBER  STARTED  host3      host3         NO
4   CF      PEER     host4      host4         NO
5   CF      PRIMARY  host5      host5         NO

HOST_NAME  STATE   INSTANCE_STOPPED  ALERT
host0      ACTIVE  NO                NO
host1      ACTIVE  NO                NO
host2      ACTIVE  NO                NO
host3      ACTIVE  NO                NO
host4      ACTIVE  NO                NO
host5      ACTIVE  NO                NO

Page 28

Summary (Single Failures)

[Table: single failure modes (Member, Primary PowerHA pureScale, Secondary PowerHA pureScale) against "Other members remain online?" and "Automatic & transparent?"; connections to a failed member transparently move to another member]

Page 29

Simultaneous Failures

[Table: simultaneous failure combinations of members and PowerHA pureScale servers against "Other members remain online?" and "Automatic & transparent?"; in each case, connections to a failed member transparently move to another member]

Page 30

Agenda

Technology Goals and Overview

Technology In-Depth: Key Concepts & Internals, Efficient scaling, Failure modes & recovery automation, Stealth Maintenance

Configuration, Monitoring, Installation: Cluster configuration and operational status, Client configuration and load balancing, Stealth Maintenance

Page 31

[Diagram: four DB2 members on host0-host3 and two CFs on host4 and host5 serve clients through a single database view over shared data]

db2nodes.cfg:
0 host0 0 host0ib MEMBER
1 host1 0 host1ib MEMBER
2 host2 0 host2ib MEMBER
3 host3 0 host3ib MEMBER
4 host4 0 host4ib CF
5 host5 0 host5ib CF
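For reference, the same file can be read from the instance directory; each line gives the member or CF number, its host name, a logical port, the netname used for the cluster interconnect, and the role. A minimal sketch, assuming an instance owner named db2sdin1 (hypothetical):

# node-number  host-name  logical-port  netname (IB)  role
cat ~db2sdin1/sqllib/db2nodes.cfg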

Page 32

Instance and Host Status

db2nodes.cfg:
0 host0 0 - MEMBER
1 host1 0 - MEMBER
2 host2 0 - MEMBER
3 host3 0 - MEMBER
4 host4 0 - CF
5 host5 0 - CF

> db2start
08/24/2008 00:52:59     0   0   SQL1063N  DB2START processing was successful.
08/24/2008 00:53:00     1   0   SQL1063N  DB2START processing was successful.
08/24/2008 00:53:01     2   0   SQL1063N  DB2START processing was successful.
08/24/2008 00:53:01     3   0   SQL1063N  DB2START processing was successful.
SQL1063N  DB2START processing was successful.

Instance status:

> db2instance -list
ID  TYPE    STATE    HOME_HOST  CURRENT_HOST  ALERT
0   MEMBER  STARTED  host0      host0         NO
1   MEMBER  STARTED  host1      host1         NO
2   MEMBER  STARTED  host2      host2         NO
3   MEMBER  STARTED  host3      host3         NO
4   CF      PRIMARY  host4      host4         NO
5   CF      PEER     host5      host5         NO

Host status:

HOST_NAME  STATE   INSTANCE_STOPPED  ALERT
host0      ACTIVE  NO                NO
host1      ACTIVE  NO                NO
host2      ACTIVE  NO                NO
host3      ACTIVE  NO                NO
host4      ACTIVE  NO                NO
host5      ACTIVE  NO                NO

Page 33

Client Connectivity and Workload Balancing

Run-time load information is used to automatically balance load across the cluster (as in the System z sysplex). Load information for all members is kept on each member, piggy-backed to clients regularly, and used to route the next connection (or, optionally, the next transaction) to the least loaded member. Routing occurs automatically and is transparent to the application.

Failover: the load of a failed member is evenly distributed to the surviving members automatically.

Fallback: once the failed member is back online, fallback does the reverse.
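Workload balancing and automatic client reroute are enabled from the client side through db2dsdriver.cfg. A minimal sketch, assuming a database alias TESTDB reachable via host0 on port 50001 (hypothetical values); the exact file location depends on the driver installation:

cat > db2dsdriver.cfg <<'EOF'
<configuration>
  <dsncollection>
    <dsn alias="TESTDB" name="TESTDB" host="host0" port="50001"/>
  </dsncollection>
  <databases>
    <database name="TESTDB" host="host0" port="50001">
      <!-- route new connections/transactions to the least loaded member -->
      <wlb>
        <parameter name="enableWLB" value="true"/>
      </wlb>
      <!-- re-route connections automatically if a member fails -->
      <acr>
        <parameter name="enableACR" value="true"/>
      </acr>
    </database>
  </databases>
</configuration>
EOF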

Page 34

Optional Affinity-based Routing

Allows you to target different groups of clients or workloads (e.g. application server groups A through D) to different members in the cluster. The affinities are maintained after failover ... and fallback.

Example use cases: consolidate separate workloads/applications on the same database infrastructure; minimize total resource requirements for disjoint workloads.

Easily configured through client configuration (the db2dsdriver.cfg file), as sketched below.
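A rough sketch of what such an affinity definition can look like in db2dsdriver.cfg, mapping an application-server host to an ordered list of preferred members; the element names and the host, port, and group names here are assumptions recalled from the driver documentation and should be verified against the DB2 Information Center:

cat > db2dsdriver.cfg <<'EOF'
<configuration>
  <dsncollection>
    <dsn alias="TESTDB" name="TESTDB" host="host0" port="50001"/>
  </dsncollection>
  <databases>
    <database name="TESTDB" host="host0" port="50001">
      <acr>
        <parameter name="enableACR" value="true"/>
        <!-- members this client group may use -->
        <alternateserverlist>
          <server name="m0" hostname="host0" port="50001"/>
          <server name="m1" hostname="host1" port="50001"/>
        </alternateserverlist>
        <!-- preferred member order for group A -->
        <affinitylist>
          <list name="groupA" serverorder="m0,m1"/>
        </affinitylist>
        <!-- tie group A's application server to that list -->
        <clientaffinitydefined>
          <client name="appA" hostname="appsrvA" listname="groupA"/>
        </clientaffinitydefined>
      </acr>
    </database>
  </databases>
</configuration>
EOF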

Page 35

Adding capacity

1. Complete pre-requisite work: AIX installed, the host on the network, access to the shared disks.

2. Add the member: db2iupdt -add -m <MemHostName:MemIBHostName> InstName
Note: extending and shrinking the instance is an offline task in the initial release.
You can also drop a member, or add / drop a PowerHA pureScale server.

3. DB2 does all the tasks needed to add the member to the cluster: it copies the image and response file to host6, runs the install, adds M4 to the resources for the instance, and sets up access to the cluster file system for M4.

Initial installation: complete the pre-requisite work (AIX installed, hosts on the network, access to shared disks enabled). The install then copies the DB2 pureScale image to the Install Initiating Host, installs the code on the specified hosts using a response file, creates the instance, members, and primary and secondary PowerHA pureScale servers as directed, adds the members, primary and secondary PowerHA pureScale servers, hosts, HCA cards, etc. to the domain resources, and creates the cluster file system and sets up each member's access to it.
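A minimal sketch of step 2, using the syntax above with a new host host6, an InfiniBand netname host6ib, and an instance named db2sdin1 (hypothetical names):

# Extending the instance is offline in the initial release: stop it first
db2stop
# Add the new member on host6 (cluster interconnect netname host6ib)
db2iupdt -add -m host6:host6ib db2sdin1
# Restart the instance with the additional member
db2start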

[Diagram: the Install Initiating Host copies the DB2 pureScale image locally, then copies the image and response file via scp to each host; host0-host3 run Members 0-3, host4 and host5 run the primary and secondary, and the new host runs the install and joins as Member 4, each with its own cluster services (CS)]

Page 36

"Stealth" Maintenance: Example

1. Ensure automatic load balancing is enabled (it is by default)

2. db2stop member 3 quiesce

3. db2stop instance on host <hostname>

4. Perform desired maintenance, e.g. install an AIX PTF

5. db2start instance on host <hostname>

6. db2start member 3
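As a quick check between steps 2 and 3, the instance status can confirm that member 3 has drained while the other members keep running. A minimal sketch, assuming member 3 is the one being maintained:

# Drain member 3: in-flight work completes, new work is routed to other members
db2stop member 3 quiesce

# Verify: member 3 should no longer be shown as STARTED, the other members should be
db2instance -list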


Page 37

DB2 pureScale: A Complete Solution

[Diagram: clients see a single database view over multiple members, each with cluster services (CS), connected by the cluster interconnect to the PowerHA pureScale servers, with shared storage access to the database and the member logs]

DB2 pureScale is a complete software solution, comprised of tightly integrated subcomponents.

A single install invocation installs all components across the desired hosts and automatically configures best practices.

No cluster manager scripting or configuration is required; this is set up automatically, upon installation.