
Configuring EMC® Greenplum® Data Computing Appliance Using SAN Mirror and EMC Symmetrix® VMAX™ for Disaster Recovery

Configuration Guide

P/N 300-012-943
REV A01

EMC Corporation
Corporate Headquarters:
Hopkinton, MA 01748-9103
1-508-435-1000
www.EMC.com


Copyright © 2011 EMC Corporation. All rights reserved.

Published August, 2011

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED "AS IS." EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.


Contents

Preface

Chapter 1: Introducing the SAN Mirror Solution
    Introduction
        Business need
        EMC Greenplum Data Computing Appliance (DCA)
        Terminology
    Using a SAN Mirror solution
        Overview
        How SAN Mirror increases the capacity of the DCA
        How EMC Greenplum DCA mirrors data
        Mirroring data to the SAN with Greenplum SAN Mirror
    Components of a SAN Mirror configuration
        EMC Greenplum DCA
        EMC Symmetrix VMAX storage system
    Considerations and best practices
        Performance
        Data integrity

Chapter 2: Setup and Operation of SAN with VMAX
    Introduction to the SAN Mirror test configuration
    Detailed list of tested components
        EMC Greenplum DCAs
        VMAX
    Prerequisites
    Configuring for SAN Mirror
        Configuring a new system
        Preparing the VMAX for SAN Mirror attachment
        Configuring the local DCA for the Symmetrix
        Configuring the remote DCA
        Operating a SAN Mirror configuration
    Summary

Appendix A: Discovering DCA Internal Server WWNs
Appendix B: Verifying DCA Volume Sizes
Appendix C: Creating VMAX Volumes for DCA Mirror Database
    Creating VMAX volumes for DCA mirror database
    Assigning volumes to the DCA segment servers
Appendix D: Creating the SRDF Configuration
Appendix E: Configuring DCA Internal Switch for SAN Mirror
Appendix F: Automating DCA Internal SAN Configuration
Appendix G: Creating SNAPs on Remote Symmetrix for Disaster Recovery
    Creating a single SNAP of R2 standard devices
    Creating multiple SNAPs of R2 standard devices
Appendix H: Mounting and Checking a Symmetrix SNAP of the Greenplum Database
Appendix I: Recovering a DCA Using a SNAP Session and SRDF
Appendix J: Starting EMC Greenplum Database from Mirror Segments

Preface

This guide documents how to configure an EMC Greenplum Data Computing Appliance (DCA) to attach to an EMC Symmetrix VMAX system, where the primary data copy resides on the DCA and the VMAX is used as a mirror. The facilities of the VMAX storage array can then be used to provide remote replicas for disaster recovery and create point-in-time copies to allow quick recovery from equipment or data center outages.

This document specifies the steps to move the data manually. The appendices provide detailed, step-by-step instructions and output from commands, taken from EMC's engineering lab tests. Forthcoming tools from Greenplum will help customers automate the movement of data to external storage.

As part of an effort to improve and enhance the performance and capabilities of its product line, EMC from time to time releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all revisions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes.

If a product does not function properly or does not function as described in this document, contact your EMC representative.

Audience

This guide is intended for EMC field personnel who are implementing or considering a SAN Mirror solution for the Greenplum DCA. It will also be useful for customers and partners who want to understand the benefits and requirements of SAN Mirror implementations.


Related documentation

More information is available about each of the components discussed in this document. A partial list is shown next. Documents are available at http://powerlink.emc.com.

Related documents include:

◆ White Paper: EMC Greenplum Data Computing Appliance Using SAN Mirror and EMC Symmetrix VMAX for Disaster Recovery

◆ White Paper: EMC Greenplum Data Computing Appliance: High Performance for Data Warehousing and Business Intelligence - An Architectural Overview

◆ White Paper: EMC Greenplum Database 4.0 - Critical Mass Innovation

◆ Greenplum Database 4.1 Administrator Guide (describes the database operation in great detail)

◆ Greenplum Database 4.1 Installation Guide

◆ Greenplum DCA and DIA 1.0.3.x Getting Started Guide

◆ Symmetrix VMAX Series Product Guide

◆ Symmetrix Remote Data Facility (SRDF) Product Guide

◆ Symmetrix TimeFinder Product Guide

◆ Solutions Enabler 7.3 Documentation Set

◆ PowerPath Family Product Guide

◆ PowerPath for Linux Installation and Administration Guide 5.5 A03

Conventions used in this guide

EMC uses the following conventions for notes, cautions, and warnings.

Note: A note presents information that is important, but not hazard-related.

IMPORTANT! An important notice contains information essential to the operation of the software.


Typographical conventions

EMC uses the following type style conventions in this document:

Normal          Used in running (nonprocedural) text for:
                • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                • Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, filenames, functions, utilities
                • URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, notifications

Bold            Used in running (nonprocedural) text for:
                • Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, man pages
                Used in procedures for:
                • Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
                • What the user specifically selects, clicks, presses, or types

Italic          Used in all text (including procedures) for:
                • Full titles of publications referenced in text
                • Emphasis (for example, a new term)
                • Variables

Courier         Used for:
                • System output, such as an error message or script
                • URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold    Used for:
                • Specific user input (such as commands)

Courier italic  Used in procedures for:
                • Variables on the command line
                • User input variables

< >             Angle brackets enclose parameter or variable values supplied by the user

[ ]             Square brackets enclose optional values

|               A vertical bar indicates alternate selections - the bar means "or"

{ }             Braces indicate content that you must specify (that is, x or y or z)

...             Ellipses indicate nonessential information omitted from the example


Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to EMC Customer Service on Powerlink. To open a service request through Powerlink, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or to answer any questions about your account.

Your comments

Your suggestions will help us continue to improve the accuracy, organization, and overall quality of the user publications. Please send your opinion of this guide to:

[email protected]


Chapter 1: Introducing the SAN Mirror Solution

This chapter provides the following information:

◆ Introduction
◆ Using a SAN Mirror solution
◆ Components of a SAN Mirror configuration
◆ Considerations and best practices


Introduction

This section provides a brief introduction to the following:

◆ "Business need"
◆ "EMC Greenplum Data Computing Appliance (DCA)"
◆ "Terminology"

Business need

Data warehouses are a critical tool for making business decisions. As data warehouse and business intelligence (DW/BI) systems continue to grow, they face some difficult challenges:

◆ DW/BI infrastructures are growing exponentially over time and often require many terabytes or even petabytes of high-performance, protected storage.

◆ DW/BI systems must be continually available. High performance update, backup and disaster recovery are necessary to prevent and recover from outages.

Currently, data warehouse environments are demanding a more comprehensive strategy for data protection, security, and high availability than ever before, based on the business impact of data unavailability. Data recovery options must align with application and business requirements to yield the highest availability.

To integrate the EMC® Greenplum® Data Computing Appliance (DCA) into larger data centers, some customers require compatibility with advanced storage and software infrastructures, such as the EMC Symmetrix® VMAX™, in order to achieve the highest levels of fault tolerance and to provide industry-leading disaster recovery.

To facilitate this, EMC engineered a solution that integrates the DCA with the VMAX, where the VMAX provides local and remote replicas of the data for disaster recovery and point-in-time snapshots. This allows customers to recover data warehouse/business intelligence (DW/BI) functionality quickly in the face of hardware or software failure, or even total site loss.


EMC Greenplum Data Computing Appliance (DCA)

The EMC Greenplum Data Computing Appliance (DCA) is an industry-leading, purpose-built data warehouse appliance that helps organizations make educated strategic and operational decisions. The DCA addresses essential business requirements, delivering predictable performance, scalability, and functionality. This eliminates the complexity and unpredictability of highly customized, in-house solutions.

The DCA is a self-contained system built of industry-standard, commodity components. It includes the processor, storage, and software resources needed to implement large-scale DW/BI infrastructures. Data in the DCA is fully protected, and the system is able to ride through multiple component failures non-disruptively.

For more information, refer to "How EMC Greenplum DCA mirrors data."

By connecting the DCA to industry-leading SAN storage, such as the VMAX system, customers can get all the performance and functionality of the DCA, combined with robust remote replication and point-in-time copies provided by external storage. This capability is called SAN Mirror, discussed further in "Using a SAN Mirror solution."


Terminology

The following table lists key terms used in this document.

Term                                 Definition

Business Intelligence (BI)           The effective use of information assets to improve the profitability, productivity, or efficiency of a business. IT professionals use this term to refer to the business applications and tools that enable such information usage. The source of information is frequently the data warehouse.

CNA                                  Converged Network Adapter.

Data Warehouse (DW)                  The process of organizing and managing the information assets of an enterprise. IT professionals often refer to the physical stored data content in some databases managed by database management software as the data warehouse. They refer to the applications that manipulate the data stored in such databases as DW applications.

Data Computing Appliance (DCA)       The DCA is a purpose-built, highly scalable, parallel DW appliance that architecturally integrates database, compute, storage, and network into an enterprise-class, easy-to-implement system.

Massively Parallel Processing (MPP)  A type of distributed computing architecture in which tens to hundreds of processors work concurrently to solve large computational problems.


Using a SAN Mirror solution

This section contains the following information:

◆ "Overview"
◆ "How SAN Mirror increases the capacity of the DCA"
◆ "How EMC Greenplum DCA mirrors data"
◆ "Mirroring data to the SAN with Greenplum SAN Mirror"

Overview

The EMC Greenplum DCA maintains two copies of customer data and normally handles all data replication and protection tasks internally to the appliance. This generally achieves the highest level of performance for DW/BI tasks.

In a SAN Mirror solution, the second copy of the data is moved to SAN-based storage. The DCA retains the primary copy of the data in order to maximize query performance. The SAN Mirror copy is updated with writes, but it is not read unless a primary database segment becomes inaccessible.

By keeping the mirrored copy on the SAN, customers can use storage facilities such as EMC TimeFinder® and SRDF® to create remote copies or point-in-time images for backup and disaster recovery.

Figure 1 and Figure 2 compare a stand-alone DCA configuration to different SAN Mirror configurations.


Figure 1 Stand-alone DCA versus SAN Mirror configuration

Figure 2 SAN Mirror configuration with remote replication


How SAN Mirror increases the capacity of the DCA

A SAN Mirror configuration significantly increases the DCA's usable internal capacity. Whereas a standard DCA configuration stores both primary and mirrored data internal to the appliance, a SAN Mirror configuration moves the mirrored data to the SAN, which frees the other half of the machine's internal capacity to be used as additional primary storage.

How EMC Greenplum DCA mirrors data

The EMC Greenplum DCA is a fully redundant system that maintains two copies of data to ensure seamless operation if one or more components in the system fail. The system is able to process large amounts of data by distributing the load across all of the servers in the DCA. An EMC Greenplum Database™ is actually a loosely coupled array of individual, highly customized PostgreSQL databases working together to present a single database image.

The master is the entry point to the Greenplum Database system. The master host contains all the metadata required to distribute transactions across the system, but it does not hold any user data. It coordinates the work with the other database instances in the system, the segments, which handle data processing and storage.

When a Greenplum Database system is deployed, there is the option to mirror the segments. Mirroring allows the database to remain operational if a segment instance or segment host goes down, as long as a copy of the data is accessible in another running segment. Primary segment instances replicate to mirror instances at the sub-file level. In order to guarantee that both primary and mirror segments present the same crash-consistent database image, primary segments will wait for mirror segments to synchronize data during transaction commits and periodic database checkpoints.

On segment failure, the remaining copy goes into change tracking mode and saves a list of changes made to its data. Once the underlying segment failure is resolved, an online recovery process will refresh the recovered segment either by copying the changed data, or by executing a full copy to the failed segment.
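For illustration, the online recovery described above is driven from the master with the Greenplum gprecoverseg utility. This is a minimal sketch, assuming Greenplum Database 4.x command options:

    # Incremental recovery: copy only the changes tracked while the segment was down
    gprecoverseg

    # Full recovery: recopy the entire segment from its surviving peer
    gprecoverseg -F

    # After recovery completes, optionally return segments to their preferred roles
    gprecoverseg -r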


In the EMC Greenplum DCA, six primary segment instances and six mirror segment instances run on each segment server. The segments use internal DCA storage. The primary and mirror segments run on separate servers in the DCA.

A high-level picture of the mirrored configuration is shown in Figure 3.

Figure 3 DCA mirroring

Mirroring data to the SAN with Greenplum SAN Mirror

In a SAN Mirror configuration, storage for both master instances and all the mirrored segments is moved from internal storage to external SAN storage. A SAN Mirror configuration allows the Greenplum database to benefit from advanced storage capabilities such as SRDF and TimeFinder. While the DCA will get the highest performance from internal storage, the additional functionality of the SAN Mirror configuration makes it appealing for certain use cases.


A conceptual picture of a SAN Mirror configuration is shown in Figure 4.

Figure 4 DCA SAN Mirror configuration


Components of a SAN Mirror configuration

This section briefly describes each component of the SAN Mirror solution. It is assumed that the reader is familiar with all or most of these products. Further details can be found in the documents listed in "Related documentation."

The components used in this solution are listed below and further described in this section:

◆ "EMC Greenplum DCA"
  • EMC PowerPath® is loaded on all segment servers. PowerPath is additional, separately chargeable software.
  • Trunking is enabled on the internal EMC Connectrix® MP-8000B switches to provide a large, load-balanced pipe between the DCA and the SAN. Trunking is an additional, separately chargeable software function.

◆ "EMC Symmetrix VMAX storage system" - an EMC VMAX system with four storage engines, using:
  • EMC Virtual Provisioning™
  • TimeFinder/Snap for point-in-time copies
  • SRDF for remote replication
  • Solutions Enabler
  • Symmetrix Management Console (SMC)

EMC Greenplum DCA

The EMC Greenplum Data Computing Appliance (DCA) is a self-contained data warehouse solution that integrates all of the database software, servers, and switches necessary to perform big data analytics.

The EMC Greenplum DCA is a turn-key data warehouse solution that provides extremely high query and loading performance for analyzing large data sets. The EMC Greenplum DCA integrates the Greenplum database with compute, storage and network and is delivered racked and ready for immediate data loading and query execution.


The EMC Greenplum DCA runs the Greenplum Database relational database management system (RDBMS) software. The EMC Greenplum Database enables large data queries and analysis.

The DCA is offered in quarter-rack, half-rack, full-rack, and multiple-rack appliance configurations to achieve maximum flexibility and scalability for organizations faced with terabyte to petabyte scale data analyses. A full rack DCA configuration is shown in Figure 5.

Figure 5 Greenplum DCA components

PowerPath

PowerPath is host-based multipath software that provides high availability and load balancing across multiple Host Bus Adapter/Converged Network Adapter (HBA/CNA) ports. In a PowerPath configuration, each LUN is assigned to multiple HBA ports on a single server. PowerPath uses sophisticated algorithms to deliver I/O optimally down each path, to ensure the highest overall system performance. If a path fails, PowerPath continues to load balance down the remaining paths to each device, enabling the server to continue operation. PowerPath automatically re-enables failed paths when the failures are corrected.


PowerPath load balancing is an additional, separately-chargeable software product. Additional capabilities for data migration and data encryption can also be purchased for PowerPath.

While Linux Multi-Path I/O (MPIO) also delivers primitive load balancing and path failure protection, it is more difficult to manage than PowerPath. MPIO was not explored as an option for the SAN Mirror solution.
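As a quick sanity check of the multipathing state described above, PowerPath provides the powermt CLI; the commands below are an illustrative sketch based on PowerPath 5.5 for Linux:

    # List every PowerPath-managed device, its paths, and their states
    powermt display dev=all

    # Re-test failed paths and return any recovered ones to service
    powermt restore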

Trunking software

The EMC Connectrix trunking feature optimizes the use of bandwidth by allowing a group of inter-switch links (ISLs) to merge into a single logical link. Trunking is automatically implemented for any eligible ISLs after you install the Connectrix ISL Trunking license. The license must be installed on each switch that participates in trunking. The trunking license should be installed on both Connectrix MP-8000B switches within the DCA, and also on the SAN switches to which the DCA is connected.

In a fabric with numerous switches, you can increase the bandwidth between switches by enabling multiple physical ports to appear as a single port. The enabled physical ports form a trunking group in which traffic is distributed dynamically and in order at the frame level, thus achieving greater performance with fewer inter-switch links. Trunking groups are based on the user port number, with eight contiguous ports as one group, such as 0-7, 8-15, and 16-23.

Figure 6 shows how trunking can result in more throughput by distributing data over four ISLs with no congestion. In a fabric that does not have trunking capability, some paths would be congested and other paths under-utilized.

Figure 6 Connectrix fabric trunking avoids SAN bottlenecks
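Because the MP-8000B is Brocade-based, licensing and verifying trunking can be sketched with standard Fabric OS commands; the license key below is a placeholder, and the Connectrix documentation remains the authoritative procedure:

    # Install the ISL Trunking license on each participating switch
    licenseadd "xxxx-license-key-xxxx"

    # Confirm the license is present
    licenseshow

    # Display any trunk groups that have formed and their member ports
    trunkshow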


EMC Symmetrix VMAX storage system

The EMC Symmetrix VMAX system is the industry's premier high-end storage array. It enables massive consolidation and incremental scalability, providing all the benefits of tiered storage in a single platform, as well as the flexibility to address rapidly changing business requirements. For the most extreme and demanding storage environments, Symmetrix VMAX is a powerful solution that is remarkably easy to manage. Symmetrix VMAX systems are ideal for the most extreme application demands, where large-scale consolidation is required by the most information-intensive enterprises.

Virtual Provisioning

EMC Virtual Provisioning greatly simplifies data layout. Virtual Provisioning lets users make large volumes available to attached hosts without reserving large, unused capacities. Capacity is allocated as it is actually written. Data is transparently striped across all the resources of the Virtual Pool, which ensures the best performance for all applications.

Virtual Provisioning automatically reclaims "zeroed" capacity and returns it to the free capacity pool. This makes optimal use of storage resources by allocating only the capacity that is actually in use. As the system grows, it can be upgraded non-disruptively and data is transparently redistributed across all storage resources to ensure optimal performance of the system.
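As a hedged sketch of how thin devices are placed into a Virtual Provisioning pool with Solutions Enabler (the Symmetrix ID, device range, and pool name are all hypothetical):

    # Bind a range of thin devices (TDEVs) to an existing thin pool
    symconfigure -sid 1234 -cmd "bind tdev 0A00:0A3F to pool DCA_Pool;" commit

    # Verify the pool's bound devices and the capacity actually allocated
    symcfg -sid 1234 list -pool -thin -detail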

TimeFinder/Snap

EMC TimeFinder and Symmetrix Remote Data Facility (SRDF) are the most powerful suites of local and remote storage replication solutions available in the industry, enabling business continuance volumes for parallel processing activities like backup, testing and development, and local restore, as well as remotely replicated copies to guard against primary site disasters and outages. In fact, the TimeFinder and SRDF families are the most widely deployed set of local and remote replication solutions in the industry and are installed in tens of thousands of the most demanding environments worldwide. "SRDF" below provides more details on SRDF and which mode was used for this solution.

The TimeFinder family is a set of Symmetrix local replication solutions designed to non-disruptively create point-in-time copies of critical data. You can configure backup sessions, initiate copies, and terminate TimeFinder operations using EMC Solutions Enabler software, described below.


TimeFinder local replication solutions include TimeFinder/Clone and TimeFinder/Snap. TimeFinder/Clone creates full-device and extent-level point-in-time copies while TimeFinder/Snap creates pointer-based logical copies that consume less storage space on physical drives.

TimeFinder/Snap was used for the SAN Mirror solution described in this document. TimeFinder/Snap gives maximum flexibility with the lowest capacity requirement for point-in-time copies.

In an SRDF/A configuration, it is essential to enable Write Pacing on the VMAX before taking Snaps on the R2 VMAX. Write Pacing became available with the EMC Enginuity™ 5875 release; it protects SRDF/A sessions by slowing down host writes if the host write bandwidth is greater than the R2 VMAX can support. Since this could impact the performance of ongoing data warehouse loads, it is strongly advised to engage a Symmetrix Performance Guru when planning the SAN Mirror configuration.

Note: Symmetrix Performance Gurus can be located on the SPEED website, accessible to EMC customers, partners, and field personnel, at http://speed.corp.emc.com (Tools > Guru List). If you do not have access to this list, contact your local EMC Sales Representative.

SRDF

The Symmetrix Remote Data Facility (SRDF) is a business continuance solution that maintains a mirror image of data at the device level in Symmetrix arrays located in physically separate sites.

There are three modes of SRDF:

◆ Synchronous

◆ Asynchronous

◆ Adaptive copy

Asynchronous mode (SRDF/A) was used in this solution, as it provides a high level of consistency with minimal impact to system performance. The local and "remote" systems were immediately adjacent to each other, so the effective distance was 0 km. For longer distances, it is critical to correctly size the links between data centers to support the expected write workload.

SRDF/A creates a dependent-write consistent, point-in-time image on the target (R2) device, which is a short period of time behind the source (R1) device. Managed in sessions, SRDF/A transfers data in cycles, or delta sets, to ensure that data at the R2 site is dependent-write consistent.

The Symmetrix system acknowledges all writes to the R1 devices in exactly the same way as other non-R1 local devices. Host writes accumulate on the R1 side until the cycle time is reached, and are then transferred to the R2 Symmetrix in one delta set. The writes are then destaged to the R2 devices to create a permanent, consistent image of the R1 data.

Because writes are transferred in cycles, writes can be coalesced, and duplicate writes to the same tracks require only a single transfer across the links. This ensures optimal link bandwidth utilization.

EMC Enginuity™ Consistency Assist (ECA) ensures that snaps and remote replicas are write-order consistent across multiple devices and even across multiple Symmetrix VMAX systems. This is a required capability for SAN Mirror, to ensure recoverability of the database on the remote Symmetrix system.
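In SYMCLI terms, placing a device group in asynchronous mode and enabling consistency protection looks roughly like the following; the device group name dca_mirror_dg is hypothetical:

    # Switch the RDF device group to asynchronous (SRDF/A) mode
    symrdf -g dca_mirror_dg set mode async

    # Enable consistency protection for the SRDF/A session
    symrdf -g dca_mirror_dg enable

    # Check the session state and how far the R2 image lags the R1
    symrdf -g dca_mirror_dg query -rdfa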

Solutions Enabler

The EMC Solutions Enabler kit contains all the base management software that provides a host with SYMCLI commands and APIs to configure the Symmetrix and to control Symmetrix operations for SRDF and TimeFinder.

SYMCLI resides on the host system to monitor and control operations on Symmetrix storage arrays. SYMCLI commands are invoked from the host operating system via command line or scripts. SYMCLI commands invoke low-level channel communications to specialized gatekeeper devices on the Symmetrix.

SYMCLI is required to control TimeFinder and SRDF operations from the DCA. Because of this, EMC Solutions Enabler should be loaded on the database master servers in the DCA.
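Once Solutions Enabler is installed on the masters, the basic discovery sequence from the DCA is roughly as follows:

    # Build or refresh the SYMAPI database of attached Symmetrix arrays
    symcfg discover

    # List the arrays visible to this host through its gatekeeper devices
    symcfg list

    # List the Symmetrix devices presented to this host
    sympd list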

Symmetrix Management Console (SMC)

The EMC Symmetrix Management Console (SMC) is an intuitive, browser-based user interface that configures and manages Symmetrix system arrays. SMC presents the functionality of the Solutions Enabler SYMCLI (command line interface) in a browser interface. Use Symmetrix Management Console to:

◆ Manage Symmetrix Access Controls, user accounts, and permission roles

◆ Discover Symmetrix arrays


◆ Perform configuration operations (create devices, map and mask devices, set Symmetrix system attributes, set device attributes, set port flags, create SAVE device pools)

◆ Manage devices (change device configuration, set device status, reserve devices, duplicate devices, create/dissolve meta devices)

◆ Perform and monitor replication operations (TimeFinder/Snap, TimeFinder/Clone, SRDF, Open Replicator)

◆ Configure and manage Fully Automated Storage Tiering (FAST)

◆ Monitor alerts

◆ Monitor an application's performance

Symmetrix Management Console is preinstalled on the Symmetrix VMAX system Service Processor and can also run on a data center host. Licenses are available individually, or as part of a Symmetrix VMAX Management Integration bundle.


Considerations and best practices

This section contains things to consider and best practices for the following:

◆ "Performance"
◆ "Data integrity"

Performance

Connecting the DCA to SAN-based storage delivers many manageability and functionality benefits, and it is important to configure the storage properly to ensure system performance. It is essential to configure enough engines, ports, and disks to support the expected read and write workloads.

A full DCA can ingest up to 13.4 TB/hr of new data (almost 4 GB/s), and can scan at up to 23.5 GB/s. A system configured for this maximum load may require one or more dedicated VMAX systems. However, very few customers will push a DCA to its bandwidth limits. In many cases, an existing Symmetrix might be used to support the needs of the data warehouse. As long as the Symmetrix configuration has enough spare bandwidth, the data warehouse workload can co-exist with other production data. The optimal Symmetrix configuration will be a function of the customer's actual expected workload.

In SRDF configurations, it is critical to size the connection between VMAX systems properly. The number and speed of links will determine how much write data the system can sustain. Too few connections may significantly impact write performance.

Please consult a Symmetrix Performance Guru when planning a SAN Mirror configuration, to ensure that the VMAX will support the required bandwidth.

Note: Symmetrix Performance Gurus can be located on the SPEED website, accessible to EMC customers, partners, and field personnel, at http://speed.corp.emc.com (Tools > Guru List). If you do not have access to this list, contact your local EMC Sales Representative.

The DCA contains EMC Connectrix MP-8000B switches to support internal communication between servers. In a SAN Mirror configuration, the Fibre ports on the switches are configured to attach to a Fibre Channel SAN. Customers should purchase the optional SAN Trunking software for the switches to provide the greatest possible throughput. This requires an additional software license for the Connectrix switches.

PowerPath should be loaded on each internal server to provide the greatest possible bandwidth and resiliency for the SAN Mirror configuration. PowerPath is a chargeable feature. Standard Linux MPIO can also be used, but may be much more difficult to manage as PowerPath is aware of the Symmetrix devices in the VMAX. Configuring and managing MPIO for the DCA is outside of the scope of this document.

Data integrity

A SAN Mirror configuration moves the "mirror" database segment storage to the SAN. In normal operation, the mirror segments are only used for writes, to ensure resiliency in case a primary segment becomes unavailable. Query activity uses the primary segments on internal DCA disk, since that provides the highest performance.

For a particular segment of data, Greenplum database mirroring relies on the primary instance process to send writes over the IP interconnect to a mirror instance process on a different segment server host.

If a segment server host becomes unavailable, primary instances on that host will be taken out of operation and their corresponding mirrors activated on other segment hosts. Additionally, mirror instances on that host will no longer be able to receive replication updates and write them to SAN storage.


Since the SAN copy of the database is taken from the mirror, the SAN copy of the database will no longer be consistent. This is illustrated in Figure 7.

Figure 7 Segment failure creates inconsistent remote image

To ensure there is always a restartable copy of the database if a failure occurs, TimeFinder/Snap is used to create "gold copy" images periodically on the remote VMAX. The gold copy will be slightly behind the SRDF image, but could be brought to currency by re-applying updates to the remote database.

It is necessary to verify that there are no segment failures and that the system is operating normally when taking a snap, to ensure the gold copy is consistent. If a mirror segment is inoperable for any reason, no snap copy should be taken. This will ensure a recent, consistent gold copy on the remote VMAX.

Procedures for making and verifying a gold copy are documented in Appendix G, “Creating SNAPs on Remote Symmetrix for Disaster Recovery,” and in Appendix H, “Mounting and Checking a Symmetrix SNAP of the Greenplum Database.”


Chapter 2: Setup and Operation of SAN with VMAX

This chapter provides the following information on the setup and operation of SAN Mirror with VMAX. It details the test system that was used and gives configuration details for both the EMC Greenplum Data Computing Appliance (DCA) and the EMC Symmetrix VMAX system.

◆ Introduction to the SAN Mirror test configuration
◆ Detailed list of tested components
◆ Prerequisites
◆ Configuring for SAN Mirror
◆ Summary


Introduction to the SAN Mirror test configuration

For this test, a full-rack DCA was configured as two separate logical "half-rack" DCAs, each with eight segment servers. The "standby master" was configured as the primary master of the second logical DCA.

Each DCA was connected to the SAN, and its "SAN Mirror" was located on a different Symmetrix VMAX on the SAN.

The primary VMAX held the mirror copy of the primary DCA's data. The VMAX was configured to use Virtual Provisioning, and 2.7 TB volumes were created to match the volumes on the primary DCA. Those Symmetrix volumes were replicated by SRDF/A to a second, "remote" Symmetrix, which held copies of the data for disaster recovery. A 7 TB database, similar to what is used for TPC-H benchmarks, was created for testing.

Figure 8 shows the high-level layout of the tested solution. Customers will typically implement this solution using separate DCAs, in separate racks.

Figure 8 SAN Mirror test configuration architecture


Detailed list of tested components

The test configuration used for this document is detailed in Table 1 through Table 5, as follows:

◆ "EMC Greenplum DCAs"
◆ "VMAX"

EMC Greenplum DCAs

Table 1 and Table 2 list the Greenplum DCA components and software.

Table 1. Greenplum DCA components

  Component               Quantity
  Master server           1 (primary, no standby configured)
  Segment servers         8 (GP100, half-rack configuration)
  Interconnection Switch  2 (EMC Connectrix MP-8000B)
  Administration Switch   1 (Allied Telesis)

Table 2. Greenplum DCA software

  Software           Version   Comments
  EMC Greenplum-db   4.1.1.1   EMC Greenplum database software
  PowerPath          5.5       Strongly advised for multipathing and load balancing
  Solutions Enabler  7.2.1.0   Enables the DCA to issue TimeFinder and SRDF commands to the VMAX


VMAX

Table 3, Table 4, and Table 5 list the VMAX components and software.

Table 3. Primary VMAX hardware components

  Hardware                     Specifications                                            Quantity
  4-Engine VMAX                                                                          1
  Fibre Channel directors      8 Gb/s Fibre                                              24
  RF (Remote Fibre) directors  8 Gb/s connection for SRDF replication                    8
  15k RPM Fibre Channel disk   600 GB capacity, 15k RPM; configured RAID 5 7+1 and       192
                               used with Virtual Provisioning

Table 4. Remote VMAX hardware components

  Hardware                     Specifications                                            Quantity
  4-Engine VMAX                                                                          1
  Fibre Channel directors      8 Gb/s Fibre                                              24
  RF (Remote Fibre) directors  8 Gb/s connection for SRDF replication                    8
  15k RPM Fibre Channel disk   600 GB capacity, 15k RPM, enough to accommodate snaps     470
                               and remote replicas; configured RAID 5 7+1 and used
                               with Virtual Provisioning


Table 5. VMAX software

  Software                      Specifications                                           Quantity
  Enginuity                     5875.198 base Enginuity Operating Environment            2
  Virtual Provisioning          Allocates only the storage that is actually used and     2
                                stripes wide across all disks
  SRDF/S                        Not used for this paper                                  2
  SRDF/A                        Creates asynchronous remote replicas (this test used     2
                                zero-distance SRDF/A)
  TimeFinder/Clone              Not used for this paper                                  2
  TimeFinder/Snap               Creates pointer-based "gold copy" images of database     2
                                volumes
  Symmetrix Management Console  Graphical managing and monitoring interface for          2
                                Symmetrix VMAX systems


Prerequisites

It is assumed that the following steps have been taken prior to configuring the SAN Mirror solution; this guide therefore does not detail the installation, setup, and configuration of the components listed here. Further information can be found in the documents listed in "Related documentation."

◆ The EMC/Greenplum DCA should be installed and configured in the data center. Specific information on the initial configuration of the EMC/Greenplum hardware and software is described in the Greenplum Database 4.1 Installation Guide. The installation guide discusses the installation process for the DCA servers, switches and hosts. This guide also discusses the basic configuration process for the Greenplum database.

◆ EMC PowerPath should already be installed on the DCA segment servers and database masters. PowerPath is not included with the DCA and should be installed by EMC Professional Services on each server in the DCA.

◆ EMC Solutions Enabler should already be installed on the database masters. Solutions Enabler is not included with the DCA and should be installed by EMC Professional Services on the master and the standby master.

◆ VMAX systems should be installed and configured in the local data center, and also in the remote data center if SRDF is used to replicate data between data centers.

◆ SRDF should be installed, configured and replicating between the VMAX systems.

◆ The data center SAN should be installed and operational, with the VMAX systems attached to it.


Configuring for SAN Mirror

This section describes the steps necessary to implement SAN Mirror on an EMC Greenplum DCA. Scripts and sample output are provided in the appendices referenced within each subsection.

The following information is provided:

◆ "Configuring a new system"
◆ "Preparing the VMAX for SAN Mirror attachment"
◆ "Configuring the local DCA for the Symmetrix"
◆ "Configuring the remote DCA"
◆ "Operating a SAN Mirror configuration"

Configuring a new system

The steps needed to configure a new system are shown below. Each of them is discussed in detail after the high-level list. For the initial release of SAN Mirror, the steps are manual. Future releases of the DCA and of this document will provide tools to help automate this process.

Steps required to configure a new DCA for SAN Mirror are listed below:

1. Prepare the VMAX for SAN Mirror attachment.

a. Create Symmetrix volumes for DCA master and mirror databases.

b. Make volumes visible to the DCA Master and Segment servers.

c. Create SRDF mirrors of the Symmetrix volumes.

d. Create TimeFinder/Snap or Clone volumes on R2 VMAX.

2. Prepare the Greenplum DCA for SAN Mirror.

a. Load PowerPath on each server in the local and remote DCAs.

b. Install EMC Solutions Enabler on the master and standby master servers.

c. Configure internal Connectrix MP-8000B switches for SAN attachment.


d. Zone Fibre Channel over Ethernet (FCoE) to the Symmetrix.

e. Ensure Symmetrix volumes are seen by the DCA servers.

f. Format and mount Symmetrix volumes on mirror locations.

3. Move master databases to VMAX storage.

4. Create a test database utilizing the VMAX SAN Mirror configuration.

5. Test failover to the remote DCA to run directly off the R2 volumes (a sketch follows this list).

a. Split SRDF links.

b. Mount R2 volumes on Remote DCA.

c. Recover remote DCA internal volumes from R2 image.
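A hedged sketch of step 5, assuming an SRDF device group named dca_mirror_dg and illustrative device and mount names (the tested procedures are in Appendix I and Appendix J):

    # 5a. Split the SRDF links so the R2 devices become host-writable
    symrdf -g dca_mirror_dg split

    # 5b. On each remote DCA server, mount the R2 volumes on the mirror locations
    mount /dev/emcpowera /data1/mirror

    # 5c. Recover the remote DCA's internal volumes from the R2 image, then
    #     start the database as described in Appendix J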

Preparing the VMAX for SAN Mirror attachment

SYMCLI commands were used to configure the VMAX systems for this document, since the commands are easily scripted to provide repeatable operations.

The EMC Symmetrix Management Console (SMC) could also have been used for one-time setup or to produce a template of failover operations for later scripting.

This section contains the following information:

◆ "Creating volumes for DCA master and mirror databases"
◆ "Configuring SRDF copies"
◆ "Snaps"
◆ "Clones"

Creating volumes for DCA master and mirror databases

Each segment server supports two 2.7 TB database LUNs. In normal operation, one of these LUNs is used for a mirror database instance, and the other is used for a primary instance. The master server supports a single 2.1 TB LUN. To implement SAN Mirror, LUNs of the same size should be created on the VMAX and made visible to the servers. The mirror LUN may or may not be used as an additional primary in a SAN Mirror configuration.

The LUNs can be verified by issuing omreport storage vdisk on the segment servers. By issuing the gpssh -f gpssh.txt omreport storage vdisk command, it is possible to see the exact LUN configuration on each segment server, as shown below.
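For example (gpssh.txt is assumed to contain one segment server hostname per line):

    # Report the virtual disk (LUN) layout on a single segment server
    omreport storage vdisk

    # Run the same report on every segment server from the master
    gpssh -f gpssh.txt omreport storage vdisk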

Omreport output from the DCA and a script to create Symmetrix volumes are shown in Appendix C, “Creating VMAX Volumes for DCA Mirror Database.”

Configuring SRDF copies

To have a consistent, remote replica of the database, all of the DCA mirror volumes are replicated to the remote VMAX. This includes all volumes from the master server and each segment server.

To ensure consistency during SRDF operations, all of the SAN Mirror volumes must be part of the same SRDF group. Configurations that span SRDF groups or Symmetrix VMAX systems must use SRDF Consistency Groups, which is beyond the scope of this paper. See EMC SRDF/A and SRDF/A Multi-Session Consistency on Unix and Windows for more information about SRDF consistency groups.
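For flavor, creating the R1/R2 pairings within a single RDF group can be sketched as follows; the Symmetrix ID, group number, and pairs file are hypothetical, and Appendix D shows the tested commands:

    # devices.txt lists local (R1) and remote (R2) device numbers, one pair per line
    symrdf createpair -sid 1234 -rdfg 10 -file devices.txt -type RDF1 -establish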

If the primary site is lost for any reason, the database can be brought up on a remote DCA attached to the remote VMAX. The procedure to do this is described in Appendix J, "Starting EMC Greenplum Database from Mirror Segments" (that is, from the R2 VMAX).

The test system used SRDF/A to replicate data from the primary to the remote VMAX System. When using SRDF, it is important to ensure there is enough bandwidth between the systems to handle the expected write workload. A Symmetrix Performance Guru should be consulted to verify the configuration has adequate bandwidth.

Note: Symmetrix Performance Gurus can be located on the SPEED website, accessible to EMC customers, partners, and field personnel, at http://speed.corp.emc.com (Tools > Guru List). If you do not have access to this list, contact your local EMC Sales Representative.

To use TimeFinder/Snap on the remote VMAX, the Symmetrix must be configured for SRDF Device Write Pacing. With write pacing enabled, the primary Symmetrix will slow down host writes if the primary DCA's write workload is greater than the remote VMAX's ability to destage data and switch cycles. This preserves SRDF/A's ability to keep propagating data to the remote VMAX.

Sample SYMCLI commands and output from the SRDF configuration are shown in Appendix D, “Creating the SRDF Configuration.”


Snaps

The test configuration used TimeFinder/Snap to create point-in-time copies of the data on the remote VMAX. Snaps provide a flexible mechanism to make instant, consistent copies of the database volumes. As hosts write data, the original data is preserved in a "SAVE" pool, and pointers for the Snap volumes are changed to point to this preserved data. Only changed data is tracked by TimeFinder/Snap. This results in the lowest capacity overhead when creating multiple Snaps of the data.

Taking frequent Snaps allows the database to be quickly rolled back to a known good state if data corruption is discovered.

The SAVE pool must be sized to accommodate the expected capacity and bandwidth of changed data. It is considered best practice to configure SAVE devices across as many physical disks as possible in order to optimize the system's performance.

Sample commands and output are shown in Appendix G, “Creating SNAPs on Remote Symmetrix for Disaster Recovery.”
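As a minimal sketch of the snap cycle (the device group name is hypothetical, and exact pairing arguments depend on how the VDEV targets were associated with the group; Appendix G documents the full tested procedure):

    # Create a snap session from the R2 standard devices to their VDEV targets
    symsnap -g dca_r2_dg create

    # Activate the session consistently to fix the point-in-time image
    symsnap -g dca_r2_dg activate -consistent

    # Remove the session once the copy is no longer needed
    symsnap -g dca_r2_dg terminate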

Clones

TimeFinder/Clone is a facility that creates full copies of Symmetrix volumes. Clones are very useful when making copies of data for mounting and querying on a separate host, as the clone volumes can be placed on separate physical drives. By separating the clones from the source data, users can work on the clones without impacting production.

The tested configuration used TimeFinder/Snap instead of clones because multiple point-in-time copies of the data are required. Clones would require significantly more storage since each clone is a full copy of the source volume.

Configuring the local DCA for the Symmetrix

In order to attach a DCA to the SAN, it is necessary to change the configuration of the internal Connectrix switches, and to discover the WWNs of each server's Converged Network Adapters (CNAs). Once the DCA servers can recognize SAN storage, it is possible to move the master databases and mirrored segments to the SAN.

This section contains the following information:

◆ "Discovering the DCA server WWNs"
◆ "Enabling FCoE on the internal EMC Connectrix MP-8000B switches"
◆ "Changing mirror locations on the DCA (if needed)"
◆ "Moving master database for existing DCA"
◆ "Moving segment database mirrors for existing DCA"

Discovering the DCA server WWNs

Each master server and segment server contains a Brocade 1020 CNA for Fibre Channel over Ethernet (FCoE) connectivity to the SAN. The internal Connectrix MP-8000B switch strips the Ethernet headers when forwarding traffic to the SAN. Thus, issuing "bcu port -list | grep fcoe" on each DCA server will display the adapters' WWNs.

The output of the command on the test harness is shown in Appendix A, “Discovering DCA Internal Server WWNs.”
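Because the same command must run on every server, it can be driven from the master in one pass with gpssh; gpssh.txt is assumed to list the DCA server hostnames:

    # Show the FCoE port WWNs of the local Brocade 1020 CNA
    bcu port -list | grep fcoe

    # Collect the WWNs from all DCA servers at once
    gpssh -f gpssh.txt 'bcu port -list | grep fcoe'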

Enabling FCoE on the internal EMC Connectrix MP-8000B switches

FCoE must be enabled on the DCA to connect it to external SAN storage. Each CNA in each server will then show up as a separate initiator to be zoned on the SAN.

The DCA is configured with two EMC Connectrix MP-8000B switches. Each switch contains eight Fibre ports for SAN connection. FCoE must be enabled individually on each of the MP-8000B's internal Ethernet ports in order for the internal servers to appear on the SAN.

The procedure to accomplish this manually is shown in Appendix E, “Configuring DCA Internal Switch for SAN Mirror.” The procedure can be automated using Expect, which is a standard Linux utility on the DCA. A sample script is shown in Appendix F, “Automating DCA Internal SAN Configuration.”

Changing mirror locations on the DCA (if needed)

Users specify where the DCA stores primary and mirrored data segments at installation time, when gpinitsystem is run. The default is to create multiple data directories (/data1 and /data2), where each /data directory contains ./primary and ./mirror subdirectories. See the Greenplum Database Installation Guide, Chapter 3, for more information about default configurations.

In a SAN Mirror configuration, internal DCA storage should be mounted on the ./primary directories. SAN storage should be mounted on the ./san_mirror directories.


Storage for the master and standby master servers must also be moved to the VMAX to enable failover to the remote VMAX. Moving an operational master to the VMAX will require a short period of downtime as the database requires restart during these operations.

IMPORTANT! For the purposes of this paper, the SAN Mirror storage was mounted directly on the ./mirror directories. An upcoming release of the Greenplum database for DCA will streamline and automate the SAN Mirror installation process. Default mount points for SAN Mirror disk will be ./san_mirror.

A high-level diagram of the recommended configuration is shown in Figure 9.

Figure 9 SAN Mirror mount points on VMAX


If the initial database was created without mirrors, the gpaddmirrors utility can be run to dynamically add SAN mirrors to the existing database. More information on gpaddmirrors can be found in the Greenplum Database 4.1 Administrator's Guide.

Moving master database for existing DCA

Downtime is required to move the master database to VMAX storage. The test configuration did not use a standby master, so downtime had to be taken to manually copy data from the /data/master directory to the VMAX, and to mount the VMAX volume on the /data/master directory.
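At a high level, the manual move amounts to steps like the following (a hedged sketch; the VMAX device name and temporary mount point are illustrative):

gpstop -a                                 # stop the database (start of the downtime window)
mount /dev/emcpowerq /mnt/vmax_master     # temporarily mount the VMAX master volume
cp -a /data/master/. /mnt/vmax_master/    # copy the master data directory to the VMAX
umount /mnt/vmax_master
mount /dev/emcpowerq /data/master         # remount the VMAX volume on /data/master
gpstart -a                                # restart the database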

For existing DCAs with mirrored master databases, it is possible to remove and reinitialize the standby master server on VMAX storage. When this process is complete, the database can be failed over to the standby master while the primary master server is moved to VMAX. This greatly reduces the downtime required to implement a SAN Mirror configuration.

The Greenplum command gpinitstandby can be used to change between the master server and the standby master.
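For example, a hedged sketch of the standby shuffle (the standby hostname smdw and the master data directory follow common DCA conventions, but are assumptions here):

gpinitstandby -r                             # remove the current standby master
gpinitstandby -s smdw                        # re-initialize the standby on VMAX-backed storage
gpactivatestandby -d /data/master/gpseg-1    # promote the standby while the old master is moved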

Moving segment database mirrors for existing DCA

If SAN Mirror is being added to a pre-existing DCA configuration, where both the mirrors and the primaries reside on the DCA, the gpmovemirrors utility can be used to move the existing mirrors to external SAN storage. This is an online process for the segment servers. The gpmovemirrors utility ships with Greenplum database release 4.1.1.1.

When using gpmovemirrors, create an input file that specifies the host address, port, and system filespace location of the current mirror, and the host address, port, replication port, and system filespace location of the new mirror.

The input file should be a plain text file with the format:

[<filespace1_fsname>[:<filespace2_fsname>:...]]
<old_address>:<port>:<system_filespace_location> <new_address>:<port>:<replication_port>:<system_filespace_location>[:<fselocation>:...]

For example:

sdw1-1:50000:/data1/mirror/gpseg11 sdw1-1:50000:51000:/data1/san_mirror/gpseg11

The host address, ports, and system filespace locations can be found in the gp_segment_configuration and pg_filespace tables. For more information, refer to the Greenplum Database 4.1 Administrator's Guide.
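As a hedged sketch, the lookup and the move can be driven as follows (the input file name is illustrative; the catalog columns are per Greenplum 4.1):

# List the current mirror hosts, ports, and filespace locations
psql -d postgres -c "
SELECT c.hostname, c.port, c.replication_port, f.fselocation
FROM   gp_segment_configuration c
JOIN   pg_filespace_entry f ON f.fsedbid = c.dbid
WHERE  c.role = 'm';"

# Move the mirrors listed in the input file to SAN storage
gpmovemirrors -i san_mirror_moves.txt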


IMPORTANT! It is critical that all of the mirrors reside on the VMAX to ensure a recoverable database on the remote DCA.

Configuring the remote DCA

The mount points on the remote DCA should be the same as on the primary DCA, with internal DCA storage on the ./primary directory for each /data directory, and VMAX storage mounted on the ./san_mirror directory.

For the purposes of this document, VMAX volumes were mounted directly on the ./mirror directory. This provided the simplest configuration for failover and backup.

Configuring the primary and remote sides identically allows the database to recover on the remote side (that is, on the R2 VMAX), as explained in Appendix J, “Starting EMC Greenplum Database from Mirror Segments.”

Operating a SAN Mirror configuration

The external VMAX storage is not read from in normal operation. Read requests are satisfied by the primary database segments, whose data reside on internal DCA storage.

By replicating the mirrored segments with SRDF, a copy of the data is propagated to a remote site. The database can then be brought up on the mirrored segments at the remote site if the primary site fails.

However, SRDF alone does not guard against the possibility of data corruption. To provide an extra level of protection, TimeFinder/Snap is used to take periodic copies of the database at the remote site. This lets system administrators fall back to the last known good copy of the database if the database becomes unusable for any reason.

By default, the DCA will work from the primary copies of the data on the remote DCA. To bring TimeFinder and SRDF copies up on the remote DCA, the remote database must be configured to use the SAN Mirror devices. This requires marking the primary segments as "down" on the remote DCA prior to starting the database. Once the database is successfully started, the gprecoverseg utility can be used to resynchronize the primary segments with the SAN Mirror copies.


Two tables within the GP database contain the configuration and location of the GP segment databases. These tables are critical to SAN Mirror operations that bring the database up on a different DCA. They are gp_segment_configuration and pg_filespace_entry.

The gp_segment_configuration table maintains information about what database is located on which segment server, as well as what role each segment database has (that is, primary or mirror). In order to configure the GP database to use only the mirror segments, the gp_segment_configuration table needs to be modified.
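The following is a hypothetical sketch of that modification, shown only to illustrate the idea; the supported, step-by-step procedure is in Appendix J, and the column names are per the Greenplum 4.1 catalog:

# Against a database started in maintenance mode (illustrative only)
psql -d postgres -c "
SET allow_system_table_mods = 'dml';
-- mark the internal primary segments down ...
UPDATE gp_segment_configuration SET status = 'd' WHERE role = 'p' AND content >= 0;
-- ... and let the SAN mirror segments act as primaries
UPDATE gp_segment_configuration SET role = 'p' WHERE preferred_role = 'm' AND content >= 0;"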

The pg_filespace_entry table maintains the filesystem and directory location of each of the segment databases. It is important for manageability to have consistent, easy-to-understand directory structures for the primary and mirror instances.

The procedure to bring up the database on the remote VMAX, and to copy data from the remote VMAX to the remote DCA, is detailed in Appendix J, “Starting EMC Greenplum Database from Mirror Segments” (that is, on the R2 VMAX).

The following are further described in this section:

◆ “Using TimeFinder/SNAP to create point-in-time images on the remote VMAX” on page 43

◆ “Recovering a corrupt database using TimeFinder/SNAP and SRDF” on page 44

◆ “Failover” on page 45

◆ “Failback” on page 46

Using TimeFinder/SNAP to create point-in-time images on the remote VMAX

In normal operation, a DCA serves user requests from the primary copies of the data and writes updates to the SAN Mirror copies to ensure consistency and resiliency of the database. In a SAN Mirror configuration, additional TimeFinder/Snap images are created at the remote site to provide quick recovery in the event of data corruption.

The procedure outlined in Appendix G, “Creating SNAPs on Remote Symmetrix for Disaster Recovery” and Appendix H, “Mounting and Checking a Symmetrix SNAP of the Greenplum Database” can be scripted and run at regular intervals to ensure easy recoverability of the database. The frequency with which copies are made is a function of the customer's business requirements.
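A minimal sketch of such a script is shown below, assuming the device group and snap pool names used in the appendices:

#!/bin/bash
# Hedged sketch of a periodic snap cycle; names are from the test configuration.
DG=DCA2880_RDF2_SNAP_1
POOL=R2_SNAP_POOL

symsnap -g "$DG" create -svp "$POOL" -noprompt    # create the snap session
symsnap -g "$DG" activate -consistent -noprompt   # activate a consistent point-in-time image
symsnap -g "$DG" query                            # confirm the session state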


Typical steps in normal operation are shown below:

1. Data is continually replicated to the remote VMAX using SRDF/A.

2. TimeFinder/Snap is used to take periodic copies of the data on the remote VMAX, as long as all of the DCA’s primary and mirrored segments are operational. See Appendix G, “Creating SNAPs on Remote Symmetrix for Disaster Recovery.”

3. Each Snap image is verified on the remote DCA. The steps listed below are detailed in Appendix H, “Mounting and Checking a Symmetrix SNAP of the Greenplum Database.” A hedged sketch of these checks follows this list.

a. Verify the current master server; use that master image to obtain all current configuration data.

b. Mount the Snap volumes to the remote DCA.

c. Verify that all segment servers are operational and reporting correct status.

d. Run gpcheckcat to verify that the database is correct.

4. Older Snaps may be deleted, if desired, once the new Snap is verified.
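The checks in steps 3b through 3d can be sketched as follows (a hedged outline; the host file and database names are illustrative):

gpssh -f seg_srvr_list 'mount | grep san_mirror'               # 3b: snap volumes mounted on the remote DCA
psql -d postgres -c 'SELECT * FROM gp_segment_configuration;'  # 3c: all segments up, in the expected roles
gpcheckcat                                                     # 3d: catalog consistency check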

Recovering a corrupt database using TimeFinder/SNAP and SRDF

If the database becomes corrupted for any reason, it can be restored to a previously snapped image. Since the snaps are taken on the remote VMAX, the procedure to restore the primary site is as shown below; a hedged SYMCLI sketch follows these steps. Details are shown in Appendix I, “Recovering a DCA Using a SNAP Session and SRDF.”

1. Stop the primary database and unmount volumes on primary DCA.

2. Split the SRDF devices so the snap can be restored to the R2 volumes.

3. Restore the Snaps to the R2 volumes, then terminate the Snap restore session.

4. Restore the SRDF R2 devices to the R1 devices, to restore the R1 image to a known good point.

5. Remount the R1 devices on the primary DCA.


6. Start the database in maintenance mode on the primary DCA and mark the primary segments as "down" so the database will come up on the mirrored segments.

7. Restart the database normally on the primary DCA.

8. Type gprecoverseg -F to recover the primary volumes from the mirrors.

9. Verify that all segments have synchronized by typing gprecoverseg -m.

10. Either restart the database or type gprecoverseg -r to move primary activity to internal primary volumes.
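The SRDF and TimeFinder portion of the recovery (steps 2 through 4) can be sketched as follows, using the device group name from the appendices; the full, tested procedure is in Appendix I:

symrdf -g DCA2880_RDF2_SNAP_1 split -noprompt                  # step 2: split the SRDF pairs
symsnap -g DCA2880_RDF2_SNAP_1 restore -noprompt               # step 3: restore the snap to the R2 volumes
symsnap -g DCA2880_RDF2_SNAP_1 terminate -restored -noprompt   #         end the snap restore session
symrdf -g DCA2880_RDF2_SNAP_1 restore -noprompt                # step 4: restore the R2 image back to the R1s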

Failover

The database can be brought up on the remote DCA by running a series of SRDF and Greenplum commands. The step-by-step procedure is detailed in Appendix J, “Starting EMC Greenplum Database from Mirror Segments” (that is, on the R2 VMAX), and is summarized below. A hedged SYMCLI sketch of the storage failover follows the list.

1. Use SYMCLI commands to fail over the storage to the remote site.

2. Mount the R2 volumes to the remote DCA and start the database in maintenance mode.

3. Mark the remote DCA's primary segments as "down" and the mirrored segments as "up" so the database will come up on the mirrored (SAN) segments.

4. Restart the database normally.

5. Type gprecoverseg -F to recover the remote DCA's primary (internal) storage from mirrored (SAN) storage.

6. Verify that all segments have synchronized by typing gprecoverseg -m.

7. Either restart the database or type gprecoverseg -r to move primary activity to internal primary volumes.

8. When the primary site is restored, begin synchronizing from the remote VMAX to the primary VMAX.
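A hedged sketch of the storage side of step 1, run against the RDF2 device group used in the appendices:

symrdf -g DCA2880_RDF2_SNAP_1 failover -noprompt   # make the R2 devices read/write at the remote site
symrdf -g DCA2880_RDF2_SNAP_1 query                # confirm the pair state is Failed Over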


Failback

Failback from the remote to the primary site is simply the reverse of the failover process. A high-level description is listed below; a hedged SYMCLI sketch follows the list.

1. Ensure the primary DCA has unmounted the mirrored segments from the primary VMAX.

2. Synchronize the primary VMAX from the remote VMAX. This can be done while the database is running on the remote VMAX.

3. Use SYMCLI or the Symmetrix Management Console to issue SRDF commands to fail the database back to the primary site.

4. Mount the R1 volumes on the primary DCA and start the database in maintenance mode.

5. Mark the primary DCA's primary segments as "down" and the mirrored segments as "up" so the database will come up on the mirrored (SAN) segments.

6. Restart the database normally.

7. Type gprecoverseg -F to recover the DCA's primary (internal) volumes from the mirrored (SAN) segments.

8. Verify that all segments have synchronized by typing gprecoverseg -m.

9. Either restart the database or type gprecoverseg -r to move primary activity to internal primary volumes.
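A hedged sketch of the storage side of steps 2 and 3 (same device group as in the failover sketch):

symrdf -g DCA2880_RDF2_SNAP_1 update -noprompt     # step 2: resynchronize R1 from R2 while the remote runs
symrdf -g DCA2880_RDF2_SNAP_1 failback -noprompt   # step 3: return read/write operation to the R1 side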


Summary

The EMC Greenplum database offers industry-leading capability for "big data" analysis, allowing customers to make precise business decisions more quickly. The EMC Greenplum Data Computing Appliance (DCA) is a self-contained system designed to produce optimal performance using commodity components. It contains all the necessary hardware and software and is a complete data warehouse appliance.

By using SAN Mirror to place the DCA mirrored databases on external storage, users leverage the capabilities of external storage to augment the performance and simplicity of the DCA. The EMC Symmetrix VMAX brings robust multipathing, replication, and data management to the Greenplum database, facilitating disaster recovery and enabling gold copies of the database that guard against corruption of the data warehouse.

The procedure to place SAN copies on external storage is documented here as a manual, step-by-step process. Following these procedures enables SAN Mirror to run on external VMAX arrays. Future releases of the Greenplum database will include tools to simplify SAN Mirror implementation.


Appendix A: Discovering DCA Internal Server WWNs

This appendix contains the following information:

◆ Discovering DCA internal server WWNs ...................................... 50


Discovering DCA internal server WWNs

By issuing the bcu port --list command on each server, the WWNs on each segment server and the master server are displayed. These should be used to create the mapping and masking files for the Symmetrix.

Issue the command below to display all the WWNs on the DCA. The sed command removes the colons so that it is easier to cut and paste the WWNs into Symmetrix configuration scripts.

gpssh -f gpssh.txt 'bcu port --list |grep fcoe' | sort -k1 | sed -e 's/://g'

Figure 10 shows a sample output:

Figure 10 Sample output


Appendix B: Verifying DCA Volume Sizes

This appendix contains the following information:

◆ Verifying DCA volume sizes ............................................................ 52


Verifying DCA volume sizes

The following command was used to inspect the volume sizes on the segment servers:

gpssh -f gpssh.txt omreport storage vdisk

The following is sample output from one server:

[root@mdw ~]# gpssh -f gpssh.txt omreport storage vdisk
[sdw4] List of Virtual Disks in the System
[sdw4]
[sdw4] Controller PERC H700 Integrated (Slot 4)
[sdw4] ID                          : 0
[sdw4] Status                      : Ok
[sdw4] Name                        : Virtual Disk 0
[sdw4] State                       : Ready
[sdw4] Hot Spare Policy violated   : Not Assigned
[sdw4] Virtual Disk Bad Blocks     : No
[sdw4] Secured                     : Not Applicable
[sdw4] Progress                    : Not Applicable
[sdw4] Layout                      : RAID-5
[sdw4] Size                        : 48.01 GB (51548651520 bytes)
[sdw4] Device Name                 : /dev/sda
[sdw4] Bus Protocol                : SAS
[sdw4] Media                       : HDD
[sdw4] Read Policy                 : Adaptive Read Ahead
[sdw4] Write Policy                : Write Back
[sdw4] Cache Policy                : Not Applicable
[sdw4] Stripe Element Size         : 128 KB
[sdw4] Disk Cache Policy           : Disabled
[sdw4]
[sdw4] ID                          : 1
[sdw4] Status                      : Ok
[sdw4] Name                        : Virtual Disk 1
[sdw4] State                       : Ready
[sdw4] Hot Spare Policy violated   : Not Assigned
[sdw4] Virtual Disk Bad Blocks     : No
[sdw4] Secured                     : Not Applicable
[sdw4] Progress                    : Not Applicable
[sdw4] Layout                      : RAID-5
[sdw4] Size                        : 2,743.86 GB (2946202337280 bytes)
[sdw4] Device Name                 : /dev/sdb
[sdw4] Bus Protocol                : SAS
[sdw4] Media                       : HDD
[sdw4] Read Policy                 : Adaptive Read Ahead
[sdw4] Write Policy                : Write Back
[sdw4] Cache Policy                : Not Applicable
[sdw4] Stripe Element Size         : 128 KB
[sdw4] Disk Cache Policy           : Disabled
[sdw4]
...... Data cut due to length of output


Appendix C: Creating VMAX Volumes for DCA Mirror Database

This appendix contains the following information:

◆ Creating VMAX volumes for DCA mirror database .................... 54
◆ Assigning volumes to the DCA segment servers.......................... 56


Creating VMAX volumes for DCA mirror database

The following command was used to create Symmetrix virtual (thin) devices to assign to the DCA. The specified volume size was 2.7 TB, to match the DCA. Actual capacity is only allocated as it is required.

[root@mdw ~]# symconfigure -sid 4467 -cmd "create dev count=18, size=2700GB, emulation=fba, binding to pool=DCA_R1_Pool, config=tdev, meta_config=striped;" commit

The following is sample output:

Execute a symconfigure operation for symmetrix '000192604467' (y/[n]) ? y

A Configuration Change operation is in progress. Please wait...

Establishing a configuration change session...............Established.
Processing symmetrix 000192604467
Performing Access checks..................................Allowed.
Checking Device Reservations..............................Allowed.
Initiating COMMIT of configuration changes................Started.
Committing configuration changes..........................Queued.
COMMIT requesting required resources......................Obtained.
Step 014 of 050 steps.....................................Executing.
Step 040 of 177 steps.....................................Executing.
Step 044 of 177 steps.....................................Executing.
Step 044 of 177 steps.....................................Executing.
Step 081 of 177 steps.....................................Executing.
Step 084 of 177 steps.....................................Executing.
Step 088 of 177 steps.....................................Executing.
Step 091 of 177 steps.....................................Executing.
Step 093 of 177 steps.....................................Executing.
Step 093 of 177 steps.....................................Executing.
Step 107 of 177 steps.....................................Executing.
Step 109 of 177 steps.....................................Executing.
Step 109 of 177 steps.....................................Executing.
Step 128 of 177 steps.....................................Executing.
Step 139 of 177 steps.....................................Executing.
Step 139 of 177 steps.....................................Executing.
Step 139 of 177 steps.....................................Executing.
Step 139 of 177 steps.....................................Executing.
Step 139 of 177 steps.....................................Executing.
Step 139 of 177 steps.....................................Executing.
Local:  COMMIT............................................Done.
Binding devices...........................................Done.

New symdevs: 025B:0266 [Striped meta, head 025B, member size 262668 cyl]
New symdevs: 0267:0272 [Striped meta, head 0267, member size 262668 cyl]
New symdevs: 0273:027E [Striped meta, head 0273, member size 262668 cyl]


New symdevs: 027F:028A [Striped meta, head 027F, member size 262668 cyl]
New symdevs: 028B:0296 [Striped meta, head 028B, member size 262668 cyl]
New symdevs: 0297:02A2 [Striped meta, head 0297, member size 262668 cyl]
New symdevs: 02A3:02AE [Striped meta, head 02A3, member size 262668 cyl]
New symdevs: 02AF:02BA [Striped meta, head 02AF, member size 262668 cyl]
New symdevs: 02BB:02C6 [Striped meta, head 02BB, member size 262668 cyl]
New symdevs: 02C7:02D2 [Striped meta, head 02C7, member size 262668 cyl]
New symdevs: 02D3:02DE [Striped meta, head 02D3, member size 262668 cyl]
New symdevs: 02DF:02EA [Striped meta, head 02DF, member size 262668 cyl]
New symdevs: 02EB:02F6 [Striped meta, head 02EB, member size 262668 cyl]
New symdevs: 02F7:0302 [Striped meta, head 02F7, member size 262668 cyl]
New symdevs: 0303:030E [Striped meta, head 0303, member size 262668 cyl]
New symdevs: 030F:031A [Striped meta, head 030F, member size 262668 cyl]
New symdevs: 031B:0326 [Striped meta, head 031B, member size 262668 cyl]
New symdevs: 0327:0332 [Striped meta, head 0327, member size 262668 cyl]
Terminating the configuration change session..............Done.

The configuration change session has successfully completed.


Assigning volumes to the DCA segment servers

The following symaccess commands were used to create storage groups and initiator groups for the master and segment servers in the DCA, a port group for the Symmetrix, and the masking views that assign the devices to the servers. WWNs from the output above were cut and pasted into the script.

symaccess -sid 4467 create -name MDW1 -type storage devs 25B,267
symaccess -sid 4467 create -name SDW1 -type storage devs 273,27F
symaccess -sid 4467 create -name SDW2 -type storage devs 28B,297
symaccess -sid 4467 create -name SDW3 -type storage devs 2A3,2AF
symaccess -sid 4467 create -name SDW4 -type storage devs 2BB,2C7
symaccess -sid 4467 create -name SDW5 -type storage devs 2D3,2DF
symaccess -sid 4467 create -name SDW6 -type storage devs 2EB,2F7
symaccess -sid 4467 create -name SDW7 -type storage devs 303,30F
symaccess -sid 4467 create -name SDW8 -type storage devs 31B,327

symaccess -sid 4467 create -name MDW1 -type initiator -wwn 1000000533482f24
symaccess -sid 4467 -type initiator add -name MDW1 -wwn 1000000533482f25
symaccess -sid 4467 create -name SDW1 -type initiator -wwn 100000053348159a
symaccess -sid 4467 -type initiator add -name SDW1 -wwn 100000053348159b
symaccess -sid 4467 create -name SDW2 -type initiator -wwn 10000005334817e8
symaccess -sid 4467 -type initiator add -name SDW2 -wwn 10000005334817e9
symaccess -sid 4467 create -name SDW3 -type initiator -wwn 10000005334819b8
symaccess -sid 4467 -type initiator add -name SDW3 -wwn 10000005334819b9
symaccess -sid 4467 create -name SDW4 -type initiator -wwn 100000053348146c
symaccess -sid 4467 -type initiator add -name SDW4 -wwn 100000053348146D
symaccess -sid 4467 create -name SDW5 -type initiator -wwn 1000000533482554
symaccess -sid 4467 -type initiator add -name SDW5 -wwn 1000000533482555
symaccess -sid 4467 create -name SDW6 -type initiator -wwn 1000000533481f58
symaccess -sid 4467 -type initiator add -name SDW6 -wwn 1000000533481f59
symaccess -sid 4467 create -name SDW7 -type initiator -wwn 100000053348256c
symaccess -sid 4467 -type initiator add -name SDW7 -wwn 100000053348256D
symaccess -sid 4467 create -name SDW8 -type initiator -wwn 1000000533481f48
symaccess -sid 4467 -type initiator add -name SDW8 -wwn 1000000533481f49

symaccess -sid 4467 create -name DCA -type port -dirport 6g:0,8g:0,10g:0,12g:0,5f:0,7f:0,9f:0,11f:0

symaccess -sid 4467 create view -name MDW1 -sg MDW1 -ig MDW1 -pg DCA
symaccess -sid 4467 create view -name SDW1 -pg DCA -ig SDW1 -sg SDW1
symaccess -sid 4467 create view -name SDW2 -pg DCA -ig SDW2 -sg SDW2
symaccess -sid 4467 create view -name SDW3 -pg DCA -ig SDW3 -sg SDW3
symaccess -sid 4467 create view -name SDW4 -pg DCA -ig SDW4 -sg SDW4
symaccess -sid 4467 create view -name SDW5 -pg DCA -ig SDW5 -sg SDW5
symaccess -sid 4467 create view -name SDW6 -pg DCA -ig SDW6 -sg SDW6
symaccess -sid 4467 create view -name SDW7 -pg DCA -ig SDW7 -sg SDW7
symaccess -sid 4467 create view -name SDW8 -pg DCA -ig SDW8 -sg SDW8
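As a hedged sanity check after masking (the view name is one of those created above):

symaccess -sid 4467 show view SDW1    # confirm the view ties the expected initiator, port, and storage groups
symaccess -sid 4467 list logins       # confirm the DCA CNAs have logged in on the FA ports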


Appendix D: Creating the SRDF Configuration

This appendix contains the following information:

◆ Creating the SRDF configuration..................................................... 58


Creating the SRDF configuration

The SRDF configuration between the VMAX systems was created with the following commands:

symrdf addgrp -label dca_rdfg -rdfg 1 -sid 04467 -dir 5h -remote_rdfg 1 -remote_sid 04452 -remote_dir 5h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 6h -remote_dir 6h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 7h -remote_dir 7h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 8h -remote_dir 8h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 9h -remote_dir 9h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 10h -remote_dir 10h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 11h -remote_dir 11h
symrdf modifygrp -label dca_rdfg -sid 04467 -add -dir 12h -remote_dir 12h

symrdf -file rdf_pairs.txt -sid 04467 -rdfg 1 -noprompt -v createpair -type r1 -establish -rdf_mode sync -g DCA_rdfg

The file rdf_pairs.txt contained the following lines, listing the device pairs to create:

025B 0405
0267 0411
0273 041D
027F 0429
028B 0435
0297 0441
02A3 044D
02AF 0459
02BB 0465
02C7 0471
02D3 047D
02DF 0489
02EB 0495
02F7 04A1
0303 04AD
030F 04B9
031B 04C5
0327 04D1
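A hedged check that the new pairs reach the Synchronized state before the configuration is relied upon:

symrdf -file rdf_pairs.txt -sid 04467 -rdfg 1 verify -synchronized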


Appendix E: Configuring DCA Internal Switch for SAN Mirror

This appendix contains the following information:

◆ Configuring DCA internal switch for SAN Mirror....................... 60


Configuring DCA internal switch for SAN Mirror

The DCA uses dual Connectrix MP-8000B switches for internal communication. Every segment server has a 10 Gb Ethernet attachment to each of the switches. Each switch also has four 8 Gb Fibre Channel ports to connect to an external SAN. Normally, these ports are not used. In a SAN Mirror configuration, they must be configured to connect to the Symmetrix VMAX arrays.

The number of ports required is a function of the SAN Mirror workload. Consult a Symmetrix Performance Guru to help determine the correct number of ports.

Note: Symmetrix Performance Gurus can be located on the SPEED website, accessible to EMC customers, partners, and field personnel, at http://speed.corp.emc.com, Tools > Guru List. If you do not have access to this list, contact your local EMC Sales Representative.

This section details the procedure to configure the switches. The next section provides a script that could be used to automate the switch configuration.

1. Log in to switch 1 on the DCA.

ssh 172.28.0.170

username = admin

password = changeme (default password)

2. If connecting the switch to an existing fabric make sure to change the domain ID of the switch to one that is not already assigned in the fabric. The default ID for the switches is 1.

3. Enter into the configuration shell.

cmsh

4. Enter into config mode.

configure terminal (shorthand: conf t)

5. Set the protocol to spanning-tree Rapid Spanning Tree (rstp) and the bridge priority to a high number so the switch will not become the root switch when plugged into the fabric.

protocol spanning-tree rstp
bridge-priority 8192


Sample command screen output:

i-sw-1(config)# protocol spanning-tree rstp
i-sw-1(conf-rstp)# bridge-priority 8192
i-sw-1(conf-rstp)# exit
i-sw-1(config)#

6. Create a CEE-MAP for the FCoE protocol and set the priority-group table weights.

cee-map fcoe-vlan
priority-group-table 1 weight 40 pfc
priority-group-table 2 weight 60
priority-table 2 2 2 1 2 2 2 15.0
exit

Sample command screen output:

i-sw-1(config)#cee-map fcoe-vlan
i-sw-1(conf-ceemap)#priority-group-table 1 weight 40 pfc
i-sw-1(conf-ceemap)#priority-group-table 2 weight 60
i-sw-1(conf-ceemap)#priority-table 2 2 2 1 2 2 2 15.0
i-sw-1(conf-ceemap)#exit
i-sw-1(config)#

7. Set up the lldp protocol.

protocol lldp
advertise dcbx-fcoe-app-tlv
advertise dcbx-fcoe-logical-link-tlv
exit

Sample command screen output:

i-sw-1(config)#protocol lldp
i-sw-1(conf-lldp)#advertise dcbx-fcoe-app-tlv
i-sw-1(conf-lldp)#advertise dcbx-fcoe-logical-link-tlv
i-sw-1(conf-lldp)#exit
i-sw-1(config)#

8. Set up a VLAN for the FCoE traffic.

Note: Only one FCoE vlan is allowed per switch.

The VLAN number can be any number other than 1 or 199. The default setting on the DCA is to route network traffic through VLAN 199 on switch 1, and 299 on switch 2. For this environment, FCoE traffic is routed through VLAN 1002 on switch 1 and 2002 on switch 2.

interface vlan 1002    (on switch 1; use 2002 on switch 2)
description FCoE VLAN
fcf forward


exit

Sample command output screen capture:

i-sw-1(config)#interface vlan 1002
i-sw-1(conf-if-vl-1002)#description FCoE VLAN
i-sw-1(conf-if-vl-1002)#fcf forward
i-sw-1(conf-if-vl-1002)#exit
i-sw-1(config)#

9. Set up VLAN rules for FCoE traffic and add the rules to a group for assignment to the interfaces. Group 2 is assigned to the Ethernet VLAN rules configured on the switch by default at the factory.

vlan classifier rule 1 proto fip encap ethv2
vlan classifier rule 2 proto fcoe encap ethv2
vlan classifier group 1 add rule 1
vlan classifier group 1 add rule 2

Sample command output screen capture:

i-sw-1(config)#vlan classifier rule 1 proto fip encap ethv2
i-sw-1(config)#vlan classifier rule 2 proto fcoe encap ethv2
i-sw-1(config)#vlan classifier group 1 add rule 1
i-sw-1(config)#vlan classifier group 1 add rule 2
i-sw-1(config)#

10. Configure each of the connected network ports. For a full rack, ports 0/0 through 0/17 are used. For a half rack, ports 0/0 through 0/7, plus 0/16 and 0/17, are used. Ports 0/16 and 0/17 are for the master and standby master servers; ports 0/0 through 0/15 are for the segment servers. Repeat the commands below for each interface, incrementing the "int te 0/#" port number each time.

int te 0/0
switchport
switchport mode converged
vlan classifier activate group 2 vlan 199
vlan classifier activate group 1 vlan 1002
no shutdown
lldp fcoe-priority-bits 0x8
spanning-tree edgeport
spanning-tree edgeport bpdu-guard
cee fcoe-vlan
exit

Sample command output screen capture:

i-sw-1(config)#int te 0/0
i-sw-1(conf-if-te-0/0)#switchport


i-sw-1(conf-if-te-0/0)#switchport mode converged
i-sw-1(conf-if-te-0/0)#vlan classifier activate group 2 vlan 199
i-sw-1(conf-if-te-0/0)#vlan classifier activate group 1 vlan 1002
i-sw-1(conf-if-te-0/0)#no shutdown
i-sw-1(conf-if-te-0/0)#lldp fcoe-priority-bits 0x8
i-sw-1(conf-if-te-0/0)#spanning-tree edgeport
i-sw-1(conf-if-te-0/0)#spanning-tree edgeport bpdu-guard
i-sw-1(conf-if-te-0/0)#cee fcoe-vlan
i-sw-1(conf-if-te-0/0)#exit

11. Exit out of the configuration terminal.

i-sw-1(config)# exit
i-sw-1#

12. Create a copy of the startup-config file so that you have the default configuration in the event that you have to revert back.

copy startup-config orig-startup-config

13. Copy the running-config to the startup-config so that you don’t lose the settings on a reboot or power outage.

copy running-config startup-config

Sample command output screen capture:

i-sw-1# copy startup-config orig-startup-config
Building configuration...
i-sw-1# copy running-config startup-config
Overwrite the startup config file (y/n): y
Building configuration...
i-sw-1# dir
Contents of flash://
-rw-r----- 4589 Wed Apr 27 18:47:25 2011 orig-startup-config
i-sw-1#

14. Exit out of the cmsh (CEE) shell back into the switch shell.

exit

Sample screen capture:

i-sw-1#
i-sw-1#exit
i-sw-1:admin>

15. To validate that the changes made are correct, check the FCoE port logins to make sure the proper number of WWNs are logging in to the switch.

fcoe --loginshow


A sample command output screen capture is shown below:

i-sw-1:admin> fcoe --loginshow
================================================================================
Port  Te port  Device WWN               Device MAC         Session MAC
================================================================================
8     Te 0/0   10:00:00:05:33:48:15:9b  00:05:33:48:15:9b  0e:fc:00:03:08:01
9     Te 0/1   10:00:00:05:33:48:17:e9  00:05:33:48:17:e9  0e:fc:00:03:09:01
10    Te 0/2   10:00:00:05:33:48:19:b9  00:05:33:48:19:b9  0e:fc:00:03:0a:01
11    Te 0/3   10:00:00:05:33:48:14:6d  00:05:33:48:14:6d  0e:fc:00:03:0b:01
12    Te 0/4   10:00:00:05:33:48:25:55  00:05:33:48:25:55  0e:fc:00:03:0c:01
13    Te 0/5   10:00:00:05:33:48:1f:59  00:05:33:48:1f:59  0e:fc:00:03:0d:01
14    Te 0/6   10:00:00:05:33:48:25:6d  00:05:33:48:25:6d  0e:fc:00:03:0e:01
15    Te 0/7   10:00:00:05:33:48:1f:49  00:05:33:48:1f:49  0e:fc:00:03:0f:01
16    Te 0/8   10:00:00:05:33:26:b0:b3  00:05:33:26:b0:b3  0e:fc:00:03:10:01
17    Te 0/9   10:00:00:05:33:64:03:19  00:05:33:64:03:19  0e:fc:00:03:11:01
18    Te 0/10  10:00:00:05:33:48:e3:15  00:05:33:48:e3:15  0e:fc:00:03:12:01
19    Te 0/11  10:00:00:05:33:48:e7:73  00:05:33:48:e7:73  0e:fc:00:03:13:01
20    Te 0/12  10:00:00:05:33:48:e7:93  00:05:33:48:e7:93  0e:fc:00:03:14:01
21    Te 0/13  10:00:00:05:33:48:19:15  00:05:33:48:19:15  0e:fc:00:03:15:01
22    Te 0/14  10:00:00:05:33:48:1f:25  00:05:33:48:1f:25  0e:fc:00:03:16:01
23    Te 0/15  10:00:00:05:33:48:1e:dd  00:05:33:48:1e:dd  0e:fc:00:03:17:01
i-sw-1:admin>

16. You can also check each of the nodes to make sure that the FCoE port link status is up, using gpssh with a host file containing the nodes in the DCA.

gpssh -f seg_srvr_list 'bcu port --list |grep fcoe'


Sample command output screen capture:

[root@mdw /]# gpssh -f seg_srvr_list 'bcu port --list |grep fcoe'
[sdw16] fcoe 10:00:00:05:33:48:1e:dc 021701 Linkup
[sdw16] fcoe 10:00:00:05:33:48:1e:dd 031701 Linkup
[sdw14] fcoe 10:00:00:05:33:48:19:14 021501 Linkup
[sdw14] fcoe 10:00:00:05:33:48:19:15 031501 Linkup
[sdw15] fcoe 10:00:00:05:33:48:1f:24 021601 Linkup
[sdw15] fcoe 10:00:00:05:33:48:1f:25 031601 Linkup
[sdw12] fcoe 10:00:00:05:33:48:e7:72 021301 Linkup
[sdw12] fcoe 10:00:00:05:33:48:e7:73 031301 Linkup
[sdw13] fcoe 10:00:00:05:33:48:e7:92 021401 Linkup
[sdw13] fcoe 10:00:00:05:33:48:e7:93 031401 Linkup
[sdw10] fcoe 10:00:00:05:33:64:03:18 021101 Linkup
[sdw10] fcoe 10:00:00:05:33:64:03:19 031101 Linkup
[sdw11] fcoe 10:00:00:05:33:48:e3:14 021201 Linkup
[sdw11] fcoe 10:00:00:05:33:48:e3:15 031201 Linkup
[ sdw4] fcoe 10:00:00:05:33:48:14:6c 020b01 Linkup
[ sdw4] fcoe 10:00:00:05:33:48:14:6d 030b01 Linkup
[ sdw5] fcoe 10:00:00:05:33:48:25:54 020c01 Linkup
[ sdw5] fcoe 10:00:00:05:33:48:25:55 030c01 Linkup
[ sdw6] fcoe 10:00:00:05:33:48:1f:58 020d01 Linkup
[ sdw6] fcoe 10:00:00:05:33:48:1f:59 030d01 Linkup
[ sdw7] fcoe 10:00:00:05:33:48:25:6c 020e01 Linkup
[ sdw7] fcoe 10:00:00:05:33:48:25:6d 030e01 Linkup
[ sdw1] fcoe 10:00:00:05:33:48:15:9a 020801 Linkup
[ sdw1] fcoe 10:00:00:05:33:48:15:9b 030801 Linkup
[ sdw2] fcoe 10:00:00:05:33:48:17:e8 020901 Linkup
[ sdw2] fcoe 10:00:00:05:33:48:17:e9 030901 Linkup
[ sdw3] fcoe 10:00:00:05:33:48:19:b8 020a01 Linkup
[ sdw3] fcoe 10:00:00:05:33:48:19:b9 030a01 Linkup
[ sdw8] fcoe 10:00:00:05:33:48:1f:48 020f01 Linkup
[ sdw8] fcoe 10:00:00:05:33:48:1f:49 030f01 Linkup
[ sdw9] fcoe 10:00:00:05:33:26:b0:b2 021001 Linkup
[ sdw9] fcoe 10:00:00:05:33:26:b0:b3 031001 Linkup
[root@mdw greenplum-db]#

17. Repeat the switch steps for switch 2 on the DCA. The IP is 172.28.0.180. Make sure to use a different VLAN number for the FCoE VLAN, such as 2002, to keep it separate from switch 1.

Sample command output for creating vlan on switch 2:

i-sw-2(config)#interface vlan 2002
i-sw-2(conf-if-vl-2002)#description FCoE VLAN
i-sw-2(conf-if-vl-2002)#fcf forward
i-sw-2(conf-if-vl-2002)#exit
i-sw-2(config)#

Once the FCoE ports are logged in to the switch, you can alias and zone them to the SAN storage just as a normal Fibre Channel port is zoned.


Appendix F: Automating DCA Internal SAN Configuration

This appendix contains the following information:

◆ Script to automate DCA internal SAN configuration................... 68


Script to automate DCA internal SAN configuration

The DCA segment servers and database servers are pre-loaded with the "Expect" programming language, which is able to automate repetitive typing tasks.

The following script, written in Expect, worked in the lab environment. It automates the steps outlined in Appendix E, “Configuring DCA Internal Switch for SAN Mirror,” and can be run on the database master to automatically set the internal switches for FCOE SAN attachment. It is presented "as-is", with no guarantees of correctness. Error checking in the script is primitive.

#!/usr/bin/expect -f
#
# Script to set up internal switches in DCA for FCOE
# Facilitates SAN mirror attachment
# Note: minimal error checking is done for this first revision

# "abort" is used as the action in the expect blocks below. It is not a Tcl
# builtin, so it is defined here so the script exits cleanly on a timeout or error.
proc abort {} {
    puts "\r\rAborting: unexpected response from switch\r"
    exit 1
}

# The "switch" variable is the last octet of the internal IP address of the
# MP-8000B switch. The DCA has two switches: 172.28.0.170 and 172.28.0.180
# Both switches need to be set up with an FCOE VLAN, and ports need to route to them

set switch "170"
while { $switch < "181" } {
puts "\r\rswitch $switch\r"
if {$switch == "170"} {
    puts "THIS IS THE FIRST SWITCH\r"
    set enet_vlanid "199"
    set fcoe_vlanid "1002"
} elseif {$switch == "180"} {
    puts "THIS IS THE SECOND SWITCH\r"
    set enet_vlanid "299"
    set fcoe_vlanid "2002"
}
puts "\r\rLogging into switch at 172.28.0.$switch\r"
spawn ssh "admin\@172.28.0.$switch"
expect {
    timeout abort
    failed abort
    "could not reach" abort
    "password:"
}
send "changeme\r"
expect {
    timeout abort
    failed abort
    "Permission denied" abort
    ":admin>"


}
sleep 1
send "cmsh\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "#"
}
sleep 1
puts "\r\rSetting switch protocol to Rapid Spanning Tree, and setting priority\r"
send "conf t\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}
send "protocol spanning-tree rstp\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-rstp)#"
}
send "bridge-priority 8192\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-rstp)#"
}
puts "\r\rCreating CEE Map for FCOE protocol\r"
send "cee-map fcoe-vlan\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-ceemap)#"
}
send "priority-group-table 1 weight 40 pfc\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-ceemap)#"
}
send "priority-group-table 2 weight 60\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-ceemap)#"
}
send "priority-table 2 2 2 1 2 2 2 15.0\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-ceemap)#"
}
send "exit\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}
puts "\r\rSetting up lldp protocol\r"
send "protocol lldp\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-lldp)#"
}
send "advertise dcbx-fcoe-app-tlv\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-lldp)#"
}
send "advertise dcbx-fcoe-logical-link-tlv\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-lldp)#"
}
send "exit\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}
puts "\r\rSetting up vlan for FCOE traffic\r"
send "interface vlan $fcoe_vlanid\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-if-vl-$fcoe_vlanid)#"
}
send "description FCOE VLAN\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-if-vl-$fcoe_vlanid)#"
}
send "fcf forward\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(conf-if-vl-$fcoe_vlanid)#"
}
send "exit\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}

puts "\r\rSetting up vlan rules for FCOE traffic\r"
send "vlan classifier rule 1 proto fip encap ethv2\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}
send "vlan classifier rule 2 proto fcoe encap ethv2\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}
send "vlan classifier group 1 add rule 1\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"
}
send "vlan classifier group 1 add rule 2\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "(config)#"


}

# Loop to set each of the configured network ports for FCOE
# For full rack, ports 0-17 are used
# For half rack, ports 0-7 and 16-17 are used
# This script sets ALL ports to FCOE
puts "\r\rSetting up individual ports for FCOE...\r"
for {set i 0} {$i < 18} {incr i 1} {
    puts "Port $i...\r"
    send "int te 0/$i\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "switchport\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "switchport mode converged\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    # Note ENET vlan id is set to 199 on switch .170, 299 on switch .180
    send "vlan classifier activate group 2 vlan $enet_vlanid\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    # Note FCOE vlan id is set to 1002 on switch .170, 2002 on switch .180
    send "vlan classifier activate group 1 vlan $fcoe_vlanid\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "no shutdown\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"


    }
    send "lldp fcoe-priority-bits 0x8\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "spanning-tree edgeport\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "spanning-tree edgeport bpdu-guard\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "cee fcoe-vlan\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(conf-if-te-0/$i)#"
    }
    send "exit\r"
    expect {
        timeout abort
        failed abort
        "Error:" abort
        "(config)#"
    }
}
# End FOR PORT loop

send "exit\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "#"
}
puts "\r\rSaving original config as orig-startup-config-before-fcoe\r"
send "copy startup-config orig-startup-config-before-fcoe\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "Overwrite it" {send "y\r"; exp_continue}
    "Building configuration"
}
sleep 2

puts "\r\rCopying running config to startup-config\r"
send "copy running-config startup-config\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "Overwrite the startup config file (y/n):"
}
send "y\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    "Building configuration"
}
sleep 2
send "exit\r"
expect {
    timeout abort
    failed abort
    "Error:" abort
    ":admin"
}
puts "\r\rIMPORTANT NOTE: PLEASE LOG BACK IN TO SWITCHES AND ISSUE THE FOLLOWING TO\r"
puts "VERIFY THAT PORTS HAVE LOGGED IN: fcoe --loginshow\r\r"
send "exit\r"
set switch [expr $switch + 10] ;# END WHILE SWITCH loop
}


Appendix G: Creating SNAPs on Remote Symmetrix for Disaster Recovery

This appendix contains the following information:

◆ Creating SNAPs on remote Symmetrix for disaster recovery ..... 76
◆ Creating a single SNAP of R2 standard devices............................ 77
◆ Creating multiple SNAPS of R2 standard devices ........................ 91


Creating SNAPs on remote Symmetrix for disaster recovery

In order to create snaps of the R2 devices on the Symmetrix, the Symmetrix must be configured with the proper number of snap save devices, with those devices assigned to a snap pool, and with a sufficient number of virtual devices configured with the same size and layout as the R2 standard devices on which the snap is conducted.

Note: For the purpose of these procedures, the R2 devices were configured as thin virtual striped meta volumes so the virtual devices (VDEVs) required must also be striped meta volumes of the same size.


Creating a single SNAP of R2 standard devices

To create a single SNAP of R2 standard devices, complete the following steps:

1. Verify that the Symmetrix where the snap will be conducted has the proper amount of snap save devices configured and assigned to a snap pool.

symsnap list -sid <symm serial #> -savedevs

Sample output of command:

[root@smdw ~]# symsnap list -sid 452 -savedevs

Symmetrix ID: 000192604452

                 S N A P   S A V E   D E V I C E S
---------------------------------------------------------------------
Device            SaveDevice        Total      Used      Free  Full
Sym   Emulation   Pool Name        Tracks    Tracks    Tracks   (%)
---------------------------------------------------------------------
0339  FBA         R2_SNAP_POOL    3939540         0   3939540     0
033A  FBA         R2_SNAP_POOL    3939540         0   3939540     0
033B  FBA         R2_SNAP_POOL    3939540         0   3939540     0
033C  FBA         R2_SNAP_POOL    3939540         0   3939540     0
033D  FBA         R2_SNAP_POOL    3939540         0   3939540     0
033E  FBA         R2_SNAP_POOL    3939540         0   3939540     0
033F  FBA         R2_SNAP_POOL    3939540         0   3939540     0
0340  FBA         R2_SNAP_POOL    3939540         0   3939540     0
0341  FBA         R2_SNAP_POOL    3939540         0   3939540     0
0342  FBA         R2_SNAP_POOL    3939540         0   3939540     0
0343  FBA         R2_SNAP_POOL    3939540         0   3939540     0
0344  FBA         R2_SNAP_POOL    3939540         0   3939540     0
0345  FBA         R2_SNAP_POOL    3939540         0   3939540     0
..... Data cut due to output length
03CE  FBA         R2_SNAP_POOL    3939540         0   3939540     0
03CF  FBA         R2_SNAP_POOL    3939540         0   3939540     0

Total             ---------  ---------  ---------  ----
  Tracks          594870540          0  594870540     0
  MB(s)            37179428          0   37179428

[root@smdw ~]#


2. List the available save device pools (snap pools) and verify the state of each pool; make sure the pool that will be used is Enabled.

symsnap -sid <symm serial #> -pools list

Sample output of command:

[root@smdw ~]# symsnap -sid 452 -pools list

Symmetrix ID: 000192604452

                    S A V E   D E V I C E   P O O L S
--------------------------------------------------------------------------------
              Dev       Total    Enabled       Used       Free  Full  Pool      Session
Pool Name     Emul     Tracks     Tracks     Tracks     Tracks   (%)  State     Status
--------------------------------------------------------------------------------
DEFAULT_POOL  FBA           0          0          0          0    0   Disabled  Inactive
DEFAULT_POOL  3390          0          0          0          0    0   Disabled  Inactive
DEFAULT_POOL  3380          0          0          0          0    0   Disabled  Inactive
DEFAULT_POOL  AS400         0          0          0          0    0   Disabled  Inactive
R2_SNAP_POOL  FBA   594870540  594870540          0  594870540    0   Enabled   Inactive
[root@smdw ~]#

3. Verify that the Symmetrix where the snap will be conducted has the proper number of virtual devices configured, and that they have the same layout as the R2 standard volumes.

symdev list -sid <symm serial #> -vdev -meta

Sample output of command:

[root@mdw gpadmin]# symdev list -sid 452 -vdev -meta

Symmetrix ID: 000192604452

Device Name                   Device                  Meta Information
----------------------------  ----------------------  ---------------------------
                                                      Stripe  # of   Cap
Sym   Physical     Config     Attr      Config        Size    Devs   (MB)
----------------------------  ----------------------  ---------------------------
04DF  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
04EB  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
04F7  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0503  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
050F  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
051B  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0527  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0533  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
053F  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
054B  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0557  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0563  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015


.... Data cut due to output length
071F  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
072B  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0737  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
0743  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
074F  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015
075B  Not Visible  VDEV       N/Asst'd  Striped       960k    12     2955015

4. Create a device group to maintain the R2 standard devices and the virtual devices that will be used for the snap session.

symdg create <device group name> -type RDF2

Sample output of command:

[root@smdw ~]# symdg create DCA2880_RDF2_SNAP_1 -type RDF2
[root@smdw ~]#

5. Add all of the standard R2 devices to the newly created device group.

symdg -g <device group name> addall -sid <symm serial #> -devs <symdev>:<symdev>

Sample output of command:

[root@smdw ~]# symdg -g DCA2880_RDF2_SNAP_1 addall -sid 452 -devs 0411:04D1
[root@smdw ~]#

6. Verify the devices were added to the newly created device group.

symdg show <device group name>

Sample output of command:

[root@smdw config]# symdg show DCA2880_RDF2_SNAP_1

Group Name: DCA2880_RDF2_SNAP_1

Group Type                                   : RDF2     (RDFA)
Device Group in GNS                          : No
Valid                                        : Yes
Symmetrix ID                                 : 000192604452
Group Creation Time                          : Mon Jun 27 11:16:49 2011
Vendor ID                                    : EMC Corp
Application ID                               : SYMCLI

Number of STD Devices in Group               : 17
Number of Associated GK's                    : 0
Number of Locally-associated BCV's           : 0
Number of Locally-associated VDEV's          : 0
Number of Locally-associated TGT's           : 0
Number of Remotely-associated VDEV's(STD RDF): 0
Number of Remotely-associated BCV's (STD RDF): 0
Number of Remotely-associated TGT's(TGT RDF) : 0


Number of Remotely-associated BCV's (BCV RDF): 0
Number of Remotely-assoc'd RBCV's (RBCV RDF) : 0
Number of Remotely-assoc'd BCV's (Hop-2 BCV) : 0
Number of Remotely-assoc'd VDEV's(Hop-2 VDEV): 0
Number of Remotely-assoc'd TGT's (Hop-2 TGT) : 0
Number of Composite Groups                   : 0
Composite Group Names                        : N/A

Standard (STD) Devices (17):
  {
  --------------------------------------------------------------------------------
                                       Sym                               Cap
  LdevName        PdevName             Dev  Config          Att.  Sts    (MB)
  --------------------------------------------------------------------------------
  DEV001          /dev/emcpowerf       0411 RDF2+TDEV       (M)   RW     2955015
  DEV002          N/A                  041D RDF2+TDEV       (M)   RW     2955015
  DEV003          N/A                  0429 RDF2+TDEV       (M)   RW     2955015
  DEV004          N/A                  0435 RDF2+TDEV       (M)   RW     2955015
  DEV005          N/A                  0441 RDF2+TDEV       (M)   RW     2955015
  DEV006          N/A                  044D RDF2+TDEV       (M)   RW     2955015
  DEV007          N/A                  0459 RDF2+TDEV       (M)   RW     2955015
  DEV008          N/A                  0465 RDF2+TDEV       (M)   RW     2955015
  DEV009          N/A                  0471 RDF2+TDEV       (M)   RW     2955015
  DEV010          N/A                  047D RDF2+TDEV       (M)   RW     2955015
  DEV011          N/A                  0489 RDF2+TDEV       (M)   RW     2955015
  DEV012          N/A                  0495 RDF2+TDEV       (M)   RW     2955015
  DEV013          N/A                  04A1 RDF2+TDEV       (M)   RW     2955015
  DEV014          N/A                  04AD RDF2+TDEV       (M)   RW     2955015
  DEV015          N/A                  04B9 RDF2+TDEV       (M)   RW     2955015
  DEV016          N/A                  04C5 RDF2+TDEV       (M)   RW     2955015
  DEV017          N/A                  04D1 RDF2+TDEV       (M)   RW     2955015
  }

Device Group RDF Information
  {
  RDF Type                               : R2
  RDF (RA) Group Number                  : 1 (00)

  Remote Symmetrix ID                    : 000192604467

  R2 Device Is Larger Than The R1 Device : False

  Paired with a Diskless Device          : False
  Paired with a Concurrent Device        : False
  Paired with a Cascaded Device          : False
  Thick Thin Relationship                : False

  RDF Pair Configuration                 : Normal
  RDF STAR Mode                          : False


  RDF Mode                               : Synchronous
  RDF Adaptive Copy                      : Disabled
  RDF Adaptive Copy Write Pending State  : N/A
  RDF Adaptive Copy Skew (Tracks)        : 32767

  RDF Device Domino                      : Disabled

  RDF Link Configuration                 : Fibre
  RDF Link Domino                        : Disabled
  Prevent Automatic RDF Link Recovery    : Enabled
  Prevent RAs Online Upon Power ON       : Enabled

  Device RDF Status                      : Ready (RW)

  Device RA Status                       : Ready (RW)
  Device Link Status                     : Not Ready (NR)
  Time of Last Device Link Status Change : N/A

  Device Suspend State                   : Offline
  Device Consistency State               : Enabled
  Device Consistency Exempt State        : Disabled
  RDF R2 Not Ready If Invalid            : Disabled
  Device Write Pacing Exempt State       : Disabled
  Effective Write Pacing Exempt State    : Disabled

  Device RDF State                       : Ready (RW)
  Remote Device RDF State                : Ready (RW)

  RDF Pair State ( R1 <=\=> R2 )         : Split

  Number of R1 Invalid Tracks            : 106
  Number of R2 Invalid Tracks            : 0

  RDFA Information:
  {
    Session Number                       : 0
    Cycle Number                         : 0
    Number of Devices in the Session     : 17
    Session Status                       : Inactive
    Consistency Exempt Devices           : No
    Write Pacing Exempt Devices          : No

    Session Consistency State            : N/A
    Minimum Cycle Time                   : 00:00:15
    Average Cycle Time                   : 00:00:00
    Duration of Last cycle               : 00:00:00
    Session Priority                     : 33

    Tracks not Committed to the R2 Side  : 0
    Time that R2 is behind R1            : 291:39:24
    R2 Image Capture Time                : Wed Jun 15 07:43:38 2011
    R2 Data is Consistent                : True


    R1 Side Percent Cache In Use         : 0
    R2 Side Percent Cache In Use         : 0

    Transmit Idle Time                   : 00:00:00
    R1 Side DSE Used Tracks              : 0
    R2 Side DSE Used Tracks              : 0
    R1 Side Shared Tracks                : 0
  }
}

[root@smdw config]#

7. Add the virtual devices that will be used for the snaps created against the R2 devices to the device group.

symdg -g <device group name> addall -sid <symm serial #> -devs <symm dev>:<symm dev> -vdev

Sample output of command:

[root@smdw config]# symdg -g DCA2880_RDF2_SNAP_1 addall -sid 452 -devs 04DF:059F -vdev
[root@smdw config]#

8. Verify the virtual devices were added to the device group.

symdg show <device group name>

Sample output of command:

[root@smdw config]# symdg show DCA2880_RDF2_SNAP_1

Group Name: DCA2880_RDF2_SNAP_1

Group Type                                   : RDF2     (RDFA)
Device Group in GNS                          : No
Valid                                        : Yes
Symmetrix ID                                 : 000192604452
Group Creation Time                          : Mon Jun 27 11:16:49 2011
Vendor ID                                    : EMC Corp
Application ID                               : SYMCLI

Number of STD Devices in Group               : 17
Number of Associated GK's                    : 0
Number of Locally-associated BCV's           : 0
Number of Locally-associated VDEV's          : 17
Number of Locally-associated TGT's           : 0
Number of Remotely-associated VDEV's(STD RDF): 0
Number of Remotely-associated BCV's (STD RDF): 0
Number of Remotely-associated TGT's(TGT RDF) : 0
Number of Remotely-associated BCV's (BCV RDF): 0
Number of Remotely-assoc'd RBCV's (RBCV RDF) : 0
Number of Remotely-assoc'd BCV's (Hop-2 BCV) : 0
Number of Remotely-assoc'd VDEV's(Hop-2 VDEV): 0
Number of Remotely-assoc'd TGT's (Hop-2 TGT) : 0


Number of Composite Groups                   : 0
Composite Group Names                        : N/A

Standard (STD) Devices (17):
  {
  --------------------------------------------------------------------------------
                                       Sym                               Cap
  LdevName        PdevName             Dev  Config          Att.  Sts    (MB)
  --------------------------------------------------------------------------------
  DEV001          /dev/emcpowerf       0411 RDF2+TDEV       (M)   RW     2955015
  DEV002          N/A                  041D RDF2+TDEV       (M)   RW     2955015
  DEV003          N/A                  0429 RDF2+TDEV       (M)   RW     2955015
  DEV004          N/A                  0435 RDF2+TDEV       (M)   RW     2955015
  DEV005          N/A                  0441 RDF2+TDEV       (M)   RW     2955015
  DEV006          N/A                  044D RDF2+TDEV       (M)   RW     2955015
  DEV007          N/A                  0459 RDF2+TDEV       (M)   RW     2955015
  DEV008          N/A                  0465 RDF2+TDEV       (M)   RW     2955015
  DEV009          N/A                  0471 RDF2+TDEV       (M)   RW     2955015
  DEV010          N/A                  047D RDF2+TDEV       (M)   RW     2955015
  DEV011          N/A                  0489 RDF2+TDEV       (M)   RW     2955015
  DEV012          N/A                  0495 RDF2+TDEV       (M)   RW     2955015
  DEV013          N/A                  04A1 RDF2+TDEV       (M)   RW     2955015
  DEV014          N/A                  04AD RDF2+TDEV       (M)   RW     2955015
  DEV015          N/A                  04B9 RDF2+TDEV       (M)   RW     2955015
  DEV016          N/A                  04C5 RDF2+TDEV       (M)   RW     2955015
  DEV017          N/A                  04D1 RDF2+TDEV       (M)   RW     2955015
  }

VDEV Devices Locally-associated (17):
  {
  --------------------------------------------------------------------------------
                                       Sym                               Cap
  LdevName        PdevName             Dev  Config          Att.  Sts    (MB)
  --------------------------------------------------------------------------------
  VDEV001         N/A                  04DF VDEV            (M)   NR     2955015
  VDEV002         N/A                  04EB VDEV            (M)   NR     2955015
  VDEV003         N/A                  04F7 VDEV            (M)   NR     2955015
  VDEV004         N/A                  0503 VDEV            (M)   NR     2955015
  VDEV005         N/A                  050F VDEV            (M)   NR     2955015
  VDEV006         N/A                  051B VDEV            (M)   NR     2955015
  VDEV007         N/A                  0527 VDEV            (M)   NR     2955015
  VDEV008         N/A                  0533 VDEV            (M)   NR     2955015
  VDEV009         N/A                  053F VDEV            (M)   NR     2955015
  VDEV010         N/A                  054B VDEV            (M)   NR     2955015
  VDEV011         N/A                  0557 VDEV            (M)   NR     2955015
  VDEV012         N/A                  0563 VDEV            (M)   NR     2955015
  VDEV013         N/A                  056F VDEV            (M)   NR     2955015
  VDEV014         N/A                  057B VDEV            (M)   NR     2955015


VDEV015 N/A 0587 VDEV (M) NR 2955015 VDEV016 N/A 0593 VDEV (M) NR 2955015 VDEV017 N/A 059F VDEV (M) NR 2955015 }

Device Group RDF Information { RDF Type : R2 RDF (RA) Group Number : 1 (00)

Remote Symmetrix ID : 000192604467

R2 Device Is Larger Than The R1 Device : False

Paired with a Diskless Device : False Paired with a Concurrent Device : False Paired with a Cascaded Device : False Thick Thin Relationship : False

RDF Pair Configuration : Normal RDF STAR Mode : False

RDF Mode : Synchronous RDF Adaptive Copy : Disabled RDF Adaptive Copy Write Pending State : N/A RDF Adaptive Copy Skew (Tracks) : 32767

RDF Device Domino : Disabled

RDF Link Configuration : Fibre RDF Link Domino : Disabled Prevent Automatic RDF Link Recovery : Enabled Prevent RAs Online Upon Power ON : Enabled

Device RDF Status : Ready (RW)

Device RA Status : Ready (RW) Device Link Status : Not Ready (NR) Time of Last Device Link Status Change : N/A

Device Suspend State : Offline Device Consistency State : Enabled Device Consistency Exempt State : Disabled RDF R2 Not Ready If Invalid : Disabled Device Write Pacing Exempt State : Disabled Effective Write Pacing Exempt State : Disabled

Device RDF State : Ready (RW) Remote Device RDF State : Ready (RW)

RDF Pair State ( R1 <=\=> R2 ) : Split


Number of R1 Invalid Tracks : 106 Number of R2 Invalid Tracks : 0

RDFA Information: { Session Number : 0 Cycle Number : 0 Number of Devices in the Session : 17 Session Status : Inactive Consistency Exempt Devices : No Write Pacing Exempt Devices : No

Session Consistency State : N/A Minimum Cycle Time : 00:00:15 Average Cycle Time : 00:00:00 Duration of Last cycle : 00:00:00 Session Priority : 33

Tracks not Committed to the R2 Side: 0 Time that R2 is behind R1 : 291:46:58 R2 Image Capture Time : Wed Jun 15 07:43:38 2011 R2 Data is Consistent : True R1 Side Percent Cache In Use : 0 R2 Side Percent Cache In Use : 0

Transmit Idle Time : 00:00:00 R1 Side DSE Used Tracks : 0 R2 Side DSE Used Tracks : 0 R1 Side Shared Tracks : 0 } }

[root@smdw config]#

9. Create a snap using the created device group.

symsnap -g <device group name> create -svp <save device pool name> -noprompt

If the save device pool is the 'DEFAULT_POOL', the -svp <save device pool name> parameter can be dropped, since 'DEFAULT_POOL' is the default.
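For example, when the default pool is used the create command reduces to the following (a sketch using this procedure's device group name):

symsnap -g DCA2880_RDF2_SNAP_1 create -noprompt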


Sample output of the command with an explicit pool:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 create -svp R2_SNAP_POOL -noprompt

'Create' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_1'. Please wait...

'Create' operation successfully executed for device group 'DCA2880_RDF2_SNAP_1'.

[root@smdw ~]#

10. Query the device group to verify the snap was created.

symsnap -g <device group name> query

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 query

Device Group (DG) Name : DCA2880_RDF2_SNAP_1
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452

Source Device                    Target Device              State        Copy
------------------------- -------------------- ----------- ------------ ----
               Protected                            Changed
Logical  Sym     Tracks     Logical  Sym   GD       Tracks  SRC <=> TGT  (%)
------------------------- -------------------- ----------- ------------ ----

DEV001   0411  47280240    VDEV001  04DF  X.            0   Created       0
DEV002   041D  47280240    VDEV002  04EB  X.            0   Created       0
DEV003   0429  47280240    VDEV003  04F7  X.            0   Created       0
DEV004   0435  47280240    VDEV004  0503  X.            0   Created       0
DEV005   0441  47280240    VDEV005  050F  X.            0   Created       0
DEV006   044D  47280240    VDEV006  051B  X.            0   Created       0
DEV007   0459  47280240    VDEV007  0527  X.            0   Created       0
DEV008   0465  47280240    VDEV008  0533  X.            0   Created       0
DEV009   0471  47280240    VDEV009  053F  X.            0   Created       0
DEV010   047D  47280240    VDEV010  054B  X.            0   Created       0
DEV011   0489  47280240    VDEV011  0557  X.            0   Created       0
DEV012   0495  47280240    VDEV012  0563  X.            0   Created       0
DEV013   04A1  47280240    VDEV013  056F  X.            0   Created       0
DEV014   04AD  47280240    VDEV014  057B  X.            0   Created       0
DEV015   04B9  47280240    VDEV015  0587  X.            0   Created       0
DEV016   04C5  47280240    VDEV016  0593  X.            0   Created       0
DEV017   04D1  47280240    VDEV017  059F  X.            0   Created       0

Total                 --------                  ----------
  Track(s)           803764080                           0
  MB(s)               50235264                           0

Legend:


(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND
         maximum inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

[root@smdw ~]#

11. Activate the snap so it is available for mounting on a host or set of hosts.

This puts the virtual devices into a "copy on write" state, so that any changes to the R2 devices the snap was taken from, or to the virtual devices used for the snap, are tracked.

To make sure that all devices are activated at the same time, use the -consistent option with the activate command.

symsnap -g <device group name> activate -consistent -noprompt

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 activate -consistent -noprompt

'Activate' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_1'. Please wait...

'Activate' operation successfully executed for device group 'DCA2880_RDF2_SNAP_1'.

[root@smdw ~]#

12. Query the device group to verify the snap was activated.

The devices should now be in a "CopyOnWrite" state.

symsnap -g <device group name> query

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 query

Device Group (DG) Name : DCA2880_RDF2_SNAP_1
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452


Source Device                    Target Device              State        Copy
------------------------- -------------------- ----------- ------------ ----
               Protected                            Changed
Logical  Sym     Tracks     Logical  Sym   GD       Tracks  SRC <=> TGT  (%)
------------------------- -------------------- ----------- ------------ ----

DEV001   0411  47280240    VDEV001  04DF  X.            0   CopyOnWrite   0
DEV002   041D  47280240    VDEV002  04EB  X.            0   CopyOnWrite   0
DEV003   0429  47280240    VDEV003  04F7  X.            0   CopyOnWrite   0
DEV004   0435  47280240    VDEV004  0503  X.            0   CopyOnWrite   0
DEV005   0441  47280240    VDEV005  050F  X.            0   CopyOnWrite   0
DEV006   044D  47280240    VDEV006  051B  X.            0   CopyOnWrite   0
DEV007   0459  47280240    VDEV007  0527  X.            0   CopyOnWrite   0
DEV008   0465  47280240    VDEV008  0533  X.            0   CopyOnWrite   0
DEV009   0471  47280240    VDEV009  053F  X.            0   CopyOnWrite   0
DEV010   047D  47280240    VDEV010  054B  X.            0   CopyOnWrite   0
DEV011   0489  47280240    VDEV011  0557  X.            0   CopyOnWrite   0
DEV012   0495  47280240    VDEV012  0563  X.            0   CopyOnWrite   0
DEV013   04A1  47280240    VDEV013  056F  X.            0   CopyOnWrite   0
DEV014   04AD  47280240    VDEV014  057B  X.            0   CopyOnWrite   0
DEV015   04B9  47280240    VDEV015  0587  X.            0   CopyOnWrite   0
DEV016   04C5  47280240    VDEV016  0593  X.            0   CopyOnWrite   0
DEV017   04D1  47280240    VDEV017  059F  X.            0   CopyOnWrite   0

Total                 --------                  ----------
  Track(s)           803764080                           0
  MB(s)               50235264                           0

Legend:

(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND
         maximum inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

[root@smdw ~]#

13. Mount the snap to another host or set of hosts. Make sure the virtual devices used for the snap are visible to the hosts, either by using symaccess commands or by another means.

Once the devices are assigned to the host(s), perform a rescan on the hosts and mount the devices at the appropriate locations. A brief sketch follows; the appendix "Mounting and Checking a Symmetrix SNAP of the Greenplum Database" covers the full procedure.
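The following is a minimal sketch of this assignment for a single segment server, reusing the storage group name, rescan loop, and mount options shown in the appendix; the device IDs and mount point are illustrative and must match the R2-to-virtual-device mapping identified for your configuration:

symaccess -sid 452 -type storage -name sdw1_sg add devs 04DF,04EB
gpssh -h sdw1 'for i in 0 1 2 3 4; do echo "- - -" > /sys/class/scsi_host/host$i/scan; done'
gpssh -h sdw1 'powermt config; powermt save'
gpssh -h sdw1 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m'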

14. Recreate the snap session so it picks up all changes from the R2 and starts tracking changes from this new point.


If the snap was mounted, make sure it is unmounted before recreating the snap.

symsnap -g <device group name> recreate -noprompt

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 recreate -noprompt

'Recreate' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_1'. Please wait...

'Recreate' operation successfully initiated for device group 'DCA2880_RDF2_SNAP_1'.

[root@smdw ~]#

15. Query the device group to verify the snap was recreated.

symsnap -g <device group name> query

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 query

Device Group (DG) Name : DCA2880_RDF2_SNAP_1
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452

Source Device                    Target Device              State        Copy
------------------------- -------------------- ----------- ------------ ----
               Protected                            Changed
Logical  Sym     Tracks     Logical  Sym   GD       Tracks  SRC <=> TGT  (%)
------------------------- -------------------- ----------- ------------ ----

DEV001   0411  47280240    VDEV001  04DF  X.            0   Recreated     0
DEV002   041D  47280240    VDEV002  04EB  X.            0   Recreated     0
DEV003   0429  47280240    VDEV003  04F7  X.            0   Recreated     0
DEV004   0435  47280240    VDEV004  0503  X.            0   Recreated     0
DEV005   0441  47280240    VDEV005  050F  X.            0   Recreated     0
DEV006   044D  47280240    VDEV006  051B  X.            0   Recreated     0
DEV007   0459  47280240    VDEV007  0527  X.            0   Recreated     0
DEV008   0465  47280240    VDEV008  0533  X.            0   Recreated     0
DEV009   0471  47280240    VDEV009  053F  X.            0   Recreated     0
DEV010   047D  47280240    VDEV010  054B  X.            0   Recreated     0
DEV011   0489  47280240    VDEV011  0557  X.            0   Recreated     0
DEV012   0495  47280240    VDEV012  0563  X.            0   Recreated     0
DEV013   04A1  47280240    VDEV013  056F  X.            0   Recreated     0
DEV014   04AD  47280240    VDEV014  057B  X.            0   Recreated     0
DEV015   04B9  47280240    VDEV015  0587  X.            0   Recreated     0
DEV016   04C5  47280240    VDEV016  0593  X.            0   Recreated     0
DEV017   04D1  47280240    VDEV017  059F  X.            0   Recreated     0


Total                 --------                  ----------
  Track(s)           803764080                           0
  MB(s)               50235264                           0

Legend:

(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND
         maximum inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

[root@smdw ~]#

16. Terminate the snap session.

symsnap -g <device group name> terminate

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 terminate

Execute 'Terminate' operation for device group 'DCA2880_RDF2_SNAP_1' (y/[n]) ? y

'Terminate' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_1'. Please wait...

'Terminate' operation successfully executed for device group 'DCA2880_RDF2_SNAP_1'.

[root@smdw ~]#

17. Query the device group to verify the snap was destroyed.

symsnap -g <device group name> query

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_1 query

The Source device and the Target device do not form a Copy session

Device group 'DCA2880_RDF2_SNAP_1' does not have any devices that are Snap source devices
[root@smdw ~]#


Creating multiple SNAPs of R2 standard devices

To create multiple SNAPs of R2 standard devices, complete the following steps:

1. Modify the SYMAPI options file and enable two SYMAPI parameters that allow multiple snaps/clones to be controlled independently.

The options file is located in /var/symapi/config/options on the host(s) where Solutions Enabler is loaded. Make sure to modify the options file where the snap operations are executed from.

a. Enable SYMAPI_COMMAND_SCOPE

When set to enable, this parameter limits actions performed on a specific device or composite group to that specific group and takes no action on paired TimeFinder devices outside the group.

For easy maintenance of multiple snaps, these instructions place the source devices into two separate groups, and each group maintains its own virtual devices.

Without this option enabled, any snap creation, activation, or termination would act on both groups. Instead of two separate point-in-time snaps covering two different periods, the result would be two copies of the data for the same point in time.

Example of parameter enabled:

SYMAPI_COMMAND_SCOPE = ENABLE

b. Enable SYMAPI_ALLOW_DEV_IN_MULT_GRPS

When set to enable, this parameter allows a device (source, virtual, clone, bcv) to exist in more than one device or composite group.

Example of parameter enabled:

SYMAPI_ALLOW_DEV_IN_MULT_GRPS = ENABLE
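Taken together, the two entries would appear in /var/symapi/config/options as follows (an excerpt only; other settings in the file are left unchanged):

SYMAPI_COMMAND_SCOPE = ENABLE
SYMAPI_ALLOW_DEV_IN_MULT_GRPS = ENABLE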

2. If there is a potential for more than 16 snaps being created against a single source, modify the root user's .bash_profile file, add the SYMCLI environment variable SYMCLI_MULTI_VIRTUAL_SNAP, and set it to ENABLED.


With this environment variable set, Solutions Enabler allows up to 128 snaps of a single source. The root user's .bash_profile file is located in root's home directory, typically /root/.bash_profile.

a. Enable SYMCLI_MULTI_VIRTUAL_SNAP by adding the following lines to the .bash_profile file of the user that will conduct the snap operations (typically root).

– SYMCLI_MULTI_VIRTUAL_SNAP=ENABLED

– export SYMCLI_MULTI_VIRTUAL_SNAP

b. Source the .bash_profile file after saving the changes to pick up the new environment parameters.

source .bash_profile
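To confirm that the variable is active in the current shell, a quick check with standard shell tools (not a Solutions Enabler command) is:

env | grep SYMCLI_MULTI_VIRTUAL_SNAP

This should print SYMCLI_MULTI_VIRTUAL_SNAP=ENABLED.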

3. Verify that the Symmetrix where the snap will be conducted has the proper number of snap save devices configured for the number of snap sessions that will be created, and that the save devices are assigned to a snap pool.

symsnap list -sid <symm serial #> -savedevs

Sample output of command:

[root@smdw ~]# symsnap list -sid 452 -savedevs

Symmetrix ID: 000192604452

                 S N A P   S A V E   D E V I C E S
---------------------------------------------------------------------
Device                           Total     Used      Free     Full
Sym    Emulation  Pool Name      Tracks    Tracks    Tracks   (%)
---------------------------------------------------------------------
0339   FBA        R2_SNAP_POOL   3939540        0   3939540     0
033A   FBA        R2_SNAP_POOL   3939540        0   3939540     0
033B   FBA        R2_SNAP_POOL   3939540        0   3939540     0
033C   FBA        R2_SNAP_POOL   3939540        0   3939540     0
033D   FBA        R2_SNAP_POOL   3939540        0   3939540     0
033E   FBA        R2_SNAP_POOL   3939540        0   3939540     0
033F   FBA        R2_SNAP_POOL   3939540        0   3939540     0
0340   FBA        R2_SNAP_POOL   3939540        0   3939540     0
0341   FBA        R2_SNAP_POOL   3939540        0   3939540     0
0342   FBA        R2_SNAP_POOL   3939540        0   3939540     0
0343   FBA        R2_SNAP_POOL   3939540        0   3939540     0
0344   FBA        R2_SNAP_POOL   3939540        0   3939540     0
0345   FBA        R2_SNAP_POOL   3939540        0   3939540     0
….. Data cut due to output length
03CE   FBA        R2_SNAP_POOL   3939540        0   3939540     0
03CF   FBA        R2_SNAP_POOL   3939540        0   3939540     0


Total        ---------  ---------  ---------  ----
  Tracks     594870540          0  594870540     0
  MB(s)       37179428          0   37179428

[root@smdw ~]#

4. List the available save device pools (snap pools) and verify the state of each pool; make sure the pool that will be used is Enabled.

symsnap -sid <symm serial #> -pools list

Sample output of command:

[root@smdw ~]# symsnap -sid 452 -pools list

Symmetrix ID: 000192604452

               S A V E   D E V I C E   P O O L S
--------------------------------------------------------------------------------
              Dev    Total      Enabled    Used   Free       Full  Pool      Session
Pool Name     Emul   Tracks     Tracks     Tracks Tracks     (%)   State     Status
--------------------------------------------------------------------------------
DEFAULT_POOL  FBA            0          0      0          0    0   Disabled  Inactive
DEFAULT_POOL  3390           0          0      0          0    0   Disabled  Inactive
DEFAULT_POOL  3380           0          0      0          0    0   Disabled  Inactive
DEFAULT_POOL  AS400          0          0      0          0    0   Disabled  Inactive
R2_SNAP_POOL  FBA    594870540  594870540      0  594870540    0   Enabled   Inactive

[root@smdw ~]#

5. Verify that the Symmetrix where the snap will be conducted has the proper number of virtual devices configured for the number of snap sessions that will be created, and that they have the same layout as the R2 standard volumes.

symdev list -sid <symm serial #> -vdev -meta

Sample output of command:

[root@mdw gpadmin]# symdev list -sid 452 -vdev -meta

Symmetrix ID: 000192604452

Device Name                  Device                 Meta Information
---------------------------- ---------------------- ---------------------------
Sym    Physical              Config  Attr           Config   Stripe  # of  Cap
                                                             Size    Devs  (MB)
---------------------------- ---------------------- ---------------------------
04DF   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
04EB   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015


04F7   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0503   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
050F   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
051B   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0527   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0533   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
053F   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
054B   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0557   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0563   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
…. Data cut due to output length
071F   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
072B   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0737   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
0743   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
074F   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015
075B   Not Visible           VDEV    N/Asst'd       Striped  960k    12    2955015

[root@mdw gpadmin]#

6. Create a device group for the first snap session.

symdg create <device group name> -type RDF2

Sample output of command:

[root@smdw ~]# symdg create DCA2880_RDF2_SNAP_1 -type RDF2
[root@smdw ~]#

7. Add the R2 standard devices to device group 1 that was just created.

symdg -g <device group name> addall -sid <symm serial #> -devs <start dev>:<end dev>

Sample output of command:

[root@smdw ~]# symdg -g DCA2880_RDF2_SNAP_1 addall -sid 452 -devs 0411:04D1
[root@smdw ~]#

8. Add the virtual devices that will be used for snap session 1 to device group 1.

symdg -g <device group name> addall -sid <symm serial #> -devs <start dev>:<end dev> -vdev

Sample output of command:

[root@smdw config]# symdg -g DCA2880_RDF2_SNAP_1 addall -sid 452 -devs 04DF:059F -vdev
[root@smdw config]#

9. Display the device group to verify the standard devices and the virtual devices were correctly added to the group.

symdg show <device group name>


Sample output of command:

[root@smdw config]# symdg show DCA2880_RDF2_SNAP_1

Group Name: DCA2880_RDF2_SNAP_1

Group Type : RDF2 (RDFA) Device Group in GNS : No Valid : Yes Symmetrix ID : 000192604452 Group Creation Time : Mon Jun 27 11:16:49 2011 Vendor ID : EMC Corp Application ID : SYMCLI

Number of STD Devices in Group : 17 Number of Associated GK's : 0 Number of Locally-associated BCV's : 0 Number of Locally-associated VDEV's : 17 Number of Locally-associated TGT's : 0 Number of Remotely-associated VDEV's(STD RDF): 0 Number of Remotely-associated BCV's (STD RDF): 0 Number of Remotely-associated TGT's(TGT RDF) : 0 Number of Remotely-associated BCV's (BCV RDF): 0 Number of Remotely-assoc'd RBCV's (RBCV RDF) : 0 Number of Remotely-assoc'd BCV's (Hop-2 BCV) : 0 Number of Remotely-assoc'd VDEV's(Hop-2 VDEV): 0 Number of Remotely-assoc'd TGT's (Hop-2 TGT) : 0 Number of Composite Groups : 0 Composite Group Names : N/A

Standard (STD) Devices (17): { -------------------------------------------------------------------------------- Sym Device Cap LdevName PdevName Dev Config Att. Sts (MB) -------------------------------------------------------------------------------- DEV001 /dev/emcpowerf 0411 RDF2+TDEV (M) RW 2955015 DEV002 N/A 041D RDF2+TDEV (M) RW 2955015 DEV003 N/A 0429 RDF2+TDEV (M) RW 2955015 DEV004 N/A 0435 RDF2+TDEV (M) RW 2955015 DEV005 N/A 0441 RDF2+TDEV (M) RW 2955015 DEV006 N/A 044D RDF2+TDEV (M) RW 2955015 DEV007 N/A 0459 RDF2+TDEV (M) RW 2955015 DEV008 N/A 0465 RDF2+TDEV (M) RW 2955015 DEV009 N/A 0471 RDF2+TDEV (M) RW 2955015 DEV010 N/A 047D RDF2+TDEV (M) RW 2955015 DEV011 N/A 0489 RDF2+TDEV (M) RW 2955015 DEV012 N/A 0495 RDF2+TDEV (M) RW 2955015 DEV013 N/A 04A1 RDF2+TDEV (M) RW 2955015 DEV014 N/A 04AD RDF2+TDEV (M) RW 2955015


DEV015 N/A 04B9 RDF2+TDEV (M) RW 2955015 DEV016 N/A 04C5 RDF2+TDEV (M) RW 2955015 DEV017 N/A 04D1 RDF2+TDEV (M) RW 2955015 }

VDEV Devices Locally-associated (17): { -------------------------------------------------------------------------------- Sym Device Cap LdevName PdevName Dev Config Att. Sts (MB) -------------------------------------------------------------------------------- VDEV001 N/A 04DF VDEV (M) NR 2955015 VDEV002 N/A 04EB VDEV (M) NR 2955015 VDEV003 N/A 04F7 VDEV (M) NR 2955015 VDEV004 N/A 0503 VDEV (M) NR 2955015 VDEV005 N/A 050F VDEV (M) NR 2955015 VDEV006 N/A 051B VDEV (M) NR 2955015 VDEV007 N/A 0527 VDEV (M) NR 2955015 VDEV008 N/A 0533 VDEV (M) NR 2955015 VDEV009 N/A 053F VDEV (M) NR 2955015 VDEV010 N/A 054B VDEV (M) NR 2955015 VDEV011 N/A 0557 VDEV (M) NR 2955015 VDEV012 N/A 0563 VDEV (M) NR 2955015 VDEV013 N/A 056F VDEV (M) NR 2955015 VDEV014 N/A 057B VDEV (M) NR 2955015 VDEV015 N/A 0587 VDEV (M) NR 2955015 VDEV016 N/A 0593 VDEV (M) NR 2955015 VDEV017 N/A 059F VDEV (M) NR 2955015 }

Device Group RDF Information { RDF Type : R2 RDF (RA) Group Number : 1 (00)

Remote Symmetrix ID : 000192604467

R2 Device Is Larger Than The R1 Device : False

Paired with a Diskless Device : False Paired with a Concurrent Device : False Paired with a Cascaded Device : False Thick Thin Relationship : False

RDF Pair Configuration : Normal RDF STAR Mode : False

RDF Mode : Synchronous RDF Adaptive Copy : Disabled RDF Adaptive Copy Write Pending State : N/A


RDF Adaptive Copy Skew (Tracks) : 32767

RDF Device Domino : Disabled

RDF Link Configuration : Fibre RDF Link Domino : Disabled Prevent Automatic RDF Link Recovery : Enabled Prevent RAs Online Upon Power ON : Enabled

Device RDF Status : Ready (RW)

Device RA Status : Ready (RW) Device Link Status : Not Ready (NR) Time of Last Device Link Status Change : N/A

Device Suspend State : Offline Device Consistency State : Enabled Device Consistency Exempt State : Disabled RDF R2 Not Ready If Invalid : Disabled Device Write Pacing Exempt State : Disabled Effective Write Pacing Exempt State : Disabled

Device RDF State : Ready (RW) Remote Device RDF State : Ready (RW)

RDF Pair State ( R1 <=\=> R2 ) : Split

Number of R1 Invalid Tracks : 106 Number of R2 Invalid Tracks : 0

RDFA Information: { Session Number : 0 Cycle Number : 0 Number of Devices in the Session : 17 Session Status : Inactive Consistency Exempt Devices : No Write Pacing Exempt Devices : No

Session Consistency State : N/A Minimum Cycle Time : 00:00:15 Average Cycle Time : 00:00:00 Duration of Last cycle : 00:00:00 Session Priority : 33

Tracks not Committed to the R2 Side: 0 Time that R2 is behind R1 : 291:46:58 R2 Image Capture Time : Wed Jun 15 07:43:38 2011 R2 Data is Consistent : True R1 Side Percent Cache In Use : 0 R2 Side Percent Cache In Use : 0


Transmit Idle Time : 00:00:00 R1 Side DSE Used Tracks : 0 R2 Side DSE Used Tracks : 0 R1 Side Shared Tracks : 0 } }

[root@smdw config]#

10. Create a second device group for the second snap session.

symdg create <device group name> -type RDF2

Sample output of command:

[root@smdw ~]# symdg create DCA2880_RDF2_SNAP_2 -type RDF2
[root@smdw ~]#

11. Add the same R2 standard devices that were added to device group 1 to device group 2, which was just created.

IMPORTANT! If the SYMAPI parameters were not enabled in step 1 of this section, this step will fail, since by default a device can only be in one device group at a time.

symdg -g <device group name> addall -sid <symm serial #> -devs <start dev>:<end dev>

Sample output of command:

[root@smdw ~]# symdg -g DCA2880_RDF2_SNAP_2 addall -sid 452 -devs 0411:04D1
[root@smdw ~]#

12. Add the virtual devices that will be used for the second snap session to device group 2.

Note: These should not be the same virtual devices added to device group 1 in step 8 of this section.

symdg -g <device group name> addall -sid <symm serial #> -devs <start dev>:<end dev> -vdev

Sample output of command:

[root@smdw config]# symdg -g DCA2880_RDF2_SNAP_2 addall -sid 452 -devs 05AB:066B -vdev
[root@smdw config]#

13. Display the device group to verify the R2 standard devices and virtual devices were added correctly to the group.

symdg show <device group name>


Sample output of command:

[root@smdw config]# symdg show DCA2880_RDF2_SNAP_2

Group Name: DCA2880_RDF2_SNAP_2

Group Type : RDF2 (RDFA) Device Group in GNS : No Valid : Yes Symmetrix ID : 000192604452 Group Creation Time : Mon Jun 27 11:17:07 2011 Vendor ID : EMC Corp Application ID : SYMCLI

Number of STD Devices in Group : 17 Number of Associated GK's : 0 Number of Locally-associated BCV's : 0 Number of Locally-associated VDEV's : 17 Number of Locally-associated TGT's : 0 Number of Remotely-associated VDEV's(STD RDF): 0 Number of Remotely-associated BCV's (STD RDF): 0 Number of Remotely-associated TGT's(TGT RDF) : 0 Number of Remotely-associated BCV's (BCV RDF): 0 Number of Remotely-assoc'd RBCV's (RBCV RDF) : 0 Number of Remotely-assoc'd BCV's (Hop-2 BCV) : 0 Number of Remotely-assoc'd VDEV's(Hop-2 VDEV): 0 Number of Remotely-assoc'd TGT's (Hop-2 TGT) : 0 Number of Composite Groups : 0 Composite Group Names : N/A

Standard (STD) Devices (17): { -------------------------------------------------------------------------------- Sym Device Cap LdevName PdevName Dev Config Att. Sts (MB) -------------------------------------------------------------------------------- DEV001 /dev/emcpowerf 0411 RDF2+TDEV (M) RW 2955015 DEV002 N/A 041D RDF2+TDEV (M) RW 2955015 DEV003 N/A 0429 RDF2+TDEV (M) RW 2955015 DEV004 N/A 0435 RDF2+TDEV (M) RW 2955015 DEV005 N/A 0441 RDF2+TDEV (M) RW 2955015 DEV006 N/A 044D RDF2+TDEV (M) RW 2955015 DEV007 N/A 0459 RDF2+TDEV (M) RW 2955015 DEV008 N/A 0465 RDF2+TDEV (M) RW 2955015 DEV009 N/A 0471 RDF2+TDEV (M) RW 2955015 DEV010 N/A 047D RDF2+TDEV (M) RW 2955015 DEV011 N/A 0489 RDF2+TDEV (M) RW 2955015 DEV012 N/A 0495 RDF2+TDEV (M) RW 2955015 DEV013 N/A 04A1 RDF2+TDEV (M) RW 2955015 DEV014 N/A 04AD RDF2+TDEV (M) RW 2955015


DEV015 N/A 04B9 RDF2+TDEV (M) RW 2955015 DEV016 N/A 04C5 RDF2+TDEV (M) RW 2955015 DEV017 N/A 04D1 RDF2+TDEV (M) RW 2955015 }

VDEV Devices Locally-associated (17): { -------------------------------------------------------------------------------- Sym Device Cap LdevName PdevName Dev Config Att. Sts (MB) -------------------------------------------------------------------------------- VDEV001 N/A 05AB VDEV (M) NR 2955015 VDEV002 N/A 05B7 VDEV (M) NR 2955015 VDEV003 N/A 05C3 VDEV (M) NR 2955015 VDEV004 N/A 05CF VDEV (M) NR 2955015 VDEV005 N/A 05DB VDEV (M) NR 2955015 VDEV006 N/A 05E7 VDEV (M) NR 2955015 VDEV007 N/A 05F3 VDEV (M) NR 2955015 VDEV008 N/A 05FF VDEV (M) NR 2955015 VDEV009 N/A 060B VDEV (M) NR 2955015 VDEV010 N/A 0617 VDEV (M) NR 2955015 VDEV011 N/A 0623 VDEV (M) NR 2955015 VDEV012 N/A 062F VDEV (M) NR 2955015 VDEV013 N/A 063B VDEV (M) NR 2955015 VDEV014 N/A 0647 VDEV (M) NR 2955015 VDEV015 N/A 0653 VDEV (M) NR 2955015 VDEV016 N/A 065F VDEV (M) NR 2955015 VDEV017 N/A 066B VDEV (M) NR 2955015 }

Device Group RDF Information { RDF Type : R2 RDF (RA) Group Number : 1 (00)

Remote Symmetrix ID : 000192604467

R2 Device Is Larger Than The R1 Device : False

Paired with a Diskless Device : False Paired with a Concurrent Device : False Paired with a Cascaded Device : False Thick Thin Relationship : False

RDF Pair Configuration : Normal RDF STAR Mode : False

RDF Mode : Synchronous RDF Adaptive Copy : Disabled RDF Adaptive Copy Write Pending State : N/A


RDF Adaptive Copy Skew (Tracks) : 32767

RDF Device Domino : Disabled

RDF Link Configuration : Fibre RDF Link Domino : Disabled Prevent Automatic RDF Link Recovery : Enabled Prevent RAs Online Upon Power ON : Enabled

Device RDF Status : Ready (RW)

Device RA Status : Ready (RW) Device Link Status : Not Ready (NR) Time of Last Device Link Status Change : N/A

Device Suspend State : Offline Device Consistency State : Enabled Device Consistency Exempt State : Disabled RDF R2 Not Ready If Invalid : Disabled Device Write Pacing Exempt State : Disabled Effective Write Pacing Exempt State : Disabled

Device RDF State : Ready (RW) Remote Device RDF State : Ready (RW)

RDF Pair State ( R1 <=\=> R2 ) : Split

Number of R1 Invalid Tracks : 106 Number of R2 Invalid Tracks : 0

RDFA Information: { Session Number : 0 Cycle Number : 0 Number of Devices in the Session : 17 Session Status : Inactive Consistency Exempt Devices : No Write Pacing Exempt Devices : No

Session Consistency State : N/A Minimum Cycle Time : 00:00:15 Average Cycle Time : 00:00:00 Duration of Last cycle : 00:00:00 Session Priority : 33

Tracks not Committed to the R2 Side: 0 Time that R2 is behind R1 : 291:47:05 R2 Image Capture Time : Wed Jun 15 07:43:38 2011 R2 Data is Consistent : True R1 Side Percent Cache In Use : 0 R2 Side Percent Cache In Use : 0


Transmit Idle Time : 00:00:00 R1 Side DSE Used Tracks : 0 R2 Side DSE Used Tracks : 0 R1 Side Shared Tracks : 0 } }

[root@smdw config]#

14. Repeat steps 10 - 13 for any additional snap sessions that are needed.

15. Follow steps 9 - 17 from “Creating a single SNAP of R2 standard devices” on page 77 to perform snap operations on any of the newly created device groups.

Each device group is treated independently of the others, so it is possible to generate rolling snaps of the Greenplum database and maintain multiple point-in-time copies, as sketched below.
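The following is a minimal sketch of such a rotation, assuming the two device groups and the R2_SNAP_POOL save pool created in this section; the alternation logic is illustrative and not part of the product:

# One-time setup: create and activate a session for each group.
for dg in DCA2880_RDF2_SNAP_1 DCA2880_RDF2_SNAP_2; do
    symsnap -g $dg create -svp R2_SNAP_POOL -noprompt
    symsnap -g $dg activate -consistent -noprompt
done

# On each later rotation, refresh only the oldest group so that the
# other group keeps its earlier point-in-time copy. Unmount the snap
# from any hosts before recreating it.
symsnap -g DCA2880_RDF2_SNAP_1 recreate -noprompt
symsnap -g DCA2880_RDF2_SNAP_1 activate -consistent -noprompt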


Mounting and Checking a Symmetrix SNAP of the Greenplum Database

This appendix contains the following information:

◆ Mounting and checking Symmetrix SNAP of Greenplum database ..... 104


Mounting and checking Symmetrix SNAP of Greenplum database

This procedure assumes that TimeFinder/Snap is configured and that a snap has been taken of the primary DCA's volumes with symsnap create. For detailed information about using TimeFinder/Snap, see the Creating SNAPs from the R2 Volumes for Disaster Recovery documentation, located on EMC Powerlink®.

To mount and check a Symmetrix SNAP of the Greenplum database, complete the following steps:

1. Activate the snap that was created.

symsnap -g <device group name> activate -consistent

Sample of command output:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 activate -consistent

Execute 'Activate' operation for device group 'DCA2880_RDF2_SNAP_2' (y/[n]) ? y

'Activate' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_2'. Please wait...

'Activate' operation successfully executed for device group 'DCA2880_RDF2_SNAP_2'.

[root@smdw ~]#

2. Query the device group to verify the snap was activated.

Devices should report that they are in "CopyOnWrite" state.

symsnap -g <device group name> query

Sample of command output:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 query

Device Group (DG) Name : DCA2880_RDF2_SNAP_2
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452

Source Device                    Target Device              State        Copy
------------------------- -------------------- ----------- ------------ ----
               Protected                            Changed
Logical  Sym     Tracks     Logical  Sym   GD       Tracks  SRC <=> TGT  (%)
------------------------- -------------------- ----------- ------------ ----


DEV001   0411  47280240    VDEV001  05AB  X.            0   CopyOnWrite   0
DEV002   041D  47280240    VDEV002  05B7  X.            0   CopyOnWrite   0
DEV003   0429  47280240    VDEV003  05C3  X.            0   CopyOnWrite   0
DEV004   0435  47280240    VDEV004  05CF  X.            0   CopyOnWrite   0
DEV005   0441  47280240    VDEV005  05DB  X.            0   CopyOnWrite   0
DEV006   044D  47280240    VDEV006  05E7  X.            0   CopyOnWrite   0
DEV007   0459  47280240    VDEV007  05F3  X.            0   CopyOnWrite   0
DEV008   0465  47280240    VDEV008  05FF  X.            0   CopyOnWrite   0
DEV009   0471  47280240    VDEV009  060B  X.            0   CopyOnWrite   0
DEV010   047D  47280240    VDEV010  0617  X.            0   CopyOnWrite   0
DEV011   0489  47280240    VDEV011  0623  X.            0   CopyOnWrite   0
DEV012   0495  47280240    VDEV012  062F  X.            0   CopyOnWrite   0
DEV013   04A1  47280240    VDEV013  063B  X.            0   CopyOnWrite   0
DEV014   04AD  47280240    VDEV014  0647  X.            0   CopyOnWrite   0
DEV015   04B9  47280240    VDEV015  0653  X.            0   CopyOnWrite   0
DEV016   04C5  47280240    VDEV016  065F  X.            0   CopyOnWrite   0
DEV017   04D1  47280240    VDEV017  066B  X.            0   CopyOnWrite   0

Total                 --------                  ----------
  Track(s)           803764080                           0
  MB(s)               50235264                           0

Legend:

(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND
         maximum inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

3. Assign the SNAP virtual devices to the appropriate storage groups for the DCA nodes where the devices need to be mounted.

a. Gather information from each of the DCA nodes to identify which system is using which R1 device so that they can be mapped to the R2 and subsequently to the virtual device used for the SNAP.

gpssh -f <host file> 'powermt display dev=all |egrep "Pseudo|Logical"'

b. Run a query against the SRDF composite group to get the R1/R2 device mapping, to compare with the SNAP device group and identify how the devices are paired.

symrdf -cg <composite group> query

c. Run a query against the SNAP device group to map each R2 device to the virtual device used for the snap.

symsnap -g <device group name> query


d. Add the devices to the appropriate storage group based on the mapping identified in steps a – c.

symaccess -sid <symm serial #> -type storage -name <sg name> add devs <symdev>,<symdev>

Sample of command output:

[root@mdw ~]# gpssh -f gp-hosts_1-8 'powermt display dev=all |egrep "Pseudo|Logical"'
[sdw4] Pseudo name=emcpowerf
[sdw4] Logical device ID=02BB
[sdw4] Pseudo name=emcpowere
[sdw4] Logical device ID=02C7
[sdw5] Pseudo name=emcpowerf
[sdw5] Logical device ID=02D3
[sdw5] Pseudo name=emcpowere
[sdw5] Logical device ID=02DF
[sdw6] Pseudo name=emcpowerf
[sdw6] Logical device ID=02EB
[sdw6] Pseudo name=emcpowere
[sdw6] Logical device ID=02F7
[sdw7] Pseudo name=emcpowerf
[sdw7] Logical device ID=0303
[sdw7] Pseudo name=emcpowere
[sdw7] Logical device ID=030F
[sdw1] Pseudo name=emcpowerf
[sdw1] Logical device ID=0273
[sdw1] Pseudo name=emcpowere
[sdw1] Logical device ID=027F
[sdw2] Pseudo name=emcpowerf
[sdw2] Logical device ID=028B
[sdw2] Pseudo name=emcpowere
[sdw2] Logical device ID=0297
[sdw3] Pseudo name=emcpowerf
[sdw3] Logical device ID=02A3
[sdw3] Pseudo name=emcpowere
[sdw3] Logical device ID=02AF
[sdw8] Pseudo name=emcpowerf
[sdw8] Logical device ID=031B
[sdw8] Pseudo name=emcpowere
[sdw8] Logical device ID=0327
[ mdw] Pseudo name=emcpowerd
[ mdw] Logical device ID=0267
[root@mdw ~]#
[root@mdw ~]# symrdf query -cg DCA_R1_CG

Composite Group Name       : DCA_R1_CG
Composite Group Type       : RDF1
Number of Symmetrix Units  : 1
Number of RDF (RA) Groups  : 1
RDF Consistency Mode       : SYNC


Symmetrix ID               : 000192604467  (Microcode Version: 5875)
Remote Symmetrix ID        : 000192604452  (Microcode Version: 5875)
RDF (RA) Group Number      : 1 (00)

               Source (R1) View            Target (R2) View       MODES   STATES
-------------------------------- ------------------------- ----- ------------
             ST                    LI  ST                          C    S
Standard      A                     N   A                          o    u
Logical   Sym T   R1 Inv  R2 Inv    K   T  Sym  R1 Inv  R2 Inv     n    s   RDF Pair
Device    Dev E   Tracks  Tracks    S   E  Dev  Tracks  Tracks MDAE s    p   STATE
-------------------------------- -- ----------------------- ----- ------------

DEV4467*  0267 RW      0       0   RW 0411 WD      0       0 S... X    -   Synchronized
DEV4467*  0273 RW      0       0   RW 041D WD      0       0 S... X    -   Synchronized
DEV4467*  027F RW      0       0   RW 0429 WD      0       0 S... X    -   Synchronized
DEV4467*  028B RW      0       0   RW 0435 WD      0       0 S... X    -   Synchronized
DEV4467*  0297 RW      0       0   RW 0441 WD      0       0 S... X    -   Synchronized
DEV4467*  02A3 RW      0       0   RW 044D WD      0       0 S... X    -   Synchronized
DEV4467*  02AF RW      0       0   RW 0459 WD      0       0 S... X    -   Synchronized
DEV4467*  02BB RW      0       0   RW 0465 WD      0       0 S... X    -   Synchronized
DEV4467*  02C7 RW      0       0   RW 0471 WD      0       0 S... X    -   Synchronized
DEV4467*  02D3 RW      0       0   RW 047D WD      0       0 S... X    -   Synchronized
DEV4467*  02DF RW      0       0   RW 0489 WD      0       0 S... X    -   Synchronized
DEV4467*  02EB RW      0       0   RW 0495 WD      0       0 S... X    -   Synchronized
DEV4467*  02F7 RW      0       0   RW 04A1 WD      0       0 S... X    -   Synchronized
DEV4467*  0303 RW      0       0   RW 04AD WD      0       0 S... X    -   Synchronized
DEV4467*  030F RW      0       0   RW 04B9 WD      0       0 S... X    -   Synchronized
DEV4467*  031B RW      0       0   RW 04C5 WD      0       0 S... X    -   Synchronized
DEV4467*  0327 RW      0       0   RW 04D1 WD      0       0 S... X    -   Synchronized

Total               -------  -------       -------  -------
  Track(s)                0        0             0        0
  MBs                   0.0      0.0           0.0      0.0

Legend for MODES:

 M(ode of Operation)    : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
 D(omino)               : X = Enabled, . = Disabled
 A(daptive Copy)        : D = Disk Mode, W = WP Mode, . = ACp off
 (Consistency) E(xempt) : X = Enabled, . = Disabled, M = Mixed, - = N/A

Legend for STATES:

 Cons(istency State) : X = Enabled, . = Disabled, M = Mixed, - = N/A
 Susp(end State)     : X = Online, . = Offline, P = Offline Pending, - = N/A

[root@mdw ~]#
[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 query

Device Group (DG) Name : DCA2880_RDF2_SNAP_2
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452


Source Device                    Target Device              State        Copy
------------------------- -------------------- ----------- ------------ ----
               Protected                            Changed
Logical  Sym     Tracks     Logical  Sym   GD       Tracks  SRC <=> TGT  (%)
------------------------- -------------------- ----------- ------------ ----

DEV001   0411  47280240    VDEV001  05AB  X.            0   CopyOnWrite   0
DEV002   041D  47280240    VDEV002  05B7  X.            0   CopyOnWrite   0
DEV003   0429  47280240    VDEV003  05C3  X.            0   CopyOnWrite   0
DEV004   0435  47280240    VDEV004  05CF  X.            0   CopyOnWrite   0
DEV005   0441  47280240    VDEV005  05DB  X.            0   CopyOnWrite   0
DEV006   044D  47280240    VDEV006  05E7  X.            0   CopyOnWrite   0
DEV007   0459  47280240    VDEV007  05F3  X.            0   CopyOnWrite   0
DEV008   0465  47280240    VDEV008  05FF  X.            0   CopyOnWrite   0
DEV009   0471  47280240    VDEV009  060B  X.            0   CopyOnWrite   0
DEV010   047D  47280240    VDEV010  0617  X.            0   CopyOnWrite   0
DEV011   0489  47280240    VDEV011  0623  X.            0   CopyOnWrite   0
DEV012   0495  47280240    VDEV012  062F  X.            0   CopyOnWrite   0
DEV013   04A1  47280240    VDEV013  063B  X.            0   CopyOnWrite   0
DEV014   04AD  47280240    VDEV014  0647  X.            0   CopyOnWrite   0
DEV015   04B9  47280240    VDEV015  0653  X.            0   CopyOnWrite   0
DEV016   04C5  47280240    VDEV016  065F  X.            0   CopyOnWrite   0
DEV017   04D1  47280240    VDEV017  066B  X.            0   CopyOnWrite   0

Total                 --------                  ----------
  Track(s)           803764080                           0
  MB(s)               50235264                           0

Legend:

(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND
         maximum inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

[root@smdw ~]#
[root@smdw ~]# symaccess -sid 452 list

Symmetrix ID : 000192604452

Group Name                       Type
-------------------------------- ---------
mdw1_ig                          Initiator
sdw1_ig                          Initiator
sdw2_ig                          Initiator
sdw3_ig                          Initiator
sdw4_ig                          Initiator
sdw5_ig                          Initiator
sdw6_ig                          Initiator


sdw7_ig                          Initiator
sdw8_ig                          Initiator
DCA_PG                           Port
mdw1_sg                          Storage
sdw1_sg                          Storage
sdw2_sg                          Storage
sdw3_sg                          Storage
sdw4_sg                          Storage
sdw5_sg                          Storage
sdw6_sg                          Storage
sdw7_sg                          Storage
sdw8_sg                          Storage

[root@smdw ~]# symaccess -sid 452 -type storage -name mdw1_sg add devs 05AB
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw1_sg add devs 05B7,05C3
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw2_sg add devs 05CF,05DB
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw3_sg add devs 05E7,05F3
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw4_sg add devs 05FF,060B
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw5_sg add devs 0617,0623
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw6_sg add devs 062F,063B
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw7_sg add devs 0647,0653
[root@smdw ~]# symaccess -sid 452 -type storage -name sdw8_sg add devs 065F,066B
[root@smdw ~]#

4. Force the new DCA nodes where the SNAP will be mounted to probe their buses to find the newly added devices.

The command below, when run with gpssh, probes all required buses on the DCA. Some errors will occur but can be ignored: host buses 3 and 4 do not exist on the segment servers; they exist only on the master and standby master.

for i in 0 1 2 3 4; do echo "- - -" > /sys/class/scsi_host/host$i/scan; done

Sample output of command:

[root@smdw ~]# gpssh -f my_gp-hosts_1-8 'for i in 0 1 2 3 4; do echo - - - > /sys/class/scsi_host/host$i/scan; done'
[sdw8] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw8] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[sdw6] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw6] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[sdw7] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw7] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[sdw4] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw4] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[sdw5] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw5] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[sdw2] bash: /sys/class/scsi_host/host3/scan: No such file or directory


[sdw2] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[sdw3] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw3] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[ mdw]
[sdw1] bash: /sys/class/scsi_host/host3/scan: No such file or directory
[sdw1] bash: /sys/class/scsi_host/host4/scan: No such file or directory
[root@smdw ~]#

5. Issue a powermt config to each of the segment servers to generate power devices for the newly found devices.

powermt config

Sample output of command:

[root@smdw ~]# gpssh -f my_gp-hosts_1-8 'powermt config'
[sdw8]
[sdw6]
[sdw7]
[sdw4]
[sdw5]
[sdw2]
[sdw3]
[ mdw]
[sdw1]
[root@smdw ~]#

6. Verify that the newly added devices are now configured with power devices.

Note: If more than one set of snap devices were assigned to the host, multiple power devices will be seen.

powermt display dev=all |egrep "Pseudo|Logical"

Sample output of command:

[root@smdw ~]# gpssh -f my_gp-hosts_1-8 'powermt display dev=all|egrep "Pseudo|Logical"'
[sdw8] Pseudo name=emcpowerh
[sdw8] Logical device ID=04C5
[sdw8] Pseudo name=emcpowerg
[sdw8] Logical device ID=04D1
[sdw8] Pseudo name=emcpowerf
[sdw8] Logical device ID=0593
[sdw8] Pseudo name=emcpowere
[sdw8] Logical device ID=059F
[sdw8] Pseudo name=emcpowerj
[sdw8] Logical device ID=065F
[sdw8] Pseudo name=emcpoweri
[sdw8] Logical device ID=066B


-- Data cut due to output length
[root@smdw ~]#

7. Save the PowerPath configuration.

powermt save

Sample output of command:

[root@smdw ~]# gpssh -f my_gp-hosts_1-8 'powermt save'
[sdw8]
[sdw6]
[sdw7]
[sdw4]
[sdw5]
[sdw2]
[sdw3]
[ mdw]
[sdw1]
[root@smdw ~]#

8. Mount the virtual device containing the master database on the master server.

mount /dev/<emcpower device> <mountpoint> -o noatime,inode64,allocsize=16m

Sample output of command:

[root@smdw ~]# mount /dev/emcpowere /data -o noatime,inode64,allocsize=16m
[root@smdw ~]#

9. Mount the appropriate power device(s) on each of the segment servers.

gpssh -h <segment server> 'mount /dev/<emcpower device> <mountpoint> -o noatime,inode64,allocsize=16m; mount /dev/<emcpower device> <mountpoint> -o noatime,inode64,allocsize=16m'

Sample output of command:

[root@smdw ~]# gpssh -h sdw1 'mount /dev/emcpowerj /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerk /mirror2 -o noatime,inode64,allocsize=16m'
[sdw1]
[root@smdw ~]# gpssh -h sdw2 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'
[sdw2]
[root@smdw ~]# gpssh -h sdw3 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'
[sdw3]
[root@smdw ~]# gpssh -h sdw4 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'


[sdw4]
[root@smdw ~]# gpssh -h sdw5 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'
[sdw5]
[root@smdw ~]# gpssh -h sdw6 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'
[sdw6]
[root@smdw ~]# gpssh -h sdw7 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'
[sdw7]
[root@smdw ~]# gpssh -h sdw8 'mount /dev/emcpoweri /mirror1 -o noatime,inode64,allocsize=16m;mount /dev/emcpowerj /mirror2 -o noatime,inode64,allocsize=16m'
[sdw8]
[root@smdw ~]#

10. Verify that the SNAP devices were mounted successfully.

gpssh -f <host file> 'df -h |egrep "<mountloc>|<mountloc>"'

Sample output of command:

[root@smdw ~]# gpssh -f my_gp-hosts_1-8 'df -h |egrep "data|mirror"'
[sdw8] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw8] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw8] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1
[sdw8] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[sdw6] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw6] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw6] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1
[sdw6] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[sdw7] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw7] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw7] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1
[sdw7] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[sdw4] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw4] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw4] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1
[sdw4] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[sdw5] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw5] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw5] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1
[sdw5] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[sdw2] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw2] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw2] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1
[sdw2] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[sdw3] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw3] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw3] /dev/emcpoweri 2.9T 2.6G 2.9T 1% /mirror1


[sdw3] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror2
[ mdw] /dev/emcpowere 2.9T 263M 2.9T 1% /data
[sdw1] /dev/sdb 2.7T 204G 2.5T 8% /data1
[sdw1] /dev/sdd 2.7T 204G 2.5T 8% /data2
[sdw1] /dev/emcpowerj 2.9T 2.6G 2.9T 1% /mirror1
[sdw1] /dev/emcpowerk 2.9T 2.6G 2.9T 1% /mirror2
[root@smdw ~]#

11. Switch to the gpadmin user on the new master server.

su - gpadmin

Sample output of command:

[root@smdw ~]# su - gpadmin
[gpadmin@smdw ~]$

12. Start the database in maintenance mode.

gpstart -m

Sample output of command:

[gpadmin@smdw ~]$ gpstart -m
20110630:06:35:54:gpstart:smdw:gpadmin-[INFO]:-Starting gpstart with args: -m
20110630:06:35:54:gpstart:smdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201101130'
20110630:06:35:55:gpstart:smdw:gpadmin-[WARNING]:-postmaster.pid file exists on Master, checking if recovery startup required
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-Commencing recovery startup checks
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-No socket connection or lock file in /tmp found for port=5432
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-No Master instance process, entering recovery startup mode
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-Clearing Master instance pid file
20110630:06:35:55:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110630:06:35:56:gpstart:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110630:06:35:56:gpstart:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110630:06:35:57:gpstart:smdw:gpadmin-[INFO]:-Commencing forced instance shutdown
20110630:06:35:58:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110630:06:35:59:gpstart:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110630:06:35:59:gpstart:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110630:06:36:00:gpstart:smdw:gpadmin-[INFO]:-Master Started...
[gpadmin@smdw ~]$


13. Connect to the database using the utility role.

PGOPTIONS='-c gp_session_role=utility' psql <database name>

Sample output of command:

[gpadmin@smdw ~]$ PGOPTIONS='-c gp_session_role=utility' psql srdftesting
psql (8.2.15)
Type "help" for help.

14. Run a query against the gp_segment_configuration and pg_filespace_entry catalog tables and verify that all of the mirror segments were up at the time of the snap.

If any of the mirror segments were down during the snap, the snap does not contain a consistent copy of the database: the R2 device associated with a down mirror segment was not being updated from its primary segment and therefore does not contain the latest data.

select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid

Sample output of command:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------
1 | -1 | p | p | s | u | mdw | /data/master/gpseg-1
2 | 0 | p | p | s | u | sdw1 | /data1/primary/gpseg0
3 | 1 | p | p | s | u | sdw1 | /data1/primary/gpseg1
4 | 2 | p | p | s | u | sdw1 | /data1/primary/gpseg2
5 | 3 | p | p | s | u | sdw1 | /data2/primary/gpseg3
6 | 4 | p | p | s | u | sdw1 | /data2/primary/gpseg4
7 | 5 | p | p | s | u | sdw1 | /data2/primary/gpseg5
8 | 6 | p | p | s | u | sdw2 | /data1/primary/gpseg6
9 | 7 | p | p | s | u | sdw2 | /data1/primary/gpseg7
10 | 8 | p | p | s | u | sdw2 | /data1/primary/gpseg8
11 | 9 | p | p | s | u | sdw2 | /data2/primary/gpseg9
12 | 10 | p | p | s | u | sdw2 | /data2/primary/gpseg10
13 | 11 | p | p | s | u | sdw2 | /data2/primary/gpseg11
14 | 12 | p | p | s | u | sdw3 | /data1/primary/gpseg12
15 | 13 | p | p | s | u | sdw3 | /data1/primary/gpseg13
16 | 14 | p | p | s | u | sdw3 | /data1/primary/gpseg14
17 | 15 | p | p | s | u | sdw3 | /data2/primary/gpseg15
18 | 16 | p | p | s | u | sdw3 | /data2/primary/gpseg16


19 | 17 | p | p | s | u | sdw3 | /data2/primary/gpseg17

-- Data cut due to output length

94 | 44 | m | m | s | u | sdw3 | /mirror2/mirror/gpseg44
95 | 45 | m | m | s | u | sdw4 | /mirror1/mirror/gpseg45
96 | 46 | m | m | s | u | sdw5 | /mirror1/mirror/gpseg46
97 | 47 | m | m | s | u | sdw6 | /mirror1/mirror/gpseg47
(97 rows)

srdftesting=#

15. If all of the mirror segments were up and synchronized at the time of the snap, bring the database online on the recovery DCA using only the mirror segments.

Update the gp_segment_configuration table and mark the primary segment roles as 'mirror' and the status as 'down'.

update gp_segment_configuration sc set role='m',status='d' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/<unique dir path value>/%' and sc.content >= 0;

Sample output of command:

srdftesting=# update gp_segment_configuration sc set role='m',status='d' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/data_/%' and sc.content >= 0;
UPDATE 48
srdftesting=#

Sample output of command after running the update statement:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------
1 | -1 | p | p | s | u | mdw | /data/master/gpseg-1
2 | 0 | m | p | s | d | sdw1 | /data1/primary/gpseg0
3 | 1 | m | p | s | d | sdw1 | /data1/primary/gpseg1
4 | 2 | m | p | s | d | sdw1 | /data1/primary/gpseg2
5 | 3 | m | p | s | d | sdw1 | /data2/primary/gpseg3
6 | 4 | m | p | s | d | sdw1 | /data2/primary/gpseg4
7 | 5 | m | p | s | d | sdw1 | /data2/primary/gpseg5
8 | 6 | m | p | s | d | sdw2 | /data1/primary/gpseg6
9 | 7 | m | p | s | d | sdw2 | /data1/primary/gpseg7
10 | 8 | m | p | s | d | sdw2 | /data1/primary/gpseg8
11 | 9 | m | p | s | d | sdw2 | /data2/primary/gpseg9
12 | 10 | m | p | s | d | sdw2 | /data2/primary/gpseg10
13 | 11 | m | p | s | d | sdw2 | /data2/primary/gpseg11

Mounting and checking Symmetrix SNAP of Greenplum database115

Page 118: Green Plum 300-012-943

116

Mounting and Checking a Symmetrix SNAP of the Greenplum Database

14 | 12 | m | p | s | d | sdw3 | /data1/primary/gpseg12 15 | 13 | m | p | s | d | sdw3 | /data1/primary/gpseg13 16 | 14 | m | p | s | d | sdw3 | /data1/primary/gpseg14 17 | 15 | m | p | s | d | sdw3 | /data2/primary/gpseg15 18 | 16 | m | p | s | d | sdw3 | /data2/primary/gpseg16 19 | 17 | m | p | s | d | sdw3 | /data2/primary/gpseg17

-- Data cut due to output length

90 | 40 | m | m | s | u | sdw4 | /mirror1/mirror/gpseg40 91 | 41 | m | m | s | u | sdw5 | /mirror1/mirror/gpseg41 92 | 42 | m | m | s | u | sdw1 | /mirror2/mirror/gpseg42 93 | 43 | m | m | s | u | sdw2 | /mirror2/mirror/gpseg43 94 | 44 | m | m | s | u | sdw3 | /mirror2/mirror/gpseg44 95 | 45 | m | m | s | u | sdw4 | /mirror1/mirror/gpseg45 96 | 46 | m | m | s | u | sdw5 | /mirror1/mirror/gpseg46 97 | 47 | m | m | s | u | sdw6 | /mirror1/mirror/gpseg47(97 rows)

srdftesting=#
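
Note: If you prefer to script this step together with step 16, both catalog updates can be issued in a single utility-mode psql call from the shell. The following is a minimal sketch only; it assumes the database name (srdftesting) and the path patterns shown in the samples, so substitute your own values.

# Sketch: apply both role updates (steps 15 and 16) in one utility-mode session.
# Database name and path patterns are taken from the samples above.
PGOPTIONS='-c gp_session_role=utility' psql srdftesting <<'SQL'
update gp_segment_configuration sc set role='m', status='d'
  from pg_filespace_entry fse
 where sc.dbid = fse.fsedbid
   and fse.fselocation like '/data_/%'
   and sc.content >= 0;
update gp_segment_configuration sc set role='p', mode='c'
  from pg_filespace_entry fse
 where sc.dbid = fse.fsedbid
   and fse.fselocation like '/mirror_/%'
   and sc.content >= 0;
SQL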

16. Update the gp_segment_configuration catalog table and set the mirror segments to a role of 'primary' and a mode of 'change tracking'.

update gp_segment_configuration sc set role='p',mode='c' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/<unique dir path value>/%' and sc.content >= 0;

Sample output of command:

srdftesting=# update gp_segment_configuration sc set role='p',mode='c' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/mirror_/%' and sc.content >= 0;
UPDATE 48
srdftesting=#

Sample output of command after running the update statement:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg0
    3 |       1 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg1
    4 |       2 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg2
    5 |       3 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg3
    6 |       4 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg4
    7 |       5 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg5
    8 |       6 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg6
    9 |       7 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg7
   10 |       8 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg8
   11 |       9 | m    | p              | s    | d      | sdw2     | /data2/primary/gpseg9
   12 |      10 | m    | p              | s    | d      | sdw2     | /data2/primary/gpseg10
   13 |      11 | m    | p              | s    | d      | sdw2     | /data2/primary/gpseg11
   14 |      12 | m    | p              | s    | d      | sdw3     | /data1/primary/gpseg12
   15 |      13 | m    | p              | s    | d      | sdw3     | /data1/primary/gpseg13
   16 |      14 | m    | p              | s    | d      | sdw3     | /data1/primary/gpseg14
   17 |      15 | m    | p              | s    | d      | sdw3     | /data2/primary/gpseg15
   18 |      16 | m    | p              | s    | d      | sdw3     | /data2/primary/gpseg16
   19 |      17 | m    | p              | s    | d      | sdw3     | /data2/primary/gpseg17
   20 |      18 | m    | p              | s    | d      | sdw4     | /data1/primary/gpseg18

-- Data cut due to output length

   90 |      40 | p    | m              | c    | u      | sdw4     | /mirror1/mirror/gpseg40
   91 |      41 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg41
   92 |      42 | p    | m              | c    | u      | sdw1     | /mirror2/mirror/gpseg42
   93 |      43 | p    | m              | c    | u      | sdw2     | /mirror2/mirror/gpseg43
   94 |      44 | p    | m              | c    | u      | sdw3     | /mirror2/mirror/gpseg44
   95 |      45 | p    | m              | c    | u      | sdw4     | /mirror1/mirror/gpseg45
   96 |      46 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | p    | m              | c    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)

srdftesting=#

17. Exit the psql session and return to the command prompt.

\q

Sample output of command:

srdftesting=# \q
[gpadmin@smdw ~]$

18. Stop the master server to take the database out of maintenance mode.

gpstop -m

Sample output of command:

[gpadmin@smdw ~]$ gpstop -m
20110701:07:20:41:gpstop:smdw:gpadmin-[INFO]:-Starting gpstop with args: -m
20110701:07:20:41:gpstop:smdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110701:07:20:41:gpstop:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110701:07:20:41:gpstop:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110701:07:20:42:gpstop:smdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110701:07:20:42:gpstop:smdw:gpadmin-[INFO]:-There are 0 connections to the database


20110701:07:20:42:gpstop:smdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20110701:07:20:42:gpstop:smdw:gpadmin-[INFO]:-Master host=mdw
20110701:07:20:42:gpstop:smdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20110701:07:20:42:gpstop:smdw:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
[gpadmin@smdw ~]$

19. Restart the database normally.

Watch the output to confirm that the primary segments are skipped and that only the mirror segments are started.

gpstart

Sample output of command:

20110701:07:20:54:gpstart:smdw:gpadmin-[INFO]:-Starting gpstart with args:
20110701:07:20:54:gpstart:smdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110701:07:20:54:gpstart:smdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110701:07:20:54:gpstart:smdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201101130'
20110701:07:20:54:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110701:07:20:55:gpstart:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110701:07:20:55:gpstart:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110701:07:20:55:gpstart:smdw:gpadmin-[INFO]:-Master Started...
20110701:07:20:55:gpstart:smdw:gpadmin-[INFO]:-Shutting down master
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg0 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg1 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg2 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg3 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg4 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg5 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data1/primary/gpseg6 <<<<<

-- Data cut due to output length

20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw8 directory /data2/primary/gpseg46 <<<<<


20110701:07:20:57:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw8 directory /data2/primary/gpseg47 <<<<<
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:---------------------------
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Master instance parameters
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:---------------------------
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Database         = template1
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Master Port      = 5432
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Master directory = /data/master/gpseg-1
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Timeout          = 600 seconds
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Master standby   = Off
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-Segment instances that will be started
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   Host   Datadir                  Port    Role
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw2   /mirror2/mirror/gpseg0   50000   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw3   /mirror2/mirror/gpseg1   50001   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw4   /mirror2/mirror/gpseg2   50002   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw5   /mirror1/mirror/gpseg3   50003   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw6   /mirror1/mirror/gpseg4   50004   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw7   /mirror1/mirror/gpseg5   50005   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw3   /mirror2/mirror/gpseg6   50000   Primary

-- Data cut due to output length

20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw5   /mirror1/mirror/gpseg46   50004   Primary
20110701:07:20:57:gpstart:smdw:gpadmin-[INFO]:-   sdw6   /mirror1/mirror/gpseg47   50005   Primary

Continue with Greenplum instance startup Yy|Nn (default=N):> y
20110701:07:21:02:gpstart:smdw:gpadmin-[INFO]:-No standby master configured. skipping...
20110701:07:21:02:gpstart:smdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait......
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:-Process results...
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:-   Successful segment starts = 48
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:-   Failed segment starts = 0


20110701:07:21:05:gpstart:smdw:gpadmin-[WARNING]:-Skipped segment starts (segments are marked down in configuration) = 48 <<<<<<<<
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:-
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:-Successfully started 48 of 48 segment instances, skipped 48 other segments
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110701:07:21:05:gpstart:smdw:gpadmin-[WARNING]:-******************************
20110701:07:21:05:gpstart:smdw:gpadmin-[WARNING]:-There are 48 segment(s) marked down in the database
20110701:07:21:05:gpstart:smdw:gpadmin-[WARNING]:-To recover from this current state, review usage of the gprecoverseg
20110701:07:21:05:gpstart:smdw:gpadmin-[WARNING]:-management utility which will recover failed segment instance databases.
20110701:07:21:05:gpstart:smdw:gpadmin-[WARNING]:-******************************
20110701:07:21:05:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance mdw directory /data/master/gpseg-1
20110701:07:21:06:gpstart:smdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw instance active
20110701:07:21:19:gpstart:smdw:gpadmin-[WARNING]:-Database started but warnings generated <<<<<
20110701:07:21:19:gpstart:smdw:gpadmin-[INFO]:-Check status of database with gpstate utility
[gpadmin@smdw ~]$
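
Note: Before moving on to gpcheckcat, it can be useful to double-check which segments actually came up. The following is a minimal sketch; the 'down' token it greps for is an assumption based on the gpstate output format in this release.

# Sketch: confirm that only the mirror segments are up after the restart.
gpstate -s > /tmp/gpstate_snapcheck.out
grep -ci 'down' /tmp/gpstate_snapcheck.out   # expect hits only for the 48 skipped primaries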

20. To finish verifying that the SNAP is valid and can be used to conduct a recovery of the primary system, execute gpcheckcat to check all of the catalogs in the database.

/usr/local/greenplum-db/bin/lib/gpcheckcat -p 5432 <database name>

Sample output of command:

[gpadmin@smdw ~]$ /usr/local/greenplum-db/bin/lib/gpcheckcat -p 5432 srdftesting
20110701:08:41:49:gpcheckcat:default-[INFO]:-master at host localhost, port 5432, user gpadmin, database srdftesting
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:-Performing check for database 'srdftesting', version 4.1
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:-Checking pg_partition ...
20110701:08:41:49:gpcheckcat:default-[INFO]:-[OK] pg_partition branch integrity
20110701:08:41:49:gpcheckcat:default-[INFO]:-[OK] partition with oids check
20110701:08:41:49:gpcheckcat:default-[INFO]:-[OK] partition distribution policy check
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:-Performing segment specific consistency checks
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:49:gpcheckcat:default-[INFO]:-Checking pg_class.reltoastrelid


20110701:08:41:54:gpcheckcat:default-[INFO]:-[OK] pg_class.reltoastrelid
20110701:08:41:54:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:54:gpcheckcat:default-[INFO]:-Checking for leaked temporary schemas
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] temporary schemas
20110701:08:41:55:gpcheckcat:default-[INFO]:-Checking missing schema definitions
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] missing schema definitions
20110701:08:41:55:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:55:gpcheckcat:default-[INFO]:-Checking constraints on randomly distributed tables
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] randomly distributed tables
20110701:08:41:55:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:55:gpcheckcat:default-[INFO]:-Checking that unique constraints are only on distribution columns
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] unique constraints
20110701:08:41:55:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:55:gpcheckcat:default-[INFO]:-Checking table ownership
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] table ownership
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] type ownership
20110701:08:41:55:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:55:gpcheckcat:default-[INFO]:-Checking indexes
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] relhasindex
20110701:08:41:55:gpcheckcat:default-[INFO]:-[OK] deprecated indexes
20110701:08:41:55:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:55:gpcheckcat:default-[INFO]:-Checking Object Dependencies
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] basic object dependencies
20110701:08:41:57:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:57:gpcheckcat:default-[INFO]:-Checking persistent tables
20110701:08:41:57:gpcheckcat:default-[WARNING]:-System requires recovery via gprecoverseg
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_filespace_node <=> pg_filespace
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_filespace_node <=> gp_global_sequence
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_database_node <=> pg_database
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_database_node <=> pg_tablespace
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_database_node <=> gp_global_sequence
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_tablespace_node <=> pg_tablespace
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_tablespace_node <=> pg_filespace
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_tablespace_node <=> gp_global_sequence
20110701:08:41:57:gpcheckcat:default-[INFO]:-[OK] gp_persistent_relation_node <=> pg_tablespace
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] gp_persistent_relation_node <=> pg_database
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] gp_persistent_relation_node <=> gp_relation_node


20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] gp_persistent_relation_node <=> pg_class
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] gp_persistent_relation_node <=> gp_global_sequence
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] gp_persistent_relation_node <=> filesystem
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] pg_database <=> filesystem
20110701:08:41:58:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:58:gpcheckcat:default-[INFO]:-Performing foreign key and cross database consistency tests
20110701:08:41:58:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:58:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:41:58:gpcheckcat:default-[INFO]:-Checking foreign keys ...
20110701:08:41:58:gpcheckcat:default-[INFO]:- 28 tables, 102 queries
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(aggfnoid)
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(aggtransfn)
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(agginvtransfn)
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(aggprelimfn)
20110701:08:41:58:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(agginvprelimfn)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(aggfinalfn)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(aggsortop)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_aggregate(aggtranstype)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_am(aminsert)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_am(ambeginscan)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_am(amgettuple)
20110701:08:41:59:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_am(amgetmulti)

-- Data cut due to output length

20110701:08:42:00:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_window(winprefunc)
20110701:08:42:00:gpcheckcat:default-[INFO]:-[OK] Foreign key check for pg_window(winfinfunc)
20110701:08:42:00:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:42:00:gpcheckcat:default-[INFO]:-Checking cross database consistency ...
20110701:08:42:00:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for gp_version_at_initdb


20110701:08:42:01:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_aggregate
20110701:08:42:01:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_am
20110701:08:42:02:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_amop
20110701:08:42:02:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_amproc
20110701:08:42:02:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_appendonly
20110701:08:42:02:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_appendonly_alter_column
20110701:08:42:02:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_attrdef
20110701:08:42:07:gpcheckcat:default-[INFO]:-[OK] Cross consistency check for pg_attribute

-- Data cut due to output length

20110701:08:42:18:gpcheckcat:default-[INFO]:-[OK] Cross consistency acl check for pg_proc
20110701:08:42:18:gpcheckcat:default-[INFO]:-[OK] Cross consistency acl check for pg_tablespace
20110701:08:42:18:gpcheckcat:default-[INFO]:------------------------------------
20110701:08:42:18:gpcheckcat:default-[INFO]:-Check complete
[gpadmin@smdw ~]$
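
Note: Because the gpcheckcat output is long, it can help to capture it to a file and scan for anything other than [OK]. The following is a sketch only; the [WARNING] token is taken from the sample output above, while the ERROR and FAIL tokens are assumptions.

# Sketch: capture the gpcheckcat run and scan the log for problems.
/usr/local/greenplum-db/bin/lib/gpcheckcat -p 5432 srdftesting 2>&1 | tee /tmp/gpcheckcat.out
grep -E 'WARNING|ERROR|FAIL' /tmp/gpcheckcat.out   # the recovery WARNING shown above is expected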

21. Stop the database.

gpstop

22. Unmount all of the devices. If the checks were all successful, the devices can now be used for a recovery; otherwise, re-create the SNAP and repeat the checks until a good snap is identified.

umount <mountpoint>

Sample output of command:

[root@smdw ~]# umount /data
[root@smdw ~]# gpssh -f my_gp-hosts_1-8 'umount /mirror1;umount /mirror2'
[sdw8]
[sdw6]
[sdw7]
[sdw4]
[sdw5]
[sdw2]
[sdw3]
[ mdw] umount: /mirror1: not found
[ mdw] umount: /mirror2: not found
[sdw1]
[root@smdw ~]#
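
Note: One way to confirm that every host has actually released the SAN mirror devices is to check for leftover mounts with gpssh. A minimal sketch, using the host file from the sample above:

# Sketch: verify that no /mirror1 or /mirror2 mounts remain on any host.
gpssh -f my_gp-hosts_1-8 'mount | grep -E "/mirror[12]" || echo clean'
mount | grep /data || echo "/data unmounted"   # on the master host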


Recovering a DCA Using a SNAP Session and SRDF

This appendix contains the following information:

◆ Recovering a DCA using a SNAP session and SRDF


Recovering a DCA using a SNAP session and SRDF

IMPORTANT! This procedure assumes that the procedure in "Mounting and checking Symmetrix SNAP of Greenplum database" on page 104 was followed to ensure a consistent SNAP on the remote VMAX. An inconsistent SNAP will result in corrupted data on restore.

1. Unmount the R1 Symmetrix devices from the DCA that will be recovered in this process.

umount <mountpoint>

Sample output of command:

[root@mdw ~]# umount /data
[root@mdw ~]# gpssh -f gp-hosts_1-8 'umount /mirror1;umount /mirror2'
[sdw4]
[sdw5]
[sdw6]
[sdw7]
[sdw1]
[sdw2]
[sdw3]
[sdw8]
[ mdw] umount: /mirror1: not found
[ mdw] umount: /mirror2: not found
[root@mdw ~]#

2. Split the SRDF devices so that the SNAP can be restored to the R2 volumes.

symrdf -cg <composite group name> split -force

Sample output of command:

[root@mdw ~]# symrdf -cg DCA_R1_CG split -force

Execute an RDF 'Split' operation for composite group 'DCA_R1_CG' (y/[n]) ? y

An RDF 'Split' operation execution is in progress for composite group 'DCA_R1_CG'. Please wait...

    Pend I/O on RDF link(s) for device(s) in (4467,001).............Done.
    Suspend RDF link(s) for device(s) in (4467,001)..................Done.
    Read/Write Enable device(s) in (4467,001) on RA at target (R2)...Done.


The RDF 'Split' operation successfully executed for composite group 'DCA_R1_CG'.

[root@mdw ~]#

3. Query the SRDF devices and verify that they are in a split state.

symrdf query -cg <composite group name>

Sample output of command:

[root@mdw ~]# symrdf query -cg DCA_R1_CG

Composite Group Name      : DCA_R1_CG
Composite Group Type      : RDF1
Number of Symmetrix Units : 1
Number of RDF (RA) Groups : 1
RDF Consistency Mode      : SYNC

Symmetrix ID              : 000192604467 (Microcode Version: 5875)
Remote Symmetrix ID       : 000192604452 (Microcode Version: 5875)
RDF (RA) Group Number     : 1 (00)

             Source (R1) View                Target (R2) View      MODES   STATES
------------------------------------  -------------------------   -----   ------
            ST                  LI        ST                                C S
Standard     A                   N         A                                o u
Logical  Sym T  R1 Inv  R2 Inv   K  Sym    T  R1 Inv  R2 Inv                n s  RDF Pair
Device   Dev E  Tracks  Tracks   S  Dev    E  Tracks  Tracks  MDAE          s p  STATE
------------------------------------  -------------------------   -----   ------
DEV4467* 0267 RW     0       0  NR  0411  RW    2462       0  S...          X .  Split
DEV4467* 0273 RW     0       0  NR  041D  RW    5128       0  S...          X .  Split
DEV4467* 027F RW     0       0  NR  0429  RW    5085       0  S...          X .  Split
DEV4467* 028B RW     0       0  NR  0435  RW    5159       0  S...          X .  Split
DEV4467* 0297 RW     0       0  NR  0441  RW    5049       0  S...          X .  Split
DEV4467* 02A3 RW     0       0  NR  044D  RW    5036       0  S...          X .  Split
DEV4467* 02AF RW     0       0  NR  0459  RW    5033       0  S...          X .  Split
DEV4467* 02BB RW     0       0  NR  0465  RW    5146       0  S...          X .  Split
DEV4467* 02C7 RW     0       0  NR  0471  RW    5019       0  S...          X .  Split
DEV4467* 02D3 RW     0       0  NR  047D  RW    5017       0  S...          X .  Split
DEV4467* 02DF RW     0       0  NR  0489  RW    4998       0  S...          X .  Split
DEV4467* 02EB RW     0       0  NR  0495  RW    5170       0  S...          X .  Split
DEV4467* 02F7 RW     0       0  NR  04A1  RW    5155       0  S...          X .  Split
DEV4467* 0303 RW     0       0  NR  04AD  RW    5088       0  S...          X .  Split
DEV4467* 030F RW     0       0  NR  04B9  RW    4963       0  S...          X .  Split
DEV4467* 031B RW     0       0  NR  04C5  RW    5122       0  S...          X .  Split
DEV4467* 0327 RW     0       0  NR  04D1  RW    5046       0  S...          X .  Split

Total           -------  -------          -------  -------
  Track(s)            0        0                0        0
  MBs               0.0      0.0              0.0      0.0


Legend for MODES:

M(ode of Operation)     : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)                : X = Enabled, . = Disabled
A(daptive Copy)         : D = Disk Mode, W = WP Mode, . = ACp off
(Consistency) E(xempt)  : X = Enabled, . = Disabled, M = Mixed, - = N/A

Legend for STATES:

Cons(istency State) : X = Enabled, . = Disabled, M = Mixed, - = N/A
Susp(end State)     : X = Online, . = Offline, P = Offline Pending, - = N/A

[root@mdw ~]#
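
Note: If this SYMCLI release supports the verify action with polling options (an assumption), the split state can be confirmed for the whole composite group in one command rather than reading the query output by eye:

# Sketch: poll until every pair in the composite group reports Split.
# The -i (interval, seconds) and -c (count) polling options are assumptions.
symrdf -cg DCA_R1_CG verify -split -i 30 -c 10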

4. Execute a restore of the SNAP device group to restore the SNAP to the R2 devices.

symsnap -g <device group name> restore

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 restore

Execute 'Incremental Restore' operation for device group 'DCA2880_RDF2_SNAP_2' (y/[n]) ? y

'Incremental Restore' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_2'. Please wait...

'Incremental Restore' operation successfully initiated for device group 'DCA2880_RDF2_SNAP_2'.

[root@smdw ~]#

5. Query the SNAP device group to verify that the SNAP devices are in the restored state.

symsnap -g <device group name> query -restore

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 query -restore

Device Group (DG) Name : DCA2880_RDF2_SNAP_2
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452

     Source Device             Target Device          Virtual Device    State        Copy
-----------------------   ----------------------   --------------   ------------   ----
          Protected                  Indirect
Logical  Sym    Tracks     Logical  Sym    Tracks    Logical  Sym     SRC <=> TGT    (%)
-----------------------   ----------------------   --------------   ------------   ----
DEV001   0411        0     VDEV001  05AB        0    VDEV001  05AB    Restored       100
DEV002   041D        0     VDEV002  05B7        0    VDEV002  05B7    Restored       100


DEV003   0429        0     VDEV003  05C3        0    VDEV003  05C3    Restored       100
DEV004   0435        0     VDEV004  05CF        0    VDEV004  05CF    Restored       100
DEV005   0441        0     VDEV005  05DB        0    VDEV005  05DB    Restored       100
DEV006   044D        0     VDEV006  05E7        0    VDEV006  05E7    Restored       100
DEV007   0459        0     VDEV007  05F3        0    VDEV007  05F3    Restored       100
DEV008   0465        0     VDEV008  05FF        0    VDEV008  05FF    Restored       100
DEV009   0471        0     VDEV009  060B        0    VDEV009  060B    Restored       100
DEV010   047D        0     VDEV010  0617        0    VDEV010  0617    Restored       100
DEV011   0489        0     VDEV011  0623        0    VDEV011  0623    Restored       100
DEV012   0495        0     VDEV012  062F        0    VDEV012  062F    Restored       100
DEV013   04A1        0     VDEV013  063B        0    VDEV013  063B    Restored       100
DEV014   04AD        0     VDEV014  0647        0    VDEV014  0647    Restored       100
DEV015   04B9        0     VDEV015  0653        0    VDEV015  0653    Restored       100
DEV016   04C5        0     VDEV016  065F        0    VDEV016  065F    Restored       100
DEV017   04D1        0     VDEV017  066B        0    VDEV017  066B    Restored       100

Total           --------  -------
  Track(s)             0        0
  MB(s)              0.0      0.0

Legend:

(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND maximum
         inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

[root@smdw ~]#
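
Note: The restore can take a while on large devices. The following is a small polling sketch, keyed to the 'Restored' state and percentage columns in the query output above; the column positions are assumptions based on that output.

# Sketch: wait until every DEV row reports the Restored state at 100 percent.
until symsnap -g DCA2880_RDF2_SNAP_2 query -restore | \
      awk '/^DEV/ { if ($NF != 100) bad=1 } END { exit bad }'; do
    sleep 60
done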

6. Terminate the SNAP restore session, but leave the SNAP session itself active in case something goes wrong with the restore and the SNAP session is needed again.

symsnap -g <device group name> terminate -restored

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 terminate -restored

Execute 'Terminate' operation for device group 'DCA2880_RDF2_SNAP_2' (y/[n]) ? y

'Terminate' operation execution is in progress for device group 'DCA2880_RDF2_SNAP_2'. Please wait...

'Terminate' operation successfully executed for device group 'DCA2880_RDF2_SNAP_2'.

[root@smdw ~]#


7. Query the SNAP device group and verify that it is back in the active state and that the devices are marked with a status of "CopyOnWrite".

symsnap -g <device group name> query

Sample output of command:

[root@smdw ~]# symsnap -g DCA2880_RDF2_SNAP_2 query

Device Group (DG) Name : DCA2880_RDF2_SNAP_2
DG's Type              : RDF2
DG's Symmetrix ID      : 000192604452

      Source Device              Target Device           State         Copy
-------------------------   --------------------   ------------   ----
           Protected                       Changed
Logical  Sym      Tracks     Logical  Sym  GD  Tracks   SRC <=> TGT    (%)
-------------------------   --------------------   ------------   ----

DEV001   0411   47277826     VDEV001  05AB X.    2414   CopyOnWrite      0
DEV002   041D   47275407     VDEV002  05B7 X.    4833   CopyOnWrite      0
DEV003   0429   47275351     VDEV003  05C3 X.    4889   CopyOnWrite      0
DEV004   0435   47275359     VDEV004  05CF X.    4881   CopyOnWrite      0
DEV005   0441   47275323     VDEV005  05DB X.    4917   CopyOnWrite      0
DEV006   044D   47275407     VDEV006  05E7 X.    4833   CopyOnWrite      0
DEV007   0459   47275340     VDEV007  05F3 X.    4900   CopyOnWrite      0
DEV008   0465   47275304     VDEV008  05FF X.    4936   CopyOnWrite      0
DEV009   0471   47275335     VDEV009  060B X.    4905   CopyOnWrite      0
DEV010   047D   47275415     VDEV010  0617 X.    4825   CopyOnWrite      0
DEV011   0489   47275386     VDEV011  0623 X.    4854   CopyOnWrite      0
DEV012   0495   47275312     VDEV012  062F X.    4928   CopyOnWrite      0
DEV013   04A1   47275295     VDEV013  063B X.    4945   CopyOnWrite      0
DEV014   04AD   47275429     VDEV014  0647 X.    4811   CopyOnWrite      0
DEV015   04B9   47275376     VDEV015  0653 X.    4864   CopyOnWrite      0
DEV016   04C5   47275399     VDEV016  065F X.    4841   CopyOnWrite      0
DEV017   04D1   47275388     VDEV017  066B X.    4852   CopyOnWrite      0

Total            --------          ----------
  Track(s)      803683652               80428
  MB(s)          50230228                5027

Legend:

(G): X = The Target device is associated with this group,
     . = The Target device is not associated with this group.
(D): X = The Target device has one or more inactive duplicates.
     M = The Target device has one or more inactive duplicates AND maximum
         inactive duplicates exist for this Source device.
     . = The Target device has no inactive duplicates.

[root@smdw ~]#


8. Query the SRDF composite group to verify that there are tracks to update from the R2 to the R1 devices.

symrdf query -cg <composite group name>

Sample output of command:

[root@mdw ~]# symrdf query -cg DCA_R1_CG

Composite Group Name      : DCA_R1_CG
Composite Group Type      : RDF1
Number of Symmetrix Units : 1
Number of RDF (RA) Groups : 1
RDF Consistency Mode      : SYNC

Symmetrix ID              : 000192604467 (Microcode Version: 5875)
Remote Symmetrix ID       : 000192604452 (Microcode Version: 5875)
RDF (RA) Group Number     : 1 (00)

             Source (R1) View                Target (R2) View      MODES   STATES
------------------------------------  -------------------------   -----   ------
            ST                  LI        ST                                C S
Standard     A                   N         A                                o u
Logical  Sym T  R1 Inv  R2 Inv   K  Sym    T  R1 Inv  R2 Inv                n s  RDF Pair
Device   Dev E  Tracks  Tracks   S  Dev    E  Tracks  Tracks  MDAE          s p  STATE
------------------------------------  -------------------------   -----   ------
DEV4467* 0267 RW     0       0  NR  0411  RW    2462       0  S...          X .  Split
DEV4467* 0273 RW     0       0  NR  041D  RW    5128       0  S...          X .  Split
DEV4467* 027F RW     0       0  NR  0429  RW    5085       0  S...          X .  Split
DEV4467* 028B RW     0       0  NR  0435  RW    5159       0  S...          X .  Split
DEV4467* 0297 RW     0       0  NR  0441  RW    5049       0  S...          X .  Split
DEV4467* 02A3 RW     0       0  NR  044D  RW    5036       0  S...          X .  Split
DEV4467* 02AF RW     0       0  NR  0459  RW    5033       0  S...          X .  Split
DEV4467* 02BB RW     0       0  NR  0465  RW    5146       0  S...          X .  Split
DEV4467* 02C7 RW     0       0  NR  0471  RW    5019       0  S...          X .  Split
DEV4467* 02D3 RW     0       0  NR  047D  RW    5017       0  S...          X .  Split
DEV4467* 02DF RW     0       0  NR  0489  RW    4998       0  S...          X .  Split
DEV4467* 02EB RW     0       0  NR  0495  RW    5170       0  S...          X .  Split
DEV4467* 02F7 RW     0       0  NR  04A1  RW    5155       0  S...          X .  Split
DEV4467* 0303 RW     0       0  NR  04AD  RW    5088       0  S...          X .  Split
DEV4467* 030F RW     0       0  NR  04B9  RW    4963       0  S...          X .  Split
DEV4467* 031B RW     0       0  NR  04C5  RW    5122       0  S...          X .  Split
DEV4467* 0327 RW     0       0  NR  04D1  RW    5046       0  S...          X .  Split

Total           -------  -------          -------  -------
  Track(s)            0        0            83676        0
  MBs               0.0      0.0           5229.8      0.0

Legend for MODES:

M(ode of Operation)     : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy
D(omino)                : X = Enabled, . = Disabled


A(daptive Copy)         : D = Disk Mode, W = WP Mode, . = ACp off
(Consistency) E(xempt)  : X = Enabled, . = Disabled, M = Mixed, - = N/A

Legend for STATES:

Cons(istency State) : X = Enabled, . = Disabled, M = Mixed, - = N/A
Susp(end State)     : X = Online, . = Offline, P = Offline Pending, - = N/A

[root@mdw ~]#

9. Issue a restore against the SRDF composite group to restore the R1 devices from the R2 devices and synchronize them using the data on the R2s.

symrdf -cg <composite group name> restore

Sample output of command:

[root@mdw ~]# symrdf -cg DCA_R1_CG restore

Execute an RDF 'Incremental Restore' operation for composite group 'DCA_R1_CG' (y/[n]) ? y

An RDF 'Incremental Restore' operation execution is in progress for composite group 'DCA_R1_CG'. Please wait...

    Write Disable device(s) in (4467,001) on SA at source (R1).......Done.
    Write Disable device(s) in (4467,001) on RA at target (R2).......Done.
    Suspend RDF link(s) for device(s) in (4467,001)..................Done.
    Merge track tables between source and target in (4467,001).......Started.
    Devices: 0327-0332 in (4467,001)................................ Merged.
    Devices: 0307-0326 in (4467,001)................................ In Progress.
    Devices: 02E7-0306 in (4467,001)................................ In Progress.
    Devices: 02C7-02E6 in (4467,001)................................ In Progress.
    Devices: 02A7-02C6 in (4467,001)................................ In Progress.
    Devices: 0287-02A6 in (4467,001)................................ In Progress.
    Devices: 0267-0286 in (4467,001)................................ In Progress.
    Devices: 0307-0326 in (4467,001)................................ Merged.
    Devices: 02E7-0306 in (4467,001)................................ Merged.
    Devices: 02C7-02E6 in (4467,001)................................ Merged.
    Devices: 02A7-02C6 in (4467,001)................................ Merged.
    Devices: 0267-0286 in (4467,001)................................ Merged.
    Devices: 0287-02A6 in (4467,001)................................ Merged.
    Merge track tables between source and target in (4467,001).......Done.
    Resume RDF link(s) for device(s) in (4467,001)...................Started.
    Resume RDF link(s) for device(s) in (4467,001)...................Done.
    Read/Write Enable device(s) in (4467,001) on SA at source (R1)...Done.

The RDF 'Incremental Restore' operation successfully initiated for composite group 'DCA_R1_CG'.

[root@mdw ~]#


10. Query the SRDF composite group until the devices are in a "Synchronized" state, indicating that all of the data has been restored from the R2 to the R1 devices. (A polling alternative is sketched after the sample output.)

symrdf query -cg <composite group name>

Sample output of command:

[root@mdw ~]# symrdf query -cg DCA_R1_CG

Composite Group Name      : DCA_R1_CG
Composite Group Type      : RDF1
Number of Symmetrix Units : 1
Number of RDF (RA) Groups : 1
RDF Consistency Mode      : SYNC

Symmetrix ID              : 000192604467 (Microcode Version: 5875)
Remote Symmetrix ID       : 000192604452 (Microcode Version: 5875)
RDF (RA) Group Number     : 1 (00)

             Source (R1) View                Target (R2) View      MODES   STATES
------------------------------------  -------------------------   -----   ------
            ST                  LI        ST                                C S
Standard     A                   N         A                                o u
Logical  Sym T  R1 Inv  R2 Inv   K  Sym    T  R1 Inv  R2 Inv                n s  RDF Pair
Device   Dev E  Tracks  Tracks   S  Dev    E  Tracks  Tracks  MDAE          s p  STATE
------------------------------------  -------------------------   -----   ------
DEV4467* 0267 RW     0       0  RW  0411  WD       0       0  S...          X -  Synchronized
DEV4467* 0273 RW     0       0  RW  041D  WD       0       0  S...          X -  Synchronized
DEV4467* 027F RW     0       0  RW  0429  WD       0       0  S...          X -  Synchronized
DEV4467* 028B RW     0       0  RW  0435  WD       0       0  S...          X -  Synchronized
DEV4467* 0297 RW     0       0  RW  0441  WD       0       0  S...          X -  Synchronized
DEV4467* 02A3 RW     0       0  RW  044D  WD       0       0  S...          X -  Synchronized
DEV4467* 02AF RW     0       0  RW  0459  WD       0       0  S...          X -  Synchronized
DEV4467* 02BB RW     0       0  RW  0465  WD       0       0  S...          X -  Synchronized
DEV4467* 02C7 RW     0       0  RW  0471  WD       0       0  S...          X -  Synchronized
DEV4467* 02D3 RW     0       0  RW  047D  WD       0       0  S...          X -  Synchronized
DEV4467* 02DF RW     0       0  RW  0489  WD       0       0  S...          X -  Synchronized
DEV4467* 02EB RW     0       0  RW  0495  WD       0       0  S...          X -  Synchronized
DEV4467* 02F7 RW     0       0  RW  04A1  WD       0       0  S...          X -  Synchronized
DEV4467* 0303 RW     0       0  RW  04AD  WD       0       0  S...          X -  Synchronized
DEV4467* 030F RW     0       0  RW  04B9  WD       0       0  S...          X -  Synchronized
DEV4467* 031B RW     0       0  RW  04C5  WD       0       0  S...          X -  Synchronized
DEV4467* 0327 RW     0       0  RW  04D1  WD       0       0  S...          X -  Synchronized

Total           -------  -------          -------  -------
  Track(s)            0        0                0        0
  MBs               0.0      0.0              0.0      0.0

Legend for MODES:

M(ode of Operation) : A = Async, S = Sync, E = Semi-sync, C = Adaptive Copy


D(omino)                : X = Enabled, . = Disabled
A(daptive Copy)         : D = Disk Mode, W = WP Mode, . = ACp off
(Consistency) E(xempt)  : X = Enabled, . = Disabled, M = Mixed, - = N/A

Legend for STATES:

Cons(istency State) : X = Enabled, . = Disabled, M = Mixed, - = N/A
Susp(end State)     : X = Online, . = Offline, P = Offline Pending, - = N/A

[root@mdw ~]#
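
Note: As an alternative to re-running the query by hand, SYMCLI can poll for the synchronized state, assuming this release supports the verify action's -i/-c options:

# Sketch: block until every pair in the composite group is Synchronized.
# The -i (interval, seconds) polling option is an assumption for this SYMCLI release.
symrdf -cg DCA_R1_CG verify -synchronized -i 60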

11. Mount the R1 Symmetrix devices, which were just restored, back on the DCA; these are the devices that were unmounted in step 1 of this procedure.

mount <mount point>

Sample output of command:

[root@mdw ~]# mount /data
[root@mdw ~]# gpssh -f gp-hosts_1-8 'mount /mirror1;mount /mirror2'
[sdw4]
[sdw5]
[sdw6]
[sdw7]
[sdw1]
[sdw2]
[sdw3]
[sdw8]
[root@mdw ~]#

12. Switch to the gpadmin user.

su - gpadmin

Sample output of command:

[root@mdw ~]# su - gpadmin
[gpadmin@mdw ~]$

13. Start the database in maintenance mode.

gpstart -m

Sample output of command:

[gpadmin@mdw ~]$ gpstart -m
20110705:09:01:04:gpstart:mdw:gpadmin-[INFO]:-Starting gpstart with args: -m
20110705:09:01:04:gpstart:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110705:09:01:04:gpstart:mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110705:09:01:04:gpstart:mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201101130'
20110705:09:01:04:gpstart:mdw:gpadmin-[INFO]:-Starting Master instance in admin mode


20110705:09:01:05:gpstart:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110705:09:01:05:gpstart:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110705:09:01:05:gpstart:mdw:gpadmin-[INFO]:-Master Started...
[gpadmin@mdw ~]$

14. Connect to the database using utility mode.

PGOPTIONS='-c gp_session_role=utility' psql <database name>

Sample output of command:

[gpadmin@mdw ~]$ PGOPTIONS='-c gp_session_role=utility' psql srdftesting
psql (8.2.15)
Type "help" for help.

srdftesting=#

15. Verify that the primary segments are set to a role of 'm' (mirror) and a status of 'd' (down), and that the mirror segments are set to a role of 'p' (primary), a mode of 'c' (change tracking), and a status of 'u' (up).

The segments should already be set correctly if the SNAP devices were checked as described earlier and the roles were not reset before the database was stopped on the DR system and the volumes were unmounted.

select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

Sample output of command:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;
 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg0
    3 |       1 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg1
    4 |       2 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg2
    5 |       3 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg3
    6 |       4 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg4
    7 |       5 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg5
    8 |       6 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg6
    9 |       7 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg7
   10 |       8 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg8
   11 |       9 | m    | p              | s    | d      | sdw2     | /data2/primary/gpseg9
   12 |      10 | m    | p              | s    | d      | sdw2     | /data2/primary/gpseg10
   13 |      11 | m    | p              | s    | d      | sdw2     | /data2/primary/gpseg11


   14 |      12 | m    | p              | s    | d      | sdw3     | /data1/primary/gpseg12
   15 |      13 | m    | p              | s    | d      | sdw3     | /data1/primary/gpseg13
   16 |      14 | m    | p              | s    | d      | sdw3     | /data1/primary/gpseg14
   17 |      15 | m    | p              | s    | d      | sdw3     | /data2/primary/gpseg15
   18 |      16 | m    | p              | s    | d      | sdw3     | /data2/primary/gpseg16
   19 |      17 | m    | p              | s    | d      | sdw3     | /data2/primary/gpseg17
   20 |      18 | m    | p              | s    | d      | sdw4     | /data1/primary/gpseg18

-- Data cut due to output length

   90 |      40 | p    | m              | c    | u      | sdw4     | /mirror1/mirror/gpseg40
   91 |      41 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg41
   92 |      42 | p    | m              | c    | u      | sdw1     | /mirror2/mirror/gpseg42
   93 |      43 | p    | m              | c    | u      | sdw2     | /mirror2/mirror/gpseg43
   94 |      44 | p    | m              | c    | u      | sdw3     | /mirror2/mirror/gpseg44
   95 |      45 | p    | m              | c    | u      | sdw4     | /mirror1/mirror/gpseg45
   96 |      46 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | p    | m              | c    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)

srdftesting=#
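
Note: Before deciding how to proceed, a quick sanity count by role and status can confirm the expected 48/48 split. A sketch using only the catalog columns shown above:

-- Sketch: expect 48 segments as (m, p, s, d) primaries and 48 as (p, m, c, u) mirrors.
select role, preferred_role, mode, status, count(*)
  from gp_segment_configuration
 where content >= 0
 group by 1, 2, 3, 4
 order by 1, 2;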

16. If the primary and mirror segments are already set correctly, skip to step 19. Otherwise, use the SQL statements in steps 17 and 18 to update the configuration so that the database starts up using the mirror segments that were just restored from the SNAP.

17. To set the primary segments to a role of 'm' and a status of 'd', issue the following SQL statement.

update gp_segment_configuration sc set role='m',status='d' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/<unique dir path value>/%' and sc.content >= 0;

Sample output of command:

srdftesting=# update gp_segment_configuration sc set role='m',status='d' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/data_/%' and sc.content >= 0;
UPDATE 48
srdftesting=#

Sample output of query command after running the update statement:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;
 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg0
    3 |       1 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg1
    4 |       2 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg2


    5 |       3 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg3
    6 |       4 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg4
    7 |       5 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg5
    8 |       6 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg6
    9 |       7 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg7
   10 |       8 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg8

-- Data cut due to output length

   48 |      46 | m    | p              | s    | d      | sdw8     | /data2/primary/gpseg46
   49 |      47 | m    | p              | s    | d      | sdw8     | /data2/primary/gpseg47
   50 |       0 | m    | m              | s    | u      | sdw2     | /mirror2/mirror/gpseg0
   51 |       1 | m    | m              | s    | u      | sdw3     | /mirror2/mirror/gpseg1
   52 |       2 | m    | m              | s    | u      | sdw4     | /mirror2/mirror/gpseg2
   53 |       3 | m    | m              | s    | u      | sdw5     | /mirror1/mirror/gpseg3
   54 |       4 | m    | m              | s    | u      | sdw6     | /mirror1/mirror/gpseg4
   55 |       5 | m    | m              | s    | u      | sdw7     | /mirror1/mirror/gpseg5
   56 |       6 | m    | m              | s    | u      | sdw3     | /mirror2/mirror/gpseg6
   57 |       7 | m    | m              | s    | u      | sdw4     | /mirror2/mirror/gpseg7

-- Data cut due to output length

   96 |      46 | m    | m              | s    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | m    | m              | s    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)

srdftesting=#

18. To set the mirror segments to a role of 'p' and a mode of 'c', issue the following SQL statement.

update gp_segment_configuration sc set role='p',mode='c' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/<unique dir path value>/%' and sc.content >= 0;

Sample output of command:

srdftesting=# update gp_segment_configuration sc set role='p',mode='c' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/mirror_/%' and sc.content >= 0;
UPDATE 48
srdftesting=#

Sample output of query command after running the update statement:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;
 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg0
    3 |       1 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg1


    4 |       2 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg2
    5 |       3 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg3
    6 |       4 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg4
    7 |       5 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg5
    8 |       6 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg6
    9 |       7 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg7
   10 |       8 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg8

-- Data cut due to output length

   49 |      47 | m    | p              | s    | d      | sdw8     | /data2/primary/gpseg47
   50 |       0 | p    | m              | c    | u      | sdw2     | /mirror2/mirror/gpseg0
   51 |       1 | p    | m              | c    | u      | sdw3     | /mirror2/mirror/gpseg1
   52 |       2 | p    | m              | c    | u      | sdw4     | /mirror2/mirror/gpseg2
   53 |       3 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg3
   54 |       4 | p    | m              | c    | u      | sdw6     | /mirror1/mirror/gpseg4
   55 |       5 | p    | m              | c    | u      | sdw7     | /mirror1/mirror/gpseg5
   56 |       6 | p    | m              | c    | u      | sdw3     | /mirror2/mirror/gpseg6
   57 |       7 | p    | m              | c    | u      | sdw4     | /mirror2/mirror/gpseg7
   58 |       8 | p    | m              | c    | u      | sdw5     | /mirror2/mirror/gpseg8

-- Data cut due to output length

   96 |      46 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | p    | m              | c    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)

srdftesting=#

19. Exit the psql session and return to the command prompt.

\q

Sample output of command:

srdftesting=# \q
[gpadmin@mdw ~]$

20. Stop the database to remove it from maintenance mode.

gpstop -m

Sample output of command:

[gpadmin@mdw ~]$ gpstop -m
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Starting gpstop with args: -m
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-There are 0 connections to the database


20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Master host=mdw
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20110705:09:01:36:gpstop:mdw:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
[gpadmin@mdw ~]$

21. Restart the database normally. Watch the output to confirm that the primary segments are skipped and that only the mirror segments are started.

gpstart

Sample output of command:

[gpadmin@mdw ~]$ gpstart
20110705:09:01:40:gpstart:mdw:gpadmin-[INFO]:-Starting gpstart with args:
20110705:09:01:40:gpstart:mdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110705:09:01:40:gpstart:mdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110705:09:01:40:gpstart:mdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201101130'
20110705:09:01:40:gpstart:mdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110705:09:01:41:gpstart:mdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110705:09:01:41:gpstart:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110705:09:01:42:gpstart:mdw:gpadmin-[INFO]:-Master Started...
20110705:09:01:42:gpstart:mdw:gpadmin-[INFO]:-Shutting down master
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg0 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg1 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg2 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg3 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg4 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg5 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data1/primary/gpseg6 <<<<<

-- Data cut due to output length

20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw8 directory /data2/primary/gpseg46 <<<<<
20110705:09:01:44:gpstart:mdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw8 directory /data2/primary/gpseg47 <<<<<


20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:---------------------------
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Master instance parameters
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:---------------------------
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Database         = template1
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Master Port      = 5432
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Master directory = /data/master/gpseg-1
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Timeout          = 600 seconds
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Master standby   = Off
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-Segment instances that will be started
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   Host   Datadir                  Port    Role
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw2   /mirror2/mirror/gpseg0   50000   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw3   /mirror2/mirror/gpseg1   50001   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw4   /mirror2/mirror/gpseg2   50002   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw5   /mirror1/mirror/gpseg3   50003   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw6   /mirror1/mirror/gpseg4   50004   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw7   /mirror1/mirror/gpseg5   50005   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw3   /mirror2/mirror/gpseg6   50000   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw4   /mirror2/mirror/gpseg7   50001   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw5   /mirror2/mirror/gpseg8   50002   Primary

-- Data cut due to output length

20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw5   /mirror1/mirror/gpseg46   50004   Primary
20110705:09:01:44:gpstart:mdw:gpadmin-[INFO]:-   sdw6   /mirror1/mirror/gpseg47   50005   Primary

Continue with Greenplum instance startup Yy|Nn (default=N):> y
20110705:09:01:47:gpstart:mdw:gpadmin-[INFO]:-No standby master configured. skipping...
20110705:09:01:47:gpstart:mdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait.....
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-Process results...
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-   Successful segment starts = 48


20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-   Failed segment starts = 0
20110705:09:01:49:gpstart:mdw:gpadmin-[WARNING]:-Skipped segment starts (segments are marked down in configuration) = 48 <<<<<<<<
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-Successfully started 48 of 48 segment instances, skipped 48 other segments
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:01:49:gpstart:mdw:gpadmin-[WARNING]:-****************************************************************************
20110705:09:01:49:gpstart:mdw:gpadmin-[WARNING]:-There are 48 segment(s) marked down in the database
20110705:09:01:49:gpstart:mdw:gpadmin-[WARNING]:-To recover from this current state, review usage of the gprecoverseg
20110705:09:01:49:gpstart:mdw:gpadmin-[WARNING]:-management utility which will recover failed segment instance databases.
20110705:09:01:49:gpstart:mdw:gpadmin-[WARNING]:-****************************************************************************
20110705:09:01:49:gpstart:mdw:gpadmin-[INFO]:-Starting Master instance mdw directory /data/master/gpseg-1
20110705:09:01:50:gpstart:mdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw instance active
20110705:09:02:01:gpstart:mdw:gpadmin-[WARNING]:-Database started but warnings generated <<<<<
20110705:09:02:01:gpstart:mdw:gpadmin-[INFO]:-Check status of database with gpstate utility
[gpadmin@mdw ~]$

22. Once the database is up and running on the mirror segments that were just restored from the SNAP session, run a full gprecoverseg to rebuild the primary segments with the data from the mirror segments and synchronize the system.

A full recovery is necessary because the mirror data was changed while under the control of a different DCA, and this DCA has no knowledge of what changes were made.

IMPORTANT! Performing an "incremental" recovery will result in a corrupted database!

Note: The output from gprecoverseg is very long; fifteen lines for each segment. The important thing to verify is that a full recovery is being performed from the mirror to the primary.

gprecoverseg -F
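
Note: Because the output is so long, one way to confirm that every segment is being recovered in Full mode is to capture the log and count the synchronization-mode lines. A sketch, based on the tokens in the sample output below:

# Sketch: capture the run, then confirm all 48 recoveries report "Synchronization mode = Full".
gprecoverseg -F 2>&1 | tee /tmp/gprecoverseg.log
grep 'Synchronization mode' /tmp/gprecoverseg.log | sort | uniq -c   # expect a count of 48 for Full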


Sample output of command:

[gpadmin@mdw ~]$ gprecoverseg -F
20110705:09:03:15:gprecoverseg:mdw:gpadmin-[INFO]:-Starting gprecoverseg with args: -F
20110705:09:03:15:gprecoverseg:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110705:09:03:15:gprecoverseg:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-Greenplum instance recovery parameters
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-Recovery type = Standard
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 1 of 48
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Synchronization mode                 = Full
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance host                 = sdw1
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance address              = sdw1-1
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance directory            = /data1/primary/gpseg0
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance port                 = 40000
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance replication port     = 41000
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance host        = sdw2
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance address     = sdw2-2
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance directory   = /mirror2/mirror/gpseg0
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance port        = 50000
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Source instance replication port = 51000
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Recovery Target                      = in-place
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 2 of 48
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Synchronization mode                 = Full
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance host                 = sdw1
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-   Failed instance address              = sdw1-1


20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /data1/primary/gpseg1
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 40001
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance replication port = 41001
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw3
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw3-2
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /mirror2/mirror/gpseg1
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 50001
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance replication port = 51001
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place

…. Data cut due to output length

20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:-Recovery 48 of 48
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Synchronization mode = Full
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance host = sdw8
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance address = sdw8-2
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance directory = /data2/primary/gpseg47
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance port = 40005
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Failed instance replication port = 41005
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance host = sdw6
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance address = sdw6-1
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance directory = /mirror1/mirror/gpseg47
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance port = 50005
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Source instance replication port = 51005
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:- Recovery Target = in-place
20110705:09:03:16:gprecoverseg:mdw:gpadmin-[INFO]:------------------------------

Continue with segment recovery procedure Yy|Nn (default=N):> y


20110705:09:03:22:gprecoverseg:mdw:gpadmin-[INFO]:-48 segment(s) to recover
20110705:09:03:22:gprecoverseg:mdw:gpadmin-[INFO]:-Ensuring 48 failed segment(s) are stopped

20110705:09:03:23:gprecoverseg:mdw:gpadmin-[INFO]:-Cleaning files from 48 segment(s).........
20110705:09:03:32:gprecoverseg:mdw:gpadmin-[INFO]:-Building template directory
20110705:09:03:33:gprecoverseg:mdw:gpadmin-[INFO]:-Validating remote directories.
20110705:09:03:34:gprecoverseg:mdw:gpadmin-[INFO]:-Copying template directory file.
20110705:09:03:35:gprecoverseg:mdw:gpadmin-[INFO]:-Configuring new segments.
20110705:09:03:36:gprecoverseg:mdw:gpadmin-[INFO]:-Cleaning files.
20110705:09:03:37:gprecoverseg:mdw:gpadmin-[INFO]:-Updating configuration with new mirrors
20110705:09:03:39:gprecoverseg:mdw:gpadmin-[INFO]:-Updating mirrors.
20110705:09:03:40:gprecoverseg:mdw:gpadmin-[INFO]:-Starting mirrors
20110705:09:03:40:gprecoverseg:mdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait.....
20110705:09:03:42:gprecoverseg:mdw:gpadmin-[INFO]:-Process results...
20110705:09:03:42:gprecoverseg:mdw:gpadmin-[INFO]:-Updating configuration to mark mirrors up
20110705:09:03:42:gprecoverseg:mdw:gpadmin-[INFO]:-Updating primaries
20110705:09:03:42:gprecoverseg:mdw:gpadmin-[INFO]:-Commencing parallel primary conversion of 48 segments, please wait.....
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-Process results...
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-Done updating primaries
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-******************************************************************
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-Updating segments for resynchronization is completed.
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-For segments updated successfully, resynchronization will continue in the background.
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-Use gpstate -s to check the resynchronization progress.
20110705:09:03:44:gprecoverseg:mdw:gpadmin-[INFO]:-******************************************************************
[gpadmin@mdw ~]$

23. Periodically run the gpstate command, as suggested in the gprecoverseg output, to check the status of the recovery. When all segments report Synchronized, the recovery is complete.
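For hands-off monitoring, the relevant lines can be filtered out of the gpstate output. This is a convenience sketch, not part of the documented procedure; it assumes the standard watch utility is available on the master host:

# Refresh a filtered status view every 60 seconds (press Ctrl-C to stop).
watch -n 60 "gpstate -s | grep -E 'Mirror status|Estimated resync progress'"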


24. Note: The output from gpstate -s is very long (fourteen lines for each segment). The important field to monitor is "Mirror status", which reports "Resynchronizing" while the recovery is in progress and "Synchronized" when it is complete.

gpstate -s

Sample output of command when the system is still resynchronizing:

[gpadmin@mdw ~]$ gpstate -s
20110705:09:06:15:gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -s
20110705:09:06:15:gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110705:09:06:15:gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110705:09:06:15:gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments..............
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:--Master Configuration & Status
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Master host = mdw
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Master postgres process ID = 2015
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Master data directory = /data/master/gpseg-1
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Master port = 5432
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Master current role = dispatch
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Greenplum initsystem version = 4.1.1.1 build 1
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Greenplum current version = PostgreSQL 8.2.15 (Greenplum Database 4.1.1.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on May 12 2011 18:07:08
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Postgres version = 8.2.15
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Master standby = No master standby configured
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-Segment Instance Status Report
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Segment Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Hostname = sdw2
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Address = sdw2-2
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Datadir = /mirror2/mirror/gpseg0
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Port = 50000
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Mirroring Info


20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Current role = Primary
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Preferred role = Mirror
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Mirror status = Resynchronizing
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Change Tracking Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Change tracking data size = 199 MB
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Resynchronization Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Resynchronization mode = Full
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Data synchronized = 2.50 GB
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Estimated total data to synchronize = 59.3 GB
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Estimated resync progress with mirror = 4.21%
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Estimated resync end time = 2011-07-05 10:06:38
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Status
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- PID = 8937
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Configuration reports status as = Up
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Database status = Up
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Segment Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Hostname = sdw1
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Address = sdw1-1
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Datadir = /data1/primary/gpseg0
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Port = 40000
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Mirroring Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Current role = Mirror
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Preferred role = Primary
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Mirror status = Resynchronizing
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Status
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- PID = 5602
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Configuration reports status as = Up
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Segment status = Up
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------


…. Data cut due to output length

20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Segment Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Hostname = sdw8
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Address = sdw8-2
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Datadir = /data2/primary/gpseg47
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Port = 40005
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Mirroring Info
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Current role = Mirror
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Preferred role = Primary
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Mirror status = Resynchronizing
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Status
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- PID = 28224
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Configuration reports status as = Up
20110705:09:06:27:gpstate:mdw:gpadmin-[INFO]:- Segment status = Up
[gpadmin@mdw ~]$

Sample output of command when all segments are fully synchronized:

[gpadmin@mdw ~]$ gpstate -s
20110705:10:41:33:gpstate:mdw:gpadmin-[INFO]:-Starting gpstate with args: -s
20110705:10:41:33:gpstate:mdw:gpadmin-[INFO]:-local Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110705:10:41:33:gpstate:mdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110705:10:41:34:gpstate:mdw:gpadmin-[INFO]:-Gathering data from segments......
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:--Master Configuration & Status
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Master host = mdw
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Master postgres process ID = 2015
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Master data directory = /data/master/gpseg-1
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Master port = 5432
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Master current role = dispatch
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Greenplum initsystem version = 4.1.1.1 build 1


20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Greenplum current version = PostgreSQL 8.2.15 (Greenplum Database 4.1.1.1 build 1) on x86_64-unknown-linux-gnu, compiled by GCC gcc (GCC) 4.4.2 compiled on May 12 2011 18:07:08
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Postgres version = 8.2.15
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Master standby = No master standby configured
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-Segment Instance Status Report
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Segment Info
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Hostname = sdw2
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Address = sdw2-2
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Datadir = /mirror2/mirror/gpseg0
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Port = 50000
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Mirroring Info
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Current role = Primary
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Preferred role = Mirror
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Mirror status = Synchronized
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Status
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- PID = 8937
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Configuration reports status as = Up
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Database status = Up
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Segment Info
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Hostname = sdw1
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Address = sdw1-1
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Datadir = /data1/primary/gpseg0
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Port = 40000
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Mirroring Info
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Current role = Mirror
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Preferred role = Primary
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Mirror status = Synchronized
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Status


20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- PID = 5602
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Configuration reports status as = Up
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Segment status = Up
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------

…. Data cut due to output length

20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:-----------------------------------
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Segment Info
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Hostname = sdw8
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Address = sdw8-2
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Datadir = /data2/primary/gpseg47
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Port = 40005
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Mirroring Info
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Current role = Mirror
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Preferred role = Primary
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Mirror status = Synchronized
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Status
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- PID = 28224
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Configuration reports status as = Up
20110705:10:41:37:gpstate:mdw:gpadmin-[INFO]:- Segment status = Up
[gpadmin@mdw ~]$

25. Once the database is fully synchronized and all segments are up-to-date, stop and restart the database to switch the primary segments back to the primary role.

The role switch happens automatically when the database is restarted; there is no need to go back into maintenance mode and update the gp_segment_configuration table manually.

gpstop
gpstart
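Equivalently, and assuming this Greenplum release supports gpstop's restart option, the stop and start can be combined into one step; this is a convenience sketch, not the documented procedure:

# -r restarts the instance after shutdown (assumption: supported in this
# release; otherwise run gpstop followed by gpstart as shown above).
gpstop -r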


This appendix contains the following information:

◆ Starting EMC Greenplum database from mirror segments

Starting EMC Greenplum Database from Mirror Segments


Starting EMC Greenplum database from mirror segments

Complete the following steps to start the Greenplum database from the mirror segments, such as on the R2 VMAX.

Note: All commands are issued as gpadmin, or another database user with appropriate permissions, unless otherwise noted.

1. Issue SRDF commands to split the SRDF link from R1 (symrdf failover), and mount the SRDF R2 volumes on the remote DCA; a sketch of these commands follows.
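In this sketch, the device group name (dca_dg) and the fstab-defined mount points are assumptions for illustration, not values from this document:

# Hypothetical device group name; substitute the group defined for the DCA LUNs.
symrdf -g dca_dg failover      # split the SRDF link; R2 devices become read/write
# Mount the replicated file systems on each remote DCA server (entries assumed
# to exist in /etc/fstab; repeat for every data and mirror mount point).
mount /data1
mount /data2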

2. Start the Greenplum database in maintenance mode, which starts only the Master server.

gpstart -m

Sample output from command:

[gpadmin@smdw ~]$ gpstart -m
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Starting gpstart with args: -m
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201101130'
20110616:08:27:34:gpstart:smdw:gpadmin-[WARNING]:-postmaster.pid file exists on Master, checking if recovery startup required
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Commencing recovery startup checks
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-No socket connection or lock file in /tmp found for port=5432
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-No Master instance process, entering recovery startup mode
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Clearing Master instance pid file
20110616:08:27:34:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110616:08:27:35:gpstart:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110616:08:27:35:gpstart:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110616:08:27:36:gpstart:smdw:gpadmin-[INFO]:-Commencing forced instance shutdown
20110616:08:27:38:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110616:08:27:39:gpstart:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110616:08:27:39:gpstart:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110616:08:27:39:gpstart:smdw:gpadmin-[INFO]:-Master Started...
[gpadmin@smdw ~]$

3. Log in to a database on the GP server using utility mode.

PGOPTIONS='-c gp_session_role=utility' psql <database name>


Sample output from command:

[gpadmin@smdw ~]$ PGOPTIONS='-c gp_session_role=utility' psql srdftesting
psql (8.2.15)
Type "help" for help.
srdftesting=#

4. If not previously done, alter the user's search path so that the user can find and access the gp_toolkit schema, where the gp_segment_configuration table is maintained.

ALTER role gpadmin set search_path=gp_toolkit,information_schema,pg_aoseg,pg_bitmapindex,pg_catalog,pg_toast,public;
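ALTER ROLE takes effect for new sessions; to apply the same path in the current psql session without reconnecting, it can also be set directly (a convenience, not in the original steps):

-- Session-level equivalent of the ALTER ROLE above; lasts until disconnect.
SET search_path = gp_toolkit, information_schema, pg_aoseg, pg_bitmapindex, pg_catalog, pg_toast, public;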

5. To see the current configuration of the GP database, run the following SQL command.

This select shows which segment database resides on which segment server, as well as each segment's role, status, and data directory location.

select sc.dbid, sc.content, sc.role, sc.preferred_role, sc.mode, sc.status,
       sc.hostname, fse.fselocation
from gp_segment_configuration sc, pg_filespace_entry fse
where sc.dbid = fse.fsedbid
order by sc.dbid;

Sample output from command:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+----------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | p    | p              | s    | u      | sdw1     | /data1/primary/gpseg0
    3 |       1 | p    | p              | s    | u      | sdw1     | /data1/primary/gpseg1
    4 |       2 | p    | p              | s    | u      | sdw1     | /data1/primary/gpseg2
    5 |       3 | p    | p              | s    | u      | sdw1     | /data2/primary/gpseg3
    6 |       4 | p    | p              | s    | u      | sdw1     | /data2/primary/gpseg4
    7 |       5 | p    | p              | s    | u      | sdw1     | /data2/primary/gpseg5
    8 |       6 | p    | p              | s    | u      | sdw2     | /data1/primary/gpseg6
    9 |       7 | p    | p              | s    | u      | sdw2     | /data1/primary/gpseg7
   10 |       8 | p    | p              | s    | u      | sdw2     | /data1/primary/gpseg8
…… Data cut due to length of output
   88 |      38 | m    | m              | s    | u      | sdw2     | /mirror2/mirror/gpseg38
   89 |      39 | m    | m              | s    | u      | sdw3     | /mirror1/mirror/gpseg39
   90 |      40 | m    | m              | s    | u      | sdw4     | /mirror1/mirror/gpseg40
   91 |      41 | m    | m              | s    | u      | sdw5     | /mirror1/mirror/gpseg41
   92 |      42 | m    | m              | s    | u      | sdw1     | /mirror2/mirror/gpseg42
   93 |      43 | m    | m              | s    | u      | sdw2     | /mirror2/mirror/gpseg43
   94 |      44 | m    | m              | s    | u      | sdw3     | /mirror2/mirror/gpseg44
   95 |      45 | m    | m              | s    | u      | sdw4     | /mirror1/mirror/gpseg45
   96 |      46 | m    | m              | s    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | m    | m              | s    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)
srdftesting=#


Note: A role of 'p' indicates the segment database has the primary role; a role of 'm' indicates it has the mirror role.

A mode of 's' indicates the segment database is synchronized; a mode of 'c' indicates it is in change-tracking mode because its mirror or primary is down; a mode of 'r' indicates it is in recovery mode.

A status of 'u' indicates a segment database is up; a status of 'd' indicates it is down.
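Before editing the catalog, a quick summary (a hypothetical sanity check, not part of the documented procedure) can confirm the expected counts: 48 primaries and 48 mirrors, all up:

-- Count segment databases by role and status; content >= 0 excludes the master.
select role, status, count(*)
from gp_segment_configuration
where content >= 0
group by role, status
order by role, status;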

6. Set the primary segment databases to a status of down and a role of mirror so that, when the GP database is brought online, it will not attempt to start the primary segments, and so that a subsequent gprecoverseg will use the mirror segments as the source for the recovery. In the LIKE pattern below, replace <unique dir path value> with a pattern that matches the primary data directories; the underscore is a single-character wildcard, so '/data_/%' (used in the sample) matches both /data1 and /data2.

update gp_segment_configuration sc set role='m',status='d' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/<unique dir path value>/%' and sc.content >= 0;

Sample output from command:

srdftesting=# update gp_segment_configuration sc set role='m',status='d' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/data_/%' and sc.content >= 0;
UPDATE 48
srdftesting=#

Sample output from running the select statement after this update:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg0
    3 |       1 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg1
    4 |       2 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg2
    5 |       3 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg3
    6 |       4 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg4
    7 |       5 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg5
    8 |       6 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg6
    9 |       7 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg7
   10 |       8 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg8
…… Data cut due to length of output
   88 |      38 | m    | m              | s    | u      | sdw2     | /mirror2/mirror/gpseg38


   89 |      39 | m    | m              | s    | u      | sdw3     | /mirror1/mirror/gpseg39
   90 |      40 | m    | m              | s    | u      | sdw4     | /mirror1/mirror/gpseg40
   91 |      41 | m    | m              | s    | u      | sdw5     | /mirror1/mirror/gpseg41
   92 |      42 | m    | m              | s    | u      | sdw1     | /mirror2/mirror/gpseg42
   93 |      43 | m    | m              | s    | u      | sdw2     | /mirror2/mirror/gpseg43
   94 |      44 | m    | m              | s    | u      | sdw3     | /mirror2/mirror/gpseg44
   95 |      45 | m    | m              | s    | u      | sdw4     | /mirror1/mirror/gpseg45
   96 |      46 | m    | m              | s    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | m    | m              | s    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)

srdftesting=#

7. Set the mirror segment databases to the role of primary and a mode of change tracking so that, when the GP database is brought online, it will start the mirror segments in the primary role and keep track of any changes made to them so that a gprecoverseg can be issued later.

update gp_segment_configuration sc set role='p',mode='c' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/<unique dir path value>/%' and sc.content >= 0;

Sample output from command:

srdftesting=# update gp_segment_configuration sc set role='p',mode='c' from pg_filespace_entry fse where sc.dbid=fse.fsedbid and fse.fselocation like '/mirror_/%' and sc.content >= 0;
UPDATE 48
srdftesting=#

Sample output from the select statement after this update:

srdftesting=# select sc.dbid,sc.content,sc.role,sc.preferred_role,sc.mode,sc.status,sc.hostname,fse.fselocation from gp_segment_configuration sc,pg_filespace_entry fse where sc.dbid=fse.fsedbid order by sc.dbid;

 dbid | content | role | preferred_role | mode | status | hostname | fselocation
------+---------+------+----------------+------+--------+----------+------------
    1 |      -1 | p    | p              | s    | u      | mdw      | /data/master/gpseg-1
    2 |       0 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg0
    3 |       1 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg1
    4 |       2 | m    | p              | s    | d      | sdw1     | /data1/primary/gpseg2
    5 |       3 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg3
    6 |       4 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg4
    7 |       5 | m    | p              | s    | d      | sdw1     | /data2/primary/gpseg5
    8 |       6 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg6
    9 |       7 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg7
   10 |       8 | m    | p              | s    | d      | sdw2     | /data1/primary/gpseg8
…… Data cut due to length of output
   88 |      38 | p    | m              | c    | u      | sdw2     | /mirror2/mirror/gpseg38
   89 |      39 | p    | m              | c    | u      | sdw3     | /mirror1/mirror/gpseg39
   90 |      40 | p    | m              | c    | u      | sdw4     | /mirror1/mirror/gpseg40
   91 |      41 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg41
   92 |      42 | p    | m              | c    | u      | sdw1     | /mirror2/mirror/gpseg42
   93 |      43 | p    | m              | c    | u      | sdw2     | /mirror2/mirror/gpseg43


   94 |      44 | p    | m              | c    | u      | sdw3     | /mirror2/mirror/gpseg44
   95 |      45 | p    | m              | c    | u      | sdw4     | /mirror1/mirror/gpseg45
   96 |      46 | p    | m              | c    | u      | sdw5     | /mirror1/mirror/gpseg46
   97 |      47 | p    | m              | c    | u      | sdw6     | /mirror1/mirror/gpseg47
(97 rows)
srdftesting=#
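Before quitting the session, both updates can be cross-checked with one query. This is a hedged sketch that assumes the /dataN and /mirrorN path conventions shown in the samples above; both counts should equal 48:

-- Hypothetical cross-check, assuming the path conventions used in this example.
select sum(case when fse.fselocation like '/data_/%'
                 and sc.role = 'm' and sc.status = 'd' then 1 else 0 end) as demoted_primaries,
       sum(case when fse.fselocation like '/mirror_/%'
                 and sc.role = 'p' and sc.mode = 'c' then 1 else 0 end) as promoted_mirrors
from gp_segment_configuration sc, pg_filespace_entry fse
where sc.dbid = fse.fsedbid and sc.content >= 0;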

8. Quit the psql session to return to the command prompt.

\q

Sample output from command:

srdftesting=# \q
[gpadmin@smdw ~]$

9. Stop the master server to take the database out of maintenance mode.

gpstop -m

Sample output from command:

20110616:08:40:43:gpstop:smdw:gpadmin-[INFO]:-Starting gpstop with args: -m
20110616:08:40:43:gpstop:smdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110616:08:40:43:gpstop:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110616:08:40:43:gpstop:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110616:08:40:44:gpstop:smdw:gpadmin-[INFO]:-Greenplum Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110616:08:40:44:gpstop:smdw:gpadmin-[INFO]:-There are 0 connections to the database
20110616:08:40:44:gpstop:smdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode='smart'
20110616:08:40:44:gpstop:smdw:gpadmin-[INFO]:-Master host=mdw
20110616:08:40:44:gpstop:smdw:gpadmin-[INFO]:-Commencing Master instance shutdown with mode=smart
20110616:08:40:44:gpstop:smdw:gpadmin-[INFO]:-Master segment instance directory=/data/master/gpseg-1
[gpadmin@smdw ~]$

10. Start the database as normal.

Notice that the primary segments are skipped, since they were marked down in the gp_segment_configuration table, and only the mirror segments are started.

gpstart


Sample output from command:

[gpadmin@smdw ~]$ gpstart
20110616:08:40:50:gpstart:smdw:gpadmin-[INFO]:-Starting gpstart with args:
20110616:08:40:50:gpstart:smdw:gpadmin-[INFO]:-Gathering information and validating the environment...
20110616:08:40:50:gpstart:smdw:gpadmin-[INFO]:-Greenplum Binary Version: 'postgres (Greenplum Database) 4.1.1.1 build 1'
20110616:08:40:50:gpstart:smdw:gpadmin-[INFO]:-Greenplum Catalog Version: '201101130'
20110616:08:40:50:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance in admin mode
20110616:08:40:51:gpstart:smdw:gpadmin-[INFO]:-Obtaining Greenplum Master catalog information
20110616:08:40:51:gpstart:smdw:gpadmin-[INFO]:-Obtaining Segment details from master...
20110616:08:40:51:gpstart:smdw:gpadmin-[INFO]:-Master Started...
20110616:08:40:51:gpstart:smdw:gpadmin-[INFO]:-Shutting down master
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg0 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg1 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data1/primary/gpseg2 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg3 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg4 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw1 directory /data2/primary/gpseg5 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data1/primary/gpseg6 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data1/primary/gpseg7 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data1/primary/gpseg8 <<<<<
20110616:08:40:53:gpstart:smdw:gpadmin-[WARNING]:-Skipping startup of segment marked down in configuration: on sdw2 directory /data2/primary/gpseg9 <<<<<
…… Data cut due to length of output
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:---------------------------
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Master instance parameters
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:---------------------------
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Database = template1
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Master Port = 5432
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Master directory = /data/master/gpseg-1
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Timeout = 600 seconds
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Master standby = Off
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:-Segment instances that will be started
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:----------------------------------


20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- Host Datadir Port Role
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw2 /mirror2/mirror/gpseg0 50000 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw3 /mirror2/mirror/gpseg1 50001 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw4 /mirror2/mirror/gpseg2 50002 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw5 /mirror1/mirror/gpseg3 50003 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw6 /mirror1/mirror/gpseg4 50004 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw7 /mirror1/mirror/gpseg5 50005 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw3 /mirror2/mirror/gpseg6 50000 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw4 /mirror2/mirror/gpseg7 50001 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw5 /mirror2/mirror/gpseg8 50002 Primary

…… Data cut due to length of output

20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw5 /mirror1/mirror/gpseg46 50004 Primary
20110616:08:40:53:gpstart:smdw:gpadmin-[INFO]:- sdw6 /mirror1/mirror/gpseg47 50005 Primary

Continue with Greenplum instance startup Yy|Nn (default=N):> y
20110616:08:40:55:gpstart:smdw:gpadmin-[INFO]:-No standby master configured. skipping...
20110616:08:40:55:gpstart:smdw:gpadmin-[INFO]:-Commencing parallel primary and mirror segment instance startup, please wait......
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:-Process results...
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:- Successful segment starts = 48
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:- Failed segment starts = 0
20110616:08:40:58:gpstart:smdw:gpadmin-[WARNING]:-Skipped segment starts (segments are marked down in configuration) = 48 <<<<<<<<
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:-
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:-Successfully started 48 of 48 segment instances, skipped 48 other segments
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:----------------------------------
20110616:08:40:58:gpstart:smdw:gpadmin-[WARNING]:-******************************
20110616:08:40:58:gpstart:smdw:gpadmin-[WARNING]:-There are 48 segment(s) marked down in the database
20110616:08:40:58:gpstart:smdw:gpadmin-[WARNING]:-To recover from this current state, review usage of the gprecoverseg


20110616:08:40:58:gpstart:smdw:gpadmin-[WARNING]:-management utility which will recover failed segment instance databases.
20110616:08:40:58:gpstart:smdw:gpadmin-[WARNING]:-******************************
20110616:08:40:58:gpstart:smdw:gpadmin-[INFO]:-Starting Master instance mdw directory /data/master/gpseg-1
20110616:08:40:59:gpstart:smdw:gpadmin-[INFO]:-Command pg_ctl reports Master mdw instance active
20110616:08:41:11:gpstart:smdw:gpadmin-[WARNING]:-Database started but warnings generated <<<<<
20110616:08:41:11:gpstart:smdw:gpadmin-[INFO]:-Check status of database with gpstate utility
[gpadmin@smdw ~]$

11. To recover the primary segments that were marked down, issue gprecoverseg -F to perform a full recovery from the online mirror segments (on the SAN) to the offline primary segments (internal to the remote DCA).
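Since the resynchronization continues in the background, a small polling loop (a sketch that assumes the gpstate output format shown earlier; not part of the documented procedure) can be used to wait for it to finish before restarting:

# Poll every five minutes until no segment still reports "Resynchronizing".
while gpstate -s | grep -q 'Resynchronizing'; do
    sleep 300
done
echo "All mirrors report Synchronized"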

12. Once the database is fully synchronized and all segments are up-to-date, stop and restart the database to switch the primary segments back to the primary role.

The role switch happens automatically when the database is restarted; there is no need to go back into maintenance mode and update the gp_segment_configuration table manually.

gpstop
gpstart
