

Backup and Recovery in a SAN

Version 1.1

• Traditional Backup and Recovery Architectures

• SAN-Based Backup and Recovery Technologies

• Disk and Tape Backup and Recovery Solutions

Ron Dharma, Sowjanya Sake, Michael Manuel


Copyright © 2011 EMC Corporation. All rights reserved.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED “AS IS.” EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

For the most up-to-date regulatory document for your product line, go to the Technical Documentation and Advisories section on EMC Powerlink.

For the most up-to-date listing of EMC product names, see EMC Corporation Trademarks on EMC.com.

All other trademarks used herein are the property of their respective owners.

Part number H8077.1


Contents

Preface .............................................................................. 7

Chapter 1  Traditional Backup and Recovery
    Overview of backups ......................................................... 14
        Terminology ............................................................. 14
        Why perform backups? .................................................... 14
        Backup architectures .................................................... 15
    Direct-attached backups ..................................................... 16
        Advantages .............................................................. 16
        Disadvantages ........................................................... 16
        Migration paths ......................................................... 17
        Improvement options ..................................................... 17
    LAN-based backups ........................................................... 18
        Backup process overview ................................................. 18
        Advantages .............................................................. 19
        Disadvantages ........................................................... 19
        Migration paths ......................................................... 22
        Improvement options ..................................................... 22

Chapter 2  SAN-based Backup and Recovery
    LAN-free backups ............................................................ 24
        Backup process overview ................................................. 24
        Advantages .............................................................. 25
        Disadvantages ........................................................... 28
        Migration paths ......................................................... 28
        Improvement options ..................................................... 28
    Serverless backups .......................................................... 29
        Theory of operation ..................................................... 29
        Backup process overview ................................................. 30
        Advantages .............................................................. 30
        Disadvantages ........................................................... 31
    Backup over long distances using FCIP and routers ........................... 32
        Theory of operation ..................................................... 32
        Advantages .............................................................. 33
    FCIP tape acceleration with EMC Connectrix MDS switches ..................... 34
        Notes ................................................................... 35
        Enabling FCIP tape acceleration on Connectrix MDS switches .............. 36
    FC-Write acceleration on Cisco MDS 9000 Family SSM .......................... 37
        Cisco MDS 9000 Family SSM ............................................... 37
        Cisco I/O Accelerator (IOA) ............................................. 39
    FastWrite acceleration and tape pipelining .................................. 42
        Brocade 7800 and EMC EDL over IP case study ............................. 42
        Results ................................................................. 44
    NAS backups ................................................................. 45
        Local and remote backup ................................................. 45
        NDMP backup ............................................................. 45

Chapter 3  Disk and Tape Backup and Recovery
    Backup and recovery ......................................................... 48
        Tape-based backup ....................................................... 48
        Disk-based backup ....................................................... 48
        Deduplication ........................................................... 49
    Data archiving .............................................................. 51
    Backup media ................................................................ 52
        Tape libraries .......................................................... 52
        Editing the Solaris configuration file: st.conf ......................... 52
        HBAs .................................................................... 53
        Tape drives ............................................................. 54
    Mirrored fabric backup solution ............................................. 55
        Solution description .................................................... 56
        Physical backup device centralization ................................... 58
        Summary ................................................................. 59
    Tapes and fabrics ........................................................... 60
        SCSI tape ............................................................... 60
        Fibre Channel tape ...................................................... 60
        Sharing tape and disk on the same HBA ................................... 62

Glossary ............................................................................. 65


Figures

1   Data flow for direct-attached backups ........................................ 16
2   LAN backup infrastructures ................................................... 18
3   LAN backup: Additional steps ................................................. 20
4   Bandwidth used in LAN backups (based on 100 Mb/s LAN setup) ................. 21
5   LAN-free backup .............................................................. 24
6   Process flow for SAN backups ................................................. 26
7   Bandwidth used in LAN-free backups ........................................... 27
8   Serverless backup ............................................................ 29
9   Backup over long distances ................................................... 32
10  FCIP tape acceleration ....................................................... 34
11  Normal SCSI Write ............................................................ 38
12  SCSI Write with FC-WA ........................................................ 39
13  IOA topology ................................................................. 40
14  Brocade tape acceleration — Host remote backup ............................... 43
15  Brocade tape acceleration — NDMP remote backup ............................... 44
16  NDMP example ................................................................. 46
17  Data deduplication process ................................................... 49
18  Example of backup software solution .......................................... 57
19  Core/edge fabric example with recommended component placement ............... 58


Preface

This EMC Engineering TechBook provides information on traditional backup and recovery architecture, SAN-based backup and recovery technologies, and disk and tape backup and recovery. Case studies are also presented. Case studies used in this document are distributed by EMC for information purposes only. EMC does not warrant that this information is free from errors. No contract is implied or allowed.

E-Lab would like to thank all the contributors to this document, including EMC engineers, EMC field personnel, and partners. Your contributions are invaluable.

As part of an effort to improve and enhance the performance and capabilities of its product lines, EMC periodically releases revisions of its hardware and software. Therefore, some functions described in this document may not be supported by all versions of the software or hardware currently in use. For the most up-to-date information on product features, refer to your product release notes. If a product does not function properly or does not function as described in this document, please contact your EMC representative.

Audience

This TechBook is intended for EMC field personnel, including technology consultants, and for the storage architect, administrator, and operator involved in acquiring, managing, operating, or designing a networked storage environment that contains EMC and host devices.

EMC Support Matrix and E-Lab Interoperability Navigator

For the most up-to-date information, always consult the EMC Support Matrix (ESM), available through E-Lab Interoperability Navigator (ELN), at: http://elabnavigator.EMC.com, under the PDFs and Guides tab.


The EMC Support Matrix links within this topology guide will take you to Powerlink where you are asked to log in to the E-Lab Interoperability Navigator. Instructions on how to best use the ELN (tutorial, queries, wizards) are provided below this Log in window. If you are unfamiliar with finding information on this site, please read these instructions before proceeding any further.

Under the PDFs and Guides tab resides a collection of printable resources for reference or download. All of the matrices, including the ESM (which does not include most software), are subsets of the E-Lab Interoperability Navigator database. Included under this tab are:

◆ The EMC Support Matrix, a complete guide to interoperable and supportable configurations.

◆ Subset matrices for specific storage families, server families, operating systems or software products.

◆ Host connectivity guides for complete, authoritative information on how to configure hosts effectively for various storage environments.

Under the PDFs and Guides tab, consult the Internet Protocol pdf under the "Miscellaneous" heading for EMC's policies and requirements for the EMC Support Matrix.

Related documentation

Related documents include:

◆ The former EMC Networked Storage Topology Guide has been divided into several TechBooks and reference manuals. The following documents, including this one, are available through the E-Lab Interoperability Navigator, Topology Resource Center tab, at http://elabnavigator.EMC.com.

These documents are also available at the following location:

http://www.emc.com/products/interoperability/topology-resource-center.htm

• Building Secure SANs TechBook

• Extended Distance Technologies TechBook

• Fibre Channel over Ethernet (FCoE): Data Center Bridging (DCB) Concepts and Protocols TechBook

• Fibre Channel SAN Topologies TechBook

• iSCSI SAN Topologies TechBook

• Networked Storage Concepts and Protocols TechBook


• Networking for Storage Virtualization and RecoverPoint TechBook

• WAN Optimization Controller Technologies TechBook

• EMC Connectrix SAN Products Data Reference Manual

• Legacy SAN Technologies Reference Manual

• Non-EMC SAN Products Data Reference Manual

◆ EMC Support Matrix, available through E-Lab Interoperability Navigator at http://elabnavigator.EMC.com > PDFs and Guides

◆ RSA security solutions documentation, which can be found at http://RSA.com > Content Library

All of the following documentation and release notes can be found at http://Powerlink.EMC.com. From the toolbar, select Support > Technical Documentation and Advisories, then choose the appropriate Hardware/Platforms, Software, or Host Connectivity/HBAs documentation links.

Hardware documents and release notes include those on:

◆ Connectrix B series
◆ Connectrix M series
◆ Connectrix MDS (release notes only)
◆ VNX series
◆ CLARiiON
◆ Celerra
◆ Symmetrix

Software documents include those on:

◆ EMC Ionix ControlCenter
◆ RecoverPoint
◆ Invista
◆ TimeFinder
◆ PowerPath

The following E-Lab documentation is also available:

◆ Host Connectivity Guides
◆ HBA Guides

For Cisco and Brocade documentation, refer to the vendors’ websites:

◆ http://cisco.com

◆ http://brocade.com


Authors of this TechBook

This TechBook was authored by Ron Dharma, Sowjanya Sake, and Michael Manuel, with contributions from EMC engineers, EMC field personnel, and partners.

Ron Dharma is a Principal Integration Engineer and team lead for the Advanced Product Solutions group in E-Lab. Prior to joining EMC, Ron was a SCSI software engineer, spending almost 11 years resolving integration issues in multiple SAN components. He has worked in almost every aspect of the SAN, including storage virtualization, backup and recovery, point-in-time recovery, and distance extension. Ron provided the original information in this document, and works with other contributors to update and expand the content.

Sowjanya Sake is a Senior Systems Integration Engineer with over 6 years of experience in storage technologies, tape virtualization, backup and recovery, high availability, and tape and disk libraries. Currently, Sowji works in E-Lab, qualifying tape and disk libraries with Celerra NDMP backup, including EMC Disk Library, Quantum DXi, Data Domain VTLs, Quantum Enterprise tape libraries, and StorageTek tape libraries, in combination with various Brocade and Cisco switches. Previously, Sowji worked for StorageTek and Brocade on Virtual Storage Manager and Brocade Fibre Channel switches, respectively.

Michael Manuel is a Consulting Program Manager and has been working in EMC E-Lab for over 10 years. Mike has 35 years of IT experience in areas including large systems, backup, recovery, and storage architectures. Mike has contributed to various E-Lab documents and has presented numerous sessions on backup and recovery at EMC World.

Conventions used in this document

EMC uses the following conventions for special notices:

CAUTION! A caution, used with the safety alert symbol, indicates a hazardous situation which, if not avoided, could result in minor or moderate injury.

IMPORTANT! An important notice contains information essential to software or hardware operation.


Note: A note presents information that is important, but not hazard-related.

Typographical conventions

EMC uses the following type style conventions in this document:

Normal — Used in running (nonprocedural) text for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• Names of resources, attributes, pools, Boolean expressions, buttons, DQL statements, keywords, clauses, environment variables, functions, and utilities
• URLs, pathnames, filenames, directory names, computer names, links, groups, service keys, file systems, and notifications

Bold — Used in running (nonprocedural) text for:
• Names of commands, daemons, options, programs, processes, services, applications, utilities, kernels, notifications, system calls, and man pages
Used in procedures for:
• Names of interface elements (such as names of windows, dialog boxes, buttons, fields, and menus)
• What the user specifically selects, clicks, presses, or types

Italic — Used in all text (including procedures) for:
• Full titles of publications referenced in text
• Emphasis (for example, a new term)
• Variables

Courier — Used for:
• System output, such as an error message or script
• URLs, complete paths, filenames, prompts, and syntax when shown outside of running text

Courier bold — Used for:
• Specific user input (such as commands)

Courier italic — Used in procedures for:
• Variables on the command line
• User input variables

< > — Angle brackets enclose parameter or variable values supplied by the user

[ ] — Square brackets enclose optional values

| — Vertical bar indicates alternate selections; the bar means “or”

{ } — Braces indicate content that you must specify (that is, x or y or z)

... — Ellipses indicate nonessential information omitted from the example

Where to get help

EMC support, product, and licensing information can be obtained as follows.

Product information — For documentation, release notes, software updates, or for information about EMC products, licensing, and service, go to the EMC Powerlink website (registration required) at:

http://Powerlink.EMC.com

Technical support — For technical support, go to Powerlink and choose Support. On the Support page, you will see several options, including one for making a service request. Note that to open a service request, you must have a valid support agreement. Please contact your EMC sales representative for details about obtaining a valid support agreement or with questions about your account.

We'd like to hear from you!

Your feedback on our TechBooks is important to us! We want our books to be as helpful and relevant as possible, so please feel free to send us your comments, opinions and thoughts on this or any other TechBook:

[email protected]

Chapter 1  Traditional Backup and Recovery

This chapter provides the following information on traditional backup and recovery architectures.

These case studies are distributed by EMC for information purposes only. EMC does not warrant that this information is free from errors. No contract is implied or allowed.

◆ Overview of backups ......................................................... 14
◆ Direct-attached backups ..................................................... 16
◆ LAN-based backups ........................................................... 18


Overview of backups

This section provides basic information on backup and recovery.

Terminology

The following terminology is used throughout this section.

◆ Back up — Create a copy of data.

◆ Archive — Move data with low usage patterns to a slower medium for historical or other purposes.

◆ Restore — Copy a file or group of files from a backup to the primary storage location.

◆ Recover — Rebuild a system or data center from backups.

Why perform backups?

Backups are needed as an insurance policy against loss of data, which can occur because of:

◆ Hardware failures

◆ Human error

◆ Application failures

◆ Security breaches, such as hackers or viruses

High-availability storage arrays have reduced the need to recover data because of hardware failures. Hardware availability features can protect data from loss due to hardware failures; however, these availability features cannot protect against the other factors that can result in loss of data.

Backups are sometimes used as an archive; for instance, government regulations require that certain financial data must be kept for a specific number of years. In this context, a backup also becomes an archive.


Backup architectures

Refer to the appropriate sections for more information on these backup architectures.

Traditional backup architectures are discussed in this chapter:

◆ “Direct-attached backups” on page 16

◆ “LAN-based backups” on page 18

SAN backup topologies are discussed in Chapter 2, ”SAN-based Backup and Recovery”:

◆ “LAN-free backups” on page 24

◆ “Serverless backups” on page 29

◆ “NAS backups” on page 45

Disk and tape backup and recovery technologies are discussed in Chapter 3, ”Disk and Tape Backup and Recovery.”


Direct-attached backupsMany organizations started with a simple backup infrastructure called direct-attached. This topology is also sometimes referred to as host-based or server-tethered backup. Each backup client has dedicated tape devices. Backups are performed directly from a backup client’s disk to a backup client’s tape devices.

Figure 1 Data flow for direct-attached backups

Advantages

The key advantage of direct-attached backups is speed. The tape devices can operate at the speed of the channels. Direct-attached backups optimize backup and restore speed, since the tape devices are close to the data source and dedicated to the host.

Disadvantages

Direct-attached backups impact the host and the application, since backups consume host I/O bandwidth, memory, and CPU resources.

Direct-attached backups are generally better suited for smaller environments. Growth rates in servers and data can cause direct-attached backups to become costly and difficult to manage and operate. Organizations with large growth may experience some or all of the following issues with direct-attached backup infrastructures:

◆ Large numbers of tape devices might be underutilized.

◆ A wide variety of backup media might be in use.



◆ Operators could find it difficult to manage tape. Tape devices might be scattered between floors, buildings, or entire metropolitan areas.

◆ Each server might have unique (and possibly locally created) backup processes and tools, which can complicate backup management and operation.

◆ It might be difficult to determine if everything is being backed up properly.

◆ Dispersed backups, multiple media types, diverse tools, and operational complexity can challenge the task of business continuance recovery.

Migration paths

Organizations that have outgrown direct-attached backup infrastructures have several migration paths, described in these sections:

◆ “LAN-based backups” on page 18

◆ “LAN-free backups” on page 24

Improvement options

The following can also improve backup speed and efficiency:

◆ Utilize EMC® TimeFinder® to reduce production impact by making a snapshot of the production disk that will be backed up

◆ Implement faster devices, including disk and disk libraries

◆ Improve disk and tape pathing


LAN-based backups

LAN backup infrastructures can be configured similarly to the schematic illustrated in Figure 2.

Figure 2 LAN backup infrastructures

The metadata server is the central control point for all backups. This is where the metadata (also known as tape catalog or backup index) and backup policies reside. Tape control servers manage and control backup devices, and are controlled by the metadata server. (A metadata server can also be a tape control server.) The primary use of LAN backup topologies is to centralize and pool tape resources.

Backup process overview

The backup process is as follows:

1. The metadata server invokes backup client processes on the backup client.


2. The tape control server places tapes into the tape drives.

3. The backup client determines which files require backup.

4. The backup client reads the backup data from disk and writes the backup data to the LAN.

5. The tape control server reads the backup data from the LAN and writes the backup data to the tape.

6. The backup client and the tape control server send metadata information to the metadata server, including what was backed up and which tapes the backups used.

7. The metadata server stores the metadata on disk.
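The steps above can be sketched as a minimal control flow. This is an illustrative sketch only: the class and method names (`MetadataServer`, `TapeControlServer`, `BackupClient`) are hypothetical and do not correspond to any particular backup product, and the LAN data movement is modeled simply as passing a file list between objects.

```python
# Hypothetical sketch of the LAN-based backup control flow described above.

class BackupClient:
    def __init__(self, name, files):
        self.name = name
        self.files = files          # {path: modification time}

    def files_needing_backup(self, last_backup_time):
        # Step 3: an incremental backup selects files changed since the last run.
        return [p for p, mtime in self.files.items() if mtime > last_backup_time]


class TapeControlServer:
    def __init__(self, tapes):
        self.tapes = list(tapes)

    def mount_tape(self):
        # Step 2: place a tape into a tape drive.
        return self.tapes.pop(0)

    def write_to_tape(self, tape, data):
        # Step 5: read the backup data from the LAN and write it to tape.
        return {"tape": tape, "files": data}


class MetadataServer:
    """Central control point: holds the backup index (tape catalog) and policies."""

    def __init__(self, tape_server):
        self.tape_server = tape_server
        self.index = []             # the metadata stored on disk (step 7)

    def run_backup(self, client, last_backup_time):
        # Step 1: invoke backup client processes on the backup client.
        files = client.files_needing_backup(last_backup_time)
        tape = self.tape_server.mount_tape()
        # Step 4: the client reads from disk and writes to the LAN
        # (modeled here as handing the file list to the tape control server).
        record = self.tape_server.write_to_tape(tape, files)
        # Steps 6-7: record what was backed up and which tape was used.
        self.index.append({"client": client.name, **record})
        return record


client = BackupClient("server01", {"/data/a": 100, "/data/b": 300})
meta = MetadataServer(TapeControlServer(["TAPE001", "TAPE002"]))
record = meta.run_backup(client, last_backup_time=200)
print(record)   # only /data/b changed after time 200
```

A restore would follow the same control path with the data flowing in the opposite direction, as noted later in this section.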

Advantages

The key advantages of LAN-based backups compared to direct-attached backups are:

◆ Reduced costs — Pooling tape resources improves tape device utilization and reduces the number of tape drives required, which also results in fewer host bus adapters.

Some small servers may require backups; because of tape drive cost or limited card slot availability, however, it might not be practical to dedicate a tape drive to one of these systems. LAN backups can address these issues.

◆ Improved management and operability — Centralized backups reduce management complexity; there are fewer resources to manage, and they are all in one place.

Centralizing tape resources into tape control servers improves the productivity of the operations staff, especially when backup clients are scattered across floors of a building, campuses, or cities. Operability can be improved further by utilizing automated, robotic tape libraries.

Disadvantages

A LAN-based infrastructure introduces some disadvantages to the backup process:

◆ Backups impact the host and the application.

◆ A LAN-based backup adds two additional data movement steps.


◆ Backups consume host I/O bandwidth, memory, LAN, and CPU resources.

◆ There could be network issues.

◆ A LAN-based backup might require dedicated media servers.

◆ There could be restore and cloning issues.

Additional data movement steps

LAN backups require two additional data movement steps to put the backup data on tape, as illustrated in Figure 3.

Figure 3 LAN backup: Additional steps — 1. Client reads from disk; 2. Client writes to LAN; 3. Server reads from LAN; 4. Server writes to tape

Additional CPU and memory resources are required on the backup client (compared to directly connected tape devices) to comply with network protocols, format the data, and transmit the data over the network. Note that restore processing in a LAN environment is identical, except that the data flows in the opposite direction.

Resource consumption

Like direct-attached backups, LAN backups consume CPU, I/O bandwidth, and memory. Since the final destination of the backup data resides elsewhere on the LAN, additional CPU is required on a tape control server. LAN bandwidth is also required.


Network issues

LAN backups will generally not perform as well as direct-attached backups. Additional data movement steps, network protocol overhead, and network bandwidth limits reduce the speed of backups. If the network segment is not dedicated to backups, backup performance can be erratic, since it is vulnerable to other network activity such as large FTPs, video, audio, and email.

Even the fastest available network connections can be overwhelmed by a few disk connections. Backup disk I/O consists of intense read activity. Modern cached disk arrays, like the EMC® Symmetrix® system, process I/O as fast as the channels will allow. Cached arrays with two Ultra SCSI or Fibre Channel connections are capable of exceeding the theoretical and practical rates of even the faster networking technologies. A single logical disk per path can impact the network for lengthy bursts, and multiple logical disks can saturate the network for long periods.

Figure 4 Bandwidth used in LAN backups (based on 100 Mb/s LAN setup): storage to backup clients at 20-250 MB/s per path; LAN at 1-60 MB/s; media servers to tape libraries at 20-200 MB/s per path

Environments that back up many logical disks to many tape libraries will constrain even the fastest network technologies. Adding additional LAN bandwidth may not always be technically feasible, since there are often limits on how many high-speed NICs (network interface cards) a server can support.
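A back-of-the-envelope calculation illustrates the mismatch described above. The specific numbers here are illustrative assumptions only: a 90 MB/s disk path (within the 20-250 MB/s per-path range shown in Figure 4) and an 80% practical-efficiency rule of thumb for the 100 Mb/s LAN.

```python
# Rough comparison of one disk path's throughput against a 100 Mb/s LAN segment.
lan_mbits = 100                       # 100 Mb/s Ethernet, as in Figure 4
lan_mbytes = lan_mbits / 8            # theoretical ceiling: 12.5 MB/s
lan_practical = lan_mbytes * 0.8      # ~10 MB/s; assumes ~80% of wire speed

disk_path_mbytes = 90                 # assumed single Ultra SCSI / FC path

oversubscription = disk_path_mbytes / lan_practical
print(f"{lan_practical:.1f} MB/s LAN vs {disk_path_mbytes} MB/s disk path: "
      f"{oversubscription:.0f}x oversubscribed")
```

Under these assumptions, a single disk path can deliver roughly nine times what the LAN segment can carry, which is why adding disks saturates the network long before the array runs out of bandwidth.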

LAN backups can increase management and troubleshooting complexity. Performing backups through firewalls can be a challenge. Troubleshooting may require engagement with operations personnel,


system administrators, storage administrators, and network administrators to resolve a problem.

Possible requirement for dedicated tape control servers

LAN-based backups can require dedicated tape control servers to drive the tape devices and to act as a central tape control point. Many larger organizations implement metadata servers with no tape devices, along with dedicated tape control servers for the tape libraries. Some tape device maintenance (especially SCSI) can require the server to be taken out of service. If there are multiple tape robots connected to dedicated tape control servers and the metadata server is kept separate, restores and other backups can continue while maintenance is performed on the tape device. Dedicated tape servers are not always a technical requirement, but they are quite often an operational requirement.

Migration paths

Organizations that have outgrown LAN-based backup infrastructures have several migration paths, including the following, described later in this document:

◆ “LAN-free backups” on page 24

Improvement options

The following can also improve backup speed and efficiency:

◆ Utilize TimeFinder BCVs to reduce production impact

◆ Implement faster devices, including disk and disk libraries

◆ Improve disk and tape pathing


2  SAN-based Backup and Recovery

This chapter provides the following information on SAN-based backup and recovery, including case studies:

◆ "LAN-free backups" on page 24
◆ "Serverless backups" on page 29
◆ "Backup over long distances using FCIP and routers" on page 32
◆ "FCIP tape acceleration with EMC Connectrix MDS switches" on page 34
◆ "FC-Write acceleration on Cisco MDS 9000 Family SSM" on page 37
◆ "FastWrite acceleration and tape pipelining" on page 42
◆ "NAS backups" on page 45

These case studies are distributed by EMC for information purposes only. EMC does not warrant that this information is free from errors. No contract is implied or allowed.

LAN-free backups

LAN-free backups, as shown in Figure 5, utilize SAN (Storage Area Network) technology in conjunction with backup software that supports tape pooling. The high-speed and extended-distance capabilities of Fibre Channel are used for the backup data movement path. Metadata is still moved over the LAN to the backup metadata server. This traffic is typically light and insignificant in relation to the large amounts of data moved during a backup. In a LAN-free architecture, the metadata server is also the control point for the robotic mechanism of the tape library.

Figure 5 LAN-free backup

Backup process overview

The backup process is as follows:

Note: The order of steps 2, 3, and 4 may vary depending on the backup tool that is utilized.

1. The metadata server invokes backup client processes on the backup client.

2. If tape pooling is used, the metadata server assigns tape devices to the backup client.


3. The metadata server instructs the tape robot to load tapes into the assigned drives.

4. The backup client determines which files require backup.

5. The backup client reads the backup data from disk through the SAN and writes backup data to the tape device through the SAN.

6. The backup client sends metadata information to the metadata server, including what was backed up and what tapes the backups used.

7. The metadata server stores the metadata on disk.
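The seven steps above can be condensed into a minimal orchestration sketch. All class and method names here are illustrative inventions, not the API of any real backup product:

```python
# Minimal sketch of the LAN-free backup flow described above.
# All names are hypothetical; real products implement this flow
# through their own (proprietary) interfaces.

class MetadataServer:
    def __init__(self, tape_pool):
        self.tape_pool = tape_pool      # shared SAN-attached tape drives
        self.catalog = []               # backup metadata, stored on disk (step 7)

    def run_backup(self, client):
        drives = [self.tape_pool.pop()
                  for _ in range(client.drives_needed)]     # step 2: assign drives
        self.load_tapes(drives)                             # step 3: robot control
        files = client.select_files()                       # step 4
        records = client.write_to_tape(files, drives)       # step 5: data over the SAN
        self.catalog.extend(records)                        # steps 6-7: metadata over the LAN
        self.tape_pool.extend(drives)                       # drives return to the pool
        return records

    def load_tapes(self, drives):
        for drive in drives:
            print(f"robot: loading tape into {drive}")

class BackupClient:
    drives_needed = 1

    def __init__(self, files):
        self.files = files

    def select_files(self):
        return [f for f in self.files if f["changed"]]

    def write_to_tape(self, files, drives):
        # Both the read from disk and the write to tape travel the SAN;
        # only the returned records cross the LAN.
        return [{"file": f["name"], "drive": drives[0]} for f in files]

server = MetadataServer(tape_pool=["drive0", "drive1"])
client = BackupClient([{"name": "/data/a", "changed": True},
                       {"name": "/data/b", "changed": False}])
records = server.run_backup(client)
print(records)
```

The key property of the LAN-free design is visible in `write_to_tape`: the bulk data path never touches the metadata server or the LAN.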

In theory, LAN-free backups can be implemented without tape pooling capabilities. This would provide the benefits of Fibre Channel performance and distance. This approach would essentially be a direct-attached backup at extended distance and would not address the tape utilization and management issues associated with direct-attached backups.

Organizations that are evolving from a direct-attached backup topology to LAN-free backup topology can gain additional benefits from tape pooling. Organizations that are evolving to LAN-free backups from a LAN backup topology are already conceptually performing tape pooling functions. To implement LAN-free backups without tape pooling would potentially require an increase in the number of tape devices. The remainder of this section assumes that backup software with tape pooling capabilities is part of the solution.

Advantages

A SAN-enabled backup infrastructure introduces these advantages to the backup process:

◆ Fibre Channel performance, reliability, and distance

◆ Fewer processes and less overhead

◆ No need to use the LAN to move backup data

◆ Elimination or reduction of dedicated media servers

◆ Improved backup and restore performance

Fibre Channel performance, reliability, and distance

Fibre Channel is designed for data movement and storage functions. This makes it an ideal channel for moving backup data.


The performance capabilities of Fibre Channel allow the backup application to move the backup data at the speeds required for modern backup windows. Switched fabric provides the capability to connect multiple backup clients to the tape libraries, and is key to providing tape pooling. The distance capabilities of Fibre Channel allow the solution to maintain a centralized architecture. SAN-enabled backups provide centralization and tape pooling while operating at direct-attached backup speeds.

Fewer processes and less overhead

Figure 6 shows the process flow for SAN backups. Two steps are required to copy the data to the backup media. (LAN backups require four steps.) The reduction in data movement steps reduces CPU and memory resource requirements for backup. Since restore processing follows the same steps (except that the data flows in the opposite direction), restores will also perform faster with fewer resources.

Figure 6 Process flow for SAN backups

No need to use the LAN

By removing the network bottleneck, the SAN allows the tape libraries to operate at full performance. Elimination of network traffic also frees CPU and memory, since the data does not need formatting for network transfer and networking protocols do not have to be managed by the system for backup traffic.

Additionally, overall system performance improves because of the reduction of backup traffic on the LAN.

[Figure 6: 1. Client reads from disk; 2. Client writes to tape.]


Figure 7 Bandwidth used in LAN-free backups

Elimination or reduction of dedicated media servers

A SAN reduces or eliminates dedicated media servers. CPU consumption for backup processes is directly related to the number of data movement steps. Since a LAN-free backup requires two fewer steps to move backup data, fewer CPU resources are required. This CPU savings for LAN-free backups is on the media server. Organizations evolving from LAN backups to LAN-free backups might have the option to downsize or eliminate dedicated media servers because of the CPU reductions.

Improved backup and restore performance

Backup and restore performance is improved because of the following factors:

◆ Number of processing steps is reduced.

◆ Backup traffic is moved to higher bandwidth and higher reliability fiber connections.

◆ Network protocol overhead is eliminated.

These factors also improve overall system performance. EMC PowerPath® will also help balance and manage disk path contention caused by backup load.

[Figure 7: backup data moves over the SAN at 20-240 MB/s per path between EMC storage, the backup client, and the tape library; the LAN carries only metadata and the robotic control path.]

Disadvantages

LAN-free backups impact the host and the application, consuming host I/O bandwidth, memory, and CPU resources.

Migration paths

Organizations that have exceeded the capabilities of LAN-free backups have several migration paths, including the following, described further in this chapter:

◆ “Serverless backups” on page 29 — An emerging enhancement to LAN-free backups is the serverless backup concept. Serverless backup utilizes the third-party copy capability of the Fibre Channel standard to move the backup data over the SAN straight from disk to tape under the control of the backup application.

Improvement options

The following can also improve backup speed and efficiency:

◆ Faster devices

◆ Improved pathing

◆ Utilization of TimeFinder BCVs


Serverless backups

Serverless backups, as shown in Figure 8, use SAN resources to move backup data from disk to tape. These backups are called serverless because the application server does not have to utilize host resources to perform the movement of backup data. Serverless backups are the next evolutionary step in LAN-free backup.

Note: Serverless backups are sometimes referred to as server-free backups.

Figure 8 Serverless backup

Theory of operation

Serverless backups use the third-party copy function (also called extended copy) of the Fibre Channel standard. Third-party copy allows a device to copy data between points on the SAN. A device performing third-party copy can be thought of as a copy engine.

Currently, Cisco supports serverless backup in its switches with an SSM module; Cisco refers to this feature as NASB. The copy task is initiated and controlled by a host application that instructs the copy engine to copy a specific number of blocks from the source device to the target device.


EMC NetWorker® offers support for Windows, and VERITAS offers support for Solaris. The copy engine then copies the data, block by block, until complete. The copy engine performs all of these copies inside the SAN and outside of the host.

With serverless backups, the backup application determines which blocks on disk require a backup. The backup application then instructs the copy engine to copy these disk blocks to the tape device.

Backup process overview

A typical backup process follows these steps:

Note: The order of steps 2, 3, and 5 may vary depending on the backup tool that is utilized.

1. The metadata server invokes backup client processes on the backup client.

2. If tape pooling is used, the metadata server assigns tape devices to the backup client.

3. The metadata server instructs the tape robot to load tapes into the assigned drives.

4. The metadata server notifies the backup client which tape drives it may use.

5. The backup client determines which files require backup.

6. The backup client instructs the copy engine to copy data, block by block, from the storage directly through the SAN to the tape drive.

7. When the backup is complete, the backup client sends metadata information to the metadata server.

8. The metadata server stores the metadata on disk.
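The difference from LAN-free backup is concentrated in step 6: the client sends copy instructions instead of data. A toy model of that third-party (extended) copy exchange, with all names hypothetical rather than the actual SCSI EXTENDED COPY command set:

```python
# Toy model of third-party (extended) copy: the backup client sends only
# block-copy descriptors, and a copy engine inside the SAN moves the data.
# All names are illustrative; this is not the real SCSI EXTENDED COPY format.

def build_copy_descriptors(lbas):
    """Backup client's job: decide which disk blocks need backup."""
    return [{"src_lba": lba, "count": 1} for lba in lbas]

class CopyEngine:
    """SAN-resident copy engine (for example, a switch service module)."""

    def __init__(self, disk, tape):
        self.disk = disk
        self.tape = tape

    def extended_copy(self, descriptors):
        moved = 0
        for desc in descriptors:
            for lba in range(desc["src_lba"], desc["src_lba"] + desc["count"]):
                self.tape.append(self.disk[lba])   # block-by-block, outside the host
                moved += 1
        return moved

disk = {0: "boot-block", 5: "db-page-5", 9: "db-page-9"}
tape = []
engine = CopyEngine(disk, tape)

# The host's only work is building descriptors and issuing one command;
# no backup data passes through its CPU, memory, or I/O paths.
moved = engine.extended_copy(build_copy_descriptors([5, 9]))
print(moved, tape)
```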

Advantages

Serverless backups offer the following advantages over server-based backups:

◆ Fibre Channel performance, reliability, and distance — Fibre Channel is designed for data movement and storage functions. This makes it an ideal channel for moving backup data.


◆ No use of host resources to move backup data — Since the backup data is moved outboard from the host CPU, memory and I/O bandwidth resources are not consumed. This substantially reduces the impact on the application by removing resource contention points. Large organizations can realize a significant CPU savings during the backup window.

◆ Reduced application impact.

◆ No use of the LAN to move backup data — By removing the network bottleneck, the SAN allows tape libraries to operate at full performance. The elimination of network traffic also frees CPU and memory, since the data does not have to be formatted for network transfer, and networking protocols do not require management by the system for backup traffic.

◆ Improved backup and restore performance — Backup performance is improved primarily because of reduced host-contention points. The serverless architecture is also designed to keep the tape devices streaming so that they operate at maximum performance.

Performance is improved because:

• Processing steps are reduced.

• Backup traffic is moved to higher bandwidth and higher-reliability fiber connections.

• Network protocol overhead is eliminated.

These factors also improve overall system performance. PowerPath also helps balance and manage disk path contention caused by backup load.

Disadvantages

Since serverless backups move data over the production SAN, there is a potential to cause an indirect impact on applications if the SAN has heavy I/O activity. Some implementations of serverless backup operate at the logical disk level, as opposed to the file level.


Backup over long distances using FCIP and routers

Using FCIP and routers allows backups over long distances within a SAN environment. It also allows longer-distance vaulting for security, remote copy, real estate, and similar purposes. Tape acceleration techniques help ensure acceptable performance.

Figure 9 shows a typical SAN on both the right side and the left. The routers link the two SANs together using FCIP. The FCIP link behaves as a SAN ISL, making the two SANs appear as one large SAN and thus extending the distance over which backups can occur.

Figure 9 Backup over long distances

Theory of operation

Long-distance backup utilizes SAN (storage area network) technology in conjunction with FCIP and traditional backup software. The high-speed and extended-distance capabilities of Fibre Channel and FCIP are used for the backup data movement path. Metadata is still moved over the LAN to the backup metadata server. This traffic is typically light and insignificant in relation to the large amounts of data moved during a backup. The metadata server is also the control point for the robotic mechanism of the tape library.


Advantages

A SAN-enabled backup infrastructure introduces these advantages to the backup process:

◆ Fibre Channel performance, reliability, and distance.

◆ Fewer processes and less overhead.

◆ Use of inexpensive dedicated LAN to move backup data to more remote locations.

◆ Elimination or reduction of dedicated media servers.

◆ Improved backup and restore performance.


FCIP tape acceleration with EMC Connectrix MDS switches

Tape devices store and retrieve data sequentially. Normally, accesses to tape drives have only one outstanding SCSI write operation at any given time. This single-command nature of tape writes impacts backup and archive performance, because each SCSI write operation does not complete until the host receives a good status response from the tape drive.

The FCIP tape acceleration feature, introduced in MDS SAN-OS Release 2.0(1b), solves this problem. It improves tape backup and archive operations by allowing faster data streaming from the host to the tape over the WAN link.

With tape acceleration, the backup server issues write operations to a remote tape drive.

Figure 10 illustrates FCIP link tape acceleration.

Figure 10 FCIP tape acceleration

Backup and Recovery in a SAN TechBook

Page 35: Backup and Recovery in a SAN

SAN-based Backup and Recovery

The local EMC Connectrix® MDS switch acts as a proxy for the remote tape drive by quickly returning a Transfer Ready (Write Accelerator) signal to the host, enabling the host to begin sending data sooner.

After receiving all the data, the local Connectrix MDS switch responds to signal the successful completion of the SCSI write operation (Tape Accelerator). This response allows the host to start the next SCSI write operation.

This proxying results in more data being sent over the FCIP tunnel in a given time period than the same operation without proxying, and it improves the utilization of WAN links.

At the other end of the FCIP tunnel, another Connectrix MDS switch buffers the commands and data it has received. The remote Connectrix MDS switch then acts as a backup server to the tape drive, waiting for a Transfer Ready from the tape drive before forwarding the data.

The Connectrix MDS SAN-OS provides reliable data delivery to the remote tape drives using TCP/IP over the WAN. Write data integrity is maintained by allowing the Write Filemarks operation to complete end-to-end without proxying. The Write Filemarks operation signals the synchronization of the buffer data with the tape library data. While tape media errors are returned to backup servers for error handling, tape busy errors are retried automatically by the Connectrix MDS SAN-OS software.
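The value of proxying the Transfer Ready and status responses can be estimated with a simple latency model. The record size, bandwidth, and round-trip time below are illustrative assumptions, not measured Connectrix figures:

```python
# Rough model of tape write throughput over an FCIP WAN link with and
# without tape acceleration. All figures are illustrative assumptions.

def throughput_mb_s(record_mb, rtt_s, xfer_s, accelerated):
    """Tape drives allow one outstanding SCSI write at a time.

    Without acceleration, each record pays a WAN round trip for the
    Transfer Ready and another for the status. With acceleration, the
    local switch proxies both responses, so records stream back to back
    and only the raw transfer time remains per record.
    """
    per_record = xfer_s if accelerated else xfer_s + 2 * rtt_s
    return record_mb / per_record

record = 0.0625            # one 64 KB tape record, in MB
rtt = 0.050                # assumed 50 ms WAN round trip
xfer = record / 100        # transfer time at an assumed 100 MB/s of bandwidth

slow = throughput_mb_s(record, rtt, xfer, accelerated=False)
fast = throughput_mb_s(record, rtt, xfer, accelerated=True)
print(f"without acceleration: {slow:.2f} MB/s")   # latency-bound, under 1 MB/s
print(f"with acceleration:    {fast:.2f} MB/s")   # bandwidth-bound
```

Under these assumptions the unaccelerated link is latency-bound to well under 1 MB/s, which is why tape acceleration matters far more as the WAN round trip grows.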

For more information, refer to the MDS 9000 Family Fabric Manager Configuration Guide, available on http://Powerlink.EMC.com.

Notes

Note the following:

◆ The tape acceleration feature is disabled by default and must be enabled on both sides of the FCIP link. If it is only enabled on one side of the FCIP tunnel, the tunnel is not initialized.

◆ FCIP tape acceleration does not work if the FCIP port is part of a Port Channel or if there are multiple paths with equal weight between the initiator and the target port. Such a configuration might cause either SCSI discovery failure or broken write or read operations.


◆ When tape acceleration is enabled on an FCIP interface, a FICON VSAN cannot be enabled on that interface. Likewise, if an FCIP interface is up in a FICON VSAN, write acceleration cannot be enabled on that interface.

◆ Enabling the tape acceleration feature automatically enables the write acceleration feature.

◆ Enabling tape acceleration for an FCIP tunnel re-initializes the tunnel.

◆ The flow control buffer size specifies the maximum amount of write data that a Connectrix MDS switch buffers for an FCIP tunnel before it stops the tape acceleration proxying process. The default buffer size is 256 KB and the maximum buffer size is 32 MB.

Enabling FCIP tape acceleration on Connectrix MDS switches

To enable FCIP tape acceleration using Fabric Manager:

1. From Fabric Manager, choose ISLs > FCIP from the Physical Attributes pane.

The FCIP profiles and links display in the Information pane.

2. From Device Manager, choose IP > FCIP.

The FCIP dialog box displays.

3. Click the Tunnels tab.

The FCIP link information displays.

4. Click the Create Row icon in Fabric Manager or the Create button in Device Manager.

The FCIP Tunnels dialog box displays.

5. Set the profile ID in the ProfileID field and the tunnel ID in the TunnelID field.

6. Set the RemoteIPAddress and RemoteTCPPort fields for the peer IP address you are configuring.

7. Check the Write Accelerator and TapeAccelerator checkboxes.

8. Optionally, set the other fields in this dialog box and click Create to create this FCIP link.


FC-Write acceleration on Cisco MDS 9000 Family SSM

This section contains the following information:

◆ “Cisco MDS 9000 Family SSM” on page 37

◆ “Cisco I/O Accelerator (IOA)” on page 39

Cisco MDS 9000 Family SSM

The Cisco MDS 9000 Family Storage Services Module (SSM) provides the intelligent service of identifying the SCSI I/O flow for a given initiator-target pair. This information is used to provide the FC-Write acceleration (FC-WA) feature and to gather advanced I/O statistics for a given initiator-target pair. The FC-WA feature decreases the latency of an I/O over long distances. The advanced I/O statistics collected can be used to evaluate storage performance for the initiator-target pair.

The improved performance results from a coordinated effort between the Storage Services Module local to the initiator and the Storage Services Module local to the target. The initiator Storage Services Module, bearing the host-connected intelligent port (HI-port), allows the initiator to send the data to be written well before the write command has been processed by the remote target and a SCSI Transfer Ready message has had time to travel back to start the data transfer in the traditional way.

The exchange of information between the HI-port and the disk-connected intelligent port (DI-port) allows the transfer to begin earlier than in a traditional transfer. The procedure makes use of a set of buffers for temporarily storing the data as near to the DI-port as possible. The information exchanged between the HI-port and DI-port is piggybacked on the SCSI command and the SCSI Transfer Ready message, so there are no additional FC-WA-specific frames traveling on the SAN. Data integrity is maintained because the original message that states the correct execution of the write operation on the disk side (SCSI Status Good) is transferred from the disk to the host.


Figure 11 shows the effect of latency in the communication channel on the time taken to complete a SCSI write operation. The time added to the net execution time of the operation is at least four times the trip delay between the host and the disk, because of the transfer of the command, the Transfer Ready message, the data, and the status.

Figure 11 Normal SCSI Write

Figure 12 on page 39 shows how FC-WA allows the data to be sent on the line without waiting for the disk's Transfer Ready message to be transferred all the way back to the host. To preserve data integrity, the status message is not emulated. Depending on the timing, the latency added by the communication may be as low as two times the trip delay: one trip for the command and one for the status. Therefore the workable distance between the host and the disk can be increased by up to 50 percent.


Figure 12 SCSI Write with FC-WA
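The trip-delay arithmetic in Figures 11 and 12 can be made concrete with a small worked example, assuming a 100 km link and roughly 5 microseconds per kilometer of one-way propagation delay in fiber:

```python
# Worked example of the trip-delay arithmetic for a single SCSI write,
# assuming a 100 km link and ~5 microseconds/km one-way delay in fiber.

km = 100
one_way_s = km * 5e-6                 # about 0.5 ms each way

# Normal write (Figure 11): command out, Transfer Ready back,
# data out, status back = four one-way trips in series.
normal_added = 4 * one_way_s

# With FC-WA (Figure 12): the local SSM proxies the Transfer Ready,
# so only the command and the never-emulated status cross the link
# serially = two one-way trips.
fcwa_added = 2 * one_way_s

print(f"normal write: {normal_added * 1e3:.1f} ms of link latency")   # 2.0 ms
print(f"with FC-WA:   {fcwa_added * 1e3:.1f} ms of link latency")     # 1.0 ms
```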

Cisco I/O Accelerator (IOA)

The Cisco MDS 9000 Family I/O Accelerator (IOA) feature provides Small Computer System Interface (SCSI) acceleration in a storage area network (SAN) where the sites are interconnected over long distances using Fibre Channel or Fibre Channel over IP (FCIP) Inter-Switch Links (ISLs). Figure 13 on page 40 shows an example of an IOA topology.


Figure 13 IOA topology

Benefits include:

◆ Unified acceleration service

IOA provides both SCSI write acceleration and tape acceleration features as a unified fabric service. These services were provided in previous releases in the form of Fibre Channel write acceleration for remote replication over Fibre Channel links and FCIP write acceleration and tape acceleration over FCIP links. Fibre Channel write acceleration was offered on the Storage Services Module (SSM) and FCIP write acceleration and tape acceleration were offered on the IP storage services modules. IOA offers both the write acceleration and tape acceleration services on the Cisco MDS MSM-18/4 module, SSN-16 module, and 9222i switch as a fabric service. This eliminates the need to buy separate hardware to obtain Fibre Channel write acceleration and FCIP write acceleration and tape acceleration.


◆ Topology independent

IOA can be deployed anywhere in the fabric without rewiring the hardware or reconfiguring the fabric, and there are no restrictions on where hosts and targets are connected. By contrast, Fibre Channel and FCIP write acceleration are supported only on PortChannels and do not support multiple equal-cost links, and FCIP tape acceleration is not supported on PortChannels. IOA eliminates these topological restrictions.

◆ Transport agnostic

IOA is completely transport-agnostic and is supported on both Fibre Channel and FCIP ISLs between two sites.

◆ High availability and resiliency

IOA equally supports both PortChannels and equal-cost multiple path (ECMP) links across two data centers. This allows you to seamlessly add ISLs across the two data centers for capacity building or redundancy. IOA is completely resilient against ISL failures. IOA uses a Lightweight Reliable Transport Protocol (LRTP) to guard against any ISL failures as long as there is an alternate path available across the two data centers. Remote replication and tape backup applications are completely unaffected by these failures.

◆ Improved tape acceleration performance

IOA tape acceleration provides higher throughput than FCIP tape acceleration, which is limited by the throughput of a single Gigabit Ethernet link.

◆ Load balancing

IOA uses clustering technology to provide automatic load balancing and redundancy for traffic flows across multiple IOA service engines that can be configured for the IOA service. When an IOA service engine fails, the affected traffic flows are automatically redirected to the available IOA service engines to resume acceleration.


FastWrite acceleration and tape pipelining

To optimize performance of backups over FCIP links, consider using FastWrite and tape pipelining.

Note: These features are supported only in Fabric OS 5.2.x and higher.

FastWrite and tape pipelining provide accelerated speeds to FCIP tunnels in some configurations:

◆ FastWrite accelerates the SCSI write I/Os over FCIP.

◆ Tape pipelining accelerates SCSI write I/Os to sequential devices (such as tape drives) over FCIP, reducing the number of roundtrip times needed to complete the I/O over the IP network and accelerating the process.

Note: You must enable FastWrite in order to use tape pipelining.

◆ Both sides of an FCIP tunnel must have matching configurations for these features to work.

The FastWrite and tape pipelining features do not require any predefined configuration. This makes it possible to enable these features by adding optional parameters such as -c, -f, or -t when you create FCIP tunnels. Refer to the Fabric OS Administrator's Guide for further information.

Brocade 7800 and EMC EDL over IP case study

In Figure 14 on page 43, a configuration was built with Brocade 7800 switches and an IP network emulating up to 40,000 kilometers of distance, using EMC EDL 4200 devices locally and remotely, and EMC NetWorker. FastWrite and tape pipelining were turned on for the remote configuration.

In Step 1, the backups are written to the local EMC EDL 4200 device.

In Step 2, the backup is cloned to the remote location, which consists of a remote EDL 4200 connected to a remote Brocade 7800. FastWrite and tape pipelining are turned on in both the local and remote Brocade 7800s.


Figure 14 Brocade tape acceleration — Host remote backup

In Figure 15 on page 44, a configuration was built using EMC Celerra®, Brocade 7800 switches, an IP network emulating up to 40,000 kilometers of distance, EMC EDL 4200 devices locally and remotely, and EMC NetWorker. FastWrite and tape pipelining were turned on for the remote configuration.

In Step 1, the backups are written to the local EMC EDL 4200 device by Celerra under NetWorker NDMP control.

In Step 2, the backup is cloned by Celerra under NDMP control to the remote location, which consists of a remote EDL DL710 connected to a remote Brocade 7800. FastWrite and tape pipelining are turned on in both the local and remote Brocade 7800s.


Figure 15 Brocade tape acceleration — NDMP remote backup

Note: EDL has IP replication between the EDLs, but this solution uses Celerra NDMP over IP.

Results

Performance depends on the block size, but the improvement with tape pipelining is up to 80%.


NAS backups

The Celerra Network Attached Storage (NAS) device supports multiple backup options:

◆ Local and remote backup

◆ Network backups

◆ Network Data Management Protocol (NDMP) backups

Local and remote backup

Local backup uses tape devices that are directly connected to the Celerra.

Remote backup transfers the backup data to another server that contains a tape device. The backup data may be transmitted over the network or it may be transferred through direct server-to-server connection.

NDMP backup

The Network Data Management Protocol (NDMP) is an open-standard communication protocol that is specifically designed for backup of network-attached storage devices. NDMP enables centralized backup management and minimizes network traffic.

A key feature of NDMP is that it separates the flow of data from the management of the data. This allows third-party backup tools to interface with Celerra and maintain centralized control. NDMP backups can be performed locally (NDMP V1) with tape devices connected directly to the Celerra or remotely (NDMP V2) to another location. Both of these options are managed by the third-party backup tool.


Figure 16 on page 46 shows an example of NDMP.

Figure 16 NDMP example

More information on Celerra backups can be found in various Celerra documents on Powerlink.

For more information on NDMP backups on an EMC VNX™ series system, refer to the EMC VNX Series Configuring NDMP Backups on VNX document, available on Powerlink.

3  Disk and Tape Backup and Recovery

This chapter provides the following information on disk and tape backup and recovery:

◆ "Backup and recovery" on page 48
◆ "Data archiving" on page 51
◆ "Backup media" on page 52
◆ "Mirrored fabric backup solution" on page 55
◆ "Tapes and fabrics" on page 60

These case studies are distributed by EMC for information purposes only. EMC does not warrant that this information is free from errors. No contract is implied or allowed.

Backup and recovery

This section briefly describes the following:

◆ “Tape-based backup” on page 48

◆ “Disk-based backup” on page 48

◆ “Deduplication” on page 49

Tape-based backup

Traditional tape-based backup systems can be slowed by the mechanical nature of tape.

Consider the following:

Tape Speed = Tape Transfer Rate + Mechanical Motion

where mechanical motion consists of the following factors:

• Robot mount / dismount time

• Load-to-ready times (finding the start of the tape)

• Rewind times
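The formula above can be turned into a rough estimate of effective throughput. The drive speed and mechanical delays below are illustrative assumptions, not vendor specifications:

```python
# Effective throughput = data moved / (transfer time + mechanical motion).
# The drive speed and delays below are illustrative assumptions only.

def effective_mb_s(data_mb, native_mb_s, mount_s, load_s, rewind_s):
    transfer_s = data_mb / native_mb_s
    mechanical_s = mount_s + load_s + rewind_s   # robot mount, load-to-ready, rewind
    return data_mb / (transfer_s + mechanical_s)

# Same hypothetical drive (120 MB/s native) and library (3 minutes of
# mechanical motion per tape), two very different job sizes:
small = effective_mb_s(10_000,  native_mb_s=120, mount_s=30, load_s=60, rewind_s=90)
large = effective_mb_s(500_000, native_mb_s=120, mount_s=30, load_s=60, rewind_s=90)

print(f"10 GB job:  {small:.0f} MB/s effective")    # ~38 MB/s
print(f"500 GB job: {large:.0f} MB/s effective")    # ~115 MB/s
```

Small jobs are dominated by mechanical motion, which is exactly the overhead that disk-based backup avoids.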

Disk-based backup

Disk arrays can be faster than tape for both backups and restores. Advantages include:

• Potentially faster transfer speeds

• No mount, rewind, load-to-ready issues

• Not vulnerable to streaming issues

A potential disadvantage of backup to disk is that it may require changes to existing backup processes and operations.

For more information concerning backup to disk, refer to the EMC VNX Series Configuring NDMP Backups on VNX document, available on Powerlink.


Deduplication

Data deduplication is a method of reducing storage needs by eliminating redundant data. Only one unique instance of the data is actually retained on storage media, such as disk or tape. Redundant data is replaced with a pointer to the unique data copy.

Deduplication occurs across files: redundant data shared between files (not only within a single file, as with compression) is stored only once.

The deduplication process searches all files for redundant chunks of data and saves only the unique blocks to disk, adding a pointer wherever a block is repeated. As a result, the disk capacity required to store the files is reduced. Figure 17 shows an example of the data deduplication process.

Figure 17 Data deduplication process

Data deduplication enables organizations to reduce back-end capacity requirements by minimizing the amount of redundant data that is ultimately written to disk backup targets. The actual data reduction can vary significantly from organization to organization or from application to application depending on a number of factors, the most important being the rate at which data is changing, the frequency of backup and archive events, and how long that data is retained online.
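The block-splitting and pointer mechanism described above can be sketched as follows. This is a minimal illustration only: fixed-size chunks and SHA-256 fingerprints are assumed for simplicity, whereas real products use more sophisticated (often variable-length) chunking.

```python
import hashlib

# Minimal sketch of block-level deduplication (illustrative only):
# files are split into fixed-size chunks; a chunk that has been seen
# before is replaced by a pointer (its hash) instead of stored again.
CHUNK_SIZE = 4

store = {}  # hash -> unique chunk actually kept on "disk"

def dedupe(data: bytes):
    """Return the list of chunk pointers (hashes) representing this file."""
    pointers = []
    for i in range(0, len(data), CHUNK_SIZE):
        chunk = data[i:i + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)  # stored once, however often it recurs
        pointers.append(digest)
    return pointers

f1 = dedupe(b"AAAABBBBCCCCDDDD")      # file 1: blocks A, B, C, D
f2 = dedupe(b"AAAABBBBCCCCDDDDEEEE")  # file 2: adds only block E
raw_bytes = 16 + 20                   # bytes written without deduplication
kept_bytes = sum(len(c) for c in store.values())  # 5 unique blocks = 20
```

Here 36 bytes of user data reduce to 20 bytes of unique blocks; the reduction grows with the amount of redundancy across files, which is why change rate, backup frequency, and retention drive the achievable ratio.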

Benefits include:

◆ Lower storage space requirements, hence reducing disk expenditure and power and cooling requirements.

◆ Longer disk retention periods.

◆ Reduction in the amount of data to be sent over distance.

[Figure 17 depicts Files 1, 2, and 3 composed of data blocks A through E; after deduplication, only the five unique blocks (A, B, C, D, E) are stored on disk.]


EMC has the following deduplication options:

◆ Avamar®

Deduplication backup software that identifies redundant data at the source, thereby reducing the amount of data sent over the fabric.

◆ Data Domain® deduplication appliance

Storage that deduplicates data before storing it, reducing the disk space required.

◆ NetWorker

A backup application that provides deduplication through integration with Avamar.

For more information, refer to the Backup and Recovery: Accelerating Efficiency and Driving Down IT Costs Using Data Deduplication White Paper, available on Powerlink.


Data archiving

Data archiving is the process of moving data that is no longer actively used to a separate data storage device for long-term retention.

Data archives consist of older data that is still important and necessary for future references, as well as data that must be retained for regulatory compliance. Archives are indexed and have search capabilities so that files, and parts of files, can be easily located and retrieved.

Generally, data to be archived is moved to slower, less expensive storage, mainly tape. The tape can be moved out of the library and stored in external offsite storage that is considered safe from disasters. The archive is the primary copy of the data; all other copies are deleted, so only one copy is retained as the archive.

Archiving to tape can be done directly from disk storage, such as a virtual tape library. The EMC Disk Library (EDL) has an export-to-tape feature that transfers data from disk to tape without using a backup server. When the tape is directly connected to the EDL, the network is freed of the traffic associated with the archiving process.


Backup media

This section includes the following information on backup media:

◆ “Tape libraries” on page 52

◆ “Editing the Solaris configuration file: st.conf” on page 52

◆ “HBAs” on page 53

◆ “Tape drives” on page 54

Tape libraries

A tape library consists of tape storage facilities, a robotic tape selection mechanism (sometimes referred to as a picker), and tape devices.

Tape libraries connect to the SAN with two types of connections: the data path and the robotic control path. The data path is the mechanism that moves data between the server and the tape device(s). The robotic control path controls the robotic mechanisms.

The robotic control (picker) path connects to the SAN with an embedded bridge or blade located in the tape library. Depending on the tape library and customer preferences, the drive data path can connect through the blade, or directly to the SAN. Some newer libraries do not use a blade for the picker; in this case, one drive is configured as the control LUN-based drive, and the picker control path is presented as a LUN under that drive. There are no significant technical advantages or disadvantages to either connection.

The key benefit from a SAN backup comes from the ability to connect the tape devices to the SAN.

Editing the Solaris configuration file: st.conf

Note: Valid only for Solaris 8 and earlier, non-Leadville-based Solaris configurations.

If tape devices are addressed (through a blade, bridge, or control drive) they can use target IDs and LUNs greater than zero. The driver configuration file /kernel/drv/st.conf must be modified to include the correct definitions for each target ID and LUN. The Solaris driver treats all entries in all target drivers (such as sd and st) as one continuous assignment per instance of the HBA. Therefore, be sure that st.conf does not contain target numbers already specified in sd.conf. Different HBAs behave in slightly different ways, so consult your HBA vendor to see if these changes are needed.
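For example, st.conf entries for a tape device presented at target 8, LUNs 0 and 1, might look like the following. The target number here is hypothetical; use the IDs your bridge or control drive actually presents, and make sure they do not collide with targets already defined in sd.conf.

```
name="st" class="scsi" target=8 lun=0;
name="st" class="scsi" target=8 lun=1;
```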

Solaris provides host application access to the tape through:

/dev/rmt/0, /dev/rmt/1,...x

Therefore, /dev/rmt/<x> addresses must be used for tape access.

Note: The numbering in /dev/rmt should be sequential, starting with 0.

Note: Changes to st.conf do not take effect until the host is rebooted with the reconfigure option (reboot -- -r).

CAUTION! Deleting internal drive entries from /kernel/drv/st.conf makes your host unbootable.

HBAs

Persistent binding is recommended. Persistent binding keeps LUN and tape device numbering consistent, since there is a slight chance that they could change during a system event. Consult your HBA vendor for details.

Windows

Requires no additional settings; however, you must install a tape driver when working with a tape drive.

Sun Solaris (Fibre Channel tape support)

You may need to change two files:

◆ sd.conf — Configuration file used for disks; outlines the specific devices that can be used, and enables binding.

◆ st.conf — Configuration file used for tapes; all tape drives must be listed in this file, and binding can be enabled.


Tape drives

All new-generation tape drives support FC-SW and FC-AL. Some drives may default to loop; EMC recommends setting your switch or drive to FC-SW. Some devices have two ports; EMC supports the use of only one port at a time.

Each Fibre Channel tape drive is configured with a static, unique World Wide Name (WWN) assigned by the manufacturer. You can use this WWN during the fabric zoning process to allow servers access to tape devices.

The most popular drive technology is LTO, currently at generation 5. Both LTO 4 and LTO 5 tape drives are available in half-height and full-height variants.

◆ LTO 5 supports an 8 Gb/s Fibre Channel interface and can store 1.5 TB of native data, or 3 TB compressed.

◆ LTO 4 and LTO 5 tape drives also provide target-based encryption of data at rest, with a key manager used to store and manage the encryption keys.

Oracle's StorageTek T10000A and T10000B tape drives also provide encryption functionality.


Mirrored fabric backup solution

EMC supports many Fibre Channel tape backup solutions. This section describes the solution that best fulfills the following objectives:

◆ Ease of management:

• No new zone sets are required; only new tape zones.

• Traffic patterns/routes are direct and deterministic.

◆ Supportability:

• No new hardware or configuration is required.

• Affected resources are easily identified because of consistent placement and routing.

◆ Flexibility:

• PowerPath allows non-simultaneous use of the same HBAs for both disk and tape.

Note: PowerPath does not provide multipathing or failover solutions for tape.

• Core/edge fabric topologies allow traffic segregation and localization, so bandwidth requirements can be managed separately.

◆ Scalability — Provides the scalability benefits of the mirrored, core/edge fabric design.

◆ Availability:

• Core/edge fabrics provide multiple paths from servers to tapes.

• Mirrored fabrics protect against HBA failures, switch failures, tape failures, fabric failures, and maintenance activities.

◆ Maintainability — Since all servers have access to tapes on both fabrics, either fabric can be placed in a maintenance state without catastrophically interrupting the backup cycle.

◆ Maximum return on capital investments:

• PowerPath allows hosts to provide additional disk access when backups are not being performed. (Backup customization may be required.)


• Better utilization of bandwidth, ISLs, and switch ports.

“Solution description” on page 56 describes the solution in detail, and illustrates how each aspect of the backup solution adheres to the project requirements.

Solution description

Since tape drives are supported as single-attached devices only, our solution must provide a means to protect our backup resources in the event of a failure in the environment. To accomplish this, EMC recommends that tape drive resources be distributed evenly across both sides of the proposed mirrored fabric environment.

Understanding the importance of backup resources in your business continuance plan, EMC strives to provide the highest level of protection possible, while maximizing the duty cycle of all resources. Each media server or application server (LAN-free backup) in this facility would be configured so that it could access disk and tape storage on both fabrics. This can ensure that any service interruptions or failures on one fabric do not affect the storage (disk/tape) resources attached to the other fabric.

Providing resources that are isolated from the impact of maintenance cycles or failures helps ensure that backup resources are always available when you need them.

The backup software solution should be evaluated for the best way to create media pools of devices that can include tape drives from both fabrics. PowerPath scripting should also be employed to set specific HBAs that share tape and disk into standby mode during backup cycles. Setting a path to standby allows PowerPath to use the HBA in the case of path failure, but would not use it for data traffic during normal conditions.

This procedure can ensure that the HBA is dedicated to tape access during the time of the backup, alleviating contention for bandwidth resources across this link (which could cause backup performance degradation). This procedure is especially useful for hosts that do not have enough available slots for a dedicated tape (backup) HBA. It is also useful for hosts that are already very write-intensive.
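The standby procedure described above might be scripted around the backup window as follows. This is a sketch assuming PowerPath's powermt CLI; the HBA number is hypothetical, and the exact syntax should be verified against your PowerPath version.

```
# Before the backup window: take HBA 2 out of disk load balancing,
# leaving it dedicated to tape (but still available on path failure).
powermt set mode=standby hba=2

# ... run the backup jobs ...

# After the backup window: return the HBA to active use so PowerPath
# rebalances disk traffic across all adapters again.
powermt set mode=active hba=2
powermt display dev=all    # confirm the path states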

Figure 18 on page 57 shows an example of a backup software solution.


Figure 18 Example of backup software solution

Although tape drives can be placed anywhere in the fabric, EMC recommends placing the tape drives at the core of a core/edge fabric. This provides equal (single-hop) access from each of the LAN-free backup servers located at the edge switches. Centralizing the tape storage also helps to prevent any ISL bandwidth congestion that might occur by providing direct routes to the tape storage. This also aids in balancing the backup load across the entire fabric without complex backup scheduling. Backup, restore, and recovery requirements can be maintained simply by adding the required core/edge ISLs needed to handle the peak application and backup bandwidth load.

Dedicated media servers and third-party copy engines (server-free backup) should also be placed at the core of the fabric. Since these servers are dedicated to tape access, placing them at the core ensures that their data will traverse only the internal switch backplane or the core-to-core ISLs. This provides better use of the switch’s internal resources, helping to minimize the required core/edge ISLs.

Utilizing the core-to-core ISLs also provides a means to reduce contention for ISL resources by segregating the backup traffic from the application traffic. Maintaining the backup objectives for these dedicated media servers and third-party copy engines is managed by adding the proper number of core-to-core ISLs. Because the number of dedicated media servers and third-party copy engines is usually much lower than the number of application servers (backup clients), the fan-out benefits of the core/edge fabric design are not adversely affected. This level of traffic segregation also provides better control over application server SLAs by minimizing the impact of backup traffic on the application servers and their data routes.



Figure 19 is a sample of a core/edge fabric with the recommended application server, tape drive, disk storage, and media server placement.

Figure 19 Core/edge fabric example with recommended component placement

Physical backup device centralization

Creating a centralized backup location that contains both the trained backup administrators and the backup media resources can increase the duty cycles of your backup media and the efficiency of your personnel resources.



Centrally locating the personnel provides the following benefits:

◆ Faster communication between group members.

◆ Better access to, and transfer of, skills among the members.

◆ No need for operators to move between locations for tape processing and troubleshooting.

Summary

While EMC supports many different fabric environments, core/edge mirrored fabric environments offer an efficient, robust design that can fulfill both your application and business continuance requirements.


Tapes and fabrics

This section discusses the following:

◆ “SCSI tape” on page 60

◆ “Fibre Channel tape” on page 60

◆ “Sharing tape and disk on the same HBA” on page 62

For basic information on tape, refer to “Tape libraries” on page 52 and “Tape drives” on page 54.

SCSI tape

SCSI-attached tape storage devices have been, and continue to be, a viable solution in the data storage environment. Whether these devices are legacy hardware, new purchases, or advanced SCSI technology, they must also communicate with the Fibre Channel portion of the SAN environment. To achieve this, they are attached directly to SCSI-to-Fibre Channel bridges and, through them, to the SAN.

Fibre Channel tape

Most native Fibre Channel tape drives available today are FC-SW capable, but there are some early first-generation native FC tape drives that only support Fibre Channel Arbitrated Loop (FC-AL). Consult your tape drive vendor to obtain this information. Public loop devices are arbitrated loop devices that support Fibre Channel fabric login and services. Each Fibre Channel tape drive has one or more NL_Ports (Node Loop Ports), which can be used to connect into either a loop device or fabric device capable of communication with NL_Ports or a loop-to-fabric bridge.

Each Fibre Channel tape drive is configured with a static, unique, World Wide Name (WWN) assigned by the manufacturer. This WWN can be used during the fabric zoning process to allow servers access to the tape devices.

Supported Fibre Channel tape drives

E-Lab Navigator lists the currently supported FC-AL and FC-SW tape drives.


Connecting Fibre Channel tape drives into a fabric

Some fabric switches, routers, and bridges support both the FC-AL protocol and the Fibre Channel Point-to-Point protocol. Point-to-Point is required for SAN appliances that support switched fabric communication. A physical port on such an appliance is referred to as an FL_Port (Fabric Loop Port). FL_Ports may either automatically negotiate the method of communication with a connected N_Port or NL_Port device on initialization, or the user may be required to manually set the port type in the device's configuration file prior to connecting the devices together. In order to connect with SAN devices that do not support FC-AL (Connectrix ED-1032, for example), a loop-to-switch bridge can be used.

Configuration details

The supported switches all provide the same functionality when configuring Fibre Channel tape drives. Prior to starting the configuration, make sure that the Fibre Channel ports on the tape drives are enabled and that the tape drives are online:

◆ Tape drive connection to switch — Each port on these switches is capable of auto-negotiating and auto-initializing the port for loop or switch communication. Auto-configuration occurs immediately after the tape devices are connected to the switch port. At the completion of the auto-configuration phase (almost instantaneously), the port should appear as an FL_Port in the name server list on the switch. No additional software or licensing is required for this functionality.

Note: Do not configure these ports as QuickLoop or Fabric Assist ports. The QuickLoop or Fabric Assist mechanisms are not required for public loop device support and are not supported by EMC.

When the negotiation and initialization is complete, you will also be able to view the tape drive’s WWPN in the switch’s name server list.

◆ Server connection to switch — Each server connected to the switch that will communicate with the FC-AL tape drive should be configured as a Fibre Channel switch fabric device. When the server is connected to the switch it will automatically run through the port negotiation and initialization procedures, but the resultant port configuration will appear as a fabric-capable N_Port in the switch’s name server list. When the negotiation and initialization is complete, you should be able to view the server’s HBA WWPN in the switch’s name server list.


Note: E-Lab Navigator identifies the latest drivers and firmware associated with these HBAs.

◆ Zoning — The switch's Web browser configuration tool will allow you to zone the unique WWPN of the tape drive with the WWPN of the server’s HBA.

“Sharing tape and disk on the same HBA,” next, describes the considerations on sharing the same HBA for both disk and tape.

◆ Server-to-tape communication — Translation of FC-AL protocols associated with the tape drive from/to FC-SW associated with the server’s HBA are all handled automatically, internal to the switch. No special settings on the switches are necessary to allow translation and communication.
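As a sketch of how the WWPN zoning step might be done from a switch CLI instead of the Web tool, a Brocade-style command sequence is shown below. The zone name, configuration name, and WWPNs are invented for illustration; consult your switch documentation for the exact syntax.

```
zonecreate "z_host1_tape1", "10:00:00:00:c9:2b:74:00; 50:01:10:a0:00:8b:47:a1"
cfgadd "prod_cfg", "z_host1_tape1"
cfgsave
cfgenable "prod_cfg"
```

Note that this adds a tape zone to the existing zone set rather than creating a new zone set, consistent with keeping zoning changes minimal.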

Sharing tape and disk on the same HBA

While EMC supports simultaneous tape and disk access over the same HBA, some access scenarios impact the feasibility of such a solution. E-Lab Navigator contains specific details about your configuration.

When disk and tape share the same HBA, the HBA must support four I/O activities:

◆ Production disk reads to support your applications

◆ Production disk writes to support your applications

◆ Disk reads to feed the backup operation

◆ Tape writes to support the backup operation

In heavy I/O environments, these four activities combined can result in I/O path and device contention.

I/O path contention occurs when more than one application is trying to access the same I/O path at the same time. Both operations must arbitrate for control of the path, and one must wait while the other is using the path. All technologies also have their own path bandwidth limits, which must be shared between the operations.

Device contention occurs when multiple operations try to access the same information at the same time. Again, while one operation is accessing a resource, the others must wait their turn.


Path and device contention can result in both reduced application performance and reduced tape backup performance.

Tape devices are sequential media that expect data to be sent in large continuous blocks. This continuous flow of data (called streaming) keeps the tape media progressing at its fastest speed. If streaming is interrupted due to congestion, the drive must stop and back up to the end of the last block transferred before it can accept new data. Stopping and repositioning takes time because of the effort required to gradually slow down the tape drive in an orderly fashion. (Rapid deceleration could damage the media; by stretching the tape, for example.)

Backing up to the last block endpoint and stopping again also involves a change of direction, acceleration and deceleration, a stop, a re-tension, and a position check. Some tape devices (depending on the drive) may also require that the tape be kept at a specific tension level. In this situation, if a data transfer stops, the drive must stop, and back up to the end, after which the tape will move forward and backwards slowly until the data transfer starts again. This method (often called shoe shining) is used to maintain the proper tension on the tape.

Another way that congestion can interfere with your backup is its effect on data compression. Most modern tape devices also have built-in hardware compression. The data is written to a buffer in the tape device, compressed, and then written to the physical media. Part of the compression mechanism involves using standard algorithms; however, part of the compression mechanism also combines the contents of the tape buffer into mega blocks to reduce inter-record gaps on the tape device.

Heavy congestion can also cause tape compression rates to drop, as the drives are unable to use completely full buffers when they create the mega blocks. Since the data is not optimally compressed, more write operations are required to get the data to the tape media. More write operations result in degraded backup performance. This can also mean more tape cartridges are required, with correspondingly longer recovery times because there are more tape cartridges to restore from.

If your backup environment is in a congestion situation, the congestion can be partially addressed by placing disk and tape on separate HBAs. The potential benefits of separate tape and disk HBAs (in congestion situations) include reduced production disk path contention and improved tape performance. If the number of HBAs on the server is limited, you can also employ PowerPath during the backups to manage the load across the adapters. PowerPath provides the ability to set an HBA into standby mode, which allows the HBA to be used if there is a failure, but not for disk traffic while in standby mode. When the backups are complete, you can set the HBA back to an active state; PowerPath then rebalances the disk traffic, using this HBA in its load-balancing calculations.

Limitations

Moving the Fibre Channel cables associated with the tape, or with the server communicating with the tape, to a different switch port while I/O is running will result in I/O failures and device lockout. If this occurs, you may be required to either power cycle the tape device or return the Fibre Channel cables to their original ports, and then manually release the tape device to return the system to working order. It will then be necessary to restart the backup job.

EMC supports only single-port usage of the STK 9840. Simultaneous use of both ports can result in device contention at server boot and when setting device reservations.


Glossary

This glossary contains terms related to EMC products and EMC networked storage concepts.

A

access control A service that allows or prohibits access to a resource. Storage management products implement access control to allow or prohibit specific users. Storage platform products implement access control, often called LUN Masking, to allow or prohibit access to volumes by Initiators (HBAs). See also "persistent binding" and "zoning."

active domain ID The domain ID actively being used by a switch. It is assigned to a switch by the principal switch.

active zone set The active zone set is the zone set definition currently in effect and enforced by the fabric or other entity (for example, the name server). Only one zone set at a time can be active.

agent An autonomous agent is a system situated within (and is part of) an environment that senses that environment, and acts on it over time in pursuit of its own agenda. Storage management software centralizes the control and monitoring of highly distributed storage infrastructure. The centralizing part of the software management system can depend on agents that are installed on the distributed parts of the infrastructure. For example, an agent (software component) can be installed on each of the hosts (servers) in an environment to allow the centralizing software to control and monitor the hosts.


alarm An SNMP message notifying an operator of a network problem.

any-to-any port connectivity A characteristic of a Fibre Channel switch that allows any port on the switch to communicate with any other port on the same switch.

application Application software is a defined subclass of computer software that employs the capabilities of a computer directly to a task that users want to perform. This is in contrast to system software that participates with integration of various capabilities of a computer, and typically does not directly apply these capabilities to performing tasks that benefit users. The term application refers to both the application software and its implementation which often refers to the use of an information processing system. (For example, a payroll application, an airline reservation application, or a network application.) Typically an application is installed “on top of” an operating system like Windows or LINUX, and contains a user interface.

application-specific integrated circuit (ASIC) A circuit designed for a specific purpose, such as implementing lower-layer Fibre Channel protocols (FC-1 and FC-0). ASICs contrast with general-purpose devices such as memory chips or microprocessors, which can be used in many different applications.

arbitration The process of selecting one respondent from a collection of several candidates that request service concurrently.

ASIC family Different switch hardware platforms that utilize the same port ASIC can be grouped into collections known as an ASIC family. For example, the Fuji ASIC family, which consists of the ED-64M and ED-140M, runs different microprocessors, but both platforms utilize the same port ASIC to provide Fibre Channel connectivity and are therefore in the same ASIC family. For interoperability concerns, it is useful to understand which ASIC family a switch belongs to.

ASCII ASCII (American Standard Code for Information Interchange), generally pronounced [aeski], is a character encoding based on the English alphabet. ASCII codes represent text in computers, communications equipment, and other devices that work with text. Most modern character encodings, which support many more characters, have a historical basis in ASCII.

audit log A log containing summaries of actions taken by a Connectrix Management software user that creates an audit trail of changes. Adding, modifying, or deleting user or product administration values creates a record in the audit log that includes the date and time.

authentication Verification of the identity of a process or person.

B

backpressure The effect on the environment leading up to the point of restriction. See "congestion."

BB_Credit See “buffer-to-buffer credit.”

beaconing Repeated transmission of a beacon light and message until an error is corrected or bypassed. Typically used by a piece of equipment when an individual Field Replaceable Unit (FRU) needs replacement. Beaconing helps the field engineer locate the specific defective component. Some equipment management software systems such as Connectrix Manager offer beaconing capability.

BER See “bit error rate.”

bidirectional In Fibre Channel, the capability to simultaneously communicate at maximum speeds in both directions over a link.

bit error rate Ratio of received bits that contain errors to total of all bits transmitted.

blade server A consolidation of independent servers and switch technology in the same chassis.

blocked port Devices communicating with a blocked port are prevented from logging in to the Fibre Channel switch containing the port or communicating with other devices attached to the switch. A blocked port continuously transmits the off-line sequence (OLS).

bridge A device that provides a translation service between two network segments utilizing different communication protocols. EMC supports and sells bridges that convert iSCSI storage commands from a NIC-attached server to Fibre Channel commands for a storage platform.

broadcast Sends a transmission to all ports in a network. Typically used in IP networks. Not typically used in Fibre Channel networks.


broadcast frames Data packet, also known as a broadcast packet, whose destination address specifies all computers on a network. See also “multicast.”

buffer Storage area for data in transit. Buffers compensate for differences in link speeds and link congestion between devices.

buffer-to-buffer credit The number of receive buffers allocated by a receiving FC_Port to a transmitting FC_Port. The value is negotiated between Fibre Channel ports during link initialization. Each time a port transmits a frame it decrements this credit value. Each time a port receives an R_Rdy frame it increments this credit value. If the credit value is decremented to zero, the transmitter stops sending any new frames until the receiver has transmitted an R_Rdy frame. Buffer-to-buffer credit is particularly important in SRDF and Mirror View distance extension solutions.
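The credit mechanics in this entry can be sketched as a toy model. This is illustrative only; in a real link the credit count is negotiated between the ports during initialization.

```python
# Toy model of Fibre Channel buffer-to-buffer flow control (illustrative):
# a transmitter decrements its credit per frame sent and must stop at
# zero until the receiver returns an R_RDY, which increments the credit.
class BBCreditLink:
    def __init__(self, credits: int):
        self.credits = credits  # value negotiated at link initialization

    def can_send(self) -> bool:
        return self.credits > 0

    def send_frame(self) -> None:
        if not self.can_send():
            raise RuntimeError("no credit: wait for an R_RDY")
        self.credits -= 1  # decremented on every frame transmitted

    def receive_r_rdy(self) -> None:
        self.credits += 1  # receiver has freed a buffer

link = BBCreditLink(2)
link.send_frame()
link.send_frame()
stalled = not link.can_send()  # credit exhausted: transmitter must wait
link.receive_r_rdy()
resumed = link.can_send()      # an R_RDY restores one credit
```

This stop-at-zero behavior is why a sufficient credit count matters over long distances, as in SRDF and MirrorView distance extension solutions.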

C

Call Home A product feature that allows the Connectrix service processor to automatically dial out to a support center and report system problems. The support center server accepts calls from the Connectrix service processor, logs reported events, and can notify one or more support center representatives. Telephone numbers and other information are configured through the Windows NT dial-up networking application. The Call Home function can be enabled and disabled through the Connectrix Product Manager.

channel With Open Systems, a channel is a point-to-point link that transports data from one point to another on the communication path, typically with high throughput and low latency that is generally required by storage systems. With Mainframe environments, a channel refers to the server-side of the server-storage communication path, analogous to the HBA in Open Systems.

Class 2 Fibre Channel class of service In Class 2 service, the fabric and destination N_Ports provide connectionless service with notification of delivery or nondelivery between the two N_Ports. Historically, Class 2 service is not widely used in Fibre Channel systems.

Class 3 Fibre Channel class of service Class 3 service provides a connectionless service without notification of delivery between N_Ports. (This is also known as datagram service.) The transmission and routing of Class 3 frames is the same as for Class 2 frames. Class 3 is the dominant class of communication used in Fibre Channel for moving data between servers and storage, and may be referred to as “Ship and pray.”

Class F Fibre Channel class of service Class F service is used for all switch-to-switch communication in a multiswitch fabric environment. It is nearly identical to Class 2 from a flow control point of view.

community A relationship between an SNMP agent and a set of SNMP managers that defines authentication, access control, and proxy characteristics.

community name A name that represents an SNMP community that the agent software recognizes as a valid source for SNMP requests. An SNMP management program that sends an SNMP request to an agent program must identify the request with a community name that the agent recognizes or the agent discards the message as an authentication failure. The agent counts these failures and reports the count to the manager program upon request, or sends an authentication failure trap message to the manager program.

community profile Information that specifies which management objects are available to what management domain or SNMP community name.

congestion Occurs at the point of restriction. See “backpressure.”

connectionless Non dedicated link. Typically used to describe a link between nodes that allows the switch to forward Class 2 or Class 3 frames as resources (ports) allow. Contrast with the dedicated bandwidth that is required in a Class 1 Fibre Channel Service point-to-point link.

Connectivity Unit A hardware component (possibly with associated software) that provides Fibre Channel connectivity across a fabric. Connectrix switches are examples of Connectivity Units. This is a term popularized by the Fibre Alliance MIB, sometimes abbreviated to connunit.

Connectrix management software The software application that implements the management user interface for all managed Fibre Channel products, typically the Connectrix-M product line. Connectrix Management software is a client/server application with the server running on the Connectrix service processor, and clients running remotely or on the service processor.


Connectrix service processor An optional 1U server shipped with the Connectrix-M product line to run the Connectrix Management server software and EMC remote support application software.

Control Unit In mainframe environments, a Control Unit controls access to storage. It is analogous to a Target in Open Systems environments.

core switch Occupies central locations within the interconnections of a fabric. Generally provides the primary data paths across the fabric and the direct connections to storage devices. Connectrix directors are typically installed as core switches, but may be located anywhere in the fabric.

credit A numeric value that relates to the number of available BB_Credits on a Fibre Channel port. See “buffer-to-buffer credit.”

D

DASD Direct Access Storage Device.

default Pertaining to an attribute, value, or option that is assumed when none is explicitly specified.

default zone A zone containing all attached devices that are not members of any active zone. Typically, the default zone is disabled in a Connectrix-M environment, which prevents newly installed servers and storage from communicating until they have been provisioned.

Dense Wavelength Division Multiplexing (DWDM) A process that carries different data channels at different wavelengths over one pair of fiber-optic links. A conventional fiber-optic system carries only one channel over a single wavelength traveling through a single fiber.

destination ID A field in a Fibre Channel header that specifies the destination address for a frame. The Fibre Channel header also contains a Source ID (SID). The FCID for a port contains both the SID and the DID.

device A piece of equipment, such as a server, switch or storage system.

dialog box A user interface element of a software product typically implemented as a pop-up window containing informational messages and fields for modification. Facilitates a dialog between the user and the application. Dialog box is often used interchangeably with window.


DID An acronym used to refer to either Domain ID or Destination ID. Because this ambiguity can create confusion, E-Lab recommends that this acronym be used for Domain ID; Destination ID can be abbreviated as FCID.

director An enterprise-class Fibre Channel switch, such as the Connectrix ED-140M, MDS 9509, or ED-48000B. Directors deliver high availability, failure ride-through, and repair under power to ensure maximum uptime for business-critical applications. Major assemblies, such as power supplies, fan modules, switch controller cards, switching elements, and port modules, are all hot-swappable.

The term director may also refer to a board-level module in the Symmetrix that provides the interface between host channels (through an associated adapter module in the Symmetrix) and Symmetrix disk devices. (This description is presented here only to clarify a term used in other EMC documents.)

DNS See “domain name service name.”

domain ID A byte-wide field in the three-byte Fibre Channel address that uniquely identifies a switch in a fabric. The three fields in an FCID are domain, area, and port. A distinct Domain ID is requested from the principal switch, which allocates one Domain ID to each switch in the fabric. A user may be able to set a Preferred ID, which can be requested of the principal switch, or set an Insistent Domain ID. If two switches insist on the same DID, one or both switches will segment from the fabric.
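The three one-byte fields of the 24-bit Fibre Channel address can be pulled apart with simple bit shifts; the helper below is a hypothetical illustration:

```python
def fcid_fields(fcid: int) -> dict:
    """Split a 24-bit Fibre Channel address into its three one-byte fields."""
    return {
        "domain": (fcid >> 16) & 0xFF,  # identifies the switch in the fabric
        "area":   (fcid >> 8) & 0xFF,
        "port":   fcid & 0xFF,
    }

print(fcid_fields(0x0A1B2C))  # {'domain': 10, 'area': 27, 'port': 44}
```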

domain name service name Host or node name for a system that is translated to an IP address through a name server. All DNS names have a host name component and, if fully qualified, a domain component, such as host1.abcd.com. In this example, host1 is the host name.

dual-attached host A host that has two (or more) connections to a set of devices.

E

E_D_TOV A time-out period within which each data frame in a Fibre Channel sequence must be transmitted. This avoids time-out errors at the destination Nx_Port and facilitates high-speed recovery from dropped frames. Typically this value is 2 seconds.


E_Port Expansion Port, a port type in a Fibre Channel switch that attaches to another E_Port on a second Fibre Channel switch, forming an interswitch link (ISL). This link typically conforms to the FC-SW standards developed by the T11 committee, but might not support heterogeneous interoperability.

edge switch Occupies the periphery of the fabric, generally providing the direct connections to host servers and management workstations. No two edge switches can be connected by interswitch links (ISLs). Connectrix departmental switches are typically installed as edge switches in a multiswitch fabric, but may be located anywhere in the fabric.

Embedded Web Server A management interface embedded in the switch's code that offers features similar to (but not as robust as) the Connectrix Manager and Product Manager.

error detect time out value Defines the time the switch waits for an expected response before declaring an error condition. The error detect time out value (E_D_TOV) can be set within a range of two-tenths of a second to one second using the Connectrix switch Product Manager.

error message An indication that an error has been detected. See also “information message” and “warning message.”

Ethernet A baseband LAN that allows multiple stations to access the transmission medium at will, without prior coordination, and that avoids or resolves contention.

event log A record of significant events that have occurred on a Connectrix switch, such as FRU failures, degraded operation, and port problems.

expansion port See “E_Port.”

explicit fabric login In order to join a fabric, an N_Port must log in to the fabric (an operation referred to as FLOGI). Typically this is an explicit operation performed by the N_Port communicating with the F_Port of the switch, and is called an explicit fabric login. Some legacy Fibre Channel ports do not perform explicit login, so switch vendors perform the login on behalf of such ports, creating an implicit login. Typically, logins are explicit.


F

FA Fibre Adapter, another name for a Symmetrix Fibre Channel director.

F_Port Fabric Port, a port type on a Fibre Channel switch. An F_Port attaches to an N_Port through a point-to-point full-duplex link connection. A G_Port automatically becomes an F_Port or an E_Port depending on the port initialization process.

fabric One or more switching devices that interconnect Fibre Channel N_Ports, and route Fibre Channel frames based on destination IDs in the frame headers. A fabric provides discovery, path provisioning, and state change management services for a Fibre Channel environment.

fabric element Any active switch or director in the fabric.

fabric login Process used by N_Ports to establish their operating parameters including class of service, speed, and buffer-to-buffer credit value.

fabric port A port type (F_Port) on a Fibre Channel switch that attaches to an N_Port through a point-to-point full-duplex link connection. An N_Port is typically a host (HBA) or a storage device like Symmetrix, VNX series, or CLARiiON.

fabric shortest path first (FSPF) A routing algorithm implemented by Fibre Channel switches in a fabric. The algorithm seeks to minimize the number of hops traversed as a Fibre Channel frame travels from its source to its destination.

fabric tree A hierarchical list in Connectrix Manager of all fabrics currently known to the Connectrix service processor. The tree includes all members of the fabrics, listed by WWN or nickname.

failover The process of detecting a failure on an active Connectrix switch FRU and the automatic transition of functions to a backup FRU.

fan-in/fan-out Term used to describe the server:storage ratio, where a graphic representation of a 1:n (fan-in) or n:1 (fan-out) logical topology looks like a hand-held fan, with the wide end toward n. By convention, fan-out refers to the number of server ports that share a single storage port; fan-out consolidates a large number of server ports onto a smaller number of storage ports. Fan-in refers to the number of storage ports that a single server port uses; fan-in enlarges the storage capacity available to a server. A fan-in or fan-out rate is often referred to as just the n part of the ratio; for example, a 16:1 fan-out is also called a fan-out rate of 16, in this case 16 server ports sharing a single storage port.

FCP See “Fibre Channel Protocol.”

FC-SW The Fibre Channel fabric standard. The standard is developed by the T11 organization whose documentation can be found at T11.org. EMC actively participates in T11. T11 is a committee within the InterNational Committee for Information Technology (INCITS).

fiber optics The branch of optical technology concerned with the transmission of radiant power through fibers made of transparent materials such as glass, fused silica, and plastic.

Either a single discrete fiber or a non spatially aligned fiber bundle can be used for each information channel. Such fibers are often called optical fibers to differentiate them from fibers used in non-communication applications.

fibre A general term used to cover all physical media types supported by the Fibre Channel specification, such as optical fiber, twisted pair, and coaxial cable.

Fibre Channel The general name of an integrated set of ANSI standards that define new protocols for flexible information transfer. Logically, Fibre Channel is a high-performance serial data channel.

Fibre Channel Protocol A standard Fibre Channel FC-4 level protocol used to run SCSI over Fibre Channel.

Fibre Channel switch modules The embedded switch modules in the backplane of a blade server. See “blade server.”

firmware The program code (embedded software) that resides and executes on a connectivity device, such as a Connectrix switch, a Symmetrix Fibre Channel director, or a host bus adapter (HBA).

frame A set of fields making up a unit of transmission. Each field is made up of bytes. The typical Fibre Channel frame consists of these fields: start-of-frame, header, data-field, CRC, and end-of-frame. The maximum frame size is 2148 bytes.
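The 2148-byte maximum follows from the standard field sizes (4-byte start-of-frame and end-of-frame delimiters, a 24-byte header, up to 2112 bytes of payload, and a 4-byte CRC); a quick check:

```python
# Byte counts for each Fibre Channel frame field, with a maximum-size data field.
FIELDS = {
    "start-of-frame": 4,
    "header": 24,
    "data-field": 2112,   # maximum payload
    "crc": 4,
    "end-of-frame": 4,
}

print(sum(FIELDS.values()))  # 2148, the maximum frame size quoted above
```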


frame header Control information placed before the data-field when encapsulating data for network transmission. The header provides the source and destination IDs of the frame.

FRU Field-replaceable unit, a hardware component that can be replaced as an entire unit. The Connectrix switch Product Manager can display status for the FRUs installed in the unit.

FSPF Fabric Shortest Path First, an algorithm used for routing traffic. Between the source and destination, only the paths with the fewest physical hops are used for frame delivery.
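FSPF itself is a link-state protocol with configurable link costs, but its least-hops behavior over equal-cost links can be approximated with a breadth-first search over the ISL graph. The sketch below is a simplification, and the fabric topology in it is made up:

```python
from collections import deque

def fewest_hops(links, src, dst):
    """Breadth-first search over ISLs: returns the minimum hop count between
    two switch domains, in the spirit of FSPF's least-hop path selection.
    Returns None if no path exists."""
    frontier, seen = deque([(src, 0)]), {src}
    while frontier:
        switch, hops = frontier.popleft()
        if switch == dst:
            return hops
        for neighbor in links.get(switch, []):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append((neighbor, hops + 1))
    return None

# Hypothetical four-switch fabric, keyed by domain ID.
fabric = {1: [2, 3], 2: [1, 4], 3: [1, 4], 4: [2, 3]}
print(fewest_hops(fabric, 1, 4))  # 2 (either 1-2-4 or 1-3-4)
```

Note that two equal-cost two-hop paths exist here; real FSPF implementations may load-balance across such equal-cost paths.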

G

gateway address In TCP/IP, a device that connects two systems that use the same or different protocols.

gigabyte (GB) A unit of measure for storage size, loosely one billion (10⁹) bytes. One gigabyte actually equals 1,073,741,824 bytes.

G_Port A port type on a Fibre Channel switch capable of acting either as an F_Port or an E_Port, depending on the port type at the other end of the link.

GUI Graphical user interface.

H

HBA See “host bus adapter.”

hexadecimal Pertaining to a numbering system with base of 16; valid numbers use the digits 0 through 9 and characters A through F (which represent the numbers 10 through 15).

high availability A performance feature characterized by hardware component redundancy and hot-swappability (enabling non-disruptive maintenance). High-availability systems maximize system uptime while providing superior reliability, availability, and serviceability.

hop A hop refers to the number of interswitch links (ISLs) a Fibre Channel frame must traverse to go from its source to its destination. Good design practice encourages three hops or less to minimize congestion and performance management complexities.

host bus adapter A bus card in a host system that allows the host system to connect to the storage system. Typically the HBA communicates with the host over a PCI or PCI Express bus and has a single Fibre Channel link to the fabric. The HBA contains an embedded microprocessor with on-board firmware, one or more ASICs, and a Small Form Factor Pluggable (SFP) module to connect to the Fibre Channel link.

I

I/O See “input/output.”

in-band management Transmission of monitoring and control functions over the Fibre Channel interface. You can also perform these functions out of band, typically by using Ethernet to manage Fibre Channel devices.

information message A message telling a user that a function is performing normally or has completed normally. User acknowledgement might or might not be required, depending on the message. See also “error message” and “warning message.”

input/output (1) Pertaining to a device whose parts can perform an input process and an output process at the same time. (2) Pertaining to a functional unit or channel involved in an input process, output process, or both (concurrently or not), and to the data involved in such a process. (3) Pertaining to input, output, or both.

interface (1) A shared boundary between two functional units, defined by functional characteristics, signal characteristics, or other characteristics as appropriate. The concept includes the specification of the connection of two devices having different functions. (2) Hardware, software, or both, that links systems, programs, or devices.

Internet Protocol See “IP.”

interoperability The ability to communicate, execute programs, or transfer data between various functional units over a network. Also refers to a Fibre Channel fabric that contains switches from more than one vendor.


interswitch link (ISL) Interswitch link, a physical E_Port connection between any two switches in a Fibre Channel fabric. An ISL forms a hop in a fabric.

IP Internet Protocol, the TCP/IP standard protocol that defines the datagram as the unit of information passed across an internet and provides the basis for connectionless, best-effort packet delivery service. IP includes the ICMP control and error message protocol as an integral part.

IP address A unique string of numbers that identifies a device on a network. The address consists of four groups (octets) of numbers delimited by periods. (This is called dotted-decimal notation.) All resources on the network must have an IP address. A valid IP address is in the form nnn.nnn.nnn.nnn, where each nnn is a decimal in the range 0 to 255.
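The nnn.nnn.nnn.nnn form described above can be checked mechanically; a minimal sketch:

```python
def is_valid_ipv4(address: str) -> bool:
    """Check the dotted-decimal form: four decimal fields, each 0-255."""
    parts = address.split(".")
    return (len(parts) == 4
            and all(p.isdigit() and int(p) <= 255 for p in parts))

print(is_valid_ipv4("192.168.0.1"))   # True
print(is_valid_ipv4("256.1.1.1"))     # False: field out of range
print(is_valid_ipv4("10.0.0"))        # False: only three fields
```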

ISL Interswitch link, a physical E_Port connection between any two switches in a Fibre Channel fabric.

K

kilobyte (K) A unit of measure for storage size, loosely one thousand bytes. One kilobyte actually equals 1,024 bytes.

L

laser A device that produces optical radiation using a population inversion to provide light amplification by stimulated emission of radiation and (generally) an optical resonant cavity to provide positive feedback. Laser radiation can be highly coherent temporally, spatially, or both.

LED Light-emitting diode.

link The physical connection between two devices on a switched fabric.

link incident A problem detected on a fiber-optic link; for example, loss of light, or invalid sequences.

load balancing The ability to distribute traffic over all network ports that are the same distance from the destination address by assigning different paths to different messages. Increases effective network bandwidth. EMC PowerPath software provides load-balancing services for server I/O.


logical volume A named unit of storage consisting of a logically contiguous set of disk sectors.

Logical Unit Number (LUN) A number, assigned to a storage volume, that (in combination with the storage device node's World Wide Port Name (WWPN)) represents a unique identifier for a logical volume on a storage area network.

M

MAC address Media Access Control address, the hardware address of a device connected to a shared network.

managed product A hardware product that can be managed using the Connectrix Product Manager. For example, a Connectrix switch is a managed product.

management session Exists when a user logs in to the Connectrix Management software and successfully connects to the product server. The user must specify the network address of the product server at login time.

media The disk surface on which data is stored.

media access control See “MAC address.”

megabyte (MB) A unit of measure for storage size, loosely one million (10⁶) bytes. One megabyte actually equals 1,048,576 bytes.
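The “loosely” qualifiers in the kilobyte, megabyte, and gigabyte entries reflect the gap between the decimal approximations and the binary values these entries actually quote; a quick check:

```python
# Binary values behind the glossary's "actually equals" figures.
kilobyte = 2**10
megabyte = 2**20
gigabyte = 2**30
print(kilobyte, megabyte, gigabyte)   # 1024 1048576 1073741824

# The loose decimal approximation diverges as the units grow:
# a binary gigabyte is about 7.4% larger than one billion bytes.
print(round(gigabyte / 10**9, 3))     # 1.074
```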

MIB Management Information Base, a related set of objects (variables) containing information about a managed device and accessed through SNMP from a network management station.

multicast Multicast is used when multiple copies of data are to be sent to designated, multiple, destinations.

multiswitch fabric Fibre Channel fabric created by linking more than one switch or director together to allow communication. See also “ISL.”

multiswitch linking Port-to-port connections between two switches.

N

name server (dNS) A service known as the distributed Name Server, provided by a Fibre Channel fabric, that provides device discovery, path provisioning, and state change notification services to the N_Ports in the fabric. The service is implemented in a distributed fashion; for example, each switch in a fabric participates in providing the service. The service is addressed by the N_Ports through a Well Known Address.

network address A name or address that identifies a managed product, such as a Connectrix switch, or a Connectrix service processor on a TCP/IP network. The network address can be either an IP address in dotted-decimal notation or a Domain Name Service (DNS) name as administered on a customer network. All DNS names have a host name component and (if fully qualified) a domain component, such as host1.emc.com. In this example, host1 is the host name and emc.com is the domain component.

nickname A user-defined name representing a specific WWxN, typically used in a Connectrix-M management environment. The analog in the Connectrix-B and MDS environments is alias.

node The point at which one or more functional units connect to the network.

N_Port Node Port, a Fibre Channel port implemented by an end device (node) that can attach to an F_Port or directly to another N_Port through a point-to-point link connection. HBAs and storage systems implement N_Ports that connect to the fabric.

NVRAM Nonvolatile random access memory.

O

offline sequence (OLS) The OLS Primitive Sequence is transmitted to indicate that the FC_Port transmitting the Sequence is:

a. initiating the Link Initialization Protocol

b. receiving and recognizing NOS

c. entering the offline state

OLS See “offline sequence (OLS)”.

operating mode Regulates what other types of switches can share a multiswitch fabric with the switch under consideration.


operating system Software that controls the execution of programs and that may provide such services as resource allocation, scheduling, input/output control, and data management. Although operating systems are predominantly software, partial hardware implementations are possible.

optical cable A fiber, multiple fibers, or a fiber bundle in a structure built to meet optical, mechanical, and environmental specifications.

OS See “operating system.”

out-of-band management Transmission of monitoring/control functions outside of the Fibre Channel interface, typically over Ethernet.

oversubscription The ratio of bandwidth required to bandwidth available. When all ports, associated pair-wise, in any random fashion, cannot sustain full duplex at full line-rate, the switch is oversubscribed.

P

parameter A characteristic element with a variable value that is given a constant value for a specified application. Also, a user-specified value for an item in a menu; a value that the system provides when a menu is interpreted; data passed between programs or procedures.

password (1) A value used in authentication or a value used to establish membership in a group having specific privileges. (2) A unique string of characters known to the computer system and to a user who must specify it to gain full or limited access to a system and to the information stored within it.

path In a network, any route between any two nodes.

persistent binding Use of server-level access control configuration information to persistently bind a server device name to a specific Fibre Channel storage volume or logical unit number, through a specific HBA and storage port WWN. The address of a persistently bound device does not shift if a storage target fails to recover during a power cycle. This function is the responsibility of the HBA device driver.

port (1) An access point for data entry or exit. (2) A receptacle on a device to which a cable for another device is attached.


port card Field replaceable hardware component that provides the connection for fiber cables and performs specific device-dependent logic functions.

port name A symbolic name that the user defines for a particular port through the Product Manager.

preferred domain ID An ID configured by the fabric administrator. During the fabric build process a switch requests permission from the principal switch to use its preferred domain ID. The principal switch can deny this request by providing an alternate domain ID only if there is a conflict for the requested Domain ID. Typically a principal switch grants the non-principal switch its requested Preferred Domain ID.

principal switch In a multiswitch fabric, the switch that allocates domain IDs to itself and to all other switches in the fabric. There is always one principal switch in a fabric. If a switch is not connected to any other switches, it acts as its own principal switch.

principal downstream ISL The ISL to which each switch forwards frames originating from the principal switch.

principal ISL The ISL that frames destined to, or coming from, the principal switch in the fabric will use. An example is an RDI frame.

principal upstream ISL The ISL to which each switch forwards frames destined for the principal switch. The principal switch does not have any upstream ISLs.

product (1) Connectivity Product, a generic name for a switch, director, or any other Fibre Channel product. (2) Managed Product, a generic hardware product that can be managed by the Product Manager (a Connectrix switch is a managed product). Note distinction from the definition for “device.”

Product Manager A software component of Connectrix Manager software such as a Connectrix switch product manager, that implements the management user interface for a specific product. When a product instance is opened from the Connectrix Manager software products view, the corresponding product manager is invoked. The product manager is also known as an Element Manager.


product name A user configurable identifier assigned to a Managed Product. Typically, this name is stored on the product itself. For a Connectrix switch, the Product Name can also be accessed by an SNMP Manager as the System Name. The Product Name should align with the host name component of a Network Address.

products view The top-level display in the Connectrix Management software user interface that displays icons of Managed Products.

protocol (1) A set of semantic and syntactic rules that determines the behavior of functional units in achieving communication. (2) A specification for the format and relative timing of information exchanged between communicating parties.

R

R_A_TOV See “resource allocation time out value.”

remote access link The ability to communicate with a data processing facility through a remote data link.

remote notification The system can be programmed to notify remote sites of certain classes of events.

remote user workstation A workstation, such as a PC, using Connectrix Management software and Product Manager software that can access the Connectrix service processor over a LAN connection. A user at a remote workstation can perform all of the management and monitoring tasks available to a local user on the Connectrix service processor.

resource allocation time out value A value used to time out operations that depend on a maximum time that an exchange can be delayed in a fabric and still be delivered. The resource allocation time-out value (R_A_TOV) can be set within a range of two-tenths of a second to 120 seconds using the Connectrix switch Product Manager. The typical value is 10 seconds.

S

SAN See “storage area network (SAN).”

segmentation A non-connection between two switches. Numerous reasons exist for an operational ISL to segment, including interop mode incompatibility, zoning conflicts, and domain overlaps.


segmented E_Port E_Port that has ceased to function as an E_Port within a multiswitch fabric due to an incompatibility between the fabrics that it joins.

service processor See “Connectrix service processor.”

session See “management session.”

single attached host A host that only has a single connection to a set of devices.

small form factor pluggable (SFP) An optical module implementing a shortwave or longwave optical transceiver.

SMTP Simple Mail Transfer Protocol, a TCP/IP protocol that allows users to create, send, and receive text messages. SMTP protocols specify how messages are passed across a link from one system to another. They do not specify how the mail application accepts, presents or stores the mail.

SNMP Simple Network Management Protocol, a TCP/IP protocol that generally uses the User Datagram Protocol (UDP) to exchange messages between a management information base (MIB) and a management client residing on a network.

storage area network (SAN) A network linking servers or workstations to disk arrays, tape backup systems, and other devices, typically over Fibre Channel and consisting of multiple fabrics.

subnet mask Used by a computer to determine whether another computer with which it needs to communicate is located on a local or remote network. The network mask depends upon the class of networks to which the computer is connecting. The mask indicates which digits to look at in a longer network address and allows the router to avoid handling the entire address. Subnet masking allows routers to move the packets more quickly. Typically, a subnet may represent all the machines at one geographic location, in one building, or on the same local area network.
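The local-versus-remote decision described above can be reproduced with Python's standard `ipaddress` module: apply the mask to both addresses and compare network numbers. The hosts and mask below are examples only:

```python
import ipaddress

def same_subnet(host_a: str, host_b: str, mask: str) -> bool:
    """Apply the subnet mask to both addresses; equal network numbers mean
    the peer is local, so no router (gateway) is needed to reach it."""
    network_a = ipaddress.ip_network(f"{host_a}/{mask}", strict=False)
    return ipaddress.ip_address(host_b) in network_a

print(same_subnet("192.168.1.10", "192.168.1.77", "255.255.255.0"))  # True
print(same_subnet("192.168.1.10", "192.168.2.77", "255.255.255.0"))  # False
```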

switch priority Value configured into each switch in a fabric that determines its relative likelihood of becoming the fabric’s principal switch.


T

TCP/IP Transmission Control Protocol/Internet Protocol. TCP/IP refers to the protocols that are used on the Internet and most computer networks. TCP refers to the Transport layer, which provides flow control and connection services. IP refers to the Internet Protocol level, where addressing and routing are implemented.

toggle To change the state of a feature/function that has only two states. For example, if a feature/function is enabled, toggling changes the state to disabled.

topology Logical and/or physical arrangement of switches on a network.

trap An asynchronous (unsolicited) notification of an event originating on an SNMP-managed device and directed to a centralized SNMP Network Management Station.

U

unblocked port Devices communicating with an unblocked port can log in to a Connectrix switch or a similar product and communicate with devices attached to any other unblocked port if the devices are in the same zone.

Unicast Unicast routing provides one or more optimal paths between any two switches that make up the fabric. (Unicast is used to send a single copy of the data to a designated destination.)

upper layer protocol (ULP) The protocol user of FC-4, including IPI, SCSI, IP, and SBCCS. In a device driver, ULP typically refers to the operations that are managed by the class level of the driver, not the port level.

URL Uniform Resource Locator, the addressing system used by the World Wide Web. A URL describes the location of a file or server anywhere on the Internet.

V

virtual switch A Fibre Channel switch function that allows users to subdivide a physical switch into multiple virtual switches. Each virtual switch consists of a subset of ports on the physical switch, and has all the properties of a Fibre Channel switch. Multiple virtual switches can be connected through ISLs to form a virtual fabric or VSAN.


virtual storage area network (VSAN) An allocation of switch ports that can span multiple physical switches and forms a virtual fabric. A single physical switch can sometimes host more than one VSAN.

volume A general term referring to an addressable logically contiguous storage space providing block IO services.

VSAN Virtual Storage Area Network.

W

warning message An indication that a possible error has been detected. See also “error message” and “information message.”

World Wide Name (WWN) A unique identifier, even on global networks. The WWN is a 64-bit number (XX:XX:XX:XX:XX:XX:XX:XX) that contains an OUI, which uniquely identifies the equipment manufacturer. OUIs are administered by the Institute of Electrical and Electronics Engineers (IEEE). The Fibre Channel environment uses two types of WWNs: a World Wide Node Name (WWNN) and a World Wide Port Name (WWPN). Typically the WWPN is used for zoning (a path provisioning function).
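As a small illustration of the OUI embedded in a WWN: in NAA format 1 WWNs (those beginning `10:00:...`), the first two bytes are the format/type field and the next three bytes are the IEEE-assigned OUI. Other NAA formats place the OUI differently, so the sketch below handles only that one layout, and the sample WWN is hypothetical.

```python
def wwn_oui(wwn: str) -> str:
    """Extract the OUI from an NAA format 1 WWN (e.g. 10:00:xx:...).

    Bytes 0-1 hold the NAA/type field; bytes 2-4 hold the
    manufacturer's IEEE-assigned OUI. Only format 1 is handled here.
    """
    octets = wwn.lower().split(":")
    if len(octets) != 8:
        raise ValueError("a WWN is eight colon-delimited bytes")
    return ":".join(octets[2:5])

print(wwn_oui("10:00:00:00:c9:12:34:56"))  # hypothetical WWPN
```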

Z

zone An information object implemented by the distributed Nameserver (dNS) of a Fibre Channel switch. A zone contains a set of members that are permitted to discover and communicate with one another. The members can be identified by a WWPN or port ID. EMC recommends the use of WWPNs in zone management.

zone set An information object implemented by the distributed Nameserver (dNS) of a Fibre Channel switch. A zone set contains a set of zones. A zone set is activated against a fabric, and only one zone set can be active in a fabric at a time.

zonie A storage administrator who spends a large percentage of his workday zoning a Fibre Channel network and provisioning storage.

zoning Zoning allows an administrator to group several devices by function or by location. All devices connected to a connectivity product, such as a Connectrix switch, may be configured into one or more zones.
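The zone, zone set, and zoning entries above fit together as a simple membership rule: two ports may discover each other only if some zone in the active zone set contains both. A minimal in-memory sketch, with hypothetical zone names and WWPNs:

```python
# Hypothetical zones, each a set of member WWPNs.
zones = {
    "backup_zone": {"10:00:00:00:c9:aa:bb:01", "50:06:01:60:12:34:56:78"},
    "prod_zone":   {"10:00:00:00:c9:aa:bb:02", "50:06:01:61:12:34:56:78"},
}
# Only one zone set can be active in a fabric at a time.
active_zone_set = {"backup_zone", "prod_zone"}

def can_communicate(wwpn_a: str, wwpn_b: str) -> bool:
    """True if some zone in the active zone set contains both WWPNs."""
    return any(
        wwpn_a in zones[z] and wwpn_b in zones[z]
        for z in active_zone_set
    )

print(can_communicate("10:00:00:00:c9:aa:bb:01",
                      "50:06:01:60:12:34:56:78"))  # same zone: True
print(can_communicate("10:00:00:00:c9:aa:bb:01",
                      "50:06:01:61:12:34:56:78"))  # no shared zone: False
```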
