NAS Foundations 2005


7/28/2019 — NAS Foundations 2005 (67 slides)

Copyright 2005 EMC Corporation. Do not Copy - All Rights Reserved.

NAS Foundations

    Welcome to NAS Foundations.

The AUDIO portion of this course is supplemental to the material and is not a replacement for the student notes accompanying this course. EMC recommends downloading the Student Resource Guide from the Supporting Materials tab, and reading the notes in their entirety.

    Copyright 2005 EMC Corporation. All rights reserved.

    These materials may not be copied without EMC's written consent.

EMC believes the information in this publication is accurate as of its publication date. The information is subject to change without notice.

THE INFORMATION IN THIS PUBLICATION IS PROVIDED AS IS. EMC CORPORATION MAKES NO REPRESENTATIONS OR WARRANTIES OF ANY KIND WITH RESPECT TO THE INFORMATION IN THIS PUBLICATION, AND SPECIFICALLY DISCLAIMS IMPLIED WARRANTIES OF MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.

    Use, copying, and distribution of any EMC software described in this publication requires an applicable software license.

Celerra, CLARalert, CLARiiON, Connectrix, Dantz, Documentum, EMC, EMC2, HighRoad, Legato, Navisphere, PowerPath, ResourcePak, SnapView/IP, SRDF, Symmetrix, TimeFinder, VisualSAN, and "where information lives" are registered trademarks of EMC Corporation.

Access Logix, AutoAdvice, Automated Resource Manager, AutoSwap, AVALONidm, C-Clip, Celerra Replicator, Centera, CentraStar, CLARevent, CopyCross, CopyPoint, DatabaseXtender, Direct Matrix, Direct Matrix Architecture, EDM, E-Lab, EMC Automated Networked Storage, EMC ControlCenter, EMC Developers Program, EMC OnCourse, EMC Proven, EMC Snap, Enginuity, FarPoint, FLARE, GeoSpan, InfoMover, MirrorView, NetWin, OnAlert, OpenScale, Powerlink, PowerVolume, RepliCare, SafeLine, SAN Architect, SAN Copy, SAN Manager, SDMS, SnapSure, SnapView, StorageScope, SupportMate, SymmAPI, SymmEnabler, Symmetrix DMX, Universal Data Tone, and VisualSRM are trademarks of EMC Corporation. All other trademarks used herein are the property of their respective owners.

NAS Foundations

Upon completion of this course, you will be able to:

• Identify the concepts and value of Network Attached Storage
• List environmental aspects of NAS
• Identify the EMC NAS platforms and their differences
• Identify and describe key Celerra software features
• Identify and describe the Celerra management software offerings
• Identify and describe key Windows-specific options with respect to EMC NAS environments
• Identify and describe NAS Business Continuity and Replication options with respect to the various EMC NAS platforms
• Identify and describe key NAS backup and recovery options

    These are the learning objectives for this training. Please take a moment to read them.


    Network Attached Storage

    NAS OVERVIEW

Let's start by looking at an overview of Network Attached Storage (NAS).


What Is Network-Attached Storage?

• Built on the concept of shared storage on a Local Area Network
• Leverages the benefits of a network file server and network storage
• Utilizes industry-standard network and file sharing protocols

File Server + Network-attached storage = NAS

[Diagram: Windows and UNIX client applications accessing a NAS device over the network]

The benefit of NAS is that it now brings the advantages of networked storage to the desktop through file-level sharing of data via a dedicated device.

NAS is network-centric. Typically used for client storage consolidation on a LAN, NAS is a preferred storage capacity solution for enabling clients to access files quickly and directly. This eliminates the bottlenecks users often encounter when accessing files from a general-purpose server.

NAS provides security and performs all file and storage services through standard network protocols, using TCP/IP for data transfer, Ethernet and Gigabit Ethernet for media access, and CIFS, HTTP, FTP, and NFS for remote file service. In addition, NAS can serve both UNIX and Microsoft Windows users seamlessly, sharing the same data between the different architectures. For client users, NAS is the technology of choice for providing storage with unencumbered access to files.

Although NAS trades some performance for manageability and simplicity, it is by no means a lazy technology. Gigabit Ethernet allows NAS to scale to high performance and low latency, making it possible to support a myriad of clients through a single interface. Many NAS devices support multiple interfaces and can support multiple networks at the same time.


    Why NAS?

• Highest availability
• Scales for growth
• Avoids file replication
• Increases flexibility
• Reduces complexity
• Improves security
• Reduces costs

[Diagram: NAS in a data center serving web servers S1 through Sn behind a firewall, accessed from both the Internet and the internal network]

Shared applications can now achieve the availability and scalability benefits of networked storage. Centralizing file storage reduces system complexity and system administration costs. Backup, restore, and disaster recovery can be simplified.


    NAS Operations

• All I/O operations use file-level I/O protocols: no awareness of disk volumes or disk sectors
• File system is mounted remotely using a network file access protocol, such as Network File System (NFS) or Common Internet File System (CIFS)
• I/O is redirected to the remote system
• Utilizes mature data transport (e.g., TCP/IP) and media access protocols
• NAS device assumes responsibility for organizing data (R/W) on disk and managing cache

[Diagram: an application host reaching a NAS device over an IP network; the NAS device's disk storage is either direct-attached or reached through a SAN]

One of the key differences of a NAS disk device, compared to DAS or other networked storage solutions such as SAN, is that all I/O operations use file-level I/O protocols. File I/O is a high-level type of request that, in essence, specifies only the file to be accessed, but does not directly address the storage device. This is done later by other operating system functions in the remote NAS appliance.

A file I/O specifies the file. It also indicates an offset into the file. For instance, the I/O may specify "Go to byte 1000 in the file (as if the file were a set of contiguous bytes), and read the next 256 bytes beginning at that position."
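The offset-based request described above can be sketched in a few lines of Python. This is a hypothetical illustration of the file-I/O model, not actual NAS client code; an in-memory buffer stands in for the remote file:

```python
import io

# Build a 2048-byte stand-in for a remote file (as if it were a set of
# contiguous bytes).
data = bytes(range(256)) * 8
f = io.BytesIO(data)

# The file-level request: go to byte 1000, read the next 256 bytes.
f.seek(1000)
chunk = f.read(256)

print(len(chunk))               # 256 bytes returned
print(chunk == data[1000:1256]) # True: the read starts at the requested offset
```

Note that nothing here names a disk volume or sector; that translation happens inside the NAS appliance.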

Unlike block I/O, there is no awareness of a disk volume or disk sector in a file I/O request. Inside the NAS appliance, the operating system keeps track of where files are located on disk. The OS issues a block I/O request to the disks to fulfill the file I/O read and write requests it receives.

The disk resources can be either directly attached to the NAS device, or they can be attached using a SAN.


    NAS Architecture

• NFS and CIFS handle file requests to the remote file system
• I/O is encapsulated by the TCP/IP stack to move over the network
• The NAS device converts requests to block I/O and reads or writes data to NAS disk storage

[Diagram: on the client, an application's remote I/O request passes through the operating system's I/O redirector, NFS/CIFS, the TCP/IP stack, and the network interface; on the NAS device, it passes through the network interface, the TCP/IP stack, and the network file protocol handler into the NAS operating system, then to storage over a drive protocol (SCSI) or a storage network protocol (Fibre Channel)]

The Network File System (NFS) protocol and Common Internet File System (CIFS) protocol handle file I/O requests to the remote file system, which is located in the NAS device. I/O requests are packaged by the initiator into the TCP/IP protocols to move across the IP network. The remote NAS file system converts the request to block I/O, and reads or writes the data to the NAS disk storage. To return data to the requesting client application, the NAS appliance software re-packages the data to move it back across the network.

Here, we see an example of an I/O being directed to the remote NAS device, and the different protocols that play a part in moving the request back and forth, to the remote file system located on the NAS server.


    NAS Device

• A single-purpose machine or component that serves as a dedicated, high-performance, high-speed communicator of file data
• Is sometimes called a filer or a network appliance
• Uses one or more Network Interface Cards (NICs) to connect to the customer network
• Uses a proprietary, optimized operating system; DART (Data Access in Real Time) is EMC's NAS operating system
• Uses industry-standard storage protocols to connect to storage resources

[Diagram: a client application reaching the NAS device over an IP network; inside the NAS device, network drivers and protocols, NFS and CIFS, the NAS device OS (DART), and storage drivers and protocols sit between the network and the disk storage]

A NAS server is not a general-purpose computer; its OS is significantly streamlined and tuned in comparison to a general-purpose computer's. It is sometimes called a filer because it focuses all of its processing power solely on file service and file storage. The NAS device is sometimes called a network appliance, referring to the plug-and-play design of many NAS devices. Common network interface cards (NICs) include Gigabit Ethernet (1000 Mb/s), Fast Ethernet (100 Mb/s), ATM, and FDDI. Some NAS devices also support NDMP, Novell NetWare, and HTTP protocols.

The NAS operating system for Network Appliance products is called Data ONTAP. The NAS operating system for EMC Celerra is DART (Data Access in Real Time). These operating systems are tuned to perform file operations including open, close, read, write, etc.

The NAS device will generally use a standard drive protocol to manage data to and from the disk resources.


    NAS Applications

• CAD/CAM environments, where widely dispersed engineers have to share and modify design drawings
• Serving Web pages to thousands of workstations at the same time
• Easily sharing company-wide information among employees
• Database applications with:
  - Low transaction rate
  - Low data volatility
  - Smaller size
  - No performance constraints

Database applications have traditionally been implemented in a SAN architecture. The primary reason is the deterministic performance of a SAN. This characteristic is especially applicable for very large, on-line transactional applications with high transaction rates and high data volatility.

However, NAS might be appropriate where the database transaction rate is low and performance is not constrained. Extensive application profiling should be done in order to understand the specific database application requirement and if, in fact, a NAS solution would be appropriate.

When considering a NAS solution, the databases should:

• be sequentially accessed, non-indexed, or have a flat file structure
• have a low transaction rate
• have low data volatility
• be relatively small
• not have performance/timing constraints


    NAS Components and Networking Infrastructure

    AN INTRODUCTION

    This section will introduce NAS components and networking infrastructures.


    What is a Network?

• LAN
• Physical Media
• WAN

[Diagram: two sites, each with its own LAN, connected over a wide area link]

A network is any collection of independent computers that communicate with one another over a shared network medium. LANs are networks usually confined to a geographic area, such as a single building or a college campus. LANs can be small, linking as few as three computers, but often link hundreds of computers used by thousands of people.

Physical Media

An important part of designing and installing a network is selecting the appropriate medium. There are several types in use today: Ethernet, Fiber Distributed Data Interface (FDDI), Asynchronous Transfer Mode (ATM), and Token Ring.

Ethernet is popular because it strikes a good balance between speed, cost, and ease of installation. These benefits, combined with wide acceptance in the computer marketplace and the ability to support virtually all popular network protocols, make Ethernet an ideal networking technology for most computer users today.

WAN

Wide area networking combines multiple LANs that are geographically separated. This is accomplished by connecting the different LANs using services such as dedicated leased phone lines, dial-up phone lines (both synchronous and asynchronous), satellite links, and data packet carrier services. Wide area networking can be as simple as a modem and remote access server for employees to dial into, or it can be as complex as hundreds of branch offices globally linked, using special routing protocols and filters to minimize the expense of sending data over vast distances.


    Physical Components

• Network Interface Card (NIC)
• Switches
• Routers

[Diagram: hosts with NICs attached to two switches, one on subnet 155.10.10.XX and one on 155.10.20.XX, connected by a router]

Network Interface Card

A network topology is the geometric arrangement of nodes and cable links in a LAN, and is used in two general configurations: bus and star. Network interface cards, commonly referred to as NICs, are used to connect a host, server, workstation, PC, etc. to a network. The NIC provides a physical connection between the networking cable and the computer's internal bus. The rate at which data passes back and forth can differ.

Switches

LAN switches can link multiple network connections together. Today's switches will accept and analyze the entire packet of data to catch certain packet errors, and keep them from propagating through the network, before forwarding it to its destination. Each of the segments attached to an Ethernet switch has the full bandwidth of the switch (10 Mb, 100 Mb, or 1 Gigabit).

Routers

Routers pass traffic between networks. Routers also divide networks logically instead of physically. An IP router can divide a network into various subnets, so that only traffic destined for particular IP addresses can pass between segments.
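The subnet decision a router makes can be illustrated with Python's `ipaddress` module, using the 155.10.10.XX and 155.10.20.XX networks from the diagram (the /24 masks are an assumption for illustration):

```python
import ipaddress

net_a = ipaddress.ip_network("155.10.10.0/24")
net_b = ipaddress.ip_network("155.10.20.0/24")

peter = ipaddress.ip_address("155.10.10.13")
account1 = ipaddress.ip_address("155.10.20.11")

# Hosts on the same subnet can reach each other through the switch alone;
# traffic between subnets must pass through the router.
print(peter in net_a)      # True
print(account1 in net_a)   # False: must be routed
print(account1 in net_b)   # True
```

A router holding interfaces on both networks forwards only the traffic whose destination address falls in the other subnet.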


    Network Protocols

• Network transport protocols
• Network filesystem protocols

[Diagram: the same switched and routed topology, subnets 155.10.10.XX and 155.10.20.XX]

Network transport protocols are standards that allow computers to communicate. A protocol defines how computers identify one another on a network, the form that the data should take in transit, and how this information is processed once it reaches its final destination. Protocols also define procedures for handling lost or damaged transmissions, or "packets".

Network transport protocols are used to manage the movement of data packets between devices communicating across the network. UDP and TCP are examples of transport protocols. UDP is used in non-connection-oriented networks, while TCP is used to manage the movement of data packets in connection-oriented networks.

In a non-connection-oriented communication model, the data is sent out to a recipient using a best-effort approach, with no acknowledgement of the receipt of the data being sent back to the originator by the recipient. Error correction and resend must be controlled by a higher-layer application to ensure data integrity.

In a connection-oriented model, all data packets sent by an originator are acknowledged by the recipient, and transmission errors/lost data packets are managed at the protocol layer.

TCP/IP (for UNIX, Windows NT, Windows 95, and other platforms), IPX (for Novell NetWare), DECnet (for networking Digital Equipment Corp. computers), AppleTalk (for Macintosh computers), and NetBIOS/NetBEUI (for LAN Manager and Windows NT networks) are examples of network transport protocols in use today.

Network filesystem protocols are used to manage how a data request will be processed once it reaches its final destination. NFS, the Network File System protocol, is used to manage file access in a networked UNIX environment; it is supported by both UDP and TCP transport protocols. CIFS, the Common Internet File System protocol, is used to manage file access in a networked Windows environment, and it is supported by both UDP and TCP transport protocols.
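The connectionless/connection-oriented distinction can be seen directly in the socket API. The sketch below, run over the loopback interface, sends one UDP datagram (no connection, no transport-layer acknowledgement visible to the application) and one TCP message (a connection is established first, and the protocol layer handles acknowledgement and retransmission). Loopback delivery makes the UDP receive reliable here, which it is not in general:

```python
import socket

# UDP: connectionless, best effort.
receiver = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
receiver.bind(("127.0.0.1", 0))          # let the OS pick a free port
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"best effort", receiver.getsockname())
udp_msg, _ = receiver.recvfrom(1024)     # delivery is not guaranteed in general

# TCP: connection-oriented; the connection is set up before data moves.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
client = socket.create_connection(listener.getsockname())
server_side, _ = listener.accept()
client.sendall(b"reliable")
tcp_msg = server_side.recv(1024)

print(udp_msg, tcp_msg)
for s in (receiver, sender, listener, client, server_side):
    s.close()
```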


    Network Addressing

• IP Addressing
• DHCP
• DNS

[Diagram: hosts Peter (155.10.10.13) and Mary (155.10.10.14), a DHCP server (155.10.10.12), and a DNS server (155.10.10.11) on subnet 155.10.10.XX, connected through a switch and router to the NAS server Account1 (155.10.20.11) on subnet 155.10.20.XX]

Several things must happen in order for computers attached to a network to be able to communicate data across the network. First, the computer must have a unique network address, referred to as the IP address. It is a four-octet number in the commonly used IP version 4, for example 155.10.20.11, that uniquely identifies this computer to all other computers connected to the network.

An address can be assigned in one of two ways: dynamically or statically. A static address requires entering the IP address that the computer will use in a local file. This can be quite a problem from an administrative view, as well as a source of conflict. If two computers on the same subnet are assigned the same IP address, they would not be able to communicate. Another approach is to set up a computer on the network to dynamically assign an IP address to a host when it joins the network. This is called the Dynamic Host Configuration Protocol (DHCP server). In our example, the host Mary is assigned the IP address 155.10.10.14, and the host Peter is assigned the IP address 155.10.10.13 by the DHCP server. The NAS device, Account1, is a file server. Servers normally have a statically assigned IP address. In this example, it has the IP address 155.10.20.11.

A second requirement for communications is to know the address of the recipient of the communication. The more common approach is to communicate by name, as, for example, the name you place on a letter. However, the network uses numerical addresses. IP addresses can be managed in three ways. The first approach is to enter the IP address into the application (the IP address in place of www.x.com in your browser). The second is to maintain a local file with host names and associated IP addresses. The third is a hierarchical database called Domain Name Service (DNS), which resolves host names to IP addresses. In our example, if someone on host Mary wants to talk to host Peter, it is the DNS server that resolves Peter to 155.10.10.13.
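The second approach, a local hosts-style file, can be mimicked with a small table (the entries mirror the slide's example; a real client would fall back to a DNS query, e.g. via `socket.gethostbyname`, when the local table has no entry):

```python
# A local hosts-style table mapping names to IP addresses.
hosts = {
    "Peter": "155.10.10.13",
    "Mary": "155.10.10.14",
    "Account1": "155.10.20.11",
}

def resolve(name: str) -> str:
    """Resolve a host name to an IP address from the local table."""
    try:
        return hosts[name]
    except KeyError:
        # A real resolver would now query DNS (the third approach).
        raise LookupError(f"no address known for {name}")

print(resolve("Peter"))     # 155.10.10.13
print(resolve("Account1"))  # 155.10.20.11
```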


    Volume and Files

• Create volumes
• Create network filesystem

[Diagram: the array presents storage to the NAS device Account1 (155.10.20.11), on which the /Acct_Rep file system is created]

Create Array Volume

The first step in a network attached storage environment is to create logical volumes on the array and assign each a LUN identifier. The LUN will then be presented to the NAS device.

Create NAS Volume

The NAS device will perform a discovery operation when it first starts, or when directed. In the discovery operation, the NAS device will see the array LUN as a physical drive. The next task is to create logical volumes at the NAS device level. The Celerra will create meta volumes using the volume resources presented by the array.

Create Network File System

When the logical volumes are created on the Celerra, it can use them to create a file system. In this example, we have created a file system /Acct_Rep on the NAS server Account1.

Mount File System

Once the file system has been created, it must be mounted. With the file system mounted, we can then move to the next step, which is publishing the file system on the network.


    Publish

• Export
• Share

[Diagram: UNIX user Peter accesses the export and Windows user Mary accesses the share of ACCOUNT1:/Acct_Rep; both users are members of the group SALES]

Now that a network file system has been created, there are two ways it can be accessed using the network.

The first method is through the UNIX environment. This is accomplished by performing an export. The export publishes to those UNIX clients who can mount (access) the remote file system. The export is published using NFS. Access permissions are assigned when the export is published.

The second method is through the Windows environment. This is accomplished by publishing a share. The share publishes to those Windows clients who map a drive to access the remote file system. The share is published using CIFS. Access permissions are assigned when the share is published.

In our example, we may only allow Mary and Peter, who are in the Sales organization, share or export access. At this level, NFS and CIFS are performing the same function, but are used in different environments. In our example, all members of the group SALES, which includes the users Mary and Peter, are granted access to /Acct_Rep.
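The two publishing paths can be sketched as command strings. Both helper functions are hypothetical, and the exact export and share syntax varies by platform; the point is only that the same information (path, group, permissions) is published through two different mechanisms:

```python
def nfs_export_entry(path: str, group: str) -> str:
    """An /etc/exports-style line granting a netgroup read/write access (illustrative)."""
    return f"{path} @{group}(rw)"

def cifs_share_command(share: str, path: str, group: str) -> str:
    """A Windows-style 'net share' command granting a group access (illustrative)."""
    return f"net share {share}={path} /GRANT:{group},FULL"

# Publishing /Acct_Rep to the SALES group both ways:
print(nfs_export_entry("/Acct_Rep", "SALES"))
print(cifs_share_command("Acct_Rep", "/Acct_Rep", "SALES"))
```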


    Client Access

• Mount
• Map

[Diagram: UNIX user Peter uses nfsmount, and Windows user Mary maps a drive, to reach ACCOUNT1:/Acct_Rep; both are members of the group SALES]

To access the network file system, the client must either mount a directory or map a drive pointing to the remote file system.

Mount is a UNIX command performed by a UNIX client to set a local directory pointer to the remote file system. The mount command uses the NFS protocol to mount the export locally.

For a UNIX client to perform this task, it will execute the nfsmount command. The format for the command is:

• nfsmount <name of the NAS server>:<name of the remote file system> <name of the local directory>

For example:

• nfsmount Account1:/Acct_Rep /localAcct_Rep

For a Windows client to perform this task, it will execute a map network drive. The sequence is My Computer > Tools > Map Network Drive. Select the drive letter and provide the server name and share name in the Folder field.

For example:

• G:
• \\Account1\Acct_Rep

If you make a comparison, the same information is provided: the local drive (Windows) or the local directory, the name of the NAS server, and the name of the export or share.
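That correspondence, server name plus export or share name on both sides, can be made explicit by parsing the two address forms (hypothetical helpers for illustration only):

```python
def parse_nfs_target(target: str):
    """Split a UNIX-style 'server:/export' mount target into its parts."""
    server, export = target.split(":", 1)
    return server, export.lstrip("/")

def parse_unc_path(path: str):
    """Split a Windows-style UNC path (two leading backslashes) into its parts."""
    server, share = path.lstrip("\\").split("\\", 1)
    return server, share

# Both forms carry the same server and filesystem name:
print(parse_nfs_target("Account1:/Acct_Rep"))   # ('Account1', 'Acct_Rep')
print(parse_unc_path(r"\\Account1\Acct_Rep"))   # ('Account1', 'Acct_Rep')
```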


    File Permissions

• Creates file
• File request

[Diagram: files MRPT1 and PRPT2 in the /Acct_Rep file system on Account1; users Mary and Peter are members of the group SALES]

Creates file

Once access is gained by the client, files can be created on the remote file system. When a file is created by a client, normal permission is assigned. The client can also modify the original permissions assigned to a file. File permission is changed in UNIX using the chmod command. File permission in Windows is changed by right-clicking on the selected file, then selecting Properties > Security, and adding or removing groups and permissions. It should be noted that in order to modify the file permissions, one must have the permission to make the change.
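The UNIX side of this can be demonstrated with Python's `os.chmod`, which applies the same permission bits as the chmod command. This sketch mirrors an owner-read/write, group-read-only setting; the mode 0o640 and the file name prefix are illustrative assumptions:

```python
import os
import stat
import tempfile

# Create a scratch file to stand in for a report file on the server.
fd, path = tempfile.mkstemp(prefix="MRPT1_")
os.close(fd)

# Owner read/write, group read-only: the equivalent of `chmod 640 <file>`.
os.chmod(path, stat.S_IRUSR | stat.S_IWUSR | stat.S_IRGRP)

mode = stat.S_IMODE(os.stat(path).st_mode)
print(oct(mode))                    # 0o640
print(bool(mode & stat.S_IWGRP))    # False: group members cannot write

os.unlink(path)
```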

File request

If a request for a file is received by the NAS server, the NAS server will first authenticate the user, either locally or over the network. If the user's identity is confirmed, then the user will be allowed to perform the operations contained in the file permissions for that user, or for the group to which the user belongs.

In our example, user Mary on host Mary creates a file MRPT1 on the NAS server Account1. She assigns herself the normal permission for this file, which allows her to read and write to this file. She also limits file permissions for other members of the group SALES to read only. User Peter on host Peter is a member of the group SALES. Peter has access to the export /Acct_Rep. If user Peter attempts to write to file MRPT1, he will be denied the permission to write to the file.


    EMC NAS Platforms

    PRODUCTS

Let's examine the current NAS products offered by EMC.


    EMC NAS Platforms

    Broadest Range of NAS Products

• NS500 / NS600 / NS700: integrated NAS; CLARiiON back end; DART; high availability; 1 or 2 Data Movers
• NS500G / NS600G / NS700G: NAS gateway to SAN; CLARiiON or Symmetrix back end; DART; high availability; 1 or 2 Data Movers
• NS704G: NAS gateway to SAN; CLARiiON or Symmetrix back end; DART; advanced clustering; 4 Data Movers
• CNS: NAS gateway to SAN; CLARiiON or Symmetrix back end; DART; advanced clustering; 2-14 Data Movers
• NetWin 110: NAS direct attach; CLARiiON AX100 back end; Windows Storage Server 2003; data integrity; Intel-based server
• NetWin 110, 200: NAS gateway to SAN; CLARiiON back end; Windows Storage Server 2003; simple Web-based management; data integrity; Intel-based server

An important decision customers must make is: what is the right information platform that meets my business requirements?

EMC offers the broadest range of NAS platforms. EMC makes it easy: rate your requirements and choose your solution.


    Celerra NAS - SAN Scalability

• Consolidated storage infrastructure for all applications
• NAS front end scales independently of SAN back end: connect to multiple Symmetrix and CLARiiON arrays; improved utilization
• Allocate storage to Celerra and servers as needed: easy to move filesystems among Data Movers; online filesystem growth
• Centralized management for SAN and NAS

[Diagram: Windows and UNIX clients; a Celerra Golden Eagle/Eagle and a Celerra NSx00G/NSx00GS connect through a Connectrix SAN to CLARiiON CX family and Symmetrix DMX family arrays]

One of the reasons that Celerra Golden Eagle scales impressively is its architecture, which separates the NAS front end (Data Movers) from the SAN back end (Symmetrix or CLARiiON). This allows the front end and back end to grow independently. Customers can merely add Data Movers to the Celerra Golden Eagle to scale the front-end performance to handle more clients. As the amount of data increases, you can add more disks, or the Celerra Golden Eagle can access multiple Symmetrix or CLARiiON arrays. This flexibility leads to improved disk utilization.

Celerra Golden Eagle supports simultaneous SAN and NAS access to the CLARiiON and Symmetrix. Celerra Golden Eagle can be added to an existing SAN, and general-purpose servers can then access unused back-end capacity. This extends the improved utilization, centralized management, and TCO benefits of SAN-plus-NAS consolidation to Celerra Golden Eagle, Symmetrix, and CLARiiON.

The configuration can also be reconfigured via software. Since all Data Movers can see the entire file space, it is easy to reassign filesystems to balance the load. In addition, filesystems can be extended online as they fill.

Even though the architecture splits the front end among multiple Data Movers and a separate SAN back end, the entire NAS solution can be managed as a single entity.

The Celerra NSx00G (configured with two Data Movers) and the Celerra NSx00GS (configured with a single Data Mover) connect to a CLARiiON CX array through a Fibre Channel switch. The Celerra NSx00G/NSx00GS supports simultaneous SAN and NAS access to the CLARiiON CX family.


    Celerra Family Hardware

    NAS FRAME BUILDING BLOCKS

Let's take a closer look at the hardware components of the Celerra family.


    Celerra Family Control Station Hardware

• CNS / CFS Style

(Photos: the Control Station within the Golden Eagle and Eagle frame.)

The Control Station provides the controlling subsystem of the Celerra, as well as the management interface to all file server components. The Control Station provides a secure user interface as a single point of administration and management for the whole Celerra solution. Control Station administrative functions are accessible via the local console, Telnet, or a web browser.

The Control Station is single Intel processor based, with high memory capacity. Depending on the model, the Control Stations may have internal storage. Currently, only the NS and Golden Eagle frame series have this feature.


    Celerra Family Control Station Hardware (cont.)

• NS Series Style

(Photo: NS series Control Station with Disk Array Enclosures.)

This is the NS range Control Station format.


    Celerra Family Data Mover Hardware

• Single or dual Intel processors
• PCI or PCI-X based
• High memory capacity
• Multi-port network cards
• Fibre Channel connectivity to storage arrays
• No internal storage devices
• Redundancy mechanism

(Photos: a Data Mover from the Golden Eagle and Eagle frame, and an NS 6XX frame Data Mover.)

Each Data Mover is an independent, autonomous file server that transfers requested files to clients and will remain unaffected should a problem arise with another Data Mover. The multiple Data Movers (up to 14 in the Eagle and Golden Eagle frames) are managed as a single entity. Data Movers are hot pluggable and can be configured with standbys to implement N-to-1 availability. A Data Mover (DM) connects to a LAN through Fast Ethernet or Gigabit Ethernet. The default name for a Data Mover is server_n, where n is its slot location. For example, in the Golden Eagle/Eagle frame, a Data Mover can be in slot location 2 through 15 (i.e., server_2 - server_15). There is no remote login capability on the DMs, nor do they run any binaries (very secure).

Data Mover redundancy is the mechanism by which the Celerra family reduces the network data outage in the event of a Data Mover failure. The ability to fail over the Data Movers is achieved by the creation of a Data Mover configuration database on the Control Station system volumes, and is managed via the Control Station. No Data Mover failover will occur if the Control Station is not available for some reason.


    NAS Reference Documentation

• NAS Support Matrix
  – Data Movers
  – Control Stations
  – Software supported features
  – www.emc.com/horizontal/interoperability

The NAS Support Matrix provides support information on the Data Mover and Control Station models, NAS software versions, supported features, storage models, and microcode. This interoperability reference can be found at: http://www.emc.com/horizontal/interoperability


    Celerra Family Software

    SOFTWARE OPERATING SYSTEM

Now, let's look at the operating system software used by the Celerra family.


    Celerra Software Operating Systems

• Linux 7.2
  – An industry-hardened, EMC-modified operating system loaded on the Control Station to provide a secure NAS management environment
  – Growing in popularity and corporate acceptance
• DART (Data Access in Real Time)
  – A highly specialized operating system, loaded on the Data Movers, designed to optimize network traffic input/output throughput
  – Multi-threaded, to optimize the load balancing capabilities of the multi-processor Data Movers
  – Advanced volume management (UxFS): large file size and filesystem support, ability to extend filesystems online, metadata logging for fast recovery, striped volume support
  – Feature rich, to support the varied specialized capabilities of the Celerra range

Linux OS is installed on the Control Station. Control Station OS software is used to install, manage, and configure the Data Movers, monitor the environmental conditions and performance of all components, and implement the Call Home and dial-in support feature. Typical administration functions include volume and filesystem management, configuration of network interfaces, creation of filesystems, exporting filesystems to clients, performing filesystem consistency checks, and extending filesystems.

The OS that the Data Movers run is EMC's Data Access in Real Time (DART) embedded system software, which is optimized for file I/O, to move data from the EMC storage array to the network. DART supports standard network and file access protocols: NFS, CIFS, and FTP.


    Celerra Family

SOME KEY HIGH AVAILABILITY FEATURES

Let's examine some of the high availability features found in the Celerra family.


    Control Station and Data Mover Standby

• For hardware high availability, the EMC NAS frames implement both Control Station and Data Mover failover capabilities
• This means that, in the simplest configuration, there will be an equivalent system within the frame awaiting a possible failure of the active component, in order to assume the configuration and production role with minimal outage to the end-users
• As the standby system is the equivalent of the production system, there will be no performance or management impact to the environment

Hardware high availability is achieved by having equivalent systems contained within the NAS frame configured as standby units for one or more primary systems.

This is made possible by the configuration database maintained on the Control Station, and managed failover is controlled by the Control Station. A standby system is pointed to a specific location in the configuration database so that it can assume the complete personality of the failed primary system. However, if the Control Station itself is not available when a primary system fails, then failover will not be able to occur until the Control Station is restored.

Standby Data Mover configuration options:
1. Each standby Data Mover as a standby for a single primary Data Mover
2. Each standby Data Mover as a standby for a group of primary Data Movers
3. Multiple standby Data Movers for a primary Data Mover

These standby Data Movers are powered and ready to assume the personality of their associated primary Data Movers in the event of a failure.


    Data Mover Standby (continued)

• Data Mover failover is a policy-driven mechanism controlled by the Control Station
• The policy options are as follows:
  – Automatic: the Control Station detects a Data Mover failure, powers down the failed Data Mover, and brings the designated standby Data Mover online with the failed DM's personality
  – Retry: the Control Station detects a Data Mover failure and tries to reboot the failed Data Mover; if the reboot does not clear the error, it powers down the failed Data Mover and brings the designated standby Data Mover online with the failed DM's personality
  – Manual: no action is taken, and administrator intervention is required for failover
• In all cases, failback is a manual process

How does Data Mover failover work? Through constant Data Mover monitoring by the Control Station. This is a policy-driven solution, and the automatic failover setting of the policy works in the following fashion:
• the Control Station detects a Data Mover problem
• the failing Data Mover is taken offline
• the pre-defined standby Data Mover assumes the network identity of the failed Data Mover, including the MAC and IP addresses

This process takes seconds to minutes to complete. The standby Data Mover continues serving files to the failed Data Mover's NFS and CIFS clients. Once the failed Data Mover is replaced, it will resume its role as the active Data Mover with administrator-managed failback, and the standby Data Mover will resume its standby role.

A single Celerra Data Mover can be configured to act as a standby for several Data Movers. There can also be many standby Data Movers in a single Celerra cabinet, each backing up its own group of Data Movers. The number of standbys configured depends on how critical the application is and how much risk can be tolerated.
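The failover policies above can be modeled as a small state machine. The sketch below is purely illustrative (the real logic lives in the Control Station software; the class and method names here are invented for the example):

```python
class ControlStation:
    """Illustrative model of the Data Mover failover policies (not EMC code)."""

    def __init__(self, policy, standby_map):
        self.policy = policy              # "auto", "retry", or "manual"
        self.standby_map = standby_map    # primary name -> designated standby
        self.active = {}                  # primary name -> DM now serving its role

    def handle_failure(self, primary, reboot_clears_error=False):
        """React to a detected Data Mover failure according to the policy."""
        if self.policy == "manual":
            return "administrator intervention required"
        if self.policy == "retry" and reboot_clears_error:
            return f"{primary} rebooted and back online"
        # "auto", or "retry" where the reboot did not clear the error:
        # power down the failed DM and bring the standby online with the
        # failed DM's personality (its MAC and IP addresses).
        standby = self.standby_map[primary]
        self.active[primary] = standby
        return f"{standby} online with {primary}'s personality"
```

For example, with an automatic policy and server_3 designated as standby for server_2, a failure of server_2 brings server_3 online with server_2's personality; failback to server_2 is then a separate, administrator-driven step.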


    Network FailSafe Device

• Network outages, due to environmental failure, are more common than Data Mover failures
• Network FailSafe Device
  – DART OS mechanism to minimize data access disruption due to these failures
  – A logical device is created using either physical ports or other logical ports, combined together to create redundant groups of ports
  – The logically grouped Data Mover network ports monitor network traffic on the ports
  – The active FailSafe Device port senses traffic disruption
  – The standby (non-active) port assumes the IP address and Media Access Control address in a very short space of time, thus reducing data access disruption

Having discussed the maintenance of data access via redundant Data Movers, we will now discuss the same concept utilizing network port mechanisms. First, let's look at the Network FailSafe device.

Network outages due to environmental failures are more common than Data Mover failures. To minimize data access disruption due to these failures, the DART OS has a mechanism, the Network FailSafe Device, which is environment agnostic.

This is a mechanism by which the network ports of a Data Mover may be logically grouped into a partnership, which monitors network traffic on the ports. If the currently active port senses a disruption of traffic, the standby (non-active) port will assume the active role in a very short space of time, thus reducing data access disruption. The way this works is that a logical device is created, using either physical ports or other logical ports, combined together to create redundant groups of ports.

In normal operation, the active port carries all network traffic. The standby (non-active) port remains passive until a failure is detected. Once a failure has been detected by the FailSafe Device, this port assumes the network identity of the active port, including IP address and Media Access Control address.

Having assumed the failed port's identity, the standby port now continues the network traffic. Network disruption due to this changeover is minimal, and may only be noticed in a highly transaction-oriented NAS implementation, or in CIFS environments due to the connection-oriented nature of the protocol.

There are several benefits achieved by configuring the Network FailSafe device: 1. Configuration is handled transparently to client access; 2. The ports that make up the FailSafe device need not be of the same type; 3. Rapid recovery from a detected failure; 4. It can be combined with logical aggregated port devices to provide even higher levels of redundancy.

Although the ports that make up the FailSafe device need not be of the same type, care must be taken to ensure that once failover has occurred, client expected response times remain relatively the same, and data access paths are maintained.
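The active/standby identity swap described above can be sketched in a few lines. This is a behavioral model only, not DART code, and the port names cge0/cge1 are hypothetical:

```python
class FailSafeDevice:
    """Illustrative model of a Network FailSafe Device (not DART code).

    Two ports are logically grouped; when the active port's link fails,
    the standby port assumes the active port's IP and MAC identity.
    """

    def __init__(self, active_port, standby_port, ip, mac):
        self.ports = {active_port: "active", standby_port: "standby"}
        self.identity = {"ip": ip, "mac": mac, "port": active_port}

    def link_down(self, port):
        """Called when a port senses a disruption of traffic."""
        if self.ports.get(port) != "active":
            return  # only a failed active port triggers a takeover
        standby = next(p for p, role in self.ports.items() if role == "standby")
        self.ports[port] = "failed"
        self.ports[standby] = "active"
        self.identity["port"] = standby  # the IP and MAC travel with the role
```

Because the IP and MAC addresses belong to the logical device rather than to a physical port, clients keep talking to the same network identity across the failover, which is why the disruption is barely noticeable.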


    Link Aggregation - High Availability

• Link aggregation is the combining of two or more data channels into a single data channel for high availability
  – Two methods: IEEE 802.3ad LACP and Cisco FastEtherChannel
• IEEE 802.3ad LACP
  – Combining links for improved availability
  – If one port fails, other ports take over
  – Industry standard (IEEE 802.3ad)
  – Combines 2-12 Ethernet ports into a single virtual link
  – Deterministic behavior
  – Does not increase single-client throughput

(Diagram: an aggregated link between a Celerra and an industry-standard switch.)

Having discussed the Network FailSafe device, the next methodologies we will look at are the two link aggregation methodologies. Link aggregation is the combining of two or more data channels into a single data channel. Two methodologies are supported by EMC NAS devices: IEEE 802.3ad Link Aggregation Control Protocol, and Cisco FastEtherChannel using Port Aggregation Protocol (PAgP).

The purpose of combining data channels in the EMC implementation is to achieve redundancy and fault tolerance of network connectivity. It is commonly assumed that link aggregation will provide a single client with a data channel bandwidth equal to the sum of the bandwidths of the individual member channels. This is not, in fact, the case, due to the methodology of channel utilization, and it may only be achieved with very special considerations to the client environment. The overall channel bandwidth is increased, but under normal working conditions a client will only receive the bandwidth of one of the component channels.

To implement link aggregation, the network switches must support the IEEE 802.3ad standard. It is a technique for combining several links together to enhance availability of network access, and applies to a single Data Mover, but not across Data Movers. The current implementation focuses on availability. Only full duplex operation is currently supported. Always check the NAS Interoperability Matrix for supported features at the following address: http://www.emc.com/horizontal/interoperability


Link Aggregation - High Availability (continued)

• Cisco FastEtherChannel
  – Port grouping for improved availability
  – Combines 2, 4, or 8 Ethernet ports into a single virtual device
  – Interoperates with trunking-capable switches
  – High availability: if one port fails, other ports take over
  – Does not increase single-client throughput

(Diagram: a FastEtherChannel between a Celerra and a Cisco switch.)

Ethernet trunking (EtherChannel) increases availability. It provides statistical load sharing by connecting different clients to different ports. It does not increase single-client throughput: different clients get allocated to different ports, and with only one client, that client accesses the Celerra via the same port for every access. This DART OS feature interoperates with FastEtherChannel-capable Cisco switches. FastEtherChannel is Cisco proprietary.

    IEEE 802.3ad/FastEtherChannel - Comparison
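Whichever method builds the channel, the reason single-client throughput does not increase is the distribution policy: frames are assigned to member ports by hashing a conversation identifier, such as the source MAC address, so one client's traffic always lands on the same physical port. A minimal sketch of that policy follows (the actual hash used by a given switch or by DART may differ; this one is assumed for illustration):

```python
def port_for_client(client_mac, member_ports):
    """Pick the member port for a client by hashing its MAC address.

    Every frame from one client maps to the same port, so a single
    client only ever sees one port's worth of bandwidth, while many
    clients spread statistically across the whole channel.
    """
    digest = sum(int(octet, 16) for octet in client_mac.split(":"))
    return member_ports[digest % len(member_ports)]
```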


    Network Redundancy - High Availability

• An example of FSN and port aggregation co-operation

This example shows a fail-safe network (FSN) device that consists of a FastEtherChannel, comprising the four ports of an Ethernet NIC, and one Gigabit Ethernet port. The FastEtherChannel could be the primary device but, per recommended practices, the ports of the FSN would not be marked primary or secondary. FSN provides the ability to configure a standby network port for a primary port, and two or more ports can be connected to different switches. The secondary port remains passive until the primary port link status is broken; then the secondary port takes over operation.

An FSN device is a virtual device that combines two virtual ports. A virtual port can consist of a single physical link or an aggregation of links (EtherChannel, LACP). The port types, or number of ports, need not be the same when creating a failsafe device group. For example, a quad Ethernet card can be first trunked and then coupled with a single Gigabit Ethernet port. In this case, all four ports in the trunk would need to fail before FSN would implement failover to the Gigabit port. Thus, the Celerra could tolerate four network failures before losing the connection.

Note: an active primary port/active standby port configuration on the Data Mover is not recommended practice.


Celerra Family Environment Management Integration

VIRTUAL LOCAL AREA NETWORKS

Environmental management tools used in the NAS space include Virtual Local Area Networks, or VLANs. We will now discuss how EMC NAS integrates into this strategy.


    VLAN Support

• Create logical LAN segment
  – Divide a single LAN into logical segments
  – Join multiple separate segments into one logical LAN
• VLAN Tagging (802.1q)
• Simplified management
  – No network reconfiguration required for member relocation

(Diagram: hubs feed bridges/switches and a router; each hub-and-client group forms a collision domain/LAN segment, the bridged segments form broadcast domains/LANs, and workstations are assigned to VLAN A or VLAN B.)

Network domains are categorized as Collision, a LAN segment within which data collisions are contained, or Broadcast, the portion of the network through which broadcast and multicast traffic is propagated. Collision domains are determined by hardware components and how they are connected together. The components are usually client computers, hubs, and repeaters. Separation of a collision domain from a broadcast domain is accomplished by a network switch or a router, which generally does not forward broadcast traffic. VLANs allow multiple, distinct, possibly geographically separate network segments to be connected into one logical segment. This can be done either by subnetting or by using VLAN tags (802.1q), which are addresses added to network packets to identify the VLANs to which the packets belong. This could allow servers that were connected to physically separate networks to communicate more efficiently, and it could prevent servers that were attached to the same physical network from impeding one another.

By using VLANs to logically segment the broadcast domains, the equipment contained within this logical environment need not be physically located together. This means that if a mobile client moves location, an administrator need not do any physical network or software configuration for the relocation, as bridging technology would now be used, and a router would only be needed to communicate between VLANs.

There are two commonly practiced ways of implementing this technology:
• IP address subnetting
• VLAN Ethernet packet tagging


    VLAN Implementation Methodologies

• There are two primary methodologies of implementing VLANs
  – By IP address subnetting: using this methodology, an administrator will configure the broadcast domains to encompass the whole network area for specific groups of computers, by using bridge/router technology
  – By VLAN tagging: using this methodology, an administrator will configure groups of user computers to embed an identification tag into all of their Ethernet packet traffic

When using the IP address subnetting methodology, the administrator will configure the broadcast domains to encompass the whole network area for specific groups of computers, by using bridge/router technology. When using the VLAN tagging methodology, the members of a specific group will have an identification tag embedded into all of their Ethernet packet traffic.

VLAN tagging allows a single Gigabit Data Mover port to service multiple logical LANs (Virtual LANs). This allows data network nodes to be configured (added and moved, as well as other changes) quickly and conveniently from the management console, rather than in the wiring closet. VLANs also allow a customer to limit traffic to specific elements of a corporate network, and protect against broadcasts (such as denial of service) affecting whole networks. Standard router-based security mechanisms can be used with VLANs to restrict access and improve security.
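The tag itself is a four-byte field inserted into each Ethernet frame after the source MAC address: a 16-bit TPID of 0x8100 identifying the frame as 802.1Q tagged, followed by a 16-bit TCI carrying a 3-bit priority, a 1-bit DEI, and the 12-bit VLAN ID. A minimal sketch of packing and reading that tag:

```python
import struct

TPID = 0x8100  # EtherType value that marks a frame as 802.1Q tagged

def build_tag(vlan_id, priority=0, dei=0):
    """Pack an 802.1Q tag: TPID, then TCI = PCP(3) | DEI(1) | VID(12)."""
    if not 0 <= vlan_id < 4096:
        raise ValueError("VLAN ID is a 12-bit field")
    tci = (priority << 13) | (dei << 12) | vlan_id
    return struct.pack("!HH", TPID, tci)  # network byte order

def read_vlan_id(tag):
    """Extract the 12-bit VLAN ID from a packed 802.1Q tag."""
    tpid, tci = struct.unpack("!HH", tag)
    if tpid != TPID:
        raise ValueError("not an 802.1Q tag")
    return tci & 0x0FFF
```

A switch or Data Mover port that understands tags reads the VLAN ID to decide which logical LAN the frame belongs to; untagged-only devices simply never see the 0x8100 EtherType.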


    VLAN - Benefits

• Performance: this is client related, as packets not destined for a machine in a particular VLAN will not be processed by the client
• Reduced router overhead
• Reduced costs: expensive routers and billable traffic routing costs can be reduced
• Security: placing users into a tagged VLAN environment will prevent unauthorized access to network packets

(Diagram: clients grouped into VLAN-A, VLAN S, and VLAN E.)

The benefits of VLAN support include:

• Performance: in all networks there is a large amount of broadcast and multicast traffic, and VLANs can reduce the amount of traffic being processed by all clients.
• Virtual collaborative work divisions: by placing widely dispersed collaborative users into a VLAN, broadcast and multicast traffic between these users will be kept from affecting other network clients, and the amount of routing overhead placed on their traffic is reduced.
• Simplified administration: with the large amount of mobile computing today, physical user relocation generates a lot of administrative user reconfiguration (adding, moving, and changing). If the user has not changed company functionality, but has only relocated, VLANs can perpetuate undisrupted job functionality.
• Reduced cost: by using VLANs, expensive routers and billable traffic routing costs can be reduced.
• Security: by placing users into a tagged VLAN environment, external access to sensitive broadcast data traffic can be reduced.

VLAN support enables a single Data Mover with Gigabit Ethernet port(s) to be the standby for multiple primary Data Movers with Gigabit Ethernet port(s). Each primary Data Mover's Gigabit Ethernet port(s) can be connected to different switches. Each of these switches can be in a different subnet and a different VLAN. The standby Data Mover's Gigabit Ethernet port is connected to a switch which is connected to all the other switches.


    Celerra Family Software Management

    USER INTERFACES

In this section, we will examine the different user interfaces. These interfaces include the command line, Celerra Manager, and EMC ControlCenter.


    Celerra Management Command Line

• The command line can be accessed on the Control Station via
  – An SSH interface tool, e.g. PuTTY
  – Telnet
• Its primary function is the scripting of common repetitive tasks that may run on a predetermined schedule to ease administrative burden
• It has approximately 60 UNIX-like commands
  – nas_ commands are generally for the configuration and management of global resources
  – server_ commands are generally for the configuration and management of Data Mover specific resources

Telnet access is disabled by default on the Control Station, due to the possibility of unauthorized access if the Control Station is placed on a publicly accessible network. If this is the case, it is strongly recommended that this service not be enabled.

The preferred mechanism of accessing the Control Station is the SSH (Secure Shell) daemon, via an SSH client such as PuTTY.
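Since the CLI's main role is scripting repetitive tasks, a scheduled job on the Control Station typically just strings nas_ and server_ commands together. The sketch below only assembles the command lines rather than executing them (the commands exist only on a Control Station), and the specific options shown are illustrative placeholders, not verified syntax:

```python
def nightly_report_commands(data_movers):
    """Assemble an illustrative batch of Celerra CLI invocations that a
    cron job might run each night.  The nas_/server_ split mirrors the
    global-resource vs. per-Data-Mover convention described above.
    """
    commands = [["nas_fs", "-list"]]           # global resource: filesystems
    for dm in data_movers:
        commands.append(["server_df", dm])     # per-Data Mover: disk usage
    return commands
```

In a real script, each command list would be handed to the shell (for example via subprocess.run) and the output appended to a report.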


    Celerra Manager GUI Management

With the release of DART v5.2, GUI management has been consolidated into one product with two options:

• Celerra Management Basic Edition
• Celerra Management Advanced Edition

The Basic Edition is installed along with the DART OS and provides a comprehensive set of common management functionality for a single Celerra at a time. The Advanced Edition adds multiple-Celerra support, along with GUI management of some advanced features, and is licensed separately from the DART code.


    Celerra Manager GUI Wizards

Celerra Manager V5.2 and higher offers a number of configuration wizards for various tasks, to assist new administrators with ease of implementation.


    Celerra Manager GUI Tools

Celerra Manager V5.2 offers a set of tools to integrate Celerra monitoring functionality and launch Navisphere Manager.

With the addition of the Navisphere Manager launch capability, the SAN/NAS administrator has a more consolidated management environment.


    EMC ControlCenter V5.x.x NAS Support

• Discovery and monitoring
  – Data Movers
  – Devices and volumes
  – Network adapters and IP interfaces
  – Mount points
  – Exports
  – Filesystems (including snapshots and checkpoints)

The EMC flagship management product, EMC ControlCenter, has the capability of assisted discovery of both EMC NAS and third-party NAS products, namely NetApp filers.

Currently, management of the EMC NAS family is deferred to the product-specific management software, due to the highly specialized nature of the NAS environment. Therefore, the functionality shown on this slide is focused mainly around discovery, monitoring, and product management software launch capability.

ControlCenter V5.x.x has enhanced device management support for the Celerra family. The ControlCenter Celerra Agent runs on Windows and has enhanced discovery and monitoring capabilities. You can now view properties information on Celerra Data Movers, devices, network adapters and interfaces, mount points, exports, filesystems (including snapshots and checkpoints), and volumes from the ControlCenter Console. You can view alerting information for the Celerra family as well.


    Celerra Family File System Management

    AUTOMATIC VOLUME MANAGEMENT

For ease of use and implementation, the DART operating system utilizes an Automatic Volume Manager (AVM). This allows the NAS manager to quickly create, deploy, and manage NAS file systems with known, predictable performance and management parameters.


    Celerra Automatic Volume Management - AVM

    - Celerra uses AVM to create a more array-friendly methodology for laying out volumes and file systems
      - Automates volume and file system creation and management
      - Arranges volumes into storage pools dependent upon the array disk layout characteristics
      - System defined: profiles are predefined rules that define how devices are aggregated and put into system-defined storage pools
      - User defined: these storage pools allow customers more flexibility to define their own volume characteristics to meet their own specific needs
    - Volume creation from the GUI management interface uses AVM by default, but AVM can also be invoked from the Command Line Interface (CLI)

    The Automatic Volume Management (AVM) feature of the Celerra File Server automates volume creation and management. By using Celerra command options and interfaces that support AVM, you can create and expand file systems without manually creating and managing their underlying volumes.

    A Storage Pool is a container for one or more member volumes. All storage pools have attributes, some of which are modifiable. There are two types of storage pools:

    - System-defined: System-defined storage pools in NAS 5.3 are what were called system profiles in prior releases. AVM controls the allocation of storage to a file system when you create the file system by allocating space from a system-defined storage pool. The system-defined storage pools ship with the Celerra and are designed to optimize performance based on the hardware configuration.

    - User-defined: User-defined storage pools allow for more flexibility in that you choose what storage should be included in the pool. If the user defines the storage pool, the user must explicitly add and remove storage from the pool and define its attributes.

    Profiles provide the rules that define how devices are aggregated and put into system-defined storage pools. Users cannot create, delete, or modify these profiles. There are two types of profiles:

    - Volume: Volume profiles define how new disk volumes are added to a system-defined storage pool.

    - Storage: Storage profiles define how the raw physical spindles are aggregated into Celerra disk volumes.

    Note: Both volume profiles and storage profiles are associated with system-defined storage pools and are unique and predefined for each storage system. It is NOT recommended to mix the volume management methodologies, AVM and manual, on a system, as mixing them may result in a non-optimized disk layout, leading to poor system utilization and performance.
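The allocation idea described above can be sketched in a few lines of Python. This is a hypothetical illustration only: the classes and the create_fs call are not the actual DART or CLI interfaces, and the pool capacity is made up; only the pool name is taken from the system-defined pools listed in this module.

```python
# Hypothetical sketch of AVM-style allocation. The StoragePool and
# create_fs names are illustrative, not real Celerra interfaces.

class StoragePool:
    def __init__(self, name, system_defined, capacity_gb):
        self.name = name
        self.system_defined = system_defined   # shipped profile vs. user-defined
        self.free_gb = capacity_gb

    def allocate(self, size_gb):
        if size_gb > self.free_gb:
            raise ValueError("pool %s has only %d GB free" % (self.name, self.free_gb))
        self.free_gb -= size_gb
        return size_gb

def create_fs(pool, name, size_gb):
    """Create a file system by drawing space from a pool: the administrator
    never builds the underlying volumes by hand, which is the point of AVM."""
    pool.allocate(size_gb)
    return {"fs": name, "pool": pool.name, "size_gb": size_gb}

pool = StoragePool("clar_r5_performance", system_defined=True, capacity_gb=500)
fs = create_fs(pool, "fs01", 100)
print(fs["pool"], pool.free_gb)   # clar_r5_performance 400
```

The sketch shows why mixing AVM with manual volume management is discouraged: manual changes would bypass the pool's free-space accounting.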


    AVM System Defined Storage Pools

    - symm_std
      Highest performance, medium cost, using Symmetrix STD disk volumes
    - symm_std_rdf_src
      Highest performance, medium cost, using SRDF
    - clar_r1
      High performance, low cost, using CLARiiON CLSTD disk volumes in RAID 1
    - clar_r5_performance
      Medium performance, low cost, using CLARiiON CLSTD disk volumes in 4+1 RAID 5
    - clar_r5_economy
      Medium performance, lowest cost, using CLARiiON CLSTD disk volumes in 8+1 RAID 5
    - clarata_archive
      Low performance, high capacity, using CLARiiON Serial ATA disk volumes in a 6+1 configuration

    STD = Standard

    CLSTD = CLARiiON Standard

    Clarata = CLARiiON ATA drives


    Celerra Family Management Software

    WINDOWS-SPECIFIC INTEGRATION OPTIONS

    EMC NAS frames traditionally integrate seamlessly into UNIX environments due to their roots in the NFS protocol. However, with the addition of support for the CIFS protocol, which belongs to the Microsoft networking domain, very specific integration methodologies have been needed to ensure seamless integration and management in this environment.


    Windows Environment Integration

    - To achieve a tightly integrated Windows Active Directory environment, EMC NAS uses several software features:
      - usermapper: a feature that helps the Celerra automatically assign UNIX user and group identifiers (UIDs and GIDs) to Windows users and groups. This assists Windows administrators with the integration of these specialized NAS frames, as there is minimal or no user environment modification
      - UNIX User Management
      - Active Directory migration tool
      - MMC plug-in extension for Active Directory users and computers
      - Celerra Management tool snap-in (MMC Console)

    Celerra offers a number of Windows 2000 management tools with the Windows 2000 look and feel.

    For example, Celerra shares and quotas can be managed by the standard Microsoft Management

    Console (MMC).

    The tools include:

    - The Active Directory (AD) Migration tool migrates the Windows/UNIX user and group mappings to Active Directory. The matching users/groups are displayed in a property page with a separate sheet for users and groups. The administrator selects the users/groups that should be migrated, and de-selects those that should not be migrated or should be removed from Active Directory.

    - The Microsoft Management Console (MMC) Snap-in extension for AD users and computers adds a property page to the user's property sheet to specify UID (user ID)/GID (group ID)/Comment, and adds a property page to the group property sheet to specify GID/Comment. You can only manage users and groups of the local tree.

    - The Celerra Management Tool (MMC Console) Snap-in extension for DART UNIX User Management displays Windows users/groups which are mapped to UNIX attributes. It also displays all domains that are known to the local domain (Local Tree, Trusted domains).
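Conceptually, the usermapper behavior described above can be sketched as follows. This is a hypothetical illustration, not the usermapper wire protocol: the starting ID value and the names are made up; only the idea of stable, automatic UID assignment comes from the text.

```python
# Hypothetical sketch of usermapper-style ID assignment: the first time a
# Windows user is seen, the next free UID is handed out; every later
# lookup returns the same mapping, so the UNIX identity stays stable.

class UserMapper:
    def __init__(self, first_uid=32768):      # starting range is illustrative
        self.next_uid = first_uid
        self.mappings = {}                    # windows name -> unix uid

    def uid_for(self, windows_user):
        if windows_user not in self.mappings:
            self.mappings[windows_user] = self.next_uid
            self.next_uid += 1
        return self.mappings[windows_user]

m = UserMapper()
a = m.uid_for("CORP\\alice")
b = m.uid_for("CORP\\bob")
assert m.uid_for("CORP\\alice") == a          # stable on repeat lookups
print(a, b)                                   # 32768 32769
```

This is why "minimal or no user environment modification" is needed: Windows administrators never assign UNIX IDs by hand.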


    Windows Environment Integration (continued)

    - Virus Checker Management
    - Celerra Management tool (MMC Console)
      - Home Directory snap-in: allows multiple points of entry to a single share
      - Data Mover security snap-in: manage user rights and auditing

    Further tools are:

    - The Celerra Management Tool (MMC Console) Snap-in extension for DART Virus Checker Management, which manages parameters for the DART Virus Checker.

    - The Home Directories capability in the Celerra allows a customer to set up multiple points of entry to a single Share/Export, so as to avoid sharing out many hundreds of points of entry to a filesystem, one for each individual user storing a Home Directory. The MMC Snap-in provides a simple and familiar management interface for Windows administrators for this capability.

    - The Data Mover Security Settings Snap-in provides a standard Windows interface for managing user rights assignments, as well as the settings for which statistics Celerra should audit, based on the NT V4 style auditing policies.
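The Home Directories idea above can be sketched in a few lines. This is a hypothetical illustration: the mapping table, path pattern, and names are invented; only the concept of one share resolving to per-user directories comes from the text.

```python
# Hypothetical sketch of the Home Directories capability: one exported
# share resolves to a per-user subdirectory, so hundreds of users do not
# each need their own share. Domain name and paths are illustrative.

HOME_DIR_MAP = {
    # domain -> path pattern; <user> is filled in when the user connects
    "CORP": "/fs_home/corp/<user>",
}

def resolve_home(domain, user):
    pattern = HOME_DIR_MAP[domain]
    return pattern.replace("<user>", user)

print(resolve_home("CORP", "alice"))   # /fs_home/corp/alice
```

Every user connects to the same single share, yet lands in a private directory, which is the administrative saving the slide describes.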


    Virtual Data Movers

    - Another improvement to the Windows integration is the ability to create multiple virtual CIFS servers on each Data Mover
    - This is achieved by creating Virtual Data Mover environments
      - This is a huge benefit to the consolidation of multiple servers' file serving functionality onto single Data Movers, as each Virtual Data Mover can maintain isolated CIFS servers with their own root filesystem environment
      - Whole Virtual Data Mover environments can be loaded, unloaded, or even replicated between physical Data Movers for ease of Windows environment management

    Currently, in pre-DART v5.2, a Data Mover supports one NFS server and multiple CIFS servers, where each server has the same view of all the resources. The CIFS servers are not logically isolated, and although they are very useful in consolidating multiple servers into one Data Mover, they do not provide the isolation between servers needed in some environments, such as data from disjoint departments hosted on the same Data Mover.

    In v5.2, VDMs support separate isolated CIFS servers, allowing you to place one or multiple CIFS servers into a VDM, along with their file systems. The servers residing in a VDM store their dynamic configuration information (such as local groups, shares, security credentials, and audit logs) in a configuration file system. A VDM can then be loaded and unloaded, moved from Data Mover to Data Mover, or even replicated to a remote Data Mover as an autonomous unit. The servers, their file systems, and all of the configuration data that allows clients to access the file systems are available in one virtual container.

    VDMs provide virtual partitioning of the physical resources, and independently contain all the information necessary to support the contained CIFS servers. Having the file systems and the configuration information contained in a VDM does the following:

    - enables administrators to separate CIFS servers and give them access to specified shares;

    - allows replication of the CIFS environment from primary to secondary without impacting server access;

    - enables administrators to easily move CIFS servers from one physical Data Mover to another.

    A VDM can contain one or more CIFS servers. The only requirement is that you have at least one interface available for each CIFS server you create. The CIFS servers in each VDM have access only to the file systems mounted to that VDM, and can therefore only create shares on those file systems. This allows a user to administratively partition or group their file systems and CIFS servers.
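The VDM container idea can be sketched as follows. This is a hypothetical illustration: the class names, methods, and the server and filesystem names are all invented; only the behavior (a VDM bundles CIFS servers, mounted file systems, and shares, and moves between Data Movers as one unit) comes from the text.

```python
# Hypothetical sketch of the Virtual Data Mover concept.

class VDM:
    def __init__(self, name):
        self.name = name
        self.filesystems = {}          # fs name -> shares created on it

    def mount(self, fs_name):
        self.filesystems[fs_name] = []

    def add_share(self, fs_name, share):
        # CIFS servers in a VDM can only share file systems mounted to it.
        if fs_name not in self.filesystems:
            raise ValueError("fs not mounted to this VDM")
        self.filesystems[fs_name].append(share)

class DataMover:
    def __init__(self, name):
        self.name = name
        self.vdms = {}

    def load(self, vdm):
        self.vdms[vdm.name] = vdm

    def unload(self, vdm_name):
        return self.vdms.pop(vdm_name)

vdm = VDM("vdm_sales")
vdm.mount("fs_sales")
vdm.add_share("fs_sales", "reports")

server_2, server_3 = DataMover("server_2"), DataMover("server_3")
server_2.load(vdm)
server_3.load(server_2.unload("vdm_sales"))   # move as one autonomous unit
print(server_3.vdms["vdm_sales"].filesystems["fs_sales"])   # ['reports']
```

Because the shares travel inside the VDM, nothing has to be reconfigured on the destination Data Mover, which is the consolidation benefit the slide describes.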


    Celerra Family Business Continuity

    DISK-BASED REPLICATION AND RECOVERY SOLUTIONS

    Now we can examine some of the replication and recovery solutions available in the Celerra family.


    Disk-Based Replication and Recovery Solutions

    [Slide diagram: FUNCTIONALITY plotted against RECOVERY TIME, shown across the Celerra/Symmetrix, Celerra/FC4700, Celerra/CLARiiON, and Celerra NS600 platforms]

    Synchronous Disaster Recovery: SRDF (recovery in seconds)
    File-based Replication: TimeFinder/FS, Celerra Replicator, EMC OnCourse (recovery in minutes)
    File Restoration: Celerra SnapSure (recovery in hours)

    High-end environments require non-stop access to the information pool. From a practical perspective, not all data carries the same value. The following illustrates that EMC Celerra provides a range of disk-based replication tools for each recovery time requirement.

    File restoration: This is information archived to disk and typically saved to tape. Here we measure recovery in hours. Celerra SnapSure enables local point-in-time replication for file undeletes and backups.

    File-based replication: This information is recoverable in time frames measured in minutes. Information is mirrored to disk by TimeFinder, and the copy is made accessible with TimeFinder/FS. Celerra Replicator creates replicas of production filesystems either locally or at a remote site. Recovery time from the secondary site depends on the bandwidth of the IP connection between the two sites. EMC OnCourse provides secure, policy-based file transfers.

    The Replicator feature supports data recovery for both CIFS and NFS by allowing the secondary filesystem (SFS) to be manually switched to read/write mode after the Replicator session has been stopped, either manually or due to a destructive event. Note: There is no re-synch or failback capability.

    Synchronous disaster recovery: This is information requiring disaster recovery with no loss of transactions. This strategy allows customers to have data recovery in seconds. SRDF, in synchronous mode, facilitates real-time remote mirroring in campus environments (up to 60 km).

    File restoration and file-based replication (Celerra Replicator, EMC OnCourse) are available with Celerra/CLARiiON. The entire suite of file restoration, file-based replication, and synchronous disaster recovery is available with Celerra/Symmetrix.
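The three recovery tiers above can be summarized as a small lookup table. This is a hypothetical Python sketch; the tier labels and the selection function are paraphrased from the slide and are not an EMC interface.

```python
# Hypothetical summary of the disk-based replication tiers described above:
# tier -> (typical recovery time, tools that provide it).
tiers = {
    "file restoration":       ("hours",   ["Celerra SnapSure"]),
    "file-based replication": ("minutes", ["TimeFinder/FS", "Celerra Replicator", "EMC OnCourse"]),
    "synchronous DR":         ("seconds", ["SRDF"]),
}

def tools_for(max_recovery):
    """List every tool whose tier recovers at least as fast as required."""
    order = {"seconds": 0, "minutes": 1, "hours": 2}
    return [t for unit, tools in tiers.values()
            if order[unit] <= order[max_recovery] for t in tools]

print(sorted(tools_for("minutes")))
# ['Celerra Replicator', 'EMC OnCourse', 'SRDF', 'TimeFinder/FS']
```

The point of the table is the one the notes make: the faster the required recovery, the narrower (and more Symmetrix-dependent) the tool choice becomes.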


    Disaster Recovery

    CELERRA SYMMETRIX REMOTE DATA FACILITY

    In this section, we will look at the Celerra disaster recovery solution.


    Celerra SRDF Disaster Recovery

    - Celerra synchronous disaster recovery solution
      - Allows an administrator to configure remote standby Data Movers, waiting to assume primary roles in the event of a disaster occurring at the primary data site
      - SRDF allows an administrator to achieve a remote synchronous copy of production filesystems at a remote location
      - Real-time, logically synchronized and consistent copies of selected volumes
      - Uni-directional and bi-directional support
      - Resilient against drive, link, and server failures
      - No lost I/Os in the event of a disaster
      - Independent of CPU, operating system, application, or database
      - Simplifies disaster recovery switchover and switchback
    - Increases data availability by combining the high availability of the Celerra family with the Symmetrix Remote Data Facility

    [Slide diagram: two Celerra systems linked over the network; uni- or bi-directional; campus (60 km) distance]

    In the NAS environment, data availability is one of the key aspects in determining an implementation. By combining the high availability of the Celerra family with the Symmetrix Remote Data Facility, data availability increases exponentially. The SRDF feature allows an administrator to achieve a remote synchronous copy of production filesystems at a remote location. However, as this entails the creation of Symmetrix-specific R1 and R2 data volumes, this functionality is currently restricted to Celerra/Symmetrix implementations only. This feature allows an administrator to configure remote standby Data Movers waiting to assume primary roles in the event of a disaster occurring at the primary data site. Due to data latency issues, this solution is restricted to a campus distance of separation between the two data sites (60 network km). The SRDF solution for Celerra can leverage an existing SRDF transport infrastructure to support the full range of supported SAN (storage area network) and DAS (direct-attached storage) connected general purpose server platforms.

    After establishing the connection and properly configuring the Celerra, users gain continued access to filesystems in the event that the local Celerra and/or the Symmetrix becomes unavailable. The Celerra systems communicate over the network to ensure the primary and secondary Data Movers are synchronized with respect to metadata, while the physical data is transported over the SRDF link. In order to ensure an up-to-date and consistent copy of the filesystems on the remote Celerra, the synchronous mode of SRDF operation is currently the only supported SRDF operational mode, but both configurations of SRDF operation, active-passive and active-active, are supported. This means that active data can be configured to be on only one side of the SRDF link, or on both sides, depending on the customer's needs.
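The defining property of synchronous mirroring ("no lost I/Os") can be sketched as follows. This is a hypothetical illustration, not the SRDF implementation: the classes are invented; only the ordering rule (the remote write completes before the host sees the acknowledgement) comes from the text.

```python
# Hypothetical sketch of synchronous remote mirroring (the SRDF idea):
# a write is acknowledged only after both the local (R1) and the remote
# (R2) copy hold it, so a disaster loses no acknowledged I/O.

class Volume:
    def __init__(self):
        self.blocks = {}

    def write(self, addr, data):
        self.blocks[addr] = data

class SyncMirror:
    def __init__(self, r1, r2):
        self.r1, self.r2 = r1, r2    # source (R1) and target (R2) volumes

    def write(self, addr, data):
        self.r1.write(addr, data)
        self.r2.write(addr, data)    # remote write completes before...
        return "ack"                 # ...the host sees the acknowledgement

r1, r2 = Volume(), Volume()
mirror = SyncMirror(r1, r2)
mirror.write(7, b"payroll")
print(r2.blocks[7] == r1.blocks[7])   # True: the copies never diverge on ack
```

The latency cost of waiting for the remote write on every I/O is exactly why the notes restrict the solution to campus distances (60 network km).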


    Data Replication

    SNAPSURE, TIMEFINDER/FS & CELERRA REPLICATOR

    Next, we will examine several of the Celerra data replication solutions.


    Celerra SnapSure - Data Replication

    - Enables speedy recovery
      - Low volume activity, read-only applications
      - Simple file undelete
      - Incremental backup
    - Logical point-in-time view of Celerra data
      - Works for all Celerra implementations
      - Saves disk space
      - Maintains pointers to track changes to the primary filesystem
      - Not a mirror; creation of specialized volumes (R1/R2, BCVs) not required

    [Slide diagram: production filesystem and checkpoint on a Celerra with CLARiiON or Symmetrix storage]

    Due to the business demands for high data availability and speedy recovery, many methodologies are utilized to meet this requirement.

    The first methodology discussed is the SnapSure feature of the Celerra family. This methodology uses a logical point-in-time view of a Production File System (PFS) to facilitate incremental backup views of the PFS, individual file recovery, and rollback of an entire filesystem to a previous point-in-time image. SnapSure maintains pointers to changes to the primary file system, and reads data from either the primary filesystem or a copy area. The copy area is defined as a metavolume (SavVol).

    One of the obvious benefits of this solution is that it is storage array agnostic, i.e. it works for all NAS DART implementations. This also means that no specialized volumes need to be configured for this feature to function. Some other replication methodologies, such as SRDF and TimeFinder/FS, are dependent on the creation of Symmetrix Remote Data Facility and Business Continuance Volumes in the Symmetrix. SnapSure does not require any specialized volume creation and will therefore work with any back-end storage array (CLARiiON or Symmetrix).

    Multiple Checkpoints can be taken of the Production Filesystem, facilitating the ability to recover different point-in-time images of files or filesystems. Without using any other similar replication methodologies (e.g. Celerra Replicator), the currently supported maximum number of Checkpoints per filesystem is 32.
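The pointer-and-copy-area mechanism described above can be sketched as a copy-on-first-write scheme. This is a hypothetical illustration: the classes and the in-memory dictionaries are invented; only the behavior (old blocks are preserved in a SavVol before the PFS overwrites them, and a checkpoint read prefers the saved copy) comes from the text.

```python
# Hypothetical copy-on-first-write sketch of the SnapSure idea.

class Checkpoint:
    def __init__(self, pfs):
        self.pfs = pfs
        self.savvol = {}             # addr -> block as it was at snap time

    def read(self, addr):
        # Pointer logic: read the SavVol if the block has changed since the
        # checkpoint was taken, otherwise read straight from the PFS.
        return self.savvol.get(addr, self.pfs.blocks.get(addr))

class PFS:
    def __init__(self):
        self.blocks = {}
        self.checkpoints = []

    def write(self, addr, data):
        for ckpt in self.checkpoints:
            # Save the old contents once, before the first overwrite.
            if addr not in ckpt.savvol and addr in self.blocks:
                ckpt.savvol[addr] = self.blocks[addr]
        self.blocks[addr] = data

pfs = PFS()
pfs.write(1, "v1")
ckpt = Checkpoint(pfs)
pfs.checkpoints.append(ckpt)
pfs.write(1, "v2")                   # triggers a copy of "v1" into the SavVol
print(ckpt.read(1), pfs.blocks[1])   # v1 v2
```

Because only changed blocks are copied, the checkpoint consumes far less space than a full mirror, which is the "saves disk space / not a mirror" point on the slide.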


    Celerra SnapSure - Management

    - Multiple Checkpoints for recovery of different point-in-time images
      - GUI Checkpoint schedule manipulation
      - Checkpoint out-of-order delete
      - Automatic mounting upon creation

    For ease of management, Checkpoints can be manipulated with the GUI management interfaces, along

    with the ability to schedule the frequency of the Checkpoints.

    Most Checkpoint technology is chronologically linked; however, the DART 5.2 solution supports out-of-order deletion of Checkpoints while maintaining SnapSure integrity. SnapSure enhancements allow customers to delete any Checkpoint, instead of being constrained to deleting Checkpoints from the oldest first to maintain integrity.

    A customer may also delete an individual scheduled checkpoint instead of the entire schedule, and may

    refresh any checkpoint instead of only the oldest.

    Checkpoints created in DART v5.2 are automatically mounted on creation, and a hidden checkpoint directory is maintained in every subdirectory. This hidden directory also allows changing the default timestamp-based name (yyyy_mm_dd_hh_mm_ss_GMT) into something more administratively friendly.


    Celerra TimeFinder/FS - Data Replication

    - Point-in-time copy of file system
    - Provides an independent mirror copy of Celerra data for out-of-band business processes and support functions
    - Provides read and write functionality independent of the original
    - Requires Symmetrix storage
    - Celerra controlled features
      - Point-in-time copies
      - Dynamic mirroring
      - Multiple BCVs
      - Spans volumes
      - Entire filesystem
    - Applications
      - Backup and restore
      - Data warehouses
      - Live test data
      - Batch jobs

    BCV = Business Continuance Volume

    [Slide diagram: Celerra with Symmetrix storage; a point-in-time copy of the production filesystem (PFS) on a BCV]

    A second Celerra data replication method that provides high availability and rapid recovery is TimeFinder/FS. It uses a specially defined volume, called a Business Continuance Volume (BCV), to facilitate this functionality. As only the Symmetrix array is currently able to define a BCV, TimeFinder/FS on the Celerra family is currently restricted to implementations with Symmetrix only. The TimeFinder/FS implementation differs from a standard TimeFinder implementation in that it is file system based, implemented on top of a volume-based feature. A BCV, which attaches to a standard volume on which a file system resides, provides the foundation for the file system copy. File systems can share BCVs, although the BCV remains dedicated to a volume. This means that if multiple file systems share a single BCV, when one of the file systems is saved as a point-in-time copy, all other file systems are in an unknown state. This precludes recovery from such a copy, as the unknown-state file systems would also be recovered, because the underlying technology is volume based.

    TimeFinder/FS creates a point-in-time copy, or a dynamic mirror, of a filesystem, and is integrated into the Celerra Control Station. The TimeFinder/FS option allows users to create filesystem copies (with only a brief suspension of access to the original file system) for independent read/write copies of data, useful for non-disruptive file backups, live-copy test beds for new applications, and mirror copies of files for redundancy and business continuity. It facilitates backup and restore of older versions of a specific file, directory (by mounting the snapshot filesystem and manually recovering the file or directory), or complete file system. It can also function in mirroring and continuous-update mode for an active file system.
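The establish/split cycle behind a BCV copy can be sketched as follows. This is a hypothetical illustration, not the TimeFinder implementation: the class and method names are invented; only the behavior (establish synchronizes the BCV as a dynamic mirror, split freezes it as an independent point-in-time copy) comes from the text.

```python
# Hypothetical sketch of the TimeFinder/FS BCV cycle.

class BCVPair:
    def __init__(self, std):
        self.std = std               # standard volume holding the live fs
        self.bcv = None              # Business Continuance Volume copy
        self.mirroring = False

    def establish(self):
        self.bcv = dict(self.std)    # full synchronization of the mirror
        self.mirroring = True

    def write_std(self, addr, data):
        self.std[addr] = data
        if self.mirroring:
            self.bcv[addr] = data    # dynamic mirroring while established

    def split(self):
        self.mirroring = False       # BCV becomes an independent
                                     # point-in-time copy

std = {1: "a"}
pair = BCVPair(std)
pair.establish()
pair.write_std(1, "b")               # mirrored while established
pair.split()
pair.write_std(1, "c")               # after the split, only the standard changes
print(pair.bcv[1], std[1])           # b c
```

After the split, the BCV copy can be mounted and read or written independently of the original, which is what makes it usable for backups, test beds, and restores.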


    TimeFinder/FS Near Copy

    - Synchronous disk-based disaster recovery and data replication solution
      - Requires Symmetrix storage