
Configuration Guide for Red Hat Linux Host Attachment

Hitachi Unified Storage VM
Hitachi Virtual Storage Platform
Hitachi Universal Storage Platform V/VM

MK-96RD640-05

© 2007-2012 Hitachi, Ltd. All rights reserved.

    No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, or stored in a database or retrieval system for any purpose without the express written permission of Hitachi, Ltd. (hereinafter referred to as "Hitachi") and Hitachi Data Systems Corporation (hereinafter referred to as "Hitachi Data Systems").

    Hitachi and Hitachi Data Systems reserve the right to make changes to this document at any time without notice and assume no responsibility for its use. This document contains the most current information available at the time of publication. When new or revised information becomes available, this entire document will be updated and distributed to all registered users.

    Some of the features described in this document may not be currently available. Refer to the most recent product announcement or contact your local Hitachi Data Systems sales office for information about feature and product availability.

    Notice: Hitachi Data Systems products and services can be ordered only under the terms and conditions of the applicable Hitachi Data Systems agreements. The use of Hitachi Data Systems products is governed by the terms of your agreements with Hitachi Data Systems.

    Hitachi is a registered trademark of Hitachi, Ltd. in the United States and other countries. Hitachi Data Systems is a registered trademark and service mark of Hitachi, Ltd. in the United States and other countries.

    All other trademarks, service marks, and company names are properties of their respective owners.

    Microsoft product screen shots are reprinted with permission from Microsoft Corporation.

Contents

Preface
    Intended Audience
    Product Version
    Document Revision Level
    Changes in this Revision
    Referenced Documents
    Document Conventions
    Convention for Storage Capacity Values
    Accessing Product Documentation
    Getting Help
    Comments

1. Introduction
    About the Hitachi RAID Storage Systems
    Device Types
    Installation and Configuration Roadmap

2. Installing the Storage System
    Requirements
    Preparing for the Storage System Installation
        Hardware Installation Considerations
        LUN Manager Software Installation
        Setting the Host Mode
        Setting the Host Mode Options
    Configuring the Fibre-Channel Ports
        Port Address Considerations for Fabric Environments
        Loop ID Conflicts
    Connecting the Storage System to the Red Hat Linux Host
    Configuring the Host Fibre-Channel HBAs
    Verifying New Device Recognition

3. Configuring the New Disk Devices
    Setting the Number of Logical Units
    Partitioning the Devices
    Creating, Mounting, and Verifying the File Systems
        Creating the File Systems
        Creating the Mount Directories
        Mounting the New File Systems
        Verifying the File Systems
        Setting the Auto-Mount Parameters

4. Failover and SNMP Operation
    Host Failover
    Path Failover
    Device Mapper Multipath
    SNMP Remote System Management

5. Troubleshooting
    General Troubleshooting
    Calling the Hitachi Data Systems Support Center

Appendix A. Note on Using Veritas Cluster Server

Acronyms and Abbreviations


    Preface

    This document describes and provides instructions for installing and configuring the devices on the Hitachi RAID storage systems for operations in a Red Hat Linux environment. The Hitachi RAID storage system models include the Hitachi Unified Storage VM, Hitachi Virtual Storage Platform (VSP), and the Hitachi Universal Storage Platform V and Hitachi Universal Storage Platform VM (USP V/VM).

    Please read this document carefully to understand how to use this product, and maintain a copy for reference purposes.

    This preface includes the following information:

    Intended Audience
    Product Version
    Document Revision Level
    Changes in this Revision
    Referenced Documents
    Document Conventions
    Convention for Storage Capacity Values
    Accessing Product Documentation
    Getting Help
    Comments


Intended Audience

This document is intended for system administrators, Hitachi Data Systems representatives, and authorized service providers who are involved in installing, configuring, and operating the Hitachi RAID storage systems.

    Readers of this document should meet the following requirements:

    You should have a background in data processing and understand RAID storage systems and their basic functions.

    You should be familiar with the Hitachi RAID storage systems, and you should have read the User and Reference Guide for the storage system.

    You should be familiar with the Storage Navigator software for the Hitachi RAID storage systems, and you should have read the Storage Navigator Users Guide.

    You should be familiar with the Red Hat Linux operating system and the hardware hosting the Red Hat Linux system.

    You should be familiar with the hardware used to attach the Hitachi RAID storage system to the Red Hat Linux host, including fibre-channel cabling, host bus adapters (HBAs), switches, and hubs.

Product Version

This document revision applies to the following microcode levels:

    Hitachi Unified Storage VM: microcode 73-01-0x or later
    Hitachi Virtual Storage Platform: microcode 70-01-0x or later
    Hitachi Universal Storage Platform V/VM: microcode 60-01-3x or later

    Document Revision Level

    Revision Date Description

    MK-96RD640-P February 2007 Initial Release

    MK-96RD640-00 May 2007 Initial Release, supersedes and replaces MK-96RD640-P

    MK-96RD640-01 September 2007 Revision 01, supersedes and replaces MK-96RD640-00

    MK-96RD640-02 February 2010 Revision 02, supersedes and replaces MK-96RD640-01

    MK-96RD640-03 October 2010 Revision 03, supersedes and replaces MK-96RD640-02

    MK-96RD640-04 June 2012 Revision 04, supersedes and replaces MK-96RD640-03

    MK-96RD640-05 September 2012 Revision 05, supersedes and replaces MK-96RD640-04


Changes in this Revision

    Added the Hitachi Unified Storage VM storage system.
    Updated the host mode option information (Table 2-2): added information about host mode options 7, 22, 39, 41, 48, 49, 50, 51, 52, 65, 68, 69, and 71.

    Referenced Documents

Hitachi Unified Storage VM documents:
    User and Reference Guide, MK-92HM7005
    Provisioning Guide, MK-92HM7012
    Storage Navigator User Guide, MK-92HM7016
    Storage Navigator Messages, MK-92HM7017

Hitachi Virtual Storage Platform documents:
    Provisioning Guide for Open Systems, MK-90RD7022
    Storage Navigator User Guide, MK-90RD7027
    Storage Navigator Messages, MK-90RD7028
    User and Reference Guide, MK-90RD7042

Hitachi Universal Storage Platform V/VM documents:
    Storage Navigator Messages, MK-96RD613
    LUN Manager User's Guide, MK-96RD615
    LUN Expansion (LUSE) User's Guide, MK-96RD616
    Storage Navigator User's Guide, MK-96RD621
    Virtual LVI/LUN and Volume Shredder User's Guide, MK-96RD630
    User and Reference Guide, MK-96RD635
    Cross-OS File Exchange User's Guide, MK-96RD647
    Hitachi Dynamic Link Manager for Red Hat Linux User's Guide, MK-92DLM113

Document Conventions

    This document uses the following terminology conventions:

Convention: Hitachi RAID storage system, storage system
    Description: Refers to all models of the Hitachi RAID storage systems unless otherwise noted.

    This document uses the following typographic conventions:

Convention: Bold
    Indicates the following:
    - Text in a window or dialog box, such as menus, menu options, buttons, and labels. Example: On the Add Pair dialog box, click OK.
    - Text appearing on screen or entered by the user. Example: The -split option.
    - The name of a directory, folder, or file. Example: The horcm.conf file.

Convention: Italic
    Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: copy source-file target-file
    Note: Angle brackets (< >) are also used to indicate variables.

Convention: monospace
    Indicates text that is displayed on screen or entered by the user. Example: # pairdisplay -g oradb

Convention: < > angle brackets
    Indicates a variable, which is a placeholder for actual text provided by the user or system. Example: # pairdisplay -g <group>
    Note: Italic is also used to indicate variables.

Convention: [ ] square brackets
    Indicates optional values. Example: [ a | b ] indicates that you can choose a, b, or nothing.

Convention: { } braces
    Indicates required or expected values. Example: { a | b } indicates that you must choose either a or b.

Convention: | vertical bar
    Indicates that you have a choice between two or more options or arguments. Examples:
    [ a | b ] indicates that you can choose a, b, or nothing.
    { a | b } indicates that you must choose either a or b.

    This document uses the following icons to draw attention to information:

Note: Calls attention to important or additional information.

Tip: Provides helpful information, guidelines, or suggestions for performing tasks more effectively.

Caution: Warns of adverse conditions or consequences (for example, disruptive operations).

WARNING: Warns of severe conditions or consequences (for example, destructive operations).


Convention for Storage Capacity Values

Physical storage capacity values (for example, disk drive capacity) are calculated based on the following values:

    Physical capacity unit    Value
    1 KB                      1,000 (10^3) bytes
    1 MB                      1,000 KB or 1,000^2 bytes
    1 GB                      1,000 MB or 1,000^3 bytes
    1 TB                      1,000 GB or 1,000^4 bytes
    1 PB                      1,000 TB or 1,000^5 bytes
    1 EB                      1,000 PB or 1,000^6 bytes

Logical storage capacity values (for example, logical device capacity) are calculated based on the following values:

    Logical capacity unit     Value
    1 block                   512 bytes
    1 KB                      1,024 (2^10) bytes
    1 MB                      1,024 KB or 1,024^2 bytes
    1 GB                      1,024 MB or 1,024^3 bytes
    1 TB                      1,024 GB or 1,024^4 bytes
    1 PB                      1,024 TB or 1,024^5 bytes
    1 EB                      1,024 PB or 1,024^6 bytes

    Accessing Product Documentation

    The user documentation for the Hitachi RAID storage systems is available on the Hitachi Data Systems Portal: https://portal.hds.com. Check this site for the most current documentation, including important updates that may have been made after the release of the product.


    Getting Help

    The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://portal.hds.com

    Comments

    Please send us your comments on this document: [email protected] Include the document title and number, including the revision level (for example, -07), and refer to specific sections and paragraphs whenever possible. All comments become the property of Hitachi Data Systems.

    Thank you!

1. Introduction

    This chapter provides an overview of the Hitachi RAID storage systems and host attachment:

    About the Hitachi RAID Storage Systems
    Device Types
    Installation and Configuration Roadmap


    About the Hitachi RAID Storage Systems

    The Hitachi RAID storage systems offer a wide range of storage and data services, including thin provisioning with Hitachi Dynamic Provisioning software, application-centric storage management and logical partitioning, and simplified and unified data replication across heterogeneous storage systems. These storage systems are an integral part of the Services Oriented Storage Solutions architecture from Hitachi Data Systems, providing the foundation for matching application requirements to different classes of storage and delivering critical services such as:

    Business continuity services
    Content management services (search, indexing)
    Non-disruptive data migration
    Volume management across heterogeneous storage arrays
    Thin provisioning
    Security services (immutability, logging, auditing, data shredding)
    Data de-duplication
    I/O load balancing
    Data classification
    File management services

The Hitachi RAID storage systems provide heterogeneous connectivity to support multiple concurrent attachment to a variety of host operating systems, including Red Hat Linux, UNIX platforms, Windows, VMware, and mainframe servers, enabling massive consolidation and storage aggregation across disparate platforms. The storage systems can operate with multi-host applications and host clusters and are designed to handle very large databases as well as data warehousing and data mining applications that store and retrieve petabytes of data.

    The Hitachi RAID storage systems are configured with OPEN-V logical units (LUs) and are compatible with most fibre-channel (FC) host bus adapters (HBAs). Users can perform additional LU configuration activities using the LUN Manager, Virtual LVI/LUN (VLL), and LUN Expansion (LUSE) features provided by the Storage Navigator software, which is the primary user interface for the storage systems.

    For further information on storage solutions and the Hitachi RAID storage systems, contact your Hitachi Data Systems account team.


    Device Types

    Table 1-1 describes the types of logical devices (volumes) that can be installed and configured for operation with the Hitachi RAID storage systems on a Red Hat Linux operating system. Table 1-2 lists the specifications for devices supported by the Hitachi RAID storage systems. Logical devices are defined to the host as SCSI disk devices, even though the interface is fibre channel. For information about configuring devices other than OPEN-V, contact your Hitachi Data Systems representative.

    The sector size for the devices is 512 bytes.
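For illustration, each LU presented by the storage system appears to the host as an ordinary SCSI disk whose inquiry data carries the vendor HITACHI and the product name from Table 1-2. A minimal sketch of checking this from the host (the host, target, LUN, and revision values shown are examples only; your output will differ):

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HITACHI  Model: OPEN-V           Rev: 7301
  Type:   Direct-Access                    ANSI SCSI revision: 02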

    Table 1-1 Logical Devices Supported by the Hitachi RAID Storage Systems

    Device Type Description

    OPEN-V Devices OPEN-V logical units (LUs) are disk devices (VLL-based volumes) that do not have a predefined size.

    OPEN-x Devices OPEN-x logical units (LUs) (for example, OPEN-3, OPEN-9) are disk devices of predefined sizes. The Hitachi RAID storage systems support OPEN-3, OPEN-8, OPEN-9, OPEN-E, and OPEN-L, devices. For the latest information on usage of these device types, contact your Hitachi Data Systems account team.

    LUSE Devices (OPEN-x*n)

LUSE devices are combined LUs that can be from 2 to 36 times larger than standard OPEN-x LUs. Using LUN Expansion (LUSE) remote console software, you can configure these custom-size devices. LUSE devices are designated as OPEN-x*n, where x is the LU type (for example, OPEN-9*n) and 2 ≤ n ≤ 36. For example, a LUSE device created from 10 OPEN-3 LUs is designated as an OPEN-3*10 disk device. This lets the host combine logical devices and access the data stored on the Hitachi RAID storage system using fewer LU numbers.

    VLL Devices (OPEN-x VLL)

    VLL devices are custom-size LUs that are smaller than standard OPEN-x LUs. Using Virtual LVI/LUN remote console software, you can configure VLL devices by slicing a single LU into several smaller LUs that best fit your application needs to improve host access to frequently used files. The product name for the OPEN-x VLL devices is OPEN-x-CVS (CVS stands for custom volume size). The OPEN-L LU type does not support Virtual LVI/LUN.

    VLL LUSE Devices (OPEN-x*n VLL)

VLL LUSE devices combine Virtual LVI/LUN devices (instead of standard OPEN-x LUs) into LUSE devices. Use the Virtual LVI/LUN feature to create custom-size devices, then use the LUSE feature to combine the VLL devices. You can combine from 2 to 36 VLL devices into one VLL LUSE device. For example, an OPEN-3 LUSE volume created from 10 OPEN-3 VLL volumes is designated as an OPEN-3*10 VLL device (product name OPEN-3*10-CVS).

    FX Devices (3390-3A/B/C, OPEN-x-FXoto)

    The Hitachi Cross-OS File Exchange (FX) software allows you to share data across mainframe, UNIX, and PC server platforms using special multiplatform volumes. The VLL feature can be applied to FX devices for maximum flexibility in volume size. For more information about FX, see the Cross-OS File Exchange Users Guide, or contact your Hitachi Data Systems account team.

    FX devices are not SCSI disk devices, and must be installed and accessed as raw devices. UNIX/PC server hosts must use FX to access the FX devices as raw devices (no file system, no mount operation).

    The 3390-3B devices are write-protected from UNIX/PC server access. The Hitachi RAID storage system rejects all UNIX/PC server write operations (including fibre-channel adapters) for 3390-3B devices.

    Multiplatform devices are not write-protected for UNIX/PC server access. Do not execute any write operation by the fibre-channel adapters on these devices. Do not create a partition or file system on these devices. This will overwrite any data on the FX device and prevent the FX software from accessing the device.


Table 1-2 Device Specifications

Device Type    Category    Product Name      # of Blocks    # of       # of   # of Sectors  Capacity (MB)
               (Note 1)    (Note 2)          (512 B/blk)    Cylinders  Heads  per Track     (Note 3)

OPEN-3         SCSI disk   OPEN-3            4806720        3338       15     96            2347
OPEN-8         SCSI disk   OPEN-8            14351040       9966       15     96            7007
OPEN-9         SCSI disk   OPEN-9            14423040       10016      15     96            7042
OPEN-E         SCSI disk   OPEN-E            28452960       19759      15     96            13893
OPEN-L         SCSI disk   OPEN-L            71192160       49439      15     96            34761
OPEN-V         SCSI disk   OPEN-V            125827200 max  Note 5     15     128           Note 6
                                             (Note 4)
OPEN-3*n       SCSI disk   OPEN-3*n          4806720*n      3338*n     15     96            2347*n
OPEN-8*n       SCSI disk   OPEN-8*n          14351040*n     9966*n     15     96            7007*n
OPEN-9*n       SCSI disk   OPEN-9*n          14423040*n     10016*n    15     96            7042*n
OPEN-E*n       SCSI disk   OPEN-E*n          28452960*n     19759*n    15     96            13893*n
OPEN-L*n       SCSI disk   OPEN-L*n          71192160*n     49439*n    15     96            34761*n
OPEN-V*n       SCSI disk   OPEN-V*n          Note 4         Note 5     15     128           Note 6
OPEN-3 VLL     SCSI disk   OPEN-3-CVS        Note 4         Note 5     15     96            Note 6
OPEN-8 VLL     SCSI disk   OPEN-8-CVS        Note 4         Note 5     15     96            Note 6
OPEN-9 VLL     SCSI disk   OPEN-9-CVS        Note 4         Note 5     15     96            Note 6
OPEN-E VLL     SCSI disk   OPEN-E-CVS        Note 4         Note 5     15     96            Note 6
OPEN-V VLL     SCSI disk   OPEN-V            Note 4         Note 5     15     128           Note 6
OPEN-3*n VLL   SCSI disk   OPEN-3*n-CVS      Note 4         Note 5     15     96            Note 6
OPEN-8*n VLL   SCSI disk   OPEN-8*n-CVS      Note 4         Note 5     15     96            Note 6
OPEN-9*n VLL   SCSI disk   OPEN-9*n-CVS      Note 4         Note 5     15     96            Note 6
OPEN-E*n VLL   SCSI disk   OPEN-E*n-CVS      Note 4         Note 5     15     96            Note 6
OPEN-V*n VLL   SCSI disk   OPEN-V*n          Note 4         Note 5     15     128           Note 6
3390-3A        FX otm/mto  3390-3A           5820300        3345       15     116           2844
3390-3B        FX mto      3390-3B           5816820        3343       15     116           2844
3390-3C        FX otm      OP-C-3390-3C      5820300        3345       15     116           2844
FX OPEN-3      FX oto      OPEN-3            4806720        3338       15     96            2347
3390-3A VLL    FX otm/mto  3390-3A-CVS       Note 4         Note 5     15     116           Note 6
3390-3B VLL    FX mto      3390-3B-CVS       Note 4         Note 5     15     116           Note 6
3390-3C VLL    FX otm      OP-C-3390-3C-CVS  Note 4         Note 5     15     116           Note 6
FX OPEN-3 VLL  FX oto      OPEN-3-CVS        Note 4         Note 5     15     96            Note 6


    Note 1: The category of a device (SCSI disk or FX) determines its volume usage. Table 1-3 shows the volume usage for SCSI disk devices and FX devices. The SCSI disk devices (OPEN-x, VLL, LUSE, and VLL LUSE) are usually formatted with file systems for Red Hat Linux operations. The FX devices (3390-3A/B/C, and OPEN-x-FXoto) must be installed as raw devices and can only be accessed using the FX software. Do not partition or create a file system on any device used for FX operations.

Table 1-3 Volume Usage for Device Categories

Category    Device Type                                            Volume Usage
SCSI disk   OPEN-x, OPEN-x VLL, OPEN-x*n LUSE, OPEN-x*n VLL LUSE   File system or raw device (for example,
                                                                   some applications use raw devices)
FX          3390-3A/B/C, 3390-3A/B/C VLL, OPEN-x for FXoto,        Raw device
            OPEN-x VLL for FXoto

    Note 2: The command device (used for Command Control Interface (CCI) operations) is distinguished by CM on the product name (for example, OPEN-3-CM, OPEN-3-CVS-CM). The product name for VLL devices is OPEN-x-CVS, where CVS = custom volume size.

Note 3: This capacity is the maximum size that can be entered using the lvcreate command. The device capacity can sometimes be changed by the BIOS or host bus adapter. Also, different capacities may result from different conventions, such as 1 MB = 1000^2 bytes versus 1 MB = 1024^2 bytes.
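As a quick cross-check of Note 3 against Table 1-2, the OPEN-3 capacity can be derived from its block count, counting 1 MB as 1024^2 bytes (a sketch using the bc calculator):

# echo "4806720 * 512 / (1024 * 1024)" | bc
2347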

Note 4: The number of blocks for a VLL volume is calculated as follows:

    # of blocks = (# of data cylinders) × (# of heads) × (# of sectors per track)

The number of sectors per track is 128 for OPEN-V and 96 for the other emulation types.

Example: For an OPEN-3 VLL volume with capacity = 37 MB:

    # of blocks = 53 cylinders (see Note 5) × 15 heads × 96 sectors per track = 76320

Note 5: The number of data cylinders for a Virtual LVI/LUN volume is calculated as follows (⌈ ⌉ means that the value should be rounded up to the next integer):

Number of data cylinders for an OPEN-x VLL volume (except OPEN-V):
    # of cylinders = ⌈capacity (MB) × 1024/720⌉
    Example: For an OPEN-3 VLL volume with capacity = 37 MB:
    # of cylinders = ⌈37 × 1024/720⌉ = ⌈52.62⌉ = 53 cylinders

Number of data cylinders for an OPEN-V VLL volume:
    # of cylinders = ⌈(capacity (MB) specified by user) × 16/15⌉
    Example: For an OPEN-V VLL volume with capacity = 50 MB:
    # of cylinders = ⌈50 × 16/15⌉ = ⌈53.33⌉ = 54 cylinders

Number of data cylinders for a VLL LUSE volume (except OPEN-V):
    # of cylinders = ⌈capacity (MB) × 1024/720⌉ × n
    Example: For an OPEN-3 VLL LUSE volume with capacity = 37 MB and n = 4:
    # of cylinders = ⌈37 × 1024/720⌉ × 4 = 53 × 4 = 212

Number of data cylinders for an OPEN-V VLL LUSE volume:
    # of cylinders = ⌈(capacity (MB) specified by user) × 16/15⌉ × n
    Example: For an OPEN-V VLL LUSE volume with capacity = 50 MB and n = 4:
    # of cylinders = ⌈50 × 16/15⌉ × 4 = 54 × 4 = 216

Number of data cylinders for a 3390-3A/C volume:
    # of cylinders = (number of cylinders) + 9

Number of data cylinders for a 3390-3B VLL volume:
    # of cylinders = (number of cylinders) + 7

S1 = maximum lvcreate size value for VLL, LUSE, and VLL LUSE devices. Calculate the maximum size value (in MB) as follows: S1 = (PE size) × (free PE count). Note: Do not exceed the maximum lvcreate size value of 128 GB.

    Note 6: The size of an OPEN-x VLL volume is specified by capacity in MB, not number of cylinders. The size of an OPEN-V VLL volume can be specified by capacity in MB or number of cylinders. The user specifies the volume size using the Virtual LVI/LUN software.
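The arithmetic in Notes 4 and 5 can be reproduced with shell integer math, using (a + b - 1) / b for the round-up; the results match the worked examples above:

Cylinders for an OPEN-3 VLL volume of 37 MB:
# echo $(( (37 * 1024 + 719) / 720 ))
53

Cylinders for an OPEN-V VLL volume of 50 MB:
# echo $(( (50 * 16 + 14) / 15 ))
54

Blocks for the 37-MB OPEN-3 VLL example (cylinders × heads × sectors per track):
# echo $(( 53 * 15 * 96 ))
76320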


    Installation and Configuration Roadmap

    The steps in Table 1-4 outline the general process you follow to install and configure the Hitachi RAID storage system on a Red Hat Linux operating system.

    Table 1-4 Installation and Configuration Roadmap


    1. Verify that the system on which you are installing the Hitachi RAID storage system meets the minimum requirements for this release.

    2. Prepare the Hitachi RAID storage system for the installation.

    3. Connect the Hitachi RAID storage system to a Red Hat Linux host.

    4. Configure the fibre-channel HBAs for the installation.

    5. Verify recognition of the new devices.

    6. Set the number of logical units.

    7. Partition the disk devices.

    8. Create file systems and mount directories, mount and verify the file systems, and set and verify auto-mount parameters.
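As a preview of steps 5 through 8, the host-side work follows this general pattern (a sketch only: /dev/sdb, the /mnt/0001 mount point, and the ext3 file-system type are example values, and the detailed procedures are given in the chapters that follow):

# dmesg | grep -i scsi            (step 5: verify that the new LUs were recognized)
# fdisk /dev/sdb                  (step 7: partition a new device)
# mkfs -t ext3 /dev/sdb1          (step 8: create a file system)
# mkdir /mnt/0001                 (step 8: create the mount directory)
# mount -t ext3 /dev/sdb1 /mnt/0001
# df -h                           (step 8: verify that the file system is mounted)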


2. Installing the Storage System

    This chapter describes how to install the Hitachi RAID storage system on a Red Hat Linux operating system:

    Requirements
    Preparing for the Storage System Installation
    Configuring the Fibre-Channel Ports
    Connecting the Storage System to the Red Hat Linux Host
    Configuring the Host Fibre-Channel HBAs
    Verifying New Device Recognition


Requirements

Table 2-1 lists and describes the requirements for installing the Hitachi RAID storage system on the Red Hat Linux operating system.

Table 2-1 Requirements

Hitachi RAID storage system:
    Hitachi Unified Storage VM
    Hitachi Virtual Storage Platform (VSP)
    Hitachi Universal Storage Platform V/VM (USP V/VM)
    The availability of features and devices depends on the level of microcode installed on the Hitachi RAID storage system. Use the LUN Manager software on Storage Navigator to configure the fibre-channel ports.

Red Hat Linux AS/ES operating system:
    Refer to the Hitachi Data Systems interoperability site for specific support information for the Red Hat Linux operating system: http://www.hds.com/products/interoperability
    DM Multipath: Red Hat Enterprise Linux (RHEL) version 5.4 or later (x64 or x32) is required for Device Mapper (DM) Multipath operations.
    Root (superuser) login access to the host system is required.

Red Hat Linux server:
    Refer to the Red Hat Linux user documentation for server hardware and configuration requirements.

Fibre-channel HBAs:
    The Hitachi RAID storage systems support fibre-channel HBAs equipped as follows:
    - 8-Gbps fibre-channel interface, including shortwave non-OFC (open fibre control) optical interface and multimode optical cables with LC connectors.
    - 4-Gbps fibre-channel interface, including shortwave non-OFC (open fibre control) optical interface and multimode optical cables with LC connectors.
    - 2-Gbps fibre-channel interface, including shortwave non-OFC (open fibre control) optical interface and multimode optical cables with LC connectors.
    - 1-Gbps fibre-channel interface, including shortwave non-OFC optical interface and multimode optical cables with SC connectors.
    If a switch or HBA with a 1-Gbps transfer rate is used, configure the device to use a fixed 1-Gbps setting instead of auto-negotiation; otherwise, a connection may not be established. Note, however, that the transfer speed of a CHF port cannot be set to 1 Gbps when the CHF is 8US/8UFC/16UFC, so a 1-Gbps HBA or switch cannot be connected to those CHFs.
    Do not connect OFC-type fibre-channel interfaces to the Hitachi RAID storage system. For information about supported fibre-channel HBAs, optical cables, hubs, and fabric switches, contact your Hitachi Data Systems account team or see the Hitachi Data Systems interoperability site: http://www.hds.com/products/interoperability

Fibre-channel utilities and tools:
    Refer to the documentation for your HBA for information about installing the utilities and tools for your adapter.

Fibre-channel drivers:
    Do not install or load the drivers yet. When instructed in this guide to install the drivers for your fibre-channel HBA, refer to the documentation for your adapter.
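After the HBA driver is installed (later in this chapter), one way to confirm that the adapter is recognized and that the link negotiated the expected rate is through the fc_host attributes of the Linux fibre-channel transport class (a sketch; host0 is an example instance name, and these attributes assume a 2.6-based kernel such as RHEL 5):

# lspci | grep -i fibre
# cat /sys/class/fc_host/host0/port_state
Online
# cat /sys/class/fc_host/host0/speed
4 Gbit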


    Preparing for the Storage System Installation

    The following sections describe preinstallation considerations to follow before installing the Hitachi RAID storage system.

    Hardware Installation Considerations

    The Hitachi Data Systems representative performs the hardware installation by following the precautions and procedures in the Maintenance Manual.

Hardware installation activities include:

    Assembling all hardware and cabling.

    Installing and formatting the logical devices (LDEVs). Be sure to obtain the desired LDEV configuration information from the user, including the desired number of OPEN-x, LUSE, VLL, VLL LUSE, and multiplatform (FX) devices.

    Installing the fibre-channel HBAs and cabling. The total fibre cable length attached to each fibre-channel adapter must not exceed 500 meters (1,640 feet).
    - Do not connect any OFC-type connectors to the storage system.
    - Do not connect or disconnect fibre-channel cabling that is being actively used for I/O. This can cause the Red Hat Linux system to hang. Always confirm that the devices on the fibre cable are offline before connecting or disconnecting the fibre cable (an example follows below).

    Configuring the fibre port topology. The fibre topology parameters for each fibre-channel port depend on the type of device to which the port is connected, and the type of port. Determine the topology parameters supported by the device, and set your topology accordingly (see Configuring the Fibre-Channel Ports).

    Before starting the installation, check all specifications to ensure proper installation and configuration.
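For example, a device on the affected path can be quiesced from the host side before a cable is disconnected (a sketch; /mnt/0001 and /dev/sdb are example names for a mount point and its device):

# umount /mnt/0001
# lsof /dev/sdb
# blockdev --flushbufs /dev/sdb

The umount stops file-system I/O, lsof verifies that no process still holds the device open, and blockdev flushes any remaining buffered writes.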

LUN Manager Software Installation

    The LUN Manager software on Storage Navigator is used to configure the fibre-channel ports. For instructions on installing LUN Manager, see the Storage Navigator User Guide for the storage system.

    Setting the Host Mode

    The Hitachi RAID storage system has host modes that the storage administrator must set for all new installations (newly connected ports) to Red Hat Linux hosts. The required host mode for Red Hat Linux is 00. Do not select a host mode other than 00 for Red Hat Linux.

    Use the LUN Manager software to set the host mode. For instructions, see the Provisioning Guide or LUN Manager User Guide for the storage system.

    Caution: Changing host modes on a Hitachi RAID storage system that is already installed and configured is disruptive and requires the server to be rebooted.


Setting the Host Mode Options

When each new host group is added, the storage administrator must make sure that the host mode options (HMOs) are set for all host groups connected to Red Hat Linux hosts. Table 2-2 lists and describes the HMOs that can be used for Red Hat Linux operations. Use the LUN Manager software on Storage Navigator to set the HMOs.

WARNING: Before setting any HMO, review its functionality carefully to determine whether it can be used for your configuration and environment. If you have any questions or concerns, contact your Hitachi Data Systems representative or the Support Center. Changing HMOs on a Hitachi RAID storage system that is already installed and configured is disruptive and requires the server to be rebooted.

    Table 2-2 Host Mode Options for Red Hat Linux Operations

HMO 2: Veritas DBE+RAC
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description:
    (1) Changes the response of Test Unit Ready (TUR) for Persistent Reserve.
    (2) According to the SCSI-3 specification, Reservation Conflict is returned in response to a TUR issued via a path with no Reservation Key registered.
    (3) Setting HMO 2 to ON enables such a TUR, which would otherwise receive Reservation Conflict, to complete normally.
    Note: HMO 2 is required when the Veritas DBE for Oracle RAC (I/O Fencing) function is in use.
Comments: Mandatory. Do not apply this option to Sun Cluster.

HMO 7: Unit Attention response when adding a LUN
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Changes the setting of whether to return a Unit Attention (UA) response when a LUN is added.
    ON: UA response is returned. Sense code: REPORTED LUNS DATA HAS CHANGED.
    OFF (default): UA response is not returned.
Notes:
    1. Set HMO 7 to ON when you expect the REPORTED LUNS DATA HAS CHANGED UA at a SCSI path change.
    2. If the Unit Attention report occurs frequently and the load on the host side becomes high, data transfer cannot be started on the host side and a timeout may occur.
    3. If both HMO 7 and HMO 69 are set to ON, the UA of HMO 69 is returned to the host.
Comments: For VSP and HUS VM, HMO 7 works regardless of the host mode setting. For USP V/VM, the host mode must be 0x00 or 0x09 for microcode 60-01-29-99/99 or earlier.

HMO 13: SIM notification for link failures
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Provides SIM notification when the number of link failures detected between ports exceeds the threshold.
Comments: Optional. Configure HMO 13 only when you are requested to do so.


HMO 22: Mode Sense response for reserved volumes
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: When a reserved volume receives a Mode Sense command from a node that is not reserving the volume, the host receives the following response from the storage system:
    ON: Normal response.
    OFF (default): Reservation Conflict.
Notes:
    1. When HMO 22 is applied, the volume status (reserved/non-reserved) is checked more frequently (several tens of milliseconds per LU).
    2. When HMO 22 is applied, the host OS does not receive warning messages when a Mode Select command is issued to a reserved volume.
    3. There is no influence on Veritas Cluster Server software when HMO 22 is OFF. Set HMO 22 to ON when there are numerous reservation conflicts.
    4. Set HMO 22 to ON when Veritas Cluster Server is connected.
Comments: USP V/VM microcode 60-02-52-00/00 or later (within the 60-02-5x range), or 60-03-2x-00/00 or later.

HMO 39: Job reset and UA range on Target Reset
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Resets jobs and returns UA to all initiators connected to the host group where a Target Reset has occurred.
    ON:
        Job reset range: reset is performed on the jobs of all initiators connected to the host group where the Target Reset occurred.
        UA set range: UA is returned to all initiators connected to the host group where the Target Reset occurred.
    OFF (default):
        Job reset range: reset is performed on the jobs of the initiator that issued the Target Reset.
        UA set range: UA is returned to the initiator that issued the Target Reset.
    Note: This option is used in the SVC environment, where the job reset range and UA set range need to be controlled per host group when a Target Reset is received.
Comments: USP V/VM: 60-08-01-00/00 or later. VSP: 70-02-03-00/00 or later.

HMO 41: Priority for Inquiry/Report LUN
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Gives priority to starting Inquiry/Report LUN commands issued from the host where this option is set.
    ON: Inquiry/Report LUN is started with priority.
    OFF (default): The operation is the same as before.
Comments: USP V/VM: 60-03-24-00/00 or later.


HMO 48: Suppression of S-VOL transition to SSWS
Storage system: USP V/VM. Host mode: Common.
Description: By setting this option to ON, in normal operation the pair status of the S-VOL is not changed to SSWS even when Read commands exceeding the threshold (1,000 per 6 minutes) are issued while a specific application is used.
    ON: The pair status of the S-VOL is not changed to SSWS if Read commands exceeding the threshold are issued.
    OFF (default): The pair status of the S-VOL is changed to SSWS if Read commands exceeding the threshold are issued.
Notes:
    1. Set this option to ON for the host group if the transition of the pair status to SSWS is not desired when an application that issues Read commands (*1) exceeding the threshold (1,000 per 6 minutes) to the S-VOL is used in a HAM environment.
       *1: Currently, this applies to the vxdisksetup command of Solaris VxVM.
    2. Even when a failure occurs in the P-VOL, if this option is set to ON (that is, the pair status of the S-VOL is not changed to SSWS (*2)), the response time of a Read command to the S-VOL whose pair status remains PAIR takes several milliseconds. If the option is set to OFF, the response time of Read commands to the S-VOL recovers to be equal to that of the P-VOL, because an error in the P-VOL is assumed when Read commands exceeding the threshold are issued.
       *2: Until the S-VOL receives a Write command, the pair status of the S-VOL is not changed to SSWS.
Comments: USP V/VM: 60-06-10-00/10 or later (within the 60-06-1x range), or 60-06-21-00/00 or later.


HMO 49: BB_Credit selection (low bit)
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Selects the BB_Credit value (HMO 49 is the low bit).
    ON: The storage system operates with a BB_Credit value of 80 or 255.
    OFF (default): The storage system operates with a BB_Credit value of 40 or 128.
    The BB_Credit value is decided by the 2 bits of HMO 50/HMO 49:
        00: existing mode (BB_Credit value = 40)
        01: BB_Credit value = 80
        10: BB_Credit value = 128
        11: BB_Credit value = 255
    Note: This option is applied when the following two conditions are met:
        Data frame transfer in a long-distance connection exceeds the BB_Credit value.
        System option mode 769 is set to OFF (retry operation is enabled at TC/UR path creation).
VSP, HUS VM:
    1. When HMO 49 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC).
    2. This HMO can work only when microcode supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
    3. The HMO setting is applied only to the Initiator port and RCU Target port. This function is applicable only when the 8UFC or 16UFC PCB is used on the RCU/MCU.
    4. If this option is used, the Point-to-Point setting is necessary.
    5. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 49 to OFF.
    6. If HMO 49 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
    7. Make sure to change HMO 49 from OFF to ON or from ON to OFF only after the pair is suspended or when the load is low.
    8. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
    9. This function is intended for long-distance data transfer. If HMO 49 is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side.
USP V/VM:
    Notes 1 through 9 apply as above, except that the SSB log of link down is output on the MCU (M-DKC) only, the HMO setting is applied only to the Initiator port, and the function requires the 8US PCB on the RCU/MCU (so the PCB to remove after setting HMO 49 to OFF is the 8US PCB).
Comments: VSP: 70-02-31-00/00 and higher (within the 70-02-3x range), or 70-02-54-00/00 or later. USP V/VM: 60-07-51-00/00 or later.


HMO 50: BB_Credit selection (high bit)
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Selects the BB_Credit value (HMO 50 is the high bit).
    ON: The storage system operates with a BB_Credit value of 128 or 255.
    OFF (default): The storage system operates with a BB_Credit value of 40 or 80.
    The BB_Credit value is decided by the 2 bits of HMO 50/HMO 49:
        00: existing mode (BB_Credit value = 40)
        01: BB_Credit value = 80
        10: BB_Credit value = 128
        11: BB_Credit value = 255
    Note: This option is applied when the following two conditions are met:
        Data frame transfer in a long-distance connection exceeds the BB_Credit value.
        System option mode 769 is set to OFF (retry operation is enabled at TC/UR path creation).
VSP, HUS VM:
    1. When HMO 50 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC).
    2. This HMO can work only when microcode supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
    3. The HMO setting is applied only to the Initiator port and RCU Target port. This function is applicable only when the 8UFC or 16UFC PCB is used on the RCU/MCU.
    4. If this option is used, the Point-to-Point setting is necessary.
    5. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 50 to OFF.
    6. If HMO 50 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
    7. Make sure to change HMO 50 from OFF to ON or from ON to OFF only after the pair is suspended or when the load is low.
    8. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
    9. This function is intended for long-distance data transfer. If this option is set to ON with a distance of 0 km, a data transfer error may occur on the RCU side.
USP V/VM:
    Notes 1 through 9 apply as above, except that the SSB log of link down is output on the MCU (M-DKC) only, the HMO setting is applied only to the Initiator port, and the function requires the 8US PCB on the RCU/MCU (so the PCB to remove after setting HMO 50 to OFF is the 8US PCB).
Comments: VSP: 70-02-31-00/00 and higher (within the 70-02-3x range), or 70-02-54-00/00 or later. USP V/VM: 60-07-51-00/00 or later.


HMO 51: TrueCopy operation condition
Storage systems: HUS VM, VSP, USP V/VM. Host mode: Common.
Description: Selects the operation condition of TrueCopy.
    ON: TrueCopy operates with the performance improvement logic. (When a WRITE command is issued, FCP_CMD/FCP_DATA is issued continuously while XFER_RDY issued from the RCU side is suppressed.)
    OFF (default): TrueCopy operates with the existing logic.
    Note: This option is applied when TrueCopy write I/O is executed.
VSP, HUS VM:
    1. When HMO 51 is set to ON, an SSB log of link down is output on the MCU (M-DKC) and RCU (R-DKC).
    2. This HMO can work only when microcode supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
    3. The HMO setting is applied only to the Initiator port and RCU Target port. This function is applicable only when the 8UFC or 16UFC PCB is used on the RCU/MCU.
    4. When removing the 8UFC or 16UFC PCB, the operation must be executed after setting HMO 51 to OFF.
    5. If HMO 51 is set to ON while SOM 769 is ON, path creation may fail after automatic port switching.
    6. Make sure to change HMO 51 from OFF to ON or from ON to OFF only after the pair is suspended or when the load is low.
    7. The RCU Target that is connected with the MCU where this mode is set to ON cannot be used for UR.
    8. When HMO 51 is set to ON using RAID600 as the MCU and RAID700 as the RCU, the RAID600 microcode must be 60-07-63-00/00 or higher (within the 60-07-6x range) or 60-08-06-00/00 or higher.
    9. Path attribute change (Initiator port to RCU Target port, or RCU Target port to Initiator port) accompanied by Hyperswap is enabled after setting HMO 51 to ON. If HMO 51 is already set to ON on both paths, HMO 51 continues to be applied on the paths even after execution of Hyperswap.
    10. In a storage system with the maximum number of MPBs (8 MPBs) mounted, HMO 51 may need to be used together with HMO 65. In this case, also see HMO 65.
USP V/VM:
    Notes 1 through 9 apply as above, except that the SSB log of link down is output on the MCU (M-DKC) only, the HMO setting is applied only to the Initiator port, and the function requires the 8US PCB on the RCU/MCU (so the PCB to remove after setting HMO 51 to OFF is the 8US PCB). Note 10 does not apply.
Comments: VSP: 70-02-31-00/00 and higher (within the 70-02-3x range), or 70-02-54-00/00 or later. USP V/VM: 60-07-51-00/00 or later.


HMO 52: Transfer of SCSI-2 reserve information with HAM
Storage system: VSP. Host mode: Common.
Description: Enables a function that uses HAM to transfer SCSI-2 reserve information. If you use software for a cluster system that uses a SCSI-2 Reservation, set HMO 52 on the host groups where the executing node and standby node reside.
    ON: The function to transfer SCSI-2 reserve information is enabled.
    OFF (default): The function to transfer SCSI-2 reserve information is not enabled.
Notes:
    1. To use HAM to transfer SCSI-2 reserve information, the cluster middleware (alternate path) on the host side must have been evaluated with this function.
    2. Set this HMO to ON on both the P-VOL and S-VOL paths to use this function.
Comments: VSP: 70-03-01-00/00 or later.

HMO 65: TrueCopy Round Trip mode with maximum MPBs
Storage system: VSP. Host mode: Common.
Description: Selects the TrueCopy operation mode when the Round Trip function is enabled by setting HMO 51 to ON in a configuration with the maximum number of MPBs.
    ON: TrueCopy is performed in the enhanced performance improvement mode of Round Trip.
    OFF (default): TrueCopy is performed in the existing Round Trip mode.
Notes:
    1. The option is applied when response performance for an update I/O degrades while the Round Trip function is used in a configuration with the maximum number of MPBs.
    2. When using the option, set HMO 51 to ON.
    3. The option can work only when HMO 51 is ON. Refer to the description of HMO 51.
    4. When the option is set to ON, SSB logs of link down are output on the MCU (M-DKC) and RCU (R-DKC).
    5. The option can work only when microcode supporting this function is installed on both the MCU (M-DKC) and RCU (R-DKC).
    6. The option setting is applied to the Initiator port and RCU Target port. The function is applicable only when the PCB type of 8UFC or 16UFC is used on the MCU and RCU and in a configuration with 4 sets of MPBs on the MCU.
    7. Change the option setting from OFF to ON or from ON to OFF only after the pair is suspended or when the load is low.
    8. Before downgrading the microcode from a supported version to an unsupported version, set the option to OFF. (Microcode exchange without setting the option to OFF is guarded; in that case, set the option to OFF and then retry the microcode exchange.)
Comments: VSP: 70-03-32-00/00 or later.


HMO 68: Support for WriteSame(16) detection by Linux
Storage system: VSP. Host mode: 0x00.
Description: By setting this option, the Linux OS can judge whether the conditions are met to issue WriteSame(16) to the storage system.
    ON: 05h is set in the Version field of Standard Inquiry. The LBPRZ and LBPME bits of Read Capacity(16) are set. The Block Limits VPD page and LBP VPD page are returned.
    OFF (default): 02h is set in the Version field of Standard Inquiry. The LBPRZ and LBPME bits of Read Capacity(16) are not supported. The Block Limits VPD page and LBP VPD page are not supported.
Notes:
    1. This HMO is applied when Dynamic Provisioning is used with Linux kernel 2.6.33 or higher.
    2. HMO 68 should be used separately from HMO 63, which also changes the setting values of the Standard Inquiry page and Read Capacity(16) and switches the support for the Block Limits VPD page.
Comments: VSP: 70-04-01-00/00 or later.

HMO 69: UA response after LU capacity expansion
Storage system: VSP. Host mode: Common.
Description: Enables or disables the UA response to a host when an LU whose capacity has been expanded receives a command from the host.
    ON: When an LU whose capacity has been expanded receives a command from a host, UA is returned to the host. Sense key: 0x06 (Unit Attention). Sense code: 0x2a09 (Capacity Data Has Changed), 0x2a01 (Mode Parameters Changed).
    OFF (default): When an LU whose capacity has been expanded receives a command from a host, UA is not returned to the host.
Notes:
    1. The option is applied when returning UA to the host after LUSE capacity expansion is required.
    2. If both HMO 7 and HMO 69 are set to ON, the UA of HMO 69 is returned to the host.
Comments: VSP: 70-03-36-00/00 or later.

HMO 71: Sense code when a DP pool is blocked
Storage system: VSP. Host mode: Common.
Description: Switches the sense key/sense code returned as a response to Check Condition when a read/write I/O is received while a DP pool is blocked.
    ON: The sense key/sense code returned is 03 (MEDIUM ERROR)/9001 (VENDOR UNIQUE).
    OFF (default): The sense key/sense code returned is 0400 (LOGICAL UNIT NOT READY, CAUSE NOT REPORTABLE).
Note: This option is applied if switching the sense key/sense code returned as a response to Check Condition when a read/write I/O is received while a DP pool is blocked can prevent a device file from being blocked, thereby reducing the extent of impact on the host side.
Comments: VSP: 70-04-01-00/00 or later.

Configuring the Fibre-Channel Ports

    Use LUN Manager to configure the fibre-channel ports with the appropriate fibre parameters. You select the appropriate settings for each port based on the device to which the port is connected. Determine the topology parameters supported by the device, and set your topology accordingly.

    The Hitachi RAID storage system supports up to 2048 logical units per fibre-channel port (512 per host group). Check your fibre-channel adapter documentation and your Linux system documentation to determine the total number of devices that can be supported.

    Table 2-3 explains the settings for defining port parameters. For instructions, see the Provisioning Guide or LUN Manager User Guide for the storage system.

Table 2-3 Fibre Parameter Settings

Fabric    Connection      Provides
Enable    FC-AL           FL-port (fabric port)
Enable    Point-to-Point  F-port (fabric port)
Disable   FC-AL           NL-port (private arbitrated loop)
Disable   Point-to-Point  Not supported

Note: If you plan to connect different types of servers to the Hitachi RAID storage system via the same fabric switch, use the zoning function of the fabric switch.

    Contact Hitachi Data Systems for information about port topology configurations supported by HBA/switch combinations. Not all switches support F-port connection.


    Port Address Considerations for Fabric Environments

    In fabric environments, port addresses are assigned automatically by fabric switch port number and are not controlled by the port settings. In arbitrated loop environments, the port addresses are set by entering an AL-PA (arbitrated-loop physical address, or loop ID).

    Table 2-4 shows the available AL-PA values ranging from 01 to EF. Fibre-channel protocol uses the AL-PAs to communicate on the fibre-channel link, but the software driver of the platform host adapter translates the AL-PA value assigned to the port to a SCSI TID.

    Table 2-4 Available AL-PA Values

    EF CD B2 98 72 55 3A 25

    E8 CC B1 97 71 54 39 23

    E4 CB AE 90 6E 53 36 1F

    E2 CA AD 8F 6D 52 35 1E

    E1 C9 AC 88 6C 51 34 1D

    E0 C7 AB 84 6B 4E 33 1B

    DC C6 AA 82 6A 4D 32 18

    DA C5 A9 81 69 4C 31 17

    D9 C3 A7 80 67 4B 2E 10

    D6 BC A6 7C 66 4A 2D 0F

    D5 BA A5 7A 65 49 2C 08

    D4 B9 A3 79 63 47 2B 04

    D3 B6 9F 76 5C 46 2A 02

    D2 B5 9E 75 5A 45 29 01

    D1 B4 9D 74 59 43 27

    CE B3 9B 73 56 3C 26

    Loop ID Conflicts

The Red Hat Linux operating system assigns port addresses from lowest (01) to highest (EF). To avoid loop ID conflicts, assign the port addresses from highest to lowest (that is, starting at EF). The AL-PAs should be unique for each device on the loop. Do not use more than one port address with the same TID on the same loop (for example, addresses EF and CD both have TID 0; see Table 2-4).


    Connecting the Storage System to the Red Hat Linux Host

    After you prepare the hardware, software, and fibre-channel HBAs, connect the Hitachi RAID storage system to the Red Hat Linux system.

Table 2-5 summarizes the steps for connecting the Hitachi RAID storage system to the Red Hat Linux host. Some steps are performed by the Hitachi Data Systems representative, while others are performed by the user.

    Table 2-5 Connecting the Storage System to the Red Hat Linux Host

Activity | Performed by | Description

1. Verify storage system installation (Hitachi Data Systems representative):
   Confirm that the status of the fibre-channel HBAs and LDEVs is NORMAL.

2. Shut down the Red Hat Linux system (User):
   Power off the Red Hat Linux system before connecting the Hitachi RAID storage system.
   - Shut down the Red Hat Linux system.
   - When shutdown is complete, power off the Red Hat Linux display.
   - Power off all peripheral devices except for the Hitachi RAID storage system.
   - Power off the host system. You are now ready to connect the Hitachi RAID storage system.

3. Connect the Hitachi RAID storage system (Hitachi Data Systems representative):
   Install fibre-channel cables between the storage system and the Red Hat Linux system. Follow all precautions and procedures in the Maintenance Manual. Check all specifications to ensure proper installation and configuration.

4. Power on the Red Hat Linux system (User):
   Power on the Red Hat Linux system after connecting the Hitachi storage system.
   - Power on the Red Hat Linux system display.
   - Power on all peripheral devices. The Hitachi RAID storage system should be on, and the fibre-channel ports should be configured. If the fibre ports are configured after the Linux system is powered on, restart the system so that the new devices are recognized.
   - Confirm the ready status of all peripheral devices, including the Hitachi RAID storage system.
   - Power on the Red Hat Linux system.

5. Boot the Red Hat Linux system

  • Configuring the Host Fibre-Channel HBAs

You need to configure the fibre-channel HBAs connected to the Hitachi RAID storage system. The HBAs have many configuration options. This section provides the minimum requirements for configuring the host fibre-channel adapters for operation with the Hitachi RAID storage system. Use the same settings and device parameters for all devices on the Hitachi RAID storage system.

    The queue depth requirements for the devices on the Hitachi RAID storage system are specified in Table 2-6. You can adjust the queue depth for the devices later as needed (within the specified range) to optimize the I/O performance of the devices.

For QLogic adapters, enable the BIOS. For other adapters, you might need to disable the BIOS to prevent the system from trying to boot from the Hitachi RAID storage system. Refer to the documentation for your adapter.

    Several other parameters (for example, FC, fabric, multipathing) may also need to be set. See the user documentation for the HBA to determine whether other options or settings are required to meet your operational requirements.

    Note: If you plan to use Device Mapper (DM) Multipath operations, contact the Hitachi Data Systems Support Center for important HBA settings, such as disabling the HBA failover function and editing the /etc/modprobe.conf file.

    Table 2-6 Queue Depth Requirements

Parameter                                           Required Value
IOCB Allocation (queue depth) per LU                32 per LU
IOCB Allocation (queue depth) per port (MAXTAGS)    2048 per port
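The mechanism for setting these values depends on the HBA driver. As a hedged sketch only (the module name, parameter name, and supported range vary by driver version; confirm against your HBA documentation), a per-LU queue depth of 32 for a QLogic qla2xxx driver could be set in /etc/modprobe.conf:

# Assumed module and parameter name; verify with: modinfo qla2xxx
options qla2xxx ql2xmaxqdepth=32

If the driver is loaded from the initial RAM disk, rebuild it (for example, with mkinitrd) and reboot for the change to take effect.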


    Verifying New Device Recognition

    The final step before configuring the new disk devices is to verify that the host system recognizes the new devices. The host system automatically creates a device file for each new device recognized.

    To verify new device recognition:

    1. Use the dmesg command to display the devices (see Figure 2-1).

2. Record the device file name for each new device. You will need this information when you partition the devices (see Partitioning the Devices). See Table 2-7 for a sample SCSI path worksheet.

    3. The device files are created under the /dev directory. Verify that a device file was created for each new disk device (see Figure 2-2).

# dmesg | more
    :
    :
scsi0 : Qlogic QLA2200 PCI to Fibre Channel Host Adapter: 0 device 14 irq 11
        Firmware version: 1.17.26, Driver version 2.11 Beta
scsi : 1 host.
  Vendor: HITACHI   Model: OPEN-3   Rev: 0111
  Type:   Direct-Access             ANSI SCSI revision: 02
Detected scsi disk sda at scsi0, channel 0, id 0, lun 0
    (device file name of this disk = /dev/sda; lun 0 = logical unit number)
  Vendor: HITACHI   Model: OPEN-9   Rev: 0111
  Type:   Direct-Access             ANSI SCSI revision: 02
Detected scsi disk sdb at scsi0, channel 0, id 0, lun 1
    :
    :

In this example, the HITACHI OPEN-3 device (TID 0, LUN 0) and the HITACHI OPEN-9 device (TID 0, LUN 1) are recognized by the Red Hat Linux server.

    Figure 2-1 Example of Verifying New Device Recognition

# ls -l /dev | more
    :
brw-rw---- 1 root disk 8, 0 May 6 1998 sda     (device file = sda)

    Figure 2-2 Example of Verifying Device Files
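You can also list the kernel's view of the attached SCSI devices directly. The output below is illustrative only; the exact layout varies by kernel version:

# cat /proc/scsi/scsi
Attached devices:
Host: scsi0 Channel: 00 Id: 00 Lun: 00
  Vendor: HITACHI  Model: OPEN-3   Rev: 0111
  Type:   Direct-Access            ANSI SCSI revision: 02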


Table 2-7 Sample SCSI Path Worksheet

LDEV        Device  LUSE   VLL    Device File   Path                Alternate Path
(CU:LDEV)   Type    (*n)   (MB)   Name
0:00        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:01        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:02        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:03        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:04        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:05        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:06        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:07        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:08        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:09        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:0A        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:0B        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:0C        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:0D        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:0E        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____
0:0F        _____   ____   _____  ____________  TID:____ LUN:____   TID:____ LUN:____


    Configuring the New Disk Devices

    This chapter describes how to configure the new disk devices on the Red Hat Linux system host:

Setting the Number of Logical Units
Partitioning the Devices
Creating, Mounting, and Verifying the File Systems


    Setting the Number of Logical Units

    To set the number of LUs:

    1. Edit the /etc/modules.conf file to add the following line:

    options scsi_mod max_scsi_luns=xx

    where xx is the maximum number of LUs supported by your Linux operating system. Check your fibre-channel adapter documentation and Linux system documentation to ascertain the total number of devices that can be supported.

2. To set the Emulex driver, as shown in Figure 3-3, add the following line to the /etc/modules.conf file:

alias scsi_hostadapter lpfcdd

3. To activate this modification, create a new boot image (initial RAM disk) file.

    Example: # mkinitrd /boot/initrd-2.4.x.scsiluns.img `uname -r`

4. Use one of the following methods to change the bootloader setting:

a. LILO is used as the bootloader: edit the lilo.conf file as shown in Figure 3-1, then issue the lilo command to activate the new lilo.conf settings, selecting the appropriate label. Example: # lilo

b. Grand Unified Bootloader (GRUB) is used as the bootloader: edit the /boot/grub/grub.conf file as shown in Figure 3-2.

    5. Reboot the system.

image=/boot/vmlinuz-qla2x00
    label=Linux-qla2x00
    append="max_scsi_luns=16"
    # initrd=/boot/initrd-2.4.x.img            Comment out this line.
    initrd=/boot/initrd-2.4.x.scsiluns.img     Add this line.
    root=/dev/sda7
    read-only
# /sbin/lilo                                   Run lilo to activate the setting.

    Figure 3-1 Example of Setting the Number of LUs (LILO)

kernel /boot/vmlinuz-2.4.x ro root=/dev/hda1
# initrd /boot/initrd-2.4.x.img                This line is commented out.
initrd /boot/initrd-2.4.x.scsiluns.img         Add this line.

    Figure 3-2 Example of Setting the Number of LUs (GRUB)

alias scsi_hostadapter lpfcdd                  Add this line to /etc/modules.conf.

    Figure 3-3 Example of Setting the Emulex Driver
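Putting steps 1 and 2 together, a minimal /etc/modules.conf fragment would look like the following. This is a sketch that assumes the Emulex lpfcdd driver and the 16-LU limit used in the figures; substitute the values for your configuration:

options scsi_mod max_scsi_luns=16     Maximum number of LUs (step 1)
alias scsi_hostadapter lpfcdd         Emulex driver alias (step 2)

After editing the file, rebuild the boot image (step 3) and update the bootloader (step 4).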

  • Partitioning the Devices

After setting the number of logical units, you need to create the partitions on the new disk devices.

    Note: For important information about creating partitions with DM Multipath, contact the Hitachi Data Systems Support Center.

    To create the partitions on the new disk devices:

1. Enter fdisk /dev/<device name>

Example: # fdisk /dev/sda

where /dev/sda is the device file name.

    2. Select p to display the present partitions.

    3. Select n to make a new partition. You can make up to four primary partitions (1-4) or one extended partition. The extended partition can be organized into 11 logical partitions, which can be assigned partition numbers from 5 to 15.

    4. Select w to write the partition information to disk and complete the fdisk command.

Tip: Other useful commands include d to delete a partition and q to quit without saving changes.

    5. Repeat steps 1 through 4 for each new disk device.
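An illustrative fdisk session for one device is shown below. This is a sketch only; prompts vary slightly between fdisk versions, and the cylinder values shown are hypothetical:

# fdisk /dev/sda
Command (m for help): p          Display the present partitions.
Command (m for help): n          Make a new partition.
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-2047, default 1): <Enter>
Last cylinder or +size or +sizeM or +sizeK (1-2047, default 2047): <Enter>
Command (m for help): w          Write the partition information to disk.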


    Creating, Mounting, and Verifying the File Systems

    Creating the File Systems

After you partition the devices, create the file systems. Be sure the file system type is appropriate for the primary and/or extended partition on each logical unit.

    To create the file system, issue the mkfs command:

    # mkfs /dev/sda1

where /dev/sda1 is the device file for primary partition number 1.
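To select the file system type explicitly, pass it with -t (an illustrative variant; the types available depend on your kernel and installed utilities):

# mkfs -t ext3 /dev/sda1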

    Creating the Mount Directories

    To create the mount directories, issue the mkdir command:

    # mkdir /USP-LU00

    Mounting the New File Systems

Use the mount command to mount each new file system. The first parameter of the mount command is the device file name (/dev/sda1), and the second parameter is the mount directory, as shown in Figure 3-4.

# mount /dev/sda1 /USP-LU00
        (/dev/sda1 = device file name; /USP-LU00 = mount directory name)
#

    Figure 3-4 Example of Mounting the New Devices

    Verifying the File Systems

    After mounting the file systems, verify the file systems (see the example in Figure 3-5).

# df -h
Filesystem            Size  Used  Avail  Use%  Mounted on
/dev/sda1             1.8G  890M   866M   51%  /
/dev/sdb1             1.9G  1.0G   803M   57%  /usr
/dev/sdc1             2.2G   13k   2.1G    0%  /USP-LU00
#

    Figure 3-5 Example of Verifying the File System


    Setting the Auto-Mount Parameters

    To set the auto-mount parameters, edit the /etc/fstab file (see the example in Figure 3-6).

# cp -ip /etc/fstab /etc/fstab.standard        Make a backup of /etc/fstab.
# vi /etc/fstab                                Edit /etc/fstab.
    :
/dev/sda1   /USP-LU00   ext2   defaults   0 2  Add the new device.

    Figure 3-6 Example of Setting Auto-Mount Parameters
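To validate the new entry without rebooting, you can ask mount to process every file system listed in /etc/fstab; any error in the new line surfaces immediately:

# mount -a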


    Failover and SNMP Operation

    The Hitachi RAID storage systems support industry-standard products and functions that provide host and/or application failover, I/O path failover, and logical volume management (LVM). The Hitachi RAID storage systems also support the industry-standard simple network management protocol (SNMP) for remote storage system management from the UNIX/PC server host. SNMP is used to transport management information between the storage system and the SNMP manager on the host. The SNMP agent sends status information to the host when requested by the host or when a significant event occurs.

    This chapter describes how failover and SNMP operations are supported on the Hitachi RAID storage system:

Host Failover
Path Failover
Device Mapper Multipath
SNMP Remote System Management

Note: The user is responsible for configuring the failover and SNMP management software on the UNIX/PC server host. For assistance with failover and/or SNMP configuration on the host, refer to the user documentation, or contact the vendor's technical support.

  • Host Failover

    The Hitachi RAID storage systems support the Veritas Cluster Server and host failover products for the Red Hat Linux operating system. The user must be sure to configure the host failover software and any other high-availability (HA) software as needed to recognize and operate with the newly attached devices.

    For assistance with Veritas Cluster Server operations, refer to the Veritas user documentation, see Note on Using Veritas Cluster Server, or contact Symantec technical support. For assistance with specific configuration issues related to the Hitachi RAID storage system, contact your Hitachi Data Systems representative.

    Path Failover

The Hitachi RAID storage systems support Hitachi HiCommand Dynamic Link Manager (HDLM) and Veritas Volume Manager for the Red Hat Linux operating system. For further information, see the Hitachi Dynamic Link Manager for Red Hat Linux User's Guide. For assistance with Veritas Volume Manager operations, refer to the Veritas user documentation or contact Symantec technical support.

    Device Mapper Multipath

The Hitachi Virtual Storage Platform and Hitachi Universal Storage Platform V/VM support DM Multipath operations for Red Hat Enterprise Linux (RHEL) 5.4 (x64 or x32) or later.

Note: Contact the Hitachi Data Systems Support Center for important information about required settings and parameters for DM Multipath operations, including but not limited to:
- Disabling the HBA failover function
- Editing the /etc/modprobe.conf file
- Editing the /etc/multipath.conf file
- Configuring LVM
- Configuring raw devices
- Creating partitions with DM Multipath
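For orientation only, a minimal /etc/multipath.conf sketch is shown below. The vendor/product strings and the path grouping policy are illustrative assumptions, not validated values; use only the settings that Hitachi Data Systems provides for your array and RHEL release:

defaults {
    user_friendly_names yes        # map WWIDs to mpathN names
}
devices {
    device {
        vendor  "HITACHI"          # assumed vendor string
        product "OPEN-.*"          # assumed match for OPEN-x devices
        path_grouping_policy multibus
    }
}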


  • SNMP Remote System Management

    SNMP is a part of the TCP/IP protocol suite that supports maintenance functions for storage and communication devices. The Hitachi RAID storage systems use SNMP to transfer status and management commands to the SNMP Manager on the Red Hat Linux server host via a notebook PC (see Figure 4-1). When the SNMP manager requests status information or when a service information message (SIM) occurs, the SNMP agent on the storage system notifies the SNMP manager on the Red Hat Linux server. Notification of error conditions is made in real time, providing the Red Hat Linux server user with the same level of monitoring and support available to the mainframe user. The SIM reporting via SNMP enables the user to monitor the Hitachi RAID storage system from the Red Hat Linux server host.

    When a SIM occurs, the SNMP agent initiates trap operations, which alert the SNMP manager of the SIM condition. The SNMP manager receives the SIM traps from the SNMP agent, and can request information from the SNMP agent at any time.

Note: The user is responsible for configuring the SNMP manager on the Red Hat Linux server host. For assistance with SNMP manager configuration on the Red Hat Linux server host, refer to the user documentation, or contact the vendor's technical support.

[Figure 4-1 shows the SNMP environment: SIMs from the Hitachi RAID storage system reach the service processor, and the error information is carried over the private and public LANs to the SNMP manager on the UNIX/PC server.]

Figure 4-1 SNMP Environment


    Troubleshooting

    This chapter provides troubleshooting information for Red Hat Linux host attachment and instructions for calling technical support.

General Troubleshooting
Calling the Hitachi Data Systems Support Center


    General Troubleshooting

    Table 5-1 lists potential error conditions that may occur during storage system installation and provides instructions for resolving each condition. If you cannot resolve an error condition, contact your Hitachi Data Systems representative for help, or call the Hitachi Data Systems Support Center for assistance.

    For troubleshooting information on the Hitachi RAID storage system, see the User and Reference Guide for the storage system (for example, Hitachi Virtual Storage Platform User and Reference Guide).

For troubleshooting information on Hitachi Storage Navigator, see the Storage Navigator User's Guide for the storage system (for example, Hitachi Virtual Storage Platform Storage Navigator User Guide).

For information on error messages displayed by Storage Navigator, see the Storage Navigator Messages document for the storage system (for example, Hitachi Virtual Storage Platform Storage Navigator Messages).

Table 5-1 Troubleshooting

Error Condition: The logical devices are not recognized by the system.
Recommended Action:
- Be sure that the READY indicator lights on the Hitachi RAID storage system are ON.
- Be sure that the LUNs are properly configured. The LUNs for each target ID must start at 0 and continue sequentially without skipping any numbers.

Error Condition: The file system cannot be created.
Recommended Action:
- Be sure that the device name is entered correctly with mkfs.
- Be sure that the LU is properly connected and partitioned.

Error Condition: The file system is not mounted after rebooting.
Recommended Action:
- Be sure that the system was restarted properly.
- Be sure that the auto-mount information in the /etc/fstab file is correct.


    Calling the Hitachi Data Systems Support Center

    If you need to call the Hitachi Data Systems Support Center, provide as much information about the problem as possible, including:

- The circumstances surrounding the error or failure.
- The exact content of any error messages displayed on the host systems.
- The exact content of any error messages displayed by Storage Navigator.
- The Storage Navigator configuration information (use the FD Dump Tool).
- The service information messages (SIMs), including reference codes and severity levels, displayed by Storage Navigator.

    The Hitachi Data Systems customer support staff is available 24 hours a day, seven days a week. If you need technical support, log on to the Hitachi Data Systems Portal for contact information: https://hdssupport.hds.com


    Note on Using Veritas Cluster Server

By issuing SCSI-3 Persistent Reserve commands to a Hitachi RAID storage system, Veritas Cluster Server (VCS) provides an I/O fencing function that prevents data corruption if cluster communication stops. Each node of VCS registers reserve keys with the storage system, which enables those nodes to share a disk to which the reserve key is registered.

    Each node of VCS registers the reserve key when importing a disk group. One node registers the identical reserve key for all paths of all disks (LU) in the disk group. The reserve key contains a unique value for each disk group and a value to distinguish nodes.

    Key format:

    Example: APGR0000, APGR0001, BPGR0000, and so on

When the Hitachi RAID storage system receives a request to register a reserve key, the reserve key and the port WWN of the node are recorded in a key registration table for each storage system port on which the registration request is received. The number of reserve keys that can be registered is 128 per port. The storage system detects duplicate registrations by the combination of node port WWN and reserve key, so the number of entries in the registration table does not increase even if a request to register a duplicate reserve key is accepted.

Calculation formula for the number of used entries in the key registration table:

[number of nodes] × [number of port WWNs per node] × [number of disk groups]

When the number of registered reserve keys exceeds the upper limit of 128, key registration fails, as do operations such as adding an LU to the disk group. To avoid registration failures, keep the number of reserve keys at or below 128, for example by limiting the number of nodes, limiting the number of server ports with the LUN security function, or keeping the number of disk groups appropriate.
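As a worked example, applying the formula to the configuration in Figure A-1 (a reading of that figure, not an additional requirement): 2 nodes × 2 port WWNs per node × 3 disk groups = 12 registered entries in total, which appear as entries 0 through 5 in each of the two ports' key registration tables.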

Example: When adding an LU to increase disk capacity, do not create a new disk group; instead, add the LU to an existing disk group.

[Figure A-1 shows two VCS nodes connected through a fibre-channel switch (FC-SW) to storage ports 1A and 2A. Node A has port WWNs WWNa0 and WWNa1, and Node B has WWNb0 and WWNb1. The LUN security list of port 1A contains WWNa0 and WWNb0, and the security list of port 2A contains WWNa1 and WWNb1. The LUs behind the ports are organized into disk groups 1, 2, and 3 (for example, LU0, LU1, and LU2 form disk group 1).]

Key registration table for Port-1A         Key registration table for Port-2A

Entry  Reserve Key  WWN                    Entry  Reserve Key  WWN
0      APGR0001     WWNa0                  0      APGR0001     WWNa1
1      APGR0002     WWNa0                  1      APGR0002     WWNa1
2      APGR0003     WWNa0                  2      APGR0003     WWNa1
3      BPGR0001     WWNb0                  3      BPGR0001     WWNb1
4      BPGR0002     WWNb0                  4      BPGR0002     WWNb1
5      BPGR0003     WWNb0                  5      BPGR0003     WWNb1
6      -            -                      6      -            -
:      :            :                      :      :            :
127    -            -                      127    -            -

Figure A-1 Adding Reserve Keys for LUs to Increase Disk Capacity


    Acronyms and Abbreviations

AL      arbitrated loop
AL-PA   arbitrated loop physical address
blk     block
CVS     custom volume size
DM      Device Mapper
FC      fibre channel
FCP     fibre-channel protocol
FX      Hitachi Cross-OS File Exchange
GB      gigabytes
Gbps    gigabits per second
GRUB    Grand Unified Bootloader
HBA     host bus adapter
HDLM    Hitachi Dynamic Link Manager
HUS VM  Hitachi Unified Storage VM
I/O     input/output
LU      logical unit
LUN     logical unit, logical unit number
LUSE    LUN Expansion
LVI     logical volume image
LVM     Logical Volume Manager
MB      megabytes
MPE     maximum number of physical extents
OFC     open fibre control
PA      physical address
PC      personal computer
PP      physical partition
RAID    redundant array of independent disks
RHEL    Red Hat Enterprise Linux
SCSI    small computer system interface


SIM       service information message
SNMP      simple network management protocol
TCO       total cost of ownership
TID       target ID
USP V/VM  Hitachi Universal Storage Platform V/VM
VLL       Virtual LVI/LUN
VSP       Hitachi Virtual Storage Platform
WWN       worldwide name


Hitachi Data Systems

Corporate Headquarters
750 Central Expressway
Santa Clara, California 95050-2627
U.S.A.
www.hds.com

Regional Contact Information

Americas: +1 408 970 1000, [email protected]
Europe, Middle East, and Africa: +44 (0) 1753 618000, [email protected]
Asia Pacific: +852 3189 7900, [email protected]

    MK-96RD640-05
