Platform Disk Support 2


1. Platform Storage & iSCSI Support

  • Anthony McNamara & William Watson
  • Systems TSC-VSP
  • Sun Microsystems

2.

  • 1 What is not covered by this TOI
  • 2 Controller types and functionality
  • 3 Controller firmware and patches
  • 4 Operating system connectivity
  • 5 Disk size and geometry
  • 6 RAID levels, building and recovery
  • 7 iSCSI concepts
  • 8 iSCSI requirements and building
  • 9 Abbreviations, issues & useful links

3. What is Not Covered by this TOI

  • PCI host bus adapters, as this TOI covers embedded storage
  • Software RAID implementations
  • Solaris ZFS
  • Disk locator tools
  • Products that are not public knowledge or are under specific NDA with third parties

4. Controller Types and Functionality

5. Controller Types and Functionality

6. Controller Types and Functionality

7. Controller Performance

  • CF (DMA) - 33 MB per second
  • IDE DVD - 33 MB per second
  • IDE HDD - 133 MB per second
  • FC-AL - 120 MB per second
  • SATA 1 - 150 MB per second
  • SATA 2 - 300 MB per second
  • SAS - 300 MB per second
  • SCSI - 320 MB per second

8. LSI 1020/1030

  • 1020 is single channel, 1030 is dual channel
  • Common in x64 & SPARC platforms
  • Administered by /usr/sbin/raidctl in SPARC
  • Administered by BIOS & raidctl in x64 (see the sketch after this list)
  • Runs on a 64-bit PCI-X bus at up to 133 MHz
  • Can run on a 32-bit PCI bus at 33 MHz (5 V tolerant)
  • 2 x ARM966E-S processors on die
  • SCSI320 LVD parallel interface
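  • A minimal raidctl sketch of the administration above (disk names such as c1t0d0 are illustrative and vary by platform):
    • # raidctl (list existing RAID volumes)
    • # raidctl -c c1t0d0 c1t1d0 (create a RAID 1 volume; the first disk becomes the primary)
    • # raidctl -d c1t0d0 (delete the volume, which is named after its primary disk)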

9. LSI 1064x

  • Common in new x64 & SPARC platforms
  • Administered by /usr/sbin/raidctl in SPARC (init S)
  • Administered by BIOS & raidctl in x64
  • Runs on a 64-bit PCI-X bus at up to 133 MHz
  • Can run on a 32-bit PCI bus at 33 MHz (5 V tolerant)
  • ARM 926 CPU at bus speed

10. LSI 1064e

  • Common in new x64 & SPARC platforms
  • Administered by /usr/sbin/raidctl in SPARC (init S)
  • Administered by BIOS & raidctl in x64
  • Runs on a PCIe 1, 4 or 8 lane full-duplex bus
  • ARM 966 CPU at 225 MHz

11. LSI 1068x and e

  • Originally intended for the Sun Fire X4500, but the ASIC was not available in time, so it was not used there
  • Administered by /usr/sbin/raidctl (init S)
  • 1068x runs on a 66-133 MHz PCI & PCI-X bus
    • (Not 5 V tolerant, so 3.3 V only)
  • 1068e runs on PCIe 1, 4 or 8 lane full duplex bus
  • ARM 926 CPU at bus speed (x version)
  • ARM 966 CPU at 225MHz (e version)
  • 1068 is the same as 1064 but with 2 PHY modules

12. LSI 1078x and e

  • Single IC ROC design (RAID on Chip)
  • Option module for Sun Fire V445 platform (PCIe)
  • Administered by /usr/sbin/raidctl -r
  • 1078x runs on a 66-133 MHz PCI & PCI-X bus
    • (Not 5 V tolerant, so 3.3 V only)
  • 1078e runs on PCIe 1, 4 or 8 lane, full duplex bus
  • Adds RAID 5 support

13. LSI Solaris Patches

  • Solaris 8 SPARC patch 115274-xx
  • Solaris 8 x64 patch (not supported)
  • Solaris 9 SPARC patch 115667-xx
  • Solaris 9 x64 patch 118559-xx (kernel)
  • Solaris 10 SPARC patch 119850-xx
  • Solaris 10 x64 patch 118855-xx (kernel)
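  • To check whether the matching patch revision is installed (a sketch; IDs from the list above, output varies):
    • # showrev -p | grep 119850 (Solaris 10 SPARC example)
    • # patchadd /var/tmp/119850-xx (install; substitute the current -xx revision)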

14. NVidia NF2050/2200

  • NF2200 HT to PCIe bridge chip with MCP
    • (Media and Communications Processor)
  • NF2050 HT to PCIe bridge chip for I/O
    • (Reduced function NF2200 IC)
  • 4 PHY SATA support only, not SAS
  • RAID 0, 1 & 5 support via software driver
    • (RAID 5 on platforms with more than 2 x disks)
  • RAID currently supported under Windows only

15. NVidia NF3050/3400

  • NF3400 HT to PCIe bridge chip with MCP
    • (Media and Communications Processor)
  • NF3050 HT to PCIe bridge chip for I/O
    • (Reduced function NF3400 IC)
  • 4 PHY SATA support only, not SAS
  • RAID 0, 1 & 5 support via software driver
    • (RAID 5 on platforms with more than 2 x disks)
  • RAID currently supported under Windows only

16. Nvidia Solaris Patches

  • Solaris 8 SPARC patch (not applicable)
  • Solaris 8 x64 patch (not supported)
  • Solaris 9 SPARC patch (not applicable)
  • Solaris 9 x64 patch (not supported)
  • Solaris 10 SPARC patch (not applicable)
  • Solaris 10 x64 patch 118855-xx (kernel)

17. Marvell 88SX6081

  • Low cost SATA IC (Excludes SAS & RAID)
  • PCI and PCI-X signalling
  • 32-bit and 64-bit bus up to 133MHz
  • 8 x Gen2i programmable PHYs
  • Solaris 10 Update 2 support and above
  • Solaris Marvell driver marvell88sx (v 1.3)
  • Solaris sata driver (v 1.3) via SUNWckr
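  • A quick sketch to confirm the Marvell driver binding under Solaris 10 (output will vary):
    • # grep marvell88sx /etc/driver_aliases
    • # prtconf -D | grep marvell88sx (shows the driver bound to the device nodes)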

18. Marvell Solaris Patches

  • Solaris 8 SPARC patch (not applicable)
  • Solaris 8 x64 patch (not supported)
  • Solaris 9 SPARC patch (not applicable)
  • Solaris 9 x64 patch (not supported)
  • Solaris 10 SPARC patch (not applicable)
  • Solaris 10 x64 patch 118855-xx (kernel)
  • Solaris 10 x64 patch 119247-xx (man page update)
  • Solaris Generic Hitachi 500GB fw patch 124945-xx
  • Windows Hitachi 500GB fw patch 124955-xx

19. Uli M1575

  • Low cost 4 lane PCIe connected south bridge chip
  • Common on UltraSPARC T1 blade platforms
  • 4-port SATA 3Gbps with NCQ
  • 2 x ATA-133
  • Hosts RAID 0, RAID 1, RAID 0+1, RAID 5, & JBOD
  • 8-port USB 2.0
  • 10/100 Ethernet PHY
  • Sun does not use the RAID or LAN functions of this controller

20. O/S Connectivity - LSI1020

  • Solaris
    • The raidctl command
  • Red Hat Enterprise Linux 3/4/5
    • CIM provider daemon and CIM Java browser v3.06.00
    • mpt-status
  • SuSE Enterprise Linux 9/10
    • CIM provider daemon and CIM Java browser v3.06.00
    • mpt-status

21. O/S Connectivity - LSI1020

  • Windows 32-bit
    • CIM browser for Windows v3.06.00
  • Windows 64-bit
    • Driver only
  • Windows Vista
    • Driver only
  • Windows Server 2008
    • Driver only
  • Download via:
    • http://www.lsilogic.com/cm/DownloadSearch.do

22. O/S Connectivity - LSI106x

  • Solaris
    • The raidctl command
  • Red Hat Enterprise Linux 3/4/5
    • LSI MyStorage
    • mpt-status
    • MSM-IR (Supplied on supplement CD 1.3 and above)
  • SuSE Enterprise Linux 9/10
    • LSI MyStorage
    • mpt-status
    • MSM-IR (Supplied on supplement CD 1.3 and above)

23. O/S Connectivity - LSI106x

  • Windows 32-bit
    • MSM-IR (Supplied on supplement CD 1.3 and above)
  • Windows 64-bit
    • MSM-IR (Supplied on supplement CD 1.3 and above)
  • Windows Vista
    • Driver only at present
  • Windows Server 2008
    • Driver only at present

24. O/S Connectivity - NVidia

  • Linux
    • Not supported, but the Linux volume-management tool dmraid can provide pseudo-RAID connectivity (see the sketch after this list)
  • Solaris
    • Not supported currently
  • Windows 32-bit
    • Nvidia Media Shield
  • Windows 64-bit
    • Nvidia Media Shield
  • Windows Vista / Windows Server 2008
    • Nvidia Media Shield
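  • A minimal dmraid sketch for the unsupported Linux path above (run as root; device names are illustrative):
    • # dmraid -r (list disks carrying vendor RAID metadata)
    • # dmraid -s (show discovered RAID sets and their state)
    • # dmraid -ay (activate all sets via device-mapper)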

25. O/S Connectivity LSI Installation

  • Installation of the LSI CIM Browser:
  • 1. Go to the web site "http://www.lsilogic.com/downloads/downloads.do?"
  • 2. Select Miscellaneous and click the "GO" button
  • 3. Find the "CIM_linuxxxxxxxx.zip" file and download it
  • 4. Copy this file onto the V20z/V40z and put "CIM_linuxXXXXXXXXX.zip" in to "/tmp"
  • 5. Connect via ssh as root onto the V20z/V40z and "cd /tmp"
  • 6. Unzip the archive using unzip "CIM_linuxXXXXXXXXX.zip"
  • 7. Change directory to the install binary path "cd InstData/Linux/VM/"
  • 8. Ensure your DISPLAY variable is correct for the root login "echo $DISPLAY"
  • 9. Add execute permissions to the install binary "chmod 755 install.bin"
  • 10. Execute the install binary "./install.bin" and an installer window appears
                  • Cont ...

26. O/S Connectivity LSI Installation

  • 11. Answer all questions in the window to complete the installation
  • 12. When finished, exit the window
  • 13. Check CIM provider is running with:
    • # ps -aef | grep -i LSIProvider (and check for output similar to that below)
    • root 2120 1 0 15:37 pts/0 00:00:04 /usr/local/bin/LSICim/jre/bin/
    • java -Djava.library.path=/usr/lib/ -classpath /usr/local/bin/LSICim/xerces.jar
    • :/usr/local/bin/LSICim/CIMOM.jar:/usr/local/bin/LSICim/HTTPClient.jar:/usr/loc
    • al/bin/LSICim/CIMLSIProvider.jar org.snia.wbemcmd.cimom.CIMServer
    • root 3722 1703 0 16:17 pts/0 00:00:00 grep -i cimlsiprovider
  • If the provider is not running start the provider daemon with the following command:
    • /etc/init.d/lsicim start
  • 14. Start the CIM Browser /usr/local/bin/LSICim/CIMLSIBrowser
  • 15. The CIM Browser appears

27. O/S Connectivity MPT Installation

  • Installing the mpt-status utility:
  • 1. Make sure development utilities are installed, i.e. make, kernel-source, glibc, etc.
  • 2. Download the mpt-status package for your particular Linux distribution:
  • SUSE: http://www.novell.com/products/linuxpackages/suselinux/mpt-status.html
  • Red Hat (external): http://www.drugphish.ch/~ratz/mpt-status/
  • Ubuntu: http://packages.qa.debian.org/m/mpt-status.html
  • 3. If you downloaded an RPM then issue rpm -U mpt-status*
  • 4. If you downloaded source, un-archive the download and change directory to it
  • 5. Execute make ; make install (This will only work if the kernel-source etc are installed)
  • 6. Execute make uninstall to remove the package if no longer required
  • 7. Execute /usr/sbin/mpt-status to view the current disk configuration

28. O/S Connectivity MSM Installation

  • Installing the MSM-IR package in Windows:
  • 1. Locate the MSM-IR utility installation files before proceeding
  • The MSM-IR installation files are provided in InstallPack.exe or in the optional install pack OptPack.zip on the Tools and Drivers CD-ROM you received with your server. You can also find them in these packages on the download page for your product, inside the Tools and Drivers CD image under windows\w2k3\packages
  • If you are unsure of the file locations, refer to your product's Windows Operating System Installation Guide for more information
  • 2. If you received the Tools and Drivers CD-ROM with your server, insert it into the CD-ROM drive connected to your server
  • a. If you do not have the Tools and Drivers CD-ROM, copy the InstallPack.exe file from a remote computer to a hard disk drive (HDD) in your server via the JavaRConsole remote media utility provided within iLOM
  • b. Start the InstallPack.exe application

29. O/S Connectivity MSM Installation

  • Installing the MSM-IR package in Windows:
  • 3. Click on the Optional Components check box
  • 4. Click Next to accept the settings
  • 5. Review the important note and then click Next
  • The Welcome to the Sun Fire Installation Wizard displays
  • 6. Click Next
  • The End User License Agreement dialog box displays
  • 7. Select I accept this agreement and then click Next
  • 8. Click Finish
  • 9. Click Yes to restart your system to complete the installation
  • For full text of this installation and screen shots, refer to:
  • http://docs.sun.com/source/819-5039-12/Chap8F.html on the docs.sun.com web site

30. O/S Connectivity Nvidia Installation

  • Installing the Nvidia Media Shield Application:
  • 1. http://www.nvidia.com/object/nforce_nf_pro_winxp_6.70.html for 32-bit Windows
  • http://www.nvidia.com/object/nforce_nf_pro_winxp64_6.69.html for 64-bit Windows
  • 2. Download the WHQL certified driver and select Run in the download dialog box *
  • 3. Follow the instructions to install chipset drivers if you have not already done so
  • 4. Select both Nvidia IDE and RAID driver if applicable and follow on screen instructions
  • 5. Complete install
  • * Available on the supplement CD

31. O/S Connectivity - Monitoring

  • MyStorage output not available but similar to MSM-IR
  • Output of BIOS:
    • HBA ID LUN VENDOR PRODUCT REV SYNC WIDE CAPACITY
    • --------------------------------------------------------------------
    • 0 0 0 LSILOGIC 1030 IM IM 1000 16 73277 MB
    • 0 7 0 LSILogic LSI1030 [402]1032 300 320.0 16
    • LSI Logic Corp. MPT boot ROM successfully installed!
  • Output of mpt-status:
    • [root@va64-v20zj-gmp03 bin]# /usr/sbin/mpt-status
    • ioc0 vol_id 0 type IM, 2 phy, 68 GB, state OPTIMAL, flags ENABLED
    • ioc0 phy 0 scsi_id 0 FUJITSU MAT3073N SUN72G 0602, 68 GB, state ONLINE, flags
    • ioc0 phy 1 scsi_id 1 SEAGATE ST373307LC 0007, 68 GB, state ONLINE, flags

32. O/S Connectivity - Monitoring

  • Output of LSI CIM Browser:

33. O/S Connectivity - Monitoring

  • Output of MSM-IR:

34. O/S Connectivity - Monitoring

  • Output of Nvidia Media Shield:

35. Size and Geometry

  • Size is everything!
  • Disk geometry is organized in a layout called CHS or Cylinders, Heads and Sectors
  • There are 2 types of firmware on a hard disk drive:
    • Disk vendor firmware & Sun firmware
    • Different firmware causes mismatches in size
    • Different vendors cause mismatches in size
  • LSI 1020 can now handle differences in size with IM before a partition layout is created (disk label)
  • LSI 106x unaffected as all disks are Sun firmware
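  • A sketch for comparing size and geometry under Solaris (device paths are examples):
    • # prtvtoc /dev/rdsk/c1t0d0s2 (print the label geometry: cylinders, heads, sectors)
    • # format -e (the inquiry command within format shows vendor, product and firmware revision per disk)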

36. Expanders And Limitations

  • Expanders are a little like network hubs for storage:
  • A 4-port SAS card can be expanded to connect to an 8-drive enclosure, etc.
  • SAS expanders work with LSI or Intel/Adaptec cards
  • The X4240 has a SAS expander built into the chassis
  • You don't need an extra card to address all 16 disks
  • Both the LSI and the Adaptec card can see all of the disks because of the expander, but the LSI limitation of 2 volumes of at most RAID 1E limits expander use
  • Sun supports one SAS HBA for internal storage only

37. What is RAID

  • Redundant Array of Independent/Inexpensive Disks
  • Available with hardware controllers or software O/S
  • RAID 0 Stripe / span
  • RAID 1 Mirror / duplex
  • RAID 0+1 A mirror of stripes
  • RAID 1+0 A stripe of mirrors
  • RAID 1E Enhanced mirroring
  • RAID 5 Distributed parity

38. What is RAID

  • RAID 0 (IS)
  • 2 disk minimum
  • Low cost
  • Odd or even disks
  • High performance by storing blocks across multiple spindles on different disks
  • Single disk failure results in full data loss

39. What is RAID

  • RAID 1 (IM)
  • 2 disk minimum
  • Even disks only
  • 50 % storage lost (high cost)
  • Improved read performance
  • Reduced write speed
  • Can recover from a single disk failure

40. What is RAID

  • RAID 0+1
  • 4 disk minimum
  • Even disks only
  • 50% storage lost (high cost)
  • Improved read performance
  • Improved write performance
  • Can recover from a single disk failure

41. What is RAID

  • RAID 1+0 (10)
  • 4 disk minimum
  • Even disks only
  • 50% storage lost (high cost)
  • Improved read performance
  • Improved write performance
  • Can recover from a single disk failure
  • Can recover from a dual disk failure if the failures occur on different mirrors

42. What is RAID

  • RAID 1E
  • 4+ disk for tolerance
  • Even & odd disks
  • Improved read and write performance
  • Same as RAID 10 when using even number of disks
  • Can recover from multiple disk failures if an even number of disks is used in the array
  • Recovery capability with an odd number of disks is lower than with an even number

43. What is RAID

  • RAID 5
  • 3 disk minimum
  • High CPU usage
  • Slower read/write performance due to parity check
  • Traditionally uses an odd number of disks, though partition-based implementations can also use an even number
  • Recovers from a disk failure by reconstructing the missing data from parity rather than keeping a copy of the data on an alternate disk

44. Building RAID - LSI1020/1030

  • Power on platform
  • Invoke the LSI setup utility when prompted during boot
  • Select the controller (listed as 1030)
  • Select RAID Properties
  • Highlight the 1st disk and press + to add it to the array
  • Choose to preserve or delete the data on the disk
  • Highlight the 2nd disk and press + to add it to the array
  • Exit RAID Properties
  • Select Save changes and exit this menu

45. Building RAID - LSI1020/1030

  • Screen sequence: Select Controller > Select RAID Properties > Add Primary Disk > Add Members > Exit Configuration > Disk Now Resyncing

46. Building RAID - LSI1020/1030

  • LSI 1020/1030 RAID 1 Rules:
    • Resync time is asynchronous in OBP/BIOS mode
      • This is approximately 1 hour per gigabyte
    • Resync is synchronous when the driver is loaded
      • This is approximately 5 minutes per gigabyte
      • Resync time (hours) = ((volume size in GB x 1024) / 3 MB/sec) / 3600 (see the worked example after this list)
    • Replacement drives should either be empty or not contain a TOC (table of contents)
      • Introducing drives with an invalid partition layout will result in the controller refusing to create a RAID or resync existing RAIDs
    • The LSI 1020/1030 controller associates array member information with the physical slot as well as the disk
      • Mix sizes only with new RAID arrays
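  • Worked example for the resync formula above (an illustrative volume size): a 73 GB volume gives ((73 x 1024) / 3) / 3600 ≈ 6.9 hours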

47. Building RAID - LSI106x

  • Power on platform
  • Invoke the LSI setup utility when prompted during boot
  • Select the primary on board controller (LSI1064)
  • Select RAID Properties
  • Select first drive (primary)
  • Select "D" to destroy existing drive data
  • Select second drive (secondary)
  • Select IM to create a RAID1 (IS to create RAID0)
  • Go back a screen and save the settings

48. Building RAID - Nvidia MCPs

  • Power on platform
  • Enter the BIOS setup during boot
  • Select Advanced > IDE Configuration > Nvidia RAID
  • Enable RAID on each SATA channel, then save the BIOS settings

49. Building RAID - Nvidia MCPs

  • Power the platform off and on again
  • Enter the Nvidia RAID BIOS when prompted during boot
  • Select the RAID mode and then add the disk members
  • Finish, answering Y/N whether to clear existing disk data
  • Exit the utility

50. Building RAID - Nvidia MCPs

  • Reboot the platform
  • Boot the Windows installation CD
  • When prompted, choose to add additional storage controllers
  • Insert USB floppy disk or CD-ROM
  • Browse inserted drive
  • Select both Nvidia class drivers
  • Allow installation to continue
  • Install the Nvidia MediaShield package in the GUI once Windows has finished installation & rebooted

51. Recovery RAID - LSI Logic

  • RAID recovery is in many ways automatic:
    • A hot spare will automatically sync (V40z only)
    • Replacement drives will automatically sync if the new disk is either empty or does not contain a TOC
      • Introducing different-size disks or drives with an invalid partition layout will result in the controller refusing to re-sync. Pressing 'F4' in the LSI setup tool will display a diagnostics code which can be used to determine the problem; see the next slide for the list
    • The LSI 1020/1030 controller associates array member information with the physical slot as well as the disk. Therefore a re-sync requires the replacement disk to be added to the same slot as the old damaged drive

52. Recovery RAID - LSI Logic

  • LSI RAID failure diagnostics codes:
    • 01. Problem with reading the disk serial number
    • 02. Disk does not support SMART
    • 04. Disk does not support wide data, sync mode or queue tags
    • 05. User disabled disconnects or queue tags for device
    • 07. Disk not big enough to mirror primary disk
    • 10. Disk does not have 512-byte sector sizes (check jumpers)
    • 11. Incorrect device type
    • 12. Hot spare selection not big enough to be used in the array
    • 13. Maximum disks already specified or array size exceeded
    • 03, 06, 08 & 09 are unused and should be regarded as unknown

53. Recovery RAID - Nvidia MCPs

  • When an Nvidia RAID array is detached/degraded:
    • Execute the Media-Shield raidtool in Windows
    • Browse the disks
    • If a disk has detached, right click on the second/lower degraded array and select delete to remove this disk
      • After a few seconds, Windows will prompt to say Found New Hardware
      • The disk will appear as a normal disk in Device Manager
      • Select the option Rebuild Array in the Media-Shield raidtool
    • If an array is degraded, select the option Synchronize an Array
      • This will force a complete rebuild of redundancy in RAID 1 or parity in RAID 5 arrays

54. Recovery RAID - Nvidia MCPs

  • When an Nvidia RAID disk has failed:
    • Physically replace the failed disk, taking all precautions for the server/desktop environment you are working with
      • On selected systems, this could mean a power down and the opening of the platform's side or top cover
    • Power on the platform if applicable
    • After boot up, Windows will prompt to say Found New Hardware
    • Execute the Media-Shield raidtool in Windows
    • Browse the degraded array
    • Select the option Rebuild array
      • This will force a complete rebuild of redundancy in RAID 1 or parity in RAID 5 arrays

55. Solaris SATA Driver

  • First shipped in Solaris 10 Update 2 for Thumper
  • Partly ported to the generic hardware in Update 4
  • Delivered via package SUNWckr
  • /kernel/misc/sata 32-bit ELF driver
  • /kernel/misc/amd64/sata 64-bit ELF driver
  • Man page sata(7D)
  • Supports SATA I and SATA II at 1.5/3.0 Gb/sec
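  • To confirm the sata module is present on a given system (a sketch):
    • # modinfo | grep -w sata
    • # grep /kernel/misc/sata /var/sadm/install/contents (verify it was delivered by SUNWckr)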

56. New Solaris raidctl Usage (May 2007)

  • raidctl -a {set | unset} -g disk {volume | controller}
  • raidctl -c [-f] [-r raid_level] disk1 disk2 [disk3...]
  • raidctl -d [-f] volume
  • raidctl -h
  • raidctl -l -g disk controller
  • raidctl -l volume
  • raidctl -l controller
  • raidctl -p "param=value" [-f] volume
  • raidctl -C "disks" [-r raid_level] [-z capacity] [-s stripe_size] [-f] controller
  • raidctl -F filename [-f] controller
  • raidctl -S [volume | controller]
  • raidctl -S -g disk controller
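  • A sketch using the syntax above (controller 0 and the disk names are illustrative):
    • # raidctl -c -r 1 c0t0d0 c0t1d0 (create a RAID 1 volume from two disks)
    • # raidctl -l 0 (list disks and volumes on controller 0)
    • # raidctl -l c0t0d0 (show the state of the resulting volume)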

57. What is iSCSI?

  • Enables the transport of Block I/O data over IP networks.
  • Operates on top of TCP through encapsulation of SCSI commands in a TCP/IP data stream
  • Transport of iSCSI mainly over Ethernet

58. What is iSCSI?

  • A transport for SCSI commands
  • An end to end protocol
  • Seamless implementation
    • On workstations and laptops
    • With current TCP/IP stacks
    • In an HBA
    • Uses existing routers/switches without changes
  • Has the concept of human readable SCSI device (node) naming
  • Transport includes security as a base concept
    • Authentication at the node level
    • Enabled for IPSec and other Security Techniques

59. Benefits of IP Storage

  • Leverages Ethernet-TCP/IP networks and enables storage to be accessed over LAN/WAN environments without altering storage applications.
  • Uses existing Ethernet/IP knowledge base and management tools
  • Provides consolidation of storage systems (Data backup, Server cluster, Replication, Business Continuity and Disaster Recovery)
  • Uses existing Network Infrastructure
  • Building a WWSAN fits in the development of modern IP Storage technologies
  • Storage resources are now available to more applications
  • Manage IP-based storage networks with existing tools and IT expertise

60. How Does it Work?

  • The iSCSI protocol sits above the physical and data-link layers and interfaces to the operating system's standard SCSI Access Method command set
  • iSCSI enables SCSI-3 commands to be encapsulated in TCP/IP packets and delivered reliably over IP networks
  • The iSCSI protocol runs on the host initiator and the receiving target device
  • iSCSI also enables the access of block-level storage that resides on Fibre Channel SANs over an IP network via iSCSI-to-Fibre Channel gateways such as storage routers and switches

61. Limitations of iSCSI

  • IP packets are delivered without a strict order. SCSI packets must be delivered one after another without delay, and a breach of the order may result in data loss
  • Processor load on the client side which uses such a card
    • Special network cards which support mechanisms to offload the CPU before TCP stack processing are recommended
  • Latency issues
    • Although many means have been developed to reduce the influence of parameters which cause delays in the processing of IP packets, the iSCSI technology is positioned for middle-level systems

62. Four Basic iSCSI Components

  • iSCSI Address and Naming Conventions
  • iSCSI Session Management
  • iSCSI Error Handling
  • iSCSI Security

63. Address and Naming Component

  • It's more convenient to use a combination of an IP address and a TCP port, which are provided by a Network Portal. For example, Sun.com.ustar.storage.itdepartment.161
  • Such a name has an easy-to-perceive form and can be processed by DNS. An iSCSI name provides correct identification of an iSCSI device irrespective of its physical location
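  • For comparison, the standard iqn form used later in this TOI looks like:
    • iqn.1986-03.com.sun:01:00144f0f8322.44884716.nas5x05iscsidisk1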

64. Session Management Component

  • The iSCSI session consists of a Login Phase and a Full Feature Phase which is completed with a special command
  • The Login Phase is used to adjust various parameters between two network entities and confirm an access right of an initiator
  • When login is confirmed, iSCSI moves to the Full Feature Phase. If more than one TCP connection was established, iSCSI requires that each command/response pair go through one TCP connection

65. Error Handling Component

  • For iSCSI error handling & recovery to work correctly, both the initiator and the target must be able to buffer commands until they are confirmed. Each endpoint must be able to selectively recover a lost or damaged PDU within a transaction in order to recover the data transfer
  • Hierarchy of iSCSI error handling and recovery after failures:
    • Lowest level - identification of an error & data recovery at the SCSI task level, for example, repeated transfer of a lost or damaged PDU
    • Next level - a TCP connection which transfers an iSCSI task can have errors. In this case we attempt to recover the connection
    • Finally, the iSCSI session itself can be damaged. Termination and recovery of a session are usually not required if recovery is implemented correctly on the other levels, but the opposite can happen. Such a situation requires that all TCP connections be closed, all tasks and unfulfilled SCSI commands be completed, and the session be restarted via a repeated login

66. Security Component

  • As iSCSI can be used in networks where data can be accessed illegally, the specification allows for different security methods. Encryption mechanisms such as IPSec, which operate at lower levels, do not require additional negotiation because they are transparent to higher levels, including iSCSI
  • Various solutions can be used for authentication, for example Kerberos or Private Keys Exchange; an iSNS server can be used as a repository of keys

67. Solaris iSCSI Software Requirements

  • The Solaris 10 1/06 or later release for Solaris iSCSI initiator software
  • The Solaris 10 8/07 or later release for Solaris iSCSI target software
  • The following software packages:
    • SUNWiscsir - Sun iSCSI Device Driver (root)
    • SUNWiscsiu - Sun iSCSI Management Utilities (usr)
    • SUNWiscsitgtr - Sun iSCSI Target Device Driver (root)
    • SUNWiscsitgtu - Sun iSCSI Target Management Utilities (usr)
  • S10 pkg available at: /net/iscsisupport.singapore/docs/pkgs/iscsi-initiator/S10u2
  • Patch available at: /net/iscsisupport.singapore/docs/patch/119090-23
  • Sun does not support S9 iSCSI
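  • To verify the initiator packages are installed (a sketch):
    • # pkginfo SUNWiscsir SUNWiscsiu
    • # svcs network/iscsi_initiator (check the service state, where applicable)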

68. Troubleshooting iSCSI Commands

  • # iscsiadm list discovery
  • # iscsiadm list initiator-node
  • # iscsiadm list isns-server -v
  • # iscsiadm list static-config
  • # iscsiadm list target -v
  • # iscsiadm list discovery-address -v
  • The iscsiadm(1M) command is only available on systems running Solaris 10 (iSCSI is not supported on earlier versions)
  • Sun Explorer version 5.7 and later will gather essential iSCSI data. Refer to InfoDoc 82329: Sun[TM] Explorer 5.7 Data Collector
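  • A typical static-discovery setup sketch (the IQN and address are placeholders):
    • # iscsiadm add static-config iqn.1986-03.com.sun:01:example,192.168.1.10:3260
    • # iscsiadm modify discovery --static enable
    • # devfsadm -i iscsi (create device nodes for the newly visible LUNs)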

69. Creating an iSCSI Device for Windows

  • Part I - From the NAS head
  • 0. Create or use an existing sfs2 file system
  • Part II
  • 1. Configure the iSNS server
  • 2. Configure an iSCSI access group
  • 3. Create an iSCSI device
  • Note: This document assumes the Windows iSNS server and iSCSI initiator are installed on the Microsoft Windows platform
  • Information obtained from InfoDoc 85873 iSCSI basics, created by Sushil Shirke
  • Part III - From the Windows platform
  • 4. View and scan the iSCSI Initiator for the new device
  • 5. Log into the iSCSI device
  • Part IV
  • 6. Go to Disk Management
  • 7. Scan for new disks
  • 8. Create a Windows label
  • 9. Create a Windows partition
  • 10. Format the Windows file system
  • 11. View in My Computer

70. Part I Create a NAS File System

      • Note: If an sfs2 file system is already created, proceed to Part II
  • 1. Telnet into the NAS head: # telnet 129.153.118.65
  • 2. Log in as admin followed by return: connect to (? for list) ? [menu] admin
  • 3. Type menu at the NAS prompt and the menu will appear: nas5x05 > menu
    • nas5x05  StorageTek 5320 NAS Menu
    • ------------------------------------------------------------------------------
    • Operations          | Configuration            | Access Control
    • 1. Activity Monitor | A. Host Name & Network   | K. Admin Access
    • 2. Show Log         | B. Timezone, Time, Date  | L. Volume Access
    • 3. Lock Console     | C. Drive Letters         | M. Trusted Hosts
    • 4. Licenses         | D. Disks & Volumes       |
    •                     | E. Users                 | Extensions
    •                     | F. Hosts                 | U. Language Selection
    •                     |                          | V. EMAIL Configuration
    •                     | H. DNS & SYSLOGD         | W. ADS Setup
    •                     | I. NIS & NIS+            | X. CIFS/SMB Configuration
    • 0. Shutdown         | J. NS Lookup Order       | Y. RDATE time update

71. Part I Create a NAS File System

  • 4. Type the letter D to Enter Disks & Volumes
    • D. Disks & Volumes
    • Drive        Volume(s)      Available
    • A. ide1d1    /cvol /dvol    0B
    • B. isp1d040  /iscsiemea     499.1GB
    • C. isp1d041  /logs /v2      555.6GB
    • The right side shows the available blank space
  • 5. Select an isp device above and create an sfs2 file system

72. Part I Create a NAS File System

  • 6. Option B - isp1d040 was selected for this example
    • nas5x05  StorageTek 5320 NAS Configure Disk
    • Disk isp1d040  Size MB 571179  SUN CSM200_R
    • #  START SEC   SIZE SEC    TYPE  OWNER       USE%  FREE/SIZE        REQS ACTIVE
    • 1  240         122880000   sfs2  /iscsiemea  1%    57.574G/57.574G  6787+1
    • 2  122880240   1046896399  --    1046896399 sectors (499.1GB) free
    • 3  1169776639  0           --
    • 4  1169776639  0           --
    • 5  1169776639  0           --
    • 6  1169776639  0           --
    • 7  1169776639  0           --
    • 8  1169776639  0           --
    • +------------------------------------------------------------------------+
    • | 1. Edit                                                                |
    • |                                                                        |
    • | SPACE page display                                                     |
    • +------------------------------------------------------------------------+

73. Part I Create a NAS File System

  • 7. Type 1 Edit
  • 8. Use the arrow keys to go to slice number 2
  • 9. Type 1 Create partition
  • 10. Type 1 sfs2
  • 11. Create a file system name (you may want to use a name that helps with tracing)
  • 12. If compliance is installed, choose Advisory or Mandatory
  • 13. Type in the desired capacity
  • 14. Confirm the entry
  • 15. Type 7 Proceed with create

74. Part II Create an iSCSI Device

  • 1. Navigate the iSCSI menu by typing the corresponding letter to the left. Scroll through the extensions to help find the menu choice
  • 2. The iSCSI main menu appears:
      • A. Configure iSCSI LUN
      • B. Configure Access List
      • C. Configure iSNS Server
  • 3. Select C Configure iSNS Server and type 1 to edit the fields
  • 4. Enter the IP of the Microsoft iSNS server, currently (129.153.118.69)
  • 5. Type "7" Save Changes and return to the iSCSI menu
  • 6. Type option "B" Configure Access List
  • 7. Type 7 to add to a list and the access menu comes up
  • 8. Type "1" Edit Fields (the fields below are an example)
      • Name: nas5x05access
      • CHAP Initiator Name: test
      • CHAP Initiator Secret: ************
      • Initiator IQN Name:
      • Initiator IQN Name:

75. Part II Create an iSCSI Device

  • 9. Type 7 Save changes
  • 10. Go back twice to return to the iSCSI menu
  • 11. Type A Configure iSCSI LUN
  • 12. Type 1 Add a device
  • 13. Type 7 Add a LUN
  • 14. Type 1 Edit fields
  • 15. Enter an iSCSI device name. Alias is optional, and the capacity of the iSCSI device must be no larger than the sfs2 file system
    • Name: nas5x05iscsidisk1  Alias:  Volume: /iscsidisk1  Capacity: 10g  Thin: No  Access:
    • ENTER to move through fields; Save changes after the last field
    • +------------------------------------------------------------------------+
    • | Select or Add an Access for the iSCSI LUN.                             |
    • | 1. Select/Add                                                          |
    • +------------------------------------------------------------------------+
    • ESC for Menu

76. Part II Create an iSCSI Device

  • 16. Type 1 Select/Add
  • 17. Type A nas5x05access
    • Note: nas5x05access is the example iSCSI access group created in Part II
  • 18. Type 7 Save changes
  • 19. The following output will appear
    • Initialization in progress....
    • Elapsed time 0:07  6% done
    • A. iqn.1986-03.com.sun:01:00144f0f8322.44884716.nas5x05iscsidisk1
    • This will appear in the "Configure iSCSI LUN" menu once complete
  • 20. Once complete the iSCSI device should be available to the initiator on the Windows platform

77. Part III Windows Initiator (Example Using Windows Advanced Server)

  • 1. On a Windows platform, select the following:
    • Start > Programs > Microsoft iSCSI Initiator > Microsoft iSCSI Initiator
  • 2. When the iSCSI Initiator GUI displays, click on the Targets tab
  • 3. A new device should be in the list of targets
    • Below is the device in this example:
    • iqn.1986-03.com.sun:01:00144f0f8322.44884716.nas5x05iscsidisk1  Inactive
  • 4. Click the Log On button
  • 5. Check the tick box Automatically restore this connection when the system boots
  • 6. Check the tick box Enable multi-path if desired
  • 7. Click on the Advanced button
  • 8. Verify the source IP is 129.153.118.69 for this example: the IP of the iSNS server configured on the NAS head

78. Part III Windows Initiator (Example Using Windows Advanced Server)

  • 9. Click the tick box CHAP login information
  • 10. Backspace over the name automatically entered in the User name field
  • 11. Re-enter the name "test" used in this example when creating the iSCSI access group, as per the CHAP Initiator Name test that was entered above in Part II of this example
  • 12. Enter in the password created in the access group
  • 13. Click on the OK button
  • 14. The device should now show a Connected status:
    • iqn.1986-03.com.sun:01:00144f0f83322.44884716.nas5x05iscsidisk1  Connected
  • 15. The device is now available to the Windows platform as a locally attached storage device, but communicating via IP. The new device will need to be installed as per a normal Windows storage module

79. Part IV Create a Windows Partition

  • 1. Click on the "Disk Management" folder located in Computer Management
  • 2. Once the Disk Management folder is clicked, Windows will scan for devices, detect that a new device is available and request a write signature
  • 3. Click Cancel; the Disk Management GUI will load up and show all the local disks
  • 4. Right mouse click on the new disk
  • 5. Click the OK button
  • 6. Right mouse click on the disk and select Create a Partition
  • 7. Click the Next button
  • 8. Click Primary Partition
  • 9. Click the Next button for full capacity
  • 10. Click the Next button for the default assigned drive letter
  • 11. Verify the summary and then click the Next button
  • 12. Click Finish to start formatting the drive

80. Terms / Abbreviations

  • SAN - Storage Area Network
  • CDB - Command Descriptor Block
  • PDU - Protocol Data Unit
  • QoS - Quality of Service
    • (usually describes a network in terms of latency and bandwidth)
  • SNIA - Storage Networking Industry Association
  • DNS - Domain Name System
  • PLOGI - Fibre Channel Port Login
  • iSCSI - Internet Small Computer Systems Interface
  • FCIP - Fibre Channel over TCP/IP
  • iFCP - Internet Fibre Channel Protocol
  • iSNS - Internet Storage Name Service
  • WWSAN - World Wide Storage Area Network

81. Current Issues - Hot Bugs

          • Solaris iSCSI Initiator - Hot Bugs (P1-3 - All releases)
    • 6436879 iscsi panic when target violates protocol after successful target/lun reset
    • 6488627 isns-client: DevAttrReg: pg object: non-key attribute precedes key attributes
    • 6549867 kernel/driver-i iSCSI initiator panic during iSNS discovery
    • 6555580 i system panic [cpu1]/thread=ffffff0007ec5c80: assertion failed: 0, file: src/iscsi_conn.c, line: 956
    • 6559145 i Upon reboot iscsi based filesystem fails to unmount with network connection issue
    • 6559860 i System crashed while running snv_63 on V445
    • 6568295 i iSCSI MPxIO failback does not happen with EMC CLARiiON
    • 6580820 i iscsi panic while logging in to new target under isns discovery
    • 6586114 i iscsi initiator server panic while running ifconfig up/down test
    • 6598503 i isns client: DevDereg deregister the iscsi storage node only
    • 6601828 Solaris 10 (SPARC and x86) IPv6 iSCSI "discovery" level (initial setup) not finding target
    • 6602016 i panics while isns discovery is being enabled or disabled
    • 6606807 Panic while running iSCSI I/O
    • 6608820 i mount at boot fails: System goes to maintenance mode if storage is not accessible

82. Current Issues - RFEs

          • Solaris iSCSI Initiator - Hot RFEs (P1-3 - All releases)
    • 6292475 3 acc 27M kernel/driver-i persistent store should use NV_ENCODE_XDR
    • 6334890 3 dis 24M kernel/driver-i Need sysevents for property and visibility changes for all IMA object types
    • 6354411 3 dis 22M kernel/driver-i iSNS - add support for other iSNS discovery methods (DHCP/Heartbeat)
    • 6354413 3 dis 22M kernel/driver-i iSNS - add support for iSNS security
    • 6394307 2 dis 19M sajid.zia kernel/driver-i iSCSI software boot support
    • 6394308 2 dis 19M kernel/driver-i MC/S and EL1/2 support
    • 6397032 3 dis 18M kernel/driver-i bring discovered targets on in parallel
    • 6401562 3 acc 18M kernel/driver-i iSCSI security updates - addition of SASL support
    • 6425406 3 dis 16M kernel/driver-i RFE: MC/S and ERL1/2
    • 6457694 3 acc 13M kernel/driver-i adjust solaris initiator login parameters for better default performance
    • 6457702 3 dis 13M kernel/driver-i slow link iscsi throttle
    • 6497777 3 dis 43W kernel/driver-i iscsi initiator - make default connection retry duration configurable
    • 6558203 3 acc 19W kernel/driver-i iscsiadm should be able to deal with DNS names

83. Current Issues - Product Pages

    • Current issues are listed under each product page, e.g.:
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesUltra20
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesUltra20M2
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesUltra40
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesUltra40M2
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesW1100Z
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesV20z
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunFireX2100
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunFireX2100M2
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunFireX2200M2
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunFireX4100
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunFireX4500
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunFireX4600
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunBlade6000
      • http://systems-tsc/twiki/bin/view/Products/ProdIssuesSunBlade8000

84. Helpful Links

  • Monitoring MegaRAID under Windows:
    • Managing the Sun Fire[TM] V20z/V40z server with an LSI MegaRAID card in Windows
    • Document ID: 82356, Aug 31, 2005, Info Docs
  • Monitoring MegaRAID under Linux:
    • Tools and characteristics of MegaRAID in Linux
    • Document ID: 85638, May 25, 2006, Info Docs
  • Sun Supported MegaRAID tool for Linux/Windows:
    • http://www.sun.com/download/products.xml?id=45b94409
    • (This is now included in Sun supplement CDs 1.3 and above)
  • MSM-IR TOI:
    • http://systems-tsc/twiki/pub/Products/SunFireX4100ToiRef/G4F-Storage-subsystem-TOI.pdf

85. Helpful Links and Aliases

  • Aliases for iSCSI:
    • iscsi-interest - Internal iSCSI Discussions
    • iscsi-iteam - iSCSI Initiator Development and Test Group
  • Initiator project page:
    • http://dhs.central/index.php/ISCSI
  • Target project page:
    • http://dhs.central/index.php/ISCSI_Target
  • TSC-X64 home page:
    • http://systems-tsc/twiki/bin/view/Teams/GlobalX64FocusTeam
  • This and Systems-TSC CEC presentations:
    • http://systems-tsc/twiki/bin/view/Teams/ESGcec07

86.

  • Anthony McNamara & William Watson
  • [email_address]
  • [email_address]