
An Open Source approach to replication and recovery.

Current Challenges
- Minuscule IT budgets.
- Backup tapes don't meet capacity needs, or are costly.
- Most RAIDs don't protect against certain kinds of data loss.
- Need quick recovery of virtual disks and the OS.
- Stuck with the same vendor for offsite storage.

Vendor-Specific Solutions
- Costly: upwards of $60,000 for infrastructure.
- Offers features available only to like systems.
- Choose two out of three: fast, reliable, cheap.
- Yearly support costs can put a damper on most IT budgets.
- Typical usable storage capacity: 2.8 TB with a single shelf; the rest goes to snapshots and filesystem overhead.

An Open Source Solution
- OpenSolaris with ZFS and the iscsitarget software.
- Relatively inexpensive compared to third-party vendors.
- Can mix and match hardware.
- ZFS: virtually unlimited capacity!
- Room for technical creativity when building scripts.

Cons of OpenSolaris iSCSI
- Not supported by VMware (yet).
- OS and filesystem learning curve.
- Limited to 1 Gb/s of bandwidth per SAN* (until 10 Gb/s is released).
- No support for MTU 9000 (jumbo frames).

Necessary Tools for Success
- ZFS file system: /sbin/zpool and /sbin/zfs; snapshots.
- iscsitadm (pkg add).
- SSH keygen/PGP (see the key-setup sketch after this list).
- A replication script (e.g. zfs-replicate.sh).
- Cron.
- Mail.
- Enable VMware LVM/snapshot support.
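Passwordless SSH between the local and remote SAN hosts is what lets the replication job run unattended from cron. A minimal sketch of the key setup, using the lev account and the siscsi-sas/piscsi-sas hosts that appear in the mbuffer example later; paths and key type are illustrative only:

    # Generate a key pair on the local SAN (no passphrase, so cron can use it).
    lev@siscsi-sas:~$ ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

    # Append the public key to the remote SAN's authorized_keys.
    lev@siscsi-sas:~$ cat ~/.ssh/id_rsa.pub | ssh lev@piscsi-sas 'mkdir -p ~/.ssh; cat >> ~/.ssh/authorized_keys'

    # Verify that a non-interactive login now works.
    lev@siscsi-sas:~$ ssh lev@piscsi-sas /bin/true && echo "key login OK"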

What is Zpool/ZFS?
- ZFS is a file system designed by Sun Microsystems for the Solaris Operating System. Features include support for high storage capacities (16 exabytes per pool, or 16 million terabytes), snapshots and copy-on-write clones, continuous integrity checking (256-bit checksums of every block) and automatic repair (scrubbing), RAID-Z, and ACLs. (From Wikipedia.)
  http://opensolaris.org/os/community/zfs/whatis/
- RAID 0 through 5: no protection against silent disk corruption and bit rot.
  http://blogs.sun.com/bonwick/entry/raid_z
- Zpool: disk pool creation, I/O statistics, and health display (build RAID-Z, RAID-Z2, JBODs, stripes, mirrors, striped mirrors, etc.); a short sketch follows below.
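A minimal sketch of pool creation and health checking with /sbin/zpool, assuming the sd pool name used in the mbuffer example later and hypothetical disk names (c1t0d0 through c1t5d0):

    # Create a RAID-Z pool named sd from six hypothetical disks.
    /sbin/zpool create sd raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0

    # Show I/O statistics and overall pool health.
    /sbin/zpool iostat -v sd
    /sbin/zpool status sd

    # Scrub the pool to verify every block's checksum and repair silent corruption.
    /sbin/zpool scrub sd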

OpenSolaris iscsitarget
- Uses industry-standard iSCSI target and initiator calls.
- Extremely easy to use and integrates seamlessly with Zpool/ZFS.
- Allows the creation of soft-provisioned (sparse) disks.
- ACL support; deny unwanted initiators.
- iscsitadm list target -v : details of each currently active target. A target-creation sketch follows below.
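A minimal sketch of carving a zvol out of the pool and exposing it as an iSCSI target; the volume name sd/vmware-lun0, the target label, and the size are assumptions for illustration:

    # Create a 500 GB sparse (soft-provisioned) volume in the sd pool.
    /sbin/zfs create -s -V 500g sd/vmware-lun0

    # Expose the zvol as an iSCSI target backed by its raw device node.
    iscsitadm create target -b /dev/zvol/rdsk/sd/vmware-lun0 vmware-lun0

    # Confirm the target is active and note its IQN for the VMware initiators.
    iscsitadm list target -v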

Replication Script; Mail; Cron
- Many replication scripts are available on the internet.
- I modified a script from the web (author unknown) to fit my requirements. Still a work in progress, but it works (zfs-replicate.sh). A sketch of the idea follows below.
- Use cron to execute zfs-replicate.sh on a schedule.
- Mail the results to your disk admins.
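The original zfs-replicate.sh came from the web, so the following is only a minimal sketch of what such a script typically does, assuming the sd/Linux dataset, the pdrive pool on piscsi-sas, and the lev account from the examples in these slides; the dataset names, admin address, and script path are placeholders:

    #!/usr/bin/bash
    # zfs-replicate.sh (sketch): snapshot a dataset, ship the delta offsite, mail the result.
    DATASET=sd/Linux
    REMOTE=lev@piscsi-sas
    REMOTE_DATASET=pdrive/Linux
    ADMIN=diskadmins@example.edu

    NEW=$(date +%Y%m%d%H%M)
    # Most recent existing snapshot (timestamped names sort chronologically).
    LAST=$(/sbin/zfs list -H -t snapshot -o name -r $DATASET | tail -1 | cut -d@ -f2)

    /sbin/zfs snapshot $DATASET@$NEW

    # Incremental send if a previous snapshot exists, otherwise a full send.
    if [ -n "$LAST" ]; then
        /sbin/zfs send -i $DATASET@$LAST $DATASET@$NEW | ssh $REMOTE /sbin/zfs recv $REMOTE_DATASET
    else
        /sbin/zfs send $DATASET@$NEW | ssh $REMOTE /sbin/zfs recv $REMOTE_DATASET
    fi

    echo "Replication of $DATASET@$NEW finished with exit code $?" | mailx -s "zfs-replicate" $ADMIN

A nightly cron entry for it might look like:

    0 1 * * * /export/home/lev/zfs-replicate.sh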

VMware Enterprise
- Configure VMware for iSCSI support.
- Add targets to the VMware host adapters.
- Enable LVM snapshot support: the GUI is far simpler than the console. A console sketch follows below.
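On the ESX 3.x service console, enabling the software iSCSI initiator and pointing it at the OpenSolaris target could look roughly like this; the vmhba number is hypothetical and varies per host, and the vmkiscsi-tool discovery syntax is an assumption rather than something taken from these slides:

    # Enable the ESX software iSCSI initiator.
    esxcfg-swiscsi -e

    # Add the OpenSolaris SAN as a send-targets discovery address (assumed syntax).
    vmkiscsi-tool -D -a 172.32.66.11 vmhba40

    # Rescan the software iSCSI adapter so the new LUNs appear.
    esxcfg-rescan vmhba40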

Snapshots: How To
- Reserve up to 40% of the maximum pool capacity for snapshots, e.g. a 10 TB pool: 6 TB data, 4 TB snapshots. Admin discretion; it may be less depending on LUN configuration.
- Snapshots can consume up to the used capacity of the LUN/partition.
- Making a snapshot is a breeze:
  /sbin/zfs snapshot pool/partition@snapshot
- Send the snapshot over to the remote site (an incremental variant follows after this list):
  /sbin/zfs send pool/partition@snapshot | ssh username@host \
      /sbin/zfs recv pool/partition
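After the first full send, later replication passes only need to ship the delta between two snapshots. A minimal sketch with zfs send -i, reusing the placeholder pool/partition names above:

    # Take a second snapshot after more data has been written.
    /sbin/zfs snapshot pool/partition@snapshot2

    # Send only the changes between the two snapshots to the remote copy.
    /sbin/zfs send -i pool/partition@snapshot pool/partition@snapshot2 | \
        ssh username@host /sbin/zfs recv pool/partition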

Replication Types
- ZFS send/recv with SSH (avg. speed: 25 MB/s with 256-bit encryption). The slowest replication, but secure and doesn't require a dedicated network. Different ciphers provide different throughput; stunnel may be quicker.
- ZFS send/recv with mbuffer (speeds: 80 to 120 MB/s; avg. 360 GB/hr).
  lev@siscsi-sas:~$ zfs send sd/Linux@1 | mbuffer -s1024k -m 1500m -O 172.32.66.10:9999   # LOCAL SITE
  lev@piscsi-sas:~$ mbuffer -m 1500M -s1024k -I 172.32.66.11:9999 | zfs recv pdrive/Linux  # REMOTE SITE
  http://www.maier-komor.de/mbuffer.html
  Requires enough system memory (4 GB or more) to support the large buffers. Unencrypted; requires the network paths to be trusted.
- ZFS send/recv with netcat/rsh (avg. speed: 35 MB/s). Insecure data copy, included just for comparison; a netcat sketch follows below.
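A minimal sketch of the netcat variant, assuming the same snapshot, hosts, and port as the mbuffer example; nc option syntax differs between netcat builds, so treat the flags as illustrative:

    # REMOTE SITE: listen on port 9999 and feed whatever arrives into zfs recv.
    lev@piscsi-sas:~$ nc -l -p 9999 | zfs recv pdrive/Linux

    # LOCAL SITE: stream the snapshot, unencrypted, to the remote listener.
    lev@siscsi-sas:~$ zfs send sd/Linux@1 | nc -w 20 172.32.66.10 9999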

Data Recovery
- Activate the offsite VM ESX host.
- Access the offsite disk storage LUNs: add the iSCSI remote host to VirtualCenter.
- Re-add the VM disks (LUNs) to the offsite VM infrastructure.
- GUI: Advanced Features: LVM: EnableResignature. CLI: /proc/vmware/config/LVM/EnableResignature (see the sketch below).
- Resignatured LUNs will appear automatically in the storage section of VirtualCenter.
- Re-add VM guests from the resignatured LUNs.
- Offsite LUNs can also be added to the primary VM site for data recovery or testing...
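From the ESX 3.x service console, flipping the resignature flag can be done roughly as follows; the esxcfg-advcfg route is an assumption based on standard ESX tooling, the /proc path is the one from the slide, and the vmhba number is hypothetical:

    # Allow ESX to resignature the replicated VMFS volumes.
    esxcfg-advcfg -s 1 /LVM/EnableResignature

    # Equivalent via the /proc interface mentioned above.
    echo 1 > /proc/vmware/config/LVM/EnableResignature

    # Rescan the iSCSI adapter so the resignatured LUNs show up in VirtualCenter.
    esxcfg-rescan vmhba40

    # Turn the flag back off once the volumes have been re-added.
    esxcfg-advcfg -s 0 /LVM/EnableResignature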

Questions?

Tano Simonian
Email: tanniel@ucla.edu
(ofc) 310 794 9669
