TRANSCRIPT
Filer Training
Simple
Fast
Reliable
Agenda
(Part 1) Filer Fundamental Introduction
1. NetApp Storage Introduction
2. NetApp Storage Overview
(Break Time)
3. FAS2050 Spec Introduction
FAS2020 Spec Introduction
4. FAS2050 Fundamental Setup
(Break Time)
(Part 2) Filer Configuration
1. NetApp FilerView Management
(Break Time)
2. NetApp FC & SnapDrive Configuration
3. NetApp SyncMirror Configuration
(Break Time)
Filer Fundamental Introduction
(Part 1) Filer Fundamental Introduction
1. NetApp Storage Introduction
2. NetApp Storage Overview
3. FAS2000 Spec Introduction
4. FAS2000 Fundamental Setup
Filer Fundamental Introduction
NetApp Storage Introduction
2009 NetApp Storage Product
[Product positioning chart, 2009: NetApp primary storage and nearline storage families by maximum raw capacity; availability increases toward the higher-end models]
- Office or department (standalone): FAS2020/FAS2020A 68 TB; FAS2050/FAS2050A 104 TB
- Mid-range: FAS3020 168 TB; FAS3040/FAS3040C 336 TB; FAS3140/FAS3140A (new) 420 TB; FAS3070/FAS3070A 504 TB; FAS3170/FAS3170A (new) 840 TB
- High-end: FAS6040/FAS6040A 840 TB; FAS6080/FAS6080A 1176 TB
- NearLine storage (VTL): VTL300 70 TB; VTL700 336 TB; VTL1400 672 TB
Hardware expansion flexibility with no data movement at all: swapping the controller head upgrades the system directly to a higher-end model. Data is unaffected, and no backup/restore through another storage system or tape system is needed.
Gigabit Ethernet front end: multiple Ethernet channels. Back end: multiple FC-AL channels.
System hardware upgrades never require data conversion; you can move up to a higher-end system without time-consuming data migration.
Upgrade Paths
NetApp FAS upgrade path (maximum raw capacities as shown in the chart): FAS250 (4TB) → FAS270 (16TB)* → FAS2020 (65TB) → FAS3020 (84 TB) → FAS3070 (504TB) → FAS6040 (840 TB) → FAS6070 → FAS6080 (1176TB). The chart groups the upper models into the FAS3000 Series and FAS6000 Series.
* Disk shelf conversion
• Simple "head swap" • Flexible upgrade options • Zero data migration • Investment protection
Competitor upgrades require time-consuming data migration, for example:
EMC: DMX → DMX3, CX → DMX/DMX3, CX → CX/CX3, AX → CX3
IBM: DS → DS
HDS: Thunder → Lightning, AMS/WMS → USP, WMS → AMS
HP: EVA → EVA, XP → XP, EVA → XP, MSA → EVA/XP
NetApp Unified Storage operates in any SAN architecture
[Diagram: NetApp Unified Storage (Fabric Attached Storage) across the host, fabric, and storage layers]
- Host layer: database, email, and application servers use block protocols (SCSI); file-sharing, home-directory, and web/streaming servers use file protocols (NFS, CIFS, DAFS)
- Fabric layer: FC-SAN (FCP over dedicated Fibre Channel) and IP-SAN (iSCSI, IP over GbE) carry block traffic; the corporate LAN (Ethernet) carries NFS/CIFS file traffic
- Storage layer: one unified storage system with FC disk and S-ATA disk serves both FC storage and IP storage
SAN (Storage Area Network)
Data storage should use different access models according to the characteristics of the data.
Comparison of DAS, SAN, and NAS
[Diagram: DAS vs. SAN (block) vs. NAS (file)]
- DAS: the application server keeps its own file system and reaches the RAID storage directly over SCSI or FCP
- SAN (block): the application server keeps its file system; the RAID storage is reached through an FC switch (FCP) or an Ethernet switch (iSCSI) and its infrastructure
- NAS (file): the application server accesses files over NFS or CIFS through an Ethernet switch; the file system and RAID reside on the storage system
Comparison of the two types of SAN
[Diagram: DAS vs. FC-SAN vs. IP-SAN]
- DAS: the application server's file system reaches the RAID storage directly over SCSI or FCP
- FC-SAN: the application server's file system reaches virtualized RAID storage through an FC switch using FCP
- IP-SAN: the application server's file system reaches the same virtualized RAID storage through an Ethernet switch using iSCSI (or FCP)
[Diagram: positioning — SAN for enterprise workloads, with block access over dedicated Fibre Channel or iSCSI on the LAN; NAS for departmental workloads, with file access over CIFS and NFS on the Ethernet LAN]
[Product families: FAS2000, FAS3000, FAS6000, and V-Series (which virtualizes third-party arrays from HP, EMC, and HDS)]
Unified Multiprotocol Storage
Consolidate file and block workloads into a single system
NAS: CIFS, NFS
SAN: iSCSI, FCP
Adapt dynamically to performance and capacity changes
Multivendor storage support with V-Series
NetApp Unified Storage
[Diagram: NetApp Unified Storage deployment across a corporate data center, a regional data center, and a regional office, connected over the WAN]
- Corporate data center: a NetApp FAS series system serves UNIX®, Linux®, and Windows® servers (Exchange, CRM, ERP, SQL Server) over the FC SAN, iSCSI, and CIFS/NFS on the LAN, hosts home directories / network shares, and backs up to a tape library
- Regional data center: a NetApp FAS series system serves Windows servers running Exchange & SQL Server over iSCSI on the LAN
- Regional office: a NetApp FAS series system serves Exchange and home directories over CIFS on the LAN
Filer Fundamental Introduction
NetApp Storage Overview
RAID-DP
FlexVol
NVRAM
Snapshot
SnapRestore
FlexClone
SyncMirror
Cluster
NetApp RAID-DP Structure
The History of RAID
University of California at Berkeley, 1987. Originally: Redundant Array of Inexpensive Disks. Now: Redundant Array of Independent Disks.
Goals of RAID: larger capacity, higher performance, better reliability, higher availability.
RAID levels:
0: Striping (no protection)
1: Mirroring
2: ECC bit-level checksum
3: Byte-level striping, dedicated parity disk (single user)
4: Block-level striping, dedicated parity disk (multi-user)
5: Block-level striping, distributed parity (multi-user)
Levels 3, 4, and 5 use the XOR (exclusive OR) algorithm to compute parity.
RAID 0 – Striping
Data is striped across all disks, giving the highest performance. If any one disk fails, all data is lost; there is no data protection, so the risk is higher than with a single disk.
[Diagram: disks D1, D2, D3 holding stripes A1 A2 A3 / B1 B2 B3 / C1 C2 C3]
RAID 1 – Mirroring
Protects against data loss from any single disk failure. Requires twice the storage space. Read performance is slightly better than a single disk. Highest cost.
[Diagram: disks D1 and D2, with each block (A1, B1, C1) written to both disks]
RAID 3 – Striping + Single Parity Drive
Protects against data loss from any single disk failure (including the parity disk). The XOR of the data is written to a dedicated parity disk. Stripe unit: byte level. High read/write performance. Suited to single-user environments, especially multimedia audio/video editing; not suited to multi-user environments.
[Diagram: data disks D1, D2, D3 with bit rows 0 1 0 / 1 1 0 / 1 1 1 and a dedicated parity disk P holding 1 / 0 / 1, the XOR of each row]
RAID 4 – Striping + Single Parity Drive
Protects against data loss from any single disk failure (including the parity disk). The XOR of the data is written to a dedicated parity disk. Stripe unit: block level. Suited to multi-user environments; good read performance. In the standard design the parity disk becomes the write-performance bottleneck.
[Diagram: data disks D1, D2, D3 with bit rows 0 1 0 / 1 1 0 / 1 1 1 and a dedicated parity disk P holding 1 / 0 / 1]
D D D D P
1 0 1 1 1
0 1 1 0 0
1 0 0 0 1
1 1 0 1 1
RAID 4 – adding a disk is easy
XOR -eXclusive OR
A B XOR
0 0 0
0 1 1
1 0 1
1 1 0
D D D D D P
1 0 1 1 _ 1
0 1 1 0 _ 0
1 0 0 0 _ 1
1 1 0 1 _ 1
(a new, still-empty disk column has been added; the existing data and parity are unchanged)
RAID 4 – adding a disk is easy
D D D D D P
1 0 1 1 0 1
0 1 1 0 0 0
1 0 0 0 0 1
1 1 0 1 0 1
RAID 4 – adding a disk is easy: the new disk only needs to contain all "0"s, so a new disk can be added to the RAID group at any time without running a risky array rebuild!!
“Stair-steps”
RAID 5 – Striping + Distributed Parity
Protects against data loss from any single disk failure. Data and the XOR parity values are distributed across all of the disks. Stripe unit: block level. Suited to multi-user environments. Write performance is poor compared with the other RAID levels.
[Diagram: RAID 5 layout — data to be stored: 001110111100100; the data blocks and the parity blocks (P) are rotated across disks D1-D4 so that no single disk holds all of the parity]
RAID 5 – adding a disk requires downtime or waiting
The RAID XOR parity must be recalculated, so a single disk cannot be added without waiting.
[Diagram: RAID 5 after a fifth disk (D5) is added — the data and the rotated parity blocks must be redistributed across D1-D5 and all parity recalculated]
NetApp RAID-DP™
RAID-DP™ (Double Parity, also called Diagonal Parity)
Combined with WAFL® (Write Anywhere File Layout) and NVRAM, its performance overcomes the traditional RAID 4 problem. Disks can be added with no downtime and no waiting, and are usable immediately. RAID-DP safety is equivalent to RAID 6: more than 4000 times safer than RAID 5, while RAID-DP performance is better than RAID 6.
Diagonal dual parity (a RAID 6 implementation): RAID 4 extended by one additional parity disk. Patented low-overhead technology.
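As a rough illustration (not part of the original slides; the aggregate name and disk counts are made up), creating and growing a RAID-DP aggregate on a Data ONTAP 7-mode system looks roughly like this:

netapp1> aggr create aggr1 -t raid_dp -r 16 16
(create aggregate "aggr1" using RAID-DP, RAID group size 16, from 16 spare disks)
netapp1> aggr status -r aggr1
(show the RAID layout: the data disks plus the parity and dparity disk in each RAID group)
netapp1> aggr add aggr1 2
(add two more disks online, with no downtime and no array rebuild)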
NetApp FAS
High performance
RAID 0+1 and RAID 1+0
For environments that need both high data safety and high performance, RAID 1+0 is the usual recommendation, but it is expensive to build and not as safe as RAID 6: it cannot protect data against the failure of *any* two disks.
RAID 01, 0+1, 0/1: stripe first (RAID 0), then mirror (RAID 1).
RAID 10, 1+0, 1/0: mirror first (RAID 1), then stripe (RAID 0).
RAID 0+1, 01, 0/1
[Diagram: RAID 0+1 — two RAID 0 stripe sets (D1-D4 and D5-D8) mirrored by RAID 1; blocks A1-A4 are written to both stripe sets]
RAID 0+1, 01, 0/1
[Diagram: RAID 0+1 with one disk failed — the whole stripe set containing it goes offline, so during the RAID rebuild the RAID 1 layer provides no further data protection]
RAID 0+1, 01, 0/1
[Diagram: RAID 0+1 with one disk failed in each stripe set — both stripe sets are offline and the data is lost; RAID 0+1 still cannot protect against the failure of *any* two disks]
RAID 1+0, 10, 1/0
[Diagram: RAID 1+0 — four RAID 1 mirrored pairs (D1/D1' … D4/D4') striped together by RAID 0]
RAID 1+0, 10, 1/0
[Diagram: RAID 1+0 with one disk failed — during the rebuild the affected mirrored pair has no further protection]
RAID 1+0, 10, 1/0
[Diagram: RAID 1+0 with both disks of one mirrored pair failed — the data is lost; RAID 1+0 still cannot protect against the failure of *any* two disks]
RAID 6 – Distributed Double Parity
Computes one more parity value per stripe than RAID 5. Stripe unit: block level. Suited to multi-user environments. Read/write performance is worse than RAID 5, and a single disk cannot be added without downtime or waiting.
[Diagram: RAID 6 layout across disks D1-D4 — data blocks (A0-A4, B0-B4, C2-C3, …) with two rotated parity blocks per stripe (P0-P4 and Q0-Q4) distributed across the disks]
Six Disk “RAID-DP” Array
[Figure sequence: a six-disk RAID-DP array — four data disks (D), one row parity disk (P), and one diagonal parity disk (DP); the example stripe begins with data 3, 1, 2, 3 and row parity 9]
- Start with simple RAID 4 parity: each row-parity block is computed across the data blocks in its row
- Add "diagonal parity": each diagonal-parity block is computed across a diagonal of blocks that spans the data disks and the row-parity disk
- Fail one drive, then fail a second drive
- Recalculate missing blocks from diagonal parity, then from row parity (standard RAID 4), alternating until both failed drives are fully reconstructed
- The rest of the block: diagonals everywhere
Characteristics of the various RAID levels (RAID 0, 1, 0+1 / 1+0, 4, 5, 6, and WAFL RAID-DP), compared on:
- Economy: data protection at the lowest cost
- Performance: the highest read and write performance
- Expansion: one or more disks can be added dynamically at any time, without waiting
- Safety: protection against any single disk failure
- Safety during rebuild: protection against the failure of *any* two disks
The advantage of NetApp RAID-DP: economy + performance + expansion + safety + safety during rebuild, all in one.
[Chart: disk failure and data-loss probabilities, FC vs. ATA drives (*Source: Network Appliance)]
- Average annual disk failure rate: up to 5%
- *Probability of a disk sector error (300GB FC / 320GB SATA drives): up to 2.6%
- *Probability that RAID 3/4/5 loses data because a sector error occurs during a disk rebuild (8-disk RAID group): up to 17.9%
- *Probability that RAID-DP loses data because two further sector errors occur during a data-disk rebuild (16-disk RAID group): less than 1 in 10 billion (.0000000001%)
Protected with RAID-DP, data is still protected even during a RAID rebuild.
FlexVol
NetApp space utilization that spans traditional RAID groups
[Diagram: an aggregate built from RAID groups RG 0 (72G), RG 1 (144G), and RG 2 (300G), with FlexVol 1, FlexVol 2, and FlexVol 3 carved out of the aggregate]
Flexible Volume: a space-allocation model that spans traditional RAID groups, greatly improving both performance and space utilization. This internal virtualization gives hosts true storage virtualization without installing any software.
ONTAP 7G flexvols: Volumes are logically striped across all disks in the aggregate
Striping volumes across all disks maximizes performance across all volumes.
Volume Structure: Traditional vs. ONTAP 7G
“Traditional” (pre-7G) volumes: Volumes are physically bound to raidgroup(s).
[Diagram: traditional volumes (tradvol 1, tradvol 2) are each physically bound to their own raidgroup(s); flexible volumes (flexvol 1, flexvol 2) both draw on one aggregate that spans several raidgroups]
Flexvols can be instantly grown or contracted within the aggregate, and can be configured to grow automatically when triggered by a low-space condition.
One or more writeable flexclones can be instantly created against any flexvol.
[Diagram: with Data ONTAP 7.0, FlexVol™ and FlexClone™ volumes are not tied to physical storage; they draw on pooled physical storage (an aggregate of disks). Without Data ONTAP 7.0, traditional volumes are tied directly to physical storage.]
[Diagram: before 7G, each application's traditional volume strands wasted space, giving low utilization (~20-30%); after 7G, flexible virtualized storage leaves free space available for any application, giving high utilization (~60-80%)]
Leverage all of your disks with ONTAP 7G, doubling utilization: flexible virtualized storage.
[Chart: improved access performance — NetApp internal virtualization increases performance substantially compared with the traditional approach and the industry's best benchmark]
FlexVol volumes separate the space visible to users from the physical disks and increase control over space allocation:
– Flexible provisioning
– Higher utilization
– Higher granularity
[Diagram: 1 TB of physical storage presented as 2 TB of FlexVol® volumes (container-level soft allocation: 300GB, 200GB, 200GB, 50GB, 150GB, 100GB) and over-provisioned LUNs (application-level soft allocation: 1 TB and 800 GB)]
The NetApp 2.0 approach – on-demand space allocation isolates applications from the physical storage space.
Instant, dynamic, online file-system growth
This is dynamic online file-system growth: capacity can be increased (or decreased) by as little as a few MB or as much as several TB at a time, and the new capacity is usable immediately with no waiting. System operation and performance are unaffected, and the file system never has to be rebuilt. For UNIX, mount-point settings do not change; for Windows, network-drive settings do not change.
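A minimal 7-mode sketch of growing and shrinking a flexible volume online (the volume and aggregate names and the sizes are hypothetical):

netapp1> vol create vol1 aggr1 200g
(create a 200 GB flexible volume in aggregate aggr1)
netapp1> vol size vol1 +50g
(grow the volume by 50 GB; the new space is usable immediately)
netapp1> vol size vol1 -20g
(shrink the volume by 20 GB, also online)
netapp1> vol autosize vol1 -m 400g -i 10g on
(optionally let the volume grow itself in 10 GB steps, up to 400 GB, when it runs low on space)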
Instant, dynamic, online expansion of the maximum file count
Without changing the file-system capacity, the number of inodes (the maximum number of files the volume can hold) can be increased at any time.
This avoids the situation where the file count reaches the volume's limit and no more files can be stored even though free space remains. System operation is completely unaffected.
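As a sketch, assuming a 7-mode system and a hypothetical volume name, the inode limit is raised with the maxfiles command:

netapp1> maxfiles vol1
(display the current maximum number of files (inodes) for vol1 and how many are in use)
netapp1> maxfiles vol1 2000000
(raise the limit to 2,000,000 files; the increase takes effect immediately and generally cannot be lowered afterwards)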
NVRAM
The NVRAM mechanism – faster than local disk, so it speeds up response time.
NVRAM plays the same role as the log of an OLTP database.
WAFL Combined with NVRAM
WAFL uses NVRAM “consistency points” (NetApp’s flavor of journalling), thus assuring filesystem integrity and fast reboots.
CP flush to disk occurs once every 10 seconds or when NVRAM reaches half full.
NVRAM placement is at the file system operation level, not at the (more typical) block level. This assures self-consistent CP flushes to disk.
No fsck!
NetApp Storage Overview
Snapshot
NetApp Snapshot™ Technology
[Figure sequence: blocks in the LUN or file vs. blocks on the disk]
- Take snapshot 1: copy pointers only, no data movement — Snap 1 references the same blocks A, B, C as the active file system
- Continue writing data: block B is rewritten as B1 in the active file system, but the original block B remains on disk because Snap 1 still points to it
- Take snapshot 2: copy pointers only, no data movement — Snap 2 references A, B1, C
- Continue writing data (C becomes C2), then take snapshot 3 — Snap 3 references A, B1, C2, while Snap 1 and Snap 2 keep their original views
Simplicity of the model = best disk utilization, fastest performance, unlimited snapshots
A 90MB Volume with 20% Snap Reserve
[Diagram: total capacity = 90MB, split into ~72MB (80%) active filesystem and ~18MB (20%) snap reserve]
Start with an empty volume
• Start with an empty 90MB vol1
• Snap Reserve starts at the standard 20%
• Changing snap reserve to zero allocates all blocks to the active file system (AFS)
• However, the net available blocks do not change
Add 9MB to vol1
• Copy 9MB to vol1
• Total used capacity = 9.3MB / 92MB cap. = 10%
• Snap Reserve left at 0% to illustrate this math
• (Note: reserve=0 is standard in a LUN config)
Set Back to 20% Reserve
• Return Snap Reserve to the standard 20%
• 92MB total – 20% snap reserve (18MB) = ~74MB
• Thus capacity is now shown as 9/74 = 13% in the AFS, not 10% as on the prior slide
Take a Snap
• Numbers barely change because of the low overhead
• But the used blocks are now "locked down" by the snap
Before snap
After snap
Change all blocks in AFS
• An easy way to do this is to delete the 9MB dataset
• AFS capacity goes back to 0%, as it should
• Snapped (R/O) blocks must now be accounted for in the snap area, not the AFS
• Thus snap capacity shows 9MB / 18MB total = 50% capacity
Before deletion
After deletion
Copy 9MB dataset again
• Copy the 9MB dataset again
• Both the AFS and the snap area now contain 9MB each
• Hence back to 13% in the AFS, as before …
• … but the Snapshot numbers do not change
Before copy
After copy
Snap list command
The %/used column: shows the space consumed by snapshots divided by the disk space currently in use in the volume, regardless of whether that used space is due to AFS blocks or snapshot-protected blocks. The first number is cumulative for all snapshots listed so far; the second number is for the specified snapshot alone.
The %/total column: The %/total column shows space consumed by snapshots divided by total disk space (both blocks used and free blocks available) in the volume.
Snap list command (cont’d)
• %/used uses the blocks in use as the divisor, e.g. 9M/18M = 50%
• %/total uses the total blocks as the divisor, e.g. 9M/90M = 10%
Take a Second Snap
• Again the numbers barely change because of the low overhead
• But a second set of 9MB of blocks is now locked by the 2nd snap
Before snap
After snap
Snap list command with 2 snaps
• mysnap2 shows zeros because the AFS has not yet changed.
• mysnap still shows its original percentages.
Change all blocks in AFS again
• Delete the 9MB dataset again
• AFS capacity again goes back to 0%, as it should
• Snapped (R/O) blocks must again be accounted for in the snap area, not the AFS
• Thus snap usage is (9MB (1st snap) + 9MB (2nd snap)) / 18MB reserve = 100% capacity
Before deletion
After deletion
Snapped Blocks May Encroach on AFS
[Diagram: with Snap Reserve = 20%, snapshot blocks have no hard limit and may encroach upon usable AFS space (e.g. the snapshot area growing from 20% to 22% while the AFS shrinks from 80% to 78%), but AFS blocks may not encroach upon the snap reserve area]
Snapshots never move blocks, so they use the least space and are the most efficient; every volume can keep up to 255 snapshot backups.
The snapshot backup area is protected by RAID, so a single disk failure does not cause data loss.
Administrators can delete the backup from any snapshot point at any time without affecting the contents of the other snapshots. Snapshots can be scheduled on mixed hourly, daily, and weekly cycles, or taken on demand at any time, and the percentage of space reserved for snapshots (0-50%) can be adjusted dynamically at any time.
Adjusting the reserve does not lose existing snapshot contents. When the configured reserve limit is exceeded a warning is issued, but snapshots still work normally and existing snapshot contents are not overwritten.
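A hedged 7-mode example of the schedule, on-demand snapshot, and reserve settings described above (the volume and snapshot names are placeholders):

netapp1> snap sched vol1 0 6 8@8,12,16,20
(keep 0 weekly, 6 nightly, and 8 hourly snapshots, the hourly ones taken at 08:00, 12:00, 16:00, and 20:00)
netapp1> snap create vol1 before_upgrade
(take an on-demand snapshot at any time)
netapp1> snap reserve vol1 15
(change the snapshot reserve from the default 20% to 15%; existing snapshots are kept)
netapp1> snap list vol1
(list the snapshots together with the %/used and %/total figures discussed earlier)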
Snapshot
NetApp Storage Overview
SnapRestore
SnapRestore® can revert an entire file system within seconds, with no limit on its size.
[Diagram: the file SOME.TXT occupies disk blocks A, B, C; after the application damages it, the active file system points to A, B, C´ while the Snapshot.0 backup still points to the original A, B, C — the damage affects only block C´]
[Diagram: after SnapRestore, the active file system points to blocks A, B, C again]
With a single command, the entire file system (or a single file) is instantly reverted to the backup taken at a chosen snapshot point in time.
SnapRestore
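A minimal sketch of that single command in 7-mode syntax (the volume, snapshot, and file names are placeholders; SnapRestore is a licensed option, and the revert discards changes made after the chosen snapshot):

netapp1> snap restore -t vol -s before_upgrade vol1
(revert the entire volume vol1 to the "before_upgrade" snapshot in seconds, regardless of size)
netapp1> snap restore -t file -s before_upgrade /vol/vol1/SOME.TXT
(revert only a single file to its state in that snapshot)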
SnapMirror
SnapMirror Defined
SnapMirror replicates a filesystem on one NetApp controller to a read-only copy on another controller (or within the same controller)
Based on rolling Snapshots, only changed blocks between snapshots are copied once initial mirror is established
Asynchronous or synchronous operation
Runs over IP or FC
Data is accessible read-only at remote site
Live read-write is achieved by breaking mirror
SnapMirror Function
[Diagram: SnapMirror function — Step 1: Baseline: a baseline copy of the source volume(s) is transferred from the Source to the Target over the LAN/WAN. Step 2: Updates: periodic updates transfer only the changed blocks. SAN- or NAS-attached hosts continue writing to the source and receive immediate write acknowledgement throughout.]
SnapMirror Baseline
[Diagram: Snapshot A is taken on the source volume and a baseline transfer copies it to the target volume]
SnapMirror Baseline (cont'd)
[Diagram: the baseline transfer completes — the target file system is now consistent and a mirror of the Snapshot A file system; the source file system continues to change during the transfer; Snap A is the common snapshot held on both volumes]
SnapMirror Incremental
[Diagram: Snapshot B is taken on the source and an incremental transfer sends only the blocks changed since the common snapshot (Snap A); when it completes, the target volume is consistent and a mirror of the Snapshot B file system, and Snap B becomes the new common snapshot]
SnapMirror Incremental (cont'd)
[Diagram: Snapshot C repeats the cycle — after the incremental transfer completes, the target is a mirror of the Snap C file system; the latest mirror snapshot rolls forward and the trailing mirror snapshot is deleted]
Configuring SnapMirror is a Snap!
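A hedged 7-mode sketch of the steps above (the filer names netapp1/netapp2 and volume names are hypothetical; the snapmirror license is needed on both systems):

netapp1> options snapmirror.access host=netapp2
(on the source, allow the destination filer to pull transfers)
netapp2> vol create vol1_mir aggr1 200g
netapp2> vol restrict vol1_mir
(the destination volume must be restricted before the baseline)
netapp2> snapmirror initialize -S netapp1:vol1 netapp2:vol1_mir
(Step 1: baseline transfer of the source volume)
netapp2> snapmirror update netapp2:vol1_mir
(Step 2: transfer only the blocks changed since the common snapshot; updates are usually scheduled in /etc/snapmirror.conf, e.g. "netapp1:vol1 netapp2:vol1_mir - 0 * * *" for hourly updates)
netapp2> snapmirror break netapp2:vol1_mir
(make the mirror read-write for DR or testing)
netapp2> snapmirror resync netapp2:vol1_mir
(later, roll the mirror forward again from the common snapshot)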
SnapMirror Applications
Data replication for local read access at remote sites – slow access to corporate data is eliminated
Offload tape backup CPU cycles to the mirror
Isolate testing from the production volume – ERP testing, offline reporting
Cascading mirrors – replicated mirrors on a larger scale
Disaster recovery – replication to a "hot site" for mirror failover and eventual recovery
Data Replication for Warm Backup/Offload
For corporations that have a warm backup site or need to offload backups from production servers
For generating queries and reports on near-production data
[Diagram: production sites (read & write) replicate over the MAN/WAN to a backup site, where the read-only mirror is used for backups to a tape library]
Isolate Testing from Production
The target can temporarily be made read-write for application testing, etc. The source continues to run online. Resync forward after re-establishing the mirror relationship.
[Diagram: the production volume is SnapMirrored to a backup/test volume; the Snap C incremental transfer is stopped (the mirror is broken), the test copy is used read & write, and a SnapMirror resync then rolls the mirror forward. (Resync backward works similarly in the opposite direction.)]
SnapMirror Modes
- Synchronous SnapMirror: every write is replicated before it is acknowledged — no data loss exposure; replication distance < 100 km; some performance impact
- Semi-Synchronous SnapMirror: writes are acknowledged immediately and replicated moments later — seconds of data exposure; no performance impact
- Asynchronous SnapMirror: changed blocks are transferred at set intervals — 1 minute to hours of data exposure; no distance limit; no performance impact
[Diagram labels: Operational / Application / System / Site / Regional]
NetApp Storage Overview
FlexClone
FlexClone is built on Snapshot technology
[Diagram: within an aggregate, a FlexVol (the parent) has a snapshot, and a FlexVol clone is created from that snapshot]
Flexible clones can be layered (clones of clones)
[Diagram: within one aggregate, clones (Clone1, Clone2, Clone3, CloneA) are created from snapshots (snap1, snap2, SnapA) of a parent volume, and a clone can itself be cloned]
Test and development environments no longer waste any space on duplicated data
Production database 100GB
Mirror copy 100GB
Development copies 30GB
Testing copies 30GB
Total: 260GB
Saves more than 67% of the storage space. Independent online test/dev environments are created instantly, without affecting the production storage environment at all, and large numbers of additional, independent test/dev environments can be produced.
[Diagram: a production volume and its mirrored copy, with clones Dev 1-3 and Test 1-3 created from the mirror. Assumption: up to 10% change in data in the test and dev environments]
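A rough 7-mode sketch of creating one of these clones (assumes the FlexClone license is installed; the volume and snapshot names are hypothetical):

netapp1> snap create dbvol dev_baseline
(snapshot the production or mirrored volume to use as the clone's base)
netapp1> vol clone create dev1 -s none -b dbvol dev_baseline
(create the writable clone "dev1" backed by that snapshot; new space is consumed only as blocks diverge)
netapp1> vol clone split start dev1
(optional: later split the clone off into a fully independent volume)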
A practical example of FlexClone in use
Creating data-copy clones, 3 per week for 5 weeks = 15 clones:
- Storage-industry approach: 2 days per copy, a production-to-dev/test capacity ratio of about 1:5 — time-consuming, wastes money, and application upgrades carry high risk
- NetApp approach: 8 minutes per copy, a production-to-dev/test capacity ratio of about 1:1+ — finished instantly, no wasted money, and application quality is higher
"All developers can instantly obtain a copy of the latest production data, delivering higher-quality applications faster and at lower cost."
Industry-leading benchmark. Source: Veritest, 5/05. NetApp FAS3020 vs. EMC CLARiiON CX500
The DR architecture makes remote-site failover and test/dev much simpler
[Diagram: the primary site's application server pool runs production/transaction workloads on FC disks; data is replicated to a remote DR site on ATA disks. At the DR site, clones are created in a matter of seconds and presented over iSCSI to a DR server pool for DR and test & dev.]
A-SIS
FAS Deduplication: Function
[Diagram: general data plus metadata in a flexible volume passes through the deduplication process, producing deduplicated (single instance) data plus metadata in the flexible volume]
FAS Deduplication: Commands — 'sis' == single instance storage command
License it: license add <dedup_license>
Turn it on: sis on <vol>
[Deduplicate existing data]: sis start -s <vol>
Schedule when to deduplicate, or run manually: sis config [-s schedule] <vol>  /  sis start <vol>
Check out what's happening: sis status [-l] <vol>
See the savings!: df -s <vol>
FAS Deduplication: Progress Messages and Stages
netapp1> sis status (reported as: Path  State  Status  Progress)
- Gathering:      /vol/vol5  Enabled  Active  25 MB Scanned
- Sorting:        /vol/vol5  Enabled  Active  25 MB Searched
- Deduplicating:  /vol/vol5  Enabled  Active  40MB (20%) done
- Checking:       /vol/vol5  Enabled  Active  30MB Verified   OR   10% Merged
Complete:
netapp1> df -s /vol/vol5
Filesystem     used       saved     %saved
/vol/vol5/     24072140   9316052   28%
Typical Space Savings Results
in archival and primary storage environments:
Video Surveillance 1%
PACS 5%
Movies 7%
Email Archive 8%
ISOs and PSTs 16%
Oil & Gas 30%
Web & MS Office 30-45%
Home Dirs 30-50%
Software Archives 48%
Tech Pubs archive 52%
SQL 70%
VMware 40~60%
In data backup environments, space savings can be much higher. For instance, tests with Commvault Galaxy provided a 20:1 space reduction over time, assuming daily full backups with 2% daily file modification rate. (Reference: http://www.netapp.com/news/press/news_rel_20070515)
Initial Limitations, Caveats
Deduplicates the active file system, not Snapshots
FlexVols only
No SnapLock
No SnapVault
No V-Series
No vfiler
No space deduplication when NDMP to tape
Dedupe metadata (change log files, fingerprint file, etc.) is not de-duped
Space savings are dependent upon the dataset
High Availability
SyncMirror
Synchronous Mirroring
What is SyncMirror?
Two synchronous mirrors (plexes) of a filesystem within a single volume
Both plexes are updated synchronously on writes. Can be described as RAID 4 + 1
No single point of failure in hardware will cause a mirrored volume to fail except for the filer head itself.
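A hedged sketch of enabling SyncMirror for an existing aggregate on a 7-mode system (requires the SyncMirror license and enough spare disks in the opposite disk pool; the aggregate name is hypothetical):

netapp1> aggr mirror aggr1
(add a second plex to aggr1 from spares in the other pool; from then on both plexes are written synchronously)
netapp1> aggr status -r aggr1
(shows /aggr1/plex0 and /aggr1/plex1, each with its own RAID groups, matching the plex layout shown below)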
[Diagram: volume /vol0 with two plexes — /vol0/plex0 and /vol0/plex1 — each containing RAID group 0 and RAID group 1 (one parity disk plus data disks per group); the front end connects to the LAN/SAN and the back end reaches the disk shelves over independent FC loops]
Stretch MetroCluster – Campus DR Protection
[Diagram: Site #1 and Site #2, up to 300 meters apart, connected over the LAN, with data (A, B) replicated so that each site holds a copy]
What: replicate synchronously; upon disaster, fail over to the partner filer at the remote site to access the replicated data
Benefits: no single point of failure; no data loss; fast data recovery
Limitations: distance
[Diagram: volumes X and Y, each mirrored (X-mirror, Y-mirror) across the FC connection between the sites]
MetroCluster is a unique, cost-effective synchronous replication solution for combined high availability and disaster recovery within a campus or metro area
Fabric MetroCluster – Metropolitan DR Protection
[Diagram: Building A and Building B, up to 100 km apart, connected by the LAN and a cluster interconnect over dark fibre; vol X and vol Y' at one site are mirrored as vol X' and vol Y at the other]
Benefits: disaster protection, complete redundancy, up-to-date mirror, site failover
Fabric MetroCluster (2) – Metropolitan DR Protection
[Diagram: the same layout with DWDM equipment at each end of the dark fibre, up to 100 km apart]
Benefits: disaster protection, complete redundancy, up-to-date mirror, site failover
Disaster Protection Scenarios
[Diagram: disaster-protection scenarios by distance from the primary data center]
- Within the data center – Local High Availability: component failures, single system failures
- Campus distances – Campus Protection: human error, HVAC failures, power failures, building fire, architectural failures, planned maintenance downtime
- WAN distances – Regional Disaster Protection: electric grid failures, natural disasters (floods, hurricanes, earthquakes)
NetApp DR Solutions Portfolio
[Diagram: DR solutions by distance from the primary data center]
- Within the data center: Clustered Failover (CFO) – high system protection, cost-effective zero-RPO protection
- Campus distances: MetroCluster (Stretch) – cost-effective zero-RPO protection
- Metro distances: MetroCluster (Fabric) – cost-effective zero-RPO protection
- WAN distances: Sync SnapMirror – most robust zero-RPO protection; Async SnapMirror – most cost-effective, with an RPO from 10 min. to 1 day
Disk Failure Protection Solution Portfolio
Classes of failure scenarios, in order of increasing cost of protection:
- RAID 4 (with checksums): survives the failure of 1 disk
- RAID-DP: survives the failure of any 2 disks
- RAID 4 + SyncMirror: survives the failure of any 3 disks
- RAID-DP + SyncMirror: survives the failure of any 5 disks
NetApp Storage Overview
Cluster
Overview of High Availability
Cluster: A pair of standard NetApp controllers (nodes) that share access to separately owned sets of disks, in a shared-nothing architecture
Also referred to as redundant controllers
Logical configuration is active-active. A pseudo active-passive config is achieved by owning all disks under one controller (except for a boot disk set under the partner controller).
Dual-ported disks are cross-connected between controllers via independent Fibre Channel links
High speed interconnect between controllers acts as a "heartbeat" link and also as the path for the NVRAM mirror
Provides high availability in the presence of catastrophic hardware failures
High Availability Architecture
[Diagram: Controller A and Controller B attached to the LAN/SAN and joined by a high-speed interconnect; each controller has an active FC path to its own disks and a standby FC path to its partner's disks (shared-nothing model)]
Mirrored NVRAM
[Diagram: Controller A and Controller B on the LAN/SAN, each with its NVRAM mirrored to the partner]
When a client request is received:
• The controller logs it in its local NVRAM
• The log entry is also synchronously copied to the partner's NVRAM
• An acknowledgement is returned to the client
Failover mode
During the failover process:
• A virtual instance of the partner is created on the surviving node
• The partner's IP addresses (or WWPNs) are set on standby NICs (or HBAs), or aliased on top of existing NICs (or HBAs)
• The surviving node takes over the partner's disks and replays its intended NVRAM log entries
[Diagram: Controller A has failed; Controller B hosts a virtual instance of Controller A and presents Controller A's IP address or WWPN to the LAN/SAN]
Takeover and Giveback
Upon detection of failure, failover takes 40~120sec
On the takeover controller, data service is never impacted and is fully available during the entire failover operation.
On the failed controller, both takeover and giveback are nearly invisible to clients.
NFS cares only a little (typically stateless connections); CIFS cares more (connection-based, caches, etc.); for block protocols (FC, iSCSI) it depends on the tolerance of the application —
host HBAs are typically configured for a (worst-case) 2-minute timeout
Takeover is manual, automatic, or negotiated
Giveback is manual or automatic
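For reference, a hedged sketch of the 7-mode commands involved (cluster licenses assumed):

netapp1> cf status
(shows whether the partner is up and whether takeover is possible)
netapp1> cf takeover
(manually take over the partner's identity, disks, and NVRAM log; also used before planned maintenance)
netapp1> cf giveback
(return the resources to the partner once it has been repaired and has rebooted)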
High Availability Architecture
Components required: a second controller, a cluster interconnect kit, 4 crossover FC cables, 2 cluster licenses
Cluster Interconnect Hardware
• Open Standards Infiniband link
• Fast (10 Gbit/sec)
• Redundant Connect
• Up to 200m between controllers
• Integrated into NVRAM5 card
Filer Fundamental Introduction
FAS2050 Spec Introduction
FAS2050 Front View
[Photo: FAS2050 front view, showing the light display and the A and B labels]
FAS2050 Front Open View
[Photo: FAS2050 front open view — internal drive bays labeled 0-3, 4-7, 8-11, 12-15, 16-19, 20-23, populated with 450GB SAS drives]
FAS2050A Rear View
[Photo: FAS2050A rear view — two controllers (A and B), each with a console port, onboard FC ports 0a/0b, an RLM port, and 2 GbE ports (e0a, e0b); redundant power supplies. Hot-swappable modules are labeled "REPLACE THIS ITEM WITHIN 2 MINUTES OF REMOVAL".]
FAS2020 Front View
[Photo: FAS2020 front view, showing the light display and the A and B labels]
FAS2020 Front Open View
[Photo: FAS2020 front open view — internal drive bays numbered 0-11, populated with 450GB SAS drives]
FAS2020 Rear View
[Photo: FAS2020 rear view — each controller has a console port, onboard FC ports 0a/0b, an RLM port (10/100), and 2 GbE ports e0a/e0b (10/100/1000); redundant power supplies]
FAS2050A Specifications
Filer Specifications                                 FAS2050A
Max. Raw Capacity                                    104TB
Max. Number of Disk Drives (Internal + External)     104
Max. Volume/Aggregate Size                           16TB
ECC Memory                                           4GB
Nonvolatile Memory                                   512MB
Ethernet 10/100/1000 Copper                          4
Onboard Fibre Channel                                2 (1, 2, or 4Gb)
FAS2020 Specifications
Filer Specifications                                 FAS2020
Max. Raw Capacity                                    68TB
Max. Number of Disk Drives (Internal + External)     68
Max. Volume/Aggregate Size                           16TB
ECC Memory                                           1GB
Nonvolatile Memory                                   128MB
Ethernet 10/100/1000 Copper                          2
Onboard Fibre Channel                                2 (1, 2, or 4Gb)
Filer Fundamental Introduction
FAS2050 & 2020 Fundamental Setup
FAS2000 Fundamental Setup
1. Setup
2. How to add disk to aggr
3. Snapshot & SnapRestore Demo
FAS2000 Fundamental Setup
Setup
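For orientation (details vary by Data ONTAP release), the initial configuration is driven by the setup command on the console; the line below is only a sketch:

netapp1> setup
(walks through the initial configuration: hostname, IP address and netmask for each network interface, default gateway, DNS, time zone, and the administrative password; the answers are written to files such as /etc/rc and /etc/hosts, and setup can be re-run later to change them)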
How to add disk to aggr
Add Disk to Aggr0
How to add disk to aggr
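The demo corresponds roughly to these 7-mode commands (the aggregate name and disk count are placeholders):

netapp1> aggr status -s
(list the available spare disks)
netapp1> aggr add aggr0 2
(add two spare disks to aggr0; the extra capacity is available immediately, with no rebuild)
netapp1> df -A aggr0
(confirm the new aggregate size)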
FAS2000 Fundamental Setup
Snapshot & SnapRestore Demo
FAS2000 Fundamental Setup
Snapshot & SnapRestore Demo
Filer Configuration
(Part 2) Filer Configuration
1. NetApp FilerView Management
2. NetApp FC & SnapDrive Configuration
3. NetApp SyncMirror Configuration
NetApp FilerView Management
FilerView
http://storage_ip/na_admin/
Filer Configuration
(Part 2) Filer Configuration
1. NetApp FilerView Management
2. NetApp FC & SnapDrive Configuration
3. NetApp SyncMirror Configuration
Fibre Channel
About Fibre Channel
The Windows host can use the Fibre Channel Protocol for SCSI to access data on storage systems that run supported versions of Data ONTAP software. Fibre Channel connections require one or more supported host bus adapters (HBAs) in the Windows host.
The storage system is a Fibre Channel (FC) target device. The Fibre Channel service must be licensed and running on the storage system.
Each host HBA port is an initiator that uses FC to access logical units of storage (LUNs) on a storage system to store and retrieve data.
On the Windows host, a worldwide port name (WWPN) identifies each port on an HBA. The host WWPNs are used as identifiers when creating initiator groups on a storage system. An initiator group permits host access to specific LUNs on a storage system.
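A hedged end-to-end sketch of the pieces described above on a 7-mode storage system (the WWPN, LUN size, and all names are placeholders):

netapp1> license add <fcp_license>
netapp1> fcp start
(license and start the FC target service on the storage system)
netapp1> igroup create -f -t windows win_host1 10:00:00:00:c9:xx:xx:xx
(create an FCP initiator group containing the Windows host's HBA WWPN)
netapp1> lun create -s 100g -t windows /vol/vol1/lun0
(create a 100 GB LUN with the Windows OS type)
netapp1> lun map /vol/vol1/lun0 win_host1 0
(map the LUN to the initiator group as LUN ID 0 so that host can discover and use it)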
NetApp SnapDrive Configuration
SnapDrive
SnapDrive software is an optional management package for Microsoft Windows. SnapDrive can simplify some of the management and data protection tasks associated with iSCSI and FCP storage.
SnapDrive for Windows software integrates with the Windows Volume Manager so that storage systems can serve as storage devices for application data in Windows 2000 Server and Windows Server 2003 environments.
MPIO concepts
Multipath I/O (MPIO) solutions use multiple physical paths between the storage system and the Windows host. If one or more of the components that make up a path fails, the MPIO system switches I/O to other paths so that applications can still access their data.
If you have multiple paths between a storage system and a Windows host computer, you must have some type of MPIO software so that the Windows disk manager sees all of the paths as a single virtual disk. Without MPIO software, the disk manager treats each path as a separate disk, which can corrupt the data on the virtual disk.
Q & A