TRANSCRIPT
WAFL Overview
NetApp Spotlight Series
© 2008 NetApp. All rights reserved.
WAFL – No pre-allocated locations (data and metadata blocks are treated equally). Writes go to nearest available free block.
Writing to nearest available free block reduces disk seeking (the #1 performance challenge when using disks).
WAFL: Write Anywhere File Layout, a Filesystem for Improved Productivity
Berkeley Fast File System/Veritas File System/NTFS/etc. – Writes to pre-allocated locations (data vs. metadata)
[Diagram: pre-allocated 1-2 MB cylinder groups]
Write Anywhere? Why do we do this?
“Write anywhere” does not mean that we literally write anywhere, to just any random block.
WAFL == Write Anywhere File Layout
“Write anywhere” means that we can write anywhere, so we get to choose where we write.
And we choose carefully and efficiently.
WAFL Architecture Overview
WAFL uses integrated RAID4
RAID4 is similar to the better-known RAID5:
– RAID5: parity is distributed across all disks in the RAID group
– RAID4: parity is contained in a single disk in the RAID group
Tradeoffs with the single parity disk RAID4 model:
– CON: The parity disk becomes the ‘hot spot’ or bottleneck in the RAID group, because every write requires an XOR parity update on that single disk.
– PRO: The RAID group can be instantly expanded by adding (pre-zeroed) data disks, because no parity re-calculation occurs.
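The math behind both tradeoffs can be sketched in a few lines of Python (illustrative only; block contents are made up):

```python
from functools import reduce

def parity(blocks):
    """XOR the data blocks byte-wise to form the RAID4 parity block."""
    return bytes(reduce(lambda a, b: a ^ b, bs) for bs in zip(*blocks))

# A RAID4 stripe with three data disks and one dedicated parity disk.
stripe = [b"\x0f\x0f", b"\xf0\xf0", b"\xaa\xaa"]
p = parity(stripe)

# PRO: adding a pre-zeroed data disk leaves parity unchanged (x ^ 0 == x),
# so the group expands with no parity re-calculation.
assert parity(stripe + [b"\x00\x00"]) == p

# Recovery: a lost data disk is the XOR of the survivors and the parity.
assert parity([stripe[0], stripe[2], p]) == stripe[1]
```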
WAFL eliminates the parity disk bottleneck
WAFL overcomes the ‘classic’ parity disk bottleneck through flexible write allocation policies:
– Writes any filesystem block (data and metadata) to any disk location*
– New data does not overwrite old data
– Allocates disk space for many client-write operations at once in a single new RAID-stripe write (no parity re-calculations)
– Writes to stripes that are near each other
– Writes blocks to disk in any order
* except root inode
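The full-stripe idea in the list above can be sketched as follows (a hypothetical `StripeWriter`, not NetApp code): by batching client writes until a whole stripe is filled, parity is computed once per stripe instead of read-modify-written per block.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR blocks byte-wise to form a parity block."""
    return bytes(reduce(lambda a, b: a ^ b, bs) for bs in zip(*blocks))

class StripeWriter:
    """Batch incoming blocks into full RAID4 stripes (illustrative sketch)."""
    def __init__(self, data_disks):
        self.data_disks = data_disks
        self.pending = []          # blocks waiting for a full stripe
        self.parity_writes = 0     # parity-disk writes actually issued

    def write(self, block):
        self.pending.append(block)
        if len(self.pending) == self.data_disks:
            self._flush_stripe()

    def _flush_stripe(self):
        parity = xor_blocks(self.pending)   # one XOR pass per stripe
        self.parity_writes += 1             # one parity write per stripe
        self.pending = []

w = StripeWriter(data_disks=4)
for _ in range(8):                 # 8 one-block client writes...
    w.write(b"\x01" * 4096)
assert w.parity_writes == 2        # ...cost only 2 parity-disk writes
```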
Result: Minimal seeks and no bottleneck
RAID4 with a typical file system: requests are scattered across the disks, causing the parity disk to seek excessively.
RAID4 with WAFL: WAFL writes blocks to stripes near each other, eliminating long seeks on the parity disk.
WAFL Combined with NVRAM
WAFL uses NVRAM “consistency points” (NetApp’s flavor of journalling), thus assuring filesystem integrity and fast reboots.
A CP flush to disk occurs once every 10 seconds, or sooner when NVRAM reaches half full.
NVRAM placement is at the file system operation level, not at the (more typical) block level. This assures self-consistent CP flushes to disk.
No fsck!
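The stated flush trigger reduces to a one-line policy (a sketch; the function and parameter names are ours):

```python
def should_flush_cp(seconds_since_cp, nvram_used, nvram_capacity):
    """Trigger a consistency point every 10 seconds,
    or sooner once NVRAM reaches half full."""
    return seconds_since_cp >= 10 or nvram_used >= nvram_capacity / 2

assert should_flush_cp(11, 0, 1024)          # timer expired
assert should_flush_cp(2, 512, 1024)         # NVRAM half full
assert not should_flush_cp(2, 100, 1024)     # neither condition yet
```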
NVRAM
NVRAM placement is key!

General-purpose NV-RAM (safe-stores the disk blocks):
NFS or CIFS → TCP/ or UDP/IP → File System → Semantic Write Alloc → NV-RAM → Disk Driver

NetApp NV-RAM (safe-stores the FS operation):
NFS or CIFS → TCP/ or UDP/IP → NV-RAM → File System → Semantic Write Alloc → Disk Driver
NVRAM and memory – key points
Main memory is the write cache
The NVRAM is not the write cache
– It is a redo log
– Once written, we never even look at it again
– Unless a controller fault occurs before a CP is complete, and then we redo the operations in it
“NVRAM-limited performance” is a myth
– Write throughput is limited by the disks or the controller
Redo-logging is very space efficient
– Record only changed data
– Big win for small writes
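The points above can be sketched as a toy model (hypothetical names, nothing like the real implementation): main memory is the write cache, NVRAM is an operation-level redo log that is only ever read back after a fault.

```python
class Filer:
    """Sketch: memory is the write cache; NVRAM is a redo log of
    file-system operations, replayed only after a controller fault."""
    def __init__(self):
        self.memory = {}       # in-memory state (the write cache)
        self.nvram_log = []    # redo log of operations since the last CP
        self.disk = {}         # state as of the last consistency point

    def apply(self, path, data):
        self.memory[path] = data              # update the cache
        self.nvram_log.append((path, data))   # log the operation, not blocks

    def consistency_point(self):
        self.disk = dict(self.memory)         # self-consistent flush
        self.nvram_log.clear()                # log is never read on this path

    def recover(self):
        """After a fault: start from the last CP and redo the logged ops."""
        self.memory = dict(self.disk)
        for path, data in self.nvram_log:
            self.memory[path] = data

f = Filer()
f.apply("/a", b"one")
f.consistency_point()
f.apply("/b", b"two")     # fault before the next CP...
f.recover()               # ...redo replays /b on top of the last CP
assert f.memory == {"/a": b"one", "/b": b"two"}
```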
Seek Example in a SAN environment
Assume 4K disk blocks, 2.5 msec for one seek+rotate, and an ideal 200MB/sec FC path.
200MB/sec FC bandwidth x .0025sec = .5MB worth of data blocks not sent on the channel during that seek.
.5MB x 1 block/4KB = 128 blocks not sent
Therefore a 2.5ms seek for just 1 block equates to a 128 block penalty
Conclusion: one seek every 128 blocks or fewer (~1% of blocks) wastes at least half of your FC bandwidth!
[Timeline: 128 blocks, (seek 1 block), 128 blocks, (seek 1 block)]
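The arithmetic above, worked in Python (binary MB/KB, so the numbers come out exactly as on the slide):

```python
fc_bw = 200 * 2**20        # 200 MB/s Fibre Channel path, in bytes/sec
seek  = 2.5e-3             # one seek + rotate, seconds
block = 4 * 2**10          # 4 KB block, bytes

blocks_lost = int(fc_bw * seek / block)     # blocks NOT sent during one seek
assert blocks_lost == 128

# One seek per 128 blocks transferred: half the channel time is wasted.
useful_fraction = 128 / (128 + blocks_lost)
assert useful_fraction == 0.5
```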
The Protocol Overhead issue
Isn’t NAS slower than local disk?
• Yes, we have TCP/IP overhead.
• Yes, we have double-buffering overhead.
• Yes, we might well have <obscure performance gotcha>.
• Despite all that, we're able to improve performance, even with databases (now over 40% of NetApp customer base).
• Clearly, we're doing *something* sufficiently right to make up for the overhead.
The Protocol Overhead issue
• TCP/IP might seem to be a massive overhead, but passing packets up and down the stack turns out to consume only microseconds per request.
(For example: 1 GHz CPU speed == 1 nanosecond clock cycle. So 1000 extra CPU cycles for the TCP stack = 1000 × 1 ns = 1 microsecond. Keep the timing in perspective with today’s CPU speeds!)
• Eliminating head seeks, which WAFL does better than any other file system thanks to its full integration with RAID, saves whole milliseconds, i.e., roughly a 1000x difference.
TCP overhead is small by comparison.
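Putting the two timescales side by side, using the slide's own numbers (integer nanoseconds, so the ratio is exact):

```python
cycle_ns = 1                 # 1 GHz CPU -> 1 ns per clock cycle
tcp_ns   = 1000 * cycle_ns   # 1000 extra cycles for the TCP stack = 1 us
seek_ns  = 2_500_000         # one 2.5 ms head seek, in nanoseconds

# One avoided seek pays for ~2500 trips through the TCP stack.
assert seek_ns // tcp_ns == 2500
```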
Superior performance vs. Competition
Summary
WAFL extracts more ops/sec and lower latency from a single drive thanks to minimal seeking.
This equates to faster overall performance.
WAFL’s “anywhere” property makes NetApp’s RAID4 the performance and scalability winner.
The fastest file system in the world with RAID enabled.