Ayse Ozturk's presentation (0.2 MB), posted on 05-Jul-2015


The Design Of Efficient Initialization And Crash Recovery For Log-based File Systems Over Flash Memory

Chin-Hsien Wu and Tei-Wei Kuo, National Taiwan University; Li-Pin Chang, National Chiao-Tung University

Flash technology considerations

- Performance: quick mounting/unmounting, effective garbage collection
- Reliability: efficient roll-back, initialization and crash recovery

Why Flash Memory?

- Non-volatile
- Shock-resistant
- Power-economic
- Affordable capacity
- Suitable for mobile devices, which need low energy consumption and good vibration tolerance

Features:

- Write-once: updates to existing data on a page are possible only after an erase operation
- No in-place updates
- Out-of-place updates are preferred for performance and endurance
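Out-of-place updating can be sketched as a tiny model, assuming a page-level mapping and a "dead" flag per physical page; the class and field names are illustrative, not from the paper:

```python
# Minimal model of out-of-place updates on flash: a page is never
# rewritten in place; the new data goes to fresh space and the old
# copy is marked dead, to be reclaimed later by garbage collection.

class FlashModel:
    def __init__(self, num_pages):
        self.pages = [None] * num_pages      # physical page contents
        self.dead = [False] * num_pages      # dead = superseded data
        self.next_free = 0                   # append-only write pointer
        self.mapping = {}                    # logical page -> physical page

    def write(self, logical, data):
        if self.next_free >= len(self.pages):
            raise RuntimeError("no free pages; garbage collection needed")
        old = self.mapping.get(logical)
        if old is not None:
            self.dead[old] = True            # invalidate the previous copy
        self.pages[self.next_free] = data    # write to fresh space
        self.mapping[logical] = self.next_free
        self.next_free += 1

    def read(self, logical):
        return self.pages[self.mapping[logical]]

flash = FlashModel(8)
flash.write(0, "v1")
flash.write(0, "v2")          # update goes to a new physical page
assert flash.read(0) == "v2"
assert flash.dead[0] is True  # first copy is now dead
```

The dead pages accumulated this way are exactly what the block-recycling (garbage-collection) policies discussed later must reclaim.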

Different implementations

- NFS (Native File System): JFFS/2, LFM, YAFFS/2. Access to files is by variable-sized records keyed on (file-id, file-offset) pairs.
- Block-emulation approach: FTL/Lite, CompactFlash, SmartMedia. Access to files is by indexes of LBA (Logical Block Addresses) and RAM-resident address translation tables.
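The RAM-resident translation table of the block-emulation approach can be sketched as follows; it assumes each page's spare area records the LBA it holds plus a version counter so newer copies win, which is an illustrative layout, not the exact on-flash format:

```python
# Sketch of mount-time reconstruction in the block-emulation approach:
# the RAM-resident table mapping LBA -> physical page is rebuilt by
# scanning every page's spare area. This full scan is exactly the
# costly step whose avoidance motivates the paper.

def rebuild_translation_table(spare_areas):
    """spare_areas: list indexed by physical page; each entry is
    (lba, version) or None for a free page."""
    table = {}                               # lba -> (physical_page, version)
    for phys, spare in enumerate(spare_areas):
        if spare is None:                    # free page, nothing recorded
            continue
        lba, version = spare
        if lba not in table or version > table[lba][1]:
            table[lba] = (phys, version)     # keep the newest copy
    return {lba: phys for lba, (phys, _) in table.items()}

spares = [(7, 0), None, (7, 1), (3, 0)]      # LBA 7 was written twice
assert rebuild_translation_table(spares) == {7: 2, 3: 3}
```

The scan cost is linear in the number of pages, which is why mounting time grows with flash capacity.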

Mounting time:

- The most important design consideration
- Mounting requires scanning all spare areas of pages to reconstruct house-keeping data structures in main memory
- This is time-consuming and, as flash capacities grow, will soon be impractical

Alternative way

- Take a snapshot of the entire housekeeping data structure to speed up initialization
- But what happens after a crash or improper unmounting?

Proposed Solution

- A log-management scheme based on a Log-Record Manager (LRM) and a Logger
- For initialization and crash recovery, scan check regions instead of all spare areas

LRM

- Collects log-records in main memory for writes/updates to files
- Merges/deletes log-records as necessary
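The LRM's merge step can be sketched as follows, assuming each log-record is a (file-id, offset, size) triple over a contiguous byte range; the record format here is an assumption for illustration:

```python
# Sketch of log-record merging in the LRM: records that describe
# adjacent byte ranges of the same file are coalesced in main memory,
# so fewer records need to be committed to flash later.

def merge_log_records(records):
    """records: list of (file_id, offset, size) triples."""
    merged = []
    for file_id, offset, size in sorted(records):
        if merged and merged[-1][0] == file_id \
                and merged[-1][1] + merged[-1][2] == offset:
            last = merged.pop()                 # extend the adjacent range
            merged.append((file_id, last[1], last[2] + size))
        else:
            merged.append((file_id, offset, size))
    return merged

recs = [(1, 0, 512), (1, 512, 512), (2, 0, 256), (1, 2048, 512)]
assert merge_log_records(recs) == [(1, 0, 1024), (1, 2048, 512), (2, 0, 256)]
```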

Logger

- Commit: write log-records to flash memory through the check-region data structures

Check regions

- A check region consists of a number of log segments and a log-segment directory
- Each log segment is the size of a block

Log-segment directory

Organization of check regions

(For a visual representation, see the paper.)

Related work

- Effective garbage collection: a cost-benefit policy driven by a value-driven heuristic function
- Periodically move live data among blocks to even out erase counts
- Adopt SRAM as write buffers, with several cleaning policies
- An interrupt-emulation mechanism to reduce the interference of I/O on user tasks
- Layers for index processing; a space-efficient search-tree-like structure for address translation
- Write in-memory file-system metadata (inodes and cache) to flash during unmounting; if a crash occurs, all spare areas still need to be scanned. This is the closest work to the authors' methodology.
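One classic form of such a value-driven cost-benefit score, from the log-structured cleaning literature (a standard formulation, not necessarily the exact function of each cited work), weighs the reclaimable space against the cost of copying live data:

```python
# Classic cost-benefit cleaning score: benefit/cost = age*(1-u)/(2u),
# where u is the fraction of live data in a block and age is the time
# since its last modification. Reading the block costs u and writing
# the live data back costs u again (hence 2u); the space reclaimed is
# 1 - u. Older, emptier blocks score higher and are picked as victims.

def cost_benefit_score(utilization, age):
    if utilization == 0.0:
        return float("inf")       # entirely dead block: free to reclaim
    return age * (1.0 - utilization) / (2.0 * utilization)

# An old, mostly-dead block beats a young, mostly-live one.
assert cost_benefit_score(0.2, 100) > cost_benefit_score(0.9, 10)
```

The age factor is what keeps cold, lightly-used blocks from being ignored forever, which also helps even out erase counts.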

Problem Formulation

Characteristics:

- NAND flash; no overwrites: new data is written to free space, invalidating the previous copy, which is then called dead
- The block-recycling policy should minimize the overhead caused by copying live pages
- A block tolerates at most about one million erase cycles; beyond that, write errors become frequent
- Wear levelling: erase blocks evenly to increase lifetime
- Wear levelling causes too much overhead if updates show strong locality

Initialization and Crash Recovery

Logical address space:

- By LBA (Logical Block Address) index for block-device emulation, via RAM-resident translation tables; at mount time, the tables are rebuilt by scanning all spare areas
- By (file-id, offset) pairs for NFS: variable-sized records describe writes/updates; a hierarchical tree data structure in main memory reflects the updates; a scan constructs the logical view

Problem

- Scan time is intolerable to many users
- A snapshot does not work well in case of a crash, and its large file size causes a lengthy shutdown procedure

Solution

- The solution is presented for NFS, but it could be extended to the block-emulation type
- NFS writes and updates are sequential in style (by appending)
- BMI (backup memory image): built by scanning spare areas
- PMI (primary memory image): built by scanning check regions and kept in main memory
- Additional log-records are allowed as metadata describing write/update operations on a contiguous segment, given by a starting offset and a size
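Building the PMI from such records can be sketched as a replay loop; the record layout, including a physical-location field, is an illustrative assumption:

```python
# Sketch of building the PMI at mount time: instead of scanning every
# spare area, the scanner replays the log-records found in the check
# regions. Each record describes a write/update to one contiguous
# segment of a file: (file_id, starting offset, size, physical page).

def build_pmi(log_records):
    """Replay records in commit order into a per-file extent map."""
    pmi = {}                                  # file_id -> {offset: (phys, size)}
    for file_id, offset, size, phys in log_records:
        extents = pmi.setdefault(file_id, {})
        extents[offset] = (phys, size)        # later records supersede earlier
    return pmi

records = [(1, 0, 512, 10), (1, 0, 512, 42), (2, 1024, 256, 11)]
pmi = build_pmi(records)
assert pmi[1][0] == (42, 512)   # newest copy of file 1, offset 0
assert pmi[2][1024] == (11, 256)
```

Because records are replayed in commit order, the last record for any segment wins, mirroring the out-of-place update semantics of the flash itself.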

System architecture

Efficient Crash Recovery

- Writes must be done in appending fashion, even inside a block
- Committing log-records in order is required by the mechanism
- Version tags track the recency of data
- In case of a crash, the most recent consistent check region is used for recovery
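Selecting the recovery point can be sketched as follows; the per-region checksum used here is an assumption standing in for whatever consistency test the real system applies:

```python
# Sketch of recovery-point selection after a crash: each committed
# check region carries a version tag; among the regions whose contents
# pass a consistency check, the one with the highest version is used.

import zlib

def pick_recovery_region(regions):
    """regions: list of (version, payload_bytes, stored_checksum)."""
    best = None
    for version, payload, checksum in regions:
        consistent = zlib.crc32(payload) == checksum
        if consistent and (best is None or version > best[0]):
            best = (version, payload)
    return best

ok = b"log segment data"
regions = [
    (1, ok, zlib.crc32(ok)),
    (2, ok, zlib.crc32(ok)),
    (3, b"torn", zlib.crc32(b"torn") ^ 1),   # interrupted commit: mismatch
]
assert pick_recovery_region(regions) == (2, ok)
```

A commit interrupted mid-write fails the consistency check, so recovery falls back to the newest fully committed region.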

Intelligent scan

- If the first page of a block is free, skip the rest of the block: it must also be free, due to the in-order commitment policy
- If metadata match some check region, skip that part, since it already exists in the region
- Skipping parts intelligently in this way reduces crash-recovery time and improves performance
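The first skipping rule can be sketched as follows, assuming a flat page array with four pages per block; the layout is illustrative:

```python
# Sketch of the intelligent scan: because commits are strictly
# appending and in order, a block whose first page is free cannot hold
# data anywhere else, so the scanner skips the whole block unread.

PAGES_PER_BLOCK = 4

def intelligent_scan(pages):
    """pages: flat list, None = free. Returns indices actually read."""
    read = []
    for start in range(0, len(pages), PAGES_PER_BLOCK):
        if pages[start] is None:         # first page free => block is free
            continue                     # skip without reading its pages
        for i in range(start, start + PAGES_PER_BLOCK):
            read.append(i)
            if pages[i] is None:         # end of appended data in this block
                break
    return read

pages = ["a", "b", None, None,           # partially written block
         None, None, None, None,         # untouched block, skipped entirely
         "c", "d", "e", "f"]             # full block
assert intelligent_scan(pages) == [0, 1, 2, 8, 9, 10, 11]
```

On a lightly used device most blocks are free, so the scan cost drops from the number of pages toward the number of blocks.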

Performance

Measurements over different metrics, write ratios, buffer sizes, and crash-recovery scenarios show that the solution is efficient and performs well across a range of cases.

Conclusion

- Reconstruction via check regions reduces crash-recovery time
- Initialization time is improved with less overhead
- The log-management scheme works well

Future Work

- Further performance enhancement
- Even less overhead
