2008-07-07 17:43:32


Reposting an article from Jeff Bonwick's blog. Jeff is the author of the slab allocator and of ZFS; I imagine many of you have heard his name.
I know nothing about storage, but this piece is genuinely interesting, and it lets even outsiders like us in on a lot of fun details.

original URL:
-------------
http://blogs.sun.com/bonwick/en_US/entry/raid_z


RAID-Z

The original promise of RAID (Redundant Arrays of Inexpensive Disks) was that it would provide fast, reliable storage using cheap disks. The key point was cheap; yet somehow we ended up with expensive storage anyway. Why?

RAID-5 (and other data/parity schemes such as RAID-4, RAID-6, even-odd, and Row Diagonal Parity) never quite delivered on the RAID promise -- and can't -- due to a fatal flaw known as the RAID-5 write hole. Whenever you update the data in a RAID stripe you must also update the parity, so that all disks XOR to zero -- it's that equation that allows you to reconstruct data when a disk fails. The problem is that there's no way to update two or more disks atomically, so RAID stripes can become damaged during a crash or power outage.
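To make the "all disks XOR to zero" equation concrete, here is a tiny Python sketch (purely illustrative, not code from any real RAID implementation) showing how the parity block is computed and how a lost disk is rebuilt from the survivors:

    def xor_blocks(*blocks: bytes) -> bytes:
        """XOR equal-length blocks together, byte by byte."""
        out = bytearray(len(blocks[0]))
        for blk in blocks:
            for i, b in enumerate(blk):
                out[i] ^= b
        return bytes(out)

    # A stripe of three data blocks plus one parity block.
    d0, d1, d2 = b"AAAA", b"BBBB", b"CCCC"
    parity = xor_blocks(d0, d1, d2)

    # The defining invariant: all blocks in the stripe XOR to zero.
    assert xor_blocks(d0, d1, d2, parity) == b"\x00" * 4

    # If disk 1 fails, its contents come back from the surviving blocks.
    assert xor_blocks(d0, d2, parity) == d1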

To see this, suppose you lose power after writing a data block but before writing the corresponding parity block. Now the data and parity for that stripe are inconsistent, and they'll remain inconsistent forever (unless you happen to overwrite the old data with a full-stripe write at some point). Therefore, if a disk fails, the RAID reconstruction process will generate garbage the next time you read any block on that stripe. What's worse, it will do so silently -- it has no idea that it's giving you corrupt data.
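A few lines of Python make the failure mode concrete (an assumed toy scenario with one byte per disk, not any real array's behavior): the data write lands, the parity write does not, and a later reconstruction silently returns garbage.

    d0, d1, d2 = 0x11, 0x22, 0x33        # one byte per disk, for brevity
    parity = d0 ^ d1 ^ d2                # consistent stripe: everything XORs to 0

    d0 = 0x99                            # new data reaches disk 0...
    # ...power is lost: the matching parity update never happens.

    # Later, disk 1 fails and is rebuilt from the survivors plus stale parity.
    rebuilt_d1 = d0 ^ d2 ^ parity
    assert rebuilt_d1 != 0x22            # garbage, returned silently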

There are software-only workarounds for this, but they're so slow that software RAID has died in the marketplace. Current RAID products all do the RAID logic in hardware, where they can use NVRAM to survive power loss. This works, but it's expensive.

There's also a nasty performance problem with existing RAID schemes. When you do a partial-stripe write -- that is, when you update less data than a single RAID stripe contains -- the RAID system must read the old data and parity in order to compute the new parity. That's a huge performance hit. Where a full-stripe write can simply issue all the writes asynchronously, a partial-stripe write must do synchronous reads before it can even start the writes.
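Here is the read-modify-write arithmetic spelled out in a small Python sketch (illustrative only): the new parity is old_parity XOR old_data XOR new_data, which is exactly why the old data and old parity must be read synchronously before the writes can start.

    def partial_stripe_update(old_data: int, old_parity: int, new_data: int):
        """Return (new_data, new_parity) for a single-block update."""
        new_parity = old_parity ^ old_data ^ new_data   # needs two reads first
        return new_data, new_parity

    # A stripe of three data bytes plus parity.
    d = [0x11, 0x22, 0x33]
    p = d[0] ^ d[1] ^ d[2]

    # Overwrite just d[1]: the array must first read d[1] and p from disk.
    d[1], p = partial_stripe_update(d[1], p, 0x55)
    assert p == d[0] ^ d[1] ^ d[2]       # parity stays consistent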

Once again, expensive hardware offers a solution: a RAID array can buffer partial-stripe writes in NVRAM while it's waiting for the disk reads to complete, so the read latency is hidden from the user. Of course, this only works until the NVRAM buffer fills up. No problem, your storage vendor says! Just shell out even more cash for more NVRAM. There's no problem your wallet can't solve.

Partial-stripe writes pose an additional problem for a transactional filesystem like ZFS. A partial-stripe write necessarily modifies live data, which violates one of the rules that ensures transactional semantics. (It doesn't matter if you lose power during a full-stripe write for the same reason that it doesn't matter if you lose power during any other write in ZFS: none of the blocks you're writing to are live yet.)

If only we didn't have to do those evil partial-stripe writes...

Enter RAID-Z.

RAID-Z is a data/parity scheme like RAID-5, but it uses dynamic stripe width. Every block is its own RAID-Z stripe, regardless of blocksize. This means that every RAID-Z write is a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, completely eliminates the RAID write hole. RAID-Z is also faster than traditional RAID because it never has to do read-modify-write.
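A rough Python sketch of the idea follows -- a simplification made up for illustration, not the actual RAID-Z on-disk format -- showing how a single block of any size becomes its own full stripe with its own parity column:

    from functools import reduce

    SECTOR = 4                           # toy sector size in bytes

    def raidz_like_layout(block: bytes, ndisks: int):
        """Split one block into data columns plus a single parity column."""
        cols = [block[i:i + SECTOR] for i in range(0, len(block), SECTOR)]
        cols = [c.ljust(SECTOR, b"\0") for c in cols]   # pad the last column
        parity = reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), cols)
        # Column 0 is parity; the rest are data.  A small block simply uses
        # fewer disks -- that is the variable stripe width.
        assert len(cols) + 1 <= ndisks, "block too large for this toy layout"
        return [parity] + cols

    stripe = raidz_like_layout(b"hello raid-z", ndisks=5)
    print(len(stripe), "columns: 1 parity +", len(stripe) - 1, "data")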

Whoa, whoa, whoa -- that's it? Variable stripe width? Geez, that seems pretty obvious. If it's such a good idea, why doesn't everybody do it?

Well, the tricky bit here is RAID-Z reconstruction. Because the stripes are all different sizes, there's no simple formula like "all the disks XOR to zero." You have to traverse the filesystem metadata to determine the RAID-Z geometry. Note that this would be impossible if the filesystem and the RAID array were separate products, which is why there's nothing like RAID-Z in the storage market today. You really need an integrated view of the logical and physical structure of the data to pull it off.
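For a feel of what metadata-driven reconstruction looks like, here is a heavily simplified sketch (the BlockPointer structure and resilver function are hypothetical, not ZFS internals): resilvering is a walk over the tree of live block pointers, rebuilding only the columns that lived on the failed disk.

    from dataclasses import dataclass, field

    @dataclass
    class BlockPointer:                  # hypothetical, highly simplified
        disks: list                      # which disks hold this block's columns
        offsets: list                    # where on each disk
        checksum: bytes                  # 256-bit checksum of the block
        children: list = field(default_factory=list)   # indirect/metadata blocks

    def resilver(bp: BlockPointer, failed_disk: int, rebuild_column):
        """Depth-first walk of live metadata; only allocated blocks are touched."""
        if failed_disk in bp.disks:
            col = bp.disks.index(failed_disk)
            rebuild_column(bp, col)      # reconstruct from parity + survivors
        for child in bp.children:
            resilver(child, failed_disk, rebuild_column)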

But wait, you say: isn't that slow? Isn't it expensive to traverse all the metadata? Actually, it's a trade-off. If your storage pool is very close to full, then yes, it's slower. But if it's not too close to full, then metadata-driven reconstruction is actually faster because it only copies live data; it doesn't waste time copying unallocated disk space.

But far more important, going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes. Traditional RAID products can't do this; they simply XOR the data together blindly.

Which brings us to the coolest thing about RAID-Z: self-healing data. In addition to handling whole-disk failure, RAID-Z can also detect and correct silent data corruption. Whenever you read a RAID-Z block, ZFS compares it against its checksum. If the data disks didn't return the right answer, ZFS reads the parity and then does combinatorial reconstruction to figure out which disk returned bad data. It then repairs the damaged disk and returns good data to the application. ZFS also reports the incident through Solaris FMA so that the system administrator knows that one of the disks is silently failing.
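Here is a hedged sketch of the combinatorial-reconstruction logic (illustrative Python with sha256 standing in for the block checksum; none of this is the actual ZFS code): assume each data column in turn is the bad one, rebuild it from parity plus the others, and keep the combination whose checksum matches.

    import hashlib
    from functools import reduce

    def xor(*cols: bytes) -> bytes:
        return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), cols)

    def checksum(block: bytes) -> bytes:
        # Stand-in for the 256-bit checksum stored in the parent block pointer.
        return hashlib.sha256(block).digest()

    def self_heal(data_cols, parity, expected_cksum):
        """Return the repaired data columns, or None if unrecoverable."""
        if checksum(b"".join(data_cols)) == expected_cksum:
            return data_cols                         # fast path: all good
        for bad in range(len(data_cols)):            # try each disk as the culprit
            others = [c for i, c in enumerate(data_cols) if i != bad]
            rebuilt = xor(parity, *others)
            candidate = data_cols[:bad] + [rebuilt] + data_cols[bad + 1:]
            if checksum(b"".join(candidate)) == expected_cksum:
                return candidate                     # found and fixed the bad column
        return None

    # Write-time state: three data columns, parity, and the block checksum.
    cols = [b"AAAA", b"BBBB", b"CCCC"]
    par = xor(*cols)
    cksum = checksum(b"".join(cols))

    # Disk 1 silently corrupts its column; the read still self-heals.
    cols[1] = b"B?\xffB"
    assert self_heal(cols, par, cksum) == [b"AAAA", b"BBBB", b"CCCC"]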

Finally, note that RAID-Z doesn't require any special hardware. It doesn't need NVRAM for correctness, and it doesn't need write buffering for good performance. With RAID-Z, ZFS makes good on the original RAID promise: it provides fast, reliable storage using cheap, commodity disks.

