This, and the "48 TB Media Server" precursor thread, make for some pretty fantastic reading. Kapone's posts in particular were very informative.
A few quick thoughts of my own on random topics:
The reason to choose RAID-5 over other solutions is speed. There are too many potential failure modes in the storage chain for it to be as safe as RAID-1, WHS, or any of a half-dozen alternatives, but it can be blazingly quick.
As a practical matter though, this is really just benchmark porn. My onboard nVidia GigE adapters were only good for 10-12 MB/s over Windows network shares. When I moved to Marvell Yukon controllers, that number jumped to 60 MB/s, which is the best I can get despite iPerf numbers above 950 Mbit/s. I saw higher reported numbers and utilization in Vista, but the actual transfers happened no more quickly. In short, with modern drives approaching 90 MB/s in average sequential benchmarks, it doesn't take much of an array to saturate a GigE network. Really, to make use of these 400 MB/s+ arrays, they ought to be local storage.
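Just to put rough numbers on that, here's a quick back-of-the-envelope calculation. The 10% protocol overhead figure is an assumption on my part, not something I measured:

```python
# Back-of-the-envelope: how much drive throughput does it take to
# saturate a gigabit link?  The overhead figure is a rough assumption.
GIGE_LINE_RATE_MBIT = 1000      # raw link speed, Mbit/s
PROTOCOL_OVERHEAD = 0.10        # assumed TCP/IP + SMB overhead
DRIVE_SEQ_MBS = 90              # average sequential speed of a modern drive, MB/s

usable_mbs = GIGE_LINE_RATE_MBIT / 8 * (1 - PROTOCOL_OVERHEAD)
print(f"Usable GigE throughput:  ~{usable_mbs:.0f} MB/s")             # ~112 MB/s
print(f"Drives to fill the pipe: ~{usable_mbs / DRIVE_SEQ_MBS:.1f}")  # ~1.3
```

In other words, a bit more than one drive's worth of sequential throughput fills the pipe, which is why a 400 MB/s array only pays off as local storage.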
I do take issue with those building massive RAID-5 and RAID-6 volumes. The maximum I'd recommend for RAID-5 is 8 drives; for RAID-6, perhaps six more, at most. Those with more drives should use nested arrays (RAID-50/60). Drive capacities have grown faster than the speed with which most RAID controllers can rebuild a degraded array. Even if you space out your drive purchases, you're liable to run into at least one drive that fails within days of another, and then you lose the entire array. It's been pointed out that a simple block read error usually won't cause modern controllers to abandon the rebuild, but there's nothing they can do about a complete mechanical failure.
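To see why the rebuild window is the worry, here's a rough sketch. The 60 MB/s rebuild rate is an assumption for illustration; real controllers vary widely and slow down further under load:

```python
# Rough rebuild window for one failed drive.  Rebuild rate is assumed.
MB_PER_TB = 1_000_000           # decimal TB, as drive vendors count
drive_size_tb = 1.0
rebuild_rate_mbs = 60.0         # assumed sustained rebuild speed, MB/s

rebuild_hours = drive_size_tb * MB_PER_TB / rebuild_rate_mbs / 3600
print(f"Rebuild window: ~{rebuild_hours:.1f} hours per TB")   # ~4.6 hours
```

Every hour of that window is exposure to a second failure (or a third, for RAID-6), and it scales linearly with drive size, which is exactly the trend that worries me.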
Put another way, RAID is not a backup for itself. Anything important should live on a separate backup system, though that can certainly be a second array. I can't emphasize this enough for those who would use RAID-5 to grant themselves the illusion of data security.
My personal setup is a Ciprico RAIDCore BC4852 in my server with four 1 TB WD GP drives. I've been very impressed with it in some respects, less so in others.
- The card is PCI-X, and I have only plain PCI slots. It's backward compatible, yes, but the real-world throughput over a 32-bit, 33 MHz PCI slot is 95 MB/s with the drives in RAID-0 and 75 MB/s in RAID-5. The disparity comes from the fact that, unlike most RAID cards, the RAIDCore series uses main memory for its cache, so it moves quite a lot more data back and forth across the bus than a true "hardware" RAID card would. (See the rough math after this list.)
- Because main memory is the write cache, and I've got 5 GB of it in my server, I effectively have 4 GB of write cache. Writes are very fast, though it's obviously important to keep the server plugged into a UPS.
- The software interface is stellar. This 4000-series card, the 5000-series, and VST Pro all use the same driver revision. They also all have the same management interface, and it wouldn't surprise me to discover that you could span an array from the 4000-series to a 5000-series. Something to consider if you're one of those fortunate enough to have PCI-X slots on your motherboard.
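As for the PCI numbers above, here's the rough math. The practical bus figure and the parity-traffic accounting are assumptions on my part, not anything from Ciprico's documentation:

```python
# 32-bit, 33 MHz PCI bandwidth, and one way to account for the RAID-5
# penalty on a card that builds parity in main memory.  Illustrative only.
pci_theoretical = 33_000_000 * 4 / 1_000_000   # ~133 MB/s
pci_practical = 100                            # typical after arbitration overhead (assumed)

# With four drives in RAID-5, roughly one parity block crosses the bus
# for every three data blocks, on top of the data itself.
data_blocks, parity_blocks = 3, 1
raid5_ceiling = pci_practical * data_blocks / (data_blocks + parity_blocks)

print(f"PCI theoretical:       ~{pci_theoretical:.0f} MB/s")
print(f"RAID-0 ceiling:        ~{pci_practical:.0f} MB/s")
print(f"RAID-5 ceiling (est.): ~{raid5_ceiling:.0f} MB/s")
```

That lines up with the 95/75 MB/s split I actually see, though I'd treat it as a sanity check rather than an exact accounting.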
A question arose earlier about why anyone would bother with VST Pro in place of Intel's Matrix RAID driver. The answer is that it supports every feature you could want short of RAID-6, including online capacity expansion (OCE) and RAID-level migration. It's extremely flexible and stable software, and it's quite a lot faster at disk writes; Tom's Hardware published benchmarks to that effect not long ago.
I think that's all for now. Thanks again to those who contributed to this thread.