I have a NAS4Free server deployed using ZFS.
The hardware is an Intel E8400 at 3.0 GHz on a Gigabyte P35 board with 4 GB of memory.
The case is an Antec P180 which is quiet and cools efficiently. Drives are rubber mounted.
NAS4Free is a variant of the current FreeBSD 9 that includes ZFS v28.
As of this writing, Oracle's own ZFS has moved beyond v28, but Oracle has not released those newer versions as open source.
ZFS allows for mirroring and several flavors of software RAID (RAID-Z1, RAID-Z2, and RAID-Z3), giving one, two, or three drives of redundancy.
It differs from hardware RAID in the way the disks are handled, and appears to be friendlier toward consumer drives.
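As a rough sketch of how these layouts are set up (the pool name "tank" and device names like ada1/ada2/ada3 are assumptions for this example, not my actual hardware):

```shell
# Create a two-way mirror named "tank" (device names are examples).
zpool create tank mirror ada1 ada2

# Or, instead, a RAID-Z1 pool across three drives (one drive of redundancy):
# zpool create tank raidz1 ada1 ada2 ada3

# Confirm the pool layout and health.
zpool status tank
```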
As I understand ZFS, it does offer significant advantages over traditional hardware RAID, plus it is self-healing: every block is checksummed, so silent corruption can be detected and repaired from a redundant copy.
One of those advantages is data rebuild after a drive failure.
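The self-healing behavior can also be exercised manually with a scrub, which walks every allocated block, verifies its checksum, and repairs bad copies from redundancy. A minimal sketch (the pool name "tank" is an assumption):

```shell
# Start a scrub of the pool.
zpool scrub tank

# Check scrub progress and see any checksum errors that were repaired.
zpool status -v tank
```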
My previous configuration was a pair of 2TB Caviar Green in RAID1 (mirror).
These drives seem to have a higher failure rate, which appears to be common across 2TB and larger platforms.
I lost another 2TB drive this month, and the Intel RAID rebuilding process is painfully slow, as it seems to test every byte a zillion times.
It took days for the verification to complete, and during the verification the system is usable but painfully slow.
I understand a ZFS rebuild (called a "resilver") is much more intelligent: it copies only the data actually in use on the pool, not every block on the replaced drive.
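The replacement workflow, as a sketch (assuming the failed disk is ada1 and the new disk is ada3; names and pool "tank" are examples):

```shell
# Tell ZFS the new disk replaces the failed one; resilvering starts automatically.
zpool replace tank ada1 ada3

# Watch the resilver progress; only allocated data is copied, not the whole disk.
zpool status tank
```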