#3 ·

Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22567196


Honestly with that much storage at risk I would not use a snapshot RAID product like FlexRAID. Updates, validations and rebuilds would take days. With that much storage at risk I'd start investigating a ZFS platform.

Not sure what you are basing your reply on.


FlexRAID can do snapshot RAID or real-time RAID and has file-based checksums.


That being said, even with snapshot RAID, a rebuild takes as long as reading all the data drives concurrently and writing to the replacement drive. Assuming the CPU is not the bottleneck, that is only a bit slower than filling a drive with a regular copy.
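Back-of-the-envelope (my own rough figures, not a benchmark): a 3TB drive that averages around 120 MB/s sequentially needs about 3,000,000 MB / 120 MB/s ≈ 25,000 seconds, or roughly 7 hours, just to be written end to end. So a rebuild of a full drive should land in the several-hours range rather than days, as long as the concurrent reads and the parity calculation can keep pace.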


ZFS is not the solution for someone running WHS.


To the OP: I run the usual Supermicro X9SCM boards with ECC RAM and a Xeon E3-1230 v2 CPU. That might be overkill for what you want, but in my case I didn't want to skimp too much.
 
#5 ·
I don't think you really need a server motherboard/CPU; it would just be overkill for a WHS.

I have used an old P35 / Core 2 Duo / 4GB setup in the past with 16TB; it fed my two HTPCs over the network with no problems and rebuilt a bad drive fine with FlexRAID.


What you had before (H61 and G630) would be fine.

I would recommend upgrading to Z77 just for the better features and the fact that you can get 8+ onboard SATA ports, which can help you save on SATA cards.


Your desktop/HTPC would be more than capable of serving as a WHS if you decided to upgrade either of those.
 
#6 ·
The only thing I would watch out for is preferring a motherboard with an Intel NIC (possibly even one of Intel's own 7-series boards); in a server mostly dealing with data streaming, this can make a difference.

Other than that, no need to do anything special for a WHS server. Pick a board with an Intel NIC and plenty of onboard SATA ports, and go from there.


Also Re: FlexRAID

I use it quite effectively on a 25TB array right now. Snapshot or realtime doesn't make any difference in how long rebuilds take; the concept is the same. I run an update every night, and the time the update takes depends only on how much data changed that day, which usually isn't all that much (unless I have one of my ripping days where I process a whole load of BDs). A full verification does take its time of course, but it does on any other parity concept too; you simply need to check everything. ZFS doesn't make it faster.


I run actual Windows Server 2008 R2 rather than WHS, but FlexRAID works beautifully with it.
 
#8 ·
All good points above, but I still make the personal case that FlexRAID is not the best choice for that much data. I have personally outgrown FlexRAID, but it's a good choice if what you are protecting is mostly unchanging data like movie files. As mentioned above, don't forget the Intel NIC. Other brands work just fine on a Windows Server-based OS, but I typically only use those "other" brand NICs as a maintenance/service connection.
 
#9 ·
Motherboards with an Intel NIC tend to be expensive (the cheapest is the ASUS P8Z77-V at $185, aside from Intel's own boards, which don't have enough SATA ports). Practically, you're better off choosing whatever board you like and adding an Intel PCI Express x1 NIC like this ($30) later if necessary.
 
#10 ·
My other option is to use my Asus Z68 Deluxe motherboard, which has an Intel NIC, and replace it with a Z77.


But I think I want Intel at both ends.


Does a PCI Intel NIC card take away any bandwidth from SATA RAID cards?


Would it compete with my HDDs for data flow if I added a PCI card versus buying a motherboard with integrated Intel LAN?
 
#11 ·

Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22568954


All good points above, but I still make the personal case that FlexRAID is not the best choice for that much data. I have personally outgrown FlexRAID, but it's a good choice if what you are protecting is mostly unchanging data like movie files. As mentioned above, don't forget the Intel NIC. Other brands work just fine on a Windows Server-based OS, but I typically only use those "other" brand NICs as a maintenance/service connection.

It would be nice for you to elaborate on why you hold this position. I could make the argument that as your data grows, FlexRAID (and others like it) are a better solution than ZFS.


It is not about how much data, but about what you want to do with that data and what level of protection you desire. ZFS loses ALL data if the number of failed drives exceeds the amount of parity information. There are no rules in ZFS that stipulate what degree of parity you require based on how much data you store, so there is nothing inherently safer about ZFS. Safety is fully understanding the risks of each solution and being comfortable enough to go with it.
 
#12 ·

Quote:
Originally Posted by renethx  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22569443


Motherboards with an Intel NIC tend to be expensive (the cheapest is the ASUS P8Z77-V at $185, aside from Intel's own boards, which don't have enough SATA ports). Practically, you're better off choosing whatever board you like and adding an Intel PCI Express x1 NIC like this ($30) later if necessary.

I came to that same conclusion myself, which is why I found the Supermicro boards to be ideal (I bought mine for around $180 if memory serves). Feature-wise I got two Intel NICs, ECC memory support, a good configuration of PCIe slots, and IPMI!


IPMI is a feature that is a prerequisite for any future server hardware for me (so tempting to try to have it on HTPCs as well).
 
#14 ·

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22569842


What is the advantage of ECC memory?

It's able to check and correct memory errors, making it more reliable and keeping the server running instead of crashing. It is usually a little bit slower because of this.

With a home media server it's not needed or worth it IMO; memory errors are pretty rare.
 
#15 ·

Quote:
Originally Posted by acejh1987  /t/1438027/what-is-a-good-value-ser...a-20tb-whs-flexraid-server/0_40#post_22570002


It's able to check and correct memory errors, making it more reliable and keeping the server running instead of crashing. It is usually a little bit slower because of this.

With a home media server it's not needed or worth it IMO; memory errors are pretty rare.

Thanks!
 
#16 ·

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-ser...a-20tb-whs-flexraid-server/0_40#post_22569654


My other option is to use my Asus Z68 Deluxe motherboard, which has an Intel NIC, and replace it with a Z77.

But I think I want Intel at both ends.

Does a PCI Intel NIC card take away any bandwidth from SATA RAID cards?

Would it compete with my HDDs for data flow if I added a PCI card versus buying a motherboard with integrated Intel LAN?

More on this...


Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?


Anyone have an opinion?
 
#17 ·

Quote:
Originally Posted by hdkhang  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22569719


It would be nice for you to elaborate on why you hold this position. I could make the argument that as your data grows, FlexRAID (and others like it) are a better solution than ZFS.

As your data grows, you expose yourself to greater risk with FlexRAID. ZFS's self-healing filesystem and data-integrity features protect against silent data corruption/bit-rot... and do so on the fly and in a scheduled "scrub." Granted, FlexRAID offers something similar with its "Verify" function, but that effectively renders your server unusable for hours or even days depending on how much data you have. You have to run the whole verify job too, or it's worthless... so during those painful hours or days, don't change any data on your server, because there is no way to run an update. Don't try to watch a movie, and hope that your DVR HTPC's hard drives don't fill up, because you won't be able to move DVR recordings.



What's the big deal about bit-rot? There are numerous examples, but a real-life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption... which is not caught by hardware RAID verification processes (much less software). For a RAID-5 system that works out to one undetected error for every 67TB of data read. On a full 3TB HDD you can easily hit that 67TB in no time.
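To put that 67TB in perspective (rough math on my part): a full verify pass over a 20TB array reads about 20TB, so three or four verify runs already push you past the 67TB of reads at which that study would expect one undetected error.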

Quote:
It is not about how much data, but about what you want to do with that data and what level of protection you desire. ZFS loses ALL data if the number of failed drives exceeds the amount of parity information.

I know where you are going with this argument. And the ability to pull a hard drive and have access to the raw NTFS files is a good safety net. But think about this... how many times have you ever done this with a WHSv1, SnapRAID or FlexRAID hard drive? In fact, the only time you would ever do this is in a catastrophic server failure or when you are decommissioning a server. You don't just yank hard drives out of your server unless they are bad.


This happens so rarely that it almost renders this plus for FlexRAID moot. As long as you haven't lost more drives than your fault tolerance allows, you can import a ZFS set of disks into another server with near-instant access... with one simple command. And you can do this hardware-agnostically, just as with a single NTFS/FlexRAID disk.
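For anyone curious, that "one simple command" is the pool import; a minimal sketch, with "tank" as a made-up pool name:

    # on the old box, if it still boots (optional but cleaner):
    zpool export tank
    # on the new box, after moving the disks over:
    zpool import          # lists importable pools found on the attached disks
    zpool import tank     # brings the pool online under its old name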

Quote:
There are no rules in ZFS that stipulate to what degree of parity information you require based on how much data you store so there is nothing inherently safer about ZFS. Safety is fully understanding the risks of each solution and being comfortable enough to go with it.

Agreed. I did state that my needs had outgrown FlexRAID. ZFS is an enterprise class product versus a one-man show (no matter how talented that one man may be)... so who do you trust your data to? It's not just movies we are talking about here. It's music, photos, backups, documents, and much more... and more importantly your precious and expensive time.



Here are some other pros for ZFS as compared to FlexRAID:


1. No need for hardware RAID cards. ZFS was designed with cheap commodity hard drives in mind, not expensive RAID-capable or enterprise disks. Granted FlexRAID fits into this category as well... except when it comes to bit-rot which is more prevalent in commodity consumer-level hard drives.


2. No need for "checkdisk" or "fsck" type apps to correct filesystem problems... Besides, those apps take your data offline to check, meaning you can't stream movies while chkdsk runs on a 3TB hard drive!


3. Pooled storage without the need to re-run first-time parity when you add more hard drives. Calculating that first parity can take a long time in FlexRAID. Every time you add a new HDD to FlexRAID, you must run the first parity sync all over again, and with every new disk the process gets longer and longer. ZFS adds disks instantly.


4. ZFS pools stay online if you are rebuilding a RAID-Z set. With FlexRAID, your pool is offline while you rebuild.


5. Instantly create your pool/filesystem - No need to wait hours while the first parity build takes place, it's on the fly with ZFS.


6. True RAID-Z ability... mix and match RAID levels in the same pool... mix and match RAID-0, RAID-1, RAID-5, etc.


7. Snapshots/Rollback - Make a snapshot, and you now have a "Time-Machine"-like set of data saved with no extra effort or disk space. Only changes to the snapshot are written to disk. You can run snapshots daily, hourly, or every minute if you are ultra-paranoid and have a lot of changing data. Snapshot-RAID cannot compete here.


8. Copy-on-Write - pull the power plug on your server in the middle of a file write and nothing bad happens with ZFS. Try that with your NTFS-backed FlexRAID system.


9. ZFS is space-efficient - built-in compression (modern CPUs are more than able to keep up)... you can compress files on the fly to free up more hard disk space.


10. Huge filesystems... up to 16 exabytes. Why would you ever need that much? Who could have imagined a 4TB drive 10 years ago?


11. Deduplication... great for backups or sets that have a lot of the same data.


12. Simple backups: the "zfs send" command (a short command sketch follows this list)


13. On the fly Encryption


14. It's free! No need to purchase an OS license or RAID license.
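To make points 7, 9 and 12 concrete, here is roughly what they look like at the command line. Treat this as a sketch only; the pool, dataset and host names are made up:

    zfs snapshot tank/media@2012-10-21     # point-in-time snapshot (point 7)
    zfs rollback tank/media@2012-10-21     # roll the dataset back to it if needed
    zfs set compression=on tank/docs       # on-the-fly compression (point 9)
    zfs send tank/media@2012-10-21 | ssh backupbox zfs receive backup/media   # simple backup (point 12)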



It's worth noting that even Brahim recognizes that FlexRAID isn't perfect for every user and is trying to implement ZFS-like features in his "NZFS" product.
 
#18 ·

Quote:
Originally Posted by acejh1987  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570002


It's able to check and correct memory errors, making it more reliable and keeping the server running instead of crashing. It is usually a little bit slower because of this.

With a home media server it's not needed or worth it IMO; memory errors are pretty rare.

It's actually a lot more common than people think. Ever witnessed an out-of-the-blue BSOD that never happens again?


Here's Google's experience:
http://news.cnet.com/8301-30685_3-10370026-264.html


"each memory module experienced an average of nearly 4,000 correctible errors per year, and unlike your PC, Google servers use error correction code (ECC) that can nip most of those problems in the bud. That means a correctable error on a Google machine likely is an uncorrectable error on your computer"
 
#19 ·

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570271


More on this...

Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?

Anyone have an opinion?

Nope.


All NIC cards reside on the PCI bus... no matter if they are built into the mobo or an expansion card.


And no, the NIC will not compete for PCI bandwidth with your RAID card. Your x16-lane graphics card is more bandwidth hungry than any RAID card could ever hope to be... and people don't have problems playing graphics-intensive games online.
 
#20 ·

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570271


More on this...

Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?

Anyone have an opinion?

This depends on your setup. At some point I saw you mention PCI-X cards and PCI cards. If that is the case, it is obviously less than optimal. If you meant PCI Express cards, that's a different story. In the cheap server builds I've seen, people actually use PCI Intel NICs to keep their PCIe slots free for HBA cards; the 133 MB/s limit of PCI isn't going to handicap the NIC. Now, if you've got an HBA (SATA card) on your PCI bus, that's going to be a problem. Since all of the PCI slots are probably sharing one PCIe lane, they will be competing for the available bandwidth, whereas everything in a PCIe slot has its own dedicated bandwidth. So the moral of the story is: I wouldn't put more than one device on the PCI bus, since they share bandwidth, while anything on PCIe has dedicated bandwidth. Even with HBAs on PCI, most of what you do should work fine; just streaming a video isn't going to be limited by PCI. Where the limited bandwidth will come into play is a rebuild: when you're trying to access more than one disk at a time over PCI, they're going to be bandwidth-limited.
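Rough numbers to back that up (my estimates for 2012-era parts): legacy PCI is about 133 MB/s shared across every device on the bus, a gigabit NIC tops out around 125 MB/s, and a single 3TB-class drive can already push 120-150 MB/s sequentially. So one NIC on PCI is fine, but two drives behind a PCI HBA during a rebuild are fighting over less bandwidth than one of them could use alone, while each PCIe 2.0 lane gives a device roughly 500 MB/s of its own.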
 
#21 ·
Since ZFS has been brought up: what I think would be a really nice setup would be a virtualized system with WHS as the front end and ZFS providing the storage through NFS or iSCSI. This gets you around the lack of Drive Extender in WHS 2011, since WHS would just see one big disk. You'd also get the increased speed of a RAID array plus the advantages of ZFS, but all the features of WHS, in one box. This is one of the advantages of virtualization that I see, and a project I'd really like to tackle. I've already got WHS virtualized; I just can't afford the HBA and disks I'd need to try it. I've got my media on an unRAID server and just use WHS for backing up my computers and some small file storage. So this would be just one of those "let's see if I can do this" projects.
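If anyone wants to try it, the ZFS side of that idea is only a couple of commands. A minimal sketch with made-up names; the iSCSI target or NFS export configuration on top of it is OS-specific and left out:

    # carve out a block device (zvol) to hand to the WHS VM as "one big disk"
    zfs create -V 8T tank/whs-disk
    # or share a regular dataset over NFS for bulk media instead
    zfs create tank/media
    zfs set sharenfs=on tank/media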
 
#22 ·

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570271


More on this...

Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?

Anyone have an opinion?

Not "PCI" (there is no "PCI" in Z77/H77 chipset
) but "PCI Express", they are completely different, i.e, parallel shared bus topology vs. serial point-to-point bus topology. There is no "slow down" or "compete" in PCI Express. Moreover NIC is even connected to the chipset's PCI Express controller (x1 link), while RAID card will be connected to CPU's PCI Express controller (x16 or x8 link).


BTW, the onboard NIC is also connected to the chipset via a PCI Express x1 link; it's no different from a discrete NIC.


The chipset and CPU are connected via Direct Media Interface (DMI) 2.0, equivalent to a PCI Express 2.0 x4 link.
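Putting numbers on it (rough figures from the specs): DMI 2.0 is about 2 GB/s in each direction, a PCI Express 2.0 x1 link is about 500 MB/s, and a gigabit NIC can only ever ask for about 125 MB/s, so the NIC is nowhere near starving anything else hanging off the chipset.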
 
#23 ·

Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570419


As your data grows, you expose yourself to greater risk with FlexRAID. ZFS's self-healing filesystem and data-integrity features protect against silent data corruption/bit-rot... and do so on the fly and in a scheduled "scrub." Granted, FlexRAID offers something similar with its "Verify" function, but that effectively renders your server unusable for hours or even days depending on how much data you have. You have to run the whole verify job too, or it's worthless... so during those painful hours or days, don't change any data on your server, because there is no way to run an update. Don't try to watch a movie, and hope that your DVR HTPC's hard drives don't fill up, because you won't be able to move DVR recordings.

I'm not intimately familiar with ZFS, but I am curious to know how it knows that something is bad automatically. My understanding is that this can only happen if you actually access the file in some way; otherwise the system would be wasteful, continually checking all files for silent corruption. My opinion on silent corruption is similar to your dismissal of pulling native-format drives: if a file sits there in a corrupted state but I am able to repair it, then the difference to me is inconsequential.
Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570419


What's the big deal about bit-rot? There are numerous examples, but a real life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will have silent corruption... which is not caught by hardware RAID verification processes (much less software.) For a RAID-5 system that works out to one undetected error for every 67 TB of data read. On a full 3TB HDD you can easily hit that 67TB in no time.

Again, most of the discussion here is about predominantly static data, most of which is infrequently accessed. Even in a ZFS environment, the chance of catching an automatic heal of silent corruption is very low.
Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570419


I know where you are going with this argument. And the ability to pull a hard drive and have access to the raw NTFS files is a good safety-net. But think about this... how many times have you ever done this with a WHSv1, SnapRAID or FlexRAID hard drive? In fact the only time you would ever do this is in a catastrophic server failure, or you are decommissioning a server. You don't just yank hard drives out of your server unless they are bad.

This doesn't happen very often, to almost render this plus for FlexRAID pretty moot. As long as you haven't lost more than your fault-tolerance disks, you can import a ZFS set of disks into another server with near instant access... with one simple command. And you can do this hardware-agnostically, just as with a single NTFS/FlexRAID disk.

Which do you think is a bigger plus... having some chance of recovering lost data in the event of a catastrophic failure OR ensuring that one or two instances of silent corruption are corrected on the fly?
Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570419


Agreed. I did state that my needs had outgrown FlexRAID. ZFS is an enterprise class product versus a one-man show (no matter how talented that one man may be)... so who do you trust your data to? It's not just movies we are talking about here. It's music, photos, backups, documents, and much more... and more importantly your precious and expensive time.

Again, for home server use, having all the bells and whistles doesn't mean much if there are aspects you would rather not have to deal with: e.g. ZFS inability to add data on the fly to the primary array without having to make new vdevs or whatnot, all drives spinning up to read a single file and the associated wear and tear in a typical home usage scenario, not to mention power draw, and there are many others.
Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570419


Here are some other pros for ZFS as compared to FlexRAID:

1. No need for hardware RAID cards. ZFS was designed with cheap commodity hard drives in mind, not expensive RAID-capable or enterprise disks. Granted FlexRAID fits into this category as well... except when it comes to bit-rot which is more prevalent in commodity consumer-level hard drives.

2. No need for "checkdisk" or "fsck" type apps to correct filesystem problems... Besides, those apps take your data offline to check, meaning you can't stream movies while chkdsk runs on a 3TB hard drive!

3. Pooled storage without the need to re-run first-time parity when you add more hard drives. Calculating that first parity can take a long time in FlexRAID. Every time you add a new HDD to FlexRAID, you must run the first parity sync all over again, and with every new disk the process gets longer and longer. ZFS adds disks instantly.

4. ZFS pools stay online if you are rebuilding a RAID-Z set. With FlexRAID, your pool is offline while you rebuild.

5. Instantly create your pool/filesystem - No need to wait hours while the first parity build takes place, it's on the fly with ZFS.

6. True RAID-Z ability... mix and match RAID levels in the same pool... mix and match RAID-0, RAID-1, RAID-5, etc.

7. Snapshots/Rollback - Make a snapshot, and you now have a "Time-Machine"-like set of data saved with no extra effort or disk space. Only changes to the snapshot are written to disk. You can run snapshots daily, hourly, or every minute if you are ultra-paranoid and have a lot of changing data. Snapshot-RAID cannot compete here.

8. Copy-on-Write - pull the power plug on your server in the middle of a file write and nothing bad happens with ZFS. Try that with your NTFS-backed FlexRAID system.

9. ZFS is space efficient - built in compression (modern CPUs are more than able to keep up)... you can compress files on the fly to free up more hard disk space.

10. Huge filesystems... up to 16 exabytes Why would you ever need that much? Who could imagine a 4TB drive 10 years ago?

11. Deduplication... great for backups or sets that have a lot of the same data.

12. Simple backups: "zfs send" command

13. On the fly Encryption

14. It's free! No need to purchase an OS license or RAID license.

It's worth noting that even Brahim recognizes that FlexRAID isn't perfect for every user and is trying to implement ZFS-like features in his "NZFS" product.

1. Not a point of comparison; any hardware ZFS can use, FlexRAID/SnapRAID/unRAID can use as well. Actually, since FlexRAID/SnapRAID reside in the Windows space, they have a good chance of driver support too.


2. SnapRAID/FlexRAID do daily parity updates; during these updates, all the checking that impacts the data is done. That is similar to your whole ZFS auto-repair-of-silent-corruption thing upon access of files (see the scheduled-sync sketch at the end of this post).


3. ZFS cannot add already filled drives. FlexRAID/SnapRAID/unRAID can add empty drives instantly as well. So this point is lost by ZFS.


4. unRAID pools remain online while you are repairing. SnapRAID doesn't have pooling, but depending on the solution you employ to pool your drives, the non-impacted data is still available; it is only the lost data that is unavailable. In the case of unRAID, it can also emulate failed drives the way ZFS and hardware RAID do. I have tested this functionality quite a bit.


5. Again, see point 3... empty drives take nearly zero time to create a pool/array whatever you want to call it.


6. For home use, who cares about this point? Why would I care to mix and match RAID levels in the same pool? In fact, with pooling software, I can do just that: I can have hardware RAID mixed with software RAID mixed with whatever else. It would be a mess and not worth doing, however.


7. Again, for home use in storing media files, how often do your movies/music change? Most people know what these systems are designed for and what they are not good at; it is a personal decision. Many keep documents and photos separately or have them duplicated.


8. Can you expand further on why this is important in a home environment? I have a UPS on every one of my computers. How often are you pulling power to drives while the machine is running?


9. Compression is useless for compressed data (music, movies and photos) so this point is useless for home media servers. For documents and the like, most people don't have terabytes upon terabytes of the stuff to worry about the small file savings.


10. Not relevant today for home server usage


11. WHS handles that dedup stuff


12. Plenty of backup software is available; if you use CrashPlan, for instance, then backup is automatic, so I don't think this is an important point of distinction for the intended usage.


13. NTFS volumes can encrypt data


14. Can't argue with free... but you do realise that SnapRAID on Linux is also free. Despite this, many people are happy to pay to have a single point of support; not everyone is willing or able to put in the time and effort to self-troubleshoot. Also, it is only free insofar as you don't require functionality from Windows. If you need to virtualize Windows for functionality (which many people still do), then it is not free anymore.
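On the scheduling point from item 2: with SnapRAID on Linux this is just a cron entry. A sketch with an assumed schedule (Windows users would use Task Scheduler instead):

    # /etc/cron.d/snapraid -- assumed schedule, adjust to taste
    0 3 * * *    root   snapraid sync    # nightly parity update
    0 5 1 * *    root   snapraid check   # monthly full verification against parity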
 
#24 ·

Quote:
Originally Posted by duff99  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570513


Since ZFS has been brought up. What I think would be a real nice setup would be a virtualized system with WHS as the front end, and ZFS providing the storage through NFS or iSCSI. This gets you around the lack of disk expander on WHS 2011 since WHS would just see one big disk. You'd also get the increased speed of a raid array plus the advantages of ZFS, but all the features of WHS in one box. This is one of the advantages of virtualization that I see. This is a project I'd really like to tackle. I've already got WHS virtualized. I just can't afford to get the HBA and disks I'd need to try it. I've got my media on a Unraid server, and just use WHS for backing up my computers and some small file storage. So this would be just one of those let's see if I can do this projects.

The OS that ZFS runs on, such as FreeBSD or Solaris, should be able to provide the filer services you mentioned. I use virtualized Win7 for some of the Windows-specific services.

My server runs Solaris 11 providing SMB/CIFS (PCs) and Netatalk/AFP (Macs), and I have Win7 running in VirtualBox for Media Center (my PVR), connected to a couple of HDHomeRun network tuners. It also feeds some of the extenders in the house.
 
#25 ·

Quote:
Originally Posted by hdkhang  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570774


I'm not intimately familiar with ZFS, but I am curious to know how it knows that something is bad automatically. My understanding is that this can only happen if you actually access the file in some way; otherwise the system would be wasteful, continually checking all files for silent corruption.

Why is that "wasteful"? Your server sits idle most of the day. It should be doing something useful for you.

Quote:
My opinion on the silent corruption is similar to your dismissal of using native format drives. If the file sits there in a corrupted state but I am able to repair that file, then the difference to me is inconsequential.

Who says you will be able to repair that file under FlexRAID at all? There are lots of horror stories on the FlexRAID forums of people encountering bugs, or the parity data becoming corrupt, etc.


As hard drives become larger and larger with the bits packed tighter and tighter, the stability of these systems is called into question. I'd rather have a filesystem that actively works to prevent a problem before you have the problem. Have you ever had to run chkdsk on a 3TB drive? I have... it takes a very long time. And even chkdsk only scrubs the metadata, not the actual data itself. It's estimated that firmware bugs in hard drives cause 5-10% of all silent corruption errors. Why wouldn't you want a filesystem that checksums the block of data, the metadata, and even the pointer to the metadata? Most filesystems store the metadata with the block of data; if the block is corrupt, then the metadata is useless. You need an independent source of metadata, and a way to check it.
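For comparison with chkdsk, the ZFS check runs while the pool stays online; a minimal sketch, pool name made up:

    zpool scrub tank        # walks every block in the background and repairs bad copies from parity/mirrors
    zpool status -v tank    # shows scrub progress and lists any files with unrecoverable errors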

Quote:
Again, most of the discussion here is on predominantly static data, most of which is infrequently accessed. Even in a ZFS environment, the chances of picking up on an automatic healing of silent corruption is very low.

I don't think you get it. Static data has a higher chance of bit-rot. The more data is accessed, the more chance the hard drive's built-in error-correcting routines have to fix it before it reaches the PC. If they can't do that, it would be like UPS saying, "We guarantee that your package wasn't damaged when we picked it up." Not quite the guarantee you were looking for.

Quote:
Which do you think is a bigger plus... having some chance of recovering lost data in the event of a catastrophic failure OR ensuring that one or two instances of silent corruption are corrected on the fly?

I've already stated my opinion on this matter.

Quote:
Again, for home server use, having all the bells and whistles doesn't mean much if there are aspects of which you would rather not have to deal with. e.g. ZFS inability to add data on the fly to the primary array without having to make new vdevs or whatnot, all drives spinning up to read a single file and the associated wear and tear that has on a typical home usage scenario, not to mention power draw and there are many others.

Ugh... the power draw argument. We are talking about a few dollars a year in most areas of the US.


And I'm not sure what you are referring to when you say "ZFS inability to add data on the fly to the primary array without having to make new vdevs or whatnot."

Quote:
1. Not a point of comparison, anything ZFS can use, FlexRAID/SnapRAID/unRAID can use. Actually, being that FlexRAID/SnapRAID can reside in the windows space, it has a good chance of driver support as well.

Not really a pro for ZFS, this was more of an anti-con.

Quote:
2. SnapRAID/FlexRAID do daily parity updates, during these updates, all the checking that impacts the data is done, this would be similar to your whole ZFS auto repair silent corruption thingo upon access of files.

Nope. Snapshot-RAID cannot compensate for hard drive errors or guarantee the integrity of the data when the data is read... only when validating or restoring. If your parity data is corrupt... you have no chance of recovery. If you are "updating" invalid data, then your parity becomes just as corrupt, and you can only recover the corrupt data.

Quote:
3. ZFS cannot add already filled drives. FlexRAID/SnapRAID/unRAID can add empty drives instantly as well. So this point is lost by ZFS.

I can't argue this point. It's a nice feature. But once again you are trusting the native filesystem to already have your data intact.

Quote:
4. unRAID pools remain online while you are repairing. SnapRAID doesn't have pooling but depending on the solution you employ to pool your drives, the non impacted data is still available. It is only the lost data that is unavailable. In the case of unRAID it can also do emulate failed drives the way ZFS and hardware RAID does. I have tested this functionality quite a bit.

The point still stands in comparison of ZFS and FlexRAID.

Quote:
6. For home use, who cares about this point? Why would I care to mix and match RAID levels in the same pool? In fact, with pooling software, I can do just that. I can have hardware RAID mixed with software RAID mixed with whatever else, it would be a mess and not worth doing however.

This is certainly an advanced use, and probably not recommended for home use, I'll admit. But the fact that you aren't using hardware RAID with ZFS keeps you from having to make/break RAID sets at the hardware level to do this, which adds an extra layer of complexity when using this with a user-space snapshot-RAID.

Quote:
7. Again, for home use in storing of media files, how often does do your movies/music change? Most people know what these systems are designed for and what they are not good at, it is a personal decision. Many keep documents and photos separately or have them duplicated.

Surely, you jest.

Quote:
8. Expand further on why this is important in a home environment? I have UPS on every one of my computers. How often are you pulling power to drives while the machine is running?

No, but you can. Obviously, this is the extreme scenario, but a cable can come loose, vibrations in the case can cause heads to go crazy, you never know. Not everyone has the foresight to use a UPS. What about hardware faults? A bad power supply?

Quote:
9. Compression is useless for compressed data (music, movies and photos) so this point is useless for home media servers. For documents and the like, most people don't have terabytes upon terabytes of the stuff to worry about the small file savings.

If you only use your fileserver to store media, then you are limited in your vision.

Quote:
10. Not relevant today for home server usage

What was the average hard drive size 10 years ago? What will it be 10 years from now? Data accumulation is not stagnating or shrinking. It's estimated that ZFS will last us 30 years before it is outdated. This is not just an enterprise problem... the very fact that you need a file server in the first place should show you that this is relevant.

Quote:
11. WHS handles that dedup stuff

Only for backups, not for user data. And I don't think this feature survived into WHS 2011. ZFS can dedupe all data at the block level.

Quote:
12. Plenty of backup software available, if using Crashplan for instance, then backup is automatic and so I don't think this is an important point of distinction for the inteded usage.

The point is... this is baked into ZFS. No need to seek out and validate a 3rd-party tool.

Quote:
13. NTFS volumes can encrypt data

I never said it didn't. ZFS is a filesystem as well as a pooling/RAID system. This is also an anti-con for ZFS, and it is very easy to set up.

Quote:
14. Can't argue with Free... but you do realise that SnapRAID on Linux is also free.

Sure, but SnapRAID doesn't do pooling, so you have to use LVM or Greyhole. ZFS takes all of these features and presents them to the user as one unified toolset, rather than having to piece it all together.

Quote:
Despite this, many people are happy to pay to have a single point of support. Not everyone is willing or able to put the time and effort in to self troubleshoot. Also, it is only free in so far as you don't require functionality from Windows for instance. If you need to virtualize windows for functionality (which many people still do) then it is not free anymore.

I don't get your point here. Who says you need to run Windows to run a fileserver? If you need a virtualization platform, then VMware ESXi is free... Citrix Xen is free... KVM through Linux (Proxmox is awesome, btw) is free... even Microsoft's GUI-less Hyper-V Server 2008 is free.


And as for support... anyone can read or ask questions in the forums of their favorite product. As a one-man show, Brahim is very eager to use that community for end-user support... and I don't blame him.
 
#26 ·

Quote:
Originally Posted by Tong Chia  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570935


The OS such as FreeBSD or Solaris, that ZFS runs on should be able to provide the filer services you mentioned. I use virtualized Win7 for some of the Windows specific services.

My server runs Solaris 11 providing SMB/CIFS (PCs) and Netatalk/AFP (Macs) and I have Win7 running on Virtualbox for Media Center (my PVR) connected to a couple of HDHomerun network tuners. It also runs some of the extenders in the house.

I understand WHS isn't necessary. Part of the reason I want to do it that way is the challenge, learning some new things, just to see if I can make it work. The other reason is that I like WHS. I like the centralized backup primarily, but there are some other things also. I'm still running the original WHS since it's a hassle to upgrade and I would miss the drive pooling. I figure this way I would have essentially the drive pooling, but with the fast, safe back end of ZFS.


So basically I understand it's not necessary. Right now my needs are met. WHS is backing up my computers. Unraid is storing my media. I don't need to do anything I just want to play with some new things. Maybe I'm the only one with these particular needs, but I like the features of WHS with the backing of ZFS.
 