

Registered · 1,125 Posts
Honestly, with that much storage at risk I would not use a snapshot RAID product like FlexRAID. Updates, validations, and rebuilds would take days. I'd start investigating a ZFS platform instead.
 

Registered · 2,164 Posts

Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22567196


Honestly, with that much storage at risk I would not use a snapshot RAID product like FlexRAID. Updates, validations, and rebuilds would take days. I'd start investigating a ZFS platform instead.

Not sure what you are basing your reply on.


FlexRAID can do snapshot RAID or real-time RAID, and it has file-based checksums.


That being said, even with snapshot RAID, a rebuild takes only as long as reading all the data drives concurrently and writing to the replacement drive. Assuming the CPU is not the bottleneck, that is a bit slower than the time required to fill a drive by a regular copy process.
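To put rough numbers on that, here is a minimal sketch of the estimate; the 120 MB/s sustained write speed is an assumed figure, not a benchmark:

```python
# A back-of-envelope rebuild estimate: the data drives are read concurrently,
# so the replacement drive's sustained write speed is the bottleneck.

def rebuild_hours(drive_tb: float, write_mb_s: float = 120.0) -> float:
    """Hours to fill a replacement drive at a sustained write speed."""
    drive_mb = drive_tb * 1_000_000  # 1 TB = 1,000,000 MB (decimal, as drives are sold)
    return drive_mb / write_mb_s / 3600

print(f"{rebuild_hours(3.0):.1f} h")  # a 3 TB drive at ~120 MB/s -> ~6.9 h
```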


ZFS is not the solution for someone running WHS.


To the OP: I run the usual Supermicro X9SCM boards with ECC RAM and a Xeon E3-1230v2 CPU. Might be overkill for what you want, but in my case I didn't want to skimp too much.
 

Registered · 246 Posts
I don't think you really need a server MB/CPU; it would just be overkill for a WHS.

I have used an old P35/Core 2 Duo/4GB build in the past with 16TB, and it fed my 2 HTPCs over the network with no problems and rebuilt a bad drive fine with FlexRAID.


What you had before (H61 and G630) would be fine.

I would recommend upgrading to Z77 just for the better features and the fact that you can get 8+ onboard SATA ports, which can help you save on SATA cards.


Your desktop/HTPC would be more than capable of serving as a WHS if you decided to upgrade either of those.
 

Registered · 1,058 Posts
The only thing I would watch out for is preferring a motherboard with an Intel NIC (possibly even one of Intel's own 7-series boards); in a server mostly dealing with data streaming, this can make a difference.

Other than that, no need to do anything special for a WHS server. Pick a board with an Intel NIC and plenty of onboard SATA ports, and go from there.


Also, re: FlexRAID

I use it quite effectively on a 25TB RAID right now. Snapshot or real-time doesn't make any difference in how long rebuilds take; the concept is the same. I run an update every night, and the time the update takes depends only on how much data changed in one day, which usually isn't all that much (unless I have one of my ripping days where I process a whole load of BDs). A full verification does take its time, of course, but so does one on any other parity concept; you simply need to check everything. ZFS doesn't make it faster.
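For a rough sense of the difference, a small sketch (the 100 MB/s per-drive throughput and the 50 GB daily delta are assumptions for illustration):

```python
# The nightly update scales with the day's changes, while a full verify must
# stream every drive end to end. Throughput (100 MB/s per drive) and the
# daily delta (50 GB) are assumed figures for illustration.

def hours(gb: float, mb_s: float = 100.0) -> float:
    """Time to stream `gb` gigabytes at a sustained rate of `mb_s` MB/s."""
    return gb * 1000 / mb_s / 3600

print(f"nightly update of 50 GB: ~{hours(50):.2f} h")  # ~0.14 h
# Drives verify in parallel, so a full pass is bounded by the largest drive:
print(f"full verify (3 TB/drive): ~{hours(3000):.1f} h")  # ~8.3 h
```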


I run an actual Windows Server 2008 R2 rather than WHS, but FlexRAID works just beautifully with it.
 

Registered · 246 Posts

Quote:
Originally Posted by Nevcairiel  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22568164


preferring a motherboard with an Intel NIC

Great point - I would recommend this too. I believe some of the ASUS Z77-V series boards also have the Intel LAN (the 'Pro' definitely does).
 

Registered · 1,125 Posts
Good points all above, but I still make the personal case that FlexRAID is not the best choice for that much data. I have personally outgrown FlexRAID, but it's a good choice if all you are protecting is mostly unchanging data like movie files. As mentioned above, don't forget the Intel NIC. Other brands work just fine on a Windows server based OS, but I typically only use those "other" brand NICs as a maintenance/service connection.
 

Premium Member · 16,132 Posts
Motherboards with an Intel NIC tend to be expensive (the cheapest is the ASUS P8Z77-V at $185, aside from Intel's own boards, which don't have enough SATA ports). Practically speaking, you'd be better off choosing any board you like and adding a ~$30 Intel PCI Express x1 NIC card later if necessary.
 

Banned · 29,681 Posts · Discussion Starter #10
My other option is to use my Asus Z68 Deluxe motherboard, which has an Intel NIC, and replace it with a Z77.


But I think I want Intel at both ends.


Does a PCI Intel NIC card take away any bandwidth from SATA RAID cards?


Would it compete with my HDDs for data flow if I added a PCI card versus buying a mobo with integrated Intel LAN?
 

Registered · 2,164 Posts

Quote:
Originally Posted by Puwaha  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22568954


Good points all above, but I still make the personal case that FlexRAID is not the best choice for that much data. I have personally outgrown FlexRAID, but it's a good choice if all you are protecting is mostly unchanging data like movie files. As mentioned above, don't forget the Intel NIC. Other brands work just fine on a Windows server based OS, but I typically only use those "other" brand NICs as a maintenance/service connection.

It would be nice if you could elaborate on why you hold this position. I could make the argument that, as your data grows, FlexRAID (and products like it) is a better solution than ZFS.


It is not about how much data, but about what you want to do with that data and what level of protection you desire. ZFS loses ALL data if the number of failed drives exceeds the amount of parity information. There are no rules in ZFS that stipulate what degree of parity information you require based on how much data you store, so there is nothing inherently safer about ZFS. Safety is fully understanding the risks of each solution and being comfortable enough to go with it.
 

Registered · 2,164 Posts

Quote:
Originally Posted by renethx  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22569443


Motherboards with an Intel NIC tend to be expensive (the cheapest is the ASUS P8Z77-V at $185, aside from Intel's own boards, which don't have enough SATA ports). Practically speaking, you'd be better off choosing any board you like and adding a ~$30 Intel PCI Express x1 NIC card later if necessary.

I came to that same conclusion myself, which is why I found the SuperMicro boards to be ideal (I bought mine for $180ish, if memory serves). Feature-wise I got two Intel NICs, ECC memory support, a good configuration of PCIe slots, and IPMI!


IPMI is a feature that is a prerequisite for any future server hardware (so tempting to try to have it on HTPCs as well).
 

Registered · 246 Posts

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22569842


What is the advantage of ECC memory?

It's able to check and correct memory errors, making it more accurate and keeping the server running instead of crashing. It's usually a little bit slower because of this.

With a home media server it's not needed or worth it IMO - memory errors are pretty rare.
 

Banned · 29,681 Posts · Discussion Starter #15

Quote:
Originally Posted by acejh1987  /t/1438027/what-is-a-good-value-ser...a-20tb-whs-flexraid-server/0_40#post_22570002


It's able to check and correct memory errors, making it more accurate and keeping the server running instead of crashing. It's usually a little bit slower because of this.

With a home media server it's not needed or worth it IMO - memory errors are pretty rare.

Thanks!
 

Banned · 29,681 Posts · Discussion Starter #16

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-ser...a-20tb-whs-flexraid-server/0_40#post_22569654


My other option is to use my Asus Z68 Deluxe motherboard, which has an Intel NIC, and replace it with a Z77.

But I think I want Intel at both ends.

Does a PCI Intel NIC card take away any bandwidth from SATA RAID cards?

Would it compete with my HDDs for data flow if I added a PCI card versus buying a mobo with integrated Intel LAN?

More on this...


Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?


Anyone have an opinion?
 

Registered · 1,125 Posts

Quote:
Originally Posted by hdkhang  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22569719


It would be nice if you could elaborate on why you hold this position. I could make the argument that, as your data grows, FlexRAID (and products like it) is a better solution than ZFS.

As your data grows, you expose yourself to greater risk with FlexRAID. ZFS's self-healing filesystem and data-integrity features protect against silent data corruption/bit-rot... and do so on the fly as well as in a scheduled "scrub." Granted, FlexRAID offers something similar with its "Verify" function, but that effectively renders your server unusable for hours or even days depending on how much data you have. You have to run the whole verify job, too, or it's worthless... so during those painful hours or days, don't change any data on your server, because there is no way to run an update. Don't try to watch a movie, and hope that your DVR HTPC's hard drives don't fill up, because you won't be able to move DVR recordings.



What's the big deal about bit-rot? There are numerous examples, but a real-life study of 1.5 million HDDs in the NetApp database found that on average 1 in 90 SATA drives will develop silent corruption... which is not caught by hardware RAID verification processes (much less software). For a RAID-5 system that works out to one undetected error for every 67 TB of data read. With an array of full 3TB HDDs, you can easily hit that 67TB of reads in no time.
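Taking those quoted figures at face value, the arithmetic looks like this (the 8-drive array size is an invented example, not from the study):

```python
# Plain arithmetic on the quoted NetApp figures. The 8-drive array is an
# invented example size (roughly a 20 TB server), not from the study.

drives = 8
silent_rate = 1 / 90  # fraction of SATA drives with silent corruption

# Chance that at least one drive in the array is affected:
p_any = 1 - (1 - silent_rate) ** drives
print(f"P(at least one affected drive): {p_any:.1%}")  # ~8.6%

# RAID-5: one undetected error per 67 TB read, so full passes of a 20 TB
# array before one error is expected:
print(f"full array reads per expected error: {67 / 20:.1f}")  # ~3.4
```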

Quote:
It is not about how much data, but about what you want to do with that data and what level of protection you desire. ZFS loses ALL data if the number of failed drives exceeds the amount of parity information.

I know where you are going with this argument, and the ability to pull a hard drive and have access to the raw NTFS files is a good safety net. But think about this: how many times have you ever done that with a WHSv1, SnapRAID, or FlexRAID hard drive? In fact, the only time you would ever do it is in a catastrophic server failure, or when you are decommissioning a server. You don't just yank hard drives out of your server unless they are bad.


This happens so rarely that it almost renders this plus for FlexRAID moot. As long as you haven't lost more drives than your fault tolerance allows, you can import a ZFS set of disks into another server with near-instant access... with one simple command. And you can do this hardware-agnostically, just as with a single NTFS/FlexRAID disk.

Quote:
There are no rules in ZFS that stipulate what degree of parity information you require based on how much data you store, so there is nothing inherently safer about ZFS. Safety is fully understanding the risks of each solution and being comfortable enough to go with it.

Agreed. I did state that my needs had outgrown FlexRAID. ZFS is an enterprise-class product versus a one-man show (no matter how talented that one man may be)... so who do you trust your data to? It's not just movies we are talking about here. It's music, photos, backups, documents, and much more... and, more importantly, your precious and expensive time.



Here are some other pros for ZFS as compared to FlexRAID:


1. No need for hardware RAID cards. ZFS was designed with cheap commodity hard drives in mind, not expensive RAID-capable or enterprise disks. Granted, FlexRAID fits into this category as well... except when it comes to bit-rot, which is more prevalent in commodity consumer-level hard drives.


2. No need for "checkdisk" or "fsck" type apps to correct filesystem problems. Besides, those apps take your data offline for the check, meaning you can't stream movies while chkdsk runs on a 3TB hard drive!


3. Pooled storage without the need to re-run first-time parity when you add more hard drives. Calculating that first parity can take a long time in FlexRAID: every time you add a new HDD, you must run the first parity sync all over again, and with every new disk the process gets longer. ZFS adds disks instantly.


4. ZFS pools stay online if you are rebuilding a RAID-Z set. With FlexRAID, your pool is offline while you rebuild.


5. Instantly create your pool/filesystem. No need to wait hours for a first parity build; it's on the fly with ZFS.


6. True RAID-Z flexibility... mix and match RAID levels in the same pool: RAID-0, RAID-1, RAID-5, and so on.


7. Snapshots/Rollback. Make a snapshot and you have a "Time Machine"-like set of data saved with no extra effort or disk space; only changes made after the snapshot are written to disk. You can run snapshots daily, hourly, or every minute if you are ultra-paranoid and have a lot of changing data. Snapshot RAID cannot compete here (see the sketch after this list).


8. Copy-on-write. Pull the power plug on your server in the middle of a file write and nothing bad happens with ZFS. Try that with your NTFS-backed FlexRAID system.


9. ZFS is space-efficient, with built-in compression (modern CPUs are more than able to keep up)... you can compress files on the fly to free up more hard disk space.


10. Huge filesystems... up to 16 exabytes. Why would you ever need that much? Well, who could have imagined a 4TB drive 10 years ago?


11. Deduplication... great for backups or sets that have a lot of the same data.


12. Simple backups with the "zfs send" command (also shown in the sketch after this list).


13. On-the-fly encryption.


14. It's free! No need to purchase an OS license or RAID license.
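As a rough illustration of items 7 and 12, here is a hypothetical nightly job; the pool and dataset names ("tank/media", "backup/media") are invented, and this is only a sketch of how the real "zfs snapshot" and "zfs send" commands compose:

```python
# Hypothetical nightly job illustrating items 7 and 12: take a dated
# snapshot, then replicate it with `zfs send`. The pool/dataset names
# ("tank/media", "backup/media") are invented for this example.
import subprocess
from datetime import date

dataset = "tank/media"
snap = f"{dataset}@{date.today():%Y-%m-%d}"

# Item 7: a snapshot is instant and consumes space only as data changes later.
subprocess.run(["zfs", "snapshot", snap], check=True)

# Item 12: stream the snapshot into another pool. With a previous snapshot
# you could add `-i <prev>` to make the send incremental.
send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
subprocess.run(["zfs", "recv", "-F", "backup/media"], stdin=send.stdout, check=True)
send.stdout.close()
send.wait()
```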



It's worth noting that even Brahim recognizes that FlexRAID isn't perfect for every user and is trying to implement ZFS-like features in his "NZFS" product.
 

Registered · 1,125 Posts

Quote:
Originally Posted by acejh1987  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570002


It's able to check and correct memory errors, making it more accurate and keeping the server running instead of crashing. It's usually a little bit slower because of this.

With a home media server it's not needed or worth it IMO - memory errors are pretty rare.

It's actually a lot more common than people think. Ever witnessed an out-of-the-blue BSOD that never happens again?


Here's Google's experience...
http://news.cnet.com/8301-30685_3-10370026-264.html


"each memory module experienced an average of nearly 4,000 correctible errors per year, and unlike your PC, Google servers use error correction code (ECC) that can nip most of those problems in the bud. That means a correctable error on a Google machine likely is an uncorrectable error on your computer"
 

Registered · 1,125 Posts

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570271


More on this...

Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?

Anyone have an opinion?

Nope.


All NICs reside on the PCI bus, no matter whether they are built into the mobo or on an expansion card.


And no, the NIC will not compete for PCI bandwidth with your RAID card. Your x16-lane graphics card is more bandwidth-hungry than any RAID card could ever hope to be... and people don't have problems playing graphics-intensive games online.
 

Registered · 368 Posts

Quote:
Originally Posted by Mfusick  /t/1438027/what-is-a-good-value-server-board-and-cpu-for-a-20tb-whs-flexraid-server#post_22570271


More on this...

Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?

Anyone have an opinion?

This depends on your setup. At some point I saw you mention PCI-X cards and PCI cards; if that is the case, it is obviously less than optimal. If you meant PCI Express cards, that's a different story.

In the cheap server builds I've seen, people actually use PCI Intel NICs to keep their PCIe slots free for HBA cards. The 133 MB/s limit of PCI isn't going to handicap the NIC. But if you've got an HBA (SATA card) on your PCI bus, that is going to be a problem: all of the PCI slots are probably sharing one PCIe lane, so they compete for the available bandwidth, while everything in a PCIe slot has its own dedicated bandwidth.

So the moral of the story is: don't put more than one device on the PCI bus, since those devices share bandwidth. Even with HBAs on PCI, most of what you do should work fine; just streaming a video isn't going to be limited by PCI. Where the limited bandwidth comes into play is a rebuild: when you're trying to access more than one disk at a time on PCI, they are going to be bandwidth-limited.
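For a rough sense of the numbers involved (the per-drive throughput is an assumption; the bus rates are the standard theoretical figures):

```python
# Round numbers behind the advice above. The per-drive figure is an
# assumption; the bus figures are the standard theoretical rates.

pci_shared = 133  # MB/s, classic 32-bit/33 MHz PCI, shared by all PCI devices
pcie2_lane = 500  # MB/s, one PCIe 2.0 lane, dedicated per device, each way
gige_nic = 125    # MB/s, gigabit Ethernet theoretical maximum
hdd_seq = 120     # MB/s, one modern HDD reading sequentially (assumed)

# A lone gigabit NIC fits on PCI (125 <= 133), but two drives behind a
# PCI HBA already want far more than the shared bus can deliver:
print("NIC alone fits on PCI:", gige_nic <= pci_shared)
print(f"2 HDDs on a PCI HBA: {2 * hdd_seq} MB/s wanted vs {pci_shared} MB/s shared")
print(f"2 HDDs on a x1 PCIe HBA: {2 * hdd_seq} MB/s wanted vs {pcie2_lane} MB/s dedicated")
```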
 