Mfusick's How to build an affordable 30TB FlexRAID media server: Information Requested! - AVS Forum
post #1 of 3787 | 11-07-2012, 03:14 PM | Mfusick (Thread Starter)

Let's begin the journey... EDIT: PROJECT COMPLETED. See later posts for results.

-

"Too much is almost enough. Anything in life worth doing is worth overdoing. Moderation is for cowards."

Last edited by Mfusick; 08-13-2014 at 10:24 AM.
post #2 of 3787 | 11-07-2012, 07:22 PM | Puwaha

Honestly, with that much storage at risk I would not use a snapshot-RAID product like FlexRAID; updates, validations, and rebuilds would take days. With that much storage at risk, I'd start investigating a ZFS platform.
post #3 of 3787 | 11-07-2012, 08:15 PM | hdkhang

Quote: Originally Posted by Puwaha

Honestly, with that much storage at risk I would not use a snapshot-RAID product like FlexRAID; updates, validations, and rebuilds would take days. With that much storage at risk, I'd start investigating a ZFS platform.

Not sure what you are basing your reply on.

FlexRAID can do snapshot RAID or real-time RAID, and it has file-based checksums.

That said, even with snapshot RAID, a rebuild takes only as long as it takes to read all the data drives concurrently and write to the replacement drive. Assuming the CPU is not the bottleneck, that is just a bit slower than filling a drive by a regular copy.
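
Rough numbers for that rebuild time; a back-of-the-envelope sketch assuming a typical 3TB drive that sustains ~120 MB/s (your drives may differ):

Code:
rebuild time ~= drive capacity / sustained write speed
             ~= 3,000,000 MB / 120 MB/s = 25,000 s ~= 7 hours (best case)

Real rebuilds run longer once reads from the surviving drives, parity math, and other I/O contend.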

ZFS is not the solution for someone running WHS.

To the OP: I run the usual Supermicro X9SCM boards with ECC RAM and a Xeon E3-1230 v2 CPU. Might be overkill for what you want, but in my case I didn't want to skimp too much.
post #4 of 3787 | 11-07-2012, 11:02 PM | Mfusick (Thread Starter)

My storage is mostly movies, and the time to rebuild my largest drive (3TB) if it failed is acceptable.

I know it's a slow process, but that's acceptable to me in the event of a drive failure.

post #5 of 3787 | 11-08-2012, 07:06 AM | acejh1987

I don't think you really need a server motherboard/CPU; it would just be overkill for a WHS. I used an old P35 / Core 2 Duo / 4GB setup in the past with 16TB, and it fed my two HTPCs over the network with no problems and rebuilt a bad drive fine with FlexRAID.

What you had before (H61 and G630) would be fine. I would recommend upgrading to Z77 just for the better features, and for the fact that you can get 8+ onboard SATA ports, which can save you money on SATA cards.

Your desktop/HTPC would be more than capable of serving as a WHS if you decided to upgrade either of those.
post #6 of 3787 | 11-08-2012, 07:20 AM | Nevcairiel

The only thing I would watch out for is to prefer a motherboard with an Intel NIC (possibly even one of Intel's own 7-series boards); in a server mostly dealing with data streaming, this can make a difference. Other than that, there's no need to do anything special for a WHS server. Pick a board with an Intel NIC and plenty of onboard SATA, and go from there.

Also, re: FlexRAID. I use it quite effectively on a 25TB array right now. Snapshot or realtime makes no difference in how long rebuilds take; the concept is the same. I run an update every night, and the time it takes depends only on how much data changed that day, which usually isn't much (unless I have one of my ripping days where I process a whole load of BDs). A full verification does take its time, of course, but it does under any other parity concept too; you simply have to check everything. ZFS doesn't make it faster.

I run an actual Windows Server 2008 R2, not WHS, but FlexRAID works just beautifully with it.
post #7 of 3787 | 11-08-2012, 07:40 AM | acejh1987

Quote: Originally Posted by Nevcairiel

preferring a motherboard with an Intel NIC

Great point; I would recommend this too. I believe some of the ASUS Z77-V series boards also have the Intel LAN (the 'Pro' definitely does).
post #8 of 3787 | 11-08-2012, 11:07 AM | Puwaha

Good points all above, but I still make the personal case that FlexRAID is not the best choice for that much data. I have personally outgrown FlexRAID, but it's a good choice if all you are protecting is mostly unchanging data like movie files. As mentioned above, don't forget the Intel NIC. Other brands work just fine on a Windows-server-based OS, but I typically only use those "other" brand NICs as a maintenance/service connection.
post #9 of 3787 | 11-08-2012, 01:27 PM | renethx

Motherboards with an Intel NIC tend to be expensive (the cheapest is the ASUS P8Z77-V at $185, other than Intel's own boards, which don't have enough SATA ports). Practically, you're better off choosing whatever board you like and adding an Intel PCI Express x1 NIC card (around $30) later if necessary.
post #10 of 3787 | 11-08-2012, 02:21 PM | Mfusick (Thread Starter)

My other option is to use my ASUS Z68 Deluxe motherboard, which has an Intel NIC, and replace it with a Z77.

But I think I want Intel at both ends.

Does a PCI Intel NIC card take away any bandwidth from SATA RAID cards?

Would it compete with my HDDs for data flow if I added a PCI card, versus buying a motherboard with integrated Intel LAN?

post #11 of 3787 | 11-08-2012, 02:44 PM | hdkhang

Quote: Originally Posted by Puwaha

Good points all above, but I still make the personal case that FlexRAID is not the best choice for that much data. I have personally outgrown FlexRAID, but it's a good choice if all you are protecting is mostly unchanging data like movie files. As mentioned above, don't forget the Intel NIC. Other brands work just fine on a Windows-server-based OS, but I typically only use those "other" brand NICs as a maintenance/service connection.

It would be nice if you elaborated on why you hold this position. I could make the argument that as your data grows, FlexRAID (and others like it) is a better solution than ZFS.

It is not about how much data you have, but about what you want to do with that data and what level of protection you desire. ZFS loses ALL data if the number of failed drives exceeds the amount of parity information. There are no rules in ZFS stipulating what degree of parity you require based on how much data you store, so there is nothing inherently safer about ZFS. Safety is fully understanding the risks of each solution and being comfortable enough to go with it.
post #12 of 3787 | 11-08-2012, 02:49 PM | hdkhang

Quote: Originally Posted by renethx

Motherboards with an Intel NIC tend to be expensive (the cheapest is the ASUS P8Z77-V at $185, other than Intel's own boards, which don't have enough SATA ports). Practically, you're better off choosing whatever board you like and adding an Intel PCI Express x1 NIC card (around $30) later if necessary.

I came to the same conclusion myself, which is why I found the Supermicro boards to be ideal (I bought mine for $180-ish, if memory serves). Feature-wise I got two Intel NICs, ECC memory support, a good configuration of PCIe slots, and IPMI!

IPMI is a prerequisite feature for any future server hardware of mine (so tempting to try to have it on HTPCs as well).
post #13 of 3787 | 11-08-2012, 03:17 PM | Mfusick (Thread Starter)

What is the advantage of ECC memory?

post #14 of 3787 | 11-08-2012, 04:07 PM | acejh1987

Quote: Originally Posted by Mfusick

What is the advantage of ECC memory?

It's able to detect and correct memory errors, making it more reliable and keeping the server running instead of crashing. It is usually a little slower because of this. With a home media server it's not needed or worth it, IMO; memory errors are pretty rare.
post #15 of 3787 | 11-08-2012, 05:19 PM | Mfusick (Thread Starter)

Quote: Originally Posted by acejh1987

It's able to detect and correct memory errors, making it more reliable and keeping the server running instead of crashing. It is usually a little slower because of this. With a home media server it's not needed or worth it, IMO; memory errors are pretty rare.

Thanks!

post #16 of 3787 | 11-08-2012, 05:42 PM | Mfusick (Thread Starter)

Quote: Originally Posted by Mfusick

My other option is to use my ASUS Z68 Deluxe motherboard, which has an Intel NIC, and replace it with a Z77. But I think I want Intel at both ends. Does a PCI Intel NIC card take away any bandwidth from SATA RAID cards? Would it compete with my HDDs for data flow if I added a PCI card, versus buying a motherboard with integrated Intel LAN?

More on this...

Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards?

Anyone have an opinion?

post #17 of 3787 | 11-08-2012, 06:27 PM | Puwaha

Quote: Originally Posted by hdkhang

It would be nice if you elaborated on why you hold this position. I could make the argument that as your data grows, FlexRAID (and others like it) is a better solution than ZFS.

As your data grows, you expose yourself to greater risk with FlexRAID. ZFS's self-healing filesystem and data-integrity features protect against silent data corruption/bit-rot, both on the fly and in a scheduled "scrub." Granted, FlexRAID offers something similar with its "Verify" function, but that effectively renders your server unusable for hours or even days, depending on how much data you have. You also have to run the whole verify job, or it's worthless... so during those painful hours or days, don't change any data on your server, because there is no way to run an update. Don't try to watch a movie, and hope your DVR-HTPC's hard drives don't fill up, because you won't be able to move DVR recordings.
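
A scrub in ZFS is a one-liner, and the pool stays online and usable while it runs; a minimal sketch, assuming a pool named "tank":

Code:
zpool scrub tank        # walk every block, verify checksums, repair from redundancy
zpool status -v tank    # show scrub progress and list any files found corrupt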


What's the big deal about bit-rot? There are numerous examples, but a real-life study of 1.5 million HDDs in NetApp's database found that on average 1 in 90 SATA drives will have silent corruption, which is not caught by hardware RAID verification processes (much less software). For a RAID-5 system that works out to one undetected error for every 67TB of data read. With full 3TB HDDs you can easily hit that 67TB in no time.

Quote: Originally Posted by hdkhang

It is not about how much data you have, but about what you want to do with that data and what level of protection you desire. ZFS loses ALL data if the number of failed drives exceeds the amount of parity information.

I know where you are going with this argument, and the ability to pull a hard drive and have access to the raw NTFS files is a good safety net. But think about this: how many times have you ever done that with a WHS v1, SnapRAID, or FlexRAID hard drive? In fact, the only time you would ever do it is in a catastrophic server failure, or when you are decommissioning a server. You don't just yank hard drives out of your server unless they are bad.

This happens so rarely that it almost renders this plus for FlexRAID moot. As long as you haven't lost more than your fault-tolerance in disks, you can import a ZFS set of disks into another server with near-instant access, with one simple command. And you can do this hardware-agnostically, just as with a single NTFS/FlexRAID disk.
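
For reference, moving a pool really is about one command; a minimal sketch, again assuming a pool named "tank":

Code:
zpool export tank   # on the old box (skippable if the box is dead)
zpool import tank   # on the new box; ZFS scans the drives and reassembles the pool

Controller order and drive lettering don't matter, since ZFS identifies member disks by their on-disk labels.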

Quote: Originally Posted by hdkhang

There are no rules in ZFS stipulating what degree of parity you require based on how much data you store, so there is nothing inherently safer about ZFS. Safety is fully understanding the risks of each solution and being comfortable enough to go with it.

Agreed. I did state that my needs had outgrown FlexRAID. ZFS is an enterprise-class product versus a one-man show (no matter how talented that one man may be)... so whom do you trust your data to? It's not just movies we are talking about here; it's music, photos, backups, documents, and much more... and, more importantly, your precious and expensive time.


Here are some other pros for ZFS as compared to FlexRAID (a few of these are sketched as commands at the end of this post):

1. No need for hardware RAID cards. ZFS was designed with cheap commodity hard drives in mind, not expensive RAID-capable or enterprise disks. Granted FlexRAID fits into this category as well... except when it comes to bit-rot which is more prevalent in commodity consumer-level hard drives.

2. No need for "checkdisk" or "fsck" type apps to correct filesystem problems... Besides, those apps take your data offline to check, meaning you can't stream movies while chkdsk runs on a 3TB hard drive!

3. Pooled storage without the need to re-run first-time parity when you add more hard drives. Calculating that first parity can take a long time in FlexRAID. Every time you add a new HDD to FlexRAID, you must run the first parity sync all over again, and with every new disk the process gets longer and longer. ZFS adds disks instantly.

4. ZFS pools stay online if you are rebuilding a RAID-Z set. With FlexRAID, your pool is offline while you rebuild.

5. Instantly create your pool/filesystem - No need to wait hours while the first parity build takes place, it's on the fly with ZFS.

6. True RAID-Z flexibility... mix and match RAID levels in the same pool (RAID-0, RAID-1, RAID-5 equivalents, etc.).

7. Snapshots/rollback - Make a snapshot, and you now have a "Time Machine"-like set of data saved with no extra effort or disk space. Only changes since the snapshot are written to disk. You can run snapshots daily, hourly, or every minute if you are ultra-paranoid and have a lot of changing data. Snapshot RAID cannot compete here.

8. Copy-on-write - pull the power plug on your server in the middle of a file write and nothing bad happens with ZFS. Try that with your NTFS-backed FlexRAID system.

9. ZFS is space-efficient - built-in compression (modern CPUs are more than able to keep up)... you can compress files on the fly to free up more hard disk space.

10. Huge filesystems... up to 16 exabytes. Why would you ever need that much? Well, who could imagine a 4TB drive 10 years ago?

11. Deduplication... great for backups or sets that have a lot of the same data.

12. Simple backups: the "zfs send" command.

13. On-the-fly encryption.

14. It's free! No need to purchase an OS license or RAID license.


It's worth noting that even Brahim recognizes that FlexRAID isn't perfect for every user and is trying to implement ZFS-like features in his "NZFS" product.
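
To make items 7, 9, 11, and 12 concrete, here is a minimal command sketch; the pool/dataset names and the backup host are hypothetical:

Code:
zfs snapshot tank/media@2012-11-08    # item 7: instant, near-zero-space snapshot
zfs set compression=on tank/media     # item 9: transparent on-the-fly compression
zfs set dedup=on tank/backups         # item 11: block-level deduplication
zfs send tank/media@2012-11-08 | ssh backupbox zfs receive backup/media   # item 12

One caveat: dedup is RAM-hungry, so it is usually only worth enabling on datasets that really do contain duplicate data.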
post #18 of 3787 | 11-08-2012, 06:37 PM | Puwaha

Quote: Originally Posted by acejh1987

It's able to detect and correct memory errors, making it more reliable and keeping the server running instead of crashing. It is usually a little slower because of this. With a home media server it's not needed or worth it, IMO; memory errors are pretty rare.

It's actually a lot more common than people think. Ever witnessed an out-of-the-blue BSOD that never happens again?

Here's Google's experience:
http://news.cnet.com/8301-30685_3-10370026-264.html

"each memory module experienced an average of nearly 4,000 correctible errors per year, and unlike your PC, Google servers use error correction code (ECC) that can nip most of those problems in the bud. That means a correctable error on a Google machine likely is an uncorrectable error on your computer"
post #19 of 3787 | 11-08-2012, 06:38 PM | Puwaha

Quote: Originally Posted by Mfusick

More on this... Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards? Anyone have an opinion?

Nope.

All NICs reside on the PCI bus, no matter whether they are built into the mobo or on an expansion card.

And no, the NIC will not compete for PCI bandwidth with your RAID card. Your x16-lane graphics card is more bandwidth-hungry than any RAID card could ever hope to be, and people don't have problems playing graphics-intensive games online.
post #20 of 3787 | 11-08-2012, 06:44 PM | duff99

Quote: Originally Posted by Mfusick

More on this... Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards? Anyone have an opinion?

This depends on your setup. At some point I saw you mention PCI-X cards and PCI cards; if that is the case, it is obviously less than optimal. If you meant PCI Express cards, that's a different story.

In the cheap server builds I've seen, people actually use PCI Intel NICs to keep their PCIe slots free for HBA cards. The 133 MB/s limit of PCI isn't going to handicap a gigabit NIC. But if you've got an HBA (SATA card) on your PCI bus, that is going to be a problem: all of the PCI slots are probably sharing one PCIe lane, so they compete for the available bandwidth, whereas everything in a PCIe slot has its own dedicated bandwidth.

So the moral of the story is: I wouldn't put more than one device on the PCI bus, since PCI devices share bandwidth; anything on PCIe has dedicated bandwidth. Even with HBAs on PCI, most of what you do should work fine; just streaming a video isn't going to be limited by PCI. Where the limited bandwidth will bite is during a rebuild: when you're trying to access more than one disk at a time over PCI, the disks will be bandwidth-limited.
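
Rough numbers behind that, assuming a gigabit NIC and typical ~130 MB/s hard drives:

Code:
PCI bus (shared):        133 MB/s total, split across every PCI device
Gigabit NIC, flat out:   1000 Mb/s / 8 = ~125 MB/s  -> fine alone on PCI
Two HDDs on one PCI HBA: want ~2 x 130 MB/s, get ~133 MB/s combined
                         -> each throttled to roughly 66 MB/s in a rebuild
PCIe 2.0 x1 (dedicated): ~500 MB/s per slot, nothing shared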
post #21 of 3787 | 11-08-2012, 07:06 PM | duff99

Since ZFS has been brought up: what I think would be a really nice setup is a virtualized system with WHS as the front end and ZFS providing the storage through NFS or iSCSI. This gets you around the lack of Drive Extender in WHS 2011, since WHS would just see one big disk. You'd also get the increased speed of a RAID array plus the advantages of ZFS, but with all the features of WHS, in one box. This is one of the advantages of virtualization as I see it.

This is a project I'd really like to tackle. I've already got WHS virtualized; I just can't afford the HBA and disks I'd need to try it. I've got my media on an unRAID server and just use WHS for backing up my computers and some small file storage. So this would be just one of those "let's see if I can do this" projects.
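
The "one big disk" part is straightforward on the ZFS side; a minimal sketch on Solaris 11 with COMSTAR, where the pool name, zvol name, and size are all hypothetical:

Code:
zfs create -V 8T tank/whs-lun                    # a zvol: a block device backed by the pool
stmfadm create-lu /dev/zvol/rdsk/tank/whs-lun    # register the zvol as a SCSI logical unit
stmfadm add-view <LU-name-printed-above>         # make the LU visible to initiators
itadm create-target                              # stand up the iSCSI target

WHS then attaches with the Microsoft iSCSI initiator and formats the LUN as NTFS, never knowing ZFS is underneath.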
post #22 of 3787 | 11-08-2012, 08:00 PM | renethx

Quote: Originally Posted by Mfusick

More on this... Would adding an Intel NIC card in a PCI slot slow down or compete with the flow of data from PCI RAID cards? Anyone have an opinion?

Not "PCI" (there is no "PCI" in Z77/H77 chipset smile.gif) but "PCI Express", they are completely different, i.e, parallel shared bus topology vs. serial point-to-point bus topology. There is no "slow down" or "compete" in PCI Express. Moreover NIC is even connected to the chipset's PCI Express controller (x1 link), while RAID card will be connected to CPU's PCI Express controller (x16 or x8 link).

BTW onboard NIC is also connected to the chipset via a PCI Express x1 link, it's no different from a discrete NIC.

Chipset and CPU are connected via Direct Media Interface (DMI) 2.0 = PCI Express 2.0 x4 link.
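
Putting numbers on those links for scale:

Code:
DMI 2.0 = PCIe 2.0 x4:    4 lanes x 500 MB/s = ~2 GB/s each direction
Gigabit NIC at line rate: ~125 MB/s (roughly 6% of the DMI link)

Even a fully saturated NIC leaves the chipset link mostly idle.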
post #23 of 3787 | 11-08-2012, 08:50 PM | hdkhang

Quote: Originally Posted by Puwaha

As your data grows, you expose yourself to greater risk with FlexRAID. ZFS's self-healing filesystem and data-integrity features protect against silent data corruption/bit-rot, both on the fly and in a scheduled "scrub." Granted, FlexRAID offers something similar with its "Verify" function, but that effectively renders your server unusable for hours or even days, depending on how much data you have. [snip]

I'm not intimately familiar with ZFS, but I'm curious how it knows that something is bad automatically. My understanding is that this can only happen when you actually access the file in some way; otherwise the system would be completely wasteful, continually checking all files for silent corruption. My opinion on silent corruption is similar to your dismissal of native-format drives: if the file sits there in a corrupted state but I am able to repair that file, then the difference to me is inconsequential.
Quote: Originally Posted by Puwaha

What's the big deal about bit-rot? There are numerous examples, but a real-life study of 1.5 million HDDs in NetApp's database found that on average 1 in 90 SATA drives will have silent corruption, which is not caught by hardware RAID verification processes (much less software). For a RAID-5 system that works out to one undetected error for every 67TB of data read. With full 3TB HDDs you can easily hit that 67TB in no time.

Again, most of the discussion here is about predominantly static data, most of which is infrequently accessed. Even in a ZFS environment, the chance of triggering an automatic heal of silent corruption is very low.
Quote: Originally Posted by Puwaha

I know where you are going with this argument, and the ability to pull a hard drive and have access to the raw NTFS files is a good safety net. But think about this: how many times have you ever done that with a WHS v1, SnapRAID, or FlexRAID hard drive? [snip] As long as you haven't lost more than your fault-tolerance in disks, you can import a ZFS set of disks into another server with near-instant access, with one simple command. And you can do this hardware-agnostically, just as with a single NTFS/FlexRAID disk.

Which do you think is a bigger plus... having some chance of recovering lost data in the event of a catastrophic failure OR ensuring that one or two instances of silent corruption are corrected on the fly?
Quote: Originally Posted by Puwaha

Agreed. I did state that my needs had outgrown FlexRAID. ZFS is an enterprise-class product versus a one-man show (no matter how talented that one man may be)... so whom do you trust your data to? It's not just movies we are talking about here; it's music, photos, backups, documents, and much more... and, more importantly, your precious and expensive time.

Again, for home server use, having all the bells and whistles doesn't mean much if there are aspects you would rather not deal with; e.g. ZFS's inability to add data on the fly to the primary array without having to make new vdevs or whatnot, all drives spinning up to read a single file and the associated wear and tear in a typical home usage scenario, not to mention power draw, and there are many others.
Quote: Originally Posted by Puwaha

Here are some other pros for ZFS as compared to FlexRAID: [the 14-point list above; snipped]

1. Not a point of comparison; any hardware ZFS can use, FlexRAID/SnapRAID/unRAID can use too. Actually, since FlexRAID/SnapRAID live in Windows space, they have a good chance of driver support as well.

2. SnapRAID/FlexRAID do daily parity updates, and during those updates all the checking that impacts the data is done; that is similar to your whole ZFS auto-repair-of-silent-corruption thing upon access of files.

3. ZFS cannot add already-filled drives. FlexRAID/SnapRAID/unRAID can add empty drives instantly as well, so ZFS loses this point.

4. unRAID pools remain online while you are repairing. SnapRAID doesn't have pooling, but depending on the solution you employ to pool your drives, the non-impacted data is still available; only the lost data is unavailable. unRAID can also emulate failed drives the way ZFS and hardware RAID do. I have tested this functionality quite a bit.

5. Again, see point 3: empty drives take nearly zero time to turn into a pool/array/whatever you want to call it.

6. For home use, who cares about this point? Why would I want to mix and match RAID levels in the same pool? In fact, with pooling software I can do just that: hardware RAID mixed with software RAID mixed with whatever else. It would be a mess and not worth doing, however.

7. Again, for home use storing media files, how often do your movies/music change? Most people know what these systems are designed for and what they are not good at; it is a personal decision. Many keep documents and photos separately, or have them duplicated.

8. Expand on why this is important in a home environment. I have a UPS on every one of my computers. How often are you pulling power to drives while the machine is running?

9. Compression is useless for already-compressed data (music, movies, and photos), so this point is useless for home media servers. For documents and the like, most people don't have terabytes upon terabytes of the stuff, so the small savings don't matter.

10. Not relevant today for home server usage.

11. WHS handles that dedup stuff.

12. Plenty of backup software is available; with CrashPlan, for instance, backup is automatic, so I don't think this is an important point of distinction for the intended usage.

13. NTFS volumes can encrypt data.

14. Can't argue with free... but you do realise that SnapRAID on Linux is also free. Despite this, many people are happy to pay for a single point of support; not everyone is willing or able to put in the time and effort to troubleshoot themselves. Also, it is only free insofar as you don't require functionality from Windows. If you need to virtualize Windows for functionality (which many people still do), then it is not free anymore.
post #24 of 3787 | 11-08-2012, 10:27 PM | Tong Chia

Quote: Originally Posted by duff99

Since ZFS has been brought up: what I think would be a really nice setup is a virtualized system with WHS as the front end and ZFS providing the storage through NFS or iSCSI. [snip]

The OS that ZFS runs on, such as FreeBSD or Solaris, should be able to provide the filer services you mentioned. I use virtualized Win7 for some of the Windows-specific services.

My server runs Solaris 11 providing SMB/CIFS (PCs) and Netatalk/AFP (Macs), and I have Win7 running in VirtualBox for Media Center (my PVR), connected to a couple of HDHomeRun network tuners. It also feeds some of the extenders in the house.
post #25 of 3787 | 11-08-2012, 10:43 PM | Puwaha

Quote: Originally Posted by hdkhang

I'm not intimately familiar with ZFS, but I'm curious how it knows that something is bad automatically. My understanding is that this can only happen when you actually access the file in some way; otherwise the system would be completely wasteful, continually checking all files for silent corruption.

Why is that "wasteful"? Your server sits idle most of the day; it should be doing something useful for you.

Quote: Originally Posted by hdkhang

My opinion on silent corruption is similar to your dismissal of native-format drives: if the file sits there in a corrupted state but I am able to repair that file, then the difference to me is inconsequential.

Who says you will be able to repair that file under FlexRAID at all? There are lots of horror stories on the FlexRAID forums of people hitting bugs, parity data becoming corrupt, etc.

As hard drives get larger and larger, with the bits packed tighter and tighter, the stability of these systems is called into question. I'd rather have a filesystem that actively works to prevent a problem before you have it. Have you ever had to run chkdsk on a 3TB drive? I have... it takes a very long time. And even chkdsk only scrubs the metadata, not the actual data itself. It's estimated that firmware bugs in hard drives cause 5-10% of all silent corruption errors. Why wouldn't you want a filesystem that checksums the block of data, the metadata, and even the pointer to the metadata? Most filesystems store the metadata with the block of data; if the block is corrupt, the metadata is useless. You need an independent copy of the metadata, and a way to check it.
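
You can actually see this accounting in ZFS: the CKSUM column of zpool status counts blocks whose on-disk checksum no longer matched and which were repaired from redundancy. Illustrative, trimmed output for an assumed pool "tank":

Code:
zpool status tank
  NAME        STATE     READ WRITE CKSUM
  tank        ONLINE       0     0     0
    raidz1-0  ONLINE       0     0     0
      disk1   ONLINE       0     0     3    <- silently corrupted blocks, fixed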

Quote: Originally Posted by hdkhang

Again, most of the discussion here is about predominantly static data, most of which is infrequently accessed. Even in a ZFS environment, the chance of triggering an automatic heal of silent corruption is very low.

I don't think you get it: static data has a higher chance of bit-rot. The more data is accessed, the more chance the hard drive's built-in error-correcting routines have to fix it before it reaches the PC. Without that, it would be like UPS saying, "We guarantee that your package wasn't damaged when we picked it up." Not quite the guarantee you were looking for.

Quote: Originally Posted by hdkhang

Which do you think is a bigger plus... having some chance of recovering lost data in the event of a catastrophic failure OR ensuring that one or two instances of silent corruption are corrected on the fly?

I've already stated my opinion on this matter.

Quote: Originally Posted by hdkhang

Again, for home server use, having all the bells and whistles doesn't mean much if there are aspects you would rather not deal with; e.g. ZFS's inability to add data on the fly to the primary array without having to make new vdevs or whatnot, all drives spinning up to read a single file and the associated wear and tear in a typical home usage scenario, not to mention power draw, and there are many others.

Ugh... the power-draw argument. We are talking about a few dollars a year in most areas of the US.

And I'm not sure what you are referring to when you say "ZFS inability to add data on the fly to the primary array without having to make new vdevs or whatnot."

Quote: Originally Posted by hdkhang

1. Not a point of comparison; any hardware ZFS can use, FlexRAID/SnapRAID/unRAID can use too. Actually, since FlexRAID/SnapRAID live in Windows space, they have a good chance of driver support as well.

Not really a pro for ZFS; this one was more of an anti-con.

Quote: Originally Posted by hdkhang

2. SnapRAID/FlexRAID do daily parity updates, and during those updates all the checking that impacts the data is done; that is similar to your whole ZFS auto-repair-of-silent-corruption thing upon access of files.

Nope. Snapshot RAID cannot compensate for hard drive errors or guarantee the integrity of the data as it is read; it can only do so when validating or restoring. If your parity data is corrupt, you have no chance of recovery, and if you "update" against invalid data, your parity becomes just as corrupt and you can only recover the corrupt data.

Quote: Originally Posted by hdkhang

3. ZFS cannot add already-filled drives. FlexRAID/SnapRAID/unRAID can add empty drives instantly as well, so ZFS loses this point.

I can't argue this point; it's a nice feature. But once again you are trusting that the native filesystem already has your data intact.

Quote: Originally Posted by hdkhang

4. unRAID pools remain online while you are repairing. SnapRAID doesn't have pooling, but depending on the solution you employ to pool your drives, the non-impacted data is still available; only the lost data is unavailable. unRAID can also emulate failed drives the way ZFS and hardware RAID do. I have tested this functionality quite a bit.

The point still stands in comparison of ZFS and FlexRAID.

Quote: Originally Posted by hdkhang

6. For home use, who cares about this point? Why would I want to mix and match RAID levels in the same pool? In fact, with pooling software I can do just that: hardware RAID mixed with software RAID mixed with whatever else. It would be a mess and not worth doing, however.

This is certainly an advanced use, and probably not recommended at home, I'll admit. But the fact that you aren't using hardware RAID with ZFS saves you from having to make/break RAID sets at the hardware level to do this, which adds an extra layer of complexity when combined with user-space snapshot RAID.

Quote: Originally Posted by hdkhang

7. Again, for home use storing media files, how often do your movies/music change? Most people know what these systems are designed for and what they are not good at; it is a personal decision. Many keep documents and photos separately, or have them duplicated.

Surely, you jest.

Quote: Originally Posted by hdkhang

8. Expand on why this is important in a home environment. I have a UPS on every one of my computers. How often are you pulling power to drives while the machine is running?

No, but you could. Obviously this is the extreme scenario, but a cable can come loose, or vibration in the case can make the heads go crazy; you never know. Not everyone has the foresight to use a UPS. And what about hardware faults? A bad power supply?

Quote: Originally Posted by hdkhang

9. Compression is useless for already-compressed data (music, movies, and photos), so this point is useless for home media servers. For documents and the like, most people don't have terabytes upon terabytes of the stuff, so the small savings don't matter.

If you only use your file server to store media, then you are limited in your vision.

Quote: Originally Posted by hdkhang

10. Not relevant today for home server usage.

What was the average hard drive size 10 years ago? What will it be 10 years from now? Data accumulation is not getting stagnant or smaller. It's been estimated that ZFS will last us 30 years before it is outdated. This is not just an enterprise problem; the very fact that you need a file server in the first place should show you that this is relevant.

Quote: Originally Posted by hdkhang

11. WHS handles that dedup stuff.

Only for backups, not for user data; and I don't think that feature survived into WHS 2011. ZFS can dedupe all data at the block level.

Quote: Originally Posted by hdkhang

12. Plenty of backup software is available; with CrashPlan, for instance, backup is automatic, so I don't think this is an important point of distinction for the intended usage.

The point is, this is baked into ZFS. There's no need to seek out and validate a third-party tool.

Quote: Originally Posted by hdkhang

13. NTFS volumes can encrypt data.

I never said it couldn't. ZFS is a filesystem as well as a pooling/RAID system, so this is another anti-con for ZFS, and it is very easy to set up.

Quote: Originally Posted by hdkhang

14. Can't argue with free... but you do realise that SnapRAID on Linux is also free.

Sure, but SnapRAID doesn't do pooling, so you have to use LVM or Greyhole. ZFS takes all of these features and presents them to the user as one unified toolset, rather than making you piece it all together.

Quote: Originally Posted by hdkhang

Despite this, many people are happy to pay for a single point of support; not everyone is willing or able to put in the time and effort to troubleshoot themselves. Also, it is only free insofar as you don't require functionality from Windows. If you need to virtualize Windows for functionality (which many people still do), then it is not free anymore.

I don't get your point here. Who says you need to run Windows to run a file server? If you need a virtualization platform, VMware ESXi is free... Citrix Xen is free... KVM on Linux is free (Proxmox is awesome, BTW)... even the GUI-less Hyper-V Server 2008 is free.

And as for support, anyone can read or ask questions in the forums of their favorite product. As a one-man show, Brahim is very eager to use that community for end-user support... and I don't blame him.
post #26 of 3787 | 11-08-2012, 11:14 PM | duff99

Quote: Originally Posted by Tong Chia

The OS that ZFS runs on, such as FreeBSD or Solaris, should be able to provide the filer services you mentioned. I use virtualized Win7 for some of the Windows-specific services. [snip]

I understand WHS isn't necessary. Part of the reason I want to do it this way is the challenge: learning some new things, just to see if I can make it work. The other reason is that I like WHS. I like the centralized backup primarily, but there are some other things too. I'm still running the original WHS, since it's a hassle to upgrade and I would miss the drive pooling. I figure this way I would have essentially the drive pooling, but with the fast, safe back end of ZFS.

So basically, I understand it's not necessary. Right now my needs are met: WHS is backing up my computers, and unRAID is storing my media. I don't need to do anything; I just want to play with some new things. Maybe I'm the only one with these particular needs, but I like the idea of the features of WHS with the backing of ZFS.
post #27 of 3787 | 11-08-2012, 11:53 PM | Tong Chia

Quote: Originally Posted by duff99

I understand WHS isn't necessary. Part of the reason I want to do it this way is the challenge: learning some new things, just to see if I can make it work. [snip]

No problem; a learning exercise is a good reason.

For starters you need a motherboard with 4 or more SATA connectors. For this to work well you want a quad-core and at least 8GB of RAM, though a dual-core with Hyper-Threading will also work. The Intel NICs are the best supported, and recent NVIDIA or Radeon cards are well supported by the Solaris X server. An HBA is optional until you get something running. You can try this once you have at least 3 disks for RAID-Z; as an experiment, leftovers from a past upgrade will do.

The most interesting route is a bare-metal hypervisor like VMware vSphere/ESXi 5.1; the hypervisor is the freebie. Your motherboard and CPU must support VT-x, and preferably VT-d; check your BIOS for this. Create two VMs, one WHS and the other Solaris 11. Solaris 11 gets you ZFS, NFS, and iSCSI; I would use SMB/CIFS for starters, as it is the easiest to set up. You can download the VM images from Oracle if you want to skip the installation hassle.

Create your ZFS pool and enable NFS, iSCSI, and CIFS. The napp-it tool is probably the easiest way to get this going.
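
The pool creation itself is short; a minimal sketch assuming three whole disks with Solaris-style device names (yours will differ):

Code:
zpool create tank raidz c7t0d0 c7t1d0 c7t2d0   # single-parity RAID-Z across 3 disks
zfs create tank/media                          # a dataset inside the pool
zfs set sharesmb=on tank/media                 # share it via the Solaris kernel CIFS server

napp-it drives these same commands from a web GUI.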

An alternative to Oracle is Nexenta, based on the OpenIndiana fork of Solaris; it has a Debian user environment.
post #28 of 3787 | 11-09-2012, 12:13 AM | Nevcairiel

While we're all advocating ZFS here, one reason for me to use FlexRAID was the ability to very simply join my existing 2TB drives and my new 3TB drives into one volume without any problems whatsoever (as long as my parity is on the 3TB drives).

Can ZFS deal properly with drives of varying sizes, without losing any capacity?

Not to mention that I actually need to run a Windows server for some other services on the thing; virtualizing it just to run a Linux/BSD/Solaris VM on the side for ZFS seems rather excessive. I just don't see the big advantages. Like many others, I primarily store media files on my home server, so even if one bit gets flipped somewhere, it won't even stop the movie from playing. Sure, technically it's corrupted now, but I'm not sure all the extra effort of running ZFS alongside Windows would be worth it.

Theoretically perfect solutions are great and all, but at the end of the day we all have to tailor our setups to our use cases.
post #29 of 3787 | 11-09-2012, 12:21 AM | Tong Chia

Quote: Originally Posted by Nevcairiel

While we're all advocating ZFS here, one reason for me to use FlexRAID was the ability to very simply join my existing 2TB drives and my new 3TB drives into one volume without any problems whatsoever (as long as my parity is on the 3TB drives). Can ZFS deal properly with drives of varying sizes, without losing any capacity?

Nope, the ZFS pool defaults to the smallest storage unit, so in that mix you would get only 2TB per drive. It is terrible as a logical volume manager.
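
Concretely, rough numbers for a hypothetical mixed vdev:

Code:
raidz1 of 2TB + 2TB + 3TB: every member is treated as 2TB
usable ~= (3 - 1) x 2TB = 4TB; the extra 1TB on the 3TB drive is wasted

Snapshot-parity schemes keep each data drive's full capacity, provided the parity drive is at least as large as the biggest data drive.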
post #30 of 3787 | 11-09-2012, 12:54 AM | Tong Chia

My primary interest in ZFS is the focus on data integrity. I use cheap consumer drives for my home server; these things have a media error rate of one bit in 1E14, which means an unrecoverable read error roughly every 12.5TB read. Taking disk-layout overhead into account, that reduces to about 10TB.

Put another way, there is a 10% chance of an unrecoverable read error for every 1TB of data read, and that is total data read, not per drive. (BTW, this is the reason the heroic recovery reads in consumer drives cause them to fall out of an array.)
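
The arithmetic behind those numbers, for anyone checking:

Code:
URE spec: 1 error per 1e14 bits read
1e14 bits / 8 = 1.25e13 bytes ~= 12.5TB read between errors (spec sheet)
~10TB after layout overhead -> reading 1TB ~= 1/10 = 10% chance of a URE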

I look for a filesystem that can handle this level of errors, with error recovery better than NTFS or ext3.

ZFS is one such filesystem; there are a lot of research papers on how it does various things, and up until a year ago it was open source, so you could actually see what it did.