NAS - Build or buy - AVS Forum | Home Theater Discussions And Reviews
post #1 of 51 Old 05-03-2017, 11:56 AM - Thread Starter
perpetual98
NAS - Build or buy

My main HTPC is really starting to become an octopus, with various drives jammed in the case and an unwieldy number of external USB drives, and I'm starting to run out of space. I think right now (I'm at work, going from memory) for storage I have two 4TB drives and a 1TB drive for recorded TV internal, plus two 2TB drives in one of those USB toaster docks and I think another 2TB via USB as well. I'd like to start cleaning up the closet where everything resides, and I've been pondering using some old PCs I have lying around to get all of the hard drives into their own case.

I've been eyeballing NAS units on Amazon like Drobo and Synology, etc., but I was wondering if I could accomplish something similar with hardware I already own. I know I have a low-power E350 PC that I could spare, some AMD mobo/RAM combination that's collecting dust (M4A78-Pro if memory serves), and at least one or two older Pentium units (D945).

Are those older scrap PCs useful for this sort of thing, or would I be better off ponying up for some dedicated hardware? I'd rather use that money to get some more spinners for storage.

I'd LOVE to have some backups of all my stuff beyond the copies of most (not all) of my rips that my ex-wife has. If I had to re-rip everything I've done, I might just call no-joy and stream things, because I've been ripping for over a decade.
post #2 of 51 Old 05-03-2017, 05:39 PM
smitbret
Quote:
Originally Posted by perpetual98 View Post

www.lime-technology.com

You literally just described unRAID
post #3 of 51 Old 05-04-2017, 08:07 PM - Thread Starter
perpetual98
After doing some reading on unRAID I'm going to play with it with some old hardware I have and see how it goes.

What happens if I have my current hard drives formatted NTFS? My limited experience with Linux would lead me to believe that it's not going to recognize them, but perhaps my limited experience is just that, too limited for details like that.
post #4 of 51 Old 05-05-2017, 03:35 AM
bryansj
Quote:
Originally Posted by perpetual98 View Post
You'll reformat them as part of adding them to unRAID. If you need to keep the data that's on the drives, you'll need a different solution (or more new drives).

post #5 of 51 Old 05-05-2017, 04:57 AM
wmcclain
Quote:
Originally Posted by perpetual98 View Post
Linux supports NTFS but UnRAID uses a limited set of file systems, so you would have to reformat and reload your drives.

For a comparison of various "non-traditional RAID" schemes, see this SnapRAID table: http://www.snapraid.it/compare

Build vs buy: build.

-Bill

Review older films here: 1979 and earlier | 1980s | 1990s | Combined reviews: Strange Picture Scroll
Unofficial OPPO FAQS: UDP-203 | BDP-103 | BDP-93 | BDP-83 | BDP-80    
post #6 of 51 Old 05-05-2017, 08:47 AM
mhhd
post #7 of 51 Old 05-05-2017, 10:04 AM - Thread Starter
perpetual98
Got unRAID downloaded and installed onto a flash drive. The initial motherboard that I wanted to use wouldn't POST, so after playing around a bit with it I gave up and went with another computer. It's a 3.0GHz Core 2 Duo with 4GB of RAM. The motherboard only has 5 SATA ports while the one I wanted to use had 6 SATA III ports, but for playing around with, I guess it doesn't matter much. I might have to move everything to another case though, as the current one is small; I might want to get a hot-swap bay for the 5.25" bays, and this case only has two of them.

Dumped a 1TB hard drive in as a parity drive for playing around, but it's one of those that's throwing all sorts of SMART errors. The other drive is a tiny 80GB drive that I had, but while I learn the system it should be fine; I don't want to ruin any of my live data. So far I can't believe how snappy unRAID runs off of a USB stick. Pleasantly surprised. It's building the parity as I type this and is 6% done, and I'm guessing it won't be done before I go home from work. I still need to do some figuring out what all this stuff means.

My head tells me that I should have all drives be the same size, and I'm thinking that I should get 2 more 4TB drives so I have a total of 5 of them, which would leave me roughly 16TB of storage plus parity? Still doesn't make sense to me, but maybe the 2TB drives get relegated to other duties. If I had five 4TB drives and one took a dump I shouldn't really lose anything, unless more than one died at the same time?
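For what it's worth, the capacity math above checks out under unRAID-style single parity: the parity drive must be at least as large as the largest data drive and contributes no usable space, so five 4TB drives with one as parity leave 16TB. A quick sketch:

```python
# Hypothetical pool: five 4 TB drives, one of them dedicated to parity.
drives_tb = [4, 4, 4, 4, 4]
parity_tb = max(drives_tb)              # parity must match the largest drive
usable_tb = sum(drives_tb) - parity_tb  # only the data drives count
print(f"{usable_tb} TB usable")         # 16 TB usable
```

And yes: with single parity, any one failed drive can be rebuilt; losing two at once loses data on both.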
post #8 of 51 Old 05-05-2017, 10:24 AM - Thread Starter
perpetual98
I might also play around with SnapRAID as it looks intriguing too, and I wouldn't need to immediately get new disks just to copy my current ones to it. Can it run off of a flash drive or would I need to dedicate a SATA (or IDE port I suppose) to the OS?
post #9 of 51 Old 05-05-2017, 11:19 AM
wmcclain
Quote:
Originally Posted by perpetual98 View Post
SnapRAID is just a small application program, not a complete OS. You need to run it under Linux or Windows, etc.; however you run that OS, SnapRAID doesn't care.

-Bill

post #10 of 51 Old 05-05-2017, 11:24 AM - Thread Starter
perpetual98
Yeah, I understand that, I just wasn't too clear with my question.

Here's another noob question. Once my data (movies/pictures/music) is put into one of these arrays, is it pretty much stuck there? If I'm using unRAID and for some reason (let's say the motherboard died) I just wanted my data back on a Windows machine, am I SOL? If I have to run something in a Windows environment anyway, I might as well just run it on the HTPC and not bother with a NAS.

Sorry for the dumb questions.
post #11 of 51 Old 05-05-2017, 11:28 AM
bryansj
SnapRAID isn't destructive to your current data. You just need to dedicate a drive for parity the same as unRAID. You can try out SnapRAID free and not have to format any drives.

Any particular reason you want a parity solution? IMO it is sort of a hassle with little reward.

post #12 of 51 Old 05-05-2017, 11:32 AM
wmcclain
Quote:
Originally Posted by perpetual98 View Post
snapRAID (and I think this is true of unRAID and flexRAID, but not the others in the table; snapRAID is what I use, so take that into account) never modifies your files in any way, with the rare exception of the case where file damage has been detected and you need to correct it.

You use whatever directory layout you want, run snapRAID as long as it is useful. You want to quit: just stop using it. Your files, directories, file systems are just as they were. You can pull out the drives, take them to another computer and forget snapRAID was ever in use.

It keeps its own parity and content (i.e., checksum) files but does not modify the originals.

-Bill
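To make that concrete, a minimal snapraid.conf looks something like this (the paths here are made-up examples; see the SnapRAID manual for the full syntax):

```
# One parity drive, two data drives. SnapRAID only ever writes to the
# parity and content files listed here, never to the data files themselves.
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

After that, `snapraid sync` updates parity and `snapraid fix` recovers a lost disk or damaged file; quitting is just deleting the parity and content files.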

post #13 of 51 Old 05-05-2017, 11:35 AM - Thread Starter
perpetual98
Thanks for the info, guys. I will do some more playing around with it. I wonder if I can use Linux to run the OS/SnapRAID but keep my NTFS drives, or is that just inviting more problems than it's worth? I'd have to scare up a Windows license, I think, if I was going to go with something like Windows 7 on the current computer I'm testing with.
post #14 of 51 Old 05-05-2017, 12:39 PM
wmcclain
Quote:
Originally Posted by perpetual98 View Post
Check the snapRAID forum: https://sourceforge.net/p/snapraid/discussion/1677233/. I think you'll find other people doing that.

The FAQ says no problem for NTFS data discs, but has this note for parity on NTFS: "OK in Windows. No in Linux, as it doesn't support fallocate() command needed to allocated the parity files. You can anyway use it if you limit the use of the parity disk to contain only the parity file."

-Bill
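For the curious, the fallocate() behavior that FAQ note refers to can be demonstrated from Python (a sketch only; SnapRAID itself is a C program, this just shows what preallocating a parity-sized file looks like on a filesystem that supports it):

```python
import os
import tempfile

# Preallocate a "parity-like" file to its full size up front.
# This is the step that historically failed for parity files on
# NTFS mounted under Linux, per the FAQ note quoted above.
fd, path = tempfile.mkstemp()
try:
    os.posix_fallocate(fd, 0, 1 << 20)      # reserve 1 MiB of real disk space
    assert os.fstat(fd).st_size == 1 << 20  # file is full size immediately
finally:
    os.close(fd)
    os.remove(path)
```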

post #15 of 51 Old 05-05-2017, 01:25 PM
bryansj
Again I have to ask why you want a parity solution. RAID is not backup. It may seem like you are backing up your files, but it's not like you have a true extra copy. If your data gets corrupted then the parity will only restore the corrupted version.

You must also have the parity drive be equal to or larger than your largest drive in the array.

Not that having a parity solution is bad, it just doesn't seem to make sense if you were just wanting network attached storage.

To me the best thing you could do now is run StableBit DriveScanner and be alerted to possible drive failure so you can take action (it can email you if there is a problem). If you run StableBit DrivePool with it, it can automatically start relocating data from a failing drive onto other good drives in the pool.

post #16 of 51 Old 05-05-2017, 01:36 PM
wmcclain
Quote:
Originally Posted by bryansj View Post
snapRAID keeps hashes to verify file integrity at the block level. I perform a weekly scrub of 12% of my array, meaning the whole array is checked every 2 months. Each file is read and compared with its saved hash. If a silent error due to disc failure is detected, the file can be recovered.

-Bill
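The scrub idea can be sketched in a few lines of Python (illustration only; SnapRAID's real hashing and on-disk format are its own):

```python
import hashlib

# Keep a saved hash per file; a later "scrub" re-reads each file and
# flags any whose contents no longer match -- i.e., silent corruption.
def file_hash(path):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def scrub(saved_hashes):
    """Return the paths whose current hash differs from the saved one."""
    return [p for p, saved in saved_hashes.items() if file_hash(p) != saved]
```

A real scrub would also rotate through only a fraction of the array per pass, like the 12%-per-week schedule described above.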

post #17 of 51 Old 05-05-2017, 01:36 PM - Thread Starter
perpetual98
I'm basically looking to avoid data loss in the event of hard drive failure. I've had enough hard drives fail in my experience (I spent a lot of time in IT at work) that it would be nice to minimize the damage. Sure, I could buy another 4TB drive and just mirror it, which is always an option. I guess I was leaning towards a NAS because I could pool drives and increase storage that way while also trying to minimize data loss. If I have a full 4TB drive mirrored, I now have two full 4TB drives, but I do at least have a backup in the event that one goes belly up. Another thing (if my case allows) is that I could get rid of the USB stuff and get all of my drives into a case with fans.

Another option is just to buy one of those cages where I can mount the hard drives in the 5.25" bays and that would allow me to get the drives at least physically cleaned up and cooled down and get some clutter out of my closet where everything resides now.

I'm still researching things and who knows which route I end up going, but I appreciate the info.
post #18 of 51 Old 05-05-2017, 01:40 PM
bryansj
Quote:
Originally Posted by wmcclain View Post
Yes, I've run SnapRAID myself as well as FlexRAID. They still are not backup solutions. It can be better than nothing.

post #19 of 51 Old 05-05-2017, 01:42 PM
wmcclain
Quote:
Originally Posted by perpetual98 View Post

Another option is just to buy one of those cages where I can mount the hard drives in the 5.25" bays and that would allow me to get the drives at least physically cleaned up and cooled down and get some clutter out of my closet where everything resides now.
I use those. Nothing like swappable hard drives. My server build was several years ago, but you might find the notes useful: http://watershade.net/wmcclain/curator.html

-Bill

post #20 of 51 Old 05-05-2017, 01:45 PM - Thread Starter
perpetual98
I was eyeballing this guy
post #21 of 51 Old 05-05-2017, 01:51 PM
wmcclain
When shopping for server hardware I always look at the UnRAID forums: https://forums.lime-technology.com/forum/9-hardware/

I don't use that software but those guys test a lot of hardware.

-Bill

post #22 of 51 Old 05-06-2017, 09:38 AM
Defcon
A full backup is always desirable, but not everyone can do that. Parity is far preferable to the traditional solutions like RAID/ZFS, which are inflexible and complicated. Even if you have backups you will not be taking them in real time, and parity can help in the event of a failure at any time.
post #23 of 51 Old 05-06-2017, 11:48 AM
 
ajhieb
Quote:
Originally Posted by Defcon View Post
Most of the solutions aren't calculating parity in real-time either, so that point is pretty much moot.

I'd also suggest that RAID isn't particularly complicated, at least from an end user standpoint. The real drawback is usually hardware expense, and the caveat that all drives (in the same array) need to be the same size. (I assume that's what you meant by inflexible)

That said, setting up my RAID system was trivially easy, and it's zero maintenance on my part. It checks drive health in the background, and parity calculations are done in real time. Drive pooling isn't necessary as it is inherent to RAID. It certainly isn't for everybody, but I think I paid about $250 total for my RAID controller and 16-bay hotswap chassis (system pulls) and it has suited my needs rather well. I currently have a 4x4TB RAID 5 array, a 6x6TB RAID 5 array, and 6 empty bays. When the price comes down on 8 or 10TB drives, I'll add another 6-drive array, and I'll be set for several more years.

Sorry for the tangent. I just have to take up for my RAID setup once in a while.

That said, the stuff on my 4x4TB array is my "important" stuff that I also have backed up on burnt media as well as cloud storage. My 6x6TB array is all media that I've ripped, so I consider the data there to be the backup of my original media. The parity in this case is just to weigh the risk of losing time re-ripping my media against the probability of a drive failure. I think that's true of most people (here) using any solution involving parity. (Maintaining uptime is also a benefit of real-time parity, but not really a big selling point in this arena.) If you asked people point blank if they would pay an extra $100, $150, $200 (whatever the price is for a parity drive) as insurance against ripping a bunch of their media again, it's a no-brainer.
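The single-parity "insurance" being discussed boils down to XOR, which is easy to demo (a toy example; real RAID 5 and unRAID operate on fixed-size on-disk blocks):

```python
from functools import reduce

def xor_blocks(a, b):
    # Byte-wise XOR of two equal-length blocks.
    return bytes(x ^ y for x, y in zip(a, b))

# The parity block is the XOR of every data block...
data = [b"disk 1 data", b"disk 2 data", b"disk 3 data"]
parity = reduce(xor_blocks, data)

# ...so when ONE drive dies, XOR-ing parity with the survivors
# reproduces the lost block exactly.
rebuilt = reduce(xor_blocks, [data[0], data[2]], parity)
assert rebuilt == data[1]
```

This is also why losing two drives at once defeats single parity: with two unknowns, one XOR equation isn't enough to recover either.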

Anyway, not all of that was directed at you, @Defcon; I know you know most of that, but I'm just clarifying for anyone reading.

PS. The only reason that I haven't moved from traditional RAID to unRAID is the lack of additional multiple parity drives and/or the inability to create multiple single-parity arrays. Either one would eliminate my need for the other, but I need at least one to feel comfortable.
post #24 of 51 Old 05-06-2017, 03:42 PM
smitbret
Quote:
Originally Posted by ajhieb View Post
Quote:
Originally Posted by Defcon View Post
I have a feeling this is going to be a long post, but it seems that the OP is following a lot of the same path in the evolution of his media server that I went on. So, using my media server as an example, let me throw my $.02 in the ring.

I started out with unRAID a few years ago, building it out of old parts I had lying around in my garage. It ran fine on my old AMD Thunderbird and I had even read of people using single core Semprons with great success. However, because unRAID is its own self-contained environment it just became a storage box for my media and I had to use my desktop PC to run my media server software (Mezzmo) since it was Windows based and I needed a CPU with a little more power to transcode for video streaming. That meant I had to leave two PCs on if I wanted to stream my media. I also hated that I had to start with empty drives and couldn't just import a drive into the array.

So, I eventually got an AMD FX-6100 CPU and ran WHS 2011 on it. It ran Mezzmo fine, and I installed FlexRAID running single parity (RAID-5 style). FlexRAID has a decent (albeit not really intuitive) GUI and it worked fairly well, but I got really slow transfer speeds to and from the FlexRAID array; fast enough to watch or listen to the media, but reading and writing the file structure was much slower than I had experienced even with unRAID. At some point, I dumped WHS 2011 and just migrated my FlexRAID setup to Win 7 Pro.

Everything ran fine for a few years except for a few minor glitches with FlexRAID here and there. FlexRAID seems to have had globally poor support, and I can say that my experience was pretty similar. It seems to be a one-man show (a guy named Brahim) who appeared greatly overwhelmed by the project, with a novel way of talking over your head and then getting annoyed if you asked for clarification or followed up on issues. If you are familiar with Saturday Night Live, think 'Nick Burns, the Company's Computer Guy'.

Over time I added more and more drives and an additional SATA card or two, and decided to go dual parity; finally, when I upgraded the CPU to an AMD 8350, I was unable to register FlexRAID. After a couple of emails to FlexRAID they told me that I had upgraded too much and was now essentially using FlexRAID on a different computer than had originally been licensed. Long story short, I ended up having to repurchase FlexRAID, and it left a bad taste in my mouth. I continued to deal with FlexRAID's quirks for another year or so until I finally got fed up a month ago. I had so many issues with parity updates, validations, etc. that I finally tore the array down, moved it to SnapRAID, and continued to use only FlexRAID's pooling features, but it never felt like a good match. I can't really put my finger on it, but it was obvious that FlexRAID was never meant to be pooling-only software.

Now, I have uninstalled FlexRAID and have been running SnapRAID with StableBit DrivePool for about 3 weeks. SnapRAID w/Elucidate has been so transparent that I am almost unsure that it is actually working. There were a couple of occasions where I had to reinstall FlexRAID and the whole initialization of the database would have my media server down for an entire day or more. With SnapRAID and DrivePool that is not a problem. I can create the Pool and have it available for sharing within minutes. SnapRAID can then build the array in the background while DrivePool keeps everything up and online.

Which takes us to "RAID is not backup".

True, but I don't need a full backup. I keep my important documents, photos and music on my server, but my BD and DVD rips take up 90% of the space. I can and do back up everything but my video files to a 3TB HDD as well as automatic cloud storage (SpiderOak). If I had a catastrophic failure, the important stuff is safe and I can always re-rip everything. In reality, I would never re-rip everything, since that would probably take months on end, but I guess the discs in boxes in the garage could be considered the backups. In this situation, dual parity is plenty of protection.

With my new setup of SnapRAID/DrivePool, I am considering just getting the Stablebit Suite because the DriveScanner is a pretty nice piece of software and the CloudDrive looks pretty slick. It encrypts data automatically and then uploads it to the cloud storage of my choice. This pretty much nullifies my desire for SpiderOak's encrypted storage. I just haven't settled on a Cloud Service, yet.

I had also tried FreeNAS (ZFS RAID) way back when I was trying unRAID. It was nice and sturdy, but I couldn't import drives with existing data, and every drive had to be the same size or else you lose storage space. Additionally, the hardware requirements for ZFS tended to be a little less budget-friendly. Expanding the array with unRAID, FlexRAID or SnapRAID is also a snap, versus having to break down your array and rebuild, or create a new array and import it into a pool, as you would have to do with ZFS or hardware RAID.
post #25 of 51 Old 05-06-2017, 05:48 PM
Defcon
W.r.t. RAID, my opinion is that:

1. HW RAID is not relevant anymore now that CPUs are fast enough to do the calculations. Being tied to a $$$ controller card is a recipe for failure. This isn't even new; most motherboards have included some kind of SW RAID for over a decade now, and all OSes have it too. There's a reason ZFS is so well regarded.

2. RAID was designed for high availability enterprise applications and that's where it belongs. It became popular with enthusiasts because it was the only way to get high performance, it was a fancy thing to play with, and it was the only way to make a pool.

3. The dangers and limitations of RAID are not worth it to me: not being able to combine disks of multiple sizes, all disks active at all times, the RAID reconstruct hole, data not being in its native format; it's just endless. No one needs the throughput for a media server.

My media server journey has been very similar: started with software RAID (using Intel/Windows). WHS was a gift from the gods; it was the perfect solution, actually marketed towards non-techie users, but unfortunately MS abandoned it after some disastrous decisions. FlexRAID was the next holy grail, but it's a single developer and it has had issues. SnapRAID came out soon after. I was also aware of unRAID, but at the time it was crippled by slow speeds and I was unsure about moving to Linux.

DrivePool + Scanner is a fantastic combo that in many ways can't be beat: multiple duplication levels controllable per folder, a really advanced scanner that to my knowledge no one else has (it consults a database specific to each drive model), predictive failure avoidance, and it's dirt cheap. I built a Windows 2012 R2 and then a 2016 Essentials test server. But file placement in DrivePool is very unpredictable; it can spread files in the same folder over multiple disks, which is not what I want. I did use SnapRAID (it's wonderful), but you can't use drive placement rules with it.

I also wanted to use Docker containers for the apps I use, and was running Ubuntu VMs under Hyper-V. It's a lot of manual work. I took another look at unRAID: all of this is dead simple there, and it does a lot more now, e.g. I can run a Windows VM for anything that's missing.

SnapRAID does read-only pools using symlinks. AFAIK, there is no write-pool solution that doesn't stripe data other than FlexRAID (which should be considered defunct), DrivePool, or unRAID. There was Amahi/Greyhole, which is abandoned.
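The read-only symlink pool idea is simple enough to sketch. This is a toy version of what SnapRAID's `pool` command produces, with hypothetical paths: it just mirrors each drive's directory tree into one directory of symlinks.

```python
# Toy read-only pool: merge several drives into one browsable view made
# entirely of symlinks. Hypothetical paths; not SnapRAID's actual code.
import os

def build_pool(drive_roots, pool_root):
    """Mirror each drive's directory tree under pool_root as symlinks,
    so all drives appear as a single merged, read-only view."""
    for root in drive_roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            rel = os.path.relpath(dirpath, root)
            target_dir = os.path.join(pool_root, rel)
            os.makedirs(target_dir, exist_ok=True)
            for name in filenames:
                link = os.path.join(target_dir, name)
                if not os.path.lexists(link):
                    os.symlink(os.path.join(dirpath, name), link)
```

Writes still have to land on one specific drive, which is exactly why a separate write-pool layer (DrivePool, unRAID's fuse filesystem) is needed on top.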

And this is why commercial NAS units like QNAP/Synology continue to sell at a 5x markup on hardware: all this stuff is still too complex for the average user. IMO the simplest right now is unRAID. I actually have >40TB of data, all in NTFS, and migrating that to XFS for unRAID is going to be a major pain.
oneeyeblind likes this.
Defcon is offline  
post #26 of 51 Old 05-06-2017, 06:24 PM
 
ajhieb's Avatar
 
Join Date: Jul 2009
Location: The Commonwealth, not the Jelly.
Posts: 2,696
Mentioned: 15 Post(s)
Tagged: 0 Thread(s)
Quoted: 1037 Post(s)
Liked: 698
Quote:
Originally Posted by Defcon View Post
W.r.t. RAID, my opinion is that:

1. Hardware RAID is not relevant anymore now that CPUs are fast enough to do the parity calculations. Being tied to a $$$ controller card is a recipe for failure. This isn't even new; most motherboards have included some kind of software RAID for over a decade now, and all OSes have it too. There's a reason ZFS is so well regarded.

2. RAID was designed for high-availability enterprise applications, and that's where it belongs. It became popular with enthusiasts because it was the only way to get high performance, it was a fancy thing to play with, and it was the only way to make a pool.

3. The dangers and limitations of RAID are not worth it to me: not being able to combine disks of different sizes, all disks active at all times, the RAID reconstruct hole, data not stored in its native format... it's just endless. No one needs that throughput for a media server.
Fair points. And to be clear, I wasn't trying to push hardware RAID as a perfect fit for media storage, just pointing out that it isn't difficult at all to set one up, and it's way lower maintenance than a lot of the software parity solutions out there (at least from what I've gathered reading all the threads here about parity updates, checks, scans, and other things that need to be scheduled and monitored). That said, I was already building a server to host a bunch of VMs I've been toying with, so it was a better fit for me, but I recognize I'm not a typical HT enthusiast. Plus, coming from an IT background, I've had loads of experience with RAID anyway.

I do think "recipe for failure" is a bit of a stretch, though. I've been running RAID on my home systems in various configurations since I bought my first Mylex DAC960 before the turn of the millennium. The only data loss I've ever experienced was with a cheap firmware RAID controller (not a $$$ enterprise controller) in combination with the 3TB Seagates that are currently the subject of a class-action suit over their unusually high failure rate. It's hard to pin that failure on RAID when literally all 12 drives I owned died within a short span; no parity solution was ever going to save that data.

I suppose I can see the matching-drive limitation being a downside, but it's never really affected me much. As I retire drives that are too old or too small, I buy new ones. I usually buy several at once, so getting drives that match hasn't been a problem, and since my arrays are relatively small in terms of number of drives, I can have different sized drives as long as the ones within each array match. As noted, I'm using 4TB and 6TB drives now, so it's not like you're completely stuck with whatever drive you first chose. The biggest drawback is that instead of buying a 3TB now, a 5TB a month from now, and an 8TB next year, when I want to expand I just buy three or more of whatever drive has the best TB/$ and off I go.
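The capacity math behind that matching-drive tradeoff is easy to check. A quick sketch (sizes in TB, made up for illustration) comparing a striped-parity array, which is limited by its smallest member, with a snapshot-parity layout in the SnapRAID/unRAID style, which uses each data drive in full:

```python
# Toy capacity comparison; not tied to any product's actual on-disk format.

def raid5_usable(sizes_tb):
    """Striped parity (RAID-5 style): every member contributes only
    min(sizes), and one drive's worth of space goes to parity."""
    return (len(sizes_tb) - 1) * min(sizes_tb)

def snapshot_parity_usable(sizes_tb):
    """Snapshot parity (SnapRAID/unRAID style): the largest drive is
    dedicated to parity; the remaining drives hold data in full."""
    drives = sorted(sizes_tb)
    return sum(drives[:-1])

mixed = [3, 4, 6, 8]
print(raid5_usable(mixed))            # -> 9
print(snapshot_parity_usable(mixed))  # -> 13
```

With matched drives the two come out the same; the gap only appears when sizes are mixed, which is why buying matching batches makes the limitation a non-issue.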

Quote:
And this is why commercial NAS units like QNAP/Synology continue to sell at a 5x markup on hardware: all this stuff is still too complex for the average user. IMO the simplest right now is unRAID. I actually have >40TB of data, all in NTFS, and migrating that to XFS for unRAID is going to be a major pain.
And don't forget, some of those high-dollar commercial NAS units are using hardware RAID too (though it seems Linux software RAID has dominated that market of late).

The one thing I can say for sure, based on my dealings with the owner and the way he's treated forum members here, is that I will never use or recommend FlexRAID under any circumstances.
ajhieb is offline  
post #27 of 51 Old 05-06-2017, 06:56 PM
AVS Forum Special Member
 
Defcon's Avatar
 
Join Date: Sep 2001
Posts: 2,895
Mentioned: 12 Post(s)
Tagged: 0 Thread(s)
Quoted: 1529 Post(s)
Liked: 856
Quote:
Originally Posted by ajhieb View Post
Fair points. And to be clear, I wasn't trying to push hardware RAID as a perfect fit for media storage, just pointing out that it isn't difficult at all to set one up, and it's way lower maintenance than a lot of the software parity solutions out there (at least from what I've gathered reading all the threads here about parity updates, checks, scans, and other things that need to be scheduled and monitored). That said, I was already building a server to host a bunch of VMs I've been toying with, so it was a better fit for me, but I recognize I'm not a typical HT enthusiast. Plus, coming from an IT background, I've had loads of experience with RAID anyway.

I do think "recipe for failure" is a bit of a stretch, though. I've been running RAID on my home systems in various configurations since I bought my first Mylex DAC960 before the turn of the millennium. The only data loss I've ever experienced was with a cheap firmware RAID controller (not a $$$ enterprise controller) in combination with the 3TB Seagates that are currently the subject of a class-action suit over their unusually high failure rate. It's hard to pin that failure on RAID when literally all 12 drives I owned died within a short span; no parity solution was ever going to save that data.

I suppose I can see the matching-drive limitation being a downside, but it's never really affected me much. As I retire drives that are too old or too small, I buy new ones. I usually buy several at once, so getting drives that match hasn't been a problem, and since my arrays are relatively small in terms of number of drives, I can have different sized drives as long as the ones within each array match. As noted, I'm using 4TB and 6TB drives now, so it's not like you're completely stuck with whatever drive you first chose. The biggest drawback is that instead of buying a 3TB now, a 5TB a month from now, and an 8TB next year, when I want to expand I just buy three or more of whatever drive has the best TB/$ and off I go.



And don't forget, some of those high-dollar commercial NAS units are using hardware RAID too (though it seems Linux software RAID has dominated that market of late).

The one thing I can say for sure, based on my dealings with the owner and the way he's treated forum members here, is that I will never use or recommend FlexRAID under any circumstances.
I don't know for sure, but my guess is that since QNAP and Synology have a specialized storage engine that is RAID-like but does allow mismatched drive sizes, it's done in software.

If you remove the need for the high throughput you get from striping reads/writes, I can think of no other justification for RAID. Parity doesn't require striping.

FlexRAID is actually the best designed of all these solutions: it has multiple protocols, including real-time parity, and a lot of neat ideas. It was also completely free, with a GUI and so on, to begin with. It's not easy for one person to do all that; it could easily have become the standard with more funding and different choices.
Defcon is offline  
post #28 of 51 Old 05-06-2017, 07:49 PM
 
ajhieb's Avatar
 
Join Date: Jul 2009
Location: The Commonwealth, not the Jelly.
Posts: 2,696
Mentioned: 15 Post(s)
Tagged: 0 Thread(s)
Quoted: 1037 Post(s)
Liked: 698
Quote:
Originally Posted by Defcon View Post
I don't know for sure, but my guess is that since QNAP and Synology have a specialized storage engine that is RAID-like but does allow mismatched drive sizes, it's done in software.

If you remove the need for the high throughput you get from striping reads/writes, I can think of no other justification for RAID. Parity doesn't require striping.

FlexRAID is actually the best designed of all these solutions: it has multiple protocols, including real-time parity, and a lot of neat ideas. It was also completely free, with a GUI and so on, to begin with. It's not easy for one person to do all that; it could easily have become the standard with more funding and different choices.
I have it from reliable sources that Brahim hasn't been the one doing the development work on FlexRAID in quite some time. He was the "idea guy" and farmed out most or all of the coding in the last few years. And while I agree it's not easy to do all that alone, it doesn't excuse his condescending attitude, boasting, belittling of competing products (and the people who developed them), and generally terrible demeanor. He also quite literally didn't understand the difference between a logical drive and a physical drive, which should be a pretty basic concept for someone writing software to protect one of those things. With the number of people here who have had problems with FlexRAID/Brahim, and the number who are successfully using unRAID or SnapRAID (and/or StableBit DrivePool), I can't see a reason to steer anybody in the direction of FlexRAID.

As far as OEM NAS units go, the line is somewhat blurred, but I think several of them are more RAID-like than not. You're still generally tied to their hardware, you're getting real-time parity in pretty much every instance, and the data is usually striped (RAID-5), or they're just doing straight mirroring (RAID-1). I haven't checked the specs on any of the QNAP or Synology boxes in a while, but last I looked I didn't recall any having the "feature" of letting you pull a drive and read it in another machine unless they were doing mirroring/RAID-1. But when talking about an appliance, the distinction between hardware and software RAID is moot: the hardware in the unit is dedicated to maintaining the array(s), and whether or not there's a dedicated XOR engine doesn't change anything.
oneeyeblind likes this.
ajhieb is offline  
post #29 of 51 Old 05-06-2017, 10:43 PM
Senior Member
 
Join Date: Oct 2016
Posts: 371
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 217 Post(s)
Liked: 40
perpetual98, from what some posters have suggested, it seems SnapRAID may work well for you. I haven't used it, so I can't help you further. One note: on Windows, SnapRAID only supports NTFS, which is what your drives are already formatted as. You can use your existing hardware, such as the ASUS M4A78-Pro, for your server. You could also run Linux; it's probably best to use the Pentium D945 system for backups of your SnapRAID array.

There are cases that already have hot-swap bays, such as the Lian Li PC-Q25, Lian Li PC-Q26, and SilverStone DS380.

Quote:
Originally Posted by Defcon View Post
And this is why commerical NAS units like Qnap/Synology continue to sell with a 5x markup in hw, because all this stuff is still too complex for the average user. IMO the simplest right now is unRaid. I actually have >40TB data all in NTFS and migrating that to XFS for unRaid is going to be a major pain .
It seems you are well educated in the RAID department, but I doubt you did any price comparisons. Building a NAS with features similar to a Synology DS1515+ will cost you more than buying one, and the DS1815+ makes the DIY comparison look even worse. There is no such thing as a 5x markup.
tecknurd is offline  
post #30 of 51 Old 05-07-2017, 06:47 AM - Thread Starter
Advanced Member
 
perpetual98's Avatar
 
Join Date: Nov 2006
Location: SE Wisconsin
Posts: 850
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 94 Post(s)
Liked: 52
Solid information guys!

Here's what's happening in the short term for me: I ordered THIS off Amazon and it should be here tomorrow, but the tracking number is USPS, so I'm not holding my breath since it's in CA and I'm in WI.

I think what I'm going to do is remove my HTPC components from the somewhat small case they're in now, which doesn't have much ventilation (it never really needed it when it lived in a chilly basement), and move the guts to a larger case I have that already has more fans, plus the space for my new hard drive cage. The next step is to look into putting SnapRAID on the HTPC itself and forgo a separate server entirely. I'll probably still need to pick up some new spinners, but I'll deal with that after I get everything consolidated onto the HTPC. I've also been freeing up space on the HTPC this weekend by moving files and getting rid of stuff I don't need.
perpetual98 is offline  