Building my Media Server - Another Storage Question - AVS Forum | Home Theater Discussions And Reviews
post #1 of 7 Old 08-09-2014, 03:24 PM - Thread Starter
Advanced Member
Aryn Ravenlocke's Avatar
Join Date: Aug 2006
Location: Tempe, AZ, USA
Posts: 754
Mentioned: 2 Post(s)
Tagged: 0 Thread(s)
Quoted: 351 Post(s)
Liked: 134
Building my Media Server - Another Storage Question

Originally Posted by Defcon View Post
There is IMO very little point of using any block based parity system (RAID, StorageSpaces, ZFS) for home use where most data is static. I'd rather have the flexibility of using any size disk I want, and the safety of data being in native formats with no extra management layer needed to access it.
I have another thread going that is similar to this, but I'm trying very hard to keep the question in hand there from getting hijacked since I'm rapidly approaching the point at which I will be dropping a pile of money on a solution/decision.

Defcon is pointing out the issue my SO raised regarding the media server/disk farm. She is by no means trying to meddle in the development, but she's doing her best to show some engaged interest. Her first questions dealt with why I was paying an extra $40/drive for the same storage from the same company. I told her I wanted drives designed for NAS enclosures rather than stand-alone drives, since I was going to be employing 2 cases with some sort of pooling/RAID solution.

Then she wanted to know why I was bothering with a parity setup. Wouldn't it just be easier to keep all the disks separate and, if one fails, replace it? She was pointing out that I could get 5 TB externals for $37/TB, substantially below the price point of the 3 TB NAS internals I was looking at. Basically, it came out that I could get 50 TB of external storage (non-parity) for the same price as a little under 45 TB with parity (48 TB raw minus 3 TB of parity overhead), not including the cost of 2 cases (which run about $270 each).
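For anyone following along, her arithmetic on the external side can be sketched in a couple of lines of shell (purely illustrative, using the thread's $37/TB figure):

```shell
# her numbers: 5 TB externals at roughly $37 per TB
ext_per_tb=37
total_tb=50
echo "cost for ${total_tb} TB external: \$$(( total_tb * ext_per_tb ))"
```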

I am already going to have external HD backups of every file being copied to the media server. I currently have 10 assorted external drives sitting on my desk. They'll be going into cold storage after I copy everything from them onto the server. They are my first line of backup. That means if one disk in a non-parity array failed, I could pull one of those out and replace the data. If the backup drive failed, I could always re-rip a few hundred DVDs and Blu-rays.

It still seems to me, though, that parity would make a bit more sense and be at least slightly more secure. If nothing else, if/when a drive goes, the other drives can pick up the slack until the drive is replaced, and then help rebuild the lost data onto the replacement. This would mean no downtime for a few hundred movies or a few dozen shows, so long as I was prompt about replacing the bad drive.

The media server will be used exclusively for hosting media and not for anything else that needs significant read/write performance. So, am I really over-doing it and over-thinking this? Is a RAID/parity solution overkill? Or is it actually a rather prudent solution?
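To put numbers on the parity trade-off described above, here is a quick sketch (shell arithmetic only, using the 16 x 3 TB drive figures implied by the 48 TB raw total):

```shell
# single-parity pool: one drive's worth of capacity goes to parity
n=16        # number of 3 TB drives (48 TB raw)
size_tb=3
raw=$(( n * size_tb ))
usable=$(( (n - 1) * size_tb ))
echo "raw: ${raw} TB, usable with single parity: ${usable} TB"
```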
Aryn Ravenlocke is offline  
post #2 of 7 Old 08-09-2014, 03:28 PM
Mfusick's Avatar
Join Date: Aug 2002
Location: Western MA
Posts: 29,684
Mentioned: 23 Post(s)
Tagged: 0 Thread(s)
Quoted: 608 Post(s)
Liked: 2698
I would not spend an extra $40 per drive. There have been studies showing that consumer drives are just as reliable as enterprise drives. Save the cash.

Since I removed the WD Green drives (similar to RED), I have not done an RMA on a hard drive in 2 years. My server is full of the cheapest drives I could find (mostly Seagate 7200.14s, Toshiba 3 TB, and a couple of Hitachi 7200 rpm drives), and it's the best luck I've ever had.

In comparison, I RMA-ed 12 WD drives in a row inside of 2 years. I would avoid the WD Green drives; I had rather poor luck with those in my server. RED drives also lack the full vibration technology of the WD RED PRO, so I'd probably avoid those too unless you are getting a deal, since they seem overpriced to me relative to what you get. The WD RED PRO seems a bit more solid, although I have no first-hand experience with it. Seagate NAS drives seem to hold up well and get good reviews if you insist on a NAS-oriented drive. Hitachi makes a good one too, and it's even a full 7200 rpm. But again, I don't personally spend the extra on the "NAS" specific drives. I've had great luck.

If you are running software RAID and not hardware RAID, you can basically buy any drive you want. If you are running hardware RAID, I'd look for a drive that is certified by the manufacturer for use with that controller.

Last edited by Mfusick; 08-09-2014 at 03:35 PM.
Mfusick is offline  
post #3 of 7 Old 08-09-2014, 03:39 PM
Senior Member
mjt5282's Avatar
Join Date: Nov 2002
Location: Old Greenwich, CT, USA
Posts: 221
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 16 Post(s)
Liked: 13
A modern software RAID like ZFS (sorry, I am not familiar with the others) is hugely powerful. In fact, I believe the development of ZFS allowed for inexpensive deployment of technology that would have cost tens of thousands of dollars in the first decade of the 21st century. However, I also keep off-site backup copies of my media data et al. using simpler Linux MD RAID on the WD My Cloud etc. servers (2-disk mirrors of 3 or 4 TB), and use rsync to pull everything from my FreeNAS server.

Without parity, losing a disk means the data is gone, man, and needs to be manually recopied from a static backup, or worse, the Blu-rays, DVDs and CDs re-ripped! I have probably gone off the deep end, but I have built a FreeNAS server with an 11-disk ZFS RAID-Z3 pool in a 16-disk chassis.
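For context, RAID-Z3 dedicates three disks' worth of space to parity, so an 11-wide pool of 3 TB drives works out roughly as below (the zpool command is a sketch; it is destructive, needs root, and the device names are examples):

```shell
# RAID-Z3: triple parity, survives any 3 simultaneous disk failures
n=11; size_tb=3
echo "usable: $(( (n - 3) * size_tb )) TB of $(( n * size_tb )) TB raw"
# creating such a pool (example device names, do not run casually):
#   zpool create tank raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10
```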

IMHO the WD RED disks are the only ones you should be considering for your software RAID or WD My Whatever devices. The only question is how many and what TB size :-)

I use some old WD Green 3 TB drives in the aforementioned 11-wide RAID-Z3 pool, but they have been "patched" for RAID use, which is not for non-techies. The Reds offer what you need out of the box. Don't bother buying the faster 7,200 RPM enterprise drives. Those aren't the droids you are looking for. Feel free to PM me to ask more questions about WD drives or FreeNAS ZFS ;-)
mjt5282 is offline  
post #4 of 7 Old 08-09-2014, 04:15 PM
AVS Forum Special Member
Defcon's Avatar
Join Date: Sep 2001
Posts: 2,453
Mentioned: 9 Post(s)
Tagged: 0 Thread(s)
Quoted: 1194 Post(s)
Liked: 644
File based parity like in FlexRaid/unRaid solves both problems - data is readable, and can be reconstructed as needed. In fact you can do one better and keep the parity disks offline, thus avoiding any use on them.
Defcon is offline  
post #5 of 7 Old 08-09-2014, 05:21 PM
AVS Forum Special Member
ajhieb's Avatar
Join Date: Jul 2009
Location: The Commonwealth, not the Jelly.
Posts: 2,693
Mentioned: 15 Post(s)
Tagged: 0 Thread(s)
Quoted: 1034 Post(s)
Liked: 693
The reason NAS/Enterprise drives are necessary in some situations has to do with how RAID controllers deal with errors when they're encountered, which is different from how standard controllers handle them.

Western Digital refers to the feature on their Red drives as TLER (Time Limited Error Recovery), but pretty much all NAS/Enterprise drives have a similar feature. In short, it does exactly what the name implies... it limits the amount of time the drive will spend trying to recover a bad block. While there is no great consequence in a typical desktop environment (other than the drive being temporarily unresponsive), in a hardware RAID environment the controller will recognize the drive as unresponsive and flag it as offline, necessitating a rebuild from parity data. Once that happens, the increased reads prompted by the rebuild increase the likelihood of encountering another error, which will then send the RAID into "failed" status (if only using single parity).

If you eliminate the RAID controller from the equation, you eliminate the need for drives that support TLER (or the equivalent).
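On drives that expose SCT Error Recovery Control (the vendor-neutral name for WD's TLER), the timeout can often be inspected or set with smartctl. A sketch follows; the device path is an example, and the smartctl commands need root on a real drive, so they are shown commented:

```shell
# query the drive's current SCT ERC (TLER) setting:
#   smartctl -l scterc /dev/sda
# set read/write recovery timeouts; units are 100 ms, so 70 = 7 seconds:
#   smartctl -l scterc,70,70 /dev/sda
echo "70 x 100 ms = $(( 70 / 10 )) seconds"
```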

That said, most NAS/Enterprise drives also include technology to compensate for the increased vibration of a multi-drive system. If you think that vibration isn't a significant issue with storage arrays, I suggest you watch the demonstration video (h/t to @EricN). The point being that the amount of vibration it takes to affect a hard drive is probably much less than you would imagine, and consequently, vibration is a more significant issue than many people would think.

Desktop drives are generally going to be less reliable than NAS/Enterprise drives in large arrays. As mentioned above by @Mfusick, the WD Green drives are generally a poor choice. If you have a look at this thread, you can see the carnage of a hardware-to-FlexRaid migration and the pile of Seagate and Hitachi drives that didn't survive the transition.

Many people find the features of NAS/Enterprise drives to be a worthwhile investment (others don't, of course).

If it were me, and I had the budget to get the NAS drives, I would. The incremental difference in price is, for me, good insurance against having to start an "Anyone ever wish they did their Storage Server right the first time? (or better?)" thread. However, if the budget is tight, it is certainly a reasonable place to save some money.

RAID protection is only for failed drives. That's it. It's no replacement for a proper backup.
ajhieb is offline  
post #6 of 7 Old 08-10-2014, 12:27 AM
AVS Forum Special Member
ilovejedd's Avatar
Join Date: Aug 2006
Posts: 3,764
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 76 Post(s)
Liked: 67
Originally Posted by Defcon View Post
File based parity like in FlexRaid/unRaid solves both problems - data is readable, and can be reconstructed as needed. In fact you can do one better and keep the parity disks offline, thus avoiding any use on them.
unRAID uses block-level parity, not file-based. You can't just take the parity disk offline without repercussions. I expect FlexRAID tRAID is similar, given it requires dedicated disks for parity. Both FlexRAID RAID-F and SnapRAID appear to be file-based.

That said, I believe what you are trying to espouse is that a JBOD+parity setup (which is what unRAID, FlexRAID and SnapRAID are), rather than something that utilizes striping, is typically the best solution for media servers, and on this point I agree. I believe that's what the OP wants as well.
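As a concrete illustration of the JBOD+parity approach, a minimal SnapRAID setup looks something like this (mount paths are hypothetical; only the config-writing lines below actually run, and the snapraid commands themselves are shown commented since they need the tool installed and real disks):

```shell
# a minimal SnapRAID config: one parity disk, two data disks (example paths)
cat > /tmp/snapraid.conf <<'EOF'
parity /mnt/parity1/snapraid.parity
content /var/snapraid/snapraid.content
data d1 /mnt/disk1
data d2 /mnt/disk2
EOF
grep -c '^data ' /tmp/snapraid.conf
# then, periodically:
#   snapraid -c /tmp/snapraid.conf sync   # compute/update parity
#   snapraid -c /tmp/snapraid.conf fix    # rebuild after a disk failure
```

Because each data disk keeps its own native file system, any single disk remains readable on its own, which is the portability point made above.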

I'm not familiar with Windows Storage Spaces, but when I was reading up on it, it seemed the HDDs don't have separate file systems (or at least they're not readable on Windows 7, etc.). For me, HDD/data portability was a concern, so that automatically put Storage Spaces out of the running.

Last edited by ilovejedd; 08-10-2014 at 12:37 AM.
ilovejedd is offline  
post #7 of 7 Old 08-10-2014, 05:33 AM
AVS Forum Special Member
kapone's Avatar
Join Date: Jan 2003
Posts: 4,633
Mentioned: 9 Post(s)
Tagged: 0 Thread(s)
Quoted: 260 Post(s)
Liked: 209
Hard drive space is cheap. As in, really cheap.

After experimenting with almost every solution under the sun, including some pretty exotic setups, I have reverted back to basics for my home server (it's kinda difficult to call it "a" server, but it is; it just comprises about 12U of rack space).

I ended up running DrivePool from StableBit. I started out running it on WHS 2011 but have migrated over to Server 2012 R2, and it runs just great with that. All disks are natively NTFS; DrivePool just pools them together and presents a logical volume to the end user. No restrictions on disk sizes, etc. It even supports a cache disk (SSD) for writes, not that there's anything wrong with writing to the pool directly (I can easily saturate a gigabit link; that's over 100 MB/s). I only use a cache disk because my main switch is actually 10 Gbps, and so are the server and my main workstation. I do some pretty compute- and disk-intensive stuff on the workstation, so the 10 Gbps comes in quite handy. Everything else in the house is hardwired gigabit.

I set folders for "duplication" where necessary; it takes up twice the space, but as I said, HDD space is cheap.

Some folders that are really critical I set up as "triplicates", and these folders are also backed up to the cloud (after encryption, of course).
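The encrypt-before-upload step can be sketched with gpg symmetric encryption (the filenames and passphrase here are placeholders; a real setup would use a proper key and a real archive):

```shell
# encrypt a backup archive before it leaves the house (example paths)
echo "critical data" > /tmp/backup.tar
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-pass' \
    --symmetric --cipher-algo AES256 -o /tmp/backup.tar.gpg /tmp/backup.tar
# round-trip check: decrypt and confirm the contents survive
gpg --batch --yes --pinentry-mode loopback --passphrase 'example-pass' \
    -o /tmp/backup.out -d /tmp/backup.tar.gpg
cat /tmp/backup.out
```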

Server-type setups are ridiculously cheap if you buy used gear on eBay, and getting 24, 48, 96 or however many HDD bays costs a fraction today of what it did not even 5 years ago.
Mfusick and ilovejedd like this.
kapone is offline  