
Does anyone still prefer Hardware RAID 5 for media server with Windows as the OS? - Page 3

post #61 of 183
Quote:
Originally Posted by Sammy2 View Post

I currently have 7 drives in my PC and 3 in my HTPC. I want to do this without a large capital outlay. JBOD is what I have now, and I am not too concerned about data loss as it is simply media. So maybe I squeeze another drive into the PC as a parity drive and make it the server. I don't want another machine, just better management of data on the ones I have. While I understand all that you are saying, it sounds like more work than it is worth, when I could just spend some time getting things organized.

So, without spending a bunch of money, what is the recommended thing to do?

You really want all your drives on the same machine so you can pool them and use the parity-based recovery option if you encounter a drive failure.

Buy a cheap 3TB Seagate at Costco for $99 and use it for your parity. Purchase FlexRAID and set it all up. My suggestion above is indeed more work, but that's how I did it, because part of my project was better media organization and I took the opportunity to get organized.
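If you're wondering how one parity drive can bring back a whole failed data drive, here is a minimal sketch of the idea. It is just the XOR principle behind single-parity recovery, not FlexRAID's actual engine, and the toy drive contents are made up:
Code:
# Toy illustration of single-drive parity: XOR of all data drives is stored
# on the parity drive; XOR of the survivors plus parity rebuilds a lost drive.
# This is the principle only, not FlexRAID's actual implementation.
from functools import reduce

data_drives = [b"\x10\x20\x30", b"\x01\x02\x03", b"\xff\x00\xff"]  # made-up contents

def xor_blocks(blocks):
    # Byte-wise XOR across equal-length blocks.
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

parity = xor_blocks(data_drives)  # computed during a snapshot/update

failed = 1  # pretend drive 1 died
survivors = [d for i, d in enumerate(data_drives) if i != failed]
rebuilt = xor_blocks(survivors + [parity])

assert rebuilt == data_drives[failed]
print("recovered drive contents:", rebuilt.hex())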

The work is in the organization. You do not need to do it. You can just pool them and get the parity running with FlexRAID, but it does nothing to help you organize. Organization is an entirely different problem, not related to FlexRAID. You're going to have that issue no matter what. It just is what it is.

Pooling and setting up folders that contain everything is how I attacked it. Now all my Blu-ray rips are inside the folder called BLU RAY RIPS and I only have 1 drive to worry about (my V drive, AKA my FlexRAID drive). Gone are the days when I have 5 rips on this hard drive, and 10 rips on that hard drive, and some movies on that machine, and some TV shows on this machine... etc... etc...

If you're going to make the leap, I just assumed you might do as I did and take the opportunity to get organized properly from the start. It pays dividends as you keep growing. Having only 1 drive is nice; everything is all in one place, organized and named inside folders. Easy to find stuff. Easy to set up your clients and point them at media collections.
post #62 of 183
@Sammy2
I second Mfusick's recommendations.

If, however, you insist on RAID'ing multiple PCs, FlexRAID supports it.
You will need to create your configurations using the Expert option.
First, you will need to map the remote PC drives (remote shares) in the FlexRAID UI (required for the drag & drop configuration functionality). Then, create an Expert Snapshot RAID configuration that will include your local and remote drives.
Similarly, you will also create an Expert Pooling configuration that will contain your local and network drives.

If you have a gigabit network, things won't be bad at all. I used to run such a setup, as I had data that made sense to keep remote but that still needed protection.
If your remote PC requires authentication to access the remote shares, you might need to run the FlexRAID service under a user that has access to those shares.
Read: http://wiki.flexraid.com/2011/06/09/running-flexraid-under-a-different-user-account-for-network-access/
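Before wiring remote shares into an Expert configuration, a quick sanity check like this (hypothetical share names) can save some head-scratching; run it under the same account the FlexRAID service will use:
Code:
# Hypothetical UNC share names; check they are visible to the account that
# will run the FlexRAID service before building the Expert configuration.
import os

shares = [r"\\HTPC\Media", r"\\OfficePC\Rips"]
for share in shares:
    print(share, "OK" if os.path.isdir(share) else "NOT REACHABLE")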

Anyway, we really need to separate all that FlexRAID talk into its own thread. This is really getting OT.
post #63 of 183
I'm moving off FlexRAID to hardware RAID 6 with 1 hot spare this weekend. I'll be running Windows 8 with my data volume deduped using the same dedupe engine that's in Server 2012. The performance hit with Windows dedupe is minimal, and the space savings are tremendous.
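For anyone curious why the savings are so big, here is a toy sketch of block-level dedupe accounting. It uses fixed 64 KiB chunks for simplicity (the real Windows engine uses variable-size chunking), and the file paths are hypothetical:
Code:
# Toy dedupe accounting: hash fixed-size chunks and count how many bytes are
# unique. Windows Server 2012 dedupe uses variable-size chunking; this only
# illustrates why duplicate-heavy volumes shrink so much.
import hashlib

CHUNK = 64 * 1024

def dedupe_stats(paths):
    seen, logical, unique = set(), 0, 0
    for path in paths:
        with open(path, "rb") as f:
            while chunk := f.read(CHUNK):
                logical += len(chunk)
                digest = hashlib.sha256(chunk).digest()
                if digest not in seen:
                    seen.add(digest)
                    unique += len(chunk)
    return logical, unique

# Hypothetical usage:
# logical, unique = dedupe_stats([r"D:\rips\a.mkv", r"D:\rips\b.mkv"])
# print(f"estimated savings: {100 * (1 - unique / logical):.1f}%")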
post #64 of 183
Quote:
Originally Posted by robnix View Post

I'm moving off FlexRAID to hardware RAID 6 with 1 hot spare this weekend. I'll be running Windows 8 with my data volume deduped using the same dedupe engine that's in Server 2012. The performance hit with Windows dedupe is minimal, and the space savings are tremendous.
If performance isn't too crucial, have you thought of trying Windows dedupe on FlexRAID's Transparent RAID? You get the same double parity with global hot spare support and much more flexibility.
You could run dedupe on a per-drive basis (yes, a single large volume will dedupe better than multiple volumes, but here you have the option of running it selectively per drive and you still get something dedupe-wise).
Also, unless your storage data has entirely changed, I'm guessing the performance boost from RAID 6 might be overkill for your needs.

That is unless you just want to play with hardware RAID for a while, which is cool too.
post #65 of 183
Quote:
Originally Posted by spectrumbx View Post

Anyway, we really need to separate all that FlexRAID talk into its own thread. This is really getting OT.

Yeah, into the vendor forum. The AVScience and AVSales areas are kept separate for a good reason. Signal-to-noise drops precipitously when every remotely related thread becomes a dumping ground for marketing manure. Go hawk your crap in the appropriate venue.
post #66 of 183
Quote:
Originally Posted by spectrumbx View Post

If performance isn't too crucial, have you thought of trying Windows dedupe on FlexRAID's Transparent RAID? You get the same double parity with global hot spare support and much more flexibility.
You could run dedupe on a per-drive basis (yes, a single large volume will dedupe better than multiple volumes, but here you have the option of running it selectively per drive and you still get something dedupe-wise).
Also, unless your storage data has entirely changed, I'm guessing the performance boost from RAID 6 might be overkill for your needs.

That is unless you just want to play with hardware RAID for a while, which is cool too.

I've already made the move off FlexRAID. The Win8 dedupe switch is coming up.

Without trying to start another argument, I had some issues with FlexRAID performance, then some stability issues that led me to not trust it as much as I want to. I still think it's a good product, but it simply doesn't work for me anymore. I picked up an LSI 9260-8i pretty cheaply on eBay and have been much happier.
post #67 of 183
Quote:
Originally Posted by spectrumbx View Post

Agreed. New thread, as this is about "mirroring isn't all that!"

That said:
http://wiki.flexraid.com/about/raid-engines/

As far as your concerns go, read more on Snapshot RAID.
If the array is out of sync (snapshot not updated and then you have a failure), then yes, you risk losing some data (up to what was changed since the last snapshot).
Read up on the requirements of Snapshot RAID and see if that fits your needs. Otherwise, look at real-time parity like FlexRAID's RT-RAID and Transparent RAID, which don't have that limitation.
Most issues come from users using Snapshot RAID for the wrong purpose or not understanding its requirements.

Above anything else, you should test the recovery features for yourself as talk is cheap: http://wiki.flexraid.com/2011/08/24/testing-flexraids-cruisecontrol-recovery-operations/
Testing the recovery features is very easy, as you don't actually need to really fail a disk.
Don't put it into production until you have tested that it does what you expect of it.

This will tie back into the original topic, I think. So you're saying that even if some temp file gets created, or there is some metadata handling going on, and an update hasn't been run, then the array is useless? If it always has to be in sync for a recovery to be successful, then how could a failure not occur every time? Most media software generates temp files or metadata on media volumes all the time.

Obviously a hardware RAID5/6 would never run into such an issue.
post #68 of 183
Quote:
Originally Posted by EricN View Post

Yeah, into the vendor forum. The AVScience and AVSales areas are kept separate for a good reason. Signal-to-noise drops precipitously when every remotely related thread becomes a dumping ground for marketing manure. Go hawk your crap in the appropriate venue.

Go complain to whomever you feel like.

Responding to user questions is not marketing and has always been allowed here. It is part of the dynamics.
You may not know the history of FlexRAID, but it started on this very forum and existed only on this forum until it moved into its own forum.

AVS is FlexRAID's birthplace, and it took shape from this very community. It was never intended to be commercial (I was literally forced to make it commercial).
It is discussions like this that often generate ideas to improve it.
My capacity here is mostly that of a community user who happens to know more about a specific product and responds as such.
As you don't see "FlexRAID Support" under my handle, I post here as the same user that was here far before FlexRAID and will continue to do so regardless of FlexRAID.
Quote:
Originally Posted by robnix View Post

I've already made the move off FlexRAID. The Win8 dedupe switch is coming up.

Without trying to start another argument, I had some issues with FlexRAID performance, then some stability issues that led me to not trust it as much as I want to. I still think it's a good product, but it simply doesn't work for me anymore. I picked up an LSI 9260-8i pretty cheaply on eBay and have been much happier.
That's more than fair. There has never been a product that worked perfectly for 100% of users.
RAID-F isn't for all deployment scenarios, hence the birth of tRAID.
It is important to remember that FlexRAID isn't one product: the actual products are RAID-F and tRAID, and each is very different.
Edited by spectrumbx - 8/7/13 at 7:48am
post #69 of 183
Quote:
Originally Posted by techmattr View Post

This will tie back into the original topic, I think. So you're saying that even if some temp file gets created, or there is some metadata handling going on, and an update hasn't been run, then the array is useless? If it always has to be in sync for a recovery to be successful, then how could a failure not occur every time? Most media software generates temp files or metadata on media volumes all the time.

Obviously a hardware RAID5/6 would never run into such an issue.
Exclusions.
In RAID-F's Snapshot RAID, default exclusions are defined, which you can edit to fit your usage. Temp and metadata files like these can be excluded through simple exclusions or regular expressions.
Snapshot RAID is not for real-time data, as has long been discussed. It is for data that does not change often: backups, pictures, music files, movie rips, etc.

And no, the array is not useless even in such a case. But once the array is out of sync, you stand to have a few unrecoverable files, depending on the type of change.
Read this to understand the limitations: http://wiki.flexraid.com/2011/10/18/understanding-the-limitations-of-snapshot-raid/
Moving files, deleting files, and copying new files in are all safe operations (they will not compromise recovery even if not synced). It is in-place edits (like editing a Word document or changing MP3 tags) that you need to watch out for.
If your data changes too often, like in database files, then you don't want Snapshot RAID. Look into tRAID.
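To make the exclusion idea concrete, here are a few illustrative patterns of the kind you would exclude. They are shown as plain regular expressions for the concept only; the exact syntax FlexRAID accepts is defined by its own configuration, so check the wiki:
Code:
# Illustrative exclusion patterns for temp/metadata files. Generic regular
# expressions for the concept only; FlexRAID's own exclusion syntax is
# defined by its configuration, not by this sketch.
import re

EXCLUDES = [r"\.tmp$", r"\.!ut$", r"(?i)^thumbs\.db$", r"(?i)^desktop\.ini$"]

def is_excluded(filename):
    return any(re.search(pattern, filename) for pattern in EXCLUDES)

assert is_excluded("Thumbs.db")        # metadata file: skip from parity
assert not is_excluded("movie.mkv")    # media payload: protect it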
post #70 of 183
Thread Starter 
Quote:
Originally Posted by Mfusick View Post

No

Let me ask you this, Mfusick. Suppose an enterprise that employed computer science graduates and certified network administration professionals had the option of a dedicated server for their network (one built using server-specific technology like ECC RAM and such), running some sort of Windows Server OS.

And all they had to choose from were FlexRAID or a dedicated RAID controller card with RAID 5. What would they choose? And why would they choose it?

And please just stick to answering those two questions seriously and honestly.
Edited by g725s - 8/8/13 at 6:52pm
post #71 of 183
I am not sure; I don't work in enterprise IT and have no experience with that. I do have lots of experience building HTPCs and consumer-level servers, and if I had a normal consumer budget and needs, I'd choose FlexRAID running on a cheap OS like WHS2011 for $35, with affordable consumer-level parts, including CPU, motherboard, and hard drives.

I don't want to spend $200+ each on hard drives.
I don't want to spend extra on ECC RAM
I don't want to spend extra for a SATA card (although I have 2 in my Norco 4220 as SATA cards; I flashed away the RAID. But I think you're considering fewer than 8 drives, which you can get from a motherboard. I'm at 20 HDD bays.)
I don't want to spend extra for a server motherboard
I don't want to spend hundreds more for WHS enterprise level software

I think your two scenarios are more than $2000 apart, in some strange attempt to validate hardware RAID as the better option. In your theoretical situation perhaps the $2000+ doesn't matter, but in the real world, for most people around here, it's a deal breaker.

WHS Enterprise alone costs $500+ just for the license key... haha. I would not want to use affordable consumer HDDs in a hardware RAID setup either. I'm also nearly certain you do not need ECC memory.

Where are you trying to lead me with your question? I am not understanding it.

If you really wanted me to design a system, I'd need to know a budget to work with. If the sky is the limit, then I would go with 5-year-warranty, enterprise-level 10,000rpm WD drives in RAID, off a $1000 RAID card, on a Supermicro or ASUS motherboard with an Intel Xeon E5-2690 CPU and 64GB of ECC DDR3. I'd probably run WHS (the real enterprise one), and I would have dual teamed Intel NICs. Possibly some Intel PCIe SSD cards too, depending on the needs. (This setup is $10,000+.)

I would trade you my 30TB FlexRAID server for that setup instantly.

You can't talk about a couple-hundred-dollar prebuilt micro server with budget parts inside and compare that to what real IT does. They are worlds apart. The parts that pre-built PCs and servers use (from Dell, HP, Acer, or Gateway) are chosen for the sole reason of what specs they can advertise at the lowest possible price while making the most possible margin.

It's not about quality at all.

Consumers buy stuff based on specs, like CPU speed, HDD size, amount of RAM, etc. So the cheap consumer stuff is aimed at being cheap and maximizing profits. It's not about supreme reliability or performance. In comparison, the parts you buy individually, like a motherboard (for example, ASUS or ASRock), are often better quality than what you see in a pre-built machine.

Just my feelings. You can disagree.
post #72 of 183
Quote:
Originally Posted by g725s View Post

I think your prices are a little high there for my needs.

I have an HP N40L, and all I would need is the HP P410 Smart Array card, which I can get used off eBay with the cache module for $85 (this would give me hardware RAID 5). Or I could spend $60 on FlexRAID. I paid $249 for my MicroServer but upgraded the RAM to 8GB ECC for another $40 (it only came with 2GB). The MicroServer deal also came with a copy and license for WHS 2011, which I put on the included 250GB HDD.

Throw in four 4TB drives and I would have roughly 12TB of usable disk space using hardware RAID 5. So far I'm looking at $375 prior to adding the drives for a pretty good server that can do hardware RAID 5, or another RAID level if I wanted. Not $2,500 plus.

Back to the topic at hand... I think you should just run your HDDs as JBOD. Use FlexRAID to pool them into a single pool, and also set up one drive as a parity drive in case you need to recover a failed data drive.

How many HDDs can that bad boy hold? I would start out with 3 data and 1 parity; that would get you 12TB of usable space, and you could survive a single drive failure. You can add more as you go. Your data will also remain readable in other systems, so if you remove a drive and put it into another machine, no problems there. Or you might find in the future that you want to build a newer, better server and upgrade, or that you need more space. You can just take out your drives and re-use them. You can even add hard drives with data already on them to your FlexRAID pool.

I'm not sure you can do that with hardware RAID. In fact, I am nearly certain you lose all data on the drive when you build the array. And if you remove a drive from an array, that's big trouble too.

I do not think hardware RAID is worth the hassle in your simple situation, but it would work just fine. Make sure you use identical drives, and that you don't plan on removing them, adding more, or upgrading in the future. FlexRAID leaves your door open to more options. Also, no worries about TLER issues with hard drives: you can use any drives you want, including green drives. I like the Seagate 7200.14s these days. $35 per TB and very fast, they play nice with RAID, use little energy, and are almost always available everywhere for good prices. 4TB is a little slower, but sometimes the size is worth the trade-off if you're limited on drive bays.

I have a 30-page thread about rebuilding my server multiple times before I got it right. I am about to add a second 8-port SATA card to go all the way to 20 hot-swap bays. I started out with a $250 budget and cheap parts. I could never have made the journey I did if I was not on FlexRAID, which allowed me great flexibility and upgrade paths. That is what I like about it.
post #73 of 183
Quote:
Originally Posted by Mfusick View Post

Where are you trying to lead me with your question? I am not understanding it.

I think you were trolled. I doubt he actually expected that you could address his question.
post #74 of 183
Quote:
Originally Posted by EricN View Post

I think you were trolled. I doubt he actually expected that you could address his question.

Well, on top of that, it is still an apples-to-oranges question anyway, because the hardware used would be enterprise hardware: all rackmounted, with redundant PSUs, drive bays, and SAS/SCSI hard drives which are made to run in that compact environment with a huge workload. On an HTPC, people use COTS SATA drives, not SCSI/SAS, for their RAIDs. On top of that, enterprise-level systems require dedicated or immediate support for their products. This is a "home use" concept we're discussing here. An assumed prefix to any question on this topic is "based on equipment used in the HTPC world," because in a home using an HTPC your users are much fewer (sometimes only 1), and your requirements shift to low cost and keeping the longevity of your drives, which means minimal reads/writes. A hardware RAID 5 will be far more taxing on your drives than the typical software RAIDs people use in their FlexRAID/unRAID systems here. You can't compare textbook definitions of things because we aren't talking about even remotely close standards, equipment, or environments.
Edited by damelon - 8/9/13 at 10:13am
post #75 of 183
Well said ^

He said he has an HP N40L; I am not that familiar with it, but at first glance it looks like a nice little machine. It should work well for his intended purpose if his purpose is consumer-level home storage. WHS is a good platform for that, and he can easily use FlexRAID or SnapRAID or Drive Bender, or any of the myriad options that install into his current OS. I am guessing he might get unRAID going too, but since he said he already has WHS2011, I think building on that makes the most sense and is very easy to do.

It is by no means real IT or enterprise-level stuff. It's purely a consumer-level product made to be affordable and deliver modest performance. It's ideal for a typical HTPC or home storage solution IMO, but certainly quite different from a full-blown enterprise solution. I still don't understand his question. I originally thought he was trying to defend hardware RAID, but after re-reading it I am not sure anymore.

Bottom line: that HP machine he has can do either, so the door is wide open to him. Hardware RAID requires a RAID card and identical HDDs that are appropriate for RAID (Seagate NAS, WD Red, etc.) and do not have TLER issues or head parking. He would need to set up his array up front and buy all his drives now. Assuming this is all acceptable to him, hardware RAID is fine.

Otherwise,

He can install something like a software RAID (I use FlexRAID on WHS2011 and it works fine) on his existing WHS2011, and he can basically add any hard drives he wants, now or in the future. He can take his server apart, upgrade it, rebuild it, add drives with data on them, take drives out, replace drives, mix and match different-sized HDDs and different brands and models of HDDs, etc. It just seems easier for a basic media server.

The performance is not an issue: my FlexRAID server with WHS2011 does 100MB/sec. The LAN becomes your speed limit, so RAID hardware that can go faster than the LAN is somewhat pointless. It is true that certain RAID setups offer performance that scales up, but on a home LAN you will never utilize it. The software option seems easier, and he could even re-use some existing drives he might already own. That is how I got started. This grow-as-you-go option is underrated IMO. I said it a couple of times earlier: if you don't mind spending it all now and doing it all at once, hardware RAID can make sense sometimes, but if you don't want to spend all the money and take all the effort to do it properly up front, it's a bad option with limited future upgradability.
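The LAN-is-the-ceiling point is just arithmetic; here's the back-of-the-envelope version (the overhead figure is a rough assumption):
Code:
# Gigabit Ethernet is 1 Gb/s = 125 MB/s raw; real-world SMB transfers land
# around 90-115 MB/s after protocol overhead (rough assumption), which a
# single modern drive can already saturate. Extra striped-array speed is
# wasted on network playback.
link_bits_per_s = 1_000_000_000
raw_mb_per_s = link_bits_per_s / 8 / 1_000_000
print(f"gigabit ceiling: {raw_mb_per_s:.0f} MB/s raw")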

Question for you,

With unRAID you can add drives in the future, just not full ones, right? You need to format them for Linux and "pre-clear" them, but other than that, that's it, right? (I am a noob at unRAID, but I might build one off USB on a spare machine for my bro.)
post #76 of 183
Quote:
Originally Posted by Mfusick View Post

Question for you,

With unRAID you can add drives in the future, just not full ones, right? You need to format them for Linux and "pre-clear" them, but other than that, that's it, right? (I am a noob at unRAID, but I might build one off USB on a spare machine for my bro.)

Yes. I don't even remember pre-formatting them. In each case I just put one in, booted up my machine, and started the array. It detects the new disk, shows that it is "Unformatted," and lets you format it through the unRAID GUI. I believe you want to pre-clear the disk before you actually add it to the array, since if the GUI does this by itself, the whole array is offline while it is doing it, so the rest of your existing array might not be usable for a day on a 2-3TB drive. It's been a while since I've done it; I'd have to go re-read some web pages.
Edited by damelon - 8/9/13 at 12:34pm
post #77 of 183
Sounds like adding a full drive into FlexRAID. That takes some time due to the parity calculation. An empty drive takes 15 seconds; a full 4TB drive might take 10 hours.
post #78 of 183
Thread Starter 
Quote:
Originally Posted by EricN View Post

I think you were trolled. I doubt he actually expected that you could address his question.

I don't feel I was trying to troll, and I should have edited that post to state they were to use an HP N40L only. But I was wondering if he could see any benefits of hardware RAID over software RAID.

I have researched this, and while the HP N40L MicroServer I have is not enterprise class, it is better than a typical consumer motherboard for server applications in regard to resiliency. I have decided to go with an HP P410 controller card and RAID 5.

Reading the responses to software vs. hardware RAID in other, server-specific forums, and reading all the problems users have with FlexRAID and unRAID on those forums, I really feel that hardware RAID is the better choice if you can do it. It for sure takes more effort than just loading software onto the server and configuring it: you need to make sure the controller card is compatible with the drives you intend to use, you should use all the same type of drives, and of course the controller should be compatible with the motherboard and BIOS too.

In my N40L I have 8GB of ECC RAM, a 250GB OS drive, 4 x 4TB Hitachi HDDs for RAID 5, a P410 card with 512MB BBWC, a Remote Access Card, and WHS 2011 (at this point I have just a bit over $800 into it for roughly 12TB of storage), plus the ability to add another RAID 5 set in the future (24TB of storage).
post #79 of 183
Quote:
Originally Posted by g725s View Post

I don't feel I was trying to troll, and I should have edited that post to state they were to use an HP N40L only. But I was wondering if he could see any benefits of hardware RAID over software RAID.

I have researched this, and while the HP N40L MicroServer I have is not enterprise class, it is better than a typical consumer motherboard for server applications in regard to resiliency. I have decided to go with an HP P410 controller card and RAID 5.

Reading the responses to software vs. hardware RAID in other, server-specific forums, and reading all the problems users have with FlexRAID and unRAID on those forums, I really feel that hardware RAID is the better choice if you can do it. It for sure takes more effort than just loading software onto the server and configuring it: you need to make sure the controller card is compatible with the drives you intend to use, you should use all the same type of drives, and of course the controller should be compatible with the motherboard and BIOS too.

In my N40L I have 8GB of ECC RAM, a 250GB OS drive, 4 x 4TB Hitachi HDDs for RAID 5, a P410 card with 512MB BBWC, a Remote Access Card, and WHS 2011 (at this point I have just a bit over $800 into it for roughly 12TB of storage), plus the ability to add another RAID 5 set in the future (24TB of storage).

To each their own! Best of luck to you! Just make sure you keep those drives cool! #1 Important thing!

All opinions aside, my personal experiences have really moved me away from hardware RAID. I started doing HTPC media storage several years back, probably 2008 or so. Originally I had a single RAID card (HighPoint RocketRAID) and ran something like eight 1TB HDDs in RAID 5. For the most part it ran pretty well, but sometimes the system would boot and one of the drives wouldn't spin up. This resulted in a failover scenario, and when it found the drive again, it automatically tried to rebuild the entire drive. That would take a long time, and during that time ALL drives were doing massive parity crunches, which means they were all spun up and doing heavy workloads for about a day straight. I never lost data on this array, but the same thing happened several times. Eventually the card died, which meant my entire RAID was also dead unless I purchased the same card.

Later, I added an 8-disk JBOD housing, which I used to store 8 1.5TB drives. I added another RAID controller that connected to the housing using external cables. Those drives would fail constantly. Not only that, the added heat generated during rebuilds would sometimes cause additional failures. I lost the entire array. I then switched to RAID 5E, allowing a hot spare. I really believe heat was the primary problem, and that these Seagate drives just weren't made for this type of confined storage. Even so, I lost that array again. I can't tell you how many weekends I had to leave that thing alone while it was "rebuilding".

Tech has gotten better, but I moved to unRAID after building my theater room. Since I moved, I have not had one problem. Not one parity error. Primarily, the best thing was that I put it in a big roomy case with tons of fans instead of some hot-swap kind of system that has small fans behind the bays. I think my case uses 8-10 120mm fans. Good airflow. On top of that, even if a drive fails, there is zero impact on the availability of the rest of my array. I can still replace it and rebuild like a RAID 5, except that all of the parity is on one drive instead of spread across the entire array, so a rebuild only affects the drives in question. If I were to also lose that parity drive, I can't recover my data, but I only lose the data on the failed non-parity disks; the rest of them can be accessed individually and the data recovered. There is no such thing in a hardware RAID: if you can't rebuild it, it is ALL gone. When you are talking about 10+TB of data, that is a LOT of data to lose. How do you plan to back that data up? I don't want to use a hardware RAID that puts me at risk of losing it all (since I don't back up my data elsewhere). If I run into multi-disk failure, at least I still have 90% of my data. I can also expand the size of my array at any time without adding a new RAID card. It just makes a lot of sense. It is also a lot cheaper.

The one con I do have about unRAID is the time it takes to write to the array. It does bother me, but I never really need my rips to be moved faster; it would just be nice. I usually move them over to the array while I am at work anyway, using remote desktop, so the time doesn't matter. But it is a legit con, and I refuse to use their cache drive option because it scares me.

I really do weigh the pros and cons on this often, and I used to love hardware RAID, but things have improved a lot in the past few years. It isn't the only option anymore, and the software options just make a lot more sense for home media storage.
Edited by damelon - 8/14/13 at 7:31am
post #80 of 183
Quote:
Originally Posted by damelon View Post

I had a single RAID card (Highpoint RocketRAID)

For the most part, it ran pretty well, but sometimes the system would boot and one of the drives wouldn't spin up.

some hot swap kind of system that has small fans behind the bays

Those drives would fail constantly. Not only that, the added heat generated during rebuilds would sometimes cause additional failures.

I lost the entire array

I even lost that array again

If you can't rebuild it, it is ALL gone.

Hotboxing 8 drives with an inadequate PSU, a crap controller, and no backups? I'm not surprised that didn't go well. If you heard a disaster story about someone buying a huge TV and securing it to a cheap, undersized mount because their budget had no room for an appropriate one, would you conclude that wall mounting is not practical for the home? The answer is to buy a smaller TV and divert some money towards supporting it properly. The same is true of storage.
Quote:
Originally Posted by damelon View Post

When you are talking about 10+TB of data, that is a LOT of data to lose. How do you plan to back that data up?

RAID is not a backup. If you follow storage advice from someone who claims RAID will suffice in lieu of backups, or if you buy products from vendors claiming their RAID software will act as a backup, you are just asking for more pain. What kind of data was it anyway? Where did it come from?
post #81 of 183
Quote:
Originally Posted by spectrumbx View Post

However, I am not even here to argue about the merits of FlexRAID. Rather, my point is to let others realize how RAID 1 is not strictly the "safest" as chanted everywhere.
For large arrays, using duplication is just a costly and bad strategy. Duplication costs more and provides less protection than most realize any time you go beyond two disks, and this gets more apparent as the disk count grows.

What you are selling here is complete and utter BS. There is nothing ever better than a 1:1 copy. What's the mantra you regularly hear when anyone brings up the "what's the best RAID level for me?" question? RAID is not a backup... and striped and parity RAID levels in particular will never be as safe as a 1:1 copy. Preferably, that 1:1 copy should be on a separate device. At least RAID-1 gets you to that 1:1 ratio.

Plus, your statement that "duplication provides less protection than most realize" is complete snake oil if you are only arguing tolerance to disk failure. If I have 10 disks, I can make 5 individual RAID-1 sets, and this will completely destroy anything FlexRAID offers for "tolerance." Notice I didn't stripe the RAID-1 sets into a nested RAID-1+0.

Mirroring/duplication only costs more if you need pooling... but it will never be outclassed when it comes to tolerance.
post #82 of 183
Quote:
Originally Posted by Mfusick View Post

You really want all your drives on the same machine so you can pool them and use the parity-based recovery option if you encounter a drive failure.

No, you really don't. I love all this false bravado for FlexRAID... it is a clever product that serves a niche, but it is not how any competent enterprise would do it. And make no mistake, the enterprise is something FlexRAID is shooting for. Redundancy is key: redundancy in PSUs, redundancy in controllers, redundancy in NICs, redundancy in paths, redundancy in switches, redundancy in whole SANs. Granted, this level of redundancy is out of scope for a home media server, but FlexRAID doesn't offer you anything better than what a RAID-1 and a backup will provide. I would argue that a JBOD on one machine, and a separate mirrored JBOD on a second machine, is probably as far as you'd want to go to duplicate enterprise uptime in a home environment.

Quote:
Buy a cheap 3TB Seagate at Costco for $99 and use it for your parity. Purchase FlexRAID and set it all up. My suggestion above is indeed more work, but that's how I did it, because part of my project was better media organization and I took the opportunity to get organized.

And this is kind of the point... if you are arguing that disk space is cheap, then why are you concerned about the capacity that RAID-1 gives up? You are just getting greedy with your disk space under the illusion that you are protected. While FlexRAID offers some level of protection depending on the number of parity drives you use, at no point can you ever consider your data any safer than a 1:1 mirror.

Quote:
The work is in the organization. You do not need to do it. You can just pool them and get the parity running with FlexRAID, but it does nothing to help you organize. Organization is an entirely different problem, not related to FlexRAID. You're going to have that issue no matter what. It just is what it is.

How is this any different when it comes to sharing media with a separate client HTPC? If I have one hard drive on one PC and another hard drive on a second PC, you can present both media shares to both PCs. You don't have to have giant pools to get all your media in one spot. I'm not arguing against the convenience factor of having it all in one spot... just that there is no technical challenge in getting all your media presented to a client PC.

Quote:
Pooling and setting up folders that contain everything is how I attacked it. Now all my Blu-ray rips are inside the folder called BLU RAY RIPS and I only have 1 drive to worry about (my V drive, AKA my FlexRAID drive). Gone are the days when I have 5 rips on this hard drive, and 10 rips on that hard drive, and some movies on that machine, and some TV shows on this machine... etc... etc...

Pooling and data protection are two different things. It's nice that FlexRAID offers both in the same package, but I'd argue that it's neither the safest nor the best-performing option out there.
post #83 of 183
Yeah... I don't care about 1:1 or tolerance babble. Sorry, I just don't. I am 50 times more concerned with cost, value, and ease of implementation.

If given the choice of doing something very hard, or losing a 3TB drive full of data, I'd rather just throw the 3TB drive out the window or into the pool and enjoy my life.

I am only willing to invest so much time and so much money into this. For this reason I chose FlexRAID. It's practical. I am not going to go buy more drives so I can run 1:1 copies; that's just idiotic to me. All I have is movies and TV shows, for the most part. You chose to quote me above, but you're assuming an angle from a place I am not standing. My entire point in this thread has been manageable costs, manageable effort, high value, and reasonable performance and expectations. I'm not looking for the best possible, just good enough. I have rebuilt a drive with FlexRAID and it worked. I put in a new drive and, like magic, the new drive had all the movies the drive I took out had on it. I even watched a couple to see if they worked...

I hear what you're saying, but this entire thread keeps getting out of the context of AVS or a simple media server.

The OP was wondering whether to do cheapo RAID on a cheapo HP MicroServer or run FlexRAID in WHS on pooled drives. All this side babble is semantics.
post #84 of 183
It's not semantics when someone loses an eye!
post #85 of 183
I know what everyone is saying; I just can't get over the following:

FlexRAID costs $69.
10 more HDDs cost $1,000.
That's a $930 difference.

Even double or triple parity is significantly cheaper...
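Put numbers on it (assuming 10 x 3TB drives just for illustration): mirroring halves your usable space, while parity only gives up the parity drives. A quick sanity-check calculation:
Code:
# Usable-capacity comparison for 10 x 3 TB drives (illustrative figures only).
disks, size_tb = 10, 3

mirror_usable = (disks // 2) * size_tb   # 5 RAID-1 pairs keep half: 15 TB
single_parity = (disks - 1) * size_tb    # 1 parity drive: 27 TB usable
double_parity = (disks - 2) * size_tb    # 2 parity drives: 24 TB usable

print(mirror_usable, single_parity, double_parity)  # 15 27 24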
post #86 of 183
Quote:
Originally Posted by Mfusick View Post

You guys remind me of this gif:


And,

I am re-quoting this GIF just because it's funny.
post #87 of 183
Quote:
Originally Posted by Puwaha View Post

What you are selling here is complete and utter BS. There is nothing ever better than a 1:1 copy. What's the mantra that you regularly hear when anyone brings up "What's the best RAID-level for me" question? RAID is not a Backup... and particularly striped and parity RAID-levels will never be as safe as a 1:1 copy. Preferably, that 1:1 copy should be on a separate device. At least RAID-1 gets you to that 1:1 ratio.

Plus your statement on how "duplication provides less protection than most realize" is complete snake-oil, if you are only arguing tolerance to disk failure. If I have 10 disks, I can make 5 individual RAID-1 sets and this will completely destroy anything FlexRAID offers for "tolerance". Notice, I didn't stripe the RAID-1 sets for a nested RAID-1+0.

Mirror/Duplication only costs more if you need pooling... but will never be out-classed when it comes to tolerance.

Summary: you are still not seeing the light, and I don't blame you.

For my part, the information has been provided. Make of it what you wish.
There is still that $1k I offered up for grabs. Set up what you have just claimed with your 10 disks in RAID-1, and I will set up an array with the same number of disks, using as parity disks what you'd spend on duplication, and we will start failing each other's disks.
Free money... and it really puzzles me why no one is going for it, as strongly as some of you feel.
post #88 of 183
Quote:
Originally Posted by spectrumbx View Post

Summary: you are still not seeing the light, and I don't blame you.

Sorry, but I did. I tried FlexRAID and then gave my license away.


Quote:
There is still that $1k I offered up for grabs. Set up what you have just claimed with your 10 disks in RAID-1, and I will set up an array with the same number of disks, using as parity disks what you'd spend on duplication, and we will start failing each other's disks.

Ok... 4 disks under FlexRAID with two disks as parity? This was the challenge?

Under FlexRAID:
Lose one parity disk... everything is fine
Lose one data disk... data on missing disk is offline until a repair?
Lose two parity disks... everything is fine
Lose two data disks... data is missing until repair?
Lose one parity, one data... missing data on one data disk until repair, other is fine?
Lose more than the above, and everything is gone unless one of the surviving disk members is a data disk?

Under two separate RAID-1 sets:
Lose one mirror disk... everything is fine
Lose two drives in the same mirror... only data from that mirror is missing
Lose two disks (one from each RAID-1 set)... everything is fine
Lose any 3 disks... data survives on the remaining disk

Hmm... I know which one is simpler. And it seems to me that if you lose 3 disks in the FlexRAID scenario, you have a 50% chance of losing *all* data... if the only surviving member of the 4-disk set is a parity drive. If you lose 3 disks in the RAID-1 setup, you will retain 50% of all your data no matter what.

Yep... RAID-1 still safer.

Quote:
Free money... and it really puzzles me why no one is going for it, as strongly as some of you feel.

I'll PM you my address to send my check?
post #89 of 183
He said 10 disks, not 4.

Have you considered this situation?

As you said for 10 disks: "If I have 10 disks, I can make 5 individual RAID-1 sets and this will completely destroy anything FlexRAID offers for "tolerance".

Let's examine that.

With 10 disks set up as 5 RAID-1 mirrors, let's say you lose 4 disks. What if those 4 disks are two complete pairs? You could lose 2 whole RAID-1 sets, or 40% of your data.

Now let's take a similar setup in FlexRAID with 10 disks: 5 disks allocated to data and 5 disks allocated to parity. You could lose any 4 disks and you would never lose any data. Each failed disk is either going to be a parity disk (which doesn't matter) or a data disk (in which case any remaining parity disk can stand in for it). Heck, you could lose any 5 disks and never lose any data.

How is that not superior to RAID-1 in that case?

Neither is straight-up better. FlexRAID in that setup would never lose any data until you lose at least 6 disks; RAID-1 could start losing data after only 2 failed disks if they happened to be in the same set.

I already wrote out this table in a previous post for an 8-disk array.
Code:
failed  case    flex %lost   raid1 %lost
1       best    0            0
1       worst   0            0
2       best    0            0
2       worst   0            25
3       best    0            0
3       worst   0            25
4       best    0            0
4       worst   0            50
5       best    25           25
5       worst   100          50
6       best    50           50
6       worst   100          75
7       best    75           75
7       worst   100          75
8       best    100          100
8       worst   100          100

FlexRAID is better for 1-4 failed disks since it will never lose data. RAID-1 is better for 5-7 failed disks in the worst cases.

I would personally choose the FlexRAID option because the chances of losing more than 4 disks at once in an 8-disk array are pretty slim. I wouldn't want to chance losing 2-4 disks in RAID-1 with 2 of those disks happening to be in the same set.
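If you want to check the table rather than take my word for it, a brute-force enumeration is easy. This sketch assumes the model described above: FlexRAID as 4 data + 4 parity disks where any 4 simultaneous failures are fully recoverable (and beyond that only surviving data disks stay readable, since data is not striped), and RAID-1 as four mirrored pairs:
Code:
# Brute-force check of the 8-disk table: for every subset of failed disks,
# best case = min % lost, worst case = max % lost.
from itertools import combinations

N_DATA, N_PARITY, N_PAIRS = 4, 4, 4  # disks 0-3 data, 4-7 parity / 4 mirror pairs

def flex_loss(failed):
    # Up to N_PARITY simultaneous failures are fully recoverable; beyond that
    # only the surviving (unstriped) data disks remain readable.
    if len(failed) <= N_PARITY:
        return 0
    failed_data = sum(1 for d in failed if d < N_DATA)
    return 100 * failed_data // N_DATA

def raid1_loss(failed):
    # Pairs (0,1), (2,3), (4,5), (6,7): a pair's data dies when both members fail.
    dead = sum(1 for p in range(N_PAIRS) if 2 * p in failed and 2 * p + 1 in failed)
    return 100 * dead // N_PAIRS

for k in range(1, 9):
    subsets = [set(c) for c in combinations(range(8), k)]
    flex = [flex_loss(s) for s in subsets]
    raid1 = [raid1_loss(s) for s in subsets]
    print(f"{k} failed: flex {min(flex)}-{max(flex)}%, raid1 {min(raid1)}-{max(raid1)}%")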

Then let's not overlook FlexRAID's new mode: tRAID. tRAID solves the problem you mentioned of data not being available until after a restore. tRAID works exactly like the RAID-F setup described above (from a fault-tolerance standpoint), but offers real-time parity and real-time reconstruction of a failed disk. It's transparent: you don't even know the disk is gone, other than the fact that it tells you, and reading data from that disk is slower since the array is degraded. But you can keep the array online and read data from the failed disk(s) even while rebuilding onto a replacement disk or hot spare, which it also supports.
Edited by SirMaster - 8/13/13 at 9:52pm
post #90 of 183
The tolerance levels are important, but RAID-1 proponents keep ignoring capacity. I don't have unlimited funds, and no single drive will give me enough capacity. A 10-disk RAID-1 set, even with a tolerance of 9, is absurd. Great that the theory works out for you; no way will I put that into practice. SirMaster's layout gives me the best options with FlexRAID and my disk capacity. I was going to do 9 disks with 1 parity, but now I might do 8 with 2 parity.