I'm on WHS 2011, with FlexRAID managing my RAID configuration.
FlexRAID has been giving me one issue after another. I'm sure it's been working great for most of the people here, but for those who have moved away from FlexRAID, or who are considering a more reliable alternative, please advise.
I'm on a 24-bay Supermicro chassis server (bought from the same seller that a lot of other members here bought theirs from), with about 7 bays full of 3TB disks plus one 3TB parity disk.
What options do I have? Now that FlexRAID has spread my movie files across 8 disks, is it advisable to move to another RAID setup? If so, will the movies stored on those disks still be readable as-is if I move to a different RAID setup?
Please advise.
Currently I'm unable to run update/verify/validate; it keeps showing ERROR: [update] error: NullPointerException[null] after 4-5 hours of update runtime.
The mods on the FlexRAID forum are helping. The issue they first pointed to was an older version, which I upgraded, but the problem persists; now they suspect low memory.
I'm using the stock hardware that came with the Supermicro chassis that a lot of other members on this forum bought from the same vendor.
However, I don't think memory usage ever spiked above 60% during the whole 4-5 hours of the update run.
I'd be interested as well. I have 17 drives in my FlexRAID array and I'm dreading the day something like this happens. Knock on wood, it hasn't yet. I have been searching around for alternatives too. I used to run hardware RAID 5, but I hit the write hole and lost all my data, so I don't trust that either. We just had an issue at work with a RAID 6 array where the same write hole problem happened, so I'm skeptical of RAID 6 now too.
As far as alternatives, I have looked at ZFS, FreeNAS, and Windows Storage Spaces. I'm at the point where I have about 16TB of data accrued on my drives, so migration to another platform is an issue of its own. When I migrated from WHS v1 to WHS 2011 with FlexRAID, I had to remove drives one by one and add them to my FlexRAID pool on WHS 2011, copying over the data manually. See my post here: http://forum.flexraid.com/index.php?topic=697.msg6373#msg6373 I think it took several days of manual labor.
Thanks for your reply, and for pointing me to that thread.
One thing is certain: files stored in FlexRAID can be safely moved to a new location intact, since FlexRAID doesn't stripe a file across multiple drives. Also, my DRUs are in perfect shape; it's just that the update/verify/validate runs aren't completing successfully. I haven't lost any data yet.
If nothing else works, I'll wait until the Seagate 8TB is out (I believe they plan to sell it for under 270 dollars per disk), copy all the movies from the FlexRAID pool onto 2x 8TB disks, and then plan a new RAID setup that is less dramatic to manage.
I've been running unRAID for over 7 years with only a few minor issues that were easily resolved. Unfortunately, switching to unRAID means reformatting all of your drives to the reiserFS file system. You'd then have to transfer data over to the reformatted drives. I actually tried switching over to FlexRAID a while back and had nothing but problems during the transition so I ended up going back to unRAID and couldn't be happier. Each setup has pros and cons, but it's mostly been maintenance and problem free. I did lose a good bit of data a while back, but it was due to an error on my part and nothing to do with unRAID itself. I also use a 24-bay Supermicro server rack, but I swapped out the innards and replaced them with a low power AMD micro-ATX board and FM2 CPU. I already had several Supermicro 8-port SATA controllers so it was an easy transition.
The best part about unRAID is that I don't have another Windows system to deal with. UnRAID is easy to set up and configure and runs off of a USB flash drive, freeing up an important SATA port that can be used for a data drive. It is limited to a single parity drive, but I don't buy large quantities of drives from the same batch so the likelihood of multiple drive failures is pretty slim. Rebuilding lost data is as easy as swapping out the bad drive with a new one and restarting the array. My setup currently has 44.5 TB of storage and a 4TB parity drive. I've got about fifteen 1.5TB drives that are starting to get a bit long in the tooth so I'm gradually replacing them with larger drives as my budget permits. The idea of upgrading to 6TB or greater drives sounds enticing, but then it would literally take a couple of days to run a parity check.
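For a rough sense of scale on that last point (back-of-the-envelope numbers I'm assuming, not measurements): a parity check has to read every drive end to end, so it's gated by the size of the largest drive. At an average of ~125 MB/s, a 6TB drive is about 6,000,000 MB / 125 MB/s ≈ 48,000 seconds, or roughly 13 hours, and throughput drops on the inner tracks, so a full day or more for a check on a big array isn't far-fetched.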
I am starting to get the impression that FlexRAID is not good for large amounts of data. I am reading about various problems on the forums a lot. It looks like it starts off well, but after a couple of months or years, FlexRAID becomes nothing but a drama queen.
I'm not able to get a verify or validate to pass, but updates work with no problem. It was failing on my music share, so I just removed that share from the pool and recreated the parity. Now it fails on something else. I can't be bothered to run through the troubleshooting steps every time it fails.
I considered switching to tRAID, but the existing-user deal expired. I've lost two HDDs while running FlexRAID. The first time, it wouldn't restore every file, which I assume was related to the failing validate/verify. The second time it was the parity drive that failed, so I didn't really lose any data.
There you go; this is a classic example of the kind of issues I have been reading about across various forums from people using FlexRAID.
I am starting to become a FlexRAID hater now. It looks like jumping on the FlexRAID bandwagon was a suicide mission: the protector became the reaper.
I've been using FreeNAS since v7 (or 6.2, I don't remember clearly anymore)...
FreeNAS with ZFS has been nearly maintenance-free for me... but I won't hide the fact that I don't use any of the fancy ZFS features (dedup, snapshots, etc.) which could lead to trouble...
Parity is calculated and written in real time, and I also have a second FreeNAS setup that is rsync'ed from the main one...
I use two (kinda mirrored) offline 3TB USB drives for backing up music, photos, and important data, in case of accidental deletion on the FreeNAS boxes...
The only hassle I have is copying the important data onto the external drives... but didn't someone say no pain, no gain...
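For anyone wanting to replicate that kind of setup: the mirroring can be done with plain rsync. A minimal sketch, with hostnames and paths that are made up for illustration:

    # Mirror the main box's media to the second FreeNAS box.
    # -a preserves attributes, -v is verbose, -h is human-readable sizes;
    # --delete makes the target an exact mirror of the source.
    rsync -avh --delete /mnt/tank/media/ backup-nas:/mnt/tank/media/

    # For the offline USB drives, deliberately skip --delete so an
    # accidental deletion on the source doesn't wipe the backup copy too.
    rsync -avh /mnt/tank/photos/ /mnt/usb-backup/photos/

Run the first one on a schedule; the second is the manual "no pain, no gain" step.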
I was really thinking just this week of starting a poll similar to a "who has had a problem-free (all faults honestly your own) experience with FlexRAID?" thread. It has quite the loyal following, and it's good-looking software.
Amongst its faithful followers are, for sure, Assassin and Mfusick. I know that Assassin tested recovery in FlexRAID by purposely failing drives and recovering the data to different disks, or something similar.
There are others who've successfully tested the recovery power of FlexRAID, and parity calculations and recovery certainly work, whether in FlexRAID or other RAID products. In my experience, though, when one of my DRUs failed, it didn't recover after I put in the new disk. I then had to create found.000 and found.001 folders and retry. It still didn't work, so as a last resort (24 hours later) I retried the failed disk, and it had everything on it and still worked. That was when I recently switched back to RAID-F, but I copied the data from the "suspect" disk to a spare before building my parity in RAID-F. I'm hoping these problems go away with the overhaul (I just put the "suspect" disk into a desktop external enclosure, so: no more problem disks in the array + landing disk + ReFS + fingers crossed).
I'm curious how you managed to lose all of your data with the write hole issue. It takes some pretty unusual circumstances to lose any data through the write hole, and even then it should only be a stripe or two which should result in a small number of corrupt files. If you lost all your data, I suspect something else was the cause or severely compounded the problem.
As for the issue at work: in an IT environment, I would never recommend running RAID of any variety without:
1. A solid UPS solution
2. A caching RAID controller with battery backup
3. Write-behind caching disabled on the drives
With those in place it is nearly impossible to lose data because of the write hole. (I'd recommend the same for home use, but for many the cost makes it impractical since the data is usually less important.)
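For anyone wondering what the write hole actually is: data and parity on a stripe are written in separate steps, and a crash between them leaves parity stale. A toy illustration in Python (my own sketch, not how any real controller works internally):

    # Toy RAID-5 stripe: three data blocks plus XOR parity (4-bit "blocks").
    data = [0b1010, 0b0110, 0b0001]
    parity = data[0] ^ data[1] ^ data[2]

    # Power fails mid-write: block 0 reaches the disk, the parity update doesn't.
    data[0] = 0b1111  # parity is now stale: this is the write hole

    # Later, the disk holding block 1 dies and gets rebuilt from parity:
    rebuilt = parity ^ data[0] ^ data[2]
    print(rebuilt == 0b0110)  # False: silently reconstructs wrong data

The battery-backed cache and disabled drive caches close the window in which that half-written state can reach the platters.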
I believe there are a couple of errors on that chart with respect to unRAID, IIRC. Perhaps it's just the semantics used, but it says you can't start using unRAID with filled discs. If you have discs that have already been formatted with reiserfs and they contain data, I'm pretty sure you can add them to the array as is without losing data. Also, unRAID is accessible via both the web GUI and via command line. One of the things I like about unRAID is that you can preclear a new disc in the background without affecting server operation. When the preclear is complete you simply stop the array and assign the new disc. Restart the array and it does a quick format and then it becomes part of the array.
FWIW, according to the chart, none of the other software RAID solutions were around when I started using unRAID except ZFS, which I was unaware of at the time. Installation and upgrade for unRAID has to be the easiest solution available. There is no OS and no drivers to install. Just copy the files to an approved USB flash drive and boot it up. If you buy a pre-configured thumb drive it is literally plug and play. To upgrade you just copy over 2 or 3 files to the flash drive and reboot. The hardware supported covers a wide range so it will work with just about any motherboard you have available, although I'm sure there are a few that won't. They have a huge support forum so you should be able to find out if any hardware you have is compatible.
I have had the misfortune of having drive failures over the years and unRAID recovered the data every time. In fact, the only time I ever had any real problems with the server was when I made the mistake of trying to switch to FlexRAID. I won't make that mistake again.
With all of the software RAID solutions out there, it's just a matter of finding the one that has the features you want. I don't have any need for a lot of bells and whistles; I just want to be able to access my media from any PC on my home network. I also want a system that's reliable and works 24/7 without any interaction from me. UnRAID fits my needs perfectly, but obviously YMMV. They do have a lot of plug-ins available for additional features. Check this link for a list:
I've been using unRAID on an inexpensive server I built using parts I had lying around. It's been running for over a year with no issues. I don't have any experience running other RAID solutions, but unRAID has been awesome for me.
On a side note, did anyone else have their mind blown when they learned how a parity drive works? I did. Being able to reconstruct, in real time, and stream a movie that was stored on a failed disk almost triggered an existential crisis in me.
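Same here. The core trick is just XOR: the parity drive holds the XOR of every data drive, so the contents of any single missing drive equal the XOR of parity and all the surviving drives. A toy sketch in Python (my own illustration, not unRAID's actual implementation):

    from functools import reduce

    # Four "drives" holding equal-length data; parity is their byte-wise XOR.
    drives = [b"movie", b"music", b"photo", b"stuff"]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

    # Drive 2 fails; rebuild it from parity XOR the survivors.
    survivors = [d for i, d in enumerate(drives) if i != 2]
    rebuilt = bytes(reduce(lambda a, b: a ^ b, col)
                    for col in zip(parity, *survivors))
    print(rebuilt)  # b'photo'

Streaming from a failed disk is just that computation done per block, on the fly.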
I occasionally have a small problem with a file being locked so that FlexRAID can't read it during an update pass, but I usually just move it off the FlexRAID volume and then back on again, and that tends to resolve it. I'm not sure if it's some software I use that causes it.
Sometimes I wish it would send more informative emails, like including the reason for the failure, so I don't have to worry when I see it's just another locked file. As it is, I have to go check the log before I can relax and fix it.
Otherwise, Validate and Update are running fine on my 16 disks, now at over 40 TB with 2 PPUs. I'm not sure what can go wrong to mess things up, but I did have to restore a failed disk once before, and it worked perfectly. It's certainly capable of managing volumes of this size.
All on snapshot RAID-F, of course; personally, I don't like the concepts behind tRAID for my media storage.
Regarding the OP's initial question: none of the similar solutions offer what I want out of FlexRAID. Of course your scenario might be different, but right now FlexRAID is the easiest path to what I want.
Most importantly for me, that's pooling and parity protection with mixed-size drives (I have 2, 3, and 4TB drives in service right now), preferably running on Windows, as that's what my server runs for some multimedia-related applications.
I'd love to have it email the portion of the log from the time the operation started to the end, whether it succeeds or fails. It sometimes reports success even when it finds corrupt files.
I think others here share the same opinion, but to avoid misrepresenting them I'll only speak for myself. I don't give a rat's patoot what a product site on the web says. However, whether or not it was stated correctly on the web, I'd come here and correct it if I knew it to be wrong, since this is avs-htpc.
I'd make an exception and try to send a correction-request email if it was a product I wanted to see succeed, like XBMC, MB3, or Pulse-Eight, and the mistake happened on some highly popular site like Lifehacker or Ars. Otherwise, information and comparisons are a dime a dozen.
The thing is that the point at issue is not really a mistake. It is more the omission of an exceedingly rare and unimportant situation, hardly worth bringing up at all. But since it was brought up, I would have expected the person who raised it to want to see the omission eliminated if all it took was an email. Apparently I misunderstood.
Although I have been bitching about FlexRAID from the start of this thread, I must admit one thing: FlexRAID has never betrayed me on recovery. My disks have failed 3 times in the past, once the parity disk and twice a single DRU, and all three times my data was perfectly restored or rebuilt without a single byte of loss.
But the headaches of keeping it working have never stopped, and now it has completely failed to get me even one full update.
Brahim has been pointing to issues that don't show up in the log files, and due to the holiday he and the other admins are away.
And I am sitting here all freaked out.
I am just going to shut down my server until Brahim comes up with a solution, before any damage occurs.
I've been following up with him here: http://forum.flexraid.com/index.php/topic,4560.0.html
After a search attempt, I figured this might be as good a place as any to sneak in this question: what would you recommend for a Linux-based simple file server installation? Something along the lines of installing OpenELEC... it can't get any easier, yet it has plenty of power.
As I don't consider RAID a form of backup, I'm not too concerned about losing the data; however, I would use 4-6 3TB drives, and using RAID would be fine. I'd require headless operation, much like a NAS, where I can configure it via my browser. Virtually all it has to do is serve files via Samba and iSCSI anonymously, and perhaps NFS as a bonus. Again, ease of installation is more important than speed, redundancy, or flexibility, and it would be installed on a current Z87 motherboard. Windows is not an option...
I appreciate any hints or suggestions. I'm somewhat familiar with UNIX, in that I know my way around vi, although for this project it has to be idiot-proof.
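Not a full answer, but the Samba half of that wish list really is simple on any mainstream distro. A minimal sketch of /etc/samba/smb.conf for an anonymous read/write share (the share name and path are made up for illustration):

    [global]
       ; map unknown users to the guest account
       map to guest = Bad User
       guest account = nobody

    [media]
       ; hypothetical mount point and share name
       path = /srv/media
       guest ok = yes
       read only = no
       force user = nobody

iSCSI is a separate, more involved piece (on Linux it's typically the in-kernel LIO target managed with targetcli), which is part of why appliance-style distros with a web UI are appealing for this kind of build.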
Can you start using SnapRAID with already filled disk? Yes *[1]
Can you start using UnRAID with already filled disk? No *[2]
Can you start using FlexRAID with already filled disk? Yes *[1]
*[1] As long as you're using a supported file system.
*[2] Unless you're using a supported file system.
WHS2011 + Stablebit Drivepool + Good backup plan. Been bulletproof in the time I've been using it. No parity to worry about (be that good or bad), but it works well and performance is great.
A multi-drive server setup with absolutely no form of protection or data recovery is a non-starter, IMHO. I've had several occasions where drives failed and I was able to recover all of my data in unRAID. No parity pretty much guarantees that you'll lose data at some point. For me, backing up almost 45 TB of storage is not economical, practical, or convenient. The cost of a single parity drive is inconsequential compared to dealing with recovery of the lost data. In my case, as it probably is with most others here, all of my data consists of media that can be recovered from other sources. Problem is, restoring all of that data manually can be time consuming. It's a whole lot easier to install a new drive and let it rebuild the lost data with no intervention from me.
FYI to morganf: I revisited the chart on the SnapRAID site and noticed a link at the bottom of the page for reporting any errors in the chart. I attempted to do so, and it took me to the forums. Since I am not registered with that forum, I let it be; I see no need to register with a forum I would not be participating in, for a product that holds no interest for me. If you are a member there, and I suspect you are, feel free to report the errors. Then again, you'd have to actually admit that the errors exist in order to do that, which you apparently have a difficult time doing. The data in the chart is either a Yes or No entry, with no footnotes to qualify "rare or unimportant" circumstances, for the category "Can you start using it with already filled disk?" The answer is clearly Yes, as I have already demonstrated, but there should be a clarifying footnote regarding ReiserFS drives. The Interface entry should say "Both" and not just "GUI", since you can access unRAID via a bash prompt when you have a monitor and keyboard connected directly to the server. You can also telnet into the server and access a prompt.
While it is true that the OS must be able to read the disk's file system for SnapRAID to read the disk, Windows can be made to read just about any file system out there. I am not sure if unRAID can be made to do the same... and if it can be, will unRAID still offer you support?
Another good reason to choose ZFS, with its self-healing capabilities, over a hardware RAID controller solution.
With drives getting larger and larger, the silent data corruption factor is amplified. With mostly stagnant data, like a home media server, not so much, but in a production system with hundreds of terabytes of reads and writes per year, you're going to see these errors happen more often.
I am 99% sure my next server build is going to move to ZFS.
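To put rough numbers on the scale argument (spec-sheet math, not a measurement): consumer drives are commonly rated at one unrecoverable read error per 10^14 bits. 10^14 bits / 8 = 1.25 x 10^13 bytes, about 12.5 TB, so on paper you can expect roughly one bad read per dozen terabytes. A single full pass over a large array approaches or exceeds that, which is exactly the window where ZFS checksums and self-healing earn their keep.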
There are several disadvantages to ZFS as the system for a home media server. Some of them are listed in the SnapRAID comparison chart.
Also, SnapRAID maintains checksums at the block level rather than the file level (it divides the data into virtual blocks that it tracks), and it verifies those checksums on your data whenever it reads it. SnapRAID also has a scrub function that reads all (or some, whatever you specify) of the data and verifies the checksums. If there are errors in the data, SnapRAID can restore the data and verify that the checksum is correct for the restored copy. In short, SnapRAID has tools to identify and repair errors in your data, silent or otherwise.
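For the curious, SnapRAID's whole setup is one small config file plus a few commands. A minimal sketch, with paths and disk names made up for illustration:

    # /etc/snapraid.conf (hypothetical layout)
    parity /mnt/parity1/snapraid.parity
    content /var/snapraid/snapraid.content
    content /mnt/disk1/snapraid.content
    data d1 /mnt/disk1/
    data d2 /mnt/disk2/
    exclude *.tmp

Then "snapraid sync" updates parity and checksums after files change, "snapraid scrub" re-reads part of the array to verify checksums, and "snapraid fix" repairs whatever fails verification.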
This is baffling, but here is the scenario for why this is relevant and an important distinction worth at least a footnote:
Say once upon a time I was @morganf with a Linux server (going back, I see that you had described this setup, and it was silly of me to imply you had no outside-of-Windows knowledge; my apologies). So I'm deciding which parity solution I want and hear great things about unRAID. Then I see it won't work with drives that have existing data. No footnote, just a plain no. Do not pass go, do not collect $200.
Now I'm contemplating whether I even want to try unRAID, because I'd have to build a separate server with at least two HDDs, one of them as parity, and then begin copying data over, moving each disk into the new unRAID box after its data was copied off (expanding storage each time with the newly emptied disk).
Then instead I see that someone correctly stated that I could start unRAID with ReiserFS drives and existing data.
Now I realize that I only need one new drive, a flash drive, and some patience. I can unplug my existing disks, boot from the unRAID flash drive, and create a new unRAID ReiserFS array without parity on the new empty drive. Then I reboot back into my Linux server with one existing drive and the new drive plugged in, have the server mount the unRAID volume, and copy the existing data over. Then I wipe that now-copied disk, reboot back into unRAID, and add it to the array. After all the disks are copied, the last emptied disk becomes the parity drive (one round of this loop is sketched below).
An all-new unRAID setup without buying a separate box or a bunch of extra hard drives!
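One round of that loop might look like this from the Linux side (hostnames and paths are hypothetical; "tower" is just unRAID's default hostname, and the share details depend on how you configure it):

    # On the Linux server: mount the unRAID share, copy one old disk in.
    mount -t cifs //tower/media /mnt/unraid -o guest
    rsync -avh /mnt/old-disk1/ /mnt/unraid/

    # After verifying the copy: wipe old-disk1, stop the unRAID array,
    # and assign the emptied disk as the next array (or, finally, parity) drive.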
I see no point in furthering the argument. It's an exercise in futility trying to point someone that ignorant to the actual facts when he continues to insist he is right even after being proven wrong.
morganf, you continue to amaze me by telling senior members here that they don't know what they're talking about. They have established themselves as being highly knowledgeable time and time again. You are an unknown quantity and essentially a newbie to these forums and are doing your best to chip away at any credibility you might have had every time you post.
I agree it is futile to continue the conversation, since ajhieb holds such massive double standards. It is interesting that he derides the SnapRAID chart for not being 100% correct (it is only a summary chart, so it cannot be 100% correct for the zillions of possible combinations) and then posts a different chart and says it is fine that it is not 100% correct, since it is only a summary chart. His double standards are pretty common and easily spotted. He also refuses to support his claims when he knows he is wrong, instead pretending (repeatedly) that he does not need to support his own statements (only everyone else does).
I would never put ajhieb on a list of credible people.
OK, after some research: adding each disk as a separate vdev in one pool and running SnapRAID on top would still give you raidz-style striping. ZFS stripes data across all the top-level vdevs in a pool, so if you lose one of the vdevs, you lose the entire zpool. That also means you need the whole zpool intact to read any of the drives. So if vdev 2 goes and you can't repair it with SnapRAID, you lose all the vdevs.
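If the goal is SnapRAID-style independence with ZFS checksumming, the layout that works is one single-disk pool per drive, rather than one pool with many vdevs. A rough sketch (device names are made up):

    # Each disk is its own pool, so each remains readable on its own:
    zpool create disk1 /dev/sdb
    zpool create disk2 /dev/sdc

    # What NOT to do for this use case: one pool spanning both disks.
    # Data is striped across the vdevs, so losing either faults the pool:
    #   zpool create tank /dev/sdb /dev/sdc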