Originally Posted by Killroy™
Actually, my example was a method to do it without losing any files in case something went wrong.
But this is the logic behind the error: if there are too many changes (deletes, moves, or additions) before you update, the parity update needs to write more data than the parity disk can hold. This would not be an issue if the parity drive were larger than the largest data drive (I have tested this), but since most of us use a parity drive the same size as our largest data drive, it becomes a problem. I never had an issue when I was using 1TB data drives with a 1.5TB parity drive; the extra room on the parity drive absorbed the changes. But if you have 3TB data drives and your parity drive is also 3TB, you will hit the error if you make too many changes (about 200GB or so) before updating.
This is an old bug that has existed in FlexRAID since version 1.0. I thought he fixed it in the version 2 beta, but the threshold was just pushed further out, so it takes a larger volume of changes before an update to trigger it. I think version 1 could handle about 100GB of changes before the error, and version 2 raised that to about 200GB, although I am not sure of the exact numbers.
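To make the arithmetic in the quoted claim concrete, here is a rough Python model of the hypothesis. Everything in it (the headroom rule, the 100GB/200GB budgets) is taken from the two paragraphs above or invented for illustration; it is not FlexRAID's actual internal accounting.

```python
GB = 1000**3   # decimal gigabyte, as drives are marketed
TB = 1000 * GB

# Approximate change budgets reported above, per FlexRAID version.
CHANGE_BUDGET = {"v1.0": 100 * GB, "v2.0-beta": 200 * GB}

def update_should_fail(parity_size, largest_data_drive, pending_changes,
                       version="v2.0-beta"):
    """Per the hypothesis: a parity update overflows once pending change
    data exceeds the parity drive's headroom over the largest data drive
    plus a version-dependent budget."""
    headroom = parity_size - largest_data_drive
    return pending_changes > headroom + CHANGE_BUDGET[version]

# 1TB data drives with a 1.5TB parity drive: 500GB of headroom, so even
# large change sets fit -- matching the "never had an issue" report.
print(update_should_fail(1.5 * TB, 1 * TB, 400 * GB))   # False

# 3TB data drives with a 3TB parity drive: no headroom, so roughly 200GB
# of accumulated changes trips the reported error.
print(update_should_fail(3 * TB, 3 * TB, 250 * GB))     # True
```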
1. I am very glad to have stumbled upon this thread, if only because of the claimed bug report. I mean, thanks to those backing the product, and to the others... well, sorry that your favorite program is no longer free.
Give it another 3 months and you will fully understand why it was critical for this project to get funded.
2. So Killroy, I have been trying for hours to replicate the bug you are referring to, without any success. I made sure to follow every element you named as leading to the error, and still could not reproduce the issue. For the record, every time a user has reported a parity space issue on the forum, it has been traced back to user error (spanning, junctions and hard links, a PPU having less usable space than advertised, etc.).
In fact, I even added a new feature to FlexRAID to scan for hardlinks and junctions: http://wiki.flexraid.com/2012/04/21/...snapshot-raid/
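For anyone curious what such a scan involves: below is a minimal sketch (not FlexRAID's actual code; the D:\DRU1 path is hypothetical) of detecting hardlinked files and NTFS junctions with Python's standard library.

```python
import os
import stat

def scan(root):
    """Walk `root`, reporting files with extra hardlinks and any reparse
    points (how NTFS junctions and symlinks appear to os.lstat)."""
    for dirpath, dirnames, filenames in os.walk(root):
        # Report junctions/symlinks among subdirectories and prune them,
        # so the walk cannot loop through a junction back into the tree.
        for d in list(dirnames):
            st = os.lstat(os.path.join(dirpath, d))
            if getattr(st, "st_file_attributes", 0) & stat.FILE_ATTRIBUTE_REPARSE_POINT:
                print("junction/symlink:", os.path.join(dirpath, d))
                dirnames.remove(d)
        for f in filenames:
            path = os.path.join(dirpath, f)
            # st_nlink > 1 means other paths point at the same data; a
            # parity pass that counts it once per path would over-count.
            if os.lstat(path).st_nlink > 1:
                print("hardlinked file:", path)

scan(r"D:\DRU1")  # hypothetical DRU mount point
```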
However, your particular experience could still be valid, and I would like to see it replicated so I can fix it if not already fixed in the latest release.
I would like to offer you a free FlexRAID license in exchange for access to a test system where you are able to set up a failing scenario proving your claimed bug.
I have designed FlexRAID to be very smart about data changes, such that things like moves result only in a metadata update and no parity change. As a matter of fact, FlexRAID is the only solution that can recover 100% of the data even after massive data moves across DRUs.
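To illustrate (purely hypothetically; the class and field names below are invented and say nothing about FlexRAID's real internals), a design where parity is keyed to content blocks rather than file paths lets a move be recorded as a metadata edit with no parity rewrite:

```python
class SnapshotIndex:
    """Maps file paths to the content-block IDs the parity run covered.
    Because parity depends only on block contents, not on paths, a move
    or rename only has to re-key this metadata."""

    def __init__(self):
        self.path_to_blocks = {}

    def add(self, path, block_ids):
        # Parity over the blocks is computed elsewhere, at update time.
        self.path_to_blocks[path] = block_ids

    def move(self, old_path, new_path):
        # The blocks (and therefore the parity) are untouched.
        self.path_to_blocks[new_path] = self.path_to_blocks.pop(old_path)

idx = SnapshotIndex()
idx.add("DRU1/movies/a.mkv", [101, 102, 103])
idx.move("DRU1/movies/a.mkv", "DRU2/movies/a.mkv")  # metadata-only move
```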
Still, given that you are running some pretty hefty systems, maybe you are encountering something the typical user isn't.
So, would you take my offer? (I would greatly appreciate it.)