Originally Posted by jrwalte
False. The other drives in the same DRU are independent of each other and will still be accessible by windows explorer or your storage pool. In essence, it will work exactly the same as if you placed each drive in their own DRU, except you have that 'rare' extra protection of multi-drive failure within the same DRU. It isn't required...but why not?
I realized the error in my thought process regarding drive access the moment I submitted my last response. I'm still learning the ropes with FlexRAID and have yet to set up my first server with it, so I'm not currently seeing any benefit to grouping drives into one large DRU other than the minuscule added drive protection. The drive pool will look exactly the same to the end user whether the drives are used as individual DRUs or grouped together. I guess it all boils down to personal preference. The question isn't so much "Why not?" as it is "Why would I?"
Speaking of unRAID, I'm in a bit of a pickle at the moment and I can blame it entirely on this thread. I had recently attempted to upgrade my server from version 4.5.6 to version 7 when I discovered I had the dreaded HPA issue with one of my drives. I eventually got it fixed but then figured it was probably wise to replace my Gigabyte motherboard with one that didn't cause the HPA to occur. I installed a new Asus motherboard along with a new AMD CPU and performed the upgrade without a hitch.
After reading this thread I decided to acquire a FlexRAID license to play around with it. The thing is, I wanted to use it with the new hardware. I installed a new hard drive for the OS (Windows 7) and then installed FlexRAID. My plan was to put the old motherboard back in the unRAID server and install the new hardware in a separate case so I could play around with FlexRAID.
Now for the fun part. The first time I fired up unRAID with the old motherboard and CPU, I got the HPA error again, so I had to switch back to unRAID version 4.5.6. Somewhere along the way, one of my drives decided to show up as unformatted. I tried to remount the drive but couldn't get it back online. I gave in and decided to sacrifice whatever data was on the drive (mostly Blu-rays and DVDs, all of which could be recovered and reinstalled) and go ahead and format it. The drive flat-out refused to format in unRAID. I then took it out of the array and reset the configuration to initial conditions so it would run a fresh parity check from scratch. This worked for a couple of minutes, and then the parity check just stopped. Wash, rinse, and repeat; same results. Now I have an array with no parity protection. Along the way, another drive also decided to show up as unformatted, but that one only held a couple hundred GB of data, so it's not much of a loss.
I'm now in the process of formatting the "unformatted" drives as NTFS drives and then copying the remaining data over from the unRAID server one drive at a time. I'll copy the contents of one drive and then pull that drive once the copy is complete. I'll format that drive and repeat the process with the next drive in the array until I've finished copying all of the data. I should have everything copied over and be up and running with FlexRAID sometime this weekend.
This was a clear-cut case of something not being broken and me deciding to fix it anyway. I somehow managed to kill a perfectly good working unRAID array. Were it not for the glowing endorsements from respected members, I would have continued along quite happily with my unRAID setup. I'm not blaming them for any of this, since it was clearly my own (un)doing, but that's what I get for thinking the grass was greener on the other side.
FYI - I ran the manufacturer's diagnostic software on each of the uncooperative drives and they each passed the long tests with flying colors.

Edited by captain_video - 10/4/12 at 4:46am