Originally Posted by Mfusick
But aren't the issues you had limitations of the unRAID product?
With FlexRAID I can shut it down and the drives are fully readable in any machine with the data on them.
You're blaming FlexRAID for an unRAID limitation.
I never could discern exactly what the issue was with my upgrade, so I'm not specifically placing the blame on FlexRAID, but my reluctance to use it is based on that experience. Here's an outline of what transpired:
I installed a hard drive for the OS, installed Windows 7, and then installed FlexRAID, all with only the OS drive connected and nothing else. I then shut down the server, disconnected the OS drive, reconnected the unRAID USB flash drive and all of the unRAID drives, and reconfigured the server to boot back up from the unRAID USB drive. I then transferred the data from the first data drive in unRAID to a newly formatted NTFS drive on another PC across my network. Once the data transfer was complete, I swapped the data drives in the server, disconnected all of the unRAID drives, and reconfigured it to boot into FlexRAID. Once it booted up, I added the new drive to the FlexRAID array and then shut everything down.
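(For anyone curious what the per-drive copy step amounted to, here's a rough sketch of it as a small Python script. The share name, drive letter, and paths are placeholders, not my actual setup; in practice I just did ordinary Windows file copies, but this is the gist of the copy-and-sanity-check pass.)

Code:
# Rough sketch of one migration pass: pull everything from one unRAID
# data-disk share onto a freshly formatted NTFS drive on the Windows PC.
# The server name, share, and destination below are placeholders.
import filecmp
import shutil
from pathlib import Path

SRC = Path(r"\\TOWER\disk1")   # unRAID disk share (hypothetical name)
DST = Path(r"E:\disk1")        # newly formatted NTFS drive on the Windows box

def copy_disk(src: Path, dst: Path) -> None:
    # Copy the whole disk share; refuse to overwrite anything already there.
    if dst.exists():
        raise SystemExit(f"{dst} already exists - refusing to overwrite")
    shutil.copytree(src, dst)

def spot_check(src: Path, dst: Path) -> None:
    # Quick top-level sanity check (file names plus a shallow stat comparison)
    # before the source drive gets pulled and reformatted.
    cmp = filecmp.dircmp(src, dst)
    if cmp.left_only or cmp.right_only or cmp.diff_files:
        raise SystemExit(f"Mismatch: {cmp.left_only} {cmp.right_only} {cmp.diff_files}")

if __name__ == "__main__":
    copy_disk(SRC, DST)
    spot_check(SRC, DST)
    print("Copy finished and spot check passed")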
I then reconnected the unRAID drives, disconnected the NTFS drive(s), and reconfigured the server to boot back into unRAID. I took the old unRAID drive, still carrying its reiserfs file system, and reformatted it on the Windows PC with an NTFS partition. I was careful not to mix drives with different file systems in either array configuration. I repeated this entire process, one drive at a time, until I ran into problems. Each time, I deleted the configuration file on the unRAID flash drive and then rebooted, so unRAID would treat everything as a new array and let me start it with no error messages about missing drives; I just wouldn't initiate a parity check after starting the array.
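(For reference, the "delete the configuration file" step was nothing fancier than removing the disk-assignment file from the flash drive before booting back into unRAID; if I recall correctly it lives at config/super.dat on the flash. Something along these lines, with the flash drive mounted on the Windows PC; the drive letter is just an example:)

Code:
# Sketch of the "forget the array" step: with the unRAID flash drive mounted
# on the Windows PC, move the disk-assignment file aside so unRAID treats the
# next boot as a brand-new configuration. The drive letter is a placeholder,
# and config/super.dat is where I recall the assignments being stored.
from pathlib import Path

FLASH_CONFIG = Path(r"F:\config")            # unRAID flash drive on Windows (hypothetical letter)
super_dat = FLASH_CONFIG / "super.dat"

if super_dat.exists():
    backup = super_dat.with_name("super.dat.bak")
    super_dat.replace(backup)                # keep a copy rather than deleting outright
    print(f"Moved {super_dat} to {backup}")
else:
    print("No super.dat found - array config already cleared")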
I have two Supermicro AOC-SASLP-MV8 8-port SATA controllers installed in the server. The motherboard has six SATA ports, IIRC; I only used four of them for drives in the array, and the rest of the drives were connected to the SATA controller cards. Everything went fine and the server booted into FlexRAID until I had installed about half the drives (I had 20 drives total in the array at the time, IIRC). That would be the OS drive, the four drives connected to the motherboard, and eight drives connected to the first SATA controller. I forget whether I was ever actually able to connect all eight drives to that controller or whether I got beyond that and started on the second controller. Whatever the case, the server would simply hang while attempting to initialize the drives on the controllers following the POST screen, and I could not get it to boot into Windows to reach FlexRAID. I replaced the Windows 7 OS with a copy of Windows Home Server and the same scenario repeated itself. I reinstalled the OS several times with no better results. The PC would hang at a certain point and never let me get into Windows or WHS; the OS had not even begun loading, as the machine was still in the initial POST and drive-initialization stage.
This is what was so perplexing to me, because there was no rational reason for it to happen. Aside from the newly added OS drive, every drive had been connected to the server previously, albeit in different slots of the array. When I restored everything back to the way it was, I had absolutely no problem booting from the unRAID USB drive with all 20 drives attached. The drive I had been using for the OS was a Crucial 64GB SSD that had been used in numerous PCs both before and after the server upgrade attempt with no problems whatsoever. I tried swapping out hardware and cables wherever I could, but the results were the same: the server configured for FlexRAID simply refused to boot after a certain number of drives were added. I had installed the very latest Windows 7/Vista drivers available for all of the hardware.