Originally Posted by lockdown571
If you are upgrading all the hardware anyway, why not just keep the Unraid server intact? Use that for all of your hard drive storage and then build a Windows machine that just mounts the Unraid server. Wouldn't that be substantially easier? The only downside I can think of is slightly more power usage from running two machines.
He's only upgrading part of the hardware. I went through a similar scenario several months ago. IIRC, I initially tried it with a Windows 7-based server but switched to WHS, but I don't recall exactly why. I do recall that I ran into all sorts of issues with my Supermicro SATA controllers in that they wouldn't initialize properly during POST for some strange reason. They worked fine in unRAID but would hang with a Windows install after adding a certain number of drives back into the array (I believe I was working with about 15 data drives at the time). What's really odd is that this would occur before the OS was even loaded, which made absolutely no sense.
In any case, I went through the exact scenario the OP described. I installed a new drive in a spare Windows PC and transferred the entire contents of a mapped unRAID drive to the newly formatted Windows drive. I had also installed my Windows OS on a separate drive using the server hardware (with the unRAID boot drive disconnected), so I could test the data drive in the server environment: I swapped the new drive in for the old one, reconfigured the BIOS to boot from the new OS drive instead of the USB drive, and fired it up to make sure everything worked. I'd then shut down the server, disconnect the OS drive, and reconfigure it to boot from the unRAID flash drive again. I seem to recall that I just deleted the configuration file on the flash drive so the array would start without parity and without nag messages about a missing drive. Finally, I'd install the old unRAID drive in the spare PC and format it for the next round.
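The one-drive-at-a-time copy step is simple in principle, but with this much data you really want a size check after each copy so a silent failure doesn't go unnoticed. Here's a rough sketch of that idea (the paths and the helper function are hypothetical; in practice I just copied from a mapped network drive):

```python
import os
import shutil

def copy_and_verify(src_root, dst_root):
    """Copy everything under src_root (e.g. a mapped unRAID share) to
    dst_root (the freshly formatted Windows drive), verifying byte
    counts as we go. Returns the number of files copied."""
    copied = 0
    for dirpath, _dirnames, filenames in os.walk(src_root):
        rel = os.path.relpath(dirpath, src_root)
        target_dir = os.path.join(dst_root, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in filenames:
            src = os.path.join(dirpath, name)
            dst = os.path.join(target_dir, name)
            shutil.copy2(src, dst)  # copy2 preserves timestamps too
            # Cheap sanity check: sizes must match after the copy.
            if os.path.getsize(src) != os.path.getsize(dst):
                raise IOError("size mismatch after copy: %s" % src)
            copied += 1
    return copied
```

On Windows you'd more likely just use robocopy or Explorer for this, but the verify-after-copy habit is the point.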
I'd repeat the process, copying over one drive at a time, until all of the data had been transferred, at least in theory. In practice, the server would hang after I'd swapped out about 7 or 8 drives, and the reason is still a complete mystery to me. I went through this scenario a couple of times before throwing in the towel and returning everything to the original unRAID setup. I did, however, end up upgrading to the latest 5.0-rc version of unRAID, which was much better than the version I had been using (4.7, IIRC).
BTW, version 5.0-rc has a Plex plug-in, among lots of others, including BitTorrent clients.
One recommendation: if your motherboard uses a Realtek NIC, I would definitely upgrade to an Intel NIC (the EXPI9301CT is the preferred one). It should maximize your transfer rates and is far better than the POS Realtek chips most motherboard manufacturers use. Regardless of which NIC you use, it's going to be a tedious and time-consuming process. Depending on the size of your drives and the transfer rates you achieve, expect to move 2-3 drives' worth of data per day. I'd start a transfer in the morning and it would be complete when I got home from work. I'd start a second transfer then and, if I was lucky, it would be done before I went to bed. I'd then start a third and hope it would finish before I left for work.
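For a rough sense of why it works out to 2-3 drives per day: a full 2 TB drive at a sustained ~70 MB/s (a realistic figure for gigabit with a good NIC, not a claim about your setup) takes roughly 8 hours, which matches the morning/evening/overnight rhythm above.

```python
def hours_to_copy(drive_tb, mb_per_sec):
    """Rough wall-clock estimate for copying a full drive over the LAN."""
    total_mb = drive_tb * 1_000_000  # drives are sold in decimal TB
    return total_mb / mb_per_sec / 3600.0

# Illustrative numbers only: a 2 TB drive at a sustained 70 MB/s.
print(round(hours_to_copy(2, 70), 1))  # roughly 8 hours
```

Slower Realtek-grade throughput (say 40-50 MB/s) pushes the same drive past 11 hours, which is why the NIC upgrade matters for a job like this.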