Originally Posted by wkearney99
So the question is whose hypervisor will be the least problematic. It appears ESXi requires playing around with OEM.TGZ driver file in order to have the SAT2-MV8 card recognized. I've not yet done that, so I don't know just how well (if at all) it would handle passing the raw drives as virtual drives. The next experiment would be to use Hyper-V and see how that performs, passing the drives for OI and snapraid.
Hyper-V passes whole drives through easily and transparently. The issue you'll run into is that it can be difficult to get Linux or BSD distros (for ZFS) to install the Hyper-V synthetic drivers for the NIC, storage controller, etc. If you can't get the synthetic drivers installed in the VM, you're stuck with emulated devices, and their performance is poor.
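For what it's worth, on Server 2012 the whole-drive passthrough is a couple of PowerShell commands: take the physical disk offline on the host, then attach it to the VM by disk number. A rough sketch (the VM name "storage-vm" and disk number 3 are just placeholders for your own setup):

```powershell
# On the Hyper-V host. Find the physical disk's number first:
Get-Disk

# The disk must be Offline on the host before it can be passed through:
Set-Disk -Number 3 -IsOffline $true

# Attach the raw disk to the VM's SCSI controller as a passthrough drive:
Add-VMHardDiskDrive -VMName "storage-vm" -ControllerType SCSI -DiskNumber 3
```

You can also do the same thing from the VM's settings GUI (Offline the disk in Disk Management, then add a "Physical hard disk" under the SCSI controller), but the PowerShell route is easier to repeat for a stack of snapraid drives.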
If you can get the synthetic drivers installed, then NIC and drive performance is very close to native speed on Hyper-V. CentOS supports the synthetic drivers fairly easily with an installer from Microsoft, and the latest Ubuntu releases have them built into the kernel, so they work at the installer level. I've searched and searched but couldn't find a way to make a BSD-based OS work with the synthetic drivers, and the same goes for the Solaris derivatives, which pretty much rules them out for ZFS. I've been able to get ZFS running on Ubuntu, and played around with Btrfs on Ubuntu, but gave up on those implementations.
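If you want to verify whether a Linux guest actually picked up the synthetic drivers (rather than silently falling back to emulated devices), you can check for the hv_* kernel modules from inside the VM. A quick sketch, assuming a reasonably recent kernel where these ship as modules (on some kernels they're built in, so lsmod will show nothing even though they're active):

```shell
# Inside the Linux guest: look for the Hyper-V synthetic driver modules.
#   hv_vmbus   - the VMBus transport everything else rides on
#   hv_netvsc  - synthetic NIC
#   hv_storvsc - synthetic storage controller
lsmod | grep -E 'hv_(vmbus|netvsc|storvsc)'

# If they're built into the kernel instead of loaded as modules,
# the boot log will still mention them:
dmesg | grep -i -E 'hyper-?v|vmbus'
```

If neither turns anything up, the guest is running on the emulated legacy NIC and IDE controller, which is where the poor performance comes from.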
Unfortunately, there isn't a perfect hypervisor; they all have trade-offs.
And if it weren't for the poor performance of Storage Spaces, I'd be using a Windows 8 (or Server 2012) VM in Hyper-V with the new ReFS, which shares a few key features with ZFS. Obviously, Windows VMs run very well in Hyper-V, so you can still use something like FlexRAID or snapraid.