Originally Posted by dtrounce
I have a somewhat similar setup:
* Chassis: Supermicro SC847A-R1400LPB - 36 hot-swap 3.5" bays plus 2 internal 3.5" drives in a 4U. 9x mini-SAS connectors for the 36 hot-swap SATA drives
* Motherboard: Supermicro X8SAX: X58 chipset, 2x PCI-X, 1x PCI, 2x PCIe x16, 1x PCIe x1. 6x SATA - two for the internal drives (RAID1 OS), four for hot-swap bays (needs a reverse mini-SAS breakout cable)
* Intel i7-920
* 2x6GB = 12GB RAM
* 2x RAIDCore BC4852 PCI-X (running the final 3.3.1 firmware/drivers). Used now just to provide 2x 8 = 16 SATA ports. Note that only 15 arrays/legacy drives are possible regardless of the number of cards, as SCSI limits you to 15 devices in addition to the controller. Needs 4x reverse mini-SAS breakout cables
* 2x Supermicro AOC-SASLP-MV8 - provides 4x more mini-SAS for 16 more hot-swap drives (need 4x mini-SAS cables)
* Sabrent SiI3114 PCI card for 4x more SATA ports. Using just one port, to cover the 16th drive that the RAIDCore cards can't address
To use the RAIDCore controllers (even just for SATA ports, not RAID), you need either Windows or Linux for the host - those are the only RAIDCore drivers available. So I don't think you can run ESXi as the host. I think this also rules out most of the other options for a dedicated storage box running ZFS, e.g. FreeNAS (based on FreeBSD), Nexenta (based on OpenSolaris), etc.
I am running the Windows Server 8/2012 beta as the host. You can run Hyper-V VMs, but to give a VM more than 3 pass-through drives (plus the VHD for the OS) you have to use the virtual SCSI controller, which needs a guest OS that runs the Hyper-V Integration Components - only available for Windows and Linux. So again, you can't run the nice ZFS-based systems in VMs. There are projects to port ZFS to Linux, which might work (ZFS-FUSE, the native ZFS on Linux port, etc.), but these don't look stable and would be hard work to get up and running.
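For reference, handing one of the hot-swap disks to a VM as a pass-through disk on the virtual SCSI controller looks roughly like this in PowerShell (a minimal sketch - the VM name and disk number are made up, and the SCSI controller only works once the guest has the Integration Components):

    # Take the physical disk offline on the host so Hyper-V can pass it through
    Set-Disk -Number 5 -IsOffline $true
    # Attach it to the VM's virtual SCSI controller
    Add-VMHardDiskDrive -VMName "StorageVM" -ControllerType SCSI -DiskNumber 5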
I also want to run Windows DFS to keep this box in sync with another Windows Server box over a slow network link. With a VM-based storage solution, that would mean exporting the volume from the VM back to Windows over iSCSI - more complexity and more points of failure. So I really want a good software RAID solution that runs on Windows and can use all 36 drives.
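Just to illustrate the extra moving parts, the Windows side of that iSCSI hop would be something like the following (a rough sketch with a made-up portal address; the storage VM would also need its own iSCSI target software on top of ZFS):

    # Point the Windows iSCSI initiator at the storage VM and connect to its target
    New-IscsiTargetPortal -TargetPortalAddress 192.168.1.50
    Get-IscsiTarget | Connect-IscsiTarget -IsPersistent $true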
My requirements:
* Software RAID, to run across the 6 different controllers
* RAID5 at least; RAID6 would be better
* Works with 36 disks
* Totally reliable, bug-free. ZFS looks like a really good option here with checksums.
* A single volume exported. Drive pooling would be ok too
* High performance - several hundred MB/sec (like RAIDCore native)
* Expandable as I add and replace disks. RAIDCore has this
* Fast/smart rebuilds. ZFS has this, most others don't (see the example after this list)
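As an illustration of the kind of rebuild I'm after, replacing a failed disk in ZFS is just this (a sketch - pool and device names are made up; the resilver copies only allocated blocks rather than the whole disk):

    # Swap the failed disk for a spare; resilver copies only the allocated blocks
    zpool replace tank sdf sdg
    # Watch resilver progress
    zpool status tank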
Options I've considered/tested are:
* Traditional Windows software RAID. RAID5 only, speed is ok, well-tested and reliable. Not expandable, need to set up separate volumes on different slices to use different sized disks, no RAID6, no email alerts
* Windows Storage Spaces, with Server 2012. This looks like it should address the needs, but the actual product (at least in the current beta) is nowhere near the promise yet, either for stability or features. Testing shows terrible performance (20MB/sec), it still has bugs in the beta that make it unusable (e.g. freezing, artificial limits), no alerts, no RAID6. It also doesn't rebalance data if you add drives. It's not yet even as useful as traditional software RAID (see the PowerShell sketch after this list for what I tested)
* FlexRAID. I was quite excited about this option, but in my testing the current FlexRAID 2.0 Update 6 would not start the storage pool correctly after a reboot, leaving the data inaccessible. Also it is $100 for parity plus pooling, which would be fine if it were a well-tested product rather than a promising but still-buggy work in progress
* unRAID - not an option, no drivers for RAIDCore, only 20 disks. And expensive.
* SnapRAID - not realtime, command-line only
* BTRFS - experimental, no RAID5/6, Linux-based
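For reference, the Storage Spaces test above was along these lines (a minimal sketch - the pool and disk names are made up, and in the current beta only single parity is offered):

    # Pool every eligible disk, then carve out one parity space using all the capacity
    $disks = Get-PhysicalDisk -CanPool $true
    New-StoragePool -FriendlyName "BigPool" -StorageSubSystemFriendlyName "Storage Spaces*" -PhysicalDisks $disks
    New-VirtualDisk -StoragePoolFriendlyName "BigPool" -FriendlyName "Data" -ResiliencySettingName Parity -UseMaximumSize
    # Then initialize, partition and format the resulting virtual disk as usual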
ZFS looks like a really good option. But:
* Not native to Windows, so it requires the complexity of a VM, which has to run Linux given the RAIDCore driver situation. That in turn needs a stable Linux-based ZFS implementation - I'm not sure that exists yet. Plus there is the danger that ZFS is now using virtualized disks (even if it owns them), so there may be caching issues where ZFS thinks it has written a block but the host has cached it. And there's the added complexity of iSCSI to use DFS.
* Less flexible than you might think. You can't change the number of disks in a vdev once it is created, so the layout has to be planned up-front for all 36 drives (see the sketch below).
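For example, one way to lay out the 36 hot-swap drives up-front would be six 6-disk raidz2 vdevs striped into one pool, which gives 24 drives' worth of usable space and survives two failures per vdev (a sketch with made-up Linux device names; whole vdevs can be added later with zpool add, but an existing raidz vdev can't be grown or shrunk):

    # Six 6-disk raidz2 vdevs striped into one pool (device names are placeholders;
    # four more raidz2 groups would follow for the remaining 24 drives)
    zpool create tank \
        raidz2 sdb sdc sdd sde sdf sdg \
        raidz2 sdh sdi sdj sdk sdl sdm
    # Later, capacity can be added a whole vdev at a time
    zpool add tank raidz2 sdn sdo sdp sdq sdr sds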
Any other thoughts on this?