I shared this with a fellow poster in a PM... and thought I'd share here:
The trials with this hardware...
I have a home-built system based on the following hardware:
- Athlon II X4 630 quad-core processor (already had this lying around)
- ASRock 990FX Extreme4 motherboard (I purchased this because it has IOMMU support, AMD's equivalent of VT-d, for PCI passthrough). It has 6 AMD SATA-III ports, plus an extra 2 SATA-III ports based on the Marvell SE9120 chip... for a total of 8 SATA-III ports, and an on-board Broadcom BCM57781 NIC
- 16GB of cheapo RAM (I originally ordered 32GB, but two of the DIMMs were faulty and Newegg was out of replacements, so I just took the refund and kept the 16GB that worked correctly)
- I just purchased an IBM M1015 SAS/SATA card for passthrough, as I was getting some odd behavior from the on-board AMD SATA ports... even running a full OS natively.
- I already had an HP quad-port gigabit NIC (Intel 82571EB based) to throw in the box
- I also had an Intel CT Desktop gigabit NIC (Intel 82574L based) to throw in the box (these are very reliable, cheap NICs that I highly recommend if you need an OS/VM-platform-agnostic NIC; they work on practically any OS and are perfect in VMware)
- Two cheapo SIL3124-based 2-port SATA-II cards for extra storage needs
- An old AMD Radeon HD 2400 graphics card, since the mobo doesn't have on-board video. It's passively cooled, so I don't have to worry about the fan going out. I might see if I can get a really old PCI-slot VGA card so I can reclaim the x16 PCIe slot... they're about $15 on eBay.
Now... let me tell you my tale... I originally wanted to build the box as a VMware host and run a storage VM on it that used the ZFS filesystem. This was in January, and the box was very unstable with the 32GB of RAM I originally put in it. I later tested the memory and found two of the DIMMs faulty, so I sent them back to Newegg for replacements, but they were out of that model, and I just took the refund instead.
So I set about getting ESXi installed, and realized that some of my hardware just wasn't going to work well. At the time, my SIL3124 SATA-II cards were not compatible with ESXi 5.0 update 1 or 2. Plus the motherboard had those two extra Marvell SE9120-based SATA-III ports... which ESXi 5.0 update 1 or 2 didn't support either. So I was down to just the 6 on-board AMD SATA ports with ESXi 5.0. At the time, ESXi 5.1 was out and I gave it a whirl... it added support for my SIL3124 SATA cards but not the Marvell ports, which brought me to 10 working SATA ports. Also adding to my frustration, the on-board Broadcom BCM57781 NIC from the motherboard didn't work well in ESXi 5.0... and was still very flaky in 5.1. Passing through a SATA card to a VM gave me more instability and an occasional PSOD (Purple Screen of Death... ESXi's version of Windows' famous BSOD).
I didn't want to use a storage VM with RDM-mapped drives; I wanted native access, because ZFS wants to control the SATA controller and the hard drives directly. This matters because ZFS can't use its more advanced features to detect controller or hard drive faults if the disks are virtualized. I needed that hardware passthrough to work.

Trying the hardware in native mode
I had another ESXi box running 5.0 u1 on a very cheapo AMD build (Sempron dual-core processor, cheap 880G+ based mobo, 8GB of RAM). It was running all my production VMs very well and was very stable. So I decided to just give up on ESXi on the new hardware and run OpenIndiana + Napp-It natively instead. OpenIndiana is Solaris-based, so it also had issues with some of my hardware: it didn't like the Marvell SATA controller, but it did like the SIL3124 cards, so I still had 10 SATA ports to work with. I figured the native OpenIndiana build could serve as a storage backend for my running ESXi box and handle general file-server duties.
The OpenIndiana box ran very well out of the box. I tweaked a few things here and there and everything was working great. Then I had a rash of hard drive failures that led to my ZFS RAID finally giving up the ghost. It was partially my fault, as I had some old hard drives in there, and it seemed my newer drives were just getting trashed by all the activity.
I was a little pissed by this time, and thought I should look elsewhere for a different way to share out storage and do basic file-server duties. I threw on a copy of Windows Server 2012 natively and played with Storage Spaces... but the write performance was just abysmal. I decided to give ESXi another shot.
I checked VMware's site and discovered that they had just released ESXi 5.1 update 1. So I backed all my data off of the 2012 install and tried it out.

Going back to ESXi
Holy cow!!! ESXi 5.1 u1 detected my Broadcom NIC, it detected my SIL3124 SATA cards, and it even detected the Marvell SATA ports on the mobo. (ESXi always worked with the Intel-based NICs, by the way.) So at this point I had access to all 12 of my SATA-II and -III ports... and all 6 of my NICs. I thought I had a winner!
Since I had my data sitting on another box I decided to take it slow and test and prove out everything before making it live.
I didn't have the IBM M1015 card yet, so I passed through the 6 on-board AMD SATA ports to an OpenIndiana VM to test out ZFS again. My thinking was that the improvements VMware made in ESXi 5.1 u1 might make the system a little more reliable, considering my problems before. OI installed fine, and I had everything working in no time. But... OI as a VM was not giving me the storage performance I was expecting. So I decided to go ahead and buy the IBM M1015 card so that I could pass it through. This card is very well known in the VMware and OpenIndiana world, so I thought it would work perfectly. It's highly recommended.

OpenIndiana virtualized networking disappoints
While I was waiting on the IBM card, I started to test the network on OpenIndiana, and after some iperf tests and some research, I discovered that the VMXNET3 virtual NIC is not very good in OI. I tried everything, but the virtualized NIC was just not giving me the performance I needed. So I had been wrongly blaming the SATA ports for my storage performance when it was actually the VMXNET3 NIC causing the problems in my virtualized OI VM. I didn't have these problems when I ran OpenIndiana natively, and I didn't feel like fighting with a virtualized OI anymore or falling back to the emulated e1000 NICs. I wanted the 10Gb/s performance between VMs that you get with the VMXNET3 NICs. That would make my VMs scream from a storage standpoint, whereas the e1000 NICs would only give me a gigabit of network performance, and I didn't feel like going through the hassle of teaming virtual e1000 NICs... I just shouldn't have to.
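For anyone curious, the iperf testing I keep mentioning is nothing fancy. Here's a minimal sketch of the kind of check I ran, assuming iperf2 is installed in both VMs and 192.168.1.20 is a made-up address for the storage VM:

# Inside the storage VM (server side of the test)
iperf -s

# Inside another VM on the same vSwitch (client side)
# -t 30 runs the test for 30 seconds, -P 4 uses four parallel streams
iperf -c 192.168.1.20 -t 30 -P 4

If the number you get back is well under the 10Gb/s link the VMXNET3 NIC advertises, the bottleneck is the virtual NIC or the guest driver, not your disks.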
OpenIndiana works really well natively... but I'd stay away from running it as a VM if you want top-notch VMXNET3 NIC support.

Decisions and testing to replace OI... the outliers
So I decided to give up on a virtualized ZFS-based VM, and thought about my options. There are the FreeBSD-based ZFS distros like FreeNAS and NAS4Free... but I'd played around with NAS4Free on a native machine once, and I just didn't trust FreeBSD's version of ZFS with my data. ZFS is a Solaris product, and all the tests I've seen online showed that ZFS on Solaris was night-and-day better than what FreeBSD had to offer. Plus I didn't like NAS4Free's weird folder permissions. It took way too much command-line work to get permissions set correctly on folders... even though it has a nice GUI for everything else. Then there is Nexenta, which is somewhat Solaris-based, but the free version has an artificial limit of 18TB of raw storage. I don't have 18TB now, but I didn't want to face that down the road. And since Nexenta is somewhat Solaris-based, I figured I might see the same network problems with the VMXNET3 NICs in a virtualized Nexenta.
I'd already tried Server 2012's version of Storage Spaces, and that was going to be a no-go for me. So I started looking at Linux options. I could probably "roll my own" with Ubuntu, but at this point I just wanted to use something that worked. ZFS exists on Linux, but it's in its infancy, and the performance is not that great from what I've read online. I tried it out, and they were right. I didn't want to spend a lot of time tweaking for performance when ZFS's licensing means it will never be a native part of the Linux kernel anyway. Btrfs is the Linux equivalent to ZFS, but it's still in development and doesn't have all the features of ZFS yet. So I looked at some purpose-built Linux NAS distros...

The Linux options
I had already played with Amahi before... wasn't impressed. Nothing against the product... I seriously considered it as a WHS replacement once the WHS 2011 bomb dropped about not having Drive Extender. Greyhole just seemed like a kludge to me... and although it's easily fixed, I didn't like the fact that Amahi out of the box tries to be your DHCP server. This is ridiculous... everyone has a home router of some sort that already does this function... and does it reliably. If I need to service the Amahi box, I don't want to be without DHCP. This is one of the main reasons I am a huge proponent of single-purpose machines/VMs: that way one thing doesn't take out all the other functions that you depend on. I'll never buy a TV with a built-in DVD player. Once the DVD player craps out, all you have is the TV... or vice-versa. This is also what led me down the path of virtualization years ago, as I could build purpose-built, single-function VMs on a stable back-end platform. It's almost a contradiction of ideas, as all the VMs rely on the stable back-end... but the trade-off is worth it in my opinion. So Amahi was out of the picture... too many eggs in one basket for me. It tries too hard to do everything, which usually means it doesn't excel at anything.
I then looked at OpenFiler and OpenMediaVault. OpenFiler is somewhat of an enigma... in my research it works, and works very well at what it does, but development seems to have stopped, and committing to a dead-end product just didn't seem like the right way to go. So I finally tried OpenMediaVault. It's pretty easy to get installed and working... and while it doesn't run the most bleeding-edge kernel, it should be very stable, and it has a decent community behind it. So up it went into a VM... I passed through the 6 AMD SATA ports and built a RAID 10 with six 2TB drives. Initial impressions were that it would be fine and do what I need. It supported the VMXNET3 NIC and seemed stable enough as a VM. Iperf tests showed promising results between VMs... I did a lot of research and tweaked NFS and TCP settings to get the maximum speed out of the VMXNET3 driver (a sketch of the kind of settings is below). Between a Server 2008 VM and OpenMediaVault, I was able to get up to 5Gb/s of pure network performance with both running VMXNET3 NICs. That 5Gb/s is pretty good... VMXNET3 traffic between VMs is ultimately limited by the CPU and memory speed of your host. Considering I'm running an Athlon II processor and not-so-fast memory... I found this acceptable.
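To give an idea of what I mean by NFS and TCP tweaks, here is a rough sketch of the usual knobs on a Debian-based box like OMV. The numbers and paths are illustrative assumptions, not my exact values, so treat them as a starting point:

# /etc/sysctl.d/99-vmxnet3.conf -- larger TCP buffers help high-throughput virtual NICs
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# /etc/exports -- example export for an ESXi datastore ("async" trades safety for speed,
# and ESXi mounts NFS as root, so it needs no_root_squash)
/export/datastore 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)

Apply the sysctl changes with "sysctl --system" and re-export with "exportfs -ra" after editing.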
The NFS tweaks did the most good: when I exported a virtual hard drive to the Server 2008 VM through ESXi over NFS, I was able to get real-world performance of 229-277MB/s (write and read respectively) out of my RAID 10 running on a virtual OMV, exported as a VMDK over NFS on an ESXi datastore. It provides anywhere from 1337 to 27122 IOPS (27K IOPS! wowsers!) depending on the workload. Pretty good! I could probably get more performance by going iSCSI directly from OMV to the Server 2008 VM... and I would... but I'm not that familiar with OMV yet, and it seems you can only dedicate a whole volume (block I/O, not file I/O) to their implementation of iSCSI. But 250MB/s isn't bad... it's better than gigabit LAN speeds, and that's what I was really after. If I only wanted gigabit speeds, I would have settled on emulated e1000 NICs a long time ago.
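For completeness, attaching the OMV export as a datastore on the ESXi side is a one-liner. This is a sketch with made-up names: assume the OMV VM lives at 192.168.1.20 and exports /export/datastore as above:

# On the ESXi 5.1 host: mount the NFS export as a datastore
esxcli storage nfs add --host=192.168.1.20 --share=/export/datastore --volume-name=omv-nfs

# Confirm it mounted
esxcli storage nfs list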
I migrated the RAID 10 from the built-in AMD SATA ports to the IBM M1015 card without issue when it arrived. Passthrough on ESXi with my hardware is easy and stable now on version 5.1 update 1. If you've had instability before, I highly recommend this version of ESXi.

Conclusion
So that's where I'm at now. I'm reloading all my data onto the OMV virtual machine... CIFS/Samba performance is decent... I get 70-80MB/s over gigabit LAN... so that's acceptable. NFS now screams... which is important, as NFS is faster for ESXi to detect as being "online" than iSCSI. This matters if you have to reboot your ESXi box (power failure, hardware issue, whatever)... and you are depending on that storage VM to come up first to provide a datastore for your other VMs. With iSCSI, you might have to do a rescan of the storage bus in the ESXi console to re-detect your LUNs. I plan on migrating the other production VMs from my other working ESXi box soon. I am also going to load CrashPlan headless on it to back up to their cloud service. It saved my bacon on my native install of OpenIndiana right before my ZFS RAID died.
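If you do get stuck doing that manual rescan after a reboot, it doesn't have to be done from the vSphere client; the same thing can be kicked off from the ESXi shell. A quick sketch, using ESXi 5.x syntax:

# Rescan all storage adapters so iSCSI LUNs are re-detected
esxcli storage core adapter rescan --all

# Then rescan for VMFS volumes on the newly discovered devices
vmkfstools -V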
I currently run the following VMs in production:
- IPFire as my router/firewall. It's a great little distro that is very easy to use. It is able to keep up with my 50Mb/s up/down fiber internet service... even virtualized. I've used it on Hyper-V, VMware, and Proxmox (KVM) with great success. I don't know why people are so in love with pfSense.
- Windows 7 VM that has access to the HDHomeRun Prime CableCARD network-based tuners. This serves the kids' Media Center extenders to let them watch the channels I approve of. I can customize the guide to only show the appropriate channels for them, while keeping the full guide on the two other full physical HTPCs in the house for the "adults". :/
- Another Windows 7 VM that I use to play around with. It's my internet sandbox that I use to access "seedy" areas of the internet. I don't care if it gets jacked up with viruses... I can just restore it from a clone or snapshot. Everything else in the house (physical and VM) has anti-virus, so no worries about a compromised VM spreading a virus... and if it did... that's what backups are for.
- OpenVPN server based on Ubuntu. It allows my in-laws to access my videos and pictures from their remote media center box just as if their machine were on my local network.
- PIAF which is PBX In A Flash... a CentOS based Asterisk PBX distro that allows me to have free home phone service through Google Voice.
- WHS v1, which is basically relegated to providing backups of the Windows systems in the house and offloading TV recordings from the HTPCs.
- WHS 2011 - just loaded this up to play with it. Since it's Server 2008-based, the VMXNET3 NIC performs better than it does in the WHS v1 virtual machine. WHS v1 is Server 2003-based, and its NIC performance is fine (about 1Gb/s on VMXNET3)... but Server 2008 runs better. I might migrate all WHS functions currently on the WHS v1 VM to this one.
- Server 2012 just to play with and learn
- Server 2008 just to play with
- OpenIndiana VM just to play with to see if I can ever get it to work well again virtualized.
- Coming soon: a vSphere Management Assistant (vMA) VM to play with UPS support through ESXi... to allow me to gracefully shut down the VMs and ESXi in case of a power outage (see the rough sketch after this list).
- Anything else I want to play with.
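For the UPS item above, this is the rough shape of the shutdown hook I have in mind for the vMA. The host address, credentials, and the choice of apcupsd as the UPS daemon are assumptions for illustration, and the vicfg-hostops call should be double-checked against the vSphere CLI docs before trusting it with a real outage:

#!/bin/sh
# Sketch of an "on battery, power running out" hook run by the UPS daemon (e.g. apcupsd) on the vMA.
ESXI_HOST=192.168.1.5     # hypothetical ESXi management address
ESXI_USER=root

# Shut down the ESXi host; with the host set to gracefully stop its VMs on shutdown,
# the guests get a clean stop before the UPS battery dies.
vicfg-hostops --server "$ESXI_HOST" --username "$ESXI_USER" --operation shutdown --force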
I love virtualization, as it allows you to do so much with your existing hardware and use it to its fullest potential. Most purpose-built physical machines rarely use all of their capabilities considering the electricity they consume. Virtualizing functions onto a single server (or a few) lets you effectively spread the hardware's full abilities across the functions you want. And it still lets you separate duties, so that when the "TV" crashes, it doesn't kill the "DVD player".
I plan on upgrading the CPU to an 8-core AMD FX processor in the next month or so to expand the resources available to my VMs... and I'll probably see if I can get another 16GB of RAM. That will give me a grand total of 8 cores, 32GB of RAM, 20 SATA ports (4 SATA-II and 16 SATA-III), and 6 gigabit NICs. This is a lot of power for very little money on commodity/consumer hardware.
If you have any questions about any of the above, please don't hesitate to ask. I've used Hyper-V, KVM, Xen, and VMware... I've run into most virtualization problems before... so I'll help out where I can.