
ESXi wanted - Any gurus on here? - Page 2

post #31 of 89
Quote:
Originally Posted by tcs2tx View Post

I could be mistaken, but I do not believe that this is completely accurate. The below article describes how to do raw device mapping of local hard drives:

http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/

I can't comment on whether this works because I pass a controller card to my OpenIndiana-based VM for ZFS storage.

I saw a couple of posts like that, but they either didn't work or they involved command-line stuff I didn't understand.
post #32 of 89
Quote:
Originally Posted by ncarty97 View Post

I saw a couple of posts like that, but they either didn't work or they involved command-line stuff I didn't understand.
It works just fine. Yes, a little bit of command-line work is needed.
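For anyone curious what that command-line work looks like: the gist (per guides like the one linked above) is to create an RDM pointer file from the ESXi shell with vmkfstools, then attach that .vmdk to the VM as an existing disk. A rough sketch only; the disk ID and datastore path below are placeholders you'd substitute from your own box:

  # list the local disks to find the device identifier
  ls /vmfs/devices/disks/
  # create a physical-compatibility RDM pointer file on an existing datastore
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOUR_DISK_ID_HERE /vmfs/volumes/datastore1/rdm/disk1-rdmp.vmdk

Use -r instead of -z if you want virtual compatibility mode (snapshots work, but less direct access). Double-check against whichever guide you follow before running anything.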
post #33 of 89
There's a LOT of good info on getting ESXi up and running over at the unRAID forum. Go to the unRAID Compulsive Design subforum and look for a thread titled "Atlas, My Virtualized unRaid Server"
post #34 of 89
Thread Starter 
Well I went ahead and made my order.

Supermicro X9SCM-IIF-B $186
Intel Xeon E3-1230V2 $214
2 Samsung 8GB ECC $140

The 250GB Samsung 840 is on sale now for $160 so I ordered one of those also.

I will also be using my current 64GB Adata SSD, IBM M1015 and 8 hard drives.

I will still want another NIC. Any thoughts on the Intel Pro/1000 PT dual? With all these NICs, it looks like I might need a bigger switch.
Edited by Andy_Steb - 5/16/13 at 10:23pm
post #35 of 89
Thread Starter 
Quote:
Originally Posted by kapone View Post

It works just fine. Yes, a little bit of command-line work is needed.

Once I get past the learning curve with ESXi I might try to tackle this.
post #36 of 89
Quote:
Originally Posted by Andy_Steb View Post

Well I went ahead and made my order.

Supermicro X9SCM-IIF-B $186
Intel Xeon E3-1230V2 $214
2 Samsung 8GB ECC $140

The 250GB Samsung 840 is on sale now for $160 so I ordered one of those also.

I will also be using my current 64GB Adata SSD, IBM M1015 and 8 hard drives.

I will still want another NIC. Any thoughts on the Intel Pro/1000 PT dual? With all these NICs, it looks like I might need a bigger switch.

Why do you need so many NICs?

I have 3 physical NICs, but I have 7 virtual NICs. AT&T sucks and requires a unique MAC address for each static IP they offer me. So I have one physical NIC connected to my uverse box. I then have 5 virtual NICs in pfsense for Internet, each grabbing a separate static IP, but they all share the same physical NIC. I then have one physical NIC connected to the LAN switch, but I have separate virtual NICs for DMZ and LAN connectivity. This wouldn't work if you had physical devices that you wanted connected to the DMZ segment, but it works fine if all the DMZ traffic is internal to ESX. My 3rd physical NIC is used for an iSCSI connection to my openfiler. You can always double up on connections and team them for higher bandwidth or redundancy, but is that really necessary in a home environment?
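If it helps to picture it: the "several virtual NICs on one physical NIC" part is just multiple port groups on the same vSwitch, and an internal-only segment is a vSwitch with no uplink at all. Roughly, from the ESXi shell (the vSwitch and port group names here are made-up examples; the same thing can be done in the vSphere Client under Networking):

  # extra port groups on the existing uplinked vSwitch
  esxcli network vswitch standard portgroup add --portgroup-name=LAN --vswitch-name=vSwitch0
  esxcli network vswitch standard portgroup add --portgroup-name=DMZ --vswitch-name=vSwitch0
  # an isolated vSwitch with no physical uplink, for traffic that never leaves the host
  esxcli network vswitch standard add --vswitch-name=vSwitch2
  esxcli network vswitch standard portgroup add --portgroup-name=Internal --vswitch-name=vSwitch2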

BTW, any of the Intel network cards are good choices. If you have the slots to spare, it is a lot cheaper to buy 2 single NICs than a dual.
post #37 of 89
The HP (Intel) 412648-B21 NC360T PCI-Express DP Gigabit Adapter is pretty cheap. Can be had for $30. I've also seen someone using a dual PCI-X Intel Gigabit card in a PCI slot and calculated that there is enough bandwidth. Those are about $15.

But I think three NICs are plenty if you are running pfSense; two would be fine without it. I have four, set up as follows:

On-board Realtek vSwitch: pfSense WAN (Comcast 105/20 Mbps Internet)
Single Intel GB vSwitch: pfSense LAN (all VM and routing traffic)
Dual Intel GB (the HP above): passed through to WSE 2012 and teamed for 2 Gbps throughput. Note that the 2 Gbps is only possible when traffic is sent to more than one client.
DMZ vSwitch: no adapter, on a separate subnet for WAN-only access. With pfSense the DMZ is a closed environment where you have to create rules to open up access, unlike a typical router where it is open to everything.

So any VMs go to the Intel GB vSwitch. Anything I want off my LAN goes to the DMZ vSwitch. My WSE 2012 is my primary server and has dedicated dual teamed GB. I could get by without the teaming and only need three NICs, but why should I?

If I wasn't running pfSense I wouldn't need the WAN vSwitch and could get by with just two NIC vSwitches. I do think you should dedicate your media server to its own NIC so it can serve its data without needing to share traffic.

I've got a standard 16 port Gbit switch and I'm down to like two or three available ports.
post #38 of 89
Quote:
Originally Posted by brianjb View Post

Why do you need so many NICs?

1 - I have seen numerous places recommend that the management network for ESXi should be on its own NIC (with its own subnet).

2 - If you are hosting storage (e.g., ZFS-based, unRAID, FlexRAID, WHS, etc.), I have seen it recommended that it have its own NIC.

3 - If you are using pfsense, it will need one additional NIC for the WAN input.

4 - at least one additional NIC for the remaining VMs.
post #39 of 89
Thread Starter 
Quote:
Originally Posted by brianjb View Post

Why do you need so many NICs?

I have 3 physical NICs, but I have 7 virtual NICs. AT&T sucks and requires a unique MAC address for each static IP they offer me. So I have one physical NIC connected to my uverse box. I then have 5 virtual NICs in pfsense for Internet, each grabbing a separate static IP, but they all share the same physical NIC. I then have one physical NIC connected to the LAN switch, but I have separate virtual NICs for DMZ and LAN connectivity. This wouldn't work if you had physical devices that you wanted connected to the DMZ segment, but it works fine if all the DMZ traffic is internal to ESX. My 3rd physical NIC is used for an iSCSI connection to my openfiler. You can always double up on connections and team them for higher bandwidth or redundancy, but is that really necessary in a home environment?

BTW, any of the Intel network cards are good choices. If you have the slots to spare, it is a lot cheaper to buy 2 single NICs than a dual.

My thinking on this is: the Realtek for dedicated IPMI, one onboard Intel 82574L dedicated to WHS, one onboard Intel 82574L for ESXi and various VMs, and an Intel dual-port for pfSense.
This is my first adventure into ESXi, so pfSense may be a month or so down the road. I just want some sort of game plan before I begin.

Thank you all for the suggestions and tips so far.
post #40 of 89
Quote:
Originally Posted by Andy_Steb View Post

My thinking on this is: the Realtek for dedicated IPMI, one onboard Intel 82574L dedicated to WHS, one onboard Intel 82574L for ESXi and various VMs, and an Intel dual-port for pfSense.
This is my first adventure into ESXi, so pfSense may be a month or so down the road. I just want some sort of game plan before I begin.

Thank you all for the suggestions and tips so far.

First of all, I'm not sure the Realtek NIC will work. I haven't tried one in a while, but last I checked, there was only one Realtek NIC that worked reliably, and it requires a manual driver install. Not really a simple process if you are unfamiliar with Linux.

Secondly, why would you dedicate the NICs to those VMs? Sorry if this is blunt, but it's a waste of bandwidth. Copying large files will suck up all the bandwidth from a single NIC, and it can cause stuttering if someone is watching a movie at the same time; however, dedicating a NIC to WHS won't remedy that. If you really feel you need the bandwidth, which is still highly suspect at this point, I still wouldn't dedicate a NIC to any VM. I would team NICs together on the VM network. That way all VMs are able to pull from the collective pool rather than having bandwidth sitting there wasted because it is way underutilized by that VM. From what you said, one NIC connected to your ISP modem and two NICs teamed on a separate switch from your ISP for the VM network will likely be more than sufficient. You should focus your attention on memory and disk I/O. Those are the two biggest bottlenecks on a virtual server.
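In ESXi terms, teaming is just adding both vmnics as uplinks on the VM network vSwitch and setting the failover/load-balancing policy. Roughly (the vSwitch and vmnic names are examples; adjust to what esxcli network nic list shows on your host):

  esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitch1
  esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch1
  esxcli network vswitch standard policy failover set --vswitch-name=vSwitch1 --active-uplinks=vmnic1,vmnic2 --load-balancing=portid

The portid policy (route based on originating port ID) needs no configuration on the physical switch; iphash does require a proper link aggregation group on the switch side.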
Edited by brianjb - 5/17/13 at 9:29am
post #41 of 89
Quote:
Originally Posted by tcs2tx View Post

1 - I have seen numerous places recommend that the management network for ESXi should be on its own NIC (with its own subnet).

2 - If you are hosting storage (e.g., ZFS-based, unRAID, FlexRAID, WHS, etc.), I have seen it recommended that it have its own NIC.

3 - If you are using pfsense, it will need one additional NIC for the WAN input.

4 - at least one additional NIC for the remaining VMs.

I addressed #'s 2, 3, and 4 in my OP.

As far as #1 goes, yeah, this is really best practice and a good idea in a corporate situation, but it is overkill in a home environment. How exactly would #1 benefit anyone in a typical home scenario?
post #42 of 89
Quote:
Originally Posted by bryansj View Post

The HP (Intel) 412648-B21 NC360T PCI-Express DP Gigabit Adapter is pretty cheap. Can be had for $30. I've also seen someone using a dual PCI-X Intel Gigabit card in a PCI slot and calculated that there is enough bandwidth. Those are about $15.

Where did you find that HP NIC for $15? Cheapest I found was $40, and it appears it comes with a low-profile bracket. And I am even more curious where the Intel card was found for $15. Newegg sells those for $150+.
Edited by brianjb - 5/17/13 at 9:26am
post #43 of 89
Thread Starter 
Quote:
Originally Posted by brianjb View Post

First of all, I'm not sure the Realtek NIC will work. I haven't tried one in a while, but last I checked, there was only one Realtek NIC that worked reliably, and it requires a manual driver install. Not really a simple process if you are unfamiliar with Linux.

Secondly, why would you dedicate the NICs to those VMs? Sorry if this is blunt, but it's a waste of bandwidth. Copying large files will suck up all the bandwidth from a single NIC, and it can cause stuttering if someone is watching a movie at the same time; however, dedicating a NIC to WHS won't remedy that. If you really feel you need the bandwidth, which is still highly suspect at this point, I still wouldn't dedicate a NIC to any VM. I would team NICs together on the VM network. That way all VMs are able to pull from the collective pool rather than having bandwidth sitting there wasted because it is way underutilized by that VM. From what you said, one NIC connected to your ISP modem and two NICs teamed on a separate switch from your ISP for the VM network will likely be more than sufficient. You should focus your attention on memory and disk I/O. Those are the two biggest bottlenecks on a virtual server.

The Realtek is built into the motherboard and dedicated to IPMI. It really has nothing to do with ESXi. Is it needed for a home server? Absolutely not. But if you have it available and want a headless system, then why not use it.

I will take your other suggestions under serious consideration. The whole point of my thread was to learn a little bit more before the building begins.
post #44 of 89
Quote:
Originally Posted by brianjb View Post

I would team NICs together on the VM network. That way all VMs are able to pull from the collective pool rather than having bandwidth sitting there wasted because it is way underutilized by that VM. From what you said, one NIC connected to your ISP modem and two NICs teamed on a separate switch from your ISP for the VM network will likely be more than sufficient.

If I am understanding this right, you are suggesting 3 total NICs in this situation? 1 NIC for the WAN and 2 NICs "teamed" to handle everything else? The only dedicated NIC that will be passed down will be on pfSense for the WAN, and everything on the LAN side will be on the "teamed" NICs?

Sorry for my ignorance, I too am trying to grasp a lot of this.
post #45 of 89
Quote:
Originally Posted by brianjb View Post

First of all, not sure the Realtek NIC will work. I haven't tried one in a while, but last I checked, there was only one Realtek NIC that worked reliably, and it requires a manual driver install. Not really a simple process if you are unfamiliar with linux.

The on-board RealTek on my board worked automatically. I think around ESXi 5.0 there was a preloaded driver added. All I can say is that it worked on whatever came on my board.
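If anyone does end up needing to add a NIC driver by hand, it's usually just a matter of copying the community driver VIB to a datastore and installing it from the ESXi shell, then rebooting. A rough sketch only; the file name and path below are placeholders for whatever driver package you actually download:

  esxcli software vib install -v /vmfs/volumes/datastore1/drivers/net-driver-example.vib --no-sig-check

Unsigned community packages generally need --no-sig-check (or a lowered acceptance level), and the host needs a reboot afterward.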
Quote:
Originally Posted by brianjb View Post

Where did you find that HP NIC for $15? Cheapest I found was $40, and it appears it comes with a low-profile bracket. And I am even more curious where the Intel card was found for $15. Newegg sells those for $150+.

No, the PCI-X card sells for about $15. The HP is in the $30 range. There is an HP right now on Amazon as a refurb, which I assume is another way of saying Server Pull, for a little over $30 shipped.

Dual PCI-X: http://thehomeserverblog.com/esxi/intel-pro1000-dual-gigabit-nic-pci-x-card-in-pci-slot/
post #46 of 89
Quote:
Originally Posted by mrted46 View Post

If I am understanding this right, you are suggesting 3 total NICs in this situation? 1 NIC for the WAN and 2 NICs "teamed" to handle everything else? The only dedicated NIC that will be passed down will be on pfSense for the WAN, and everything on the LAN side will be on the "teamed" NICs?

Sorry for my ignorance, I too am trying to grasp a lot of this.

Yes, that is what I am suggesting. Things change a bit if you plan to use iSCSI to connect to a NAS, or a DMZ that will connect to other physical devices. Otherwise, realistically 3 is all that is needed and someone could get away with as few as 2. In a home environment, 99% of the time you wouldn't notice the difference between 2 and 3 NICs. But for the cost of a $30 Intel PCIe NIC or the HP dual-port NIC Bryan mentioned, it makes sense to go with 3 for the extra headroom unless the budget is already stretched tight. It doesn't hurt to have extra, but it is just burning money if they are not needed.
post #47 of 89
I would suggest getting dual NICs since you can grab them for about the same cost as a single. No need to waste the slots. There is also a quad floating around that is a good deal as a server pull.
post #48 of 89
Quote:
Originally Posted by brianjb View Post

I addressed #'s 2, 3, and 4 in my OP.

As far as #1 goes, yeah, this is really best practice and a good idea in a corporate situation, but it is overkill in a home environment. How exactly would #1 benefit anyone in a typical home scenario?

Quoting from the last post in the thread linked below, which deals with virtualizing pfsense in ESXi, "I'll tell you right now what your problem is. Having your management share with your LAN over the same NIC is causing the issues. Without going into great detail, essentially there is a lot of broadcasting that is occuring over that same link. You need to separate those two (have your management on its own subnet) and you will see a great increase in performance)."

http://forum.pfsense.org/index.php?topic=45789.0

After investing significant amounts in a server-grade setup, an extra NIC or two doesn't seem like a bad idea (e.g., a dual Intel NIC is about $30 on eBay).
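For what it's worth, separating management doesn't require reinstalling anything; it's a matter of giving the management vmkernel port its own port group/uplink and subnet. A rough sketch from the ESXi shell (interface name, port group, and addresses are examples only):

  esxcli network vswitch standard portgroup add --portgroup-name=Mgmt --vswitch-name=vSwitch3
  esxcli network ip interface add --interface-name=vmk1 --portgroup-name=Mgmt
  esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.2.10 --netmask=255.255.255.0 --type=static

You'd then tick the "Management traffic" box for that port group in the vSphere Client, and make sure the new interface works before removing the old one so you don't lock yourself out.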
post #49 of 89
Quote:
Originally Posted by tcs2tx View Post

Quoting from the last post in the thread linked below, which deals with virtualizing pfsense in ESXi, "I'll tell you right now what your problem is. Having your management share with your LAN over the same NIC is causing the issues. Without going into great detail, essentially there is a lot of broadcasting that is occuring over that same link. You need to separate those two (have your management on its own subnet) and you will see a great increase in performance)."

http://forum.pfsense.org/index.php?topic=45789.0

After investing the significant amounts for a server-grade setup, an extra NIC or two doesn't seem like a bad idea (e.g., dual Intel NIC about $30 on Ebay).

No disagreement with the NICs, now that Bryan enlightened me on some of the inexpensive dual port NICs available. When I made my original post, it was with the thought of spending $150 on a dual port NIC from Newegg. There are some decent server pull options on eBay as well.

I still question the benefit of dedicating a NIC to the management network in this scenario. It certainly doesn't hurt if you have a spare NIC to use, but it hardly seems like a requirement. I have run pfSense as a VM for a long time, and know others that have as well, and have yet to notice the behavior described in the thread you linked to. I did not notice the behavior in ESXi 3.5, 4, or 5 or with any of the updates for those versions. In fact I just did the same test as described in the thread and am not experiencing that behavior now. I even did some of my own stress testing with pfsense on the same network and on a different network and got the same results. I am sorry, but based on my experience as well as others I know, your conclusion that this applies to everyone because of this one example seems to be a stretch.
post #50 of 89
Thread Starter 
Quote:
Originally Posted by brianjb View Post

No disagreement with the NICs, now that Bryan enlightened me on some of the inexpensive dual port NICs available. When I made my original post, it was with the thought of spending $150 on a dual port NIC from Newegg. There are some decent server pull options on eBay as well.

I still question the benefit of dedicating a NIC to the management network in this scenario. It certainly doesn't hurt if you have a spare NIC to use, but it hardly seems like a requirement. I have run pfSense as a VM for a long time, and know others that have as well, and have yet to notice the behavior described in the thread you linked to. I did not notice the behavior in ESXi 3.5, 4, or 5 or with any of the updates for those versions. In fact I just did the same test as described in the thread and am not experiencing that behavior now. I even did some of my own stress testing with pfsense on the same network and on a different network and got the same results. I am sorry, but based on my experience as well as others I know, your conclusion that this applies to everyone because of this one example seems to be a stretch.

Let's put it this way. I'll have 2 onboard NICs and I'll add a $25 Intel dual-port NIC from eBay. How would you assign the NICs?

tcs2tx gave one option; I'm guessing yours will involve teaming. Or do I just use 3 and save the fourth for later down the road?
Edited by Andy_Steb - 5/18/13 at 6:31pm
post #51 of 89
Quote:
Originally Posted by Andy_Steb View Post

Let's put it this way. I'll have 2 onboard NICs and I'll add a $25 Intel dual-port NIC from eBay. How would you assign the NICs?

tcs2tx gave one option; I'm guessing yours will involve teaming. Or do I just use 3 and save the fourth for later down the road?

I would set it up the way tcs2tx suggested if you have the spare. You might as well, since you have no better use for the 4th NIC. So 1 for WAN, 2 for the VM network, and 1 for the management network. It is best practice to separate them, but that best practice was created with typical corporate loads in mind, and 9 times out of 10 you won't see any difference if you split them out in a typical home environment. As I said, I see no difference, but that's not to say others haven't, and there is likely something unique about their implementation beyond just pfSense.
Edited by brianjb - 5/19/13 at 5:38pm
post #52 of 89
Well here goes, I ordered everything I think I need to get my ESXi 5.1 up and running:

CPU: AMD FX-8350 8-core
MoBo: ASRock 970 Extreme4
Memory: 4x8GB Corsair
GPU: ATI 5450
SSD: 64GB Corsair
HDDs: 3 x 4TB Seagate drives for NAS, 1 x TB Green for misc. use
PCIx: 2 x dual-port Intel/1000, 1 x IBM M1015 SATA controller

I want to run a NAS VM (WHS + FlexRAID), pfSense, HomeSeer on a Windows XP OS, a Plex server (don't know if this will be a dedicated VM), and others

I probably went overboard on memory and cpu but I want to make sure the horsepower is there. Looking forward to experimenting!
post #53 of 89
Good luck! My build is very similar (same CPU and MoBo, but only 16GB of memory).

I tested it the weekend before last and ESXi had no issue with the onboard NIC, but I'll probably add at least one more to dedicate to the WHS 2011 VM.

This weekend I'll be flashing my IBM M1015 and making sure that works, then hopefully by Monday it's time for the full build!
post #54 of 89
If I go with WHS + FlexRAID, what is the best firmware to crossflash the M1015 to?

Is it LSI9211-IT or LSI9211-IR?

Just planning ahead
post #55 of 89
LSI9211-IT
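The crossflash itself is the usual megarec/sas2flsh routine from a DOS (or EFI shell) boot stick. Very roughly, and only as a reminder of the shape of it (follow a current M1015 crossflash guide for your exact board; the firmware and BIOS file names below are just the ones commonly shipped in LSI's 9211-8i packages):

  megarec -writesbr 0 sbrempty.bin          (wipe the IBM SBR)
  megarec -cleanflash 0                     (erase the existing flash, then reboot)
  sas2flsh -o -f 2118it.bin -b mptsas2.rom  (flash IT firmware; skip -b if you don't need to boot from the card)
  sas2flsh -o -sasadd 5005076xxxxxxxxx      (re-enter the SAS address from the sticker on your card)

On some UEFI-only boards the DOS sas2flsh errors out and you have to use the EFI version (sas2flash.efi) instead.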
post #56 of 89
I shared this with a fellow poster in a PM... and thought I'd share here:



I have a home-built system based on
  • Athlon II X4 630 quad-core processor (already had this laying around)
  • ASRock 990FX Extreme4 motherboard (I purchased this as it has VT-d/IOMMU support for PCI passthrough). It has 6 AMD SATA-III ports and an extra 2 SATA-III ports based on the Marvell SE9120 chip... for a total of 8 SATA-III ports, plus a Broadcom BCM57781 NIC
  • 16GB of cheapo RAM (I originally ordered 32GB, but two of the DIMMs were faulty and Newegg was out of the replacements, so I just took the refund and kept the 16GB that worked correctly)
  • I just purchased an IBM M1015 SATA card for passthrough, as I was getting some odd behavior from the on-board AMD SATA ports... even running a full OS natively.
  • I had an HP quad-port gigabit NIC (Intel 82571EB based) already on hand to throw in the box
  • I also had an Intel CT Desktop gigabit NIC (Intel 82574L based) to throw in the box (these are very reliable, cheap NICs that I highly recommend if you need an OS/VM-platform-agnostic NIC. It will work on practically any OS, and is perfect in VMware)
  • Two cheapo SIL3124-based 2-port SATA-II cards for extra storage needs
  • An old AMD Radeon HD 2400 graphics card since the mobo doesn't have on-board video. It's passively cooled so I don't have to worry about a fan going out. I might see if I can get a really old PCI-slot VGA card so I can reclaim the 16x PCIe slot... they are about $15 on eBay.


The trials with this hardware...

Now... let me tell you my tale... I originally wanted to build the box as a VMware host and run a storage VM on it that used the ZFS filesystem. This was in January, and the box was very unstable with the 32GB I originally put in it. I later tested the memory and found two of the DIMMs faulty, so I sent them back to Newegg to get replacements, but they were out of that model, and I just took the refund instead.

So I set about getting ESXi installed, and realized that some of my hardware just wasn't going to work well. At the time, my SIL3124 SATA-II cards were not compatible with ESXi 5.0 update 1 or 2. Plus the motherboard had two extra Marvell SE9120-based SATA-III ports... which ESXi 5.0 update 1 or 2 didn't support either. So I was down to just using the 6 on-board AMD-based SATA ports with ESXi 5.0. At the time, ESXi 5.1 was out and I gave it a whirl... which added support for my SIL3124 SATA cards but not the Marvell ports. So I had 10 working SATA ports. Also adding to my frustration, the on-board Broadcom BCM57781 NIC didn't work well in ESXi 5.0... and was still very flaky in 5.1. Passing through a SATA card to a VM gave me more instability and an occasional PSOD (Pink Screen of Death... ESXi's version of Windows' famous BSOD).

I didn't want to use a storage VM with RDM-mapped drives; I wanted native access, as ZFS wants to control the SATA controller and the hard drives. This is important because ZFS can't use its more advanced features to detect controller or hard drive faults if the disks are virtualized. I needed that hardware passthrough to work.


Trying the hardware in native mode

I had another ESXi box running 5.0 U1 on a very cheapo AMD build (Sempron dual-core processor, cheap 880G-based mobo, 8GB of RAM). It was running all my production VMs very well and was very stable. So I decided to just give up on ESXi on the new hardware and run OpenIndiana + Napp-It natively on it instead. OpenIndiana is Solaris based, so it also had issues with some of my hardware. It didn't like the Marvell SATA controller, but it did like the SIL3124 cards. So I still had 10 SATA ports to work with. I figured I could just run the native OpenIndiana build to serve as a storage backend for my running ESXi box and handle general file-server duties.

The OpenIndiana box ran very well out of the box. I tweaked a few things here and there and everything was working great. Then I had a rash of hard drive failures that led to my ZFS RAID finally giving up the ghost. It was partially my fault, as I had some old hard drives in there, and it seemed my newer drives were just getting trashed by all the activity.

I was a little pissed by this time, and thought I should look elsewhere for a different way to share out storage and do basic file-server duties. I threw on a copy of Windows Server 2012 natively and played with Storage Spaces... but the write performance was just abysmal. I decided to give ESXi another shot.

I checked VMware's site and discovered that they had just released ESXi 5.1 update 1. So I backed all my data off of the 2012 install and tried it out.


Going back to ESXi

Holy cow!!! ESXi 5.1 U1 detected my Broadcom NIC, it detected my SIL3124 SATA cards, and it even detected the Marvell SATA ports on the mobo. ESXi always worked with the Intel-based NICs, by the way. So at this point I had access to all 12 of my SATA-II and III ports... and I had access to all 6 of my NICs. I thought I had a winner!

Since I had my data sitting on another box I decided to take it slow and test and prove out everything before making it live.

I didn't have the IBM M1015 card yet, so I passed through the 6 on-board AMD SATA ports to an OpenIndiana VM to test out ZFS again. My thought process was that the improvements VMware made in ESXi 5.1 U1 might make the system a little more reliable, considering my problems before. OI installed fine, and I had everything working in no time. But... OI as a VM was not providing the storage performance I was expecting. So I decided to go ahead and buy the IBM M1015 card so that I could pass it through. This card is very well known in the VMware and OpenIndiana world, so I thought this would work perfectly. It's highly recommended.


OpenIndiana virtualized networking disappoints

While I was waiting on the IBM card, I started to test the network on OpenIndiana, and after some iperf tests and some research, I discovered that the VMXNET3 virtual NIC is not very good in OI. I tried everything, but the virtualized NIC was just not giving me the performance I needed. So I was wrongly blaming the SATA ports for my storage performance when it was actually the VMXNET3 NIC causing the problems in my virtualized OI VM. I didn't have these problems when I ran OpenIndiana natively, and I didn't feel like fighting with a virtualized OI anymore or settling for the emulated e1000 NICs. I wanted the 10Gb/s performance between VMs that you get with the VMXNET3 NICs. This would make my VMs scream from a storage standpoint, whereas the e1000 NICs would only give me a gigabit of network performance, and I didn't feel like going through the hassle of teaming virtual e1000 NICs... I just shouldn't have to. OpenIndiana works really well natively... I'd stay away from it if you want top-notch VMXNET3 NIC support.
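For anyone wanting to reproduce that kind of test, it's just iperf between two VMs on the same vSwitch (the IP below is an example):

  # on the storage VM
  iperf -s
  # on another VM: 30-second test, 4 parallel streams
  iperf -c 192.168.1.50 -t 30 -P 4

With VMXNET3 on both ends and both VMs on the same vSwitch, the traffic never leaves the host, so the number you get reflects the virtual NIC and host limits rather than the physical network.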


Decisions and testing to replace OI... the outliers

So I decided to give up on a virtualized ZFS-based VM and thought about my options. There are the FreeBSD-based ZFS distros like FreeNAS and NAS4Free... but I'd played around with NAS4Free on a native machine once and I just didn't trust FreeBSD's version of ZFS with my data. ZFS is a Solaris product, and all the tests I've seen online showed that ZFS on Solaris was night-and-day better than what FreeBSD had to offer. Plus I didn't like NAS4Free's weird folder permissions. It took way too much command-line work to get permissions set correctly on folders... even though it has a nice GUI for everything else. Then there is Nexenta, which is somewhat Solaris based, but the free version had an artificial limit of 18TB of raw storage space. I don't have 18TB now, but I didn't want to have to face that down the road. And since Nexenta is somewhat Solaris based, I thought I might see the same network problems in a virtualized Nexenta with the VMXNET3 NICs.

I'd already tried Server 2012's version of Storage Spaces, and that was going to be a no-go for me. So I started looking at Linux options. I could probably "roll my own" with Ubuntu, but at this point I just wanted to use something that worked. ZFS is available on Linux, but it's in its infancy, and the performance is not that great from what I've read online. I tried it out, and they were right. I didn't want to spend too much time tweaking for performance when ZFS isn't natively integrated with Linux. Btrfs is the Linux equivalent to ZFS, but it's still in development and doesn't have all the features of ZFS yet. So I looked at some purpose-built Linux NAS distros...


The Linux options

I had already played with Amahi before... wasn't impressed. Nothing against the product... I seriously considered it as a WHS replacement once the WHS 2011 bomb dropped about not having Drive Extender. Greyhole just seemed like a kludge to me... and although easily fixed, I didn't like the fact that Amahi out of the box tries to be your DHCP server. This is ridiculous... everyone has a home router of some sort that already does this function... and does it reliably. If I need to service the Amahi box, I don't want to be without DHCP. This is one of the main reasons I am a huge proponent of single-purpose machines/VMs. That way one thing doesn't take out all the other functions that you depend on. I'll never buy a TV with a built-in DVD player. Once the DVD player craps out, then all you have is the TV... or vice versa. This is also what led me down the path of virtualization years ago, as I could build purpose-built, single-function VMs on a stable back-end platform. It's almost a contradiction of ideas, as all the VMs rely on the stable back-end... but the trade-off is worth it in my opinion. So Amahi was out of the picture... too many eggs in one basket for me. It tries too hard to do everything, which usually means it doesn't excel at anything.


I then looked at OpenFiler and OpenMediaVault. OpenFiler is somewhat of an enigma... in my research it works, and works very well at what it does. But development seems to have stopped, and it just didn't seem right to commit to a dead-end product. So I finally tried OpenMediaVault. It's pretty easy to get installed and working... and while it is not the most bleeding-edge kernel, it should be very stable and it has a decent community behind it. So up it went into a VM... I passed through the 6 AMD SATA ports and built a RAID 10 with 6 2TB drives. Initial impressions were that it would be fine and do what I need. It supported the VMXNET3 NIC and seemed stable enough as a VM. Iperf tests showed promising results between VMs... I did a lot of research and tweaked NFS and TCP settings to get the maximum speed out of the VMXNET3 driver. Between a Server 2008 VM and OpenMediaVault, I was able to get up to 5Gb/s of pure network performance with them both running VMXNET3 NICs. This 5Gb/s speed is pretty good... the VMXNET3 NICs will be limited by the bus speed of your host. Considering I'm running an Athlon II processor and not-so-fast memory... I found this acceptable.

The NFS tweaks did the most good. When I exported a virtual hard drive to the Server 2008 VM through ESXi over NFS, I was able to get real-world performance of 229-277MB/s (write and read, respectively) out of my RAID 10 running on a virtual OMV, exported as a VMDK over NFS on an ESXi datastore. It provides anywhere from 1337-27122 IOPS (27K IOPS! wowsers!) depending on the function. Pretty good! I could probably get more performance if I went iSCSI directly from OMV to the Server 2008 VM... and I would... but I'm not that familiar with OMV yet, and it seems that you can only dedicate a whole volume (block I/O, not file I/O) to their implementation of iSCSI. But 250MB/s isn't that bad... it's better than gigabit LAN speeds, and that's what I was really after. If I only wanted gigabit speeds then I would have settled on the emulated e1000 NICs a long time ago.
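To give an idea of what "NFS tweaks" means in practice, here is a rough sketch with generic Linux NFS export options and an example esxcli mount (not OMV-specific settings; the paths, IPs, and names are placeholders):

  # /etc/exports line on the storage VM; async boosts write speed at the cost of safety on power loss
  /export/vmstore  192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)
  # mount it as a datastore from the ESXi shell
  esxcli storage nfs add --host=192.168.1.50 --share=/export/vmstore --volume-name=omv-nfs

After that the datastore shows up like any other and you can put VMDKs on it.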


I migrated the RAID 10 from the built-in AMD SATA ports to the IBM M1015 card without issue when it arrived. Passthrough on ESXi with my hardware is easy and stable now on version 5.1 update 1. If you've had instability before, I highly recommend this version of ESXi.


Conclusion

So that's where I'm at now. I'm reloading all my data onto the OMV virtual machine... CIFS/Samba performance is decent... I get 70-80MB/s over gigabit LAN... so that's acceptable. NFS now screams... which is important, as ESXi detects NFS storage as being "online" faster than iSCSI. This matters if you have to reboot your ESXi box (power failure, hardware issue, whatever)... and you are depending on that storage VM to come up first to provide a datastore for your other VMs. With iSCSI, you might have to do a rescan of the storage bus in the ESXi console to re-detect your LUNs. I plan on migrating the other production VMs from my other working ESXi box soon. I am also going to load CrashPlan headless on it to back up to their cloud service. It saved my bacon on my native install of OpenIndiana right before my ZFS RAID died.
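If that ever bites you, the rescan is a one-liner from the ESXi shell (or the "Rescan All" button in the vSphere Client):

  esxcli storage core adapter rescan --all

NFS datastores just reappear on their own once the storage VM is back up.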


I currently run the following VMs in production:
  • IPFire as my router/firewall. It's a great little distro that is very easy to use. It is able to keep up with my 50Mb/s up/down fiber internet service... even virtualized. I've used it on Hyper-V, VMware, and Proxmox (KVM) with great success. I don't know why people are so in love with pfSense.
  • Windows 7 VM that has access to the HDHomeRun Prime CableCARD network tuners. This serves the kids' Media Center extenders to allow them to watch the channels I approve of. I can customize the guide to only show the appropriate channels for them, while keeping the full guide on the two other physical HTPCs in the house for the "adults". :/
  • Another Windows 7 VM that I use to play around with. It is my internet sandbox that I use to access "seedy" areas of the internet. I don't care if it gets jacked with viruses... I can just restore it from a clone or snapshot. Everything else in the house (physical and VM) has anti-virus, so no worries about a jacked VM spreading a virus... and if it did... that's what backups are for.
  • OpenVPN server based on Ubuntu. It allows my in-laws to access my videos and pictures from their remote Media Center box just as if their machine were on my local network
  • PIAF which is PBX In A Flash... a CentOS based Asterisk PBX distro that allows me to have free home phone service through Google Voice.
  • WHS v1, which is basically relegated to providing backups of the Windows systems in the house and offloading TV recordings from the HTPCs.
  • WHS 2011 - just loaded this up to play with it. Since it's Server 2008 based, the VMXNET3 NIC has better performance than in the WHS v1 virtual machine. WHS v1 is Server 2003 based, and its NIC performance is fine (about 1Gb/s on VMXNET3)... but Server 2008 runs better. I might migrate all WHS functions currently on the WHS v1 VM to this one.
  • Server 2012 just to play with and learn
  • Server 2008 just to play with
  • OpenIndiana VM just to play with to see if I can ever get it to work well again virtualized.
  • Coming soon: a vSphere Management Assistant VM to play with UPS support through ESXi... to allow me to gracefully shut down the VMs and ESXi in case of a power outage.
  • Anything else I want to play with.

I love virtualization, as it allows you to do so much with your existing hardware and use it to its fullest potential. Most purpose-built physical machines rarely use all their abilities considering the electricity they consume. Virtualizing functions onto a single server (or multiple servers) allows you to effectively utilize all the abilities of the hardware, spread out among the functions you desire. And it lets you separate duties so that when the TV crashes, it doesn't kill the DVD player (referenced above).


I plan on upgrading the CPU to an 8 core AMD FX processor in the next month or so to expand the resources available to my VMs... and probably see if I can get another 16GB of RAM. That will give me a grand total of 8 cores, 32GB of RAM, 20 ports of SATA II&III (4 SATA-II, and 16 SATA-III), and 6 Gigabit NICs. This is a lot of power for a very little bit of money on commodity/consumer hardware.



If you have any questions about any of the above, please don't hesitate to ask. I've used Hyper-V, KVM, Xen, VMware... I've run into most problems in virtualization before... so I can help out where I can.
post #57 of 89
Quote:
Originally Posted by mrted46 View Post


I probably went overboard on memory and cpu but I want to make sure the horsepower is there. Looking forward to experimenting!

I don't think you can go overboard with memory. I have 24GB in my server, and I wish I had at least double that. Once you find a few uses for virtualization, you will keep finding more. Even if you don't need it initially, some day you will be glad that you have all that RAM.
Edited by brianjb - 5/23/13 at 2:52pm
post #58 of 89
Quote:
Originally Posted by mrted46 View Post

Well here goes, I ordered everything I think I need to get my ESXi 5.1 up and running:

CPU: AMD FX-8350 8-core
MoBo: ASRock 970 Extreme4
Memory: 4x8GB Corsair
GPU: ATI 5450
SSD: 64GB Corsair
HDDs: 3 x 4TB Seagate drives for NAS, 1 x TB Green for misc. use
PCIx: 2 x dual-port Intel/1000, 1 x IBM M1015 SATA controller

I want to run a NAS VM (WHS + FlexRAID), pfSense, HomeSeer on a Windows XP OS, a Plex server (don't know if this will be a dedicated VM), and others

I probably went overboard on memory and cpu but I want to make sure the horsepower is there. Looking forward to experimenting!

Good looking build. It should do really well and you hit a nice price to performance balance. You can't really have too much RAM for hosting VMs. Only thing I may tweak in the future is to find an old PCI (plain PCI) video card to open up a PCIe slot since I've used up all three with the M1015, dual Intel Gbit NIC, and my old Nvidia 430. I'd also like to add more SSDs for the VMs, but I'll just upgrade the other SSDs in the house and retire the smaller ones to the server.
post #59 of 89
Quote:
Originally Posted by bryansj View Post

Good looking build. It should do really well and you hit a nice price to performance balance. You can't really have too much RAM for hosting VMs. Only thing I may tweak in the future is to find an old PCI (plain PCI) video card to open up a PCIe slot since I've used up all three with the M1015, dual Intel Gbit NIC, and my old Nvidia 430. I'd also like to add more SSDs for the VMs, but I'll just upgrade the other SSDs in the house and retire the smaller ones to the server.

That's a great idea about the PCI graphics card. Now that I think about it, I have 2 old Intel Pro/1000 regular PCI cards I can also throw in there to dedicate to a VM.
post #60 of 89
Thread Starter 
I got it all put together, so I figured I would post up some shots.

ESXi is all new to me, but I managed to stumble through it and got WHS back up and running.