
ESXi Server with virtual -- WHS2011, flexraid, Win7 WMC ? - Page 2

post #31 of 70
Quote:
Originally Posted by rekd0514 View Post

Meh, I think I can handle it. I'll just get it set up the way I want over time. I already run pfSense, so I know networking fairly well, and I have Cat 6 and ProCurve 1410-24G switches in the house, so I'm set there. I just need to buy the proper hardware for the project. I've done quite a bit of reading on ESXi best practices already, and it doesn't seem too bad, just time-consuming to set up. The main thing is figuring out how I want to arrange all the HDDs for proper management and backup.

In my experience, the only way you really learn it is to start messing around with it.
No doubt, and I certainly did not want to discourage you in any way. My apologies if it came out like that. There are just lots of pitfalls when using hypervisors and VMs.

That being said, if you are set on going down this road:

- Get hardware (motherboard/CPU/chipset) that is well supported by ESXi. Otherwise you'll end up troubleshooting more issues with ESXi itself than with what you're actually trying to set up.
- If you're going to set up a storage server as a VM under ESXi, you really have two options for good performance: pass through a RAID card, or pass through the disks themselves. You "could" potentially pass through even the onboard controller (like ICHR xx), but that is not recommended and sometimes causes unintended consequences. Since passthrough is so important, choose your hardware wisely: make sure your motherboard has the number and type of slots you need, and that they are not behind some kind of PCI bridge. ESXi doesn't really like slots behind PCI bridges.
- If it's a small storage server, let ESXi detect the hard drives on the onboard controller and mark them for passthrough. See: http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/
- Once you do the above, your VM has "native" access to those drives and life should be good.
- Spend some time setting up your network; this is crucial for good performance with VMs. In addition, I'd ideally choose hardware with at least two gigabit network adapters, if not more. It just makes life easier.
- If you're going to run pfSense as a VM on this same server, I'd suggest getting a dedicated PCI/PCIe/PCI-X network adapter with at least two ports (preferably four) and passing it through to the pfSense VM. A tip: physically label your network ports and cables; you'll be thankful later.
- If you're going to run Windows 7 (or any other OS) and plan to use a graphics adapter passed through to it, start reading the VMware community forums.

Other than these potential pitfalls, ESXi is actually quite powerful and you can do a number of things that'd be difficult with physical hardware.
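The local-SATA RDM trick in the linked article comes down to a couple of commands on the ESXi host. A sketch only — the device identifier and datastore path below are hypothetical placeholders, and your actual `t10.ATA…` ID will differ:

```
# On the ESXi host (SSH / Tech Support Mode): list local disks to find the device ID
ls -l /vmfs/devices/disks/

# Create a physical-compatibility RDM pointer file on an existing datastore
# (-z = physical compatibility mode; -r creates a virtual compatibility RDM instead)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
    /vmfs/volumes/datastore1/rdms/disk1-rdm.vmdk
```

You then attach the resulting `.vmdk` to the VM as an existing disk, and the guest sees the raw drive.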
post #32 of 70
Quote:
Originally Posted by kapone View Post

No doubt, and I certainly did not want to discourage you in any way. My apologies if it came out like that. There are just lots of pitfalls when using hypervisors and VMs.

That being said, if you are set on going down this road:

- Get hardware (motherboard/CPU/chipset) that is well supported by ESXi. Otherwise you'll end up troubleshooting more issues with ESXi itself than with what you're actually trying to set up.
- If you're going to set up a storage server as a VM under ESXi, you really have two options for good performance: pass through a RAID card, or pass through the disks themselves. You "could" potentially pass through even the onboard controller (like ICHR xx), but that is not recommended and sometimes causes unintended consequences. Since passthrough is so important, choose your hardware wisely: make sure your motherboard has the number and type of slots you need, and that they are not behind some kind of PCI bridge. ESXi doesn't really like slots behind PCI bridges.
- If it's a small storage server, let ESXi detect the hard drives on the onboard controller and mark them for passthrough. See: http://blog.davidwarburton.net/2010/10/25/rdm-mapping-of-local-sata-storage-for-esxi/
- Once you do the above, your VM has "native" access to those drives and life should be good.
- Spend some time setting up your network; this is crucial for good performance with VMs. In addition, I'd ideally choose hardware with at least two gigabit network adapters, if not more. It just makes life easier.
- If you're going to run pfSense as a VM on this same server, I'd suggest getting a dedicated PCI/PCIe/PCI-X network adapter with at least two ports (preferably four) and passing it through to the pfSense VM. A tip: physically label your network ports and cables; you'll be thankful later.
- If you're going to run Windows 7 (or any other OS) and plan to use a graphics adapter passed through to it, start reading the VMware community forums.

Other than these potential pitfalls, ESXi is actually quite powerful and you can do a number of things that'd be difficult with physical hardware.

I was planning on getting the H200 or M1015 and passing it through for the WHS 2011 VM alone, then using the built-in controller for the WMC and pfSense VMs. The pfSense VM will have a dedicated Intel ET dual NIC (I have two of these already). I actually want a dedicated port for management and for each VM as well; I don't want to be struggling for bandwidth on any of the NIC ports.

I plan on using extenders only for video/TV playback, so the server will remain headless, as WHS 2011 is meant to be. I just really like that I can get it all running in one box instead of the three separate ones I have now.
post #33 of 70
Quote:
Originally Posted by Mfusick View Post

Where's a good place to learn this stuff?

This is where I got most of my information: http://lime-technology.com/forum/index.php?topic=14695.0. It's obviously geared towards unRAID, but it has a bunch of basic information.
post #34 of 70
Quote:
Originally Posted by joemoma View Post

This is where I got most of my information: http://lime-technology.com/forum/index.php?topic=14695.0. It's obviously geared towards unRAID, but it has a bunch of basic information.

This is a very good resource. I used this when I set up my ESXi server and it worked like a champ. [H]ardforum also has some good info.

As has been stated before, be careful when picking your hardware. I got lucky with my Gigabyte X58 motherboard and i7 920; I had to e-mail Gigabyte support to get a special BIOS for it. So consumer hardware is definitely hit or miss. Your best bet is to go with a Supermicro board: you know it's going to work, and it has dual NICs and IPMI, which will come in handy. My actual "production" server runs on a Supermicro board and I'm very happy with it.
post #35 of 70
Any difference if I am on FlexRAID and not unRAID?
post #36 of 70
I am running Server 2012 Standard with FlexRAID under ESXi 5.1 without much issue. Setting up the RDMs was a bit of a pain, since I am using the onboard SATA controller rather than passing through an HBA, but once that was handled, everything was no different from working with a physical server from the Server 2012 perspective. FlexRAID sees the drives without issue, I get SMART data, etc.
post #37 of 70
Quote:
Originally Posted by Mfusick View Post

Any difference if I am on FlexRAID and not unRAID?

No, not really. I had FlexRAID running on a WS 2012 Essentials ESXi guest. I passed through an IBM M1015 card to the guest and configured it like any other FlexRAID machine.
post #38 of 70
I finished my ESXi upgrade. My original intent was to do an all-in-one box with a storage VM providing storage for the ESXi box, all the other VMs, and the HTPCs.

Well... "pick your hardware wisely" is the best advice I can give. I tried to go the cheapest route possible and ended up having to rethink my strategy.

I ended up running two boxes: one with OpenIndiana installed natively running ZFS, and a separate box running ESXi. Passthrough on my mobo worked, but it just wasn't stable. I had issues with a faulty DIMM (test those before you start loading your OS!) and a faulty HDD that, for all intents and purposes, looked fine in Windows. Luckily ZFS can find these faults, and I'm going to RMA the drive.

My biggest cheap-out was not buying a good HBA, hoping I could get by with the mobo SATA ports... ESXi would choke and laugh at me for doing so.

I'm happy with the setup now, as I know the OpenIndiana box will be a stable workhorse, and its native performance is astounding! I still get to run all my VMs on my current ESXi box. The flexibility a separate storage box provides may have swayed me from the "true" all-in-one plan I had. But no doubt I'll be using server-grade parts in my next ESXi upgrade.

By the way, check out IPFire for a really good firewall distro to virtualize. I've gotten it to run on Hyper-V, KVM/Proxmox, and ESXi with ease and with excellent performance.
post #39 of 70
Came across this thread and wanted to share my experiences, as I was after a very similar setup:
  • ESXi v5.1 on an X8SIL-F / Xeon 3440 / 32GB ECC RDIMM
  • NAS VM (running Nexenta) with HBA passthrough of an LSI 9211-8i
  • Dedicated VM for CrashPlan (with NFS access to all shared storage on a dedicated 'data' vSwitch)
  • Datastores served by the NAS for the other VMs, including a 'network' VM (DNS/DHCP/LDAP/firewall) running Zentyal
  • Windows 7 VM as a headless WMC server for extenders
  • HDHomeRun Prime tuner card
  • Ceton Echos as extenders

I can say that everything works extremely well, with a few caveats:
  1. I moved all of my VM OS drives to SSD. I found this much more manageable when I was messing around with my NAS server. I started with just my network VM (since the entire network would go down whenever I was working on the NAS server), but eventually ended up putting everything there for speed and simplicity. Extended virtual storage drives are still served by the NAS datastore, which is set up over NFS.
  2. I needed to use the Digital Cable Advisor hack to get WMC to work, since I don't have any GPU passthrough. After applying the hack and going through setup again, everything worked fine.

Anyway, just wanted to throw my config out there - happy to talk specifics if anyone is interested.
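For anyone wiring up the same arrangement, mounting the NAS's NFS export as an ESXi datastore is a one-liner from the ESXi shell. The host address, share path, and datastore name below are hypothetical examples:

```
# Mount an NFS export from the NAS VM as an ESXi datastore
esxcli storage nfs add -H 192.168.1.50 -s /tank/vmstore -v nas-datastore

# Confirm it mounted
esxcli storage nfs list
```

The same mount can also be added from the vSphere Client under Configuration > Storage > Add Storage.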
post #40 of 70
Quote:
Originally Posted by rc05 View Post

Also, Windows Media Center doesn't run on WHS or Windows Server, so you would need Windows 7 at least for WMC.

There are tools to convert server OSes to workstations. You can also do it manually by adding server roles and features, and one could add WMC to WHS that way. I haven't tried it with WHS, but I am able to get WMP and most other media software working on WS2012. On WS2012 you must download it separately from MS, because it was sold separately for Windows 8. You might be able to get it working a little smoother on WS2008R2, however.
post #41 of 70
Quote:
Originally Posted by 72.9.159.100 View Post

There are tools to convert server OSes to workstations. You can also do it manually by adding server roles and features, and one could add WMC to WHS that way. I haven't tried it with WHS, but I am able to get WMP and most other media software working on WS2012. On WS2012 you must download it separately from MS, because it was sold separately for Windows 8. You might be able to get it working a little smoother on WS2008R2, however.
No, you can't. You can only do it if Microsoft specifically offers a role or feature that enables it, and it doesn't for WHS.

I don't consider hacked/hex modified DLLs as a solution.
post #42 of 70
So WHS does not have the role? What about Windows Media Services?
post #43 of 70
Is there any reason why you guys are using an HBA like the IBM M1015 and software RAID (FlexRAID), instead of a hardware RAID card like a Dell PERC 6/i or the IBM M5015?

I'm looking to virtualize my setup, and have been going back and forth.

Sent from my Galaxy Nexus using Tapatalk 2
post #44 of 70
Hardware RAID is going to be much more sensitive to power loss. Further, with FlexRAID I can pull an individual drive and instantly have access to the data on it. When I write a file, it is only written to one drive, and the snapshot then builds the parity on a schedule, versus constantly writing to all drives in a RAID array.

For home, it just makes more sense (for me at least).
post #45 of 70
Quote:
Originally Posted by Mason736 View Post

Is there any reason why you guys are using an HBA like the IBM M1015 and software RAID (FlexRAID), instead of a hardware RAID card like a Dell PERC 6/i or the IBM M5015?

Sometimes the ride is more fun if you invented the wheel yourself?

To me, it sounds like making a PB&J sandwich with the peanut butter on the outside because you want the option of scraping it off and using it somewhere else if you change your mind. Some call it a feature, others call it a hot mess.
post #46 of 70
Quote:
Originally Posted by Mason736 View Post

Is there any reason why you guys are using an HBA like the IBM M1015 and software RAID (FlexRAID), instead of a hardware RAID card like a Dell PERC 6/i or the IBM M5015?

I'll most likely be using that card when my existing storage expands further.

The M5015 has big advantages in XOR calcs, great for RAID 5.

The M1015 is actually much better for JBOD, which is what a lot of people prefer. We have all the throughput we need in an HDD. If they just made cheap 50TB HDDs, I might have used one of those with CrashPlan, but since that isn't where the technology is right now, we have to use multiple large-volume storage disks. For a home server it shouldn't really be about performance, since regular HDD throughput is plenty. The two things I get from software RAID are one unit of risk recovery (any one of my HDDs can fail at a time) and "easy" sharing (just one drive to map rather than all the individual drives).
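The XOR parity that both the hardware cards and FlexRAID's snapshot mode compute is simple enough to sketch in a few lines of Python. A toy illustration of the principle, not FlexRAID's actual code:

```python
def compute_parity(blocks):
    """XOR corresponding bytes across all data 'drives' to form the parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

def recover(surviving_blocks, parity):
    """Rebuild one missing block: XOR of the survivors plus parity cancels them out."""
    return compute_parity(list(surviving_blocks) + [parity])

# Three equal-size 'drives'; lose the middle one and rebuild it.
drives = [b"AAAA", b"BBBB", b"CCCC"]
parity = compute_parity(drives)
rebuilt = recover([drives[0], drives[2]], parity)
assert rebuilt == drives[1]
```

This is why a single failed drive is recoverable, and why snapshot RAID can defer the parity write: the parity block is just a scheduled XOR pass over the data drives.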
post #47 of 70
Quote:
Originally Posted by Mason736 View Post

Is there any reason why you guys are using an hba like the IBM 1015 and software raid (flexraid), instead of a hardware raid card like a dell perc6/I or the IBM 5015?

For software RAID solutions, the M1015 is really all we need.
post #48 of 70
I found this article helpful when deciding which direction to go. I ended up with an IBM M1015 flashed to IT mode (HBA) running FlexRAID.

http://www.servethehome.com/difference-hardware-raid-hbas-software-raid/
post #49 of 70
Thread Starter 
Don't know about anybody else, but the reason I wanted the IBM M1015 HBA in a JBOD configuration for WHS 2011 with FlexRAID is that I can have many of the drives spun down when they're not in use, which isn't something you can really do with a HW RAID solution. The M1015 is also automatically recognized by ESXi.

Looks like I'll need to redo my ESXi install, going from 5.1 back to 5.0. I downloaded both with the expectation that there might be some gotchas.

I've installed ESXi 5.1 to a flash drive (4GB) for booting the machine, and I'll just make another flash drive with the 5.0 installer and see if I can get moving again.

My only concern at the moment is how to install WHS 2011 on my 128GB SSD. After endless reading, I can't quite figure out how to get it to use less than 160GB in the ESXi environment. I know you're supposed to be able to use a USB flash drive along with the install DVD to modify the disk-size requirement, but I haven't figured out how to make that work with ESXi yet.
post #50 of 70
With Windows Server 2012 Essentials (maybe others as well), you can simply open a command prompt during installation (Shift+F10) and launch regedit. Then create a DWORD named "HWRequirementChecks" with a value of 0 in HKLM\Software\Microsoft\Windows Server\Setup.

This suppresses the <160GB storage-drive check. I installed mine to a 64GB drive.
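The same value can be set without opening regedit, straight from that Shift+F10 command prompt. This is just the key and value from the post above, written as a `reg add` one-liner:

```
reg add "HKLM\Software\Microsoft\Windows Server\Setup" /v HWRequirementChecks /t REG_DWORD /d 0 /f
```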
post #51 of 70
Or use a USB stick with an "answer file" that tells WHS 2011 to stop complaining about <160GB (and to create a single partition, among other things).
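For WHS 2011 the answer file is a `cfg.ini` placed in the root of the USB stick. A minimal sketch; the key names below are the ones circulated in community guides (CheckReqs suppresses the 160GB check, WindowsPartitionSize is in MB), so verify them against the WHS 2011 deployment documentation before relying on this:

```ini
[WinPE]
ConfigDisk=1
CheckReqs=0
WindowsPartitionSize=61440
```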
post #52 of 70
Thread Starter 
The USB answer file was the technique I couldn't get to work with ESXi 5.1 (USB passthrough doesn't work in 5.1), as kapone pointed out earlier in this thread.

So it's either back to ESXi 5.0, or try bryansj's technique.
post #53 of 70
Can you slipstream the answer file onto the DVD?
post #54 of 70
Quote:
Originally Posted by BruceD View Post

The USB answer file was the technique I couldn't get to work with ESXi 5.1 (USB passthrough doesn't work in 5.1), as kapone pointed out earlier in this thread.

So it's either back to ESXi 5.0, or try bryansj's technique.
Don't put the answer-file stick in the server; put it in the client machine where you have the vSphere Client running. The vSphere Client has USB redirection built in, so it will redirect a client USB device and connect it to a VM when the VM is started from the client with the console open. There is a button on the VM console to do that, but you have to be quick and do it while the VM is still POSTing.

Adding a POST delay to the VM settings helps. This way no USB passthrough is needed, as the stick gets connected at run time, from the client.
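The POST delay kapone mentions can be set in the VM's boot options in the vSphere Client, or directly in the VM's .vmx file. The value is in milliseconds; 10 seconds here is just an example:

```
bios.bootDelay = "10000"
```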
post #55 of 70
Thread Starter 
kapone,

That is good advice (using the vSphere client machine), but I'm thinking I may want to try the ESXi 5.0 release to see if I can get USB passthrough working for my magicJack USB plug-in.

It definitely doesn't work with the 5.1 release.
post #56 of 70
Well, since I just installed ESXi over the weekend, I see what kapone is saying. Just plug the USB key into your client and then share it to your VM; there's a button for that on the console. You can also have the DVD .iso on the client and share it as well. What I did was select the option for the VM to boot to the BIOS setup screen at next boot. Doing that gives me plenty of time to set up all the needed devices.

If you haven't yet tried 5.1 with your magicJack, then I'm not sure what your problem would be. I have ESXi 5.1 and I'm passing through a USB-UIRT and a USB Z-Wave stick to my WSE 2012 and haven't had any issues.

I think the problem only arises when you want to pass through the entire controller with the devices attached. I just added a (virtual) USB controller to my VM and then added the USB devices. That is different from passing through the actual controller, which would bring in all the devices automatically and which I assume is what isn't working correctly in 5.1. Passing through the whole controller is what I did with my PCIe HBA, and all my attached drives showed up in my VM.
post #57 of 70
Quote:
Originally Posted by BruceD View Post

Don't know about anybody else, but the reason I wanted the IBM M1015 HBA in a JBOD configuration for WHS 2011 with FlexRAID is that I can have many of the drives spun down when they're not in use, which isn't something you can really do with a HW RAID solution. The M1015 is also automatically recognized by ESXi.

Looks like I'll need to redo my ESXi install, going from 5.1 back to 5.0. I downloaded both with the expectation that there might be some gotchas.

I've installed ESXi 5.1 to a flash drive (4GB) for booting the machine, and I'll just make another flash drive with the 5.0 installer and see if I can get moving again.

My only concern at the moment is how to install WHS 2011 on my 128GB SSD. After endless reading, I can't quite figure out how to get it to use less than 160GB in the ESXi environment. I know you're supposed to be able to use a USB flash drive along with the install DVD to modify the disk-size requirement, but I haven't figured out how to make that work with ESXi yet.

Re-reading this, I don't see how it wouldn't work entirely on the host machine. I did a server restore using a USB drive attached to the host and an .iso image of the Windows Server 2012 DVD (you boot from the install DVD, hit Repair, then Troubleshooting, then re-image). I created a standard VM for the server installation and plugged the external USB drive containing the restore image into the ESXi machine. I set the VM to boot to the BIOS. Once in the BIOS, I changed the boot order to put the DVD drive first. Then in the console toolbar I connected the local USB drive using the USB icon, and connected the .iso image from the client PC using the DVD icon. I'd imagine all you'd have to do is the same thing, except using a local flash drive with the <160GB instruction file. The key for me was booting to the BIOS to give myself plenty of time to rig it all together. Once set up, you save the BIOS settings and exit.
post #58 of 70
Thread Starter 
Quote:
Originally Posted by bryansj View Post


If you haven't yet tried 5.1 with your magic-jack then I'm not sure what your problem would be. I have ESXi 5.1 and I'm passing through a USB-UIRT and a USB Z-wave stick to my WSE 2012 and haven't had any issues.

I have, and the problem is that the magicJack USB is more than just a flash drive; it acts like a CD when executing some of its code. Essentially it isn't recognized at all by the Win7 VM when plugged into any of the USB ports on the ESXi machine. I also don't get the option to pass through any USB ports to the Win7 VM in 5.1.
post #59 of 70
I first add a USB controller to the VM; then you can add USB devices attached to the host.

I'd dump the magicJack and get an Obi for free Google Voice. Mine has been working great for the two years I've used it. The monthly fee of $0.00 is nice as well.
post #60 of 70
Thread Starter 
Quote:
Originally Posted by bryansj View Post

I first add a USB controller to the VM; then you can add USB devices attached to the host.

I'd dump the magicJack and get an Obi for free Google Voice. Mine has been working great for the two years I've used it. The monthly fee of $0.00 is nice as well.

Yes, I know to add the USB controller first; that still doesn't make the USB device show up.

It's not that difficult to go back to ESXi 5.0 to see if I get better USB support.