Thoughts on Using ESXi vs Hyper-V for My Home Media - AVS Forum
post #1 of 12, 09-01-2013, 08:52 AM - Thread Starter
AVTechMan (Advanced Member)

I've read many different threads on virtualizing with ESXi in a server setting to host VMs, and on using Hyper-V to accomplish the same thing. My whole interest in this was to create some VMs to run the OSes that handle all of my media needs, audio and video alike, and to save on hardware. With that said:

I have tested Hyper-V under Server 2008 R2 for the past few weeks and I have to say it works pretty well. The only server I had to test it on was the SM server I got from Tams, which has worked pretty solidly. As for ESXi, I tested that on the AIC server and, based on trying it out, it works well too; I was able to create some VMs on it. Unfortunately, it seems that ESXi is useless for me on my current server hardware, since it's picky about what it can use for passthrough. Add the fact that the SAT2-MV8 (not SAS) cards aren't compatible, and it looks like I will have to buy all new hardware and SATA controller cards just to make ESXi work effectively.

This is where, for me, Hyper-V has an advantage: I can mark a drive as offline so that Hyper-V can use it directly in a VM, which I have tested and it works. I was hoping to use ESXi instead of Hyper-V as my hypervisor, but I would have to invest in new hardware to make it work, so I will stick with what I have currently.
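For anyone who wants to script that step, this is roughly what it looks like (just a sketch: the VM name and disk number are placeholders, and the Set-Disk/Add-VMHardDiskDrive cmdlets are the ones that ship with Server 2012's Storage and Hyper-V modules -- on 2008 R2 the same thing is done through Disk Management and the VM settings in Hyper-V Manager):

Code:

import subprocess

VM_NAME = "MediaVM"   # placeholder VM name
DISK_NUMBER = 3       # placeholder physical disk number (check with Get-Disk on the host)

# Take the disk offline on the host so Hyper-V can claim it, then attach it
# to the VM as a pass-through (physical) disk on its SCSI controller.
for cmd in (
    f"Set-Disk -Number {DISK_NUMBER} -IsOffline $true",
    f"Add-VMHardDiskDrive -VMName '{VM_NAME}' -ControllerType SCSI -DiskNumber {DISK_NUMBER}",
):
    subprocess.run(["powershell.exe", "-NoProfile", "-Command", cmd], check=True)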

I'm using Server 2008 R2 (for the Ceton) and thinking of using WHS 2011 in a VM for all my client backups, since I have two XP boxes and that leaves WSE 2012 out. For media streaming I may use Remote Potato, since it has better video format compatibility for streaming. I plan to try DrivePool for pooling and to figure out a solution for data redundancy. I don't have any RAID cards and don't really see the need for them; just using the SATA ports with either FlexRAID or DrivePool should work for me, plus backing up the critical stuff offsite (I don't bother with cloud services).
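For the offsite piece, something as simple as a scripted mirror copy to an external or offsite box would cover it. A minimal sketch (the paths are made up; robocopy treats exit codes below 8 as success):

Code:

import subprocess

# Placeholder source and offsite destination paths.
SOURCE = r"D:\Critical"
DEST = r"\\offsite-box\backup\Critical"

# /MIR mirrors the tree (including deletions); /R and /W keep retry stalls short.
result = subprocess.run(["robocopy", SOURCE, DEST, "/MIR", "/R:2", "/W:5"])

# Robocopy exit codes 0-7 mean the copy succeeded (with varying detail); 8+ is an error.
if result.returncode >= 8:
    raise SystemExit(f"robocopy reported errors (exit code {result.returncode})")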

So for those of us using the servers bought from Tams, what OS are you running on your machines?
post #2 of 12, 09-03-2013, 08:59 AM
rc05 (Senior Member)

I have the 2U server that was available for a bit, and it has an Intel motherboard and processor that support hardware passthrough. I'm using ESXi, and I've passed the SAT2-MV8 through to unRAID, Windows 7, and Windows 8 with no issues. I think you're right, though, that ESXi itself won't recognize the SAT2-MV8 if you wanted to use it for datastores and such.
post #3 of 12, 09-04-2013, 04:35 AM
balky (Advanced Member)

You're right that ESXi can be quite picky when it comes to hardware passthrough, but IMO that doesn't make the Windows Server option a better idea.
In my own case, after seeing the ups and downs of ESXi in a home environment with consumer-grade hardware, I installed CentOS Linux on my box and called it a day.
I have ZFS on Linux, Plex Media Server, TVMobili, and TVheadend running on the box.
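To give an idea of how little there is to it, building a pool is basically three commands. A rough sketch (the raidz1 layout and disk names are just placeholders; on a real box you'd use the stable /dev/disk/by-id paths):

Code:

import subprocess

# Placeholder devices -- substitute your own /dev/disk/by-id paths.
DISKS = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

# One raidz1 pool ("tank") plus a media dataset with lz4 compression.
subprocess.run(["zpool", "create", "tank", "raidz1", *DISKS], check=True)
subprocess.run(["zfs", "create", "tank/media"], check=True)
subprocess.run(["zfs", "set", "compression=lz4", "tank/media"], check=True)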

The mere thought of the monthly Windows security updates, which usually require a server reboot and will probably break something that was previously working, makes me avoid anything Windows Server like a bad habit... :)

Virtualization in a home environment is seldom a must, but if you do it, try not to cut corners, especially on the hardware side...

Good luck!
post #4 of 12, 09-04-2013, 08:49 PM
Puwaha (AVS Special Member)

If you are just playing around, then either one is fine, but if you want to run a production-ready VM server, stick with ESXi.

You don't have to pass through whole controllers in ESXi; you can make a virtual disk (VMDK) on any drive/controller that the ESXi host can see. Or you can do an RDM (Raw Device Mapping), similar to what you did when dedicating a whole disk to your VM in Hyper-V. But really, a virtual disk file is a lot more portable, which is kind of the point of virtualization.
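If it helps, this is roughly what the two options look like from the ESXi shell, wrapped in a little script (the host name, datastore path, and device ID are placeholders; SSH has to be enabled on the host, and the real device name comes from ls /vmfs/devices/disks/):

Code:

import subprocess

# Placeholders: ESXi host, a VM folder on a datastore, and a raw device ID.
ESXI_HOST = "root@esxi.local"
VM_DIR = "/vmfs/volumes/datastore1/MediaVM"
DEVICE = "/vmfs/devices/disks/naa.0123456789abcdef"

def esxi(cmd: str) -> None:
    """Run one command on the ESXi shell over SSH."""
    subprocess.run(["ssh", ESXI_HOST, cmd], check=True)

esxi(f"mkdir -p {VM_DIR}")

# Option 1: an ordinary thin-provisioned virtual disk that lives on the datastore.
esxi(f"vmkfstools -c 500G -d thin {VM_DIR}/data.vmdk")

# Option 2: a physical-mode RDM pointer file that maps the whole raw disk into a VM.
esxi(f"vmkfstools -z {DEVICE} {VM_DIR}/rdm.vmdk")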
post #5 of 12, 09-05-2013, 01:02 AM - Thread Starter
AVTechMan (Advanced Member)

Quote:
Originally Posted by balky

You're right that ESXi can be quite picky when it comes to hardware passthrough, but IMO that doesn't make the Windows Server option a better idea.
In my own case, after seeing the ups and downs of ESXi in a home environment with consumer-grade hardware, I installed CentOS Linux on my box and called it a day.
I have ZFS on Linux, Plex Media Server, TVMobili, and TVheadend running on the box.

The mere thought of the monthly Windows security updates, which usually require a server reboot and will probably break something that was previously working, makes me avoid anything Windows Server like a bad habit... :)

Virtualization in a home environment is seldom a must, but if you do it, try not to cut corners, especially on the hardware side...

Good luck!

Well, Windows updates aren't too big a deal, as I can turn them off and never have to worry about a reboot. I'm using two of the servers I got from Tams, the SM and the AIC models, and both have server-grade hardware, just not the kind that's fully compatible with ESXi.

I have no issue with getting new hardware that's compatible; it's just not a priority right now. I was basically seeing if it would work with what I have, which for the most part it has, except when it comes to the controller cards and storage.

Quote:
Originally Posted by Puwaha

If you are just playing around, then either one is fine, but if you want to run a production-ready VM server, stick with ESXi.

You don't have to pass through whole controllers in ESXi; you can make a virtual disk (VMDK) on any drive/controller that the ESXi host can see. Or you can do an RDM (Raw Device Mapping), similar to what you did when dedicating a whole disk to your VM in Hyper-V. But really, a virtual disk file is a lot more portable, which is kind of the point of virtualization.

Agreed. It's something I will have to work with more. Besides, it would take too much work trying to RDM 24 drives to a VM...lol. So in that case, for me it would be best to just do a bare-metal install with an OS that will see the drives once the drivers are installed. I may consider doing an ESXi build with newer, possibly server-grade hardware and repurpose the AIC by replacing the guts with better hardware that is ESXi compatible. Appreciate the info!
post #6 of 12, 09-05-2013, 04:47 AM
Tong Chia (AVS Special Member)

Quote:
Originally Posted by AVTechMan

...So in that case, for me it would be best to just do a bare-metal install with an OS that will see the drives once the drivers are installed. I may consider doing an ESXi build with newer, possibly server-grade hardware and repurpose the AIC by replacing the guts with better hardware that is ESXi compatible. Appreciate the info!

That is what I use: a Type 2 hypervisor (VirtualBox on Solaris 11.1) on a Xeon E3-1230.

I found that this was simpler for home use when running fewer than 10 VMs: hardware passthrough isn't required, and there's less hassle if reusing older hardware is important.

I use Solaris + ZFS running on bare metal as a conventional file server, and a Win7 VM running Media Center as my DVR. I experimented with ESXi for a few months and did not see much of an advantage for home use. A VDI file for the VM storage is good enough for home use and makes it easy to move the VM if required.

VirtualBox is very nicely integrated into Solaris, and the VM guests use Red Hat's VirtIO for the NICs; on the host side I am using Crossbow VNICs plumbed into an 802.3ad LACP-aggregated group of 4 physical NICs. VirtIO network performance on my setup is quite good, as I have no dropout issues recording with my HDHomeRun Prime tuners over Ethernet. Disk I/O performance is a non-issue as the VM is inside the file server.
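The host-side plumbing is only a few commands. A rough sketch of the idea (the link names and VM name are placeholders, not my actual setup; check real link names with dladm show-phys):

Code:

import subprocess

# Placeholder link and VM names.
PHYS_LINKS = ["net0", "net1", "net2", "net3"]
AGGR, VNIC, VM = "aggr0", "vnic0", "Win7-DVR"

def run(args):
    subprocess.run(args, check=True)

# 802.3ad LACP aggregation across the four physical NICs.
run(["dladm", "create-aggr", "-L", "active",
     *[arg for link in PHYS_LINKS for arg in ("-l", link)], AGGR])

# A Crossbow VNIC on top of the aggregation, one per VM.
run(["dladm", "create-vnic", "-l", AGGR, VNIC])

# Bridge the VM's first NIC onto that VNIC and use the paravirtual virtio model.
run(["VBoxManage", "modifyvm", VM,
     "--nic1", "bridged", "--bridgeadapter1", VNIC, "--nictype1", "virtio"])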

USB is a potential issue for VMs. I am currently using Silex USB device servers, so everything that is USB for the VM comes in over Ethernet; the Silex software on the VM makes the difference, and the USB devices attach and operate normally on the VM, except that they are physically connected to it via Ethernet. The ones I have are USB 2.0 over Gigabit Ethernet. This is one place Crossbow makes a difference: I get a good measure of load balancing when several VMs want to use the network, and I wanted to avoid dedicating a physical NIC to each VM.

(A side benefit of the Silex devices is that I can connect to a CD/DVD drive or scanner over WiFi from a laptop.)
post #7 of 12, 09-05-2013, 05:36 AM
Nevcairiel (AVS Special Member)

I have been using Hyper-V on 2008 R2 and 2012 servers for around two years now with both Linux and Windows VMs, and it's been working quite nicely. With 2012 you also get hardware passthrough with VT-d if you need that, but I haven't needed it so far.
I only run two VMs permanently though, and a few others on demand - so not a huge setup over here.
post #8 of 12, 09-05-2013, 09:23 AM
balky (Advanced Member)

Quote:
Originally Posted by Tong Chia

That is what I use: a Type 2 hypervisor (VirtualBox on Solaris 11.1) on a Xeon E3-1230.

I found that this was simpler for home use when running fewer than 10 VMs: hardware passthrough isn't required, and there's less hassle if reusing older hardware is important.

I use Solaris + ZFS running on bare metal as a conventional file server, and a Win7 VM running Media Center as my DVR. I experimented with ESXi for a few months and did not see much of an advantage for home use. A VDI file for the VM storage is good enough for home use and makes it easy to move the VM if required.

VirtualBox is very nicely integrated into Solaris, and the VM guests use Red Hat's VirtIO for the NICs; on the host side I am using Crossbow VNICs plumbed into an 802.3ad LACP-aggregated group of 4 physical NICs. VirtIO network performance on my setup is quite good, as I have no dropout issues recording with my HDHomeRun Prime tuners over Ethernet. Disk I/O performance is a non-issue as the VM is inside the file server.

USB is a potential issue for VMs. I am currently using Silex USB device servers, so everything that is USB for the VM comes in over Ethernet; the Silex software on the VM makes the difference, and the USB devices attach and operate normally on the VM, except that they are physically connected to it via Ethernet. The ones I have are USB 2.0 over Gigabit Ethernet. This is one place Crossbow makes a difference: I get a good measure of load balancing when several VMs want to use the network, and I wanted to avoid dedicating a physical NIC to each VM.

(A side benefit of the Silex devices is that I can connect to a CD/DVD drive or scanner over WiFi from a laptop.)

IMO this is way too complex for someone looking for a set-it-and-forget-it, easy-to-manage system for sharing media in a home environment...
post #9 of 12, 09-05-2013, 09:32 AM
balky (Advanced Member)

Quote:
Originally Posted by Nevcairiel

I have been using Hyper-V on 2008 R2 and 2012 servers for around two years now with both Linux and Windows VMs, and it's been working quite nicely. With 2012 you also get hardware passthrough with VT-d if you need that, but I haven't needed it so far.
I only run two VMs permanently though, and a few others on demand - so not a huge setup over here.

You will want to be able to pass through hardware, especially if there is a NAS instance in one of the VMs... otherwise, the only way out is to use hardware RAID for your hard drives... good luck with all the data on the VMFS if your hardware RAID card goes belly up...
post #10 of 12, 09-06-2013, 12:09 AM
Nevcairiel (AVS Special Member)

Quote:
Originally Posted by balky

You will want to be able to pass through hardware, especially if there is a NAS instance in one of the VMs... otherwise, the only way out is to use hardware RAID for your hard drives... good luck with all the data on the VMFS if your hardware RAID card goes belly up...

The host system manages all the storage and serves as a file server as well; the VMs just get access on an as-needed basis.
This is not a data center, after all. The VMs don't require high-performance access to the storage, and it works perfectly for me :)

And if not, I can always do hardware passthrough if required.
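For what it's worth, the "access as needed" part is nothing fancier than a share on the host and a mapped drive in whichever VM wants it. A rough sketch (share name, folder, and host name are placeholders):

Code:

import subprocess

# Placeholder share name, host folder, and host name.
SHARE, FOLDER, HOST = "Media", r"D:\Media", "mediahost"

def publish_share_on_host():
    """Run on the host: publish the media folder as a read-only SMB share."""
    subprocess.run(["net", "share", f"{SHARE}={FOLDER}", "/GRANT:Everyone,READ"], check=True)

def map_share_in_guest():
    """Run inside a VM that needs the files: map the share to a drive letter."""
    subprocess.run(["net", "use", "M:", rf"\\{HOST}\{SHARE}"], check=True)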
post #11 of 12, 09-06-2013, 02:24 AM
Tong Chia (AVS Special Member)

Quote:
Originally Posted by balky

IMO this is way too complex for someone looking for a set-it-and-forget-it, easy-to-manage system for sharing media in a home environment...

I find it about the same complexity as my prior WHS setup. Everything is done through a point-and-click web frontend.
post #12 of 12, 09-06-2013, 06:50 AM
balky (Advanced Member)

Quote:
Originally Posted by Tong Chia

I find it about the same complexity as my prior WHS setup. Everything is done through a point-and-click web frontend.

Oh yes... you're right... it is the easiest setup ever... I guess I am just plain dumb... :P

"VirtualBox is very nicely integrated into Solaris, and the VM guests use Red Hat's VirtIO for the NICs; on the host side I am using Crossbow VNICs plumbed into an 802.3ad LACP-aggregated group of 4 physical NICs."

How much easier can it get? LOL