Virtualized HTPC (ESXi) - AVS Forum
post #1 of 26 Old 09-12-2012, 12:12 AM - mcturkey (Thread Starter)
I'm not sure how many folks this may apply to, but if you have virtualized your HTPC like I have, be aware that ESXi 5.1 will not work with the Ceton InfiniTV. As soon as I attempted to boot the VM, it crashed the host with a pink screen of death. Never had that happen with 5.0, so I'm not sure if I want to roll it back, or abandon virtualizing the HTPC until a workaround can be figured out. Everything else transitioned to 5.1 flawlessly, so this is quite disappointing.
post #2 of 26 Old 09-12-2012, 03:18 PM - mcturkey (Thread Starter)
http://communities.vmware.com/thread/417736?decorator=print&displayFullThread=true

Guess there is some hope of this being resolved. Wish there was an ETA on the fix though.
post #3 of 26 Old 09-17-2012, 12:19 PM - Noah
Quote:
Originally Posted by mcturkey View Post

I'm not sure how many folks this may apply to, but if you have virtualized your HTPC like I have, be aware that ESXi 5.1 will not work with the Ceton InfiniTV. As soon as I attempted to boot the VM, it crashed the host with a pink screen of death. Never had that happen with 5.0, so I'm not sure if I want to roll it back, or abandon virtualizing the HTPC until a workaround can be figured out. Everything else transitioned to 5.1 flawlessly, so this is quite disappointing.
This is a little OT, but could you summarize your experiences with a virtualized HTPC? I'm considering a similar setup such that all the displays in my home could have their own instance of MCE. I believe this is possible now that video hardware is more directly accessible under ESXi.

How stable and reliable is/are your virtual HTPC(s)?
post #4 of 26 Old 09-17-2012, 07:02 PM - mcturkey (Thread Starter)
Quote:
Originally Posted by Noah View Post

This is a little OT, but could you summarize your experiences with a virtualized HTPC? I'm considering a similar setup such that all the displays in my home could have their own instance of MCE. I believe this is possible now that video hardware is more directly accessible under ESXi.
How stable and reliable is/are your virtual HTPC(s)?

Other than this hiccup with updating to 5.1, it has been pretty solid. The initial configuration, getting the Ceton passed through and working, was a pain, but once I had it working it was pretty smooth. I'm using mine with 360s as extenders, though, rather than watching directly from the VM. I've considered adding a video card to the server (passed through to the VM) and running an HDMI/USB extension out to the main TV so that I can use that directly, but at this point I have not done so. I don't think playback of most media would work smoothly without a dedicated video card for each VM. NVIDIA did announce a while back that they were working with VMware on a method of GPU virtualization, which would allow a single powerful GPU to be shared across multiple VMs. More than likely it would have to be a Quadro card, but there hasn't been any word on the progress of this technology since sometime in the spring.
post #5 of 26 Old 11-21-2012, 10:13 PM - Ian Kellogg
I have ESXi 5 and I can't figure out how to set up PCI passthrough and have it work. I can set it up, but it crashes the server just like it did in 5.1.
post #6 of 26 Old 11-22-2012, 04:23 AM - Mfusick
What's the point or benefit in virtualizing it?

post #7 of 26 Old 11-22-2012, 07:47 AM - HMenke
^^ +1

What does it mean to "virtualize" an HTPC?
post #8 of 26 Old 11-22-2012, 11:32 AM - duff99
Quote:
Originally Posted by HMenke View Post

^ +1 What does it mean to "virtualize" an HTPC?

It means to run it on a virtualization server, that is, a server whose OS provides a platform that lets you run multiple operating systems on it. Something like ESXi provides an abstraction layer that presents a standard hardware set to the operating systems that run on it, as opposed to running on the bare metal of your actual hardware. Virtualization is useful for running multiple OSes on a single set of hardware. Let's say you were running WHS, Win7, and Ubuntu. You could assign each of them 4GB of RAM even though you only have 8GB of RAM in your system. The virtualization OS would manage the memory so that each OS only uses the memory it needs when it needs it. It does the same thing with the processor and NICs. Probably not the best explanation, but hopefully it gives you some idea.
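As a rough sketch of what that overcommit looks like in practice (these are just illustrative .vmx entries, not anyone's actual files), three guests on an 8GB host might each be configured like this:

Code:
# win7-htpc.vmx -- guest configured for 4GB (value is in MB)
memsize = "4096"
# whs2011.vmx -- another 4GB
memsize = "4096"
# ubuntu.vmx -- another 4GB
memsize = "4096"
# 3 x 4GB = 12GB configured against 8GB physical; ESXi only backs what each guest actually touches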
Quote:
Originally Posted by Mfusick View Post

What's the point or benefit in virtualizing it?

In this case the OP is running Win 7 with his Ceton card to provide a platform for his extenders. This makes a lot of sense to me. If you're going to use extenders, this saves you from having to use another box just to run Win 7. The OP could be using the same box to run WHS and serve his extenders. The benefit is running more than one OS on the same hardware, saving money and space.

As for a fully virtualized HTPC, that's a little tricky. I've seen some people who have actually made it work. The key is to use hardware passthrough to give the guest OS full access to the actual hardware on the server. Something like an HBA, or in the OP's case his Ceton card, usually works pretty well. That's why I recommended the IBM ServeRAID M1015 for you, Mfusick; it works better in passthrough than the Supermicro card. For a full HTPC, though, you need to be able to pass through a video card. This is the tricky part. It's not supported by ESXi, and it's hit or miss depending on your hardware. I've only gotten it to work with 2GB of RAM on the guest OS, and it didn't seem to work quite right. Again, it's a crapshoot depending on your video card and other hardware. I was actually hoping the OP had a working full HTPC and could give us some instruction, since I could never get it to work right.
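If you want to see what the host actually exposes before fighting with a particular card, listing the PCI devices from the ESXi shell is a reasonable first step (just a sketch; the exact output fields vary by build, and you still toggle passthrough itself under Configuration > Advanced Settings in the vSphere Client):

Code:
~ # esxcli hardware pci list     # every PCI device the host sees, with vendor/device IDs
~ # lspci                        # shorter one-line-per-device view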
post #9 of 26 Old 11-22-2012, 12:49 PM - mcturkey (Thread Starter)
Quote:
Originally Posted by duff99 View Post

It means to run it on a virtualization server, that is, a server whose OS provides a platform that lets you run multiple operating systems on it. Something like ESXi provides an abstraction layer that presents a standard hardware set to the operating systems that run on it, as opposed to running on the bare metal of your actual hardware. Virtualization is useful for running multiple OSes on a single set of hardware. Let's say you were running WHS, Win7, and Ubuntu. You could assign each of them 4GB of RAM even though you only have 8GB of RAM in your system. The virtualization OS would manage the memory so that each OS only uses the memory it needs when it needs it. It does the same thing with the processor and NICs. Probably not the best explanation, but hopefully it gives you some idea.
In this case the OP is running Win 7 with his Ceton card to provide a platform for his extenders. This makes a lot of sense to me. If you're going to use extenders, this saves you from having to use another box just to run Win 7. The OP could be using the same box to run WHS and serve his extenders. The benefit is running more than one OS on the same hardware, saving money and space.
As for a fully virtualized HTPC, that's a little tricky. I've seen some people who have actually made it work. The key is to use hardware passthrough to give the guest OS full access to the actual hardware on the server. Something like an HBA, or in the OP's case his Ceton card, usually works pretty well. That's why I recommended the IBM ServeRAID M1015 for you, Mfusick; it works better in passthrough than the Supermicro card. For a full HTPC, though, you need to be able to pass through a video card. This is the tricky part. It's not supported by ESXi, and it's hit or miss depending on your hardware. I've only gotten it to work with 2GB of RAM on the guest OS, and it didn't seem to work quite right. Again, it's a crapshoot depending on your video card and other hardware. I was actually hoping the OP had a working full HTPC and could give us some instruction, since I could never get it to work right.

I had it virtualized, with the same host also running several VMs to cover storage, routing, and a few other server functions, as this allowed me to consolidate 24/7 hardware down to the server and cable modem. Unfortunately, after 5.1 broke PCI passthrough for so many components, I gave up and switched back to a dedicated box for the HTPC. The WAF was going downhill fast from me fighting with getting it working again. Before the update to 5.1, though, it was working flawlessly.
post #10 of 26 Old 11-22-2012, 03:08 PM - duff99
Quote:
Originally Posted by mcturkey View Post

I had it virtualized, with the same host also running several VMs to cover storage, routing, and a few other server functions, as this allowed me to consolidate 24/7 hardware down to the server and cable modem. Unfortunately, after 5.1 broke PCI passthrough for so many components, I gave up and switched back to a dedicated box for the HTPC. The WAF was going downhill fast from me fighting with getting it working again. Before the update to 5.1, though, it was working flawlessly.

Thanks. I still haven't updated to 5.1. The main reason I would have updated would have been better passthrough. Seems like it went the other way, though. I just could never get the graphics card to pass through properly. I don't really need to virtualize my HTPC; I've got that all covered with real machines. I just wanted to see if I could do it. Turns out I couldn't, at least not well.
post #11 of 26 Old 11-22-2012, 04:47 PM - Aluminum
5.1 has broken some hardware that otherwise worked with VT-d passthrough with no issues before; there have been more than enough posts now to confirm it.

It's obviously a bug or regression in 5.1 that might get fixed eventually, or become a "feature", since this IS VMware we are talking about...

Also, the only true full graphics passthrough (VGA BIOS and actual working 3D is the real test) I've read about has been with a heavily modified KVM build, not ESXi.

post #12 of 26 Old 11-26-2012, 01:56 PM - BruceD
What about using the HDHomeRun Prime with a CableCARD as the tuner for an ESXi-virtualized Win7 HTPC? Has anybody tried that yet with extenders for multiple rooms?

BruceD


post #13 of 26 Old 11-26-2012, 05:14 PM - mcturkey (Thread Starter)
Quote:
Originally Posted by BruceD View Post

What about using the HDHomeRun Prime with a CableCARD as the tuner for an ESXi-virtualized Win7 HTPC? Has anybody tried that yet with extenders for multiple rooms?

I haven't done it personally, but I know it's been done. The HDHR Prime is a much better option than the Ceton if running in a VM since you don't have to worry about passthrough.
post #14 of 26 Old 11-26-2012, 05:38 PM - BruceD
Quote:
Originally Posted by mcturkey View Post

I had it virtualized, with the same host also running several VMs to cover storage, routing, and a few other server functions, as this allowed me to consolidate 24/7 hardware down to the server and cable modem. Unfortunately, after 5.1 broke PCI passthrough for so many components, I gave up and switched back to a dedicated box for the HTPC. The WAF was going downhill fast from me fighting with getting it working again. Before the update to 5.1, though, it was working flawlessly.

That is my target: one hardware machine running 24x7 with multiple VMs to cover server, HTPC, and other duties, with an HDHomeRun on the network for Comcast Cable Digital Preferred. I installed ESXi 5.1 to a 4GB flash drive and have a Win7 VM running on an ASRock H77 with 16GB RAM and a quad-core i5 (VT-d support), and I'm waiting to follow up with additional VMs, including WHS2011 with an IBM M1015 RAID card flashed to the LSI-9211 HBA firmware.

What specific passthrough HW has been having problems? On the VMware forum?

BruceD


post #15 of 26 Old 11-26-2012, 06:27 PM - mcturkey (Thread Starter)
Quote:
Originally Posted by BruceD View Post

That is my target: one hardware machine running 24x7 with multiple VMs to cover server, HTPC, and other duties, with an HDHomeRun on the network for Comcast Cable Digital Preferred. I installed ESXi 5.1 to a 4GB flash drive and have a Win7 VM running on an ASRock H77 with 16GB RAM and a quad-core i5 (VT-d support), and I'm waiting to follow up with additional VMs, including WHS2011 with an IBM M1015 RAID card flashed to the LSI-9211 HBA firmware.
What specific passthrough HW has been having problems? On the VMware forum?

The M1015 should work, no problem. However, lots of things not on the HCL that worked in 5.0 no longer work in 5.1, such as the Ceton. This thread has some details: http://communities.vmware.com/thread/417736?start=0&tstart=0
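One thing that has sometimes helped with stubborn passthrough devices (no guarantee it addresses this particular 5.1 regression; treat it as something to experiment with) is overriding the device's reset method in /etc/vmware/passthru.map on the host. The format is vendor ID, device ID, reset method, and fptShareable; the IDs below are placeholders you would replace with the card's real ones from lspci or esxcli hardware pci list, then reboot the host:

Code:
# /etc/vmware/passthru.map -- append at the end (vendor/device IDs below are placeholders)
# vendor-id  device-id  resetMethod  fptShareable
1234         abcd       d3d0         false

Reset methods worth trying are flr, d3d0, link, and bridge; which one (if any) works is device-specific.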
post #16 of 26 Old 11-27-2012, 06:23 AM - BruceD
Quote:
Originally Posted by mcturkey View Post

The M1015 should work, no problem. However, lots of things not on the HCL that worked in 5.0 no longer work in 5.1, such as the Ceton. This thread has some details: http://communities.vmware.com/thread/417736?start=0&tstart=0

Since I will be using the network tuner HDHomerun Prime with a Cable Card for Comcast Digital Preferred, the Ceton card doesn't enter the picture for me.

I do see that USB passthrough seems to be a problem as well, and if I run into problems I will just install ESXi 5.0U1 and go from there. It appears some PCIe passthrough works fine (like my IBM M1015) in ESXi 5.1.

My prime interest is getting a single piece of hardware to be the main power draw for my home entertainment setup that can be on 24x7. I will likely wait a while to see how the Ceton Echo pans out before I buy into an HDHR Prime and maybe 3-4 Echos.

When your single setup was working, did you have the Ceton PCIe card working under ESXi 5.0U1?

BruceD


post #17 of 26 Old 11-27-2012, 07:10 AM - Ruxl
I have been looking to go this route as well; however, I am somewhat confused about why we would need passthrough of the M1015 (or any HDD controller, for that matter). Couldn't you pass each hard drive through with an RDM and run your software RAID from there? That would give you the same result without the need for hardware passthrough, which makes for a simpler implementation and means you don't need to track down specific VT-d capable hardware.
post #18 of 26 Old 11-27-2012, 08:05 AM - BruceD
Quote:
Originally Posted by Ruxl View Post

. . . . I am somewhat confused about why we would need passthrough of the M1015 (or any HDD controller, for that matter). Couldn't you pass each hard drive through with an RDM and run your software RAID from there? That would give you the same result without the need for hardware passthrough, which makes for a simpler implementation and means you don't need to track down specific VT-d capable hardware.

Problem is I want bone-stock hard drives that I can take out of the WHS2011 server and hook up directly to a Win7 PC if need be. Can't do that with RDM, if I remember correctly.

Also, I'm not sure FlexRAID will work in an RDM environment.

BruceD


post #19 of 26 Old 11-27-2012, 08:24 AM - Ruxl
Brahim uses ESXi with RDM mapping, which is actually what started me to question why I would care about VT-D.

With RDM mapping you get almost straight passthrough of the disk. You can disconnect it from your ESXi host and plug it into any other system and it should read without issue.

This is with ESXi 4, but it gets the point across: http://vm-help.com/esx40i/SATA_RDMs.php
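For anyone wondering what the RDM route actually involves, the gist (from the ESXi shell; the device name, datastore, and file names below are only examples) is one vmkfstools pointer file per physical disk, which you then attach to the guest as an existing disk:

Code:
# list the raw SATA devices ESXi can see
ls -l /vmfs/devices/disks/

# create an RDM pointer on a datastore for one disk
#   -z = physical compatibility (passes most commands straight through)
#   -r = virtual compatibility (more VMware features, less direct access)
vmkfstools -z /vmfs/devices/disks/t10.ATA_____ST3000DM001_____EXAMPLE \
    /vmfs/volumes/datastore1/rdm/st3000dm001-example-rdm.vmdk

# repeat for every drive, then add each .vmdk to the VM as an existing disk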
post #20 of 26 Old 11-27-2012, 08:45 AM - mcturkey (Thread Starter)
Quote:
Originally Posted by BruceD View Post

Since I will be using the network tuner HDHomerun Prime with a Cable Card for Comcast Digital Preferred, the Ceton card doesn't enter the picture for me.
I do see that USB passthrough seems to be a problem as well, and if I run into problems I will just install ESXi 5.0U1 and go from there. It appears some PCIe passthrough works fine (like my IBM M1015) in ESXi 5.1.
My prime interest is getting a single piece of hardware to be the main power draw for my home entertainment setup that can be on 24x7. I will likely wait a while to see how the Ceton Echo pans out before I buy into an HDHR Prime and maybe 3-4 Echos.
When your single setup was working, did you have the Ceton PCIe card working under ESXi 5.0U1?

Yes, I was using the Ceton PCIe card under 5.0U1.
Quote:
Originally Posted by Ruxl View Post

I have been looking to go this route as well; however, I am somewhat confused about why we would need passthrough of the M1015 (or any HDD controller, for that matter). Couldn't you pass each hard drive through with an RDM and run your software RAID from there? That would give you the same result without the need for hardware passthrough, which makes for a simpler implementation and means you don't need to track down specific VT-d capable hardware.

The reason for passing the card through is actually simplicity. In my case, I'm running an OpenIndiana 151a VM with ZFS. Having the drives passed through individually opens up another layer of potential failure. It's simpler to pass through just the controller, and let OI handle the drives exclusively. I suppose it would be theoretically possible to map the drives to the OS, but I can't see a benefit to that.
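For what it's worth, once the controller is handed to the OI guest, the disks show up as ordinary Solaris devices and the pool gets built the usual way. A minimal sketch (device names are examples; yours will differ):

Code:
# inside the OpenIndiana VM: list the disks the passed-through M1015 exposes
echo | format

# build a raidz1 pool directly on the raw disks and check it
zpool create tank raidz c3t0d0 c3t1d0 c3t2d0 c3t3d0
zpool status tank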
post #21 of 26 Old 11-27-2012, 09:35 AM - BruceD
Quote:
Originally Posted by Ruxl View Post

Brahim uses ESXi with RDM mapping, which is actually what started me to question why I would care about VT-D.
With RDM mapping you get almost straight passthrough of the disk. You can disconnect it from your ESXi host and plug it into any other system and it should read without issue. . . .

The reason you DON'T want to use RDM is that it won't allow WHS2011 and FlexRAID (or unRAID) to use SMART and spin down the drives when not in use. For me that is a BIG deal. Also, I think passing each individual drive through (via RDM) is a whole lot more complicated than a single PCIe card. Adding or removing drives is more complicated with RDM (you repeat the mapping for every drive instead of doing it just once for the card), not even taking into consideration how the parity drive structure functions in that environment (clumsy).

BruceD


post #22 of 26 Old 11-27-2012, 10:00 AM - BruceD
Ruxl,

Additionally, here is what I found at the link you provided above (thanks!):
Quote:
Originally Posted by Dave Mishchenko 10-04-2010 
Hi Mike, you don't require VT-d to create an RDM. With an RDM the entire disk or LUN is passed to the virtual machine, but I/O still goes through a virtual SCSI adapter.

If you were using VMDirectpath the virtual machine would have full control of the physical storage controller and you wouldn't need to use an RDM as the guest OS would see both the adapter and any connected storage.

So my guess is that performance may suffer if you go through yet another virtual layer (the virtual SCSI adapter) between the SATA HBA card (IBM M1015) and the WHS2011+FlexRAID VM. I think the preferred method is VMDirectPath via VT-d, but that's just IMHO.

BruceD


post #23 of 26 Old 11-27-2012, 10:13 AM - Ruxl
Great replies - thank you both very much!

My concern was more financial than anything else. The VT-d/IOMMU hardware requirements are very specific, and while building a brand-new ESXi host purpose-built for I/O passthrough would be nice, for many of us it isn't realistic. In my case I'm using an old PC that doesn't support VT-d, and I'm trying to weigh the options of just dealing with what I have or trying to justify the hardware expense of enhancing something that, for the most part, works just fine.

So in an ideal world, get VT-d capable hardware and pass through your HBA. In a less-than-ideal world, use RDM to give the drives to your guest, but understand that each individual drive needs to be set up for RDM, there is a performance hit, and you lose SMART access, which becomes significant with larger HDD arrays.

I have a few 3TB Seagate drives that I can play with before I put them in my media server. I'll set them up via RDM in the upcoming days and do some performance testing, as well as validate that I can access the data on them from a separate system, for my own curiosity if nothing else.
post #24 of 26 Old 11-27-2012, 01:35 PM - mcturkey (Thread Starter)
Quote:
Originally Posted by Ruxl View Post

Great replies - thank you both very much!
My concern was more financial than anything else. The VT-d/IOMMU hardware requirements are very specific, and while building a brand-new ESXi host purpose-built for I/O passthrough would be nice, for many of us it isn't realistic. In my case I'm using an old PC that doesn't support VT-d, and I'm trying to weigh the options of just dealing with what I have or trying to justify the hardware expense of enhancing something that, for the most part, works just fine.
So in an ideal world, get VT-d capable hardware and pass through your HBA. In a less-than-ideal world, use RDM to give the drives to your guest, but understand that each individual drive needs to be set up for RDM, there is a performance hit, and you lose SMART access, which becomes significant with larger HDD arrays.
I have a few 3TB Seagate drives that I can play with before I put them in my media server. I'll set them up via RDM in the upcoming days and do some performance testing, as well as validate that I can access the data on them from a separate system, for my own curiosity if nothing else.

True, I can definitely understand the cost savings of reusing older hardware. That said, I know there is some older consumer-level AMD hardware that does support IOMMU, which might work if you have the right gear (or, failing that, old server equipment is dirt cheap these days since everyone jumps to faster, more efficient servers pretty regularly).
post #25 of 26 Old 11-28-2012, 06:20 PM - Ruxl
Well, some answers from a bit of testing I have been doing:

I have created an RDM of a 3TB SATA drive connected to the onboard SATA controller and passed it through to an Ubuntu 12.10 guest VM. I am able to access the entire 3TB, and I do have SMART capabilities.

Loading the exact same drive into Server 2012, however, is yielding issues. In Device Manager I see the physical disk, with the proper model and serial number and everything. In Disk Management, however, it shows up as a 0MB volume and won't let me partition or format it whatsoever. Parted within Linux had zero issues with it. I'm assuming this is a 2012 issue and am continuing to research; I'll likely spin up a 2008 R2 server to see if it reacts similarly.

So in the end, we do indeed have SMART control through RDM without VT-d/IOMMU, but I can 100% attest that setting up individual RDMs for anything more than 1 or 2 drives is an administrative nightmare.
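For reference, the check inside the guest is just smartctl against whatever device node the RDM shows up as (/dev/sdb here is only an example; the -d sat hint is sometimes needed when the disk sits behind the virtual SCSI adapter):

Code:
sudo smartctl -a -d sat /dev/sdb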

Here is the SMART output from within Ubuntu:

smartctl 5.43 2012-06-30 r3573 [i686-linux-3.5.0-18-generic] (local build)
Copyright (C) 2002-12 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Model Family: Seagate Barracuda (SATA 3Gb/s, 4K Sectors)
Device Model: ST3000DM001-9YN166
Serial Number: Z1F1AAMW
LU WWN Device Id: 5 000c50 04e647ba2
Firmware Version: CC4H
User Capacity: 3,000,592,982,016 bytes [3.00 TB]
Sector Sizes: 512 bytes logical, 4096 bytes physical
Device is: In smartctl database [for details use: -P show]
ATA Version is: 8
ATA Standard is: ATA-8-ACS revision 4
Local Time is: Wed Nov 28 21:03:54 2012 EST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART STATUS RETURN: incomplete response, ATA output registers missing
SMART overall-health self-assessment test result: PASSED
Warning: This result is based on an Attribute check.
post #26 of 26 Old 11-29-2012, 10:56 AM - BruceD
Ruxl,


Glad to see you could get RDM to work for you.

Yes, my concern with RDM was for multiple hard drives like you would find when using the IBM M1015 PCIe card (8 SATA drives per card).

BruceD

