ESXi Build With 16.568 TB, 16 GB RAM, 6 core CPU - AVS Forum
post #1 of 112 - 03-18-2013, 07:56 AM - bryansj (AVS Special Member, Atlanta, GA) - Thread Starter

I realized my server had its four-year birthday last Friday (looking at my Newegg invoice). Since my wife is a little more appreciative of my server because of this, I figured now would be a good time to squeeze in an upgrade. What I had was an Intel C2D E8400 running on a GIGABYTE GA-E7AUM-DS2H LGA 775 NVIDIA GeForce 9400 HDMI Micro ATX Intel motherboard with 8GB of DDR2 RAM (my first HTPC with a 4GB RAM upgrade).

I tried to install VMware ESXi on it when I transitioned from WHS 2011 to Windows Server 2012 Essentials, but it didn't like something, which I assume was the Nvidia-based motherboard and NIC. I didn't want to spend a ton of money on the server, and every time I created a parts list of compatible hardware for an Intel CPU I would end up over budget. I wanted the Supermicro X9SCM-F and Xeon E3-1230v2 that they had at my local Microcenter, but then I'd have to use ECC memory, and the X9SCM-F is only Ivy Bridge compatible if the BIOS firmware is v2. I couldn't guarantee the BIOS version and wouldn't be able to flash it without a Sandy Bridge compatible rig. Anyway, on top of the firmware unknown and the ECC pushing it over budget, I'd have had to drop down to an i3 dual core. I'd rather have at least a quad core for ESXi, so I looked into AMD.

I ended up piecing together a package that has ESXi compatible hardware using:

Mobo: ASRock 970 Extreme4 Socket AM3+ 970 ATX AMD Motherboard $99.99 -$40.00 CPU Bundle
CPU: FX 6300 Black Edition 3.5GHz Six-Core $119.99
RAM: Crucial Ballistix Sport 16GB (2x8GB) DDR3-1600 (PC3-12800) CL9 $99.99 -$10.00
NIC: Intel PCIe 82574L Gigabit Ethernet Controller $39.99

~$330 w/ tax from local Microcenter

I already had a case but got a 600W power supply just in case I needed it to replace my older 460W. Turns out the old 460W didn't have a direct 8 pin CPU connector, only a 4 pin, so I used the new 600W instead. I think I had a molex to 4 pin adapter in my electronic/pc toolbox that might have worked with the existing 4 pin, but I liked the new PS better so I didn't bother to figure it out. I also went and bought an aftermarket heatsink and fan for the CPU.

Existing reuse was an Antec 300 case, 120mm fans, an IBM M1015 flashed to LSI 9211-8i IT firmware, SAS-to-SATA cables, an old Nvidia 430 video card, and all storage drives. The drives are 3x 3TB Seagate, 1x 2TB Seagate, 2x 2TB WD Green, 1x 1TB WD Green, 1x 500GB WD Green, a 64GB Crucial M4 SSD, and a 4GB flash drive. The 3x 3TB and 1x 2TB Seagates are my FlexRAID array. The 2x 2TB are a mirrored Storage Space for work disk usage, the 1x 1TB was an extra that I added for installing ESXi VMs (I may remove it), the 500GB is external for the server backup, and the 4GB flash drive is for the ESXi installation.
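As a sanity check on the thread title, the drive list above sums to exactly 16.568 TB if you use the vendors' decimal-TB sizes. A quick sketch:

```python
# Sum the drive capacities listed above, in decimal TB as drive vendors rate them.
drives_tb = [
    3, 3, 3,    # 3x 3TB Seagate (FlexRAID)
    2,          # 1x 2TB Seagate (FlexRAID)
    2, 2,       # 2x 2TB WD Green (mirrored Storage Space)
    1,          # 1x 1TB WD Green (ESXi VM datastore)
    0.5,        # 500GB WD Green (server backup)
    0.064,      # 64GB Crucial M4 SSD
    0.004,      # 4GB flash drive (ESXi boot)
]
print(f"{sum(drives_tb):.3f} TB")  # 16.568 TB, matching the thread title
```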

I threw everything into another case while the existing server remained online, and used the 1TB disk for VM provisioned storage. Installation went well, with ESXi installing to a basic 4GB USB2 flash drive. Everything except the onboard sound was picked up by ESXi; even the Realtek network card worked. I created a couple of VMs and got them working: one was an Ubuntu machine converted from VirtualBox into ESXi, the other was the Win7 x86 VM. Then I created one for my WSE 2012, pulled its 500GB external backup drive, and connected it to the ESXi server to see how well a restore worked. It restored, and everything worked except that it complained about missing all of its storage, since the drives were still in the old machine.

So I made the hardware swap; pictures were attached to the original post.

The only problem I ran into was trying to restore my WSE 2012 backup onto the same 64GB SSD I had been using. I provisioned it, and during the restore it complained about not having enough space, which I assume is because ESXi adds a bit of storage overhead that slightly shrinks the usable disk size. I just created a 200GB disk out of the old 1TB drive and restored to that.
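The shortfall is easy to see with rough numbers. This is an illustrative sketch; the overhead figure is an assumption, not a measured value from the actual SSD:

```python
# Why a VMDK sized to the original 64GB disk won't fit back on the same SSD:
# VMFS formatting and ESXi metadata consume some of the drive, so the usable
# datastore ends up slightly smaller than the raw device.
ssd_bytes = 64 * 10**9                # vendor-rated "64 GB" SSD
assumed_overhead = int(1.5 * 2**30)   # ~1.5 GiB for VMFS metadata (assumed)
datastore_bytes = ssd_bytes - assumed_overhead
backup_needs = 64 * 10**9             # the backup expects the full original disk
print(datastore_bytes < backup_needs)  # True: "not enough space" on restore
```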

I either need to get a decent 7,200 rpm disk for the VMs or shrink down my WSE 2012 to fit on the slightly smaller disk. For now I have three VMs running: WSE 2012, Ubuntu for my Newznab server, and Win7 x86 for my streaming. It was a "fun" weekend playing with all the hardware and software. Now that everything is running, all that is left is to tweak it until I break something...

post #2 of 112 - 03-18-2013, 08:32 AM - ncarty97 (Advanced Member)

Very cool. I've been thinking of doing ESXi once I rebuild my server (it had almost identical hardware to your old one, so I'm glad I didn't run it on that). I may just borrow your build, since I found it confusing to figure out what did and did not work!

What are you streaming with the Win 7 x86 install (and why x86 over x64)?

I had been thinking of the following VMs if I go this route:

1) WHS2011
2) Ubuntu to run PlexServer
3) Win7 for DVR to run out to extenders (no other media though)
post #3 of 112 - 03-18-2013, 10:21 AM - bryansj (Thread Starter)

I'm streaming this http://www.avsforum.com/t/1462261/heres-something-different-that-im-doing-with-my-server which can be summarized by going to nuthatchcam.com.

I chose Win7 x86 for three reasons. One reason was I just needed the most basic install to run the software listed in the above thread. Second would be that if needed I would have an x86 OS for anything that failed to run with x64 (haven't needed it for that). Third and most important would be that I think I deleted my x64 .iso images for some reason and only had x86. I didn't want to download an x64 image again for the purpose...

I found a similar build posted somewhere when I searched for "whitebox esxi". It used either the older 6100 processor or the 8-core 8320. Microcenter had the $40-off mobo + CPU combo for the 6300 and not the 83x0. The two extra cores in an 8320 were going to cost $20 more, plus I'd lose the $40 combo and take on increased power/heat. I figured six cores was enough and still kept me at a decent budget.

post #4 of 112 - 03-18-2013, 10:24 AM - oman321 (AVS Special Member, MASS)

Gotta love Micro Center. That's where I sourced 90% of my parts for my Server build a year and a half ago. Love the motherboard and CPU combo deals.

ncarty97, curious as to why Ubuntu to run PlexServer? It runs just fine on WHS2011 or Win7. Just wondering.


post #5 of 112 - 03-18-2013, 10:52 AM - bryansj (Thread Starter)

Unfortunately Microcenter is close enough to work that I can go there during lunch... I don't need to be tempted any more than I am already. Went to return a couple items today and actually walked out empty handed (which is good). Saw they had a liquid cooling package on sale for like $50...

post #6 of 112 - 03-18-2013, 11:15 AM - ncarty97

Quote:
Originally Posted by bryansj View Post

...

I found a similar build posted somewhere, searched for "whitebox esxi". It used the older 6100 processor or the 8 core 8320. Microcenter had the $40 off mobo + cpu combo for the 6300 and not the 83x0. The two extra cores in a 8320 were going to cost $20 more plus I'd lose the $40 combo and increased power/heat. I figured six cores was enough and still kept me at a decent budget.

Makes sense! I was thinking an 8 core for max expandability, but I really doubt I'll ever need that much horsepower.

Quote:
Originally Posted by oman321 View Post

Gotta love Micro Center. That's where I sourced 90% of my parts for my Server build a year and a half ago. Love the motherboard and CPU combo deals.

Yeah I also love how they basically troll tigerdirect and newegg and adjust their prices to be competitive. I try to give them all my business now.
Quote:
ncarty97, curious as to why Ubuntu to run PlexServer? Runs just fine on WHS2011 or Win7. Just wondering.

I want to keep the PlexServer separate from my WHS. Essentially I want to keep the WHS as clean as possible to limit possible issues (I like to set and forget). I'm not against W7, but that adds the cost of an additional W7 license. Since that VM will do nothing but PlexServer, using a Linux distro seemed to make the most sense.

Quote:
Originally Posted by bryansj View Post

Unfortunately Microcenter is close enough to work that I can go there during lunch... I don't need to be tempted any more than I am already. Went to return a couple items today and actually walked out empty handed (which is good). Saw they had a liquid cooling package on sale for like $50...

It's just far enough to be a good undertaking for me, but not so far that I'm not willing to do it when I see something I really want!
post #7 of 112 - 03-18-2013, 12:12 PM - oman321

Ya, for me Micro Center is about 35 to 40 minutes away from where I live, but only about 15 minutes on the highway from my mom's, so it's easy to get there if I really need to on the weekend or something.

ncarty97, I don't think it would matter much since like you point out you can get Linux at no cost, but you can simply run PMS on the W7 install. It's basically an application and you wouldn't need a dedicated license just for that.

In any event you may have your reasons to do it how you state so no harm no foul if that's what you wanna do.

I've considered VMs a bit in my mind as well. I have a dedicated WHS2011 machine with a Phenom II Black (unlocked the 2 additional cores, making it a usable quad core) and an MSI 870A motherboard (combo deal I got from Micro Center). I haven't really looked too much to see if I can do that with this machine, but I may be able to. The reason I have been thinking of it is that, as you guys probably know already, WHS2011 doesn't support WMC, which I would need for offsite viewing if I ever end up getting the HDHR Prime network tuner that I intended to get when I first built this thing. If I were doing it today, I probably would have just done a W7 machine and used that as a server, but WHS does have some useful features regardless.

I have a HTPC that has W7 on it but that machine is pretty basic and probably not up to task.


post #8 of 112 - 03-18-2013, 01:14 PM - ncarty97

Quote:
Originally Posted by oman321 View Post


ncarty97, I don't think it would matter much since like you point out you can get Linux at no cost, but you can simply run PMS on the W7 install. It's basically an application and you wouldn't need a dedicated license just for that.

In any event you may have your reasons to do it how you state so no harm no foul if that's what you wanna do.

Yeah, it's more about separating duties. I don't feel like getting a call from my wife about how the DVR isn't working because it slows down or something while Plex is transcoding video to me on the road and to my brother down at his place! It's all about WAF!
post #9 of 112 - 03-18-2013, 02:15 PM - Mfusick (AVS Addicted Member, Western MA)

WOW :D

post #10 of 112 - 03-19-2013, 12:45 PM - bryansj (Thread Starter)

Got my SSD install sorted out. I went into Disk Management on the WSE 2012 install and chose the option to shrink my C:\ drive. It only took off 1.9GB, but that was enough for the most part. I then ran a new backup on the server.

The next issue was that the VM wouldn't start with the SSD selected and its storage set near max. The problem was that the VM's swap file gets added to the disk unless it is relocated, or unless you reduce the RAM to get a smaller swap file. Once I chose to move the swap file off the SSD, everything worked.
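The sizing rule behind this is simple: ESXi creates a per-VM swap file (.vswp) equal to the VM's configured RAM minus any memory reservation, and by default it lives on the same datastore as the VM. A sketch with assumed numbers (the free-space figure is hypothetical):

```python
# .vswp size = configured RAM - memory reservation (in GB).
def vswp_size_gb(configured_ram_gb, reservation_gb=0):
    return max(configured_ram_gb - reservation_gb, 0)

free_on_ssd_gb = 2.0                     # assumed free space on the near-full SSD
print(vswp_size_gb(4))                   # 4: a 4GB-RAM VM with no reservation
print(vswp_size_gb(4) > free_on_ssd_gb)  # True: power-on fails until the swap
                                         # file is relocated or RAM is reserved
```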

The good thing about working with ESXi was that I could create a new WSE 2012 VM while leaving the existing one running; I just had to remove the backup USB drive and attach it to the new VM (all through software, no plugging/unplugging needed). Once the restore was complete, I shut down the current VM, passed through the HBA card and remaining USB devices, and started it up. You sure do get spoiled by SSD OS drives once you get another taste of WD Greens.

post #11 of 112 - 03-19-2013, 04:18 PM - oman321

Congrats, I'm impressed with your rig and setup. Sounds like it's quite awesome when you can just create a new virtual machine. Nice!!


post #12 of 112 - 04-18-2013, 06:55 AM - ncarty97

Any new issues? Just got my tax refund and I'm about to pull the trigger on my new build!

The logical side of me says "Six Cores will be MORE than enough." The nerd side of me says "But why would you want only 6 when you can have EIGHT?!?!?!?"

The nerd side seems to be winning....
post #13 of 112 - 04-18-2013, 07:44 AM - ncarty97

Well, it looks like MicroCenter has the 8320 (and the 8350, for that matter) with the $40-off bundle, so I think I'm going to go 8-core. The 8350 doesn't seem worth the extra $40 over the 8320, though.
post #14 of 112 - 04-18-2013, 08:08 AM - bryansj (Thread Starter)

I'd get the FX-8320 if I were buying today; only the four and six core CPUs were part of the $40 bundle for me. There is the extra power draw, but if I were concerned about power I'd cram all the drives into my HTPC and deal with the noise.

I'd also like to have planned my VM guests' datastore drives a bit better. What would be best is a 64GB SSD for each VM. Right now I have my WSE2012 on one SSD and all my others sharing a 2TB HDD. It is rather trivial to move them, but with hindsight I might have added another SSD to my shopping list.

Also, I upgraded the CPU fan and heatsink because the stock one seemed very loud. I assumed it was just some crappy OEM combo that AMD used. Turns out the ASRock motherboard's default CPU fan speed setting is FULL ON. I might have gotten away with the OEM fan and heatsink once I changed the setting. Still, the Cooler Master Hyper 212 EVO CPU cooler from Microcenter is a good deal for what you get.

post #15 of 112 - 04-18-2013, 08:14 AM - ncarty97

Thanks! Have you seen a performance hit from having all the VMs on the 2TB drive? I have two 160GB drives (RAID 1), not SSDs, that I use for my WHS2011 install right now. I was thinking of separating them, using one as the base drive for WHS2011 and the other for the two remaining VMs.
post #16 of 112 - 04-18-2013, 08:24 AM - bryansj (Thread Starter)

Yes, I see a hit. However, my primary VM that I want to keep happy is WSE2012, and it has the SSD. The other VMs aren't as important performance-wise and share the WD Green 2TB. When my Ubuntu VM is churning away processing data every 10 to 15 minutes, the other VMs on the disk are a bit sluggish until it finishes. I would split your RAID 1 and use the drives separately as datastores.

There is another option to consider with ESXi: you can add an SSD as a cache drive and link your physical HDD to it. That might solve my performance issue while adding only a single SSD.

Keep in mind for your WHS2011 that I was able to restore my server backup directly into a VM without reinstalling. I simply created the WHS2011 VM and attached both the installation DVD (.iso) and my server backup USB drive. Once restored, it installed a few drivers and booted to the desktop with no issues. I simply installed VMware Tools and was good to go.

post #17 of 112 - 04-18-2013, 08:38 AM - ncarty97

Nice! I will give that a try.
post #18 of 112 - 04-18-2013, 10:24 AM - mattlach (Member)

Nice,

I did a similar Microcenter CPU combo build for my virtualized server:
- Gigabyte GA990FXA-UD3 Motherboard
- AMD FX-8120 (8 cores, 3.1GHz, turbos to 4.0GHz)
- 32GB RAM (Why not, RAM is cheap, and FreeNAS LOVES RAM)
- Intel Dual port gigabit Server NIC
- Broadcom NetXtreme single port gigabit server NIC
- IBM M1015 8 drive SAS controller
- Boots from, and keeps its datastores on, a 128GB SSD, so that VMs come back up quickly after a reboot (wouldn't want to go without internet access for more than a few seconds, now would we? :P)


I have a bunch of VMs on there, most of them just for playing around, but the more serious ones are:

pfSense:
I use it instead of a consumer router. Much more reliable, and since I have a few gamers refreshing server lists in the house, we no longer run out of states in the NAT table, since I have it set to a million :P. To cut down on latency, I have DirectPath I/O forwarded a dual gigabit Intel server NIC to this VM, which cuts out the vSwitches, which aren't the lowest latency.
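Bumping the state table to a million entries has a memory cost worth budgeting for. The roughly-1-KB-per-state figure below is the commonly cited rule of thumb for pf, used here as an assumption rather than a measured value:

```python
# Approximate RAM consumed by a full pf state table (~1 KB/state is the usual
# rule of thumb; actual per-state size varies by pf version).
max_states = 1_000_000
bytes_per_state = 1024
ram_gib = max_states * bytes_per_state / 2**30
print(f"~{ram_gib:.2f} GiB")  # ~0.95 GiB just for the state table
```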

Since there is no consumer router anymore, I have the LAN side connected to an HP ProCurve 1810G-24 Layer 2 managed switch. I also use a Ubiquiti UniFi Long Range wireless access point for my wifi needs, which blows any consumer router completely out of the water from a range and performance perspective.



FreeNAS:
This handles all my storage, and it is great. The server sits in my basement, out of earshot, so noisy fans and drives are not a problem. None of the other systems in the house have mechanical drives anymore. They boot and keep applications on small, inexpensive SSDs, and all the data is stored on the NAS via gigabit ethernet.

Since FreeNAS loves RAM, I have given it 24GB of the server's RAM. Also, since I use the integrated motherboard storage controller to boot ESXi and for datastores, I can't forward it to FreeNAS, so I looked around for a third-party controller known to work with FreeNAS when forwarded using DirectPath I/O. Turns out this combination is not very common. I bought an IBM M1015 SAS controller relatively cheap on eBay and flashed it with an LSI IT-mode firmware that turned it into a plain JBOD controller, perfect for compatibility and ZFS. I use adapter cables to hook it up to 6 regular SATA WD Green 3TB drives in a RAIDZ2 config. FreeNAS also has its own dedicated NIC, a DirectPath I/O forwarded Broadcom NetXtreme gigabit card. Locally the system benchmark gives me ~480MB/s, which is pretty awesome, but it's limited to ~80MB/s remotely via SMB due to gigabit ethernet (~110MB/s via NFS thanks to the more efficient protocol, but security is more problematic).
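For reference, the capacity and throughput numbers in that setup line up with simple arithmetic. A sketch (the real pool also loses a little to ZFS metadata and reserved space):

```python
# 6x 3TB drives in RAIDZ2: two drives' worth of parity, four of data.
drives, parity, drive_tb = 6, 2, 3
data_tb = (drives - parity) * drive_tb
print(f"{data_tb} TB raw data, ~{data_tb * 10**12 / 2**40:.1f} TiB")
# 12 TB raw data, ~10.9 TiB (before ZFS overhead)

# Gigabit ethernet caps remote transfers no matter how fast the pool is locally:
wire_mb_s = 10**9 / 8 / 10**6
print(f"{wire_mb_s:.0f} MB/s")  # 125 MB/s theoretical, less after protocol overhead
```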

Ubuntu Linux Server:
I pretty much run this for my generic server needs (rtorrent, the Ubiquiti UniFi access point control software, the No-IP client for dynamic DNS, and a few others). It seems superfluous until you factor in just how annoying FreeBSD is to use, especially the locked-down, specialized pfSense and FreeNAS versions.

It is hooked up to FreeNAS via NFS internally on a virtual 10-gigabit ethernet interface (which for some reason seems to max out just north of 2Gbit, but it is virtual, so...) to avoid bottlenecking if it accesses the storage at the same time as a client does. Externally it shares the integrated ethernet with the ESXi management console.



I absolutely love this setup, as the virtualization allows me to do pretty much whatever I want, whenever I want, even if it requires a different OS!

The server is still running the stock AMD cooler, which I may need to upgrade before the summer comes around as Boston summers get deceptively hot, and my basement is not air conditioned.
post #19 of 112 - 04-18-2013, 10:34 AM - ncarty97

Wow. Quite the set up. Thanks for sharing.
post #20 of 112 - 04-18-2013, 11:15 AM - mattlach

NP.

I figured I'd mention my configuration here, as it took me quite some time, and a lot of not-inexpensive trial and error, to select components that actually wound up working for my needs (particularly in finding a motherboard that supported DirectPath I/O mapping and a storage controller that would allow itself to be forwarded), as well as a lot of switch and wireless AP research.

If I can, it would be nice to help others avoid doing the same thing :)

I'm kind of hoping the lower-power FX-8300 winds up being available as a standalone CPU at some point (right now it's only sold to OEMs, and I can't find it on eBay or anywhere else). If it does, I'll replace my 8120 with it. Anything that reduces power usage and heat generation is a bonus in my book.
post #21 of 112 - 04-18-2013, 12:17 PM - bryansj (Thread Starter)

I might look into adding PFSense as a VM for my next project. Thanks for the post.

post #22 of 112 - 04-18-2013, 12:27 PM - mattlach

Quote:
Originally Posted by bryansj View Post

I might look into adding PFSense as a VM for my next project. Thanks for the post.

NP,

Yeah, once you have an ESXi box, there really isn't much of a reason not to.

pfSense is very powerful, and it only requires a couple of gigs of drive space and a gig or less of RAM. CPU utilization typically isn't crazy either (unless you have very high internet bandwidth and many clients).

The only cost involved is getting two decent gigabit NICs (the onboard ones are garbage; do not use them for this).

I went the more expensive route to save expansion slots and got a dual port Intel server NIC, but single port Intel NICs are only $30 a pop. You probably already have a switch, and there you go.

To add awe-inspiring wifi to the network, check out Ubiquiti UniFi. Every bit as powerful as enterprise Cisco systems at a fraction of the price. A regular 802.11G AP is only $80, and if you need to blanket a huge building with wifi, the software makes it easy to manage large deployments. I just need one Long Range unit for my house. (Of course, their 802.11AC unit came out right after I upgraded to the Long Range unit. Isn't that always the case... :P)

Anyway, the point is, all together this is in the same price range as a single high-end consumer router, and it will blow that consumer router completely out of the water.
post #23 of 112 - 04-20-2013, 06:26 AM - bryansj (Thread Starter)

Quote:
Originally Posted by bryansj View Post

I might look into adding PFSense as a VM for my next project. Thanks for the post.

OK, last night I tried a pfSense VM install. I followed this guide http://doc.pfsense.org/index.php/PfSense_2_on_VMware_ESXi_5 and the install went fine. I'm using an Intel Gbit NIC for LAN and the onboard Realtek for WAN; I'll add more Intel NICs later. However, once I moved the other VMs into the DMZ as suggested at the bottom of the guide, I could not get them connected to the network. The guide sort of drops you off with an incomplete DMZ setup. I realized the OPT1 interface for the DMZ vSwitch wasn't assigned in pfSense, because the guide adds it post-install, so I assigned it, but I couldn't figure out what else needed to be done to make the VMs see anything. By then it was very late, so I hit the abort button, unplugged the LAN/WAN cables, and reconnected through my router. Any ideas on how to get this working, picking up at the DMZ virtual switch?

It is annoying to tweak this, since I have to take down the WAN and break most of the LAN while "playing" with it. It also causes downtime for this stream, which creates a low WAF.

post #24 of 112 - 04-20-2013, 06:40 AM - WonHung (AVS Special Member)

Quote:
Originally Posted by mattlach View Post

...

To add awe-inspiring wifi to the network, check out Ubiquiti UniFi. Every bit as powerful as enterprise Cisco systems at a fraction of the price.

I think it's a stretch to say the Ubiquiti products are every bit as capable as the Cisco Aironet products. That said, the Ubiquiti gear is much better than just doing a hack job of forcing independent APs to function under the same SSID.


post #25 of 112 - 04-21-2013, 07:36 AM - bryansj (Thread Starter)

I'm still not having any luck figuring out how to make pfSense work with the DMZ vSwitch. I guess I could do it without the DMZ, but I'd rather do it the suggested way. It's not like my actual router is broken...

post #26 of 112 - 05-07-2013, 08:26 AM - bryansj (Thread Starter)

I now have my router retired to wireless access point status, acting as a simple gigabit switch for my TV, AVR, and Xbox (all 100 Mbps devices). I added a dual port Intel NIC to my ESXi box and have a pfSense VM doing the routing and firewall duties. I figured out the DMZ situation and have all the VMs and everything else communicating correctly. My main problem before was that I didn't have much of a window for downtime to figure out what was going on and had to throw in the towel and revert. Once I gave myself some time, I was able to sort everything out. In hindsight I could have set it all up within half an hour.

post #27 of 112 - 05-07-2013, 08:43 AM - mattlach

I never used a DMZ.

In fact, I DirectPath I/O forward a dual gigabit Intel NIC to my pfSense VM, so I never have to use a vSwitch for it, and it's as if my pfSense router were running on a separate box.

Unfortunately I don't know much about the DMZ setup.

Why do you want to put the VM's in a DMZ?
post #28 of 112 - 05-07-2013, 09:41 AM - bryansj (Thread Starter)

Quote:
Originally Posted by mattlach View Post

I never used a DMZ.

In fact, I Direct I/O forward a dual gigabit Intel NIC to my pfSense VM, so that I never have to use a vswitch for it, and its as if my pfSense router is running on a separate box.

Unfortunately I don't know much about the DMZ setup.

Why do you want to put the VM's in a DMZ?

I was following the pfSense ESXi guide linked above; at the bottom they mention creating a DMZ for VMs, but then the guide ends. At first I assumed you were supposed to put all your VMs behind this DMZ for some reason. Turns out the DMZ is for sandboxing any VMs that you don't want on your LAN. I could have easily left it out. I went ahead and created one, but right now I have nothing inside it.

post #29 of 112 - 05-08-2013, 08:33 PM - Puwaha (AVS Special Member)

Yeah, you don't want your VMs in a DMZ as they are not protected by the router's firewall. Usually you only put something like a web server on a DMZ, and even then it's usually better to just forward the appropriate ports to a web server.

I am with the rest of you: VMware ESXi is awesome. Just make sure you get compatible hardware. I run a VM as my router as well, but I like IPFire; it was able to keep up with my 50Mb/s fiber connection. I also run a PIAF VM for free VoIP home phone action through Google Voice. Then there's another VM running Windows 7 to serve up CableCARD action to a couple of extenders for the kids, an Ubuntu VM serving as an OpenVPN gateway so the in-laws can stream my media collection, and finally a WHS v1 VM to offload TV recordings and provide backups. All this running on a cheapo Sempron processor and 8GB of RAM!
post #30 of 112 - 05-09-2013, 05:04 AM - bryansj (Thread Starter)

Well, a DMZ in pfSense isn't like a typical router's DMZ. In a typical router, putting something in the DMZ opens it up to direct access from the WAN while it remains accessible to your LAN, bypassing any firewall. In pfSense, putting something in a DMZ is like throwing it into a pit and covering it with dirt: it is inaccessible to both the LAN and WAN and can only be reached by opening up rules in the firewall. In the case of ESXi running pfSense and a "web server", you would create a vSwitch without an adapter connected to it and place the web server VM on that DMZ vSwitch. Then in pfSense you would allow the DMZ access to only port 80 (and 443) on the WAN.
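A toy first-match rule check makes the default-deny behavior concrete. These are hypothetical rules written in plain Python for illustration, not pfSense syntax:

```python
# Toy first-match firewall mirroring the default-deny DMZ described above:
# nothing passes unless a rule explicitly allows it.
RULES = [
    {"src": "dmz", "dst": "wan", "port": 80,  "action": "pass"},
    {"src": "dmz", "dst": "wan", "port": 443, "action": "pass"},
]

def check(src, dst, port):
    for r in RULES:
        if (r["src"], r["dst"], r["port"]) == (src, dst, port):
            return r["action"]
    return "block"  # implicit default-deny, like pfSense's final rule

print(check("dmz", "wan", 443))  # pass
print(check("dmz", "lan", 445))  # block: the DMZ host can't reach the LAN
```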
