
Mfusick's How to build an affordable 30TB Flexraid media server: Information Requested.! - Page 28

post #811 of 3370
Thread Starter 
Quote:
Originally Posted by politby View Post

Got started on the build, here are a few pictures. Have to use a flash so the quality is what it is...

I will complete the build when I get the parts I am waiting for - NIC, IBM M1015 bracket, and a 4-port PCIe x4 HBA.

Front view. Does not look cheap at all. I've heard Norco owners have had problems with the front panel buttons, hopefully these will not break. Not like they will be used a lot.




120mm fans.




Beautiful motherboard. I'd guess it weighs 4 pounds at least. It also has 5 fan connectors in addition to the CPU fan, so I can power all the case fans off the mobo and hopefully control them with Speedfan. 




Mobo, PSU, graphics card and replacement exhaust fans installed. I picked the XFX PSU based on positive reviews and it has 6 Molex connectors - 5 for the backplanes and one for the slim DVD adapter harness.




VERY tight between the starboard 120mm fan and the backplane power connections. Had to remove the fan to plug in the Molex connectors and getting the fan back in there w/o dislocating the plugs took some effort.



This looks awesome!!! Nice Job !!!

Any updates?
post #812 of 3370

Tip: I figured I'd try to gain some performance by using a dual NIC with teaming, so I bought one of these. Dual Intel based NIC for $25!

 

Windows Server 2012 has built-in NIC teaming independent of NIC and switch manufacturer. :)

post #813 of 3370
Quote:
Originally Posted by politby View Post

Tip: I figured I'd try to gain some performance by using a dual NIC with teaming, so I bought one of these. Dual Intel based NIC for $25!

Windows Server 2012 has built-in NIC teaming independent of NIC and switch manufacturer. :)

Have you done any testing? Do you see higher transfer speeds over the network now? I am thinking of doing triple NIC teaming.

I wonder if Windows 7 has NIC teaming features.
post #814 of 3370
Thread Starter 
I only have a single NIC card.
post #815 of 3370
Not a bad deal and it looks like it is ESXi supported. That would allow me to assign VMs their own port. I could also disable the built in Realtek or just use it only to administer the server.

Also, I thought to take advantage of network teaming you needed a managed network switch. Not sure.
Edited by bryansj - 4/15/13 at 1:28pm
post #816 of 3370
Thread Starter 
Quote:
Originally Posted by bryansj View Post

Not a bad deal and it looks like it is ESXi supported. That would allow me to assign VMs their own port. I could also disable the built in Realtek or just use it only to administer the server.

Also, I thought to take advantage of network teaming you needed a managed network switch. Not sure.

Very interesting. I can't say I have much experience with this personally. I'd be interested in learning more though.
Anyone using teaming?
post #817 of 3370
Quote:
Originally Posted by bryansj View Post

Not a bad deal and it looks like it is ESXi supported. That would allow me to assign VMs their own port.
May not be the best place for this discussion, but can you explain the benefits of ESXi over just the basic VMware player?

In regards to your comments above, I'd point out that I can assign VMs their own port using a bridged adapter in VMware player. Router sees VM as a separate port (can give it a DHCP reservation) and I can run VPN / OpenVPN apps on the VM which (afterwards) show completely different results in a whatismyip lookup. Am I missing some "bridged-adapter-catch" in this case?

Also, by ESXi are you referring to the free bare metal hypervisor http://www.vmware.com/products/vsphere-hypervisor/overview.html or the retail vsphere package?

Is the intent to run multiple VMs without a dedicated host OS? I guess I don't understand the concept if you're already running a server that is always on, just looking for insight
post #818 of 3370
Yes, I'm referring to the free ESXi. The intent is to run it on bare metal so the other VMs aren't relying on a base OS. My ESXi is simply installed on a spare 4GB flash drive. It got annoying to have to terminate and restart VMs whenever the server OS needed to reboot for patches (or crashed); that downtime can be avoided with a bare-metal hypervisor. It also lets me manage the server's hardware resources a bit better, since I can guarantee each VM a level of performance, unlike running them under a server OS that could be busy serving up a ripped Blu-ray disc while another VM is streaming or un-raring, etc.

And for the above, I didn't mean a port like an IP port. I meant a full Gigabit connection (connected to the Ethernet "port") for each VM. I already have a single port Intel Gbit card and the built in Realtek. The Intel is assigned to the WSE2012 VM and the Realtek is for the Ubuntu and Win7. I can't remember which is assigned to ESXi management...
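
To put something concrete behind the "guarantee each VM a level of performance" part: in ESXi that's done with per-VM reservations, limits, and shares. A rough PowerCLI sketch (the host address and VM name below are made up; note the free ESXi license exposes the API read-only, so on a free host you'd set the same values in the vSphere Client under Edit Settings > Resources):

```powershell
# PowerCLI sketch: reserve CPU/RAM for one VM so other guests can't starve it.
# The host IP and VM name are placeholders; adjust to your environment.

Connect-VIServer -Server 192.168.1.50     # hypothetical ESXi host

$vm = Get-VM -Name "WSE2012"              # the VM you want to protect

# Guarantee 2 GHz of CPU and 4 GB of RAM, and raise its CPU share priority
Get-VMResourceConfiguration -VM $vm |
    Set-VMResourceConfiguration -CpuReservationMhz 2000 `
                                -MemReservationMB 4096 `
                                -CpuSharesLevel High
```

Shares only matter when VMs are actually fighting over a resource; a reservation just means that VM never drops below the amount you set.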
post #819 of 3370
Quote:
Originally Posted by Mfusick View Post

Very interesting. I can't say I have much experience with this personally. I'd be interested in learning more though.
Anyone using teaming?

Maybe it doesn't require a managed switch, depending on how you are doing it.

http://technet.microsoft.com/en-us/library/hh831648.aspx

There are two basic sets of algorithms that are used for NIC Teaming:

Algorithms that require the switch to participate in the teaming, also known as switch-dependent modes. These algorithms usually require all the network adapters of the team to be connected to the same switch.

Algorithms that do not require the switch to participate in the teaming, also referred to as switch-independent modes. Because the switch does not know that the network adapter is part of a team, the team network adapters can be connected to different switches. Switch-independent modes do not require that the team members connect to different switches, they merely make it possible.
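
For what it's worth, creating a switch-independent team on Server 2012 is a couple of PowerShell lines. A minimal sketch, assuming made-up team and adapter names (run Get-NetAdapter to see your real ones):

```powershell
# Sketch: switch-independent NIC team on Windows Server 2012.
# "MediaTeam", "Ethernet" and "Ethernet 2" are example names.

Get-NetAdapter      # list the physical NICs and their names

# SwitchIndependent = the switch needs no LACP/teaming config,
# so the members could even be plugged into different switches.
New-NetLbfoTeam -Name "MediaTeam" -TeamMembers "Ethernet", "Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts

Get-NetLbfoTeam     # verify the team came up
```

Keep in mind a single file copy still hashes onto one team member, so teaming mostly helps when several clients hit the server at once rather than making one transfer faster.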
post #820 of 3370
Quote:
Originally Posted by bryansj View Post

And for the above, I didn't mean a port like an IP port. I meant a full Gigabit connection (connected to the Ethernet "port") for each VM. I already have a single port Intel Gbit card and the built in Realtek. The Intel is assigned to the WSE2012 VM and the Realtek is for the Ubuntu and Win7. I can't remember which is assigned to ESXi management...
I see, I'm positive this is possible when running VMplayer in a host OS as well
Quote:
Originally Posted by bryansj View Post

Yes, I'm referring to the free ESXi. The intent is to run it on bare metal so the other VMs aren't relying on a base OS. My ESXi is simply installed on a spare 4GB flash drive. It got annoying to have to terminate and restart VMs whenever the server OS needed to reboot for patches (or crashed); that downtime can be avoided with a bare-metal hypervisor. It also lets me manage the server's hardware resources a bit better, since I can guarantee each VM a level of performance, unlike running them under a server OS that could be busy serving up a ripped Blu-ray disc while another VM is streaming or un-raring, etc.
Now I'm led to another question, I suppose. When you say "guarantee a level of performance", do you happen to know how vSphere manages vCPU cores?

For example, my puny single VM only gets the minimum vCPU allotment (I doubt it needs much more) and 1GB of RAM. I prefer it this way since I'm running most of the important stuff in the host (where I want the resources to go). With vSphere, is that virtual CPU core completely segregated from other VMs? Is this any different from running a VM in a host? I had been under the impression that a host OS can "spread" resources as necessary, so that I'd get the fullest performance in the host that way. Most of the resources I've found on the issue are completely unrelated to my intended use. :) I basically just want the single VM to back off resources if I launch Crysis 3.
post #821 of 3370
I'm sure VMPlayer can do the NIC assignment as well. Just in my case I'd be doing it with ESXi.

I would not consider running ESXi on a gaming rig where you plan on playing Crysis on a guest VM. For one, it doesn't directly support audio. Second, you would need to pass your video card through to the guest VM.
post #822 of 3370
Quote:
Originally Posted by bryansj View Post

I would not consider running ESXi on a gaming rig where you plan on playing Crysis on a guest VM. For one, it doesn't directly support audio. Second, you would need to pass your video card through to the guest VM.
Makes sense. :)
VMplayer it shall stay for now. I guess the model I'm after is obtuse.

My VM needs are minimal, and the server/host is W8, which also functions as a normal HTPC (it may cross over to a gaming HTPC). I appreciate the discussion; it reassures me that the direction I went is satisfactory even though I didn't fully research the available server options.
post #823 of 3370
Quote:
Originally Posted by amarshonarbangla View Post

Have you done any testing? Do you see higher transfer speeds over the network now? I am thinking of doing triple NIC teaming.

I wonder if Windows 7 has NIC teaming features.

Don't have it up and running yet. My HP switch does teaming, so I will test both switch-aware teaming and the WS2012 virtual (OS-level) teaming.
post #824 of 3370
Quote:
Originally Posted by bryansj View Post

Not a bad deal and it looks like it is ESXi supported. That would allow me to assign VMs their own port. I could also disable the built in Realtek or just use it only to administer the server.

Also, I thought to take advantage of network teaming you needed a managed network switch. Not sure.

Heh, maybe I should do a little virtualization too, considering I work for the vendor in question. :)

I always figured that with consumer hardware I am much more likely to have crashes caused by hardware than by software, though, so I run 3 physical servers rather than a bunch of VMs on one.
post #825 of 3370
Quote:
Originally Posted by Mfusick View Post

This looks awesome!!! Nice Job !!!

Any updates?

Thanks.
I still have one HBA coming in (hopefully today) so I haven't completed it yet. Should be done by the end of the week, will post results.
post #826 of 3370
Thread Starter 
post #827 of 3370
It looks like a RAID card with one SAS port. Not too exciting for the retail price
post #828 of 3370
post #829 of 3370
Thread Starter 
Quote:
Originally Posted by bryansj View Post

It looks like a RAID card with one SAS port. Not too exciting for the retail price

Takes SSD caching to a whole new level

It’s tough to wrap your head around HighPoint’s RocketCache, so we’ll try to sum it up as being simply crazy performance, if you’re willing to deal with the configuration hassles.

The RocketCache is a x8 PCIe 2.0 card that lets you connect up to four SATA devices to it via a Medusa-like cable with four SATA 6Gb/s connectors on it. The card lets you run two HDDs with two SSDs for caching, or—more crazy—one HDD with three SSDs for insane caching. That’s not all. You can select between maximum performance, maximum performance with cache protection, RAID 1 with two hard drives and two SSDs for caching speed (maximum performance and protection), and maximum protection, which is RAID 1 with cache written to disks. One important note is that this device is not bootable, which is very unfortunate.



To test the RocketCache, we grabbed a WD 1TB Black drive, two OCZ Vertex 4 SSDs, and one Intel 335 Series SSD, and we ran all tests in Maximum Performance mode, which takes roughly 22GB from each SSD and stripes it together into a 66GB cache. Like other caching products, the size of the 1TB drive remained unchanged, and the extra space on the SSDs not being used for caching—about 217GB or so—is also still available as individual volumes. Since each SSD has its own lane to send and receive data, the configuration is theoretically able to saturate the PCIe interface with up to 1,500MB/s transfer speeds, and we got very close to that in testing with all four drives connected.

First, we connected just the hard drive by itself, and then the Vertex 4 SSD by itself, and ran our tests to show you what each drive is capable of by its lonesome (see benchmark chart). We then ran the HDD with each SSD added, one at a time, and ran our tests several times in order to see if performance would improve as the card began to cache the data used in the tests. Sure enough, it did, and each successive test run showed us increasing speed until we hit a ceiling. It didn’t take long for the 1TB hard drive to become as fast as an SSD, and in many cases performance surpassed that of the lone Vertex 4 SSD, which is not surprising. As an example, when we ran HD Tune on the one-SSD-plus-1TB combo, we initially saw the drive hit 107MB/s sequential read speeds (the same score it hit on its own), then 169MB/s on the next run, then 194MB/s, and on it went all the way up to 242MB/s. PCMark would also show us a “drive-only” score first, around 5,000, then suddenly jump to 40,000 or so—a huge increase in speed.

The RocketCache works as advertised, in other words. The only problem is, who would use this device? We don't see it being used with three SSDs, due to expense (small, fast SSDs aren't that cheap), though if you can swing it you'll be a happy camper. The more interesting aspects are the RAID 1 options, which grant you huge-drive security with the safety of RAID and the speed advantage of drive caching. That is a truly unique combination of performance and security, and makes the RocketCache an interesting product that would kick ass if we could boot from it.

$160, www.highpoint-tech.com

I think it is different than this:
http://www.techpowerup.com/182507/HighPoint-Launches-First-40-Port-SATA-6-Gb-s-HBA.html


Each device port supports up to four SATA hard drives: a single Rocket 750 is capable of supporting up to 40 4TB 6Gb/s SATA disks, for an unprecedented 160TB of direct-attached storage.
post #830 of 3370
Thread Starter 
I'm thinking this is a badass scratch disk. One 3TB 7200.14 and a few cheap SSDs (I have a few lying around). Better than my RAID 0 array now.
post #831 of 3370
A WD Green and some Crucial M4s right?
post #832 of 3370
Thread Starter 
Quote:
Originally Posted by bryansj View Post

A WD Green and some Crucial M4s right?

Lol.

How did you guess?

Actually I have a spare M4 and a few WD Greens. That's the funny part.
post #833 of 3370

Alright, so I got mine finally assembled and up and running:

 

All the PCIe slots ended up occupied. From above - GeForce 210, RocketRAID 622, Lycom PE123i, HP NC360T, IBM M1015. Turns out the only HBA that does true hot swap is the cheap (£29) Lycom card; the IBM card requires a rescan of disks in the Windows disk manager and the mobo ports require a reboot to recognize a newly inserted drive. 
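
Side note: if you want to skip the Disk Management click-through after hot-inserting a drive on the M1015, a quick rescan from PowerShell should do the same thing. A small sketch, assuming Windows 8 / Server 2012 or later (on older versions, the rescan command inside diskpart is the equivalent):

```powershell
# Force Windows to rescan the storage buses after hot-plugging a disk,
# then list the disks it can now see. Uses the built-in Storage module
# (Windows 8 / Server 2012 and later).

Update-HostStorageCache

Get-Disk | Select-Object Number, FriendlyName, Size, OperationalStatus
```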

 

 

 

I installed a cheap DVD drive just in case. Barely enough room between the fans and SFF-8087 plugs.

 

 

 

This I am not too happy with. As I only use Molex and mobo power, all the other cables on the PSU are just bunched up unused. Does not look tidy at all. Why is it that everything made in China seems to come with cables stiff as boa constrictors?  

 

 

At home in the rack. Those 120mm fans plus the Arctic 80mm ones I added make it very quiet. The HP switch has two 40mm fans and it makes more noise than all the other gear combined.

The 2 other systems are a backup domain controller (top) and an Exchange/web server.  

The blue light at left on top of the new case is the smart card reader for my sat TV decryption. The server runs Oscam, which is a DVB decryption app that serves my HTPC DVB-S2 tuners.

 

 

I also installed FlexRAID. Very impressive!


Edited by politby - 4/18/13 at 12:57am
post #834 of 3370
M1015 can't do hotswaps? Can anyone else confirm this please?

Impressive build btw. Love the rack. What brand is it?

And is that a power conditioner/UPS above the CoolerMaster case?
post #835 of 3370
Quote:
Originally Posted by amarshonarbangla View Post

M1015 can't do hotswaps? Can anyone else confirm this please?

Impressive build btw. Love the rack. What brand is it?

And is that a power conditioner/UPS above the CoolerMaster case?

 

My M1015 does require a disk rescan. But maybe it has to do with the motherboard (Asus Sabertooth 990FX V2)?

 

The rack is an old ("Hewlett Packard" logo) HP ProCurve networking rack - 38U high I believe - that I got for the equivalent of $30 from a local company that did a server refresh. :) It's got built-in fans, power rail, etc. Pretty good find.

 

No power conditioner, unfortunately - between the CoolerMaster and the switch is just empty space.

post #836 of 3370
Thread Starter 
Quote:
Originally Posted by amarshonarbangla View Post

M1015 can't do hotswaps? Can anyone else confirm this please?

Impressive build btw. Love the rack. What brand is it?

And is that a power conditioner/UPS above the CoolerMaster case?


I hot swap mine all the time. Never bothered to check if it was. I just make sure the HDD is not in use and yank it out.

I've done this 50 times without issue.
post #837 of 3370
Quote:
Originally Posted by Mfusick View Post

I hot swap mine all the time. Never bothered to check if it was. I just make sure the HDD is not in use and yank it out.

I've done this 50 times without issue.

Maybe his M1015 isn't flashed to IT mode?
post #838 of 3370
Quote:
Originally Posted by bryansj View Post

Maybe his M1015 isn't flashed to IT mode?

It is. Maybe I need to check again. I only tested once.
post #839 of 3370
Thread Starter 
post #840 of 3370
Thread Starter 
If you're trying to install Windows Home Server to an SSD smaller than 160GB, here is how I do it:

Hit SHIFT+F10 to bring up the command prompt. Then type notepad.exe and hit Enter. This will bring up Notepad. Click File, then Open, and browse to SKU/SERVERHOMEPREMIUM.def. You need to select "All Files" at the bottom (not just text documents) to see it. Then just edit the size of the HDD from the 160GB to anything you want it to be. Save and close.

Then just type wpshell.exe and hit Enter... setup will continue on your drive under 160GB :)
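
In other words, the whole trick at the Setup command prompt boils down to this (the parentheses are just notes; type only the commands):

```
Shift+F10          (opens a command prompt inside Setup)
notepad.exe        (File > Open, file type "All Files", open SKU/SERVERHOMEPREMIUM.def,
                    change the 160GB minimum size, save and close)
wpshell.exe        (Setup resumes and accepts the drive under 160GB)
```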
