

Registered · 4,641 Posts · Discussion Starter #1
Given all my work on the previous build, I thought it may be time to create a new thread for the next build.


Version 2 of the storage server -



Link to the old thread, for continuity's sake - http://www.avsforum.com/avs-vb/showthread.php?t=776778




Target Goals:


- 48-drive hotswap capability

- Space dense. These rackmount enclosures have really come down in price, making them very attractive.

- Capability to add more enclosures if needed - via external PCI-e.

- Minimize power consumption - a server/setup of this size is never really going to be "power efficient" per se, but all we can do is minimize it.

- One-switch operation. The main server's power up/power down sequence should turn everything on/off.

- SAS connectivity - I know some of you might beat me up for abandoning eSATA, but there are reasons. Primarily, the eSATA connection is fairly "flimsy". The connectors pop off every now and then, the alarm bells start sounding, and the server starts screaming blue murder.



Products already selected:

- Supermicro SC836 series, 3U enclosures with 16 hotswap bays and SAS backplanes. I have 4 on order. (Why the 3U, and not the 846 series 4U with 24 hotswap bays? Cost. I can get these for a quarter of the price of that one - all 4 of these are about the same price as a single 846 series enclosure. 64 hotswap bays vs 24...no comparison.)

- Motherboard - going to be a dual socket 771, 8-core board; I'm still finalizing the specific model

- CPUs - 50W socket 771 Xeon quad cores

- RAM - 8/16GB DDR2-667 FB-DIMMs (I know, I know...they EAT power)

- Rest TBD.


More to come.
 

Registered · 333 Posts
Is this in competition with the other thread with the 48TB server???


It's the battle of the server sizes... where size matters...


ok, ok.... I have a Supermicro X7DAL-E that has proven to be an awesome motherboard... It's paired up with a Xeon 5320, just one though; I haven't needed to scale into dual quad cores yet...
 

Registered · 333 Posts
How much is one of those cases going to set you back? Those are nice, and I could use a new case, specifically a Supermicro one that would play well with my Supermicro motherboard... right now the Supermicro MB is jerry-rigged up to the non-standard Intel SSI front panel connectors.
 

Registered · 4,641 Posts · Discussion Starter #4

Quote:
Originally Posted by HaLo6 /forum/post/14262480


Is this in competition with the other thread with the 48TB server???


It's the battle of the server sizes... where size matters...


ok, ok.... I have a Supermicro X7DAL-E that has proven to be an awesome motherboard... It's paired up with a Xeon 5320, just one though; I haven't needed to scale into dual quad cores yet...

Oh hell no, I don't compete. V2 grew out of the need to consolidate a few things. My "old" server setup runs just fine, but I'm starting to outgrow it..
Plus, I want to run a couple of virtual servers on the new hardware, and the old hardware just won't cut it for that.


In addition, after almost 2 years with my old setup, I have come to realize the value of proper cabling, labeling, rackmounting, etc. for something this size. Maintenance is a bi*ch with the old setup.
 

Registered · 4,641 Posts · Discussion Starter #5

Quote:
Originally Posted by HaLo6 /forum/post/14262527


How much is one of those cases going to set you back? Those are nice and I could use a new case, specifically a Supermicro one that would play well with my SuperMicro motherboard... right now the Supermicro MB is jerryrigged up to the non-standard Intel SSI front panel connectors.

These retail for roughly $650 each. Of course I'm not paying that. I have *sources*.
 

Registered · 4,641 Posts · Discussion Starter #7

Quote:
Originally Posted by ianeicher /forum/post/14263181


So, are you going to be selling some of your old stuff? It's vastly outdated, you might as well give it away.

Actually yes, I have some of those Norco rackmount cases that I bought for the old setup, and if you're in MD, buy me a beer and you can have em.



I have two short depth ones, and one full depth one. I can get the exact dimensions if needed.
 

Registered · 153 Posts

Quote:
Originally Posted by kapone /forum/post/14263411


Actually yes, I have some of those Norco rackmount cases that I bought for the old setup, and if you're in MD, buy me a beer and you can have em.



I have two short depth ones, and one full depth one. I can get the exact dimensions if needed.

Nope, SoCal. I really do appreciate the offer though. Even your old server is way overkill for me anyway. I only need an 8 bay or the like. I figured I would ask though. It's kind of like horsepower...
 

Registered · 4,641 Posts · Discussion Starter #9
Hmph...it looks like I'll be getting silver units instead of black. Oh well. I guess I can live with it.
 

Registered · 333 Posts
I am in MD... Pax River...
...
 

Registered · 421 Posts

Quote:
Originally Posted by kapone /forum/post/14263411


Actually yes, I have some of those Norco rackmount cases that I bought for the old setup, and if you're in MD, buy me a beer and you can have em.



I have two short depth ones, and one full depth one. I can get the exact dimensions if needed.

I'm in Arlington, but willing to drive. What kind of beer?
 

Registered · 4,641 Posts · Discussion Starter #12
Sorry, was busy with a few things.


I have 2 Norco "short depth" cases; one has a bent handle, so it probably can't be rackmounted, but it can be put on a shelf. The other one is perfectly healthy.


I have 1 Norco extended length (takes e-atx motherboards), which is also completely healthy.


I probably don't have all the screws for these at this point, but they use generic screws, so you can just use any old ones you may have lying around.


I'm stressed for time for the next 2 weeks or so, as I have to make a quick trip overseas as well, but towards the end of the month you're welcome to come by and pick 'em up. Lemme go through the responses to see who gets first dibs, and if they don't want them, anyone can have them.


Heineken or Becks will do.
Just kidding. Instead of throwing them in the recycling, it's better if someone gets some use out of them.
 

Registered · 4,641 Posts · Discussion Starter #13
Ah, shoot...all the iSCSI architecture and diagrams are gone due to the crash...I'll post them again later.


But, slight change of plans. I have managed to acquire another failed company (it's scary how many are failing)...and guess what it had sitting in its inventory...
A whole boatload of these...




These are 2U enclosures with 12 hotswap bays in the front...sigh...gonna need a bigger rack..since I seem to have almost 20 of these chassis now...
 

Registered · 1,844 Posts

Quote:
Originally Posted by kapone /forum/post/14422918


Ah, shoot...all the iSCSI architecture and diagrams are gone due to the crash...I'll post them again later.


But, slight change of plans. I have managed to acquire another failed company (it's scary how many are failing)...and guess what it had sitting in its inventory...
A whole boatload of these...




These are 2U enclosures with 12 hotswap bays in the front...sigh...gonna need a bigger rack..since I seem to have almost 20 of these chassis now...

Very cool...
 

Registered · 4,641 Posts · Discussion Starter #15
What's going to be interesting is the LED management solution I'll have to devise. I have been looking at SAS adapters/converters, and man are they expensive! And even with that, very, very few offer integrated LED management.


I don't get it; don't server and storage vendors run into the same issues with LED management?? I mean, if I'm gonna have 40 or more hard drives connected to a storage server, I'd absolutely want to be able to go into the RAID management software/utility and light up the corresponding LED to locate a drive, as well as have LED identification of failed drives.


Gotta get a boatload of 16-pin flat cables, pin headers, brackets...lol
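

For the curious, here's roughly what I mean by "light up the corresponding LED", sketched as a little script. This assumes a Linux box with sg3_utils and a backplane whose expander shows up as a SES device; the /dev/sg path and slot number are made up for illustration, and an SGPIO-only backplane would need the HBA vendor's utility instead. Not the Ciprico tooling, just the general idea:

Code:
#!/usr/bin/env python3
# Rough sketch: toggle the locate (ident) LED for a drive slot via SES,
# by shelling out to sg_ses from sg3_utils. Run as root.
# ENCLOSURE_DEV and the slot number are placeholders, not real hardware.
import subprocess

ENCLOSURE_DEV = "/dev/sg3"   # hypothetical SES device for one backplane

def set_slot_led(slot: int, element: str, on: bool) -> None:
    # element: "ident" (aka locate) or "fault"
    action = "--set" if on else "--clear"
    subprocess.run(
        ["sg_ses", f"--index={slot}", f"{action}={element}", ENCLOSURE_DEV],
        check=True,
    )

if __name__ == "__main__":
    set_slot_led(7, "ident", True)    # blink slot 7 so I can find the drive
    input("Found it - press Enter to turn the LED back off...")
    set_slot_led(7, "ident", False)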
 

Registered · 3,138 Posts

Quote:
Originally Posted by kapone /forum/post/14423884


What's going to be interesting is the LED management solution I'll have to devise. I have been looking at SAS adapters/converters, and man are they expensive! And even with that, very, very few offer integrated LED management.


I don't get it; don't server and storage vendors run into the same issues with LED management?? I mean, if I'm gonna have 40 or more hard drives connected to a storage server, I'd absolutely want to be able to go into the RAID management software/utility and light up the corresponding LED to locate a drive, as well as have LED identification of failed drives.


Gotta get a boatload of 16-pin flat cables, pin headers, brackets...lol

You don't get special LED headers on normal SATA ports; even the status indicators on the 3726 PMPs aren't programmable. Personally, I've always been able to tell which drive failed by the drive activity lights: I make the filesystem very busy, and the drive that shows no read/write activity is the one to take out of service.
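

That trick is easy to script, too. A rough sketch (Linux, read-only, needs root; the device names are placeholders): hammer one member drive at a time with reads so its activity LED is the only one lit, and note which bay blinks.

Code:
#!/usr/bin/env python3
# Sketch of the "watch the activity light" trick: read from one disk at a
# time so only its LED blinks, and note the physical bay. Read-only, but
# needs root, and the device list below is a placeholder - adjust to taste.
import time

DISKS = [f"/dev/sd{c}" for c in "bcdefgh"]   # hypothetical member drives
SECONDS_PER_DISK = 10
CHUNK = 1024 * 1024                          # 1 MiB reads

for dev in DISKS:
    print(f"Lighting up {dev} for {SECONDS_PER_DISK}s - watch the bays...")
    deadline = time.time() + SECONDS_PER_DISK
    with open(dev, "rb", buffering=0) as f:
        while time.time() < deadline:
            if not f.read(CHUNK):   # wrapped past the end of the device
                f.seek(0)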
 

Registered · 4,641 Posts · Discussion Starter #17

Quote:
Originally Posted by MikeSM /forum/post/14423900


You don't get special LED headers on normal SATA ports; even the status indicators on the 3726 PMPs aren't programmable. Personally, I've always been able to tell which drive failed by the drive activity lights: I make the filesystem very busy, and the drive that shows no read/write activity is the one to take out of service.

That's true. However, v2 is not going to use port multipliers; it's all SAS HBAs, and they have programmable LED headers for each SAS port. I just have to daisy-chain the signal out to the external enclosures.
 

Registered · 4,641 Posts · Discussion Starter #19
The RAID cards selected for v2 are the Ciprico RaidCore 8-port cards. Yes, these are not "hardware" RAID cards but "fake RAID/software RAID", if you will. However, their performance is nothing short of amazing. In my benchmarking, just 8 drives connected to one of the PCI-e cards benchmark at around 600MBps reads in RAID-0 and around 500MBps reads in RAID-5. That's impressive, and to top it off, you can span controllers; a single RAID array could be up to 32 drives.
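

If anyone wants to reproduce numbers in that ballpark on their own array, the dumbest possible sequential-read test looks something like the sketch below. The device path and sizes are placeholders, and you'd want to read a lot more than your RAM so the page cache doesn't flatter the result. Not the exact tool I used, just the idea:

Code:
#!/usr/bin/env python3
# Crude sequential-read benchmark sketch for a RAID block device.
# Device path and sizes are placeholders; needs root to read the raw device.
import time

DEVICE = "/dev/md0"              # hypothetical array device
BLOCK = 4 * 1024 * 1024          # 4 MiB per read
TOTAL = 8 * 1024 * 1024 * 1024   # read 8 GiB in total

bytes_read = 0
start = time.time()
with open(DEVICE, "rb", buffering=0) as f:
    while bytes_read < TOTAL:
        chunk = f.read(BLOCK)
        if not chunk:            # hit the end of the device early
            break
        bytes_read += len(chunk)
elapsed = time.time() - start
print(f"{bytes_read / (1024 * 1024) / elapsed:.0f} MB/s sequential read")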


(Also, I got these for a song...can't even touch any other 8-port PCI-e controllers at these prices.)




Now, these cards (unfortunately) have SFF-8087 internal connectors instead of external. I'd have loved to get the external versions, but apparently Ciprico doesn't make 'em yet; the external ports will be offered only in the newer 5400 series (which, btw, I have on order as well for testing; those are the 16-port versions).


So, to route the SAS ports outside, I have a couple of options.


One is:



http://www.newegg.com/Product/Produc...82E16816215062


BUT, it's $90. $90?? for a friggin adapter??



I'd need 2 of these on the host side (4 RAID cards, for a total of 32 ports) and 3 of these on the enclosure side. That's almost $500 just in adapters. And the friggin SFF-8087 cables ain't cheap either; almost $15 for one. I'd need 8 on the host side! That's another $120.
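

Back-of-the-envelope for anyone checking my math (counts and prices are the ones above):

Code:
# Quick tally of the adapter/cable costs quoted above.
host_adapters = 2          # SFF-8087-to-external brackets on the host side
enclosure_adapters = 3     # one per external disk enclosure
adapter_price = 90         # dollars each

host_cables = 8            # SFF-8087 cables on the host side
cable_price = 15           # dollars each

adapters_total = (host_adapters + enclosure_adapters) * adapter_price   # $450
cables_total = host_cables * cable_price                                # $120
print(f"Adapters: ${adapters_total}, cables: ${cables_total}, "
      f"total: ${adapters_total + cables_total}")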


Ugghhh...


Gonna start getting creative...
 

Registered · 4,641 Posts · Discussion Starter #20
So, the Ciprico cards come with mini-SAS to SATA fan-out cables (2 per card).




Now, instead of using this cable, I could of course go the SAS adapter route as I said above; however, it would be a lot more expensive.


OR, I could get a couple of these:




for the disk enclosures, route the SATA fan-out cables from the host chassis outside, and connect them to these brackets in the disk enclosures. These brackets are not cheap per se, but they're a lot less expensive than the SAS adapters; they run about $25 each.


Now, that presents a dilemma, as these cables are "supposed" to be internal cables, not external, although I have certainly used "internal" SATA cables externally with no issues. I'll have to test this approach though, because by the time the fan-out cables connect to the brackets in the disk enclosures, and then regular SATA cables run from the brackets to the backplane in each disk enclosure, I'll be over the 1m SATA cable spec.


I have used 2m eSATA cables in the past with no ill effects, but they are supposed to be shielded better, hence they can run longer lengths. I suspect a lot of it is hogwash and the manufacturers are just being conservative. When you connect an internal-to-external eSATA adapter to a RAID card, the voltage travelling through the cable doesn't change, and if the 2m cable works, I suspect the regular "internal" cable will work as well. We'll see.


In addition, I'll have to route one 16-pin flat cable from each card outside the host chassis for LED management. I'm still trying to work out a scheme for a "seamless" design for routing these two cable sets.
 