
ncage · Discussion Starter · #1
Guys, I know a little bit about high-end storage stuff, but I'm FAR from an expert, so excuse me if I say something stupid. After the whole WHS fiasco I have decided to build a new storage server. I'm not sure yet what type of setup (OS) I will be running; it might involve ESXi, FreeBSD, Linux, etc., so I need something that is really compatible. Anyways, this is the case I've selected:

http://www.ipcdirect.net/servlet/Detail?no=202


I've been googling around the past few days trying to figure things out. Previously I didn't know what a miniSAS connector was. Anyways, it would of course be NICE to have a single card that would service all those drives, but I sure can't afford a card like this:
http://www.newegg.com/Product/Produc...82E16816151037


I think I have decided to get two of these cards:
http://www.scsi4me.com/ibm-serveraid...ontroller.html


OK, that would service 16 drives, or 4 of the 5 miniSAS connectors on the backplane of the case. I will have 6 SATA ports on my motherboard. Is there any way to convert 4 of the SATA connectors on my motherboard to feed the 5th miniSAS connector on the backplane? I would like to use the other two SATA connectors on the motherboard for a RAID-1 setup for whatever OS I decide to use.


I learned a lot from this post:
http://www.avsforum.com/avs-vb/showthread.php?t=1149005


In that post he uses an expander and I'm not quite sure why. I don't understand the concept of an expander 100%, but I'm assuming it lets you split the 4 lanes in one connector out to more drives at slightly less bandwidth per drive. Of course the expander itself costs an extra $200, so I'd rather just use the SATA connectors on my board if I could. Any help would be appreciated.


thanks,

Ncage
 

kenoka · Registered
To connect your motherboard SATA ports to the SAS backplane, you need a SATA to SAS reverse breakout cable. A forward breakout cable goes from SAS card to SATA hard drives. Reverse breakout goes from SATA ports to SAS backplane.
 

ncage · Discussion Starter · #3

Quote:
Originally Posted by kenoka


To connect your motherboard SATA ports to the SAS backplane, you need a SATA to SAS reverse breakout cable. A forward breakout cable goes from SAS card to SATA hard drives. Reverse breakout goes from SATA ports to SAS backplane.

And those will work with miniSAS too, not just SAS? Because (I would think) you would need to plug 4 cables in on one end and have just 1 connector out on the other?
 

kenoka · Registered

Quote:
Originally Posted by ncage


And those will work with miniSAS too, not just SAS? Because (I would think) you would need to plug 4 cables in on one end and have just 1 connector out on the other?

I'm talking about a connection between four SATA ports and an SFF-8087, which is what the Norco case has.


As for storage server solutions, I really like unRAID. I currently have a server running eight data drives totaling 10 TB. The features I like about unRAID:


data redundancy with a single parity drive. The parity is calculated across all drives, so the loss of any disk can be compensated for. unRAID doesn't stripe data across drives though, so if you lose two drives, you only lose the data on those drives, rather than the whole array. This is much more drive efficient than WHS's mirroring scheme.


Ability to add drives of any size at any time: as long as your parity drive matches the size of the largest drive in the array, you can add any size drives you wish.


Ability to use cheap hardware: I'm using a two or three year old motherboard and CPU, and many others use even older stuff. It's not very hardware intensive at all. You can also upgrade hardware pretty pain free. unRAID doesn't care as long as the drive assignments stay the same.
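For anyone who hasn't seen single-parity redundancy before, here's a minimal sketch of the idea (just the XOR principle, not unRAID's actual implementation): the parity drive holds the XOR of every data drive, so any one missing drive can be rebuilt from the survivors plus parity.

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR the corresponding bytes of several equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, byte_group) for byte_group in zip(*blocks))

# Hypothetical 8-byte stripes standing in for the same block on each data drive.
data_drives = [b"movie001", b"photo002", b"music003", b"docsfile"]

# The parity drive stores the XOR of all data drives for this stripe.
parity = xor_blocks(data_drives)

# Simulate losing drive 2 and rebuilding it from the survivors plus parity.
survivors = [blk for i, blk in enumerate(data_drives) if i != 2]
rebuilt = xor_blocks(survivors + [parity])
assert rebuilt == data_drives[2]   # the lost drive's contents are recovered
```

Since unRAID doesn't stripe, each data drive still holds complete files; parity only comes into play when a drive goes missing.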
 

ncage · Discussion Starter · #5

Quote:
Originally Posted by kenoka


I'm talking about a connection between four SATA ports and an SFF-8087, which is what the Norco case has.


As for storage server solutions, I really like unRAID. I currently have a server running eight data drives totaling 10 TB. The features I like about unRAID:


data redundancy with a single parity drive. The parity is calculated across all drives, so the loss of any disk can be compensated for. unRAID doesn't stripe data across drives though, so if you lose two drives, you only lose the data on those drives, rather than the whole array. This is much more drive efficient than WHS's mirroring scheme.


Ability to add drives of any size at any time: as long as your parity drive matches the size of the largest drive in the array, you can add any size drives you wish.


Ability to use cheap hardware: I'm using a two or three year old motherboard and CPU, and many others use even older stuff. It's not very hardware intensive at all. You can also upgrade hardware pretty pain free. unRAID doesn't care as long as the drive assignments stay the same.

Thanks a lot for the help. I just wish there were some good, centralized info on the web about this stuff; I've picked up what I know mostly in piecemeal fashion.


Ya, unRAID is on my list. To be honest, if ZFS were ready for prime time on FreeNAS, that's what I would use. Unfortunately it's not. Hopefully with the next version (8) it will be.
 

ncage · Discussion Starter · #6

Quote:
Originally Posted by kenoka


I'm talking about a connection between four SATA ports and an SFF-8087, which is what the Norco case has.

Could you link to a 4x SATA to 1x miniSAS cable?


I have no trouble finding 1x miniSAS to 4x SATA cables, but not the other way around.


thanks...
 

apnar · Registered

Quote:
Originally Posted by ncage


Ya, unRAID is on my list. To be honest, if ZFS were ready for prime time on FreeNAS, that's what I would use. Unfortunately it's not. Hopefully with the next version (8) it will be.

If you're looking for ZFS, take a glance at Nexenta Core, which is free, or their commercial product NexentaStor (which has a free version if you stay under 18 TB). They are an OpenSolaris kernel with a Linux-type userland. I've been running my NAS for almost a year on Nexenta Core and have been quite happy with it.


You of course lose a good bit of flexibility going with a ZFS solution instead of an unRAID solution (like not being able to make effective use of mixed-size drives and not being able to efficiently add drives one or two at a time), but you gain a lot of other features (checksumming, dedupe, snapshots, etc.). I just decided I'll be adding my drives in waves: I started with 6, my next add will be 7 more, then one last add of 7; each group is its own zpool. So I'll end up filling my 20 drive bays and will lose 3 to parity.


-apnar
 

kenoka · Registered
You do get drive pooling though, with each pool getting its own parity. That's a nice feature when you're populating so many drives. Personally, I wouldn't build a 20 drive server with a single parity. I'm planning for a 12 drive system, and I think that's about as far as I'm willing to push it. I know lots of folks have the Norco case and it works for them, but I guess I'm a bit conservative there.
 

ncage · Discussion Starter · #10

Quote:
Originally Posted by apnar


If you're looking for ZFS, take a glance at Nexenta Core, which is free, or their commercial product NexentaStor (which has a free version if you stay under 18 TB). They are an OpenSolaris kernel with a Linux-type userland. I've been running my NAS for almost a year on Nexenta Core and have been quite happy with it.


You of course lose a good bit of flexibility going with a ZFS solution instead of an unRAID solution (like not being able to make effective use of mixed-size drives and not being able to efficiently add drives one or two at a time), but you gain a lot of other features (checksumming, dedupe, snapshots, etc.). I just decided I'll be adding my drives in waves: I started with 6, my next add will be 7 more, then one last add of 7; each group is its own zpool. So I'll end up filling my 20 drive bays and will lose 3 to parity.


-apnar

Interesting... I thought you could mix & match drive sizes with ZFS. So it's like a traditional RAID solution where the drives have to be the same size?


There are so many solutions; I just need to weigh the cost and all the +/- of each. I don't think any solution is perfect. I need to look more into FlexRAID, which seems quite interesting. To be honest, right now I'm running Windows Home Server inside a Hyper-V instance, which has worked out just fine, though it would make restoring things a little tricky if the host ever went down. I'm still considering ESXi. I would go with Hyper-V, but of course Hyper-V only provides true optimization (enlightenment) for a limited set of guest OSes, a limitation ESXi doesn't have. I tried FreeBSD a few nights ago in a VM and it was awesome. Too bad ZFS isn't stable yet in the current version.
 

ncage · Discussion Starter · #11

Quote:
Originally Posted by kenoka


You do get drive pooling though, with each pool getting its own parity. That's a nice feature when you're populating so many drives. Personally, I wouldn't build a 20 drive server with a single parity. I'm planning for a 12 drive system, and I think that's about as far as I'm willing to push it. I know lots of folks have the Norco case and it works for them, but I guess I'm a bit conservative there.

Why do you say that? Because you're worried about more than one drive failing at once? I'm not too concerned with that, because the likelihood (even though it could happen) of two drives failing at the same time would be very low. Even hardware RAID can fail, I know. I remember at work when a RAID controller failed on one of our servers and corrupted everything on that machine. That is why I will be backing everything up in the cloud to CrashPlan, just in case.
 

apnar · Registered

Quote:
Originally Posted by ncage


Interesting... I thought you could mix & match drive sizes with ZFS. So it's like a traditional RAID solution where the drives have to be the same size?

So you can and you can't. Within a pool it will only use as much space on each disk as the smallest disk in that pool, so it makes sense to keep all the drives in a pool the same size. You can have different size disks in different pools though. So in my example, my 6 drives are each 1.5 TB, but my next add of 7 drives will likely be 3 TB drives, all of which will be fully used.

You can think of the pools sort of like raid groups. You can add as many raid groups as you want, but you can't expand a raid group. You can make raid groups similar to either raid 1 (mirror), raid 5 (single parity, called raid-z), or raid 6 (double parity, called raid-z2). You can even mix and match raid types together, so if I wanted to I could add 2 more drives as a raid 1 to my existing 6 which are in a raid 5, but that wouldn't be a very efficient use of space. Also, once you add a raid group there is no way to remove it.
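Here's a rough sketch of the capacity math under a simplified model (real ZFS loses a bit more to metadata and reserved space; the drive counts and sizes are just the plan described above): each raidz group uses only as much of each disk as its smallest member, and returns (number of disks minus parity) times that.

```python
def raidz_usable_tb(disk_sizes_tb, parity_disks=1):
    """Approximate usable space of one raidz group, ignoring ZFS metadata overhead."""
    per_disk = min(disk_sizes_tb)                  # larger disks are truncated to the smallest
    return (len(disk_sizes_tb) - parity_disks) * per_disk

# Three separate single-parity groups added in waves (each its own zpool):
wave1 = [1.5] * 6   # six 1.5 TB drives, already installed
wave2 = [3.0] * 7   # seven 3 TB drives, planned
wave3 = [3.0] * 7   # seven more 3 TB drives, planned

total = sum(raidz_usable_tb(w) for w in (wave1, wave2, wave3))
print(total)                                   # 43.5 TB usable; 3 of 20 drives go to parity

# Mixing sizes inside one group wastes the difference:
print(raidz_usable_tb([1.5, 1.5, 1.5, 3.0]))   # 4.5 TB -- the 3 TB disk is half unused
```

Which is also why it makes sense to keep each wave's drives the same size.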


-apnar
 

apnar · Registered

Quote:
Originally Posted by ncage


Why do you say that? Because you're worried about more than one drive failing at once? I'm not too concerned with that, because the likelihood (even though it could happen) of two drives failing at the same time would be very low.

In the storage world it actually happens a lot more than you'd expect. There are a number of things that contribute to it. First, the drives in an array tend to be purchased from the same place at the same time, so they are usually from the same batch of drives; as such they tend to share similar characteristics and failure timelines. Second, as soon as one drive fails the remaining drives have to work harder: every single read or write has to touch every drive while doing parity calculations. Third, once the failed drive is replaced, the entire array is maxed out during reconstruction, putting all drives under stress and raising the temperature of the array as a whole. Basically, you're stressing all the drives to their maximum at a time when you can least afford another failure.


To kenoka's point, you don't want to put too many drives in a raid group, primarily because the time to reconstruct the array increases as more disks are added. In some cases, with today's huge SATA disks in these arrays, you're looking at reconstruct times measured in days.
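To put a rough number on that (purely illustrative rates, not from any particular drive spec): even an idle rebuild is limited by how fast the replacement disk can be rewritten, and on an array that's still serving other I/O the effective rate drops sharply.

```python
def rebuild_hours(capacity_tb, rate_mb_per_s):
    """Hours to rewrite an entire drive at a given sustained rate (illustrative only)."""
    return capacity_tb * 1e6 / rate_mb_per_s / 3600

print(round(rebuild_hours(2.0, 100), 1))   # ~5.6 h for a 2 TB disk at an optimistic 100 MB/s
print(round(rebuild_hours(2.0, 20), 1))    # ~27.8 h at 20 MB/s on a busy array
```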
 

apnar · Registered

Quote:
Originally Posted by ncage


I'm still considering ESXi. I would go with Hyper-V, but of course Hyper-V only provides true optimization (enlightenment) for a limited set of guest OSes, a limitation ESXi doesn't have. I tried FreeBSD a few nights ago in a VM and it was awesome. Too bad ZFS isn't stable yet in the current version.

I'm not using it myself, but there are some folks on the Nexenta forums running it very well under ESXi. Using boards that support Intel's VT-d virtualization extension, they pass the PCIe SAS/SATA controllers directly through to the VM running Nexenta. They even have that Nexenta VM export NFS back to the ESXi server to host other VMs.


You can find some details in the ESXi section of this page:

http://www.nexenta.org/projects/site...CP_virtualized
 