
Mfusick's How to build an affordable 30TB Flexraid media server: Information Requested.! - Page 2

post #31 of 3344
Here is a research paper Microsoft wrote on disk media errors, back before we had 1TB drives:
https://research.microsoft.com/apps/pubs/default.aspx?id=64599

Here is a more recent paper from Dr. Steve Hetzler, an IBM research scientist:
http://drhetzler.com/smorgastor/2012/04/02/series-ssd-1-hard-errors-and-reliability/
post #32 of 3344
Thread Starter 
Quote:
Originally Posted by duff99 View Post

This depends on your setup. At some point I saw you mention PCI-X cards and PCI cards. If this is the case it is obviously less than optimal. If you meant PCI Express cards, that's a different story. In the cheap server builds I've seen, people are actually using PCI Intel NICs to keep their PCI-E slots free for HBA cards. The 133 MB/s limit for PCI isn't going to handicap the NIC. Now if you've got an HBA (SATA card) on your PCI bus, that's going to be a problem. Since all of the PCI slots are probably sharing 1 PCI-E lane, they will be competing for the available bandwidth. Everything you have in a PCI-E slot has its own dedicated bandwidth. So the moral of the story is I wouldn't put more than one device on the PCI bus, since they are sharing bandwidth. Anything on PCI-E has dedicated bandwidth. Now if you've got HBAs on PCI, for most of what you do it should work fine. Just streaming a video isn't going to be limited by PCI. Where the limited bandwidth is going to come into play is if you've got to do a rebuild. When you're trying to access more than one disk at a time on PCI, they're going to be bandwidth limited.

Thanks! I actually have both, but I'll configure the setup better on the rebuild.
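For a rough sense of the numbers duff99 describes above, here is a quick back-of-the-envelope comparison. This is only a sketch: the per-lane and per-drive figures are assumed nominal values for PCIe 2.0 and a typical 7200 rpm drive, and real-world throughput is lower after protocol overhead.

```python
# Rough bandwidth comparison: shared legacy PCI bus vs. dedicated PCIe lanes.
# Figures are nominal assumptions; real throughput is lower after overhead.

PCI_BUS_MBPS = 133      # classic 32-bit/33 MHz PCI, shared by every PCI device
PCIE2_LANE_MBPS = 500   # PCIe 2.0, per lane, per direction
HDD_MBPS = 130          # sustained read of a typical 7200 rpm drive of the era

def rebuild_speed_on_pci(drives: int) -> float:
    """All drives on the PCI bus share one 133 MB/s pipe."""
    return min(HDD_MBPS, PCI_BUS_MBPS / drives)

def rebuild_speed_on_pcie(drives: int, lanes: int = 8) -> float:
    """An x8 HBA gives each drive far more bandwidth than it can use."""
    return min(HDD_MBPS, PCIE2_LANE_MBPS * lanes / drives)

for n in (1, 2, 4):
    print(f"{n} drive(s): PCI ~{rebuild_speed_on_pci(n):.0f} MB/s each, "
          f"PCIe x8 ~{rebuild_speed_on_pcie(n):.0f} MB/s each")

# Four drives on PCI drop to ~33 MB/s each, which is why parity checks and
# rebuilds crawl; a single streamed movie (< 10 MB/s) never notices.
```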
post #33 of 3344
Thread Starter 
Quote:
Originally Posted by renethx View Post

Not "PCI" (there is no "PCI" in Z77/H77 chipset smile.gif) but "PCI Express", they are completely different, i.e, parallel shared bus topology vs. serial point-to-point bus topology. There is no "slow down" or "compete" in PCI Express. Moreover NIC is even connected to the chipset's PCI Express controller (x1 link), while RAID card will be connected to CPU's PCI Express controller (x16 or x8 link).
BTW onboard NIC is also connected to the chipset via a PCI Express x1 link, it's no different from a discrete NIC.
Chipset and CPU are connected via Direct Media Interface (DMI) 2.0 = PCI Express 2.0 x4 link.

Currently I have an older PCI RAID card that I use simply for extra SATA ports.

I also have a smaller-slot PCI-X card that I also use for more SATA ports.

The PCI card has 4 ports, the PCI-X card has 2, plus 4 SATA2 ports on my mobo.

I have ten drives so I'm almost at capacity. I have one SATA3 port left; the other SATA3 port is used for my server OS 60GB SSD drive.

Got a recommendation for a good SATA card for my server rebuild? I'd like to replace my older, cheaper cards with an upgrade for best performance, and I also need to get some more ports.

The new motherboard I'll use in the server rebuild should have two more SATA3 ports, so I'm probably good for a couple months... but I'm thinking I might want to grab a good-value SATA card and use that in the rebuild.

Advice?
post #34 of 3344
Quote:
Originally Posted by Mfusick View Post

Currently I have an older PCI RAID card that I use simply for extra SATA ports.
I also have a smaller-slot PCI-X card that I also use for more SATA ports.
The PCI card has 4 ports, the PCI-X card has 2, plus 4 SATA2 ports on my mobo.
I have ten drives so I'm almost at capacity. I have one SATA3 port left; the other SATA3 port is used for my server OS 60GB SSD drive.
Got a recommendation for a good SATA card for my server rebuild? I'd like to replace my older, cheaper cards with an upgrade for best performance, and I also need to get some more ports.
The new motherboard I'll use in the server rebuild should have two more SATA3 ports, so I'm probably good for a couple months... but I'm thinking I might want to grab a good-value SATA card and use that in the rebuild.
Advice?

For just ports, which is all you need for software solutions like FlexRAID and ZFS, a cheap LSI 2008 SAS card = 8 SATA3 ports in an x8 slot.
It can work in open-ended shorter slots as well. You can turn a closed-ended slot into an open-ended one with an X-Acto knife and a steady hand.

The IBM M1015 is currently the hot deal: $100 or less for card + cables.

Beware of any x1 SATA card with 2 or 4 ports; they are all garbage for various reasons, usually the chipset.
post #35 of 3344
Quote:
Originally Posted by Aluminum View Post

For just ports, which is all you need for software solutions like FlexRAID and ZFS, a cheap LSI 2008 SAS card = 8 SATA3 ports in an x8 slot.
It can work in open-ended shorter slots as well. You can turn a closed-ended slot into an open-ended one with an X-Acto knife and a steady hand.
The IBM M1015 is currently the hot deal: $100 or less for card + cables.
Beware of any x1 SATA card with 2 or 4 ports; they are all garbage for various reasons, usually the chipset.

I second that. Definitely the best way to go.
post #36 of 3344
Thread Starter 
Thanks guys!
post #37 of 3344
FYI, I just picked up one of these for my WHS 2011 machine. I got the drivers from here. It's working just fine, although I won't be using the RAID features.

Monoprice has a few other models too. Gotta love Monoprice's prices!
post #38 of 3344
Quote:
Originally Posted by Mfusick View Post

Currently I have an older PCI RAID card that I use simply for extra SATA ports.
I also have a smaller-slot PCI-X card that I also use for more SATA ports.
The PCI card has 4 ports, the PCI-X card has 2, plus 4 SATA2 ports on my mobo.
I have ten drives so I'm almost at capacity. I have one SATA3 port left; the other SATA3 port is used for my server OS 60GB SSD drive.
Got a recommendation for a good SATA card for my server rebuild? I'd like to replace my older, cheaper cards with an upgrade for best performance, and I also need to get some more ports.
The new motherboard I'll use in the server rebuild should have two more SATA3 ports, so I'm probably good for a couple months... but I'm thinking I might want to grab a good-value SATA card and use that in the rebuild.
Advice?

Perhaps

- Supermicro AOC-SAS2LP-MV8 8-port SAS/SATA 6.0Gb/s PCI Express 2.0 x8 card (Marvell 88SE9480 chip), ~$100

is the most straightforward, cheap pure HBA (i.e. non-RAID) card. I have been using it in several systems and it's pretty good. Two of these cards + 8 onboard SATA ports = 24 SATA ports in a NORCO RPC-4224 case.

As others mentioned, LSISAS2008 chip-based cards include:

- IBM ServeRAID M1015 8-port SAS/SATA 6.0Gb/s RAID 0, 1, 10 and JBOD PCI Express 2.0 x8 card, ~$80 on eBay
- LSI MegaRAID SAS 9240-8i (= M1015 + RAID 5, 50), > $200
- LSI SAS 9211-8i, >$200

The last card is a pure HBA (hence no "MegaRAID") and is recommended for your purpose. The IBM M1015 can be turned into a 9211-8i by flashing the firmware (OK, I have no hands-on experience with this card).
Edited by renethx - 11/9/12 at 3:35pm
post #39 of 3344
Thread Starter 
Big thanks!
post #40 of 3344
Quote:
Originally Posted by Tong Chia View Post

No problem, a learning exercise is a good reason.
For starters you need a motherboard with 4 or more SATA connectors; for this to work well you need a quad core and at least 8GB of RAM (a dual core with hyperthreading will also work). The Intel NICs are the best supported. Recent NVIDIA or Radeon cards are well supported by the Solaris X server.
An HBA is optional until you get something running. You can try this if you have at least 3 disks for RAIDZ; as an experiment they can be leftovers from a past upgrade.
The most interesting route is a bare-metal hypervisor like VMware vSphere/ESXi 5.1; this is the freebie.
Your motherboard and CPU must support VT-x and preferably VT-d. Check your BIOS for this.
Create 2 VMs, one WHS and the other Solaris 11.
Solaris 11 will get you ZFS, NFS and iSCSI. I would use SMB/CIFS for starters as it is the easiest to set up. You can download the VM images from Oracle if you want to skip the installation hassle.
Create your ZFS pool and enable NFS, iSCSI and CIFS. The napp-it tool is probably the easiest way to get this going.
An alternative to Oracle is Nexenta, based on the OpenIndiana fork of Solaris; it has a Debian user environment.

I definitely want to go the ESXi route. I've already got an ESXi server set up with WHS, Win 7, and a few other things. I'm all set with the server hardware: Supermicro board with passthrough, Xeon, and ECC RAM. I want to keep it in one box so I can get the speed advantage; if I separate the systems I'd have to deal with the network limitations. So for my plan I really do need the HBA. I've got some boxes I could use to set up a ZFS server to play with. I really don't have the disks, though. I've tended to pass my old disks down to computers for my kids and other family members. At this point I've only got one non-server in the house with a spinner, so I can't free up drives by upgrading machines to SSDs. My plan was to upgrade the 1 TB drives in my Unraid server to 2 TB and use the old disks; I just haven't gotten around to it yet. I seem to get distracted and just build another computer with my available funds. Maybe this will give me the kick in the butt I need to finally do it. I do plan on using napp-it for the setup since I like a nice GUI.

I don't advocate ZFS for everyone. I think it's the best file system out there, but it's a whole lot more work to get set up, and it certainly has more complicated hardware requirements. Honestly, on this forum I usually recommend Unraid; it's what I'm using now. I can also see the advantages of FlexRAID or SnapRAID. They all accomplish similar goals, namely providing some degree of protection for our media files, and that gets the job done for the majority of use cases here. Our focus here is on HTPCs; servers are a useful part of the ecosystem, not the focus. Usually I leave ZFS for other forums. As for other posters, I can't program like them, so this is my hobby. I like building things and making them work. I can definitely see why someone would like a simpler system that just works and probably provides 95% of the functionality. I'm not trying to sell ZFS to everyone. I just saw ZFS brought up and thought I'd share what I would like to do if I could afford it.
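To make the quoted recipe above concrete, here is a minimal sketch of the ZFS steps that napp-it automates, driven from Python. The pool name, the device names c1t0d0-c1t2d0, and the share property names are placeholders/assumptions, not anything from this thread; Solaris 11 uses share.smb/share.nfs while illumos derivatives such as OpenIndiana use sharesmb/sharenfs, so adjust for your own system.

```python
# Minimal sketch of the ZFS setup Tong Chia describes, assuming a
# Solaris/OpenIndiana guest with three spare disks. Device names, the pool
# name and the share properties are placeholders; run as root.
import subprocess

DISKS = ["c1t0d0", "c1t1d0", "c1t2d0"]   # three disks -> single-parity RAIDZ
POOL = "tank"

def run(*args: str) -> None:
    """Echo and execute a command, stopping on the first failure."""
    print("+", " ".join(args))
    subprocess.run(args, check=True)

# Create a RAIDZ pool from the three disks.
run("zpool", "create", POOL, "raidz", *DISKS)

# Create a filesystem for media and share it over SMB/CIFS and NFS
# (illumos-style property names; Solaris 11 spells them share.smb/share.nfs).
run("zfs", "create", f"{POOL}/media")
run("zfs", "set", "sharesmb=on", f"{POOL}/media")
run("zfs", "set", "sharenfs=on", f"{POOL}/media")

# Check pool health.
run("zpool", "status", POOL)
```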
post #41 of 3344
Quote:
Originally Posted by Tong Chia View Post

Nope, the ZFS pool defaults to the smallest storage unit, so you get only 2TB. It is terrible as a logical volume manager.

Not so fast. A vdev can be a single drive or a set of drives with some sort of RAID-X underneath it. If you don't care about redundancy, you can add each drive as its own single-drive vdev and then stripe across all of them. That will get you full usage of your mixed-size drives.
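To put numbers on that trade-off, a small sketch comparing usable space for a hypothetical mix of drive sizes (the sizes below are made up for illustration): a RAIDZ vdev is limited by its smallest member, while a stripe of single-disk vdevs uses every byte but has no redundancy.

```python
# Usable capacity for mixed drive sizes: stripe of single-disk vdevs vs. one
# RAIDZ vdev. Drive sizes (in TB) are invented for illustration.
drives_tb = [2, 2, 1, 1, 0.5]

# Each drive as its own vdev, striped together: full capacity, zero redundancy.
stripe_tb = sum(drives_tb)

# One single-parity RAIDZ vdev: every member is treated as the smallest drive,
# and one drive's worth of space goes to parity.
raidz_tb = (len(drives_tb) - 1) * min(drives_tb)

print(f"stripe of single-disk vdevs: {stripe_tb} TB usable, any failure loses the pool")
print(f"one raidz vdev:              {raidz_tb} TB usable, survives one drive failure")

# 6.5 TB vs 2.0 TB here -- which is exactly why mixed drive sizes push people
# toward FlexRAID/SnapRAID/Unraid-style parity instead of ZFS striping.
```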
post #42 of 3344
Quote:
Originally Posted by Tong Chia View Post

The most interesting route is a bare-metal hypervisor like VMware vSphere/ESXi 5.1; this is the freebie.
Your motherboard and CPU must support VT-x and preferably VT-d. Check your BIOS for this.
Create 2 VMs, one WHS and the other Solaris 11.
Solaris 11 will get you ZFS, NFS and iSCSI. I would use SMB/CIFS for starters as it is the easiest to set up. You can download the VM images from Oracle if you want to skip the installation hassle.
Create your ZFS pool and enable NFS, iSCSI and CIFS. The napp-it tool is probably the easiest way to get this going.
An alternative to Oracle is Nexenta, based on the OpenIndiana fork of Solaris; it has a Debian user environment.

This is exactly what I do.

VMware ESXi runs all my VMs, and I have a separate test machine that runs NAS4Free (BSD-based ZFS) and provides iSCSI storage for all my VMs. I am satisfied enough with the performance over 1Gb NICs with iSCSI that I am building a single more powerful server to consolidate both machines into a virtualized utopia. The internal transfer rates between VMs over the paravirtualized NICs will run at 10Gb/s, so Ethernet will never be the bottleneck for iSCSI. I also have a third box that's been around a while that runs Server 2008 with StarWind iSCSI (free) to provide storage for a VM, to test how FlexRAID performs with iSCSI. It performs admirably, though I don't trust FlexRAID to protect an ever-changing iSCSI VHD.
post #43 of 3344
Thread Starter 
Quote:
Originally Posted by lockdown571 View Post

FYI, I just picked up one of these for my WHS 2011 machine. I got the drivers from here. It's working just fine, although I won't be using the RAID features.
Monoprice has a few other models too. Gotta love Monoprice's prices!

What's the big difference between a $15 card like this and a $100 card like renethx is recommending?

FYI, I have a couple of these cheap cards already.
post #44 of 3344
As long as they work stably, there is no difference, of course.

If you build a new system from scratch, I recommend a motherboard with 8 SATA ports (can be had for < $100) plus zero, one, or two SAS2LP cards, for 8 / 16 / 24 SATA ports in total.

Otherwise, use whatever you already have and save money.
post #45 of 3344
Quote:
Originally Posted by renethx View Post

As long as they work stably, there is no difference, of course.

They don't; that's the problem.

LSI chips are in literally every major SAN/NAS vendor's gear worth buying, and they get rebranded by everyone in their own servers, all for good reasons.
Personally I won't touch anything else with my data except motherboard chipset ports (and not those "extra 2 RAID ports" that cost OEMs 50 cents so they can have another feature bullet point).

Silent data problems don't show up until it's painful.

Cheap things have their place; just know their limitations.
I have lots of bargain-bin screwdriver and ratchet sets freighted over from China, and they are still useful for casual stuff. But if I'm rebuilding an engine or whatever, out come the real tools with actual tolerances.
post #46 of 3344
Thread Starter 
It's just a home server with movies and TV shows; I'm not sure it's super important data.
post #47 of 3344
Quote:
Originally Posted by Aluminum View Post

They don't; that's the problem.
LSI chips are in literally every major SAN/NAS vendor's gear worth buying, and they get rebranded by everyone in their own servers, all for good reasons.
Personally I won't touch anything else with my data except motherboard chipset ports (and not those "extra 2 RAID ports" that cost OEMs 50 cents so they can have another feature bullet point).
Silent data problems don't show up until it's painful.
Cheap things have their place; just know their limitations.
I have lots of bargain-bin screwdriver and ratchet sets freighted over from China, and they are still useful for casual stuff. But if I'm rebuilding an engine or whatever, out come the real tools with actual tolerances.

Do you have data (scientific / quantitative) that backs up your claim?
Quote:
Originally Posted by Mfusick View Post

It's just a home server with movies and TV shows; I'm not sure it's super important data.

Exactly. I see very little relevance of the "silent data corruption" problem in a home theater environment. We are not handling critical data that could, say, jeopardize our lives or a company, just media for our entertainment. A movie can easily be retrieved from the original disc. TV programs? Well, they are already plenty corrupted in my eyes, let alone downloaded media.

Personally I never listen to these server professionals, in particular in the HTPC area (somehow I always feel they lack a good sense of balance).
Edited by renethx - 11/11/12 at 5:06am
post #48 of 3344
Thread Starter 
I'm all for maxing out my network connection, since copy-pasting a Blu-ray rip movie folder is something I do constantly and the MKV is 20+ GB in size.

I'd like 130 MB/sec performance, so if a cheapo RAID or SATA card is limiting me I'd spend up.

But even a cheap SATA2 card usually offers faster-than-HDD performance.

My only question is whether a SATA card splitting the speed of one drive among 4 drives might be an issue.

If I'm over 100 MB/sec on a copy-paste then I'm happy.

The fastest HDD in my desktop only hits 150 MB/sec anyway, and that's good speed for an HDD. At a certain point the HDD's mechanical speed is the limit, so I'd like the network connection and the SATA card to at least offer that.
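As a sanity check on that 130 MB/s target: a single gigabit link tops out below it, so the wire, not a decent SATA card, is usually what caps copy-paste speed. Rough numbers only; the ~94% efficiency factor after Ethernet/TCP/SMB overhead is an assumption.

```python
# Rough ceiling of a gigabit link versus the 130 MB/s wish.
GIGABIT_BITS = 1_000_000_000                    # 1 Gb/s line rate
WIRE_MAX_MBS = GIGABIT_BITS / 8 / 1e6           # 125 MB/s theoretical
PRACTICAL_MBS = WIRE_MAX_MBS * 0.94             # ~117 MB/s after overhead (assumed efficiency)

MKV_GB = 20
for label, rate in [("gigabit (practical)", PRACTICAL_MBS),
                    ("single 7200 rpm HDD", 130),
                    ("cheap PCI card, 4 busy drives", 33)]:
    minutes = MKV_GB * 1000 / rate / 60         # GB -> MB, then seconds -> minutes
    print(f"{label:30s} ~{rate:3.0f} MB/s -> 20 GB copy in ~{minutes:.1f} min")

# So roughly 110-117 MB/s is the realistic best case over one gigabit NIC;
# hitting 130 MB/s would need link aggregation or 10 GbE, not a faster SATA card.
```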
post #49 of 3344
Thread Starter 
If a few of my movies have "data rot" or whatever and don't play, I'll live. It's not life-or-death stuff here.

I just want good performance and good value.

Price to performance to capacity is the ratio I'm trying to maximize.

I've used a cheapo SATA card in my 20TB FlexRAID server for the last 6 months without any issues so far.

But since I need another, I'm thinking I might want to get something decent.
post #50 of 3344
The cheap four-port cards work well until you've got to access more than one drive at a time, like during a rebuild. Then you'll be seriously limited. I personally have one of these cards and one of the Supermicro cards in my Unraid server. I'd like to replace the Monoprice card, but it's functional. It does take a big hit on my parity check and rebuild speeds, though.

On the subject of the LSI cards: as previously stated, they are enterprise grade. These other cards would probably work fine for your use case; the LSI cards will work for any use case. That would come in handy if you get influenced by any of the crazy talk that we started on your various threads. They also offer more bandwidth than the other options. If you want to be prepared for any option, the LSI cards are the way to go.

Here is a good write-up from the Unraid forums. It also includes the files and info on flashing to IT mode, which turns them into plain HBAs instead of RAID cards.
http://lime-technology.com/forum/index.php?topic=12767.0

If you're happy with what you've got, there is no real need to upgrade. The easiest upgrade is the Supermicro card recommended by renethx; if you just stick with WHS it will work well. If you think you'll ever experiment with other OSes, or just want a better card for possibly less money (along with possibly more work in the initial setup), I'd go with the LSI card. If I were building today, it's what I'd go with.
post #51 of 3344
Quote:
Originally Posted by Mfusick View Post

If a few of my movies have "data rot" or whatever and don't play, I'll live. It's not life-or-death stuff here.

Leave deep error recovery on the drive enabled (it's the default) and you won't have to see any of it; the manufacturers put it in for exactly this reason.

Enabling TLER (Time-Limited Error Recovery) disables this protection; once you do that, another error recovery mechanism is needed.
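If you want to see where a drive stands, smartmontools can read (and on most drives set) this error-recovery timeout. A small sketch, assuming smartctl is installed and that /dev/sda is a placeholder for the drive in question:

```python
# Query (and optionally set) SCT Error Recovery Control -- the setting that
# TLER/ERC toggles -- via smartmontools. /dev/sda is a placeholder; run as root.
import subprocess

DEVICE = "/dev/sda"

# Show the current read/write recovery timeouts. "Disabled" means the drive
# will retry a bad sector for as long as it likes, which is what Tong Chia
# recommends leaving alone for FlexRAID-style setups.
subprocess.run(["smartctl", "-l", "scterc", DEVICE], check=True)

# For hardware RAID you would instead cap recovery at 7 seconds (70 tenths)
# so the controller doesn't drop the drive; left commented out on purpose.
# subprocess.run(["smartctl", "-l", "scterc,70,70", DEVICE], check=True)
```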
post #52 of 3344
Thread Starter 
Very helpful thank you both
post #53 of 3344
Quote:
Originally Posted by Puwaha View Post

Not so fast. A vdev can be a single drive or a set of drives with some sort of RAID-X underneath it. If you don't care about redundancy, you can add each drive as its own single-drive vdev and then stripe across all of them. That will get you full usage of your mixed-size drives.

The key point is that you throw away redundancy, which is kind of a big deal. If you have a 1:1 backup of your data then sure, eschew redundancy... but for most here, redundancy is the big reason to go with these solutions in the first place. Like I said before, it is not the amount of data you have that determines the solution you employ; it's your risk tolerance in conjunction with your usage requirements that determines what is a good fit. Going ZFS for the sake of going ZFS, because someone said so or because you are lured by the whole "built for enterprise" line, is not a good way to make decisions. ZFS is good for some, but not for all.
post #54 of 3344
No offence, but it sounds like some of you like playing with fire. There is nothing more frustrating than dealing with disk problems, because a lot of it is out of your control. You don't have to use ZFS at all... but if you value your time and sanity it's a good option to explore, especially when you are dealing with consumer-grade hardware and lots of data. It doesn't matter if it's only movie files; it takes hours upon hours of your precious time to re-rip or re-acquire them.

If you have other people in your household who depend on system uptime, then I can only strongly suggest you do the extra little things to make your systems more reliable.
post #55 of 3344
Quote:
Originally Posted by Puwaha View Post

No offence, but it sounds like some of you like playing with fire. There is nothing more frustrating than dealing with disk problems, because a lot of it is out of your control. You don't have to use ZFS at all... but if you value your time and sanity it's a good option to explore, especially when you are dealing with consumer-grade hardware and lots of data. It doesn't matter if it's only movie files; it takes hours upon hours of your precious time to re-rip or re-acquire them.
If you have other people in your household who depend on system uptime, then I can only strongly suggest you do the extra little things to make your systems more reliable.

+1

Video is at most 25% of my box, and most of that is family videos from camcorders and conversions from 16mm film; it is mostly a backup device. If you have kids with laptops, I think you know where I am headed.

I would be curious, though, how many people build servers exclusively for ripped movie storage. Poll, maybe?
post #56 of 3344
Quote:
Originally Posted by Mfusick View Post

I've used a cheapo SATA card in my 20TB FlexRAID server for the last 6 months without any issues so far.
Quote:
Originally Posted by Mfusick View Post

If a few of my movies have "data rot" or whatever and don't play, I'll live. It's not life-or-death stuff here.

The customer is satisfied with his cheapo SATA card. He is not frustrated with disk problems at all. Then why push further? A "Hey, your system with such a cheap SATA card must have disk problems, you should replace it with a professional-grade card"-type argument sounds like professional arrogance to me.

He is going to buy a new card, though, so recommending a better card is welcome, of course.
Edited by renethx - 11/12/12 at 1:55am
post #57 of 3344
Thread Starter 
Quote:
Originally Posted by renethx View Post

The customer is satisfied with his cheapo SATA card. He is not frustrated with disk problems at all. Then why push further? A "Hey, your system with such a cheap SATA card must have disk problems, you should replace it with a professional-grade card"-type argument sounds like professional arrogance to me.
He is going to buy a new card, though, so recommending a better card is welcome, of course.

Renethx,

I appreciate your reasonable approach. I think I'm going to grab one of the cards you recommended when I add my next two 3TB drives. I'll use that card and the motherboard ports for my newest, largest HDDs and use the cheapo card to attach my smaller 2TB green drives.

I'm assuming it doesn't matter whether the HDD or port is SATA3? I've always assumed it doesn't matter for any HDD, since they are slower than the SATA2 spec anyway.
post #58 of 3344
AOC-SAS2LP-MV8 supports both SATA2 and SATA3 HDDs with no problem.
post #59 of 3344
Quote:
Originally Posted by renethx View Post

AOC-SAS2LP-MV8 supports both SATA2 and SATA3 HDDs with no problem.

Do you know if the AOC-SAS2LP-MV8 allows programs to read SMART data?
I've had a couple of the cheap cards in the past and I have never been able to access SMART data in programs like HD Tune. Not sure if it is a driver issue or just the card (the disks are set up as JBOD).
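I can't speak for HD Tune, but on the command-line side smartmontools will often read SMART through an add-in HBA if you ask for SAT passthrough. A sketch, with the caveat that whether it works depends on the specific card and driver, and /dev/sdb is just a placeholder device name:

```python
# Try to read SMART attributes through an HBA using smartmontools' SAT
# passthrough. Whether this works depends on the card and driver; a Windows
# GUI tool like HD Tune may still not see the drive even when smartctl can.
import subprocess

DEVICE = "/dev/sdb"   # placeholder: a disk hanging off the add-in card

for dev_type in ("auto", "sat"):
    print(f"--- smartctl -d {dev_type} ---")
    # -A prints the SMART attribute table (reallocated sectors, pending sectors, ...)
    result = subprocess.run(["smartctl", "-A", "-d", dev_type, DEVICE])
    if result.returncode == 0:
        break   # got a clean read; no need to try the next device type
```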
post #60 of 3344

The proponents of ZFS forget to mention limitations such as not being able to dynamically expand or shrink pools, the near-requirement of a fast SSD as a cache drive, the complexity of managing it, etc. It is no doubt an industrial-strength file system, and it is probably out of place for many home uses unless the user is advanced (which many here are).

I also would base the decision not on the size of the data (2TB or 200TB is immaterial) but on how static it is. If it's mostly media, which is write-once, then a snapshot solution is actually better, as it doesn't stripe data and is much simpler in theory and in practice, as well as being infinitely expandable. For me, simplicity is paramount, both in the technical implementation and in the management interface.

If there is ever an implementation of ZFS that is easy for the end user (by which I mean no command line needed, ever) and which is more flexible, then it might be an alternative. Storage Spaces is dead in the water, and btrfs will only be on Linux, which rules it out. Meanwhile, I can take ten 2TB drives and store their parity on two more drives, giving me reasonable peace of mind as well as disaster recovery: I can simply pull out a drive and use it, and even if there's bit rot only the few affected files are damaged. This is why I hate striping solutions.
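The arithmetic behind that last point, for anyone weighing the two approaches (the drive counts are the poster's example; "usable" ignores filesystem overhead):

```python
# Ten 2 TB data drives plus two parity drives (FlexRAID/SnapRAID-style
# snapshot parity) versus striping all twelve drives together.
DATA_DRIVES, PARITY_DRIVES, DRIVE_TB = 10, 2, 2

usable_tb = DATA_DRIVES * DRIVE_TB                     # parity drives hold no data
total_tb = (DATA_DRIVES + PARITY_DRIVES) * DRIVE_TB

print(f"snapshot parity: {usable_tb} of {total_tb} TB usable "
      f"({usable_tb / total_tb:.0%}), survives any {PARITY_DRIVES} drive failures; "
      "lose more and only the failed drives' files are gone")
print(f"12-drive stripe: {total_tb} TB usable, one failure loses everything")

# Each data drive also remains an ordinary standalone filesystem, which is the
# "pull out a drive and use it" property the poster values.
```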
