Mfusick's How to build an affordable 30TB Flexraid media server: Information Requested.! - Page 6

post #151 of 3374
Thread Starter 
My thought on hard drives, having owned 50 of them, is that they are all basically the same.

I have currently: Samsung, Hitachi, WD, and Seagate.

When I bought each, there was a current reason.

But in hindsight they are all the same.

I can't hear a difference between a green and a Seagate.

People who say "heat" are just idiots. These things don't get any hotter.

The "heat and noise" issue gets beaten to death around here by green-drive propagandizers, but in a server environment it's a non-issue. A good case with good cooling and acoustic properties will take a normal drive without issue.

Perhaps if I had a mini ITX build I'd consider that. But in my case it's not a big deal.

Capacity-to-price is my main concern. Reliability and performance also matter. I'd say the few dollars in electricity a year is last on my list.

It's quite trendy around here to say the buzzwords "heat and noise," but having owned 30 hard drives from various manufacturers, it's more just "heat and noise" spewing from people's mouths than from actual hard drives. I've never had a problem with either. I think only in a very small build, or perhaps a dedicated HTPC in, say, a bedroom, is that going to be a factor.


In a tower desktop or server it's just a non issue.

It's great that this is a HTPC forum, but I think too often people assume every build is an ITX build in the world's smallest case that's going to be on display. The recommendations around here seem to assume that's the only HTPC that exists. That's a pet peeve of mine.

Green drives are good only in a mini dedicated HTPC, IMO. There it makes sense, but only if the Green is priced the same as or cheaper than a Red or normal drive, or actually does save significant energy/heat/noise.

But I think there's a general perception that's wrong: people assume green is cheaper and saves significant heat, noise, and energy. That's not really true. Green drives are old now. A modern model that's not designated as green offers better performance, warranty, reliability, and price while being on par or better on energy too.

Red now sells for basically the same price as Green, with a longer warranty, better speeds, and lower energy use.
Seagate offers a huge benefit in price, plus a boost in speed, but is only on par with energy and warranty/reliability.

There's nothing wrong with a green drive, but I think they are a bit overrated. Twelve months ago it was clearly the best choice, but today that choice is much tougher.
Most of the people recommending them today are still living in 2011.
post #152 of 3374
Thread Starter 
Quote:
Originally Posted by Puwaha View Post

Honestly with that much storage at risk I would not use a snapshot RAID product like FlexRAID. Updates, validations and rebuilds would take days. With that much storage at risk I'd start investigating a ZFS platform.

Where is a good place to learn more about this?
post #153 of 3374
Thread Starter 
Quote:
Originally Posted by renethx View Post

Perhaps
- Supermicro AOC-SAS2LP-MV8 8-port SAS/SATA 6.0Gb/s PCI Express 2.0 x8 card (Marvell 88SE9480 chip), ~$100
is the most straightforward, cheap pure HBA (i.e. non-RAID) card. I have been using it in several systems and it's pretty good. Two of this card + onboard 8 SATA = 24 SATA ports in NORCO RPC-4224 case.
As others mentioned, LSISAS2008 chip-based cards include:
- IBM ServeRAID M1015 8-port SAS/SATA 6.0Gb/s RAID 0,1, 10 and JBOD PCI Express 2.0 x8 card, ~$80 at ebay
- LSI MegaRAID SAS 9240-8i (= M1015 + RAID 5, 50), > $200
- LSI SAS 9211-8i, >$200
The last card is a pure HBA (hence no "MegaRAID") and is recommended for your purpose. The IBM M1015 can be turned into a 9211-8i by flashing the firmware (OK, I have no hands-on experience with this card).


One more question on this too.

I thought someone said I can't virtualize with the first option, so I'd be better off flashing the IBM.

Can someone clarify that for me?

And,

Does Newegg not carry the Supermicro?

http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007607%2050001655&IsNodeId=1&name=SuperMicro

I have 20% off coupon code for SATA cards I can use but I can't find the specific model renethx listed.

Anyone have advice and a link?

Where's a good place to pick one of these up?
Edited by Mfusick - 11/22/12 at 4:41am
post #154 of 3374
Quote:
Originally Posted by Mfusick View Post

Where is a good place to learn more about this ?

I'd check out the [H]ardOCP storage forums. http://hardforum.com/forumdisplay.php?f=29 If you think you got a hard sell on ZFS here, try posting over there. The starter of the server thread and some others moved over there. There's a ton of ZFS builds and some good info. Look for a poster named _Gea. He's developed a GUI front end that makes setting up a system much easier. He's got some good guides, and is a very good source of information. I wouldn't even consider setting up a ZFS system without his Nappit software. I'm not a big fan of the command line.
post #155 of 3374
Quote:
Originally Posted by Mfusick View Post

One more question on this too.
I thought someone said I can't virtualize with the first option, so I'd be better off flashing the IBM.
Can someone clarify that for me?
And,
Does Newegg not carry the super micro ??
http://www.newegg.com/Product/ProductList.aspx?Submit=ENE&N=100007607%2050001655&IsNodeId=1&name=SuperMicro
I have 20% off coupon code for SATA cards I can use but I can't find the specific model renethx listed.
Any have advice and link ?
Where a good place to pick one if these up ?

The IBM card works well with passthrough for ESXi. This is where you give the OS direct access to the hardware, so the OS works directly with the disks rather than through an abstraction layer. That's better for speed and reduces problems. The Supermicro card can work, but you need to hack ESXi. So if you're going to run ESXi, going with the natively supported, faster, and cheaper card makes sense. Generally the LSI cards are better supported across different OSes than cards with a Marvell chipset.
post #156 of 3374
Thread Starter 
The IBM requires me to buy an additional full size pci slot adapter ?

And what cables do I need to connect 8 SATA HDDs?

Where should I buy ?

I see a few IBMs on eBay, but the cheap ones are not full-size slots. My server is in a full ATX tower, so I think I need a full-slot mount.
post #157 of 3374
All cards I mentioned are PCIe 2.0 x8, which works in any PCIe 2.0/3.0 x8/x16 slot that is electrically x8/x16.

These cards use 2 x SFF-8087 mini-SAS connectors (wiki), each of which supports 4 SATA hard disk drives. The connector on the other end depends on your case/backplane/cage. For example, Norco RPC-4224/RPC-4220 uses 6/5 x SFF-8087 (= 24/20 SATA drives). Or you connect to individual HDDs. Choose:

- SFF-8087 to SFF-8087 cable in the first case, e.g. 3ware CBL-SFF8087-05M or Norco C-SFF8087-D
- SFF-8087 to four SATA "forward" breakout cable in the latter case, e.g. 3ware CBL-SFF8087OCF-05M (there is also a "reverse" cable, which connects 4 SATA connectors on an HBA to a mini-SAS connector, e.g. to connect an AOC-SAT-MV8 to an RPC-4224; choose "forward" here)
Edited by renethx - 11/22/12 at 3:17pm
post #158 of 3374
Quote:
Originally Posted by renethx View Post

All cards I mentioned are PCIe 2.0 x8, that works in any PCIe 2.0/3.0 x8/x16 slot that is electrically x8/x16.
These cards use 2 x SFF-8087 mini-SAS connector (wiki), each of which supports 4 SATA hard disk drives. The other end connector depends on your case/back plate/cage. For example, Norco RPC-4224/RPC-4220 uses 6/5 x SFF-8087 (= 24/20 SATA drives). Or you connect to individual HDDs. Choose:
- SFF-8087 to SFF-8087 cable in the first case, e.g. 3ware CBL-SFF8087-05M or Norco C-SFF8087-D
- SFF-8087 to four SATA "forward" breakout cable in the latter case, e.g. 3ware CBL-SFF8087OCF-05M (there is also a "reverse" cable, which connects 4 SATA connectors on an HBA to a mini-SAS connector, e.g. to connect an AOC-SAT-MV8 to an RPC-4224; choose "forward" here)

I think he was talking about the mounting bracket. A lot of the IBM cards are server pulls that come without a bracket or with a low-profile one. It just takes some looking around to find what you need. These cards have become quite popular in certain circles, so the prices have gone up a bit. As always, good information, renethx. I keep thinking how good it is to see you back.
post #159 of 3374
There are plenty of brackets sold separately on eBay.

I sort of feel insulted paying ~$11 or more (shipping etc.) for a bracket "specifically" for these cards that probably got sourced for a nickel or two, but I have lots of spare slot brackets that do the job just fine with $2 of Home Depot parts to make them fit.

For the SAS breakout cables in the US, go to Monoprice: under $10 each, with multiple-unit order discounts.

Also, for fastest booting and best compatibility with glitchy BIOSes from non-server OEMs, you can flash it to IT mode without a ROM. The only caveat is that you cannot boot a drive off it, but IT mode doesn't need the ROM anyway, and this is ideal for things like ZFS. A lot of people boot from a small USB drive or DOM on a chipset port anyway.
post #160 of 3374
Thread Starter 
Quote:
Originally Posted by duff99 View Post

I think he was talking about the mounting bracket. A lot of the IBM cards are server pulls that come without the bracket or a low profile one. It just takes some looking around to find what you need. These cards have become quite popular in certain circles so the prices have gone up a bit. As always good information renethx. I keep thinking how good it is to see you back.

Right.

I have a full-tower ATX build. I need a full-size bracket.

Most seem to be the mini variety, and the ones with the full-size ATX bracket are $20-35 more, which seems crazy.

Can I put on a bracket from a spare WiFi card or something I have? They are all basically the same... right?
post #161 of 3374
Quote:
Originally Posted by Mfusick View Post

The IBM requires me to buy an additional full size pci slot adapter ?
And what cables do I need to connect 8 SATA HDDs?
Where should I buy ?
I see a few IBM on eBay but they cheap ones are not full size slots. My server is in a full ATX tower so I think I need a full slot mount.

You can find them on eBay with brackets.
http://www.ebay.com/itm/IBM-ServeRaid-M1015-SAS-SATA-PCI-e-RAID-Controller-46M0861-Full-Height-Bracket-/121024400570?pt=US_Server_Disk_Controllers_RAID_Cards&hash=item1c2d9dccba

I got my cables from eBay as well. Monoprice can be high on shipping if you don't have a big order.
http://www.ebay.com/itm/NEW-INTEL-LSI-Mini-SAS-SFF-8087-to-4-x-SATA-SAS-Fan-out-Cable-RAID-CABLE-NEW-/300805391030?pt=US_Drive_Cables_dapters&hash=item46096602b6

And a link to crossflashing
http://forums.laptopvideo2go.com/topic/29059-sas2008-lsi92409211-firmware-files/
post #162 of 3374
Thread Starter 
Quote:
Originally Posted by renethx View Post

All cards I mentioned are PCIe 2.0 x8, that works in any PCIe 2.0/3.0 x8/x16 slot that is electrically x8/x16.
These cards use 2 x SFF-8087 mini-SAS connector (wiki), each of which supports 4 SATA hard disk drives. The other end connector depends on your case/back plate/cage. For example, Norco RPC-4224/RPC-4220 uses 6/5 x SFF-8087 (= 24/20 SATA drives). Or you connect to individual HDDs. Choose:
- SFF-8087 to SFF-8087 cable in the first case, e.g. 3ware CBL-SFF8087-05M or Norco C-SFF8087-D
- SFF-8087 to four SATA "forward" breakout cable in the latter case, e.g. 3ware CBL-SFF8087OCF-05M (there is also a "reverse" cable, which connects 4 SATA connectors on an HBA to a mini-SAS connector, e.g. to connect an AOC-SAT-MV8 to an RPC-4224; choose "forward" here)

Awesome post !!!

The clarity is extreme, your wisdom is obvious, and your effort to post is greatly appreciated.

I had these exact questions, and while I would have found the answers in a roundabout way, you make it so clear and easy to understand.

Thank you.

I think I am going to go straight to drives. I don't have a server case. I plan to use my COSMOS II desktop case for my server.

I was looking at something like this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16816115064


If I am right, I would put one end into the SATA card and the other four into four HDDs. Right?

Just double checking.
post #163 of 3374
Thread Starter 

Awesome !!!

I edited my post and came back to see this one. Thanks for this.

I was looking at monoprice:

http://www.monoprice.com/products/search.asp?keyword=SFF-8087

But it looks like your ebay link just makes sense.

The Newegg one I was asking about was $25.
post #164 of 3374
Thread Starter 
Quote:
Originally Posted by Aluminum View Post

There are plenty of brackets sold separately on ebay,.
I sort of feel insulted paying ~$11 or more (shipping etc) for a bracket "specifically" for these cards that probably got sourced for a nickel or two, but I have lots of spare slot brackets that do the job just fine with 2 dollars of home depot parts to fit.
For the SAS breakout cables in the US, go to monoprice, under $10 each and multiple unit order discounts.
Also, for fastest booting and best compatibility with glitchy bios from non-server oems, you can flash it to IT mode without a ROM. The only caveat is you cannot boot a drive off it, but IT mode doesn't need the ROM anyways and this is ideal for things like ZFS. A lot of people are booting a small usb drive or DOM from a chipset port anyways.

That's what I was seeing. It seems like a lot to spend $20 more for the full-slot version over the identical card with a half-height bracket.

When you say flash to IT mode without a ROM, what are you referring to? Flashing the IBM? I will have a dedicated SSD for my OS drive, connected to a motherboard SATA3 port.
post #165 of 3374
I built a WHS2011 server about two years ago. I used a Gigabyte GA-970A-UD3 board (http://www.gigabyte.us/products/product-page.aspx?pid=4396#ov) and an AMD Phenom II x4 3.2GHz processor. Fast, silent and trouble-free. I also use a Highpoint RAID card with 4 TB of storage space (RAID 5). The OS and backups sit on a pair of 1 TB drives independent of the controller. All of the movies and music on my server are stored on the Highpoint controller. Everything I care about is stored on the OS drive and backed up to the other (removable) 1 TB drive. The server also houses a Ceton InfiniTV card whose four HD tuners I distribute over the network. In addition to feeding my two HTPCs, it also handles all of the standard file & print serving duties one would expect of a server. I selected the Gigabyte board partly on Gigabyte's reputation for quality and partly on the fact that it's rugged and uses some beefy mil-std caps. I have not been disappointed at all. My only minor gripe is MKV streaming to one of my HTPCs, but that's most likely a network issue and not related to the server.

There are a lot of fans of flexraid on this forum. I'm not quite as comfortable with software RAID solutions as a lot of the folks here. I am comfortable with my "old school" RAID 5 array, I understand how to recover my data, and I like the idea that I can pull the card and drives out of the server and move them to another machine whenever I feel the urge to do so. Flexraid is intriguing, but I'm not sure I'd trust it with 20+ TB of data.
post #166 of 3374
Thread Starter 
Quote:
Originally Posted by ajkrishock View Post

There are a lot of fans of flexraid on this forum. I'm not quite as comfortable with software RAID solutions as a lot of the folks here. I am comfortable with my "old school" RAID 5 array, I understand how to recover my data, and I like the idea that I can pull the card and drives out of the server and move them to another machine whenever I feel the urge to do so. Flexraid is intriguing, but I'm not sure I'd trust it with 20+ Tb of data.

It's pretty easy to set up flexraid.

The benefit over hardware RAID is you can use one parity drive (or two if needed) for many HDDs.

So you can convert over to flexraid and gain lots of storage.

The issue is you can't break up your RAID array without losing your data.

So you could copy over each RAID array or drive one at a time if you had enough storage, then break up the array afterward and add the drives back in as DRU (data) drives for more storage.
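The one-parity-drive idea is easy to see in a toy sketch. This is just the XOR concept that snapshot-parity tools are built on, not FlexRAID's actual code or on-disk format:

```python
from functools import reduce

def parity(blocks):
    """XOR the same-offset byte from every block into one parity block."""
    return bytes(reduce(lambda a, b: a ^ b, chunk) for chunk in zip(*blocks))

# Three "data drives", one same-sized block each
drives = [b"movie-aa", b"music-bb", b"photo-cc"]
p = parity(drives)

# Lose any single drive and rebuild it from the survivors plus parity
lost = drives.pop(1)
rebuilt = parity(drives + [p])
assert rebuilt == lost  # the missing drive's data comes back
```

Any one lost drive is rebuilt by XORing the survivors with the parity; surviving two simultaneous failures needs a second, different parity calculation, which is why some setups use two parity drives.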
post #167 of 3374
Quote:
Originally Posted by ajkrishock View Post

There are a lot of fans of flexraid on this forum. I'm not quite as comfortable with software RAID solutions as a lot of the folks here. I am comfortable with my "old school" RAID 5 array, I understand how to recover my data, and I like the idea that I can pull the card and drives out of the server and move them to another machine whenever I feel the urge to do so. Flexraid is intriguing, but I'm not sure I'd trust it with 20+ Tb of data.
fwiw, with flexraid, you don't need to pull the card & drives to move them to a different PC, you can just pull any drive or drives and move them to any number of PCs as needed...
post #168 of 3374
Thread Starter 
Quote:
Originally Posted by Somewhatlost View Post

fwiw, with flexraid, you don't need to pull the card & drives to move them to a different PC, you can just pull any drive and or drives and move them to any number of PCs as needed...

http://www.avsforum.com/t/1440779/breaking-up-flexraid-array-anything-need-to-do-to-remove-a-hdd-from-a-flexraid-server/0_40

I am looking to take apart my current server and reinstall the HDDs in my new motherboard and new server.

Take a look and confirm ?
post #169 of 3374
Thread Starter 
Quote:
Originally Posted by Dark_Slayer View Post


There is some info here about the relative HD playback power requirement of current drives Link
An older article on Ars went over this exact same topic 2 years ago, and they even mentioned HTPCs. ars link
However, in regard to the notion that Green Drives are usually cheaper, I think this has changed recently. In my experience with slickdeals, techbargains, etc, I have found the cheapest drives to be 7200 rpm 3 TB Seagate drives
For my own personal use, it has never made much economic sense to pay more for a green drive. I thought that all spun-down drives use the same power, and I was wrong. It looks like only the green drives use less than 1W, but the others don't usually consume more than 2W when spun down. I have a couple of green drives, and I bought them back when that was the cheapest thing to buy.
The reason I might not want to continue buying green drives for my storage drives is that I'm a control freak. I like to be sure a drive is spinning down when I tell it to and staying that way because I tell it to. I believe that WD Green drives are really 7200 rpm hardware, but their IntelliPower management dictates their rotational speed on demand (somewhere between 5400-7200, but usually 5400 I'd guess). To achieve the lower power usage, they use more aggressive APM settings, which spin down whenever the drive determines. There are tools that let you control APM settings yourself, and I'd rather go that route. I really don't mind the extra heat for the 2-3 hours of usage as long as I can control when it spins down. The only drives I've read about people having trouble spinning down effectively with various tools (hdparm, hddscan) were green drives (Samsung Eco and WD)

From your link:
"In terms of cost, using a green hard drive compared to a normal one makes very little difference. Assuming your drive spends 4 hours reading and writing and 20 hours idle per day, switching from the WD Black to Green saves you only 45 kilowatt-hours per year. The national average cost of a kilowatt-hour is 11.93 cents, netting you a whopping $5.38 per year for your sacrifice of 1800 RPM. For comparison, changing one 60-watt lightbulb used 4 hours a day to a 7-watt fluorescent one saves you more, about $9.23 per year."


Ever done the math on a 3TB Seagate, with a lower/better energy profile than the WD Black (which is the worst), at a lower electric rate (like mine)?

It's even more silly...

It would probably be 15 kWh saved, at a $0.07 electric rate (mine is $0.065), for a whopping $1.

I am sure the mileage varies here, with different electric rates and different energy profiles of various drives.

But what I do know is the WD Black is the worst, and electric rates are lower today and over the past 12 months in general, thanks to a crash in natural gas prices last year.

So if you take a better-performing drive than a WD Black and use a lower electric rate (your own), I am sure the savings land somewhere between the two ($1 to $5 per year).

It's such an insignificant factor IMO considering the costs of everything else is so much greater.
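For anyone plugging in their own numbers, the whole calculation is just kWh saved times your electric rate. A quick sketch (the 15 kWh figure is a rough estimate from this post, not a measurement):

```python
def annual_savings(kwh_saved_per_year, rate_per_kwh):
    """Yearly dollars saved for a given kWh reduction and electric rate."""
    return kwh_saved_per_year * rate_per_kwh

# The quoted WD Black -> Green case: 45 kWh/yr at the 11.93 cent national average
print(round(annual_savings(45, 0.1193), 2))  # ~$5.37/yr
# A modern 7200rpm drive vs. Green at a cheap 7 cent rate: roughly 15 kWh/yr
print(round(annual_savings(15, 0.07), 2))    # ~$1.05/yr
```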

The "heat" issue might be different, but I think that is BS too. It's more a buzzword than a reality. Trendy to say, for sure. Not realistic. In a HTPC it matters perhaps if you're cramming it into the world's smallest case without fans.

But in a full desktop or server build with proper fans, cooling, space, and design, "heat" from HDDs is a non-issue. My server is in a climate-controlled environment that never gets warmer than 75 degrees and never colder than 55. That's just how my house is.

Once you throw out the mini builds or the bedroom builds, and accept the fact that a desktop or server is not going to be seen or heard because of where you're putting it, I think "heat and noise" becomes a secondary issue. Secondary to cost per GB, secondary to performance, and most importantly, secondary to reliability.

I think anyone would trade the heat, noise, and energy profile of a Green drive for a real boost in reliability. So those factors can't be that important in full-size builds.

The major issue is that reliability cannot be predicted with any reasonable accuracy, so it usually all boils down to "personal feelings," and those are so subjective we could argue into infinity.
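On the spin-down control mentioned in the quote above: on Linux, hdparm sets APM with `-B` and the standby timer with `-S`, and the `-S` value encoding is non-obvious. Here's a small helper based on the encoding documented in the hdparm man page (`/dev/sdX` below is a placeholder, not a real device):

```python
def hdparm_standby_value(minutes):
    """Map a spin-down timeout in minutes to an `hdparm -S` value.

    Per the hdparm man page: values 1-240 mean (value * 5) seconds
    (up to 20 min); values 241-251 mean (value - 240) * 30 minutes.
    """
    seconds = minutes * 60
    if seconds <= 0:
        return 0                      # 0 disables the standby timer
    if seconds <= 1200:               # up to 20 minutes: 5-second units
        return max(1, seconds // 5)
    half_hours = -(-seconds // 1800)  # round up to 30-minute units
    if half_hours > 11:
        raise ValueError("longest encodable timeout is 5.5 hours")
    return 240 + half_hours

# e.g. spin down after 30 minutes:
#   hdparm -S 241 /dev/sdX    (since hdparm_standby_value(30) == 241)
```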
post #170 of 3374
Flexraid still seems needlessly complex for my uses. I don't generally shy away from complexity, but in the case of Flexraid, I don't see much of an upside, and as downsides, I see new, exciting and creative ways to lose data, high CPU usage and a slew of other related issues that I really don't want to have to think about. Google seems to show equal numbers of fans and detractors of Flexraid, so given that, I think I'll just stick to my reliable old Highpoint RAID controller. The particular controller that I own (RocketRAID 2300) is also expandable (two cards in the same system), so I have lots of room to grow.

Right now, I have four 1 TB drives on the controller with about 800 GB of free space remaining. I have 2.5 TB of space distributed among three other machines. When the time comes, I will replace the 1 TB drives on the controller with 3 TB drives (when prices come down) and distribute the leftover 1 TB drives to the remaining computers. Perhaps that's a more complicated way to do things, but I've been doing it this way for years now, and I've never lost data. I've lost motherboards, processors, memory modules and video cards... but never data.
post #171 of 3374
Thread Starter 
post #172 of 3374
Quote:
Originally Posted by Mfusick View Post

http://www.avsforum.com/t/1440779/breaking-up-flexraid-array-anything-need-to-do-to-remove-a-hdd-from-a-flexraid-server/0_40
I am looking to take apart my current server and reinstall the HDDs in my new motherboard and new server.
Take a look and confirm ?
depends on if you want to keep the original array intact or not...
if you don't care about the original array, just yank the drives...
if you do care, then either yank & replace/rebuild, or remove from flexraid nicely and then yank...

note, I actually use unRAID, so while the overall concept is the same, the actual steps taken may be slightly different...
post #173 of 3374
Thread Starter 
Quote:
Originally Posted by Somewhatlost View Post

depends on if you want to keep the original array intact or not...
if you don't care about the original array, just yank the drives...
if you do care, then either yank & replace/rebuild, or remove from flexraid nicely and then yank...
note, I actually use unRAID, so while the overall concept is the same, the actual steps taken may be slightly different...

I don't care about the array.

I am going to take apart my server and install the drives in a new server, based on a new motherboard and a new SATA card in a new case.

I just want the drives and the data - and install them on the new flexraid server I build.

I am going to be removing all my 1TB and 750GB drives and not re-installing them. They are too old and small for me.

I will copy the data over from them- to the new 3TB drives. I just bought 4 more 3TB drives. I already owned 4 and about 4 more 2TB drives.
post #174 of 3374
Quote:
Originally Posted by Mfusick View Post

I don't care about the array.
I am going to take apart my server and install the drives in a new server- based on new motherboard- and new Sata Card in a new case.
I just want the drives and the data - and install them on the new flexraid server I build.
I am going to be removing all my 1TB and 750GB drives and not re-installing them. They are too old and small for me.
I will copy the data over from them- to the new 3TB drives. I just bought 4 more 3TB drives. I already owned 4 and about 4 more 2TB drives.
hopefully you just bought the 3TB Seagates that have been going for ~$90, that is what I just did... of course I am just replacing some old 750GB & 1TB drives in my array, not building a new one...
but still should be simple... yank & replace, then copy...
post #175 of 3374
Thread Starter 
Quote:
Originally Posted by Somewhatlost View Post

hopefully you just bought the 3TB Seagates that have been going for ~$90, that is what I just did... of course I am just replacing some old 750GB & 1TB drives in my array, not building a new one...
but still should be simple... yank & replace, then copy...

Yup. Yup.

I have already bought 3 of those from Costco as externals and cracked them open. This was months back at $120-$130 each. It was a good deal then.

I saw it pop up on Amazon as an external and bought another for $89 on Monday.

Then on Wednesday Newegg was doing the $89 internals, so I bought 3 of them. 3-unit limit.

I already have a few 2TB WD Greens, a couple 3TB WD Greens, a Hitachi 2TB, etc. in my server now. I have been using one of those Seagates as my parity drive for about 6 months.

I used a couple of older drives when I built my server because I already had them: a 750GB Samsung and a couple of 1TB Seagates and Samsungs. I just don't want to sacrifice the SATA ports in my server for such small drives anymore, and those drives are pretty old now. I feel more secure moving the data onto a 3TB and keeping the rest of the SATA ports free. I can probably get 50% of my money back if I sell them off on eBay. They still work great. I have had luck doing this.

My new server will be:


6 - 3TB Seagates (I think I am going to leave the external as it is but I also have a 3TB WD green External USB3.0. Not sure)
2 - 3TB WD GREEN
2 - 2TB WD GREEN
1 - 2TB Hitachi 7200rpm/Sata3/6g

25TB or so...

I bought a Supermicro AOC-SAS2LP-MV8 8-port SAS/SATA 6.0Gb/s PCI Express 2.0 x8 card, and I already own a 2-port SATA3 Highpoint PCI-X card. My Z77 ASRock Extreme4 motherboard has 8 SATA ports; I will only use one of the SATA3 ports for the SSD OS drive.

So I have 8+8+2 = 18 SATA ports, minus the OS drive = 17 SATA ports. I should be good for a bit if I don't use my 4 smaller drives and just copy over the info. My case has two hot-swap bays, so that should be easy. I'll just reformat the old drives when done and sell them off.
post #176 of 3374
Quote:
Originally Posted by Mfusick View Post

Where is a good place to learn more about this ?

Hmmm... too many places to start. HardForums deal with ZFS topics a lot: http://hardforum.com/forumdisplay.php?f=29

Really, just google any topic you want with ZFS... it's probably been asked before somewhere.
post #177 of 3374
Here is a link on building a simple ZFS server for home use.
http://constantin.glez.de/blog/2010/02/seven-useful-opensolaris-zfs-home-server-tips
post #178 of 3374
Thread Starter 
Thanks !

I think I am going to stick with flexraid for now- but I'd like to know what's next. I seem to get deeper and deeper the more I learn.
post #179 of 3374
A wise man once said...

"Perfect is the enemy of good"

...or...

"If it ain't broke don't fix it"

Words to live by when dealing with HTPC in many different areas.
post #180 of 3374
I am sticking with flexraid too for my first setup. Once I've learned the rules of managing the RAID server, I'll move on to playing with it.