AVS › AVS Forum › Video Components › Home Theater Computers › Flexraid help deciding

Flexraid help deciding - Page 5

post #121 of 164
Thread Starter 
11-12 hours, can't remember the exact minutes. The elapsed-time counter stops upon completion
post #122 of 164
Thread Starter 
Quote:
Originally Posted by amarshonarbangla View Post

And can you guys link to which label maker you are using?

Wound up using some labels we keep in the supply room at the office, usually for file folders. Put one over the existing HDD label and a matching one on the SATA cable.

Quote:
Originally Posted by Mfusick View Post

Nice. Old school like mine, or new school?

Not very new, circa 2007

I have seven 5.25" expansion bays, with a tool-less rack holding five 3.5" HDDs.

I put in a Cooler Master 4-in-3 expansion cage (four HDDs in three 5.25" bays) to allow 9 HDDs. Any more storage will require another cage, but I can fit 4 more HDDs since I still have three 5.25" bays available, and the cage is only $20.
post #123 of 164
Old gaming rig ?
post #124 of 164
Thread Starter 
Yep
post #125 of 164
You still game ?
post #126 of 164
Yes
post #127 of 164
What should I set the WHS2011 power options to?

Should I set HDD turn-off to "Never", or have them turn off after, say, 30 minutes?

What are the advantages and disadvantages?

Would turning them off save money on electricity, but cause a spin-up delay when first accessing the server?
post #128 of 164
Thread Starter 
Not really gaming since getting married. I spent way too much time on my k/d ratio anyway

I have mine set to turn off after 5 minutes.

FlexRAID will not spin them down on its own, but it will only spin up the drives you access rather than the whole array.

On average about 5 kWh savings a month per drive

Not a great idea for drives you access often
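For a sense of where a figure like ~5 kWh/month per drive comes from, here's a back-of-the-envelope sketch. The ~7 W idle and ~0.6 W standby draws are assumed typical values for a 3.5" drive, not measurements:

```python
# Rough estimate of electricity saved by spinning down an idle drive.
# Assumed (hypothetical, typical 3.5" drive): ~7 W idle, ~0.6 W in standby.
IDLE_W = 7.0
STANDBY_W = 0.6
HOURS_PER_MONTH = 24 * 30

def monthly_kwh_saved(idle_w=IDLE_W, standby_w=STANDBY_W, hours=HOURS_PER_MONTH):
    # Energy (kWh) saved by keeping the drive in standby instead of idle.
    return (idle_w - standby_w) * hours / 1000.0

print(round(monthly_kwh_saved(), 1))  # prints 4.6
```

So roughly 4-5 kWh per drive per month, consistent with the ~5 kWh figure above; actual savings depend on the specific drive and how often it gets woken back up.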
post #129 of 164
Isn't constantly spinning up and spinning down drives bad for them in the long run?
post #130 of 164
Thread Starter 
Most drives already perform head parking through their firmware nearly as often as the settings I described
post #131 of 164
I'm going to test 30 min setting.

If I notice a problem I'll leave them always on
post #132 of 164
Quote:
Originally Posted by amarshonarbangla View Post

Isn't constantly spinning up and spinning down drives bad for them in the long run?

Depends on your use patterns really.
post #133 of 164
Wow, this thread should be stickied. Lots of good info - gives me some idea of what I want to set up when everything finishes rolling in. If you have 5 - 2TB drives and 5 - 3TB drives and you want 2 parity drives...can you do a 2 and a 3, or do both parity drives have to be 3?
post #134 of 164
You need to use your largest drive
post #135 of 164
Doh. I figured, but I was hoping.

Thanks
post #136 of 164
Your parity drive (or drives) must be at least as large as your largest data drive, so it can hold parity information for everything that might be on that drive; if it were smaller, it simply couldn't. So always use a parity drive at least equal in size to your largest data drive. I use a 3TB since the largest drives I own are 3TB. At some point I'll probably upgrade to 4TB. Swapping out a parity drive is not a hard thing to do.
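The sizing rule above can be expressed as a one-line check. A hypothetical sketch in Python (drive sizes in TB, using the 2 TB/3 TB mix mentioned earlier in the thread):

```python
# Snapshot-parity sizing rule: every parity drive must be at least as
# large as the largest data drive in the array.
def parity_ok(data_drives, parity_drives):
    return all(p >= max(data_drives) for p in parity_drives)

data = [2, 2, 2, 2, 2, 3, 3, 3, 3, 3]   # five 2 TB + five 3 TB data drives
print(parity_ok(data, [3, 3]))  # True: both parity drives match the largest
print(parity_ok(data, [2, 3]))  # False: a 2 TB parity drive is too small
```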
post #137 of 164
I just don't like the idea of losing 6TB of storage (out of 25... I know, I know). It is what it is. I'd rather go overkill than have to re-rip everything.
post #138 of 164
What do you mean ?
post #139 of 164
Wait, am I retarded?

This would work like raid 6 wouldn't it?

Nevermind. I am retarded. Not sure what I was thinking.
post #140 of 164
I'll try to help you. I'm just confused too.
post #141 of 164
No no no. I was (for whatever reason) thinking I lost the storage on those drives like a true backup, not like a parity drive. So in my head I was doing RAID1-style math, but for 2 drives instead of the whole array, and losing 6TB.

When you asked your initial question, it kicked me into realizing it works like RAID6 and that I wasn't really losing that much space.
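The math behind that realization, as a quick sketch (drive counts taken from the earlier post: five 2 TB and five 3 TB drives, with two of the 3 TB drives assumed promoted to parity):

```python
# Usable space with two dedicated parity drives (RAID6-like math, not RAID1):
# you only "lose" the capacity of the parity drives, not a mirror of the array.
data = [2] * 5 + [3] * 3           # remaining data drives after promotion
parity = [3, 3]                    # two 3 TB parity drives

usable = sum(data)                 # space available for files
total = sum(data) + sum(parity)    # raw capacity of all ten drives
print(total, usable)               # prints: 25 19
```

19 TB usable out of 25 TB raw, i.e. only the 6 TB of parity is given up.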
post #142 of 164
Thread Starter 
Has anyone copied an existing FlexRAID config database over to a fresh OS install (e.g. after swapping SSDs, or swapping in an SSD, etc.)?
post #143 of 164
It's easier to start over
post #144 of 164
Great Thread.

After 13 years with an HTPC and storage scattered across what is now 6 different computers, this thread has spurred me to create a central storage location via Flexraid. I do have a few questions though that I would like some help with.

1) Currently many of the hard drives have a folder like "Movies" on them with a bunch of ripped movies inside. If I just put those drives into a pool on the new server, wouldn't that cause a problem? I'd lose the drive designations and end up with multiple folders of the same name in the pool, which I assume is not allowed. Works fine for separate disks, but I'm assuming not so well when pooled. Advice?

2) Some of these drives have been formatted with 4k clusters and some with 64k clusters. Will this represent a problem with parity? Seems like it might. Do I need to reformat so that they all have the same size clusters and if so, what's the better size?

3) Although the "new" server has a really nice and new Rosewill Gold 650w power supply (with 8 sata connectors) the mobo only has 6 sata ports. That means I will need a couple ports on a PCI-E card. How about this cheap Rosewill RC-211 $17.99 from newegg. http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008

4) The "new" server also does not have enough 3.5" bays (only 4) for the hard drives. I figure I could use a drive cage so that I could fit 4 more drives into 5.25 bays. How about this cheap COOLER MASTER STB-3T4-E3-GP $24.99 from newegg. http://www.newegg.com/Product/Product.aspx?Item=N82E16817993002 Not only holds the drives but puts a fan on them.

Thanks for any help.

Bob
post #145 of 164
Quote:
Originally Posted by rbmcgee View Post

Great Thread.

After 13 years with an HTPC and storage scattered across what is now 6 different computers, this thread has spurred me to create a central storage location via Flexraid. I do have a few questions though that I would like some help with.

1) Currently many of the hard drives have a folder like "Movies" on them with a bunch of ripped movies inside. If I just put those drives into a pool on the new server, wouldn't that cause a problem? I'd lose the drive designations and end up with multiple folders of the same name in the pool, which I assume is not allowed. Works fine for separate disks, but I'm assuming not so well when pooled. Advice?

2) Some of these drives have been formatted with 4k clusters and some with 64k clusters. Will this represent a problem with parity? Seems like it might. Do I need to reformat so that they all have the same size clusters and if so, what's the better size?

3) Although the "new" server has a really nice and new Rosewill Gold 650w power supply (with 8 sata connectors) the mobo only has 6 sata ports. That means I will need a couple ports on a PCI-E card. How about this cheap Rosewill RC-211 $17.99 from newegg. http://www.newegg.com/Product/Product.aspx?Item=N82E16816132008

4) The "new" server also does not have enough 3.5" bays (only 4) for the hard drives. I figure I could use a drive cage so that I could fit 4 more drives into 5.25 bays. How about this cheap COOLER MASTER STB-3T4-E3-GP $24.99 from newegg. http://www.newegg.com/Product/Product.aspx?Item=N82E16817993002 Not only holds the drives but puts a fan on them.

Thanks for any help.

Bob

1) The duplicate folders will be combined into one AFAIK.

2) Shouldn't cause problems but don't quote me on it. Post this question on the official FlexRAID forum to get a more competent answer.

3) That card is out of stock and doesn't have any reviews; find one with mostly positive reviews. I saw one around here that was pretty well received and will try to find it for you. For future expansion, though, I'd recommend just getting an IBM M1015 HBA. It has two SAS ports which split out to 8 SATA ports via a couple of breakout cables. It has to be cross-flashed to IT-mode firmware before you can put it to use. It can be had for under $100 on eBay. I say invest in that to make expandability easier in the future. Get a new case too ^_^

4) That will work just fine. Here's another option, although it's out of stock currently.

Here's another excellent thread about FlexRAID, M1015 and everything server related. There are some good resources there.
http://www.avsforum.com/t/1438027/planning-to-rebuild-my-20tb-whs-flexraid-server-information-requested
Edited by amarshonarbangla - 3/12/13 at 3:47pm
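Point 1 above (same-named folders being combined) can be pictured with a tiny sketch. This is purely illustrative Python, not FlexRAID's actual behavior or code; the drive letters and filenames are made up:

```python
# Hypothetical illustration of a storage pool's merged view: top-level
# folders with the same name on different member drives appear as one
# combined folder in the pool.
drives = {
    "D:": {"Movies": ["Alien.mkv", "Blade Runner.mkv"]},
    "E:": {"Movies": ["Casablanca.mkv"], "TV": ["Lost S01E01.mkv"]},
}

def pooled_view(drives):
    pool = {}
    for contents in drives.values():
        for folder, files in contents.items():
            pool.setdefault(folder, []).extend(files)
    return pool

view = pooled_view(drives)
print(sorted(view["Movies"]))  # all three movies under a single Movies folder
```

The files themselves stay on their original disks; only the presented directory tree is merged.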
post #146 of 164
Thanks for the info.

After I thought about, it makes sense that it will combine the folders.

I will check on the cluster size issue

I may consider a new mobo (8 ports), but I really like the case.
post #147 of 164
Might be cheaper just to get an M1015 used off eBay. I paid $117 including shipping and the bracket, plus another $20 for LSI breakout cables. Added 8 SATA 6Gb/s ports and could have added more if I wanted to run a SAS expander.
post #148 of 164
Quote:
Originally Posted by goros View Post

Might be cheaper just to get an M1015 used off eBay. I paid $117 including shipping and the bracket, plus another $20 for LSI breakout cables. Added 8 SATA 6Gb/s ports and could have added more if I wanted to run a SAS expander.

+1


I paid as low as $80 for this.

There's a guide on how to do it in my server guide. You can take the IBM card and make it a $250 LSI card by cross-flashing it.
post #149 of 164
This has been an excellent thread, thanks very much to all of you who have provided both the challenging questions and the answers. It deserves a bump to the top again.

While data protection has been discussed thoroughly, and there is some important knowledge regarding disk pooling, I wonder what the benefits of pooling are outside of protection from running out of space. It would seem to me that at least two drives would be involved anytime an IO request is initiated, which is not necessarily desired. I'm sure I am missing something.

I was of the same mindset as Dark_Slayer - I'm just too stubborn about file management to give in to the possibility of FlexRAID someday going away and leaving files all over the place. But I've gotten over it, especially if pooling provides some performance or environmental benefit. Otherwise I may just purchase the RAID on its own and forgo pooling.

Thank you in advance.
post #150 of 164
Quote:
Originally Posted by Nethawk View Post

While data protection has been discussed thoroughly, and there is some important knowledge regarding disk pooling, I wonder what the benefits of pooling are outside of protection from running out of space. It would seem to me that at least two drives would be involved anytime an IO request is initiated, which is not necessarily desired. I'm sure I am missing something.

Only the disks you are reading from will be active. If you are reading files from two different drives, only then would two drives be involved. If you are reading files from one drive, only that drive is involved.