
Guide To Building A Media Storage Server - Page 197

post #5881 of 7891
Check out the Corsair forums. For my 520HX, they have diagrams showing how current is divided among the various modular plugs. They probably have one for the 750HX, too.
post #5882 of 7891
Quote:
Originally Posted by AVTechMan View Post

1. What mobo would be ideal for server use in the 4020 case? At least one that has the slots needed to connect the 8-port SATA cards? Would a high-end CPU be necessary, and would DDR2 RAM be acceptable?

As for RAID, this can be pretty complicated depending on what type of RAID and whether it's hardware- or software-based. I don't think I will need RAID at this time, at least until I do further research on whether it will be beneficial, but it could be a viable option in the future if needed. I know a good separate backup solution is the best way to preserve my data.

Basically what I want to do with this server is store all of my DVD/BD movies, AVI files, audio files, and other media content so that I can simply select what movie I would like to watch on my big screen, or have continuous audio playing unattended (like your own radio station at home, minus the DJs). I wouldn't mind having the ability to stream different movies to different TVs in the future (I need to learn how to do that as well).

I will keep reading this thread and work through the many pages, and hopefully I will find the answers I need. I don't have the cash yet to begin, but I can do my research in the meantime. The problem is that by the time I have the money, the parts are often unavailable or discontinued; that's always the risk with waiting.

Thanks for any help!

I would recommend taking a look at this thread (linky linky), I used renethx's specs to build a WHS a while back, and I've been very pleased.

In case the link doesn't work, this is in the "how to build an HTPC" thread that is stickied. There is a section with recommended specs for a few different server builds, and it's updated regularly. This may be just what you're looking for.
post #5883 of 7891
Quote:
Originally Posted by the_beast666 View Post

2. You can definitely run a CPU such as that passive. In fact, servers generally run with the CPUs passive anyway - although they do have to use fancy ducting and BIG fans to do so. The reason is that you remove a single point of failure - if your CPU fan dies, your CPU is likely to be toast. But with hurricane cooling and passive heatsinks you can lose 1 or more fans without compromising the processor cooling. If your airflow is a little lower, just make sure you get a large but relatively open-finned heatsink (Ninja or similar) and you will be fine.

so if I were to get a fan/heatsink combo like this one

http://www.newegg.com/Product/Produc...82E16835233014

I can just not attach the fan and use the case fan airflow instead?
post #5884 of 7891
Thank you for the useful information.

I am very concerned about losing my existing data during migration and do not know how I can back it up as it is over 15TB.

What is the likelihood of losing the data during a migration, and am I better off just creating a second unit with the 8 drives and using 2 units? This is not ideal but is possibly much safer?

Quote:
Originally Posted by MiBz View Post

For whatever reason 3ware refers to OCE as RLM (raid level migration).

Usually with most controllers RLM refers to changing the RAID level (e.g. RAID 5 to RAID 6).

But with 3ware they bunch both expansion and migration into the same category under RLM.

Here's some RLM documentation on the 9650SE that you'll find helpful. Follow the instructions for Expanding Unit Capacity. Be sure to backup any critical data to an external disk just in case.

Once that's all done, expanding the volume partition in Win2008 is quite easy and intuitive within 2008's Disk Management GUI.
post #5885 of 7891
Quote:
Originally Posted by PaulKohler View Post

I use Cavediver's modified fan board and it is great. Fits well, much quieter, and easy to install.

Quote:
Originally Posted by Mark Guebert View Post

Ditto that, I have his fanboard also, and the noise goes from intolerable to merely annoying. But I don't have the quietest 120mm fans either. What I can say is that it is quiet enough and the temps are very good so far. Under load none of my drives exceed 29°C.

Quote:
Originally Posted by gsr View Post

It is a jump, but it's worth it to get the more flexible cables - one of those "you get what you pay for" kind of things.

Unless the noise of the stock fan configuration doesn't bother you, absolutely yes. My Norco system lives in the basement so it doesn't need to be silent, but the noise from the stock fan configuration could be heard on the 1st floor of the house and could be heard over the noise the furnace makes. I didn't go with the quietest fan option (intentionally) but the fans I went with move the same amount of air as the stock fans but are much quieter to the point where the sound is fine for my needs. If I were setting this up in my family room, I would have gone with quieter fans.

Thanks all for your replies... The server is in the second bedroom of my single floor apartment that I use as an office, and about 30 feet from where I sleep. So it looks like I'll be spending some more money on this rig.
post #5886 of 7891
Quote:
Originally Posted by Farris View Post

Thank you for the useful information.

I am very concerned about losing my existing data during migration and do not know how I can back it up as it is over 15TB.

What is the likelihood of losing the data during a migration, and am I better off just creating a second unit with the 8 drives and using 2 units? This is not ideal but is possibly much safer?

If you take it slow and follow the instructions carefully, it's very likely everything will be fine. Just take your time. It's always best to back up any critical data you cannot replace, like family pics, documents, and so on.

Before you even consider adding new drives to the existing array, connect the new drive(s) to another workstation or the server's motherboard SATA port and run a full scan.

In Win2008/Win7, just select the new disk in My Computer, right-click and select Properties, then the Tools tab, and select Error Checking. Select both check boxes and run. Since it's not the OS disk, it can run the verification online.

If the drive has mechanical issues or bad sectors, you'll see it here. Further, after the check is done, run a few benchmarks and then check the SMART data.

The single largest contributor to a failed expansion is adding new drives that have pre-existing issues. You essentially want to make sure you're expanding with healthy drives.

A RAID 6 expansion will take a long time.. be patient and let your RAID controller do its thing.

Good luck !
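To put "a long time" in rough numbers, here's a back-of-envelope sketch. The re-stripe throughput figures below are assumptions for illustration (sustained re-stripe speed on a live controller is commonly reported in the 20-60 MB/s range), not specs for any particular card:

```python
# Rough estimate of RAID-6 expansion time: the controller has to
# re-stripe the full used capacity of the array across the new disks.

def expansion_hours(used_tb, restripe_mb_per_s):
    """Hours to re-stripe used_tb terabytes at restripe_mb_per_s MB/s."""
    used_mb = used_tb * 1_000_000  # decimal TB, as drives are marketed
    return used_mb / restripe_mb_per_s / 3600

for speed in (20, 40, 60):
    print(f"15 TB at {speed} MB/s: ~{expansion_hours(15, speed):.0f} hours")
```

At those assumed speeds, a 15TB array works out to roughly 3 to 9 days, which lines up with the multi-day expansions reported in this thread.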
post #5887 of 7891
Quote:
Originally Posted by Farris View Post

Thank you for the useful information.

I am very concerned about losing my existing data during migration and do not know how I can back it up as it is over 15TB.

What is the likelyhood of losing the data during a migration and am I better off just creating a second unit with the 8 drives and using 2 units? This is not ideal but is possibly much safer.?

3Ware's array expansion is very robust. I would back up my irreplaceable data like documents and pictures and not worry about my media rips and recorded TV. However, make sure to do an "Array Verify" before you start the expansion. You should have it scheduled to automatically verify weekly already.

However, in your situation, I would probably take the opportunity to go up in disk size. I assume that if you have room for 8 more disks in a Norco case and you have 15TB data, that you are currently running 12 1.5TB disks in RAID6. I would add 8 2TB disks in RAID6 (12TB usable) and later continue to expand that array and decommission the array of 1.5TB disks. I'm also a little too paranoid to put 20 disks in a single RAID6 array, but it is often done for home use.

Edit: Of course, MiBz beat me to it. Another way to exercise the drives to make sure they're OK for use in an expansion is to use them to create a stand-alone array. If they don't throw any errors in an array build, they will probably be OK for an expansion too.

- Mike
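The capacity figures above follow from simple RAID-6 arithmetic (two disks' worth of capacity goes to parity, regardless of array size); a quick sketch:

```python
def raid6_usable_tb(n_disks, disk_tb):
    """Usable capacity of a RAID-6 array: two disks' worth goes to parity."""
    return (n_disks - 2) * disk_tb

print(raid6_usable_tb(12, 1.5))  # assumed existing array: 15.0 TB usable
print(raid6_usable_tb(8, 2))     # proposed new array: 12 TB usable
```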
post #5888 of 7891
Quote:
Originally Posted by MiBz View Post


A long time? you don't say. One of my arrays crashed 3 days back. 8x1.5TB RAID-5. It's still rebuilding....
post #5889 of 7891
Quote:
Originally Posted by kapone View Post

A long time? you don't say. One of my arrays crashed 3 days back. 8x1.5TB RAID-5. It's still rebuilding....

I'm sorry man, I tried to warn you about those pesky Seagate drives!

But honestly, in most cases there is no need for RAID arrays for home media storage.

The best system that I set up for a friend a few months back is also the simplest. WHS with a bunch of 2TB drives. Duplication on for everything except movies.

I ordered him an external eSATA cradle which connects to his desktop PC that he also rips from. Each time he rips a movie he copies it to WHS and drops a copy to the external disk until it's full. He labels it Movies Vol X and then puts it in a safe place offsite (his parents').

Ripping and cataloging is painfully slow and I know I'd never want to do it over again.

With this setup he has fewer disks spinning, demigrator never has to create shadow copies of his movies since dup is off, and he has offsite copies that are safe in case of fire, theft... simple and effective.

But with most things in life, we all have different needs and expectations.
post #5890 of 7891
Thanks

I have had 2 volume checks going at the same time, but not 2 array expansions..

Although I do not really need it -- I will probably buy the 2nd E1 enclosure and cascade it off the first in August (setting aside budget cash for the project now)

Has anyone cascaded E1s from one 8088 port on an HBA yet?

thanks



Quote:
Originally Posted by odditory View Post

yeah, rebooting during an expansion/migration is a pretty basic feature - not all cards handle it as they're supposed to, though, but I've never had a problem with the Areca 1280ML or 1680. The card is doing write-through as opposed to write-back, meaning it's not caching writes and thus isn't endangered by a reboot.

for the record you don't have to wait for one expansion to complete before starting another one -- I've had up to three going at the same time for multiple arrays hosted by the same card, and it actually finishes much quicker than if I do them sequentially. You might think the RAID chip would be pegged during array rebuilds but it's nowhere near that - at least on an Areca 1680 -- the write speed of the drives tends to be the bigger bottleneck.

same goes for the "check volume" operation on the Areca, a.k.a. array scrub -- it can run simultaneously for different arrays on the same card and it's not a big deal.
post #5891 of 7891
yep.. the array expansion I described a few posts back took a bit over three days. I had my second copy of Blu-ray rips on that volume, about 4.5TB, and it all came through fine -- including the reboot test...

The original set of data is on a 3ware array in an older Windows 2003 server that has been through about 3 major expansions/rebuilds over the years, moving from 400GB drives to 500GB drives and then to the 1TB drives it's currently running.

Luckily, at home I have not lost any production-level data in an array expansion.

The only glitch I have had was the first test arrays on the new areca 1680 and E1 mentioned -- see post some time back

so while I pucker a little on these expansions - so far having a good HBA has not let me down..

Steve

Quote:
Originally Posted by kapone View Post

A long time? you don't say. One of my arrays crashed 3 days back. 8x1.5TB RAID-5. It's still rebuilding....
post #5892 of 7891
wah....I don't have good HBAs........

I'm still using the Ciprico 5252-08s.... No XOR processor on them. Everything's in software. And now, I may switch again. To Supermicro pci-e 8 port adapters and Veritas Storage Foundation.
post #5893 of 7891
Quote:
Originally Posted by kapone View Post

wah....I don't have good HBAs........

I'm still using the Ciprico 5252-08s.... No XOR processor on them. Everything's in software. And now, I may switch again. To Supermicro pci-e 8 port adapters and Veritas Storage Foundation.

Don't do it ! Take the Lotus out for a spin with the windows down and clear your mind before you make any rash decisions

Early spring this year. I just took my Aston out again today for the first time this season
post #5894 of 7891
Quote:
Originally Posted by the_beast666 View Post

That depends on the psu and how everything is connected I'm afraid. IIRC each set of wires out of the psu should be limited to something like 18A - which would allow around 6-8 HDDs to spin up without issue. However many manufacturers don't stick to the specs. Some don't even limit the current on each set of wires, and just limit the overall output for each voltage. It is possible you could run everything easily off a single set of cables from your psu (using adapters and staggered spin-up), but I wouldn't recommend it...

I'm wondering if this SM PSU I have here will be enough for the Core 2 Mobile and 20 5400RPM 2TB drives.

Any guesses?:
3.3V - 30A
5V - 30A
3.3V & 5V combined max of 160W
12V1 - 15A
12V2 - 15A
12V3 - 18A
12V Combined Max 34A

Doesn't say which rail is attached to what. Very simple wiring: 24+8+4 and 3 dual Molex.
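A rough sanity check on that label is easy to sketch. The per-drive spin-up figure below is an assumption (roughly typical of 3.5" 5400RPM drives, not a datasheet value), as is the platform draw:

```python
# Rough 12V budget check for spinning up 20 drives on the PSU quoted above.
DRIVES = 20
SPINUP_A_PER_DRIVE = 2.0   # assumed peak 12V draw at spin-up; check datasheets
CPU_AND_BOARD_A = 5.0      # assumed 12V draw for a Core 2 Mobile platform
RAIL_COMBINED_MAX_A = 34   # from the PSU label

peak_a = DRIVES * SPINUP_A_PER_DRIVE + CPU_AND_BOARD_A
print(f"Simultaneous spin-up: ~{peak_a:.0f} A against a {RAIL_COMBINED_MAX_A} A limit")
if peak_a > RAIL_COMBINED_MAX_A:
    print("Over budget: staggered spin-up (or a bigger PSU) is needed")
```

Under these assumptions, all 20 drives spinning up at once would well exceed the 34A combined limit, so staggered spin-up would be essential with this PSU.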
post #5895 of 7891
Quote:
Originally Posted by MiBz View Post

Don't do it ! Take the Lotus out for a spin with the windows down and clear your mind before you make any rash decisions

Early spring this year. I just took my Aston out again today for the first time this season

lol.. nah, it's not a rash decision. I have been evaluating VSF for a while now, and I'm liking it. It's basically the SAME volume manager that's in Windows (hell, the built-in volume manager in Windows WAS written by Veritas...), except it's not crippled. It can do OCE, ORLM, etc. What it doesn't have is a global/dedicated spare mechanism, but I'm still investigating that. The best part about it is that you're not tied to ANY particular HBA. As long as the disk is addressable, it can use it.
post #5896 of 7891
Thanks for all the help and advice. I checked the new drives and all seems ok... I followed the instructions in the manual and the migration has started.

Is it OK to access the data during the migration (adding 8 new drives), or does it increase the risk of losing data or corrupting files?

Quote:
Originally Posted by MiBz View Post

If you take it slow and follow the instructions carefully, it's very likely everything will be fine. Just take your time. It's always best to back up any critical data you cannot replace, like family pics, documents, and so on.

Before you even consider adding new drives to the existing array, connect the new drive(s) to another workstation or the server's motherboard SATA port and run a full scan.

In Win2008/Win7, just select the new disk in My Computer, right-click and select Properties, then the Tools tab, and select Error Checking. Select both check boxes and run. Since it's not the OS disk, it can run the verification online.

If the drive has mechanical issues or bad sectors, you'll see it here. Further, after the check is done, run a few benchmarks and then check the SMART data.

The single largest contributor to a failed expansion is adding new drives that have pre-existing issues. You essentially want to make sure you're expanding with healthy drives.

A RAID 6 expansion will take a long time.. be patient and let your RAID controller do its thing.

Good luck !
post #5897 of 7891
Quote:
Originally Posted by Farris View Post

Thanks for all the help and advice. I checked the new drives and all seems ok... I followed the instructions in the manual and the migration has started.

Is it OK to access the data during the migration (adding 8 new drives), or does it increase the risk of losing data or corrupting files?

It should be fine to use the array - but anything you do will increase the time the rebuild takes and cause the drives to work that little bit harder - which gives a little bit more chance of failure. The risk is still small, especially if all you do is read data off it - but I would still try to minimise array usage as far as possible.
post #5898 of 7891
Quote:
Originally Posted by Farris View Post

Thanks for all the help and advice. I checked the new drives and all seems ok... I followed the instructions in the manual and the migration has started.

Is it OK to access the data during the migration (adding 8 new drives), or does it increase the risk of losing data or corrupting files?

Glad to hear you managed to get the expansion going.
Yes, don't be worried about accessing the array during OCE expansion. It might extend the expansion time somewhat, because any changes and new data will also need to be re-striped to include the new disks.

I'm actually in the process of adding another 2TB drive to a RAID 6 video array. I started OCE yesterday morning. Last night I copied 2 Blu-ray rips over to the array while it was expanding, and then we watched one of them after dinner, again while the expansion was in progress. Good RAID cards can handle this pretty easily.

The only time you really want to avoid I/O to the array is during a drive-failure rebuild, in which case you want the rebuild to complete as quickly as possible to retain data safety.

Have a great weekend
post #5899 of 7891
Quote:
Originally Posted by Farris View Post

Thanks for all the help and advice. I checked the new drives and all seems ok... I followed the instructions in the manual and the migration has started.

Is it OK to access the data during the migration (adding 8 new drives), or does it increase the risk of losing data or corrupting files?

Farris, I'd be interested to know how long this takes with your 3ware card. What drives are you using, 1.5TB? How much data did you have on your 12 drives before adding the other 8? Curious to hear how fast the 3ware does OCE w/RAID6.
post #5900 of 7891
Anyone running Freenas?

I'm just testing, but with a single HD and a gigabit Intel NIC (no jumbo frames) on client and server I only get 44.5MByte/sec writes. My old P3 Win2K server (albeit with a RAID 5 array) manages 60-70MBytes/sec. It's not so much the throughput that bothers me but the 50% CPU usage on the FreeNAS box; is this normal?
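For comparison, the practical ceiling of a single gigabit link is easy to work out. The framing-overhead percentage below is a rough assumption, not a measured figure:

```python
# Practical throughput ceiling of one gigabit Ethernet link.
raw_mb_per_s = 1_000_000_000 / 8 / 1_000_000   # 125 MB/s raw line rate
FRAMING_OVERHEAD = 0.06                        # assumed ~6% Ethernet/IP/TCP framing
practical = raw_mb_per_s * (1 - FRAMING_OVERHEAD)
print(f"Practical ceiling: ~{practical:.0f} MB/s")
```

Since 44.5 MB/s sits well below that ceiling, the bottleneck in this case is likely the host (Samba/CPU on the FreeNAS box) rather than the network itself, which matches the 50% CPU usage observed.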
post #5901 of 7891
Thread Starter 
FreeNAS is something I've gone back and forth with, the allure of ZFS and all (save for the fact you can't OCE a RAID-Z volume - deal breaker), but ultimately I always give up in favor of keeping all my disks and arrays NTFS. The last thing I'd want is to be at the mercy of a Linux or FreeBSD learning curve if I had to perform data recovery on it.

You might try experimenting with different NAS software using Hyper-V or ESXi and virtualizing them, it's pretty fun. I've even managed to get my pfSense (FreeBSD based) firewall/router virtualized. I've also got an instance of WHS virtualized in Hyper-V and still get over 100MB/s of throughput in and out.

Note pjkenned has also done quite a lot of experimenting with different NAS software all running under Hyper-V, which he discusses at his blog: http://www.servethehome.com/
post #5902 of 7891
Quote:
Originally Posted by odditory View Post

FreeNAS is something I've gone back and forth with, the allure of ZFS and all (save for the fact you can't OCE a RAID-Z volume - deal breaker),

Could you explain the issue in English perhaps (pretty please)?
post #5903 of 7891
Quote:
Originally Posted by Krobar View Post

Anyone running Freenas?

I'm just testing but with a single HD and a Gigabit Intel NIC (No Jumbos) on Client and Server I only get 44.5Mbyte/sec writes. My old P3 Win2K server (Ableit with Raid 5 array) manages 60-70Mbytes/sec. Its not so much the throughput that bothers me but the 50% CPU usage on the Freenas box, is this normal?

FreeNAS's (FreeBSD) implementation of Samba isn't so good.
post #5904 of 7891
Quote:
Originally Posted by odditory View Post

FreeNAS is something I've gone back and forth with, the allure of ZFS and all (save for the fact you can't OCE a RAID-Z volume - deal breaker), but ultimately I always give up in favor of keeping all my disks and arrays NTFS. The last thing I'd want is to be at the mercy of a Linux or FreeBSD learning curve if I had to perform data recovery on it.

You might try experimenting with different NAS software using Hyper-V or ESXi and virtualizing them, it's pretty fun. I've even managed to get my pfSense (FreeBSD based) firewall/router virtualized. I've also got an instance of WHS virtualized in Hyper-V and still get over 100MB/s of throughput in and out.

Note pjkenned has also done quite a lot of experimenting with different NAS software all running under Hyper-V, which he discusses at his blog: http://www.servethehome.com/

Do you run ESXi or something similar on your main storage array, or do you use a separate machine just for experimentation? Do you find that virtualizing makes this more complex and/or less reliable? If you run your array under something like ESXi, do you create a vmdk on the array, or just use it in raw or passthrough mode (not sure if that's the right term)?

I was thinking of using ESXi, but it seems to introduce a fair bit of processing overhead and a lot of added complexity (and therefore the possibility of more problems). What has your experience been?
post #5905 of 7891
Anyone got any suggestions for what 120mm fans I should use in my Norco 4220? Are Noctuas still the ones to go for?
post #5906 of 7891
Quote:
Originally Posted by scientest View Post

Could you explain the issue in English perhaps (pretty please)?

He would like to use FreeNAS because the ZFS support in FreeBSD (the OS FreeNAS is built on) is better than the ZFS support in Linux.

ZFS offers a wealth of features for software RAID arrays (called RAIDZ or RAIDZ2 for ~RAID5 & 6) - it is very low overhead, massively scalable, offers automatic online de-duplication, etc...

However, the one thing it can't do is expand an array. You can only add new sets of disks to an existing pool. In more 'RAID-like' terms, you can't add drives to an existing RAID6 array like most hardware RAID cards can, but you can add a new set of drives and create a 'RAID60'-style array across them. The big advantage comes from the fact that not all the RAID spans need to be the same size - so you can use 8x1TB drives for your first set of disks, then add 8x2TB drives, then down the line add 8x4TB drives and span across all of them. For a RAIDZ2 (RAID6) array you would lose the capacity of 2 drives from each set - and end up with 6x1+6x2+6x4=42TB usable capacity, all in a single array.
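The capacity arithmetic in that example can be sketched as follows (vdev sizes taken from the post; each RAIDZ2 vdev gives up two drives' worth of capacity to parity):

```python
def raidz2_vdev_usable_tb(n_drives, drive_tb):
    """Usable capacity of one RAIDZ2 vdev: two drives' worth goes to parity."""
    return (n_drives - 2) * drive_tb

# Three 8-drive vdevs of 1TB, 2TB and 4TB drives, pooled together over time.
pool_tb = sum(raidz2_vdev_usable_tb(8, tb) for tb in (1, 2, 4))
print(pool_tb)  # 6*1 + 6*2 + 6*4 = 42 TB, matching the figure above
```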

He is hesitant to use FreeNAS for his OS because he is not overly familiar with FreeBSD, and as a result doesn't want to have to perform data recovery with CLI tools on an OS he doesn't know inside out.

That's about what he was talking about (or my interpretation of it); whether it is any more in English is another matter...
post #5907 of 7891
Quote:
Originally Posted by the_beast666 View Post

That's about what he was talking about (or my interpretation of it); whether it is any more in English is another matter...

Well, in particular I was looking for an expansion of "OCE" (Google yields nothing useful), but I'm guessing that's got something to do with the ability to expand an existing array? I'm peripherally aware of some of the advantages of ZFS but didn't realize all of the things you list here. I can see the attraction, but I'd have to concur, not being able to expand an existing array is also a deal breaker for me (I don't expect to be adding entire new arrays in one shot).
post #5908 of 7891
OCE = Online Capacity Expansion

Basically the ability to 'grow' an array onto one or more disks (or onto larger disks if all members are replaced).

That would have been much easier...
post #5909 of 7891
Quote:
Originally Posted by the_beast666 View Post

OCE = Online Capacity Expansion

Basically the ability to 'grow' an array onto one or more disks (or onto larger disks if all members are replaced).

That would have been much easier...

Well, I did ask for an explanation of "the issue" and quoted the bit about OCE, but yeah, I could have been clearer. Nonetheless, I still found the rest of your explanation helpful, so thanks for that also...
post #5910 of 7891
Quote:
Originally Posted by scientest View Post

Well, in particular I was looking for an expansion of "OCE" (Google yields nothing useful), but I'm guessing that's got something to do with the ability to expand an existing array? I'm peripherally aware of some of the advantages of ZFS but didn't realize all of the things you list here. I can see the attraction, but I'd have to concur, not being able to expand an existing array is also a deal breaker for me (I don't expect to be adding entire new arrays in one shot).

OCE = Online Capacity Expansion.
In other words the ability to add drives as needed in order to expand storage space of an existing array.

ZFS is a very poor choice for home media storage purposes when the primary goal is to be able to EASILY expand storage as your needs grow, and they certainly will.

With ZFS, when you need more space, you must build a completely new array with another set of drives. Each time, you again lose more drives to parity on the newly created array, and you find yourself with several separate arrays.