AVS › AVS Forum › Video Components › Home Theater Computers › Guide To Building A Media Storage Server

Guide To Building A Media Storage Server - Page 9

post #241 of 7891
I was leaning towards WHS because it seems to automate most of these tasks - any reason to choose unRaid over it?
post #242 of 7891
Quote:
Originally Posted by PhoenixDown View Post

I was leaning towards WHS because it seems to automate most of these tasks - any reason to choose unRaid over it?

Unraid uses only 1 hdd to protect the rest of your "array". WHS cuts the storage capacity of your system in half to provide similar protection. That being said, if you're willing to modify your WHS in an "unsupported" manner you can just enable regular software raid (reference1, reference2).
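
To put rough numbers on that tradeoff, here's a back-of-the-envelope Python sketch (my own illustration, not either product's actual accounting; it assumes equal-size drives and full duplication on WHS):

```python
def usable_tb(n_drives, drive_tb, scheme):
    """Approximate usable capacity under each protection scheme."""
    if scheme == "unraid":   # one drive holds parity, the rest hold data
        return (n_drives - 1) * drive_tb
    if scheme == "whs":      # folder duplication keeps two copies of everything
        return n_drives * drive_tb / 2
    raise ValueError(scheme)

# Six 1TB drives: unRAID leaves 5TB usable, WHS duplication leaves 3TB.
```

In practice WHS only halves the space for shares you actually duplicate, so the gap narrows if you leave some shares unprotected.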
post #243 of 7891
Quote:
Originally Posted by kapone View Post

Sure, but that one has an embedded VIA processor as well, and draws about 10w.

Ok, I figured there was a reason
post #244 of 7891
Quote:
Originally Posted by kapone View Post

Most controllers allow multiple arrays (and multiple types) per controller.

So kapone...

IF I have three Raptor drives in Raid 0 for the boot drive/s

and if I have 3-4 1TB or 1.5TB drives in Raid 0 for my PRIMARY storage and

6 drives for my backup in Raid 6 for good redundancy… HOW MANY RAID CARDS do I NEED and which array should I put in what location?

Ie, should I put the raptors on a good Raid card with say 1GB of memory or on the built in Raid on the MOBO?

This is all in the same case that is and yes, there is room for them.

George
post #245 of 7891
Quote:
Originally Posted by guiri View Post

So kapone...

IF I have three Raptor drives in Raid 0 for the boot drive/s…

and if I have 3-4 1TB or 1.5TB drives in Raid 0 for my PRIMARY storage and

6 drives for my backup in Raid 6 for good redundancy… HOW MANY RAID CARDS do I NEED and which array should I put in what location?

Ie, should I put the raptors on a good Raid card with say 1GB of memory or on the built in Raid on the MOBO?

This is all in the same case that is and yes, there is room for them.

George

You don't need anything fancy for RAID0. Intel's ICH controller can handle that w/o issue, and you'll get very good speed. You could put 3 Raptors and 3 1-1.5TB drives all on the Intel controller...it can hold up to 6 drives. Just build 2 separate RAID0 arrays in the Intel Matrix BIOS.

The fancy RAID cards really help w/ parity calculation RAID formats like RAID5/6. So, if you want 6 drives in RAID6 then you'll need a sata card w/ at least 6 ports that supports RAID6.


I'm a little confused as to your proposed solution, though. The RAID6 should be plenty fast, and it's redundant already. If you want secondary data storage I'd look to a separate PC, or external backup, and ideally offsite storage.
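
For reference, the capacity/redundancy tradeoff of the levels being discussed pencils out like this (a quick sketch of my own; real arrays lose a bit more to metadata and formatting):

```python
def raid_usable(n_drives, drive_tb, level):
    """Usable capacity and drive-failure tolerance for common RAID levels."""
    if level == 0:
        return n_drives * drive_tb, 0        # striping: all space, no redundancy
    if level == 5:
        return (n_drives - 1) * drive_tb, 1  # one drive's worth of parity
    if level == 6:
        return (n_drives - 2) * drive_tb, 2  # two drives' worth of parity
    raise ValueError(level)

# Six 1.5TB drives in RAID6: 6TB usable, survives any two drive failures.
```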
post #246 of 7891
Bingo. There's zero benefit to having a bunch of drives in RAID-0 as your "primary" storage. Unless you are doing realtime HD video editing or something, there's nothing that RAID 5/6 can't handle. Just put your O/S on the ICH ports, and get an add-in card for the storage. Which card to get depends on your budget, intended usage...lots of things.
post #247 of 7891
Quote:
Originally Posted by WeeboTech View Post


A 9 drive setup with trayless removable SATA units cost me less than $700, although probably less with all the rebates and reused spare parts.

Abit AB9 PRO ~ $90 (9 internal sata, 1 external sata)
Celeron M 440 2GHz ~ $50
4g Ram ~ $70
Centurion 590 ~ $75
Power Supply ~ $80-120
Extra 120MM fans ~ $15 (box of 4 coolermasters).
unRAID Pro License - $150
Kingston SD Card + reader $15

Additional costs are based on how you mount SATA Drives.
You can use the 4 in 3 modules or use the i-Star fanless trayless SATA modules like I did.
These were $15 each. I used 9 (1 per internal sata port).

Some fancy taping to direct air where I wanted and I'm all set.

Granted this is only 9 drives, with the ability to add 1 externally.
I guess I don't grow as fast as others do.
As soon as I fill up the slots, it seems drives get big enough for me to begin replacing the smaller ones.

I plan to grow into either a 15 drive setup with 5in3 trayless removables or graduate to a Norco setup if I find the right motherboard for it.

WeeboTech - Do you think this setup is powerful enough to run WHS or Vista?

Mario
post #248 of 7891
jason4207, I'm only a few miles from you. I'm in Marshville right next to Monroe

What do you do for a living?

George
post #249 of 7891
Any comments about using Ubuntu as a server OS? My only concern is whether it has the drivers for most of today's modern equipment.
post #250 of 7891
Quote:
Originally Posted by mariomp View Post

WeeboTech - Do you think this setup is powerful enough to run WHS or Vista?
Mario

Bump up the CPU and Video and you would be fine.
post #251 of 7891
A few questions for those of you with Areca cards.
I'm exploring the possibility of Areca support under unRAID.

1. Is it possible to do a SAFE33 or SAFE50 type arrangement on the hardware raid card.
I.E. Similar to Intel's matrix raid or Silicon Image Steelvine SAFE50 Arrangement.

I would rig in a SIL5744 bridgeboard (a la http://www.cooldrives.com/usb20espcbfo.html) but the SAFE modes concatenate segments of the drives rather than doing a RAID0.

The Steelvine processor does work with unRAID because to the hardware it just looks like a regular eSata drive.


Something in the Areca manual suggests it might be possible to do this with Volume sets.
"Volume sets of different levels may exist on the same raidset"

What I'm looking to do is have 2 drives setup with part as RAID0 and part as RAID1.

2. The manual mentions pass through mode. Has anyone tried this and if so how is it working? Anyone try this with Linux? Is Linux able to access the drives to do a smartctl check?

3. For anyone who has an Areca on Linux: how do the devices show up in /dev/disk/by-id and /dev/disk/by-label?
post #252 of 7891
Quote:
Originally Posted by mariomp View Post

WeeboTech - Do you think this setup is powerful enough to run WHS or Vista?

Mario

As WeeboTech said a better CPU, and video card would allow you to run Vista, but if you're going to go that route I'd pick a more modern motherboard w/ 45nm support as well. P43/45

I have the MSI P43 Neo3-F motherboard and an E5200 CPU. The board has 8 sata ports which isn't quite as many as the ABit AB9 Pro, but still enough for most folks. The MSI has better memory compatibility, and is compatible w/ 45nm CPU's. It's also priced similarly or even less than the Abit AB9 Pro.

The E5200 is a 45nm dual-core 2.5GHz CPU that can be overclocked very easily to 3.5GHz if you need that extra power for Vista. I don't need that much power for my unRAID file server so I actually have my E5200 underclocked and undervolted to 333x6=2GHz at an ultra-low vcore of 0.856v.

Very powerful graphics cards are now dirt cheap. I know eVGA has B-stock items and you can get the 8800GTS-512 for $99, or the 9600 GSO for $69. Either will allow you to run Vista Aero w/o issue, and even play all the modern games out there w/ very good settings. You can get cheaper cards, but you won't find a much better price-performance ratio if you need graphical power. Running a graphics card like one of these on a Windows platform also gives you the option to use the GPU for encoding which can save you a lot of time if you like to encode. I haven't tried this yet, but I've read a little about it.


Edit: Also wanted to add that when you pick a PSU don't just get one that's $80-$120. You need to pick out a quality PSU. The Corsair's have an excellent reputation. Google 'jonnyguru' for some other good recommendations.
post #253 of 7891
I'm also looking to go on the cheap and have access to academic copies of Windows Server 2003 and hopefully soon 2008. My plan is to use the Supermicro card with Seagate 1TB drives in a RAID 5 array (3 to start, more to follow).

I understand Windows Server 2003 can do RAID 5 but requires breaking the array to add a disk. Any other SW RAID options that can add a new disk without breaking the array? Will Server 2008 do this?

My application is storage of ripped DVDs, CDs, and recorded TV programs for viewing on a separate HTPC. The storage server will also contain 2 tuner cards and will be running on 100Mb/s ethernet.

Thanks,

TW
post #254 of 7891
Quote:


My question is, can't you do s/w RAID 5 in Windows Server

You certainly can.

Quote:


which allows for expansion of the array

No can do. Windows software RAID (2003 OR 2008) does not allow expansion of a RAID-5 array. Yes, I know all about diskpart.exe, but it still doesn't work.

Quote:


and portability to other hardware if controller/motherboard failure occurs?

Sure, that's one of the biggest advantages of software RAID.

Quote:


The storage server will also contain 2 tuner cards

For what? why? Windows 2003 or 2008 do not have any media center components, and unless you're gonna be using any native recording software from those cards, there's no reason to have the tuners in the server.

Now, if you were running Vista on the server, it would make sense, ....but Vista doesn't do software RAID-5... Not even Ultimate... hasn't been hacked yet.
post #255 of 7891
Quote:
Originally Posted by Torquewrench View Post

...will be running on 100Mb/s ethernet.

100Mb/s may be fine, but I never tested. I decided to jump on the Gb ethernet bandwagon. My main PC and new file server already have Gb, and a PCI card for my wife's PC was only a few bucks. I bought a Wireless-N Gb router for $70, and sold my Wireless-G 100Mb router for $35. So, the upgrade to Gb was not very costly at all, and although I'm not getting the transfer rates some of the guys on here are getting it is still noticeably faster when moving large chunks of data across. Plus, I don't have to worry at all about streaming movies across.

Well worth the small investment IMO.
post #256 of 7891
I thought I read somewhere that you can easily add disks to a RAID 5 on WS2008, you really can't?
post #257 of 7891
Quote:
Originally Posted by Torquewrench View Post

I'm also looking to go on the cheap and have access to academic copies of Windows Server 2003 and hopefully soon 2008. My plan is to use the Supermicro card with Seagate 1TB drives in a RAID 5 array (3 to start, more to follow).

I understand Windows Server 2003 can do RAID 5 but requires breaking the array to add a disk. Any other SW RAID options that can add a new disk without breaking the array? Will Server 2008 do this?

My application is storage of ripped DVDs, CDs, and recorded TV programs for viewing on a separate HTPC. The storage server will also contain 2 tuner cards and will be running on 100Mb/s ethernet.

Thanks,

TW

FlexRAID can add a new disc without breaking the array. http://www.avsforum.com/avs-vb/showthread.php?t=1016375
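
For anyone curious how a single parity drive (the scheme behind FlexRAID and unRAID) can rebuild a lost disk, here's a toy Python demonstration using two-byte "drives" (purely illustrative; the real products operate on whole disks, not byte strings):

```python
from functools import reduce

# Three data "drives", two bytes each
data = [b"\x10\x22", b"\x33\x44", b"\x55\x66"]

# Parity is the XOR of all drives, byte by byte
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*data))

# Lose drive 1, then rebuild it by XOR-ing the survivors with the parity
survivors = [data[0], data[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
assert rebuilt == data[1]  # the lost drive's contents come back exactly
```

This is also why one parity drive only protects against a single simultaneous failure: lose two drives at once and the XOR no longer carries enough information to rebuild either one.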
post #258 of 7891
Quote:
Originally Posted by jagojago View Post

I thought I read somewhere that you can easily add disks to a RAID 5 on WS2008, you really can't?

I believe you can add spares to a software RAID5 set under windows. The issue is expanding the array and the filesystem on top of that. You can do that in software raid under linux and freebsd, but not windows.

If you want to do raid under windows, I really recommend a hardware RAID controller, since you'll get OCE that way, and I think OCE is absolutely essential for a media server, since storage prices are falling like mad.

I just got two Hitachi 7K1000 1 TB drives to expand one of my RAID5 arrays (I have one with 7K1000's and another with 7200.11's) for $89 and $99 each. OCE is really important for that. I'm actually going to take a two-disk 500 GB RAID0 set that is a member of the RAID5 set and convert it to a hot spare for both RAID5 volumes, and have only 1 TB disks in each RAID5 set.

You really want the flexibility to move stuff around like this, and that doesn't work under software RAID under windows.
post #259 of 7891
I'm leaning strongly towards an Areca card; but certainly don't like the cost of this vs. a couple of Supermicro 8-port cards (coupled with the onboard SATA ports, these easily support 20 drives).

In addition to the faster computations, a hardware card provides two key features: (a) staggered spinup; and (b) array spindown.

The downside seems to be the potential compatibility issues with various drives (discussed at some length in the 48-TB storage thread). If it wasn't for those issues, I'd almost certainly have already bought an Areca card.

Bottom line: It seems that a software implementation saves at least $1000 in the cost of the array, and doesn't have the compatibility concerns of a hardware card. But it does not support staggered spinup so a heftier power supply is required. Spindown may be supported via the OS ... depends on how it treats the array drives (I'd appreciate comments on this r.e. the various software RAID implementations).

Has anyone implemented a truly large array with FlexRAID (i.e. at least 10TB)?? Does it support GPT? I don't particularly mind the static nature of it -- my primary large array need is strictly video archiving; so a "rebuild" of the parity on those days when I add content isn't a big deal. Not the best solution -- but nevertheless an attractive alternative.

Another thought r.e. power: Is the Areca card S3 compatible? Just curious if the power consumption on this storage server could be brought down to standby levels (~10w) with a reliable WOL. My experience with desktops is not good with standby -- too many devices don't work well through S3 ... but a simple server with basically nothing but a bunch of disks and a RAID card may be more reliable. While 125-150 watts isn't too bad an idle consumption figure, 10w or so would be even better !!
post #260 of 7891
I have a question about the actual read/writes in Raid configurations and the actual throughput of a local network. It seems that through link aggregation you can hope for a theoretical max throughput of about 250 MB/s or 2 Gb/s on Cat5e/6. What benefit does my server array provide to my media network by reading at 800 MB/s? (kudos for that accomplishment kapone) or even at 500 MB/s? Am I way off base here? Or should I just be focused on adding storage capacity and getting my read/writes to a certain level?
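
The arithmetic behind those numbers, as a quick sketch of my own (real-world throughput comes in lower once protocol overhead is counted):

```python
def link_mb_per_s(n_links, gbps_per_link=1.0):
    """Theoretical peak of aggregated Ethernet links in MB/s (no overhead)."""
    return n_links * gbps_per_link * 1000 / 8

# Two aggregated gigabit links: 250 MB/s peak. An array that reads at
# 500-800 MB/s is therefore network-bound for remote clients; the extra
# speed only pays off locally, or if you aggregate more than two links.
```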
post #261 of 7891
Quote:
Originally Posted by garycase2001 View Post

I'm leaning strongly towards an Areca card; but certainly don't like the cost of this vs. a couple of Supermicro 8-port cards (coupled with the onboard SATA ports, these easily support 20 drives).

In addition to the faster computations, a hardware card provides two key features: (a) staggered spinup; and (b) array spindown.

The downside seems to be the potential compatibility issues with various drives (discussed at some length in the 48-TB storage thread). If it wasn't for those issues, I'd almost certainly have already bought an Areca card.

Bottom line: It seems that a software implementation saves at least $1000 in the cost of the array, and doesn't have the compatibility concerns of a hardware card. But it does not support staggered spinup so a heftier power supply is required. Spindown may be supported via the OS ... depends on how it treats the array drives (I'd appreciate comments on this r.e. the various software RAID implementations).

Has anyone implemented a truly large array with FlexRAID (i.e. at least 10TB)?? Does it support GPT? I don't particularly mind the static nature of it -- my primary large array need is strictly video archiving; so a "rebuild" of the parity on those days when I add content isn't a big deal. Not the best solution -- but nevertheless an attractive alternative.

Another thought r.e. power: Is the Areca card S3 compatible? Just curious if the power consumption on this storage server could be brought down to standby levels (~10w) with a reliable WOL. My experience with desktops is not good with standby -- too many devices don't work well through S3 ... but a simple server with basically nothing but a bunch of disks and a RAID card may be more reliable. While 125-150 watts isn't too bad an idle consumption figure, 10w or so would be even better !!

Linux software raid seems to support staggered spinup, at least my implementation does. Forget about spinup/spindown of the array in linux though.
post #262 of 7891
Quote:
Originally Posted by MikeSM View Post

I believe you can add spares to a software RAID5 set under windows. The issue is expanding the array and the filesystem on top of that. You can do that in software raid under linux and freebsd, but not windows.

If you want to do raid under windows, I really recommend a hardware RAID controller, since you'll get OCE that way, and I think OCE is absolutely essential for a media server, since storage prices are falling like mad.

I just got two Hitachi 7K1000 1 TB drives to expand one of my RAID5 arrays (I have one with 7K1000's and another with 7200.11's) for $89 and $99 each. OCE is really important for that. I'm actually going to take a two-disk 500 GB RAID0 set that is a member of the RAID5 set and convert it to a hot spare for both RAID5 volumes, and have only 1 TB disks in each RAID5 set.

You really want the flexibility to move stuff around like this, and that doesn't work under software RAID under windows.

Well that sucks, my whole plan revolved around building a RAID 5 on Windows Server 2008 and adding more drives 1 TB at a time as I needed. Can I run FreeNAS on Windows Server? Is that the next best alternative for software raid? I just don't have the cash for a hardware raid card right now.
post #263 of 7891
FreeNAS is a standalone O/S. (A FreeBSD flavor)
post #264 of 7891
Quote:
Originally Posted by jagojago View Post

Well that sucks, my whole plan revolved around building a RAID 5 on Windows Server 2008 and adding more drives 1 TB at a time as I needed. Can I run FreeNAS on Windows Server? Is that the next best alternative for software raid? I just don't have the cash for a hardware raid card right now.

As I recall, you were originally planning on using FreeNAS and only considered WS 2008 since you got a free academic license. For pure media storage server purposes, there really isn't that much benefit going with WS 2008. You're probably better off sticking with your first choice (FreeNAS).
post #265 of 7891
Quote:
Originally Posted by garycase2001 View Post

I'm leaning strongly towards an Areca card; but certainly don't like the cost of this vs. a couple of Supermicro 8-port cards (coupled with the onboard SATA ports, these easily support 20 drives).

In addition to the faster computations, a hardware card provides two key features: (a) staggered spinup; and (b) array spindown.

The downside seems to be the potential compatibility issues with various drives (discussed at some length in the 48-TB storage thread). If it wasn't for those issues, I'd almost certainly have already bought an Areca card.

Bottom line: It seems that a software implementation saves at least $1000 in the cost of the array, and doesn't have the compatibility concerns of a hardware card. But it does not support staggered spinup so a heftier power supply is required. Spindown may be supported via the OS ... depends on how it treats the array drives (I'd appreciate comments on this r.e. the various software RAID implementations).

The Western Digital drives that I have all have a jumper setting for PM2 or PUIS, (Power Up In Standby). According to the spec, this would seem to provide a poor man's staggered spinup feature to minimize startup power costs. (Assuming HDD driver support.) If the OS provides spindown support based on inactivity, then the need for hardware RAID would not seem to be so important.

Has anyone ever used this PM2/PUIS jumper setting and will using it provide the power up benefit I am inferring from the spec?
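
The reason staggered spinup (via PUIS or a RAID card) matters for PSU sizing, as a rough sketch. The 2.5 A figure is a typical datasheet spin-up draw on the 12 V rail; treat it as an assumption here, not a spec for any particular model:

```python
def spinup_amps_12v(n_drives, amps_per_drive=2.5, staggered=False):
    """Worst-case 12 V draw while drives spin up.

    Staggered spinup brings drives online one at a time, so the peak
    stays near a single drive's draw instead of scaling with the count.
    """
    return amps_per_drive * (1 if staggered else n_drives)

# Nine drives all at once: 22.5 A of spin-up load on the 12 V rail,
# versus ~2.5 A when staggered.
```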
post #266 of 7891
I meant to say FlexRAID, not FreeNAS, sorry. Will FlexRAID work well on WS08? I was going to use FreeNAS but I'd rather give the computer more functionality for other stuff if I can get WS for free.
post #267 of 7891
Quote:
Originally Posted by kapone View Post

For what? why? Windows 2003 or 2008 do not have any media center components, and unless you're gonna be using any native recording software from those cards, there's no reason to have the tuners in the server.

I use GBPVR which has the ability to run clients off of a server. Right now my GBPVR computer is by the TV and has a single drive, but I need more storage and I don't want the noise and heat of so many drives in the room. Putting the tuner cards in the server lets me serve multiple clients (2 laptops and 2 desktop PCs, maybe other TV clients eventually) and also only stream in one direction (vs having to stream recordings from the pc with tuners to the server, and then from server to client).

That's my rationale, not sure it's the best way but it's all in my head for the moment, so if you've got recommendations on the best way to implement this I'm all ears.

Here's my current build plan----
Proc.: Athlon XP 2500+ $0 (lying around)
Cooler: Thermalright SLK-900 $0
MotherB: Asus A7N8X-E Deluxe $35 eBay
Memory: 2x1GB PC2700 $0
Video: ATI Radeon 9200SE $0
Optical: DVD-RW $0
HD Ctrlr: Supermicro $110
HDs: 3X Seagate 1TB SATA $0 + $340 (already have 1)
NIC: onboard Gb/s
Windows XP SP3
FlexRAID (the best option I've found yet)
post #268 of 7891
Quote:
Originally Posted by eToolGuy View Post

The Western Digital drives that I have all have a jumper setting for PM2 or PUIS, (Power Up In Standby). According to the spec, this would seem to provide a poor man's staggered spinup feature to minimize startup power costs. (Assuming HDD driver support.) If the OS provides spindown support based on inactivity, then the need for hardware RAID would not seem to be so important.

Has anyone ever used this PM2/PUIS jumper setting and will using it provide the power up benefit I am inferring from the spec?


From the spec you linked to it sounds like something in the OS still needs a spinup routine to make staggered spinup happen. Does anyone know if any software RAIDs do this?
post #269 of 7891
Quote:
Originally Posted by WeeboTech View Post

A few questions for those of you with Areca cards.
I'm exploring the possibility of Areca support under unRAID.

1. Is it possible to do a SAFE33 or SAFE50 type arrangement on the hardware raid card.
I.E. Similar to Intel's matrix raid or Silicon Image Steelvine SAFE50 Arrangement.

I wanted to make sure I understood the Areca's FW before answering your question, so I just created a total of 4 volumes on a single 4 disk raidset. The answer is yes, you can mimic SAFE33 or SAFE50, but better still you are not limited to those configs. You could create your own "SAFE90" variant that used 90% of the array's available space for example, or 3 SAFE33's, etc.

One limitation is that if you want to create a Raid 1 volume on a raidset with more than two disks then Raid10 is necessary (Raid 1 across two Raid 0's). Not that this is a problem as Raid10 is much faster than Raid1 anyways. If you are creating multiple volumes on a 2 disk raidset I don't believe you'd have this limitation.
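
For concreteness, the kind of split WeeboTech is after (part RAID1, part RAID0 on a two-disk raidset) pencils out like this. This is just my reading of the SAFE33/SAFE50 idea applied to two equal disks, not Areca's or Silicon Image's actual volume accounting:

```python
def split_volumes(disk_gb, mirrored_fraction):
    """Carve a two-disk raidset into a mirrored slice and a striped slice."""
    raid1_gb = disk_gb * mirrored_fraction            # mirrored: one copy usable
    raid0_gb = 2 * disk_gb * (1 - mirrored_fraction)  # striped: both remainders
    return raid1_gb, raid0_gb

# A SAFE50-style split of two 1000 GB disks:
# 500 GB protected (RAID1) + 1000 GB fast scratch (RAID0).
```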

Quote:
Originally Posted by WeeboTech View Post

What I'm looking to do is have 2 drives setup with part as RAID0 and part as RAID1.

Based on my testing above I don't see how this would be a problem, but after I finish stress testing my new setup I'll try creating a 2 disk raidset with 2 volumes to verify.

Quote:
Originally Posted by WeeboTech View Post

2. The manual mentions pass through mode. Has anyone tried this and if so how is it working? Anyone try this with Linux? Is Linux able to access the drives to do a smartctl check?

Haven't tried pass through mode yet. I assume it simply makes a single disk available to the host OS outside of an array (similar to JBOD). Can't speak to linux atm. I had ESX 3i installed as I was going to use it in my new system, so I could have tested this, but it doesn't support the VT on my DSBF-DE or e5420 out of the box (not sure which), so I couldn't create 64bit guests. I didn't feel like fiddling, so now I'm on Hyper-V 2008 (Considerably slower disk i/o compared to ESX, but it just works :/)

Quote:
Originally Posted by jason4207 View Post

Edit: Also wanted to add that when you pick a PSU don't just get one that's $80-$120. You need to pick out a quality PSU. The Corsair's have an excellent reputation. Google 'jonnyguru' for some other good recommendations.

I agree with jason on this. I don't care what brand you choose as long as you do your research and make sure you're buying a high quality psu. Permanently losing an 8 disk array because your $60 PSU fried the hdd's would be a bad day.

Quote:
Originally Posted by kapone View Post

For what? why? Windows 2003 or 2008 do not have any media center components, and unless you're gonna be using any native recording software from those cards, there's no reason to have the tuners in the server.

Now, if you were running Vista on the server, it would make sense, ....but Vista doesn't do software RAID-5... Not even Ultimate... hasn't been hacked yet.

If he didn't reply already I was going to explain the benefits of running tuner hardware in a server/client configuration across the network.

Quote:
Originally Posted by garycase2001 View Post

I'm leaning strongly towards an Areca card; but certainly don't like the cost of this vs. a couple of Supermicro 8-port cards (coupled with the onboard SATA ports, these easily support 20 drives).

In addition to the faster computations, a hardware card provides two key features: (a) staggered spinup; and (b) array spindown.

The downside seems to be the potential compatibility issues with various drives (discussed at some length in the 48-TB storage thread). If it wasn't for those issues, I'd almost certainly have already bought an Areca card.

I just received a new 1280ml-24 (2GB ECC) and 4 Seagate 1.5TB hdd's (ST31500341AS) to test. I sent my 1680ix back to newegg after my 2 weeks of hard work dialing it in to be reliable resulted in a rock solid 8 disk array (WD1001FALS) with write speeds of only 80MB/sec (BLEH!).

I'm in the process of configuring the new card so that I can perform some stress tests and see how reliable this configuration is. I looked everywhere for concrete compatibility data between the 1280ml and Seagate 1.5TB, but couldn't find it, so I guess I'm going to be the guinea pig .

I'll post my results, good or bad.

Quote:
Originally Posted by LuncHwagon View Post

I have a question about the actual read/writes in Raid configurations and the actual throughput of a local network. It seems that through link aggregation you can hope for a theoretical max throughput of about 250 MB/s or 2 Gb/s on Cat5e/6. What benefit does my server array provide to my media network by reading at 800 MB/s? (kudos for that accomplishment kapone) or even at 500 MB/s? Am I way off base here? Or should I just be focused on adding storage capacity and getting my read/writes to a certain level?

You aren't limited to aggregating only 2 links.

Quote:
Originally Posted by eToolGuy View Post

The Western Digital drives that I have all have a jumper setting for PM2 or PUIS, (Power Up In Standby). According to the spec, this would seem to provide a poor man's staggered spinup feature to minimize startup power costs. (Assuming HDD driver support.) If the OS provides spindown support based on inactivity, then the need for hardware RAID would not seem to be so important.

Has anyone ever used this PM2/PUIS jumper setting and will using it provide the power up benefit I am inferring from the spec?

Spindown is supported on a few software raid configurations (Unraid, Linux, etc.). I enabled the PM2 jumper on all 8 of my WD1001FALS drives before plugging them in to my 1680ix. Whoops. None of the drives would spin up in that configuration. I wasn't interested in testing the PM2 feature unfortunately, so I didn't try connecting them directly to the motherboard. After removing the jumper (disabling PM2) they booted fine with the Areca.

Quote:
Originally Posted by Torquewrench View Post

I use GBPVR which has the ability to run clients off of a server. Right now my GBPVR computer is by the TV and has a single drive, but I need more storage and I don't want the noise and heat of so many drives in the room. Putting the tuner cards in the server lets me serve multiple clients (2 laptops and 2 desktop PCs, maybe other TV clients eventually) and also only stream in one direction (vs having to stream recordings from the pc with tuners to the server, and then from server to client).

That's my rationale, not sure it's the best way but it's all in my head for the moment, so if you've got recommendations on the best way to implement this I'm all ears.

I was going to mention the same thing to Kapone. Though I used Mediaportal or BeyondTV for my client/server setup.

Removing all of those tuners from my clients made interrupted recordings (from people restarting the machines) a thing of the past, not to mention simplifying configuration. It also significantly reduces the load on the network (no more encoding the streams across the network to the file server).

Quote:
Originally Posted by Torquewrench View Post

Here's my current build plan----
Proc.: Athlon XP 2500+ $0 (lying around)
Cooler: Thermalright SLK-900 $0
MotherB: Asus A7N8X-E Deluxe $35 eBay
Memory: 2x1GB PC2700 $0
Video: ATI Radeon 9200SE $0
Optical: DVD-RW $0
HD Ctrlr: Supermicro $110
HDs: 3X Seagate 1TB SATA $0 + $340 (already have 1)
NIC: onboard Gb/s
Windows XP SP3
FlexRAID (the best option I've found yet)

Nice setup, eerily similar to my Unraid server (Though I upgraded my Athlon XP to a Sempron 3000+ )

Can I ask why you aren't purchasing the new 1.5TB drives from Seagate? With FlexRAID it doesn't matter that you already have a 1TB, correct?
post #270 of 7891
Quote:
Originally Posted by kenshin-san View Post

Based on my testing above I don't see how this would be a problem, but after I finish stress testing my new setup I'll try creating a 2 disk raidset with 2 volumes to verify.

Thank you for all the information and help with this!