Guide To Building A Media Storage Server - Page 2 - AVS Forum
post #31 of 7891 Old 09-30-2008, 10:59 AM
AVS Special Member
 
archer75's Avatar
 
Join Date: Dec 2003
Location: Oregon
Posts: 1,719
Mentioned: 1 Post(s)
Tagged: 0 Thread(s)
Quoted: 32 Post(s)
Liked: 36
All I have to do to enable duplication on the remaining files is buy a single drive, which would cost me far less than buying all-new drives of equal size for a RAID array.
I also have an external drive that I back up the server to.

And there are plugins for the WHS console for offsite backups.

I would take the less efficient duplication method over RAID any day, due to the extra benefits of WHS itself along with the ability to use all of my drives of different sizes.

I do understand what you are saying. But RAID also has its own drawbacks (everything has pros and cons), and not everyone can afford to buy a bunch of brand-new drives, especially when they already have a bunch of perfectly good ones.
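To put rough numbers on that tradeoff, here's a minimal sketch; the drive sizes are invented for illustration, not anyone's actual collection:

```python
# Hypothetical mixed drive collection, sizes in GB.
drives_gb = [250, 320, 400, 500, 750]

# WHS folder duplication: every duplicated file lives on two drives, so
# usable space is roughly half the total pool - but every drive
# contributes its full capacity regardless of size.
duplication_usable = sum(drives_gb) / 2                # 1110.0 GB

# RAID 5 across the same drives: each member is truncated to the size of
# the smallest drive, and one drive's worth of space goes to parity.
raid5_usable = (len(drives_gb) - 1) * min(drives_gb)   # 1000 GB

print(duplication_usable, raid5_usable)
```

With a matched set of new drives RAID 5 is more space-efficient; with a leftover mixed collection, duplication can come out ahead while still using every drive.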

HTPC: Intel e6300 2.8ghz, Intel DG45ID, 2gb DDR2, Radeon 5570, MCE IR receiver, Yamaha RX-V663 receiver via HDMI, panasonic ax100u, 145" S-I-L-V-E-R painted screen, 2x Roku 3's, chromecast, Amazon Fire TV, Vizio M602i-B3
archer75 is offline  
post #32 of 7891 Old 09-30-2008, 11:12 AM
Senior Member
 
 
Join Date: Sep 2007
Location: Metro Boston
Posts: 466
For those who might be interested, there's a current thread in the Linux HTPC forum about using that OS for a media server.
SeijiSensei is offline  
post #33 of 7891 Old 09-30-2008, 11:20 AM
Member
 
 
Join Date: Dec 2007
Posts: 92
Quote:
Originally Posted by odditory View Post

Welcome to the thread, JimsArcade. Yes, there is one thing that's very obviously not cost-effective: why spend over $600 on a tower case and 3 x 5-in-3 expanders to get 15 drive bays when you can buy a Norco RPC-4020 case for $289 that comes with 20 drive bays built in? If you need rack rails, add the Norco RL-26 rail kit for $37.



I hate to sound like a Norco cheerleader, but they've got a ridiculously low price point relative to other ways of achieving that many drive bays. I think this case has become the easiest decision when putting a parts list together. Back in January, when I first bought a Supermicro 24-bay case for around $1000, I thought *that* was the deal of the century, since prior to that, cases with as many bays cost multiple thousands and usually came only from bigger-name vendors.

As for the rest of the parts list compiled by renethx, I think it's a bit dated and there are better alternatives - I'm working on a new list right now. One of the things that's important to me is a motherboard with as many PCIe slots as possible (at least 3 or 4) for multiport SAS/SATA cards. I also have a personal preference for Intel-based CPUs over desktop-class AMD chips when it comes to running server OSes (unless we're talking about the more expensive AMD Opteron chips, which are good).

As I requested regarding a configuration for WHS, I am planning to get the above case for a 20TB DAS. If you can come up with the article as soon as possible using an Intel motherboard and processor, I would really appreciate it.

The best possible and lowest-cost DAS or NAS
honeybrain is offline  
post #34 of 7891 Old 09-30-2008, 11:31 AM
AVS Special Member
 
 
Join Date: Jan 2003
Posts: 4,429
Quote:
Originally Posted by honeybrain View Post

The best possible and lowest-cost DAS or NAS

Those are two contradictory goals, but I know where you're coming from.

The "cheapest" option, in my opinion, for ~20 disks is:

- Get three of the Supermicro MV8 cards and a motherboard with 3 PCI-X slots, and run software RAID.

You can't beat the price of those cards: 8 ports for $95. Anything else is more expensive, unless you start watching fleabay like a hawk waiting for a "good" deal.

Another option is to:

- Get a 4-port Silicon Image card and four port multipliers (~$85 for the card and about $340 for the multipliers)

But that's still more expensive.
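A quick back-of-the-envelope comparison of the two options, using the prices quoted above; the five-drives-per-multiplier fan-out is an assumption (typical for SiI-based multipliers), not something stated in the post:

```python
# Option 1: three 8-port Supermicro MV8 cards at $95 each.
mv8_cost = 3 * 95    # $285
mv8_ports = 3 * 8    # 24 ports

# Option 2: one 4-port Silicon Image card plus four port multipliers,
# assuming each multiplier fans one host port out to five drives.
sil_cost = 85 + 340  # $425
sil_ports = 4 * 5    # 20 drives

print(mv8_cost / mv8_ports)  # 11.875 $/port
print(sil_cost / sil_ports)  # 21.25  $/port
```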
kapone is offline  
post #35 of 7891 Old 09-30-2008, 11:38 AM
Member
 
 
Join Date: Nov 2007
Posts: 39
Quote:
Originally Posted by kapone View Post

So, then it all boils down to cost benefit, time, your technical abilities, money...the usual suspects.

I completely agree.
In this regard, and at the risk of being overly quantitative, it may be helpful to articulate the feature dimensions that have to be implicitly or explicitly addressed in order to successfully build a media server. I know we all have different skills, goals, and budgets. My thinking is that if we can use this list to quantify our priorities, it may simplify our decision process. Some of these are obvious and are only listed for completeness' sake; some of them can simply be builder options; but some of them are pivotal and will significantly affect how useful the profiles defined in this thread are. Anyway, here's the list; please feel free to offer improvements.

Media Server Feature Dimensions:
A. Runtime model (Bit streamer vs. Power HTPC; power up on demand vs. 24/7 operation; HW RAID vs. SW RAID)
B. Fault Tolerance (Quality components, minimize runtime corruption, backup plan, battery backup. (IMHO, there's little point in building this server without very solid robustness.))
C. Operating Costs (Assuming good MTBF components and relatively straightforward array extensions, this is primarily a power consumption issue.)
D. Initial costs (Both money and setup time)
E. Capacity (Execution and network bandwidth / storage size)
F. Maintainability (Primary issue is extending the array; hopefully little else to worry about)
G. System Complexity and User Technical Skills (Is our target user an IT Engineer or motivated AV hobbyist)
H. Software (OS choice, RAID, Ripping tools & Process, Converters, Playback SW)

I'd love to say that I want all of these, but unfortunately some of them oppose each other and force tradeoffs (e.g. skill vs. cost vs. capacity). Ultimately, these are odditory's call and will be addressed in the profiles he defines, but how he addresses them will determine how many people benefit from this thread. So my thinking is that if people chime in now and let him know what they value, it should help the overall value of the thread. I have previously stated that I am primarily interested in a low-energy, robust bit streamer, and am willing to sacrifice other features for these, but clearly there are other points of view, and I think it would be great for people to register them before we get too much inertia going.
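One minimal way to turn a list like this into a decision aid is a weighted score: weight each dimension by how much you care, score each candidate build per dimension, and compare totals. All weights and scores below are invented purely to illustrate the mechanics:

```python
# Dimensions A-H reduced to short keys; weights 0-5 express priorities.
weights = {"runtime": 3, "fault_tolerance": 5, "op_cost": 4, "init_cost": 2,
           "capacity": 3, "maintainability": 3, "complexity": 2, "software": 3}

# Per-build scores 0-5 on each dimension (hypothetical builds and numbers).
builds = {
    "low-power bit streamer": {"runtime": 5, "fault_tolerance": 4, "op_cost": 5,
                               "init_cost": 4, "capacity": 2, "maintainability": 4,
                               "complexity": 4, "software": 3},
    "big striped array":      {"runtime": 3, "fault_tolerance": 4, "op_cost": 2,
                               "init_cost": 1, "capacity": 5, "maintainability": 3,
                               "complexity": 2, "software": 4},
}

def weighted_total(scores):
    """Sum of weight * score over all dimensions."""
    return sum(weights[d] * s for d, s in scores.items())

ranked = sorted(builds, key=lambda b: weighted_total(builds[b]), reverse=True)
print(ranked)
```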
eToolGuy is offline  
post #36 of 7891 Old 09-30-2008, 11:41 AM
Member
 
 
Join Date: Dec 2007
Posts: 92
Quote:
Originally Posted by kapone View Post

Those are two contradictory goals, but I know where you're coming from.

The "cheapest" option, in my opinion, for ~20 disks is:

- Get three of the Supermicro MV8 cards and a motherboard with 3 PCI-X slots, and run software RAID.

You can't beat the price of those cards: 8 ports for $95. Anything else is more expensive, unless you start watching fleabay like a hawk waiting for a "good" deal.

Another option is to:

- Get a 4-port Silicon Image card and four port multipliers (~$85 for the card and about $340 for the multipliers)

But that's still more expensive.

Gotcha, I am planning on the same.

Do these cards work with WHS? And what other components do I need? Is a 750W PSU enough when I grow to around 21 drives? I am planning on Seagate 1.5TB drives for everything except the main OS drive.
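As a rough sanity check on the 750W question, here's a sketch of the power budget; the per-drive and base-system wattages are ballpark figures typical of 3.5" 7200rpm drives of that era, not specs for any particular model:

```python
DRIVES = 21
SPINUP_W = 25   # assumed worst-case per-drive draw at spin-up, mostly on 12V
IDLE_W = 8      # assumed per-drive draw once spinning
BASE_W = 120    # board, CPU, RAM, fans, controller cards (assumed)

worst_case = BASE_W + DRIVES * SPINUP_W  # if all drives spin up at once
steady = BASE_W + DRIVES * IDLE_W        # normal operation

print(worst_case, steady)  # 645 288
```

Worst case assumes all 21 drives spin up simultaneously; a controller with staggered spin-up would lower that peak considerably, and the 12V rail rating matters more than the headline wattage.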
honeybrain is offline  
post #37 of 7891 Old 09-30-2008, 11:43 AM
Senior Member
 
 
Join Date: Aug 2007
Posts: 274
Can you run Ciprico's RAID software with the Supermicro cards?
aaomember is offline  
post #38 of 7891 Old 09-30-2008, 11:46 AM
AVS Special Member
 
 
Join Date: Jan 2003
Posts: 4,429
Quote:


Do these cards work with WHS?

The card has Windows drivers, so it's "compatible" with WHS from a driver point of view, but you won't be able to do any RAID with the card and WHS.

Quote:


And what other components do I need?

How would I know? It depends on your budget, what's in your parts box, what you can reuse, what chassis you're using...lots of things.
kapone is offline  
post #39 of 7891 Old 09-30-2008, 11:47 AM
AVS Special Member
 
 
Join Date: Jan 2003
Posts: 4,429
Quote:
Originally Posted by aaomember View Post

Can you run Ciprico's RAID software with the Supermicro cards?

No.
kapone is offline  
post #40 of 7891 Old 09-30-2008, 11:53 AM
Senior Member
 
 
Join Date: Aug 2007
Posts: 274
Can you tell me if there is any major difference between the PCI-X and the PCIe cards from Ciprico?
aaomember is offline  
post #41 of 7891 Old 09-30-2008, 11:55 AM
AVS Special Member
 
 
Join Date: Jan 2003
Posts: 4,429
Quote:
Originally Posted by aaomember View Post

Can you tell me if there is any major difference between the PCI-X and the PCIe cards from Ciprico?

Functionally, none. I'm sure there are performance differences - PCI-X is just not as optimized as PCIe - but that doesn't mean PCI-X is a slouch. With a good motherboard that has multiple 133MHz PCI-X slots, you can reach almost equivalent performance.
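For reference, the theoretical peaks being compared work out as follows; real-world throughput lands well below these, and the PCIe figure assumes a 1.0-era x4 slot:

```python
# Bus clock x width, expressed in MB/s.
pci_33_32   = 33e6 * 4 / 1e6   # plain PCI: 33 MHz x 32-bit   -> 132 MB/s
pcix_133_64 = 133e6 * 8 / 1e6  # PCI-X:     133 MHz x 64-bit  -> 1064 MB/s
pcie_x4     = 4 * 250          # PCIe 1.0 x4: ~250 MB/s/lane  -> 1000 MB/s

print(pci_33_32, pcix_133_64, pcie_x4)
```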
kapone is offline  
post #42 of 7891 Old 09-30-2008, 12:53 PM
Newbie
 
 
Join Date: Sep 2007
Posts: 5
Been a lurker for some time and thought I would finally chime in. This is something I've been interested in lately. I'm looking to rebuild my HTPC and have been thinking of an HTPC server and a NAS server. So here are some things I would like to see:
  • OS: unRAID or similar. I want something that can take multiple HDDs of various sizes and still have the redundancy protection of RAID without the cost. The downside with unRAID is the write speed, although with the latest version you can add a cache disk to help out. Plus, they just released the multi-core version. I would like to record HD to this as well as stream movies, music, etc.
  • Rackmount case: I like the Norco RPC-4020.
  • CPU: Something dual-core that's cheap and runs cool.
  • MB: As many SATA connections as possible, dual gigabit NICs, boots from USB (for unRAID); I don't care about video or audio.

My biggest question is RAID vs. something like unRAID. So far, what I see is that RAID is faster but costs way more to maintain and grow; unRAID isn't fast, but it's getting better and is very cheap to implement.

So with all that being said, my suggestion would be to base the various system builds not only on know-how and cost but also on the various OS/RAID styles: Combo 1 for someone who wants to build a RAID 5 system, Combo 2 for someone who wants an unRAID system, Combo 3 for someone who wants WHS.
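A sketch of why unRAID fits the mixed-size-drive requirement: the largest drive becomes dedicated parity, and every other drive contributes its full capacity. Drive sizes below are made up for illustration:

```python
drives_gb = [400, 500, 640, 750, 1500]   # hypothetical mixed collection

parity_gb = max(drives_gb)               # unRAID dedicates the largest drive
usable_gb = sum(drives_gb) - parity_gb   # 2290 GB, survives one drive failure

# Classic RAID 5 over the same set truncates every member to the smallest:
raid5_gb = (len(drives_gb) - 1) * min(drives_gb)   # 1600 GB

print(usable_gb, raid5_gb)
```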

That's my two cents.

Robert
rdean79 is offline  
post #43 of 7891 Old 09-30-2008, 01:10 PM - Thread Starter
Advanced Member
 
 
Join Date: Feb 2006
Posts: 771
Quote:
Originally Posted by eToolGuy View Post

I completely agree.
In this regard, and at the risk of being overly quantitative, it may be helpful to articulate the feature dimensions that have to be implicitly or explicitly addressed in order to successfully build a media server. I know we all have different skills, goals, and budgets. My thinking is that if we can use this list to quantify our priorities, it may simplify our decision process. Some of these are obvious and are only listed for completeness' sake; some of them can simply be builder options; but some of them are pivotal and will significantly affect how useful the profiles defined in this thread are. Anyway, here's the list; please feel free to offer improvements.

Media Server Feature Dimensions:
A. Runtime model (Bit streamer vs. Power HTPC; power up on demand vs. 24/7 operation; HW RAID vs. SW RAID)
B. Fault Tolerance (Quality components, minimize runtime corruption, backup plan, battery backup. (IMHO, there's little point in building this server without very solid robustness.))
C. Operating Costs (Assuming good MTBF components and relatively straightforward array extensions, this is primarily a power consumption issue.)
D. Initial costs (Both money and setup time)
E. Capacity (Execution and network bandwidth / storage size)
F. Maintainability (Primary issue is extending the array; hopefully little else to worry about)
G. System Complexity and User Technical Skills (Is our target user an IT Engineer or motivated AV hobbyist)
H. Software (OS choice, RAID, Ripping tools & Process, Converters, Playback SW)

I'd love to say that I want all of these, but unfortunately some of them oppose each other and force tradeoffs (e.g. skill vs. cost vs. capacity). Ultimately, these are odditory's call and will be addressed in the profiles he defines, but how he addresses them will determine how many people benefit from this thread. So my thinking is that if people chime in now and let him know what they value, it should help the overall value of the thread. I have previously stated that I am primarily interested in a low-energy, robust bit streamer, and am willing to sacrifice other features for these, but clearly there are other points of view, and I think it would be great for people to register them before we get too much inertia going.

Great points, but I worry that attempting to cover all the metrics you defined within the confines of a forum thread might be futile. In any case, your suggestions are definitely in line with what I and some even more experienced people around here have in mind, and the suggestions made in the original post will definitely be boiled down from committee opinion as much as possible, rather than just my particular opinion.

I'm going to start out with two systems, tentatively: #1, a lower-cost system with non-striped storage, based on Windows Home Server and non-RAID multiport cards (multiple 8-port Supermicro SAT2-MV8 cards); and #2, a higher-cost system with a striped array for storage, based on an array card and some variant of Windows Server 2003 or 2008. In time, if a niche emerges for a third system, it may be added. These systems will stay media-oriented and won't attempt to be solutions for IT engineers interested in adding loads of cheap storage at work (I always get PMs from those people).

In any case, eToolGuy, I look forward to your comments and insight once these build lists are up.
odditory is offline  
post #44 of 7891 Old 09-30-2008, 01:23 PM - Thread Starter
Advanced Member
 
 
Join Date: Feb 2006
Posts: 771
Quote:
Originally Posted by rdean79 View Post

Been a lurker for some time and thought I would finally chime in. This is something I've been interested in lately. I'm looking to rebuild my HTPC and have been thinking of an HTPC server and a NAS server. So here are some things I would like to see:
  • OS: unRAID or similar. I want something that can take multiple HDDs of various sizes and still have the redundancy protection of RAID without the cost. The downside with unRAID is the write speed, although with the latest version you can add a cache disk to help out. Plus, they just released the multi-core version. I would like to record HD to this as well as stream movies, music, etc.
  • Rackmount case: I like the Norco RPC-4020.
  • CPU: Something dual-core that's cheap and runs cool.
  • MB: As many SATA connections as possible, dual gigabit NICs, boots from USB (for unRAID); I don't care about video or audio.

My biggest question is RAID vs. something like unRAID. So far, what I see is that RAID is faster but costs way more to maintain and grow; unRAID isn't fast, but it's getting better and is very cheap to implement.

So with all that being said, my suggestion would be to base the various system builds not only on know-how and cost but also on the various OS/RAID styles: Combo 1 for someone who wants to build a RAID 5 system, Combo 2 for someone who wants an unRAID system, Combo 3 for someone who wants WHS.

That's my two cents.

Robert

Hi Robert,

I have a lot of respect for unRAID, and in the future I may create a parts list for a system based on it, but in my experience I didn't find anything compelling enough to recommend it over WHS - the two are very similar from a performance standpoint in the way their filesystems work, and WHS is just simpler for more people to operate, not to mention all its other perks as far as integration with Windows (backup features, etc.). As I mentioned, I'm going to start out simple so the information isn't overwhelming, with striped vs. unstriped storage as the common denominator / differentiator.

That said, there are plenty of threads on unRAID, and since this thread is just meant as a jumping-off point, I think people could take a build parts list from this thread, 'run with it,' and create an unRAID system. They still benefit, since most of the guesswork about which components to buy will already have been done for them.
odditory is offline  
post #45 of 7891 Old 09-30-2008, 01:32 PM - Thread Starter
Advanced Member
 
 
Join Date: Feb 2006
Posts: 771
Quote:
Originally Posted by kapone View Post

Functionally, none. I'm sure there are performance differences - PCI-X is just not as optimized as PCIe - but that doesn't mean PCI-X is a slouch. With a good motherboard that has multiple 133MHz PCI-X slots, you can reach almost equivalent performance.

One problem these days is that motherboards with multiple PCI-X slots tend to be on the expensive side (at least among boards made in the last 18 months), and often the higher cost of a motherboard with multiple PCIe and PCI-X slots outweighs the cost savings of a secondhand PCI-X I/O card. Supermicro boards are a good example of this.

In any case, PCI-X has more than enough bandwidth for large striped arrays - I've seen HDTune benchmarks hit over 800MB/s with an older PCI-X based Areca card.

Right now, one of the systems I'm compiling a parts list for will use the PCI-X based Supermicro AOC-SAT2-MV8 card (I wish they'd hurry up with a PCIe variant). I've seen some people putting three of these cards on a motherboard solely in PCI slots, and I'm debating whether that's even worth recommending. It should be fine for WHS, since sequential read/write performance from its "drive pool" won't exceed that of a single drive, but anyone wanting to run three PCI-X cards in PCI slots and planning on reading from more than a couple of drives at a time may be asking for trouble. Just a hunch.

Thoughts?

By the way, here's a build by Ockie on hardforum - he's got three of these 8-port Supermicro cards, one of them in a PCI-X slot and the other two in PCI slots, running WHS.

odditory is offline  
post #46 of 7891 Old 09-30-2008, 01:46 PM
Member
 
 
Join Date: Aug 2008
Posts: 179
Quote:
Originally Posted by laggedreaction View Post

What should I do about running the cables from one case to the other? Could I get away with 1 meter internal fan-out cables going from one case to the backplane of the second? Is there enough length for that?

I'd strongly recommend *against* dodgy cabling that goes in and out of cases, through multiple adapters, and over excessively long runs. SATA signaling voltages can only go so far, and at a certain point you're going to run into signal-integrity problems from all those workarounds to connect drives in one case to a controller in another. SAS uses higher signaling voltages, so it's more robust in these configurations, but you're likely to be using cheaper SATA drives.

One exception to this rule is if you have an intermediary that can boost the signal, such as a powered port multiplier or the like. However, the hardware required for port multipliers is rather overpriced, and hardware RAID card choices are limited. I used to have a HighPoint PM card, but it could only manage about 120MB/sec no matter how I configured the drives. Additionally, the eSATA connector itself is rather flimsy compared to, say, a latching miniSAS connector, and it can easily dislodge and kill your RAID.
alamone is offline  
post #47 of 7891 Old 09-30-2008, 01:59 PM
Member
 
 
Join Date: Aug 2008
Posts: 179
Quote:
Originally Posted by odditory View Post

It should be fine for WHS, since sequential read/write performance from its "drive pool" won't exceed that of a single drive, but anyone wanting to run three PCI-X cards in PCI slots and planning on reading from more than a couple of drives at a time may be asking for trouble. Just a hunch.

Thoughts?

I think your hunch is right. If I recall correctly, the conventional PCI bus is shared across all peripherals connected to it, so all regular PCI cards will be sharing the same bus, which theoretically can't go past 133MB/sec. Add overhead and all that, and your max real-world speed will probably be about 100MB/sec, which will saturate the PCI bus. But like you said, if you only access one drive at a time, it won't be an issue, since most hard drives won't read faster than 100MB/sec.

Then again, if they don't mind 100MB/sec performance, maybe it'll work for them, but I'm not sure whether saturating the bus like that would cause starvation of other PCI devices like sound cards, etc. I never tried it, TBH.
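The sharing math can be sketched as follows; the ~100MB/sec real-world ceiling is the estimate from the post above, not a measured figure:

```python
REALISTIC_PCI_MBS = 100  # assumed real-world ceiling of the shared PCI bus

def per_stream(concurrent_reads: int) -> float:
    """MB/s available to each stream if the shared bus divides evenly."""
    return REALISTIC_PCI_MBS / concurrent_reads

# One drive at a time keeps the whole budget; several simultaneous reads
# split it - still enough for HD playback, but far below one drive's speed.
for n in (1, 2, 4, 8):
    print(n, per_stream(n))
```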
alamone is offline  
post #48 of 7891 Old 09-30-2008, 02:01 PM
AVS Special Member
 
 
Join Date: Jan 2003
Posts: 4,429
Quote:
Originally Posted by odditory View Post

One problem these days is that motherboards with multiple PCI-X slots tend to be on the expensive side (at least among boards made in the last 18 months), and often the higher cost of a motherboard with multiple PCIe and PCI-X slots outweighs the cost savings of a secondhand PCI-X I/O card. Supermicro boards are a good example of this.

In any case, PCI-X has more than enough bandwidth for large striped arrays - I've seen HDTune benchmarks hit over 800MB/s with an older PCI-X based Areca card.

Right now, one of the systems I'm compiling a parts list for will use the PCI-X based Supermicro AOC-SAT2-MV8 card (I wish they'd hurry up with a PCIe variant). I've seen some people putting three of these cards on a motherboard solely in PCI slots, and I'm debating whether that's even worth recommending. It should be fine for WHS, since sequential read/write performance from its "drive pool" won't exceed that of a single drive, but anyone wanting to run three PCI-X cards in PCI slots and planning on reading from more than a couple of drives at a time may be asking for trouble. Just a hunch.

Thoughts?

By the way, here's a build by Ockie on hardforum - he's got three of these 8-port Supermicro cards, one of them in a PCI-X slot and the other two in PCI slots, running WHS.

Agreed, but with a bit of searching you can find them. The ASUS DSBV-D is one such example: dual Socket 771, with 3 PCI-X slots (and a couple of PCIe slots to boot).

Pair that with a single 40W dual-core Xeon 5148 or something, and you'll be flying, with power savings to boot.
kapone is offline  
post #49 of 7891 Old 09-30-2008, 02:06 PM
AVS Special Member
 
 
Join Date: Dec 2003
Location: Oregon
Posts: 1,719
Quote:
Originally Posted by alamone View Post

I think your hunch is right. If I recall correctly, the conventional PCI bus is shared across all peripherals connected to it, so all regular PCI cards will be sharing the same bus, which theoretically can't go past 133MB/sec. Add overhead and all that, and your max real-world speed will probably be about 100MB/sec, which will saturate the PCI bus. But like you said, if you only access one drive at a time, it won't be an issue, since most hard drives won't read faster than 100MB/sec.

Then again, if they don't mind 100MB/sec performance, maybe it'll work for them, but I'm not sure whether saturating the bus like that would cause starvation of other PCI devices like sound cards, etc. I never tried it, TBH.

It's not a problem. I have my two Promise cards in PCI slots. You only need what, 4MB/s, for HD video? I can transfer from my desktop to the server, stream HD to my HTPC, and run a backup from the server to my external drive, all without a hitch. Yes, the file transfers and backup will slow down, but it doesn't affect the video playback.

Prior to WHS PP1, all writes went to the landing drive and then got transferred to their final destination. Now all writes go directly to the drive they are intended for.

Being a server, it doesn't use a sound card. I do have an old Voodoo 3 in an AGP slot for video when I need it, but the only PCI cards are my two Promise cards.

HTPC: Intel e6300 2.8ghz, Intel DG45ID, 2gb DDR2, Radeon 5570, MCE IR receiver, Yamaha RX-V663 receiver via HDMI, panasonic ax100u, 145" S-I-L-V-E-R painted screen, 2x Roku 3's, chromecast, Amazon Fire TV, Vizio M602i-B3
archer75 is offline  
post #50 of 7891 Old 09-30-2008, 02:45 PM
Senior Member
 
 
Join Date: Nov 2001
Location: St. Paul, MN
Posts: 264
I tried WHS, and now I'm running in the opposite direction due to issues with managing large drive collections and Windows instability and quirks. It's not easy to identify failing drives inside WHS, and like most Windows operating systems it's unstable when it comes to drive failures, even when the failed drive isn't a system volume. If you lose a drive, you have no idea what you've lost, since it's pooled; and since you have to completely duplicate your data to protect it, you most likely won't have a backup of what was lost. So I've finally given up on having an end-all-be-all server. What I want is stability. I would only use WHS for a media server with very few drives.

So I've purchased parts for a new unraid server.

NORCO RPC-4020 4U Rackmount Server Case - Retail
Patriot Xporter XT 4GB Flash Drive - (2)
Intel Celeron 430 Conroe-L 1.8GHz LGA 775 35W Single-Core Processor
GIGABYTE GA-G31M-S2L Intel G31 Micro ATX Motherboard
G.SKILL 2GB (2 x 1GB) 240-Pin DDR2 SDRAM DDR2 800
Seagate Barracuda 7200.11 1.5TB 7200 RPM SATA Hard Drive - (2)
Unraid Server Pro Licenses - (2)
Supermicro SAT2 MV8 - (2) - Existing
Mixture of 400GB - 750GB drives - (19) - Existing - Will pick the newest and largest drives to get up to a total of 16 for unraid.
Corsair TX750W - Existing
Unraid

This build should be power efficient and offer a stable platform to serve my media.
havix is offline  
post #51 of 7891 Old 09-30-2008, 03:05 PM
AVS Special Member
 
 
Join Date: Dec 2003
Location: Oregon
Posts: 1,719
Quote:
Originally Posted by havix View Post

I tried WHS, and now I'm running in the opposite direction due to issues with managing large drive collections and Windows instability and quirks. It's not easy to identify failing drives inside WHS, and like most Windows operating systems it's unstable when it comes to drive failures, even when the failed drive isn't a system volume. If you lose a drive, you have no idea what you've lost, since it's pooled; and since you have to completely duplicate your data to protect it, you most likely won't have a backup of what was lost. So I've finally given up on having an end-all-be-all server. What I want is stability. I would only use WHS for a media server with very few drives.

So I've purchased parts for a new unraid server.

NORCO RPC-4020 4U Rackmount Server Case - Retail
Patriot Xporter XT 4GB Flash Drive - (2)
Intel Celeron 430 Conroe-L 1.8GHz LGA 775 35W Single-Core Processor
GIGABYTE GA-G31M-S2L Intel G31 Micro ATX Motherboard
G.SKILL 2GB (2 x 1GB) 240-Pin DDR2 SDRAM DDR2 800
Seagate Barracuda 7200.11 1.5TB 7200 RPM SATA Hard Drive - (2)
Unraid Server Pro Licenses - (2)
Supermicro SAT2 MV8 - (2) - Existing
Mixture of 400GB - 750GB drives - (19) - Existing - Will pick the newest and largest drives to get up to a total of 16 for unraid.
Corsair TX750W - Existing
Unraid

This build should be power efficient and offer a stable platform to serve my media.

The disk management add-in can build a wireframe so you know exactly what drive is in what slot. Of course, you tell it what is where; no OS is going to do that for you, since it can't know what bay of your computer a drive is installed in. unRAID won't tell you that either. But with this add-in you will know exactly what drive in what bay has failed.
http://forum.wegotserved.co.uk/index...s&full=1&id=64

There is a duplication add-in - I can't remember what it's called - but it shows you what data resides on what drive. Everything.
But if you lose a drive, it doesn't matter what was on it if you have duplication enabled.
Edit - found it: http://forum.wegotserved.co.uk/index...ds&showfile=27

I don't know about instability; my server is rock solid. It's been up since the last time the power went out, and I don't even remember when that was. Even my Vista x64 desktop never crashes, never locks up.

And you also have the ability to back up the entire server, either to internal drives that are not part of the storage pool or to external drive(s), so should your entire server die - should all the drives fail for whatever reason - you can still have a backup. My server is backed up to an external.
And you can choose from several online backup services with plugins for the WHS console. There is no replacement for offsite backups.

HTPC: Intel e6300 2.8ghz, Intel DG45ID, 2gb DDR2, Radeon 5570, MCE IR receiver, Yamaha RX-V663 receiver via HDMI, panasonic ax100u, 145" S-I-L-V-E-R painted screen, 2x Roku 3's, chromecast, Amazon Fire TV, Vizio M602i-B3
archer75 is offline  
post #52 of 7891 Old 09-30-2008, 03:34 PM - Thread Starter
Advanced Member
 
 
Join Date: Feb 2006
Posts: 771
Quote:
Originally Posted by archer75 View Post

It's not a problem. I have my two Promise cards in PCI slots. You only need what, 4MB/s, for HD video? I can transfer from my desktop to the server, stream HD to my HTPC, and run a backup from the server to my external drive, all without a hitch. Yes, the file transfers and backup will slow down, but it doesn't affect the video playback.

Prior to WHS PP1, all writes went to the landing drive and then got transferred to their final destination. Now all writes go directly to the drive they are intended for.

Being a server, it doesn't use a sound card. I do have an old Voodoo 3 in an AGP slot for video when I need it, but the only PCI cards are my two Promise cards.

Yep - that "landing drive" methodology of WHS's drive pooling service was the absolute dealbreaker for me, after I couldn't understand why I/O performance was so horrible trying to copy data to the pool; then I read the whitepapers on the internals of the pooling service and blech - utterly ridiculous. At least they changed it in PP1; not sure why it took them so long.

Because of that and other drawbacks at the time I evaluated it, I've stayed away from it, but since more and more people I respect are saying good things about it, it's time for another go in my next build. However, before WHS gets any final stamp of approval I'm definitely going to test/simulate various drive-failure scenarios - some of the people with WHS servers and lots of drives don't seem to have tested this when I've asked.

Finally, the progress in third-party plugins for WHS makes it all the more promising. Hopefully Microsoft continues to improve and update WHS.
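To picture why the old landing-drive design hurt so much, here's a toy model (the drive letters and file sizes are made up for illustration; this is a simplification of the pooling service's behavior, not its actual code):

```python
# Simplified model of WHS drive-pool writes, before and after Power Pack 1.
# Pre-PP1, every file hits the landing drive and is migrated later, so the
# landing drive carries roughly double I/O; post-PP1, writes go straight
# to their destination drive.

def landing_drive_io(files, landing="D:", dest="E:"):
    """Pre-PP1: write to landing drive, read back, write to destination."""
    io = {landing: 0, dest: 0}
    for size in files:
        io[landing] += size  # initial write lands here
        io[landing] += size  # read back during migration
        io[dest] += size     # final write to destination
    return io

def direct_io(files, landing="D:", dest="E:"):
    """Post-PP1: writes go directly to the destination drive."""
    io = {landing: 0, dest: 0}
    for size in files:
        io[dest] += size
    return io

files = [4, 8, 2]  # GB written to the pool
print(landing_drive_io(files))  # landing drive handles 28 GB of traffic
print(direct_io(files))         # landing drive handles none
```

With three files totaling 14GB, the old scheme pushes 28GB of traffic through the one landing drive, which is why copies to the pool felt so slow.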
odditory is offline  
post #53 of 7891 Old 09-30-2008, 03:46 PM
Senior Member
 
havix's Avatar
 
Join Date: Nov 2001
Location: St. Paul, MN
Posts: 264
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by archer75 View Post

The Disk Management add-on can build a wireframe so you know exactly what drive is in what slot. Of course, you tell it what is where; no OS is going to do that for you, since it can't know which bay of your computer a drive is installed in. unRAID won't tell you that either. But with this add-on you will know exactly which drive in which bay has failed.
http://forum.wegotserved.co.uk/index...s&full=1&id=64

There is a duplication add-on, can't remember what it's called, that shows you what data resides on what drive. Everything. But if you lose a drive, it doesn't matter what was on it as long as you have duplication enabled.
Edit - found it: http://forum.wegotserved.co.uk/index...ds&showfile=27

I don't know about instability. My server is rock solid; it's been up since the last time the power went out, and I don't even remember when that was. Even my Vista x64 desktop never crashes or locks up.

You also have the ability to back up the entire server, either to internal drives that are not part of the storage pool or to external drive(s). So even if your entire server dies, or all the drives fail for whatever reason, you still have a backup. My server is backed up to an external drive.
And you can choose from several online backup services with plugins for the WHS console. There is no replacement for offsite backups.

Yes, I've used this and found it quite a bit more informative than the standard disk management tools. The issue I'm referring to is the logs in the event viewer: none of the disk errors I was seeing mentioned the volume's ID in the error message. I assume this is due to how WHS handles the disks, since I use Windows 2003 day in and day out at work and there the drives are identified in the error messages for similar issues. Disk Management does some SMART checking, but regular OS disk errors written to the event viewer aren't detected. Also, there isn't any intelligent removal of failing drives, so your drives will continue to be read from and written to until they utterly fail. That is not what I call a "reliable," set-it-and-forget-it solution for my data.

Second, I had heard about this duplication plugin but never fully looked into it. Does it remember what's on all the drives even if a drive can no longer be accessed? Does it catalog non-duplicated shares as well? Also, saying "well, it can be duplicated" is just silly; I might as well install a free sync agent and keep a separate copy of everything I own. I have 7TB of data right now, and I'm certainly not going to build a 14+ TB array just to protect myself from a drive failure. unRAID makes much more sense.

So now I'll be running unRAID. It will boot off a jump drive, with its sole purpose being to serve data on my network. I've reduced the chance of an OS drive failure, and I'll also have a spare drive that should take a few minutes to switch over to in case of an issue. From what I understand, unRAID will automatically disable writing to a suspect drive, so I can withstand a drive failure without losing any data. I guess I just want more of an appliance rather than something I'm still managing; a large WHS setup is still a chore and requires me to keep a close eye on it. unRAID will let me finally enjoy my nights away from the office instead of doing more computer work at home. I'll just end by saying I'm sure WHS is fine for smaller installs, but the more drives you're dealing with, and the older they are, the more chance you have of a failure. I would not trust my current config to WHS.
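For what it's worth, the capacity math is easy to check. A quick sketch (the drive sizes are hypothetical, picked only to total the 7TB above):

```python
# Raw capacity needed to protect 7 TB of data under the two schemes
# discussed here:
# - WHS duplication keeps a second full copy of every file (2x raw).
# - unRAID-style single parity adds one drive at least as large as the
#   biggest data drive.

def duplication_raw(data_tb):
    """WHS duplication: every terabyte is stored twice."""
    return 2 * data_tb

def single_parity_raw(drive_sizes_tb):
    """Single parity: data drives plus one parity drive sized to the largest."""
    return sum(drive_sizes_tb) + max(drive_sizes_tb)

print(duplication_raw(7))                  # 14 TB raw to duplicate 7 TB
print(single_parity_raw([1, 1, 1, 2, 2]))  # 7 TB data + one 2 TB parity = 9 TB raw
```

Same 7TB of protected data, 14TB raw versus 9TB raw, which is exactly the trade-off being argued here.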
havix is offline  
post #54 of 7891 Old 09-30-2008, 04:45 PM
Member
 
diet butcher's Avatar
 
Join Date: Sep 2007
Posts: 50
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by havix View Post

Yes I've used this and found that it was quite a bit more informative then the standard disk management tools. This issue I'm referring to are the logs in the event viewer. All the disk errors I was seeing wouldn't mention the volume's id in the error message. I would assume this is due to how WHS is handling the disks as I used Windows 2003 day in day out at work and the drives are identified on similar issues in the error messages. Disk management does some SMART checking but regular OS disk errors that are written to the event view aren't detected. Also there isn't any intelligent removal of failing drives. So your drives will continue to be written and read to until the drive utterly fails. This is not what I call a "reliable" set it and forget it solution to my data.

Second I had heard about this duplication plugin but I never fully looked into it's solution. Does this remember what's on all the drives even if the drive can no longer be accessed? So this catalogs non-duplicated shares as well? Also, to say well it can be duplicated that's just silly. I might as well install a free sync agent and keep a separate copy of everything I own. I have 7TB right now of data and I'm certainly not going to have a 14+ TB array just because I want to protect myself from a drive failure. Unraid makes much more sense.

So now I'll be running Unraid. I will be running off a jump drive with it's sole purpose to serve data on my network. I've reduced the chance of an OS drive failure and I'll also have a spare drive that should take a few minutes to switch over to in case of an issue. Unraid will automatically disable writing to a suspect drive from what I understand. I can withstand a drive failure and not lose any data. I guess I just want more of an appliance rather then something I'm still managing. A large WHS setup is still a chore and requires me to keep a close eye on it. Unraid will allow me to finally enjoy my nights away from the office instead of doing more computer work at home. I'll just end by saying, I'm sure it's fine for smaller installs but the more drives you're dealing with and the older they are the more chance you have for failure. I would not trust my current config with WHS.

Have you thought about WHS without duplication + FlexRaid for parity protection?
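For anyone unfamiliar with how single-parity schemes like FlexRaid and unRAID recover a failed drive, the core idea is just XOR across equal-sized data drives. A toy demonstration (the byte strings stand in for whole disks; this shows the concept, not either product's actual code):

```python
# Single-parity protection in miniature: the parity "drive" is the XOR of
# all data "drives", so any one lost drive can be rebuilt by XOR-ing the
# survivors with the parity.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, column by column."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three tiny "drives" of equal size, plus a computed parity drive.
drives = [b"\x01\x02\x03", b"\x10\x20\x30", b"\x0a\x0b\x0c"]
parity = xor_blocks(drives)

# Drive 1 fails; rebuild it from the surviving drives plus parity.
survivors = [drives[0], drives[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == drives[1]
print("rebuilt drive 1:", rebuilt.hex())
```

This is why parity only needs one extra drive instead of doubling everything, and also why it can only survive one failure at a time.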
diet butcher is offline  
post #55 of 7891 Old 09-30-2008, 04:52 PM
AVS Special Member
 
MiBz's Avatar
 
Join Date: May 2005
Posts: 1,093
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Drives are cheap. Your time to re-rip, catalogue, and redo artwork isn't.
Don't rely on WHS duplication to protect your data.

Back it up onto an external array or a separate system. This can easily be automated; the backup system can sleep and wake (via WOL) only when a backup needs to take place.
MiBz is offline  
post #56 of 7891 Old 09-30-2008, 05:10 PM
Member
 
alspoll's Avatar
 
Join Date: Mar 2005
Posts: 69
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Quote:
Originally Posted by kapone View Post

Agreed, but with a bit of searching, you can find them. The ASUS DSBV-D is one such example. Dual 771, with 3 PCI-X slots (and a couple of PCI-e to boot)

Pair that with a single 40w dual core 5148 or something, and you'll be flying with power savings to boot.

Which motherboard would you recommend, the ASUS or the Supermicro X7DWE? With Ciprico offering a PCI-X version of its RAID card, both boards become viable options. Is there any concern that PCI-X slots would not be readily available in the future if the motherboard dies?

Also, one question regarding the Ciprico cards: they say they support 32 drives. Is that for a single array, or for all drives attached to the system? With the above motherboards, the drive count can be greater than the 32 specified.

Sorry to hijack, but I hope this helps compare the two technologies and the long-term considerations.

TIA

AL
alspoll is online now  
post #57 of 7891 Old 09-30-2008, 05:20 PM
AVS Special Member
 
kapone's Avatar
 
Join Date: Jan 2003
Posts: 4,429
Mentioned: 7 Post(s)
Tagged: 0 Thread(s)
Quoted: 83 Post(s)
Liked: 126
Quote:
Originally Posted by alspoll View Post

Which motherboard would you recommend, the ASUS or the Supermicro X7DWE? With Ciprico offering a PCI-X version of its RAID card, both boards become viable options. Is there any concern that PCI-X slots would not be readily available in the future if the motherboard dies?

Also, one question regarding the Ciprico cards: they say they support 32 drives. Is that for a single array, or for all drives attached to the system? With the above motherboards, the drive count can be greater than the 32 specified.

Sorry to hijack, but I hope this helps compare the two technologies and the long-term considerations.

TIA

AL

Can't really answer that. Technically, the X7DWE is the better board since it's based on the 5400B chipset and supports a 1600MHz FSB, but it is also a lot more expensive.

It really boils down to individual requirements, both are excellent boards.
kapone is offline  
post #58 of 7891 Old 09-30-2008, 06:01 PM
AVS Special Member
 
MikeSM's Avatar
 
Join Date: Jan 2002
Posts: 2,906
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
I think the better deal is something like the Tyan S5211G2NR or the Supermicro MBD-X7SBA: Socket 775 CPU, normal DDR2 RAM, and 2 PCI-X slots as well, plus a couple of Intel GbE ports. You can find them for about two hundred dollars, and you get 6 onboard SATA ports to boot. Add two of the 8-port SATA PCI-X cards and you have 22 SATA ports, which fills up a Norco 4020 well (with one SATA port for a system disk and the other for a slim optical drive). And the Tyan has a couple of PCI-E x16 slots to boot. They are both P35-based, with ICH9R SATA ports, so the onboard ports are very fast and well supported in DOS (er.. Windows) and Linux.

The Socket 775 CPUs are much cheaper than the Socket 771 models for the same performance, no expensive FB-DIMMs are required, and these boards handle the latest 45nm CPUs too. Alas, no overclocking capability...

PS: And the Tyan comes with a cheap onboard video controller too. :-)

I think you guys going the Socket 771 route with FB-DIMMs are really throwing money away.
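A quick sanity check on the port math above (counts taken straight from the post):

```python
# SATA port arithmetic for the Norco RPC-4020 build described above.
onboard = 6          # onboard SATA ports on the board
addin = 2 * 8        # two 8-port SATA PCI-X cards
total = onboard + addin
print(total)         # 22 ports total

norco_bays = 20      # hot-swap bays in the RPC-4020
system_disk = 1      # one port for a system disk
slim_optical = 1     # one port for a slim optical drive
assert total == norco_bays + system_disk + slim_optical
```

So the 22 ports cover every bay plus the system disk and optical drive with nothing wasted.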
MikeSM is offline  
post #59 of 7891 Old 09-30-2008, 06:21 PM
Senior Member
 
aaomember's Avatar
 
Join Date: Aug 2007
Posts: 274
Mentioned: 0 Post(s)
Tagged: 0 Thread(s)
Quoted: 0 Post(s)
Liked: 10
Does "max LAN speed: dual 10/100/1000" mean that somehow the two LAN ports can be bridged together to get 2000Mbps?
aaomember is offline  
post #60 of 7891 Old 09-30-2008, 06:26 PM
AVS Special Member
 
kapone's Avatar
 
Join Date: Jan 2003
Posts: 4,429
Mentioned: 7 Post(s)
Tagged: 0 Thread(s)
Quoted: 83 Post(s)
Liked: 126
Quote:
Originally Posted by aaomember View Post

Does "max LAN speed: dual 10/100/1000" mean that somehow the two LAN ports can be bridged together to get 2000Mbps?

Yes, it can be done, e.g. with Link Aggregation (aka 802.3ad). The "speed" will still remain 1Gbps, but the "bandwidth" to the server goes up to 2Gbps. So you could have, for example, two workstations reading from and writing to the server at full gigabit speed without any degradation.

For Link Aggregation to work, your switch has to support it, the NICs themselves have to support it, and the drivers have to support it.

Mine's set up that way.
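The speed-vs-bandwidth distinction comes from the way 802.3ad assigns each flow to a single physical link. A toy model (the hash function and names here are made up for illustration; real switches hash MAC/IP/port fields):

```python
# Toy model of 802.3ad link aggregation: each flow is hashed onto ONE
# physical link, so a single transfer tops out at one link's speed, while
# separate flows can use different links in parallel.

LINK_SPEED_GBPS = 1
NUM_LINKS = 2

def link_for_flow(src, dst):
    """Deterministic toy per-flow hash (stand-in for the real hash policy)."""
    return (sum(src.encode()) + sum(dst.encode())) % NUM_LINKS

def max_throughput(flows):
    """Peak aggregate throughput: each link carrying traffic adds 1 Gbps."""
    busy_links = {link_for_flow(s, d) for s, d in flows}
    return len(busy_links) * LINK_SPEED_GBPS

# One workstation talking to the server: capped at a single link's 1 Gbps.
print(max_throughput([("ws1", "server")]))
# Two workstations whose flows hash to different links can total 2 Gbps.
print(max_throughput([("ws1", "server"), ("ws2", "server")]))
```

Note the caveat the toy model makes visible: two flows only reach 2Gbps if the hash spreads them across both links, which is also why a single large file copy never goes faster than one link.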

kapone is offline  