
Registered · 84 Posts · Discussion Starter · #1
I'm in the process of ordering components for my soon-to-be media server and I have a few questions. I'll be using two 3Ware 7506-8s, each with a RAID 5 array of 8 250GB drives. I know the cards are 64-bit cards and are backwards compatible with normal PCI slots. Will I gain any advantage from using a 64-bit slot, or will I do fine with a regular PCI slot? Also, can the 7506-8 run more than one array? As for the server case, is it really that much of an advantage to have hot-swap drive cages?
 

Registered · 74 Posts
While I can't answer your question WRT the performance issue, the cards can handle more than one array. As for the hot-swap cages, they're simply a matter of convenience.
 

Registered · 14 Posts
There are several kinds of PCI busses available today.


They are mainly described by two characteristics:

1) Frequency (MHz)

2) Width (bits)


The standard PCI bus is 32 bits wide (= 4 bytes) and runs at 33 MHz. Doing the math gives a theoretical maximum bandwidth of about 133 MB/s.


As you mentioned, there is also 64-bit PCI. It comes in several flavors. Older server mainboards had a 64-bit/33MHz version that used 5-volt signaling (plain 32-bit/33MHz desktop slots are generally 5 volt as well). Newer 66MHz and faster slots use 3.3-volt signaling only, so the voltage question is mostly an issue if you mix an old mainboard with a newer card.


Newer boards have 66MHz 64 bit PCI slots (like the dual Athlon I'm typing this on). They can also run at 33MHz.


Newer yet is PCI-X which can run at up to 133MHz (still 64 bit).


PCI-X cards should work in any 3.3V PCI slot.


A given motherboard may have multiple PCI buses, each with its own dedicated bandwidth (assuming a properly designed chipset).


You can do the math for the bandwidth numbers.
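If it helps to see it laid out, here's a quick Python sketch of the theoretical peak numbers (the real clocks are 33.33/66.66 MHz, which is where the usual 133 MB/s figure comes from -- actual throughput will always be lower):

```python
# Rough theoretical peak bandwidth for each PCI flavor:
# width (bytes) * clock (MHz) -> MB/s.
buses = {
    "PCI 32-bit / 33 MHz":    (4, 33),
    "PCI 64-bit / 33 MHz":    (8, 33),
    "PCI 64-bit / 66 MHz":    (8, 66),
    "PCI-X 64-bit / 133 MHz": (8, 133),
}

for name, (width_bytes, mhz) in buses.items():
    print(f"{name}: ~{width_bytes * mhz} MB/s peak")
```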


An 8-drive RAID 5 array with modern disks will have very fast read rates; I'm guessing around 200 MB/s or higher. If you have any need for bandwidth approaching that, you will need a 64-bit PCI bus.


I suspect, however, that you will be streaming this content over a LAN, which will limit the available bandwidth to the speed of your network. 100 Mbit is good for ~10 MB/s of real-world throughput. Gigabit (1000 Mbit/s) is good for between 20 and 80 MB/s (it varies wildly based on what I've seen).
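To put rough numbers on that, here's a small Python sketch; the efficiency factors are just guesses based on what I've seen, not measurements:

```python
# Rough usable throughput from link speed, assuming an efficiency factor
# for protocol overhead, CPU limits, etc. The factors are guesses.
def usable_mb_per_s(link_mbit, efficiency):
    return link_mbit / 8 * efficiency

print(usable_mb_per_s(100, 0.8))    # ~10 MB/s for Fast Ethernet
print(usable_mb_per_s(1000, 0.4))   # ~50 MB/s for a middling gigabit setup
```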


Additionally, the bandwidth needs of the media itself are likely small (DVD is roughly 3GB/hour, or about 0.85 MB/s). You should be able to stream several DVD-quality files over a 100 Mbit LAN simultaneously.
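A quick sanity check on that, in Python (the 3 GB/hour figure is approximate):

```python
# How many DVD-quality streams fit on a 100 Mbit LAN?
dvd_mb_per_s = 3 * 1000 / 3600      # ~0.83 MB/s for ~3 GB/hour
lan_mb_per_s = 10                   # conservative real-world Fast Ethernet
print(lan_mb_per_s / dvd_mb_per_s)  # roughly 12 simultaneous streams
```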


You can see that if you are streaming this type of media over a LAN, 64bit PCI is not helpful.


Additionally, some chipsets (like that in my Athlon board) have PCI performance issues, generally related to maximum write speed (capped at ~30MB/s). While RAID 5 arrays are no speed demons at writing (I'd be surprised if yours could maintain more than 50MB/s, though I'm no RAID expert), this is an issue you should research if it concerns you.


With that many drives, cooling and power could be major issues if you don't plan for them. Pull the datasheets on the drives; they may use 10W each at idle, and more on spin-up. All that power has to be both supplied and vented (it all gets turned into heat). Do not neglect cooling, or those IDE drives will likely die quick deaths.
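As a rough illustration, in Python (the per-drive wattages here are placeholders -- use your drive's datasheet numbers):

```python
# Very rough power estimate for the drive cages. Per-drive numbers are
# placeholders, not datasheet values.
drives = 16
idle_w = 10          # per drive at idle (guess)
spinup_w = 25        # per drive during spin-up (guess)
print("idle:   ", drives * idle_w, "W")     # ~160 W, all turned into heat
print("spin-up:", drives * spinup_w, "W")   # ~400 W if they all spin up at once
```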


Take care with the cabling -- eight 80-conductor cables per card could get ugly (and negatively impact airflow/cooling).


In the end, I'd look for a board that is as stable as possible that also meets your bandwidth and cost needs. For reference, I'm using a Dell 1400SC (on sale for $400 when I bought it) that has a P3 1.33GHz, onboard SCSI and two 64 bit PCI busses -- my Gig Ethernet is on one bus and the RAID 1 storage is on the other (it's a generic file server, so the extra bandwidth is helpful occasionally). I chose it for price, ability and stability (it's a Serverworks chipset).


I hope I answered your question.
 

Registered · 84 Posts · Discussion Starter · #4
Thanks for the input! Content will hopefully be streamed over Gigabit LAN, or regular 100 Mbit Ethernet if not. Two more questions, though: how big of a PSU would I need to power all these drives and keep my system stable? And does anyone know of a good place that ships to Canada and sells rackmount cases with at least 16 hot-swappable PATA bays?
 

Registered · 14 Posts
You've got a few choices in PSU. If noise is not an issue, just get a behemoth. Another option is to use a separate PSU just for the hard drives (I don't think you'll find many cases that support this out of the box).


I suggest you get the datasheet for the drive you plan on getting. PATA (and I assume SATA) drives use the +5 and +12 volt rails. Figure out how many amps your drives will draw on each rail at spin-up and at idle; spin-up will be the most taxing.
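Something like this Python sketch, with the amp figures swapped for the ones in your datasheet (the ones below are made up):

```python
# Adding up rail currents at spin-up. The amp figures are placeholders --
# substitute the numbers from your drive's datasheet.
drives = 16
spinup_12v_a = 2.0   # amps on +12V per drive at spin-up (placeholder)
spinup_5v_a  = 0.8   # amps on +5V per drive at spin-up (placeholder)
print("+12V:", drives * spinup_12v_a, "A ->", drives * spinup_12v_a * 12, "W")
print("+5V: ", drives * spinup_5v_a,  "A ->", drives * spinup_5v_a * 5,  "W")
```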


Some RAID cards allow you to control when the drives spin up (an old Mylex SCSI RAID card I still use does this, but my 2 channel 3ware does not). If you can stagger when they spin up (say, 4 drives at a time, 2 per controller), you can minimize the power draw.


If you want to use just one PSU, the necessary size will be a strong function of the rest of the system. A Celeron 300A could likely get by on, say, 180W. A dual Athlon workstation likely needs a minimum of 300-350W. Some video cards can draw significant wattage.


Then you'd need to add the particular requirements for your 16 drives on top of that.
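As a ballpark, something like this (all the numbers are placeholders for your actual components):

```python
# Adding a system baseline to the drive load to get a rough PSU size.
system_w       = 300        # CPU, board, video card, fans (guess)
drive_spinup_w = 16 * 25    # worst case: all 16 drives spinning up at once
headroom       = 1.3        # don't run a PSU at its limit
print((system_w + drive_spinup_w) * headroom, "W")   # ~910 W; staggered spin-up lowers this a lot
```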


Either way, don't skimp. I chose a PC Power and Cooling PSU for my main workstation and I don't regret it.
 