
Registered · 1,244 Posts · Discussion Starter · #1
My current media server (800MHz P3 on an Asus CUSL2 w/ 512MB RAM) is running out of steam with the 50,000 or so songs I have on it, which I access through Media Center 9.


Since I have a 3Ware 64-bit PCI RAID controller in it, I'd like to step up to a P4 motherboard that supports the full 64-bit bus. The problem is that I can't seem to locate such a motherboard.


I've been looking at server motherboards that have this feature, but they are all dual Xeon based.


If I were to spring for a dual Xeon motherboard, like the Intel SE7505VB2, and drop a pair of 2.4GHz 533MHz FSB Xeons onto it, how would the performance under XP Pro running Media Center 9 compare to a 3.2GHz P4?


Could I dedicate one of the CPUs to run MC9 and have the other one deal with all the I/O and other system functions?


My goal is to end up with a system in which MC9 (and therefore Mario's lobby suite) will be more or less instantaneous, even with 100,000 tracks in the MC9 database.


Other than running MC9, the server will also serve up ripped DVDs to various media clients in the house. This is a low CPU task that the current 800MHz P3 handles just fine, but I can't stand the delay in MC9 (several seconds) when jumping between genres/views, etc when browsing my music collection.


I don't mind spending the $$$ on a dual Xeon setup if it will blow away a single P4 setup, but I'm not sure that my main application, MC9, supports multi-threading, so perhaps a fast P4 is better, even if that means running my RAID controller in 32-bit mode?
 

Registered · 247 Posts
Quote:
Originally posted by pclausen

This is a low CPU task that the current 800MHz P3 handles just fine, but I can't stand the delay in MC9 (several seconds) when jumping between genres/views, etc when browsing my music collection.


I don't mind spending the $$$ on a dual Xeon setup if it will blow away a single P4 setup, but I'm not sure that my main application, MC9, supports multi-threading, so perhaps a fast P4 is better, even if that means running my RAID controller in 32-bit mode?
Is that delay you mention a CPU bottleneck, or is it a disk I/O bottleneck where it must read through GBs of data to reorder them (I'm not familiar with the application, sorry)?


Xeons are nice if you need to do a lot of CPU-intensive things, though if disk I/O is your bottleneck, you'd be better off with a SCSI drive: faster spindle, lower seek times, and higher I/Os than pretty much every IDE counterpart.


64-bit PCI helps for sending huge amounts of data over the PCI bus, but if you aren't saturating it now, buying a new computer with a 64-bit PCI slot and bus won't offer any performance benefits.


$0.02
 

Registered · 1,244 Posts · Discussion Starter · #4
Brian, I'm pretty sure MC9 loads the table of all the tracks into memory. There's no way it scans my RAID array, which contains several hundred gigabytes of WMA and MP3 files, in a couple of seconds. So I just need a fast processor on a motherboard that supports 64-bit PCI. Yes, 64-bit is likely overkill for now, but since my 3ware card is 64-bit, I might as well get a mb that supports it.


WMAman, thanks for the link to Tyan. I think this mb just might be the ticket for me:

http://tyan.com/products/html/trinitygcsl.html


built in gigabit ethernet (Intel 82545)

2 64bit PCI slots (each on their own channel)

3 32bit PCI slots (on a third channel)

built in VGA


Looks to be around $310 or so if you shop around. For about 10 bucks more you can have two gigabit ports!
 

Registered · 288 Posts
Is your 3ware card a 3.3v 66MHz 64-bit PCI card, or is it a 5v-only 33MHz 64-bit card? It makes a difference because almost all of the newer 64-bit slots on Xeon boards are either 3.3v-only 66MHz slots or PCI-X (which is also 3.3v only). These slots are backward compatible and can run at 33MHz if a card that only supports 33MHz operation is installed there. However, the slots are 3.3v only, and that means in order to use a card in those slots it must either be a 3.3v card or be a universal card that supports either 3.3v or 5v signaling. If it supports 5v signaling only, then it won't work in a 3.3v slot.

The way to tell is by how the card is keyed. Keying is accomplished by making the card with a notch in a certain spot on the edge connector. A keyway in the slot then fits into these notches. A card that supports 3.3v signaling has the notch in a different location than a 5v-only card. This prevents anyone from accidentally inserting the wrong card into the wrong slot. Some cards have both the 3.3v and 5v notches cut out. Those are "universal" cards and should work in either 3.3v or 5v slots. Here's a link to a page that discusses this and shows the different keying.

http://sunsolve.sun.com/handbook_pub.../PCI_Info.html


PCI-X slots are also supposed to be backward compatible and capable of running legacy 33MHz PCI cards, in theory. However, they too are 3.3v only, so a 5v-only card cannot be used in those either.
 

Registered · 247 Posts
Sure, if you've got that much stuff already, then it sounds like it is a CPU issue with sorting it all.


That mobo looks like just the thing you need. No need to get a dual CPU mobo if you don't need it. I haven't used Tyan mobos in about 10 months or so, but I did have half a dozen dual P3's of theirs and I loved them.


WRT the 3ware voltage issue, I believe the new ones are 5v cards, as they need 5W over the 5v line and .25W on the -12V line:

http://3ware.com/products/pdf/7506-US.pdf


Not sure which card you have, but I'd assume it would be the same, check your manual though.


I don't know offhand what voltages PCI-X runs at, though if Stefan is right, I'd send Tyan an email and ask how the 3ware would work in there before ordering. Or ask your vendor. This short blurb has a tiny bit of information regarding actual voltage/power needs and signaling/logic levels:

http://www.generalstandards.com/pcibusissues.php


EDIT:


One more link:

http://www.mail-archive.com/chipdir-.../msg01047.html
 

Registered · 551 Posts
pclausen,


Did you try the J. River BBS? When I asked them to stop looking up the actual files because it's too slow, they answered that they don't look up files. I don't know why it's so slow. Anyway, they are still in beta and improving speed these days before they release the new MC9.1.


In addition, if you are using hardware RAID5 under Linux, I suggest software RAID5 instead; it is about 3 times faster in a 32-bit 33MHz configuration.
 

Registered · 1,029 Posts
The voltage keying on PCI slots is for VIO, the I/O voltage, which is either 3.3V or 5V. I believe that all PCI slots are supposed to supply both 3.3V and 5V; however, they only need to support one I/O voltage, either 3.3V or 5V. A 5V slot should still have 3.3V available (although some don't), but it will be limited to 33MHz. Likewise, a 3.3V slot should have 5V available (some may not) and will be able to run at 33MHz or 66MHz.


Mark
 

Registered · 200 Posts
The 64-bit PCI is going to make absolutely no difference in your file sorting/browsing speed. That lookup table will be loaded entirely into RAM -- if it weren't, you'd certainly notice slowdowns of much more than a couple seconds given your massive archive.
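Just to put a rough number on that (purely a back-of-the-envelope sketch; the ~1KB-per-track figure is my assumption, not anything published about MC9):

```python
# Back-of-the-envelope estimate of the in-RAM lookup table size.
# The per-track figure is an assumption, not an MC9 internal number.
tracks = 100_000
bytes_per_track = 1024            # assumed: ~1 KB of tags/paths per track
table_mb = tracks * bytes_per_track / 2**20
print(f"~{table_mb:.0f} MB for {tracks:,} tracks")   # ~98 MB -- fits comfortably in 512 MB of RAM
```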


Regarding dual-procs: Unless MC9 is coded to specifically take advantage of them (a few quick searches show that this is still on the "wish list"), you won't gain much from having multiple processors. Skip the Xeons. All my further advice is under the assumption that MC9 is written and optimized with a single processor in mind. Ask the developers anyway, if you want.


The task of processing a large database table for MC9 is one that will be bound in this case by your CPU and RAM. Given that you're running on an 800MHz P3, there's a good chance that you're rather limited here.


Since you can afford it, get a Hyperthreaded P4, as fast as you can afford, running on a Canterwood motherboard with dual channel PC3200 DDR SDRAM. A 3.2GHz P4 should beat the pants off a couple of Xeons. If you are into it, you could even play around with overclocking a P4 2.4C. With the right RAM, many have successfully run their CPUs at up to 1000MHz FSB. Either way, a shiny new P4, mobo, and RAM should smoke your old P3.


kazushi:
Quote:
In addition, if you are using hardware RAID5 under Linux, I suggest software RAID5 instead; it is about 3 times faster in a 32-bit 33MHz configuration.
That sounds backwards -- got a link handy?
 

Registered · 1,244 Posts · Discussion Starter · #10
The 3ware controller I have is the Escalade 7506-8, an 8-port parallel ATA RAID controller. It's 64-bit, 66MHz. According to their list of compatible motherboards, it works in any of the following PCI slots:


32-bit PCI-33

64-bit PCI-33

64-bit PCI-66

64-bit PCI-X-133


If I want 64-bit PCI and the Canterwood chipset, it looks like I'll have to wait for the Canterwood-ES chipset to get 64-bit PCI support in an 875P based motherboard.


According to this article from xbitlabs:

http://www.xbitlabs.com/news/chipset...319151039.html


I need the Canterwood-ES chipset, which is paired with the ICH-S South Bridge that provides 64-bit PCI. The regular Canterwood chipset comes with the ICH5R, which does not support 64-bit PCI. They say 4Q 2003, so that is right around the corner.
 

Registered · 200 Posts
Are you actually doing anything now or in the near future that actually stresses the bandwidth of 32/33 PCI? Do you realize how much bandwidth there is already?


Let's be conservative and say that you could get 80MB/s (640Mb/s) out of your RAID array. That bandwidth is enough to transfer:


32 average HDTV streams (at 20Mb/s), or

23 high-quality HDTV streams (at 27Mb/s), or

106 average DVDs (at 6Mb/s), or

71 high-quality DVDs (at 9Mb/s), or

2500 average mp3s (at 256kb/s), or

444 uncompressed audio CDs (at 1.441Mb/s)
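If anyone wants to check those numbers, here's the quick arithmetic (the 80MB/s figure and the per-stream bitrates above are the assumptions):

```python
# How many simultaneous streams fit in a conservative 80 MB/s (640 Mb/s) of RAID throughput.
raid_mbit = 80 * 8   # assumed array throughput, converted to Mb/s
rates_mbit = {
    "average HDTV":      20,
    "high-quality HDTV": 27,
    "average DVD":        6,
    "high-quality DVD":   9,
    "average mp3":        0.256,
    "uncompressed CD":    1.441,
}
for name, rate in rates_mbit.items():
    print(f"{int(raid_mbit / rate):5d} x {name} ({rate} Mb/s)")
```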
 

Registered · 1,663 Posts
I've also been looking for an affordable MB with 64/66 PCI slots, since I've had a horrible time with AMD dual CPU boards (I've given up at this point on that platform). The real benefit of the bigger buses is when running, say, gigabit Ethernet AND a 3ware card. Intel's ICH-5 gigabit setup with a 64-bit slot for the 3ware card would be ideal for me and affordable. I find the 7505 boards to be rather costly and am on the fence as to what to do.

-Trouble
 

Registered · 727 Posts
After some research last weekend I bought the Asus PC-DL Deluxe, a dual Xeon 875 motherboard. I was also seriously considering the Supermicro X5DAL-TG2.


One thing to note is that both of these motherboards come with a basic RAID controller. In total, 10 disks can be connected without the need for a dedicated RAID controller.


Another thing to note is that Xeons don't have such a wide choice of quiet processor cooling solutions, not to mention the reduced choice of server-type power supplies.


The Asus board is slightly more friendly in this respect. I run it with 2x 2.8GHz 533 Xeons and it is by far the fastest machine I have owned thus far (the 2nd being an Asus P4C800 with a P4 3.0C GHz).


Regards.
 

Registered · 1,346 Posts
I won't vouch for whether this guy actually needs a dual Xeon PCI-X machine, but I have one I like at work. It's a Supermicro X5DAE. It's a 7505 board that does not have the SCSI onboard. I bought it in January because at the time it was the best "inexpensive" board that had 533FSB for Xeon and >4GB RAM possible. This is a big board, so watch out what case you put it in. Supermicro also makes a nice case for this board that has space for 7 IDE drives in the front with dedicated drive cooling. It can be tower or 4U rackmount. A perfect match for a 8 channel 3Ware card.
 

Registered · 1,244 Posts · Discussion Starter · #15
APRanger, you're correct that 64-bit PCI is likely overkill, but since my 3ware controller is already 64-bit, I'd rather pay a little extra to get a mb that will support it.


I rip about 10 DVDs at a time on my two workstations and then dump the data onto my media server. I have a switched 100Mbps network, and between the two machines, the network utilization on the server goes to 96% and CPU usage is 75%. While the transfer is taking place, MC9 crawls and is barely able to play the current playlist, let alone let you browse the collection looking for new tracks to play. This goes on for several hours, as that is how long it takes to transfer 100 gigs (50 per machine).
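The transfer time roughly checks out, too (assuming ~11MB/s of real-world throughput on a saturated 100Mbps link, which is just my guess):

```python
# Rough sanity check on the transfer time (the ~11 MB/s usable figure is an assumption).
total_mb = 100 * 1024            # 100 GB total, 50 GB from each workstation
usable_mb_per_s = 11             # assumed real-world throughput of saturated Fast Ethernet
hours = total_mb / usable_mb_per_s / 3600
print(f"~{hours:.1f} hours")     # comes out to roughly 2.5-3 hours
```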


I want a machine that can handle this without breaking a sweat, even if my two workstations are dumping DVD rips over dedicated 1-gig Ethernet connections at the same time. I already have dual Cat6 cables run from my office, where the two workstations are located, down to the server room.


I'd prefer an ATX form factor so that the board will fit in my existing 4U rackmount case that has 8 5.25 front bays, each containing a hot swap module with a 300GB drive.


However, I'm willing to step up to a larger (must be rackmount) enclosure if need be. If I go that route, I'd want one with 20 exposed 5.25 bays so that I can move everything over and be prepared to add a 12-port 3ware controller and another 12 300GB drives (once 3ware comes out with a firmware upgrade that overcomes the 2TB per RAID/3TB per controller limitation).


I have about 1000 DVDs and 4000 CDs, and my goal is to put everything onto a single media server (lossless) with room to spare for timeshifting TV shows (high definition and regular definition) as well as to accommodate future DVD and CD purchases.
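Roughing out the space that implies (the per-disc sizes here are just my assumptions):

```python
# Rough space estimate for the whole collection (per-disc sizes are assumptions).
dvds, gb_per_dvd = 1000, 6.0     # assumed average size of a ripped DVD
cds, gb_per_cd = 4000, 0.35      # assumed average size of a lossless CD rip
total_tb = (dvds * gb_per_dvd + cds * gb_per_cd) / 1000
print(f"~{total_tb:.1f} TB before any timeshifted TV")   # on the order of 7-8 TB
```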


Branxx, are you saying that a dual 2.8GHz Xeon will run circles around a 3.2GHz P4 with the 875P chipset and 800FSB RAM?
 

Registered · 247 Posts
Quote:
Originally posted by APranger

Are you actually doing anything now or in the near future that actually stresses the bandwidth of 32/33 PCI? Do you realize how much bandwidth there is already?


Let's be conservative and say that you could get 80MB/s (640Mb/s) out of your RAID array. That bandwidth is enough to transfer:


32 average HDTV streams (at 20Mb/s), or

23 high-quality HDTV streams (at 27Mb/s), or

106 average DVDs (at 6Mb/s), or

71 high-quality DVDs (at 9Mb/s), or

2500 average mp3s (at 256kb/s), or

444 uncompressed audio CDs (at 1.441Mb/s)
I do agree that PCI bus saturation is rare, especially for home users.


But a single 32-bit 33MHz PCI bus is only capable of handling approx 133MB/s (32 bits x 33MHz / 8), and if there are 4 slots sharing a single bus, all of them must compete, plus whatever onboard devices on the motherboard are sharing it (depends on the architecture). Uncompressed HDTV is 1.5Gb/s (187.5MB/s), while (as you said) MPEG-2 HDTV runs about 22Mb/s (2.75MB/s). So for just moving files around, sure, 32-bit 33MHz PCI is fine; a 3-disk RAID array will have a very hard time saturating that.
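For reference, the bus math (theoretical peaks, shared by everything on that bus):

```python
# Theoretical peak PCI bandwidth: bus width (bits) x clock (MHz) / 8 = MB/s, shared by the whole bus.
def pci_mb_per_s(bits, mhz):
    return bits * mhz / 8

print(f"32-bit/33MHz PCI:  {pci_mb_per_s(32, 33.33):.0f} MB/s")   # ~133 MB/s
print(f"64-bit/66MHz PCI:  {pci_mb_per_s(64, 66.66):.0f} MB/s")   # ~533 MB/s
print(f"uncompressed HDTV: {1500 / 8:.1f} MB/s")                  # 187.5 MB/s -- more than one 32/33 bus
```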


But if he starts capturing video that isn't compressed by the card, he will need more bandwidth. If he needs to write to that array from a Gb NIC or read from it over a Gb NIC, he might run into issues.


So yes, he is fine for now and probably won't see any benefits of a 64-bit PCI slot. But in the future, if this becomes a TiVo-like device, especially when tossing around large amounts of data in real time, bandwidth will become a concern, so for future purposes a 64-bit PCI slot will be nice, especially with things such as audio, HDTV, and the NIC all on separate buses to keep them from being saturated. For now, though, he certainly won't see any difference.


As you said, if he doesn't have a need for anything in the future, it might not be worth it, but if he is planning on saturating that, it would be nice not to have to buy another mobo in 6 months.


WRT that CPU use, do you have an onboard NIC that has no processor of its own? I have a 100Mbit switched network as well, and I don't experience that issue at all, but I don't use my onboard NIC; I use an Intel Server NIC instead.


WRT threading, a dual processor machine is not twice as fast. Each CPU can handle one thing at a time, though with the advent of SMT (or, in marketing terms, HyperThreading) each CPU can act as 2 CPUs and handle two threads each. If you have one thread that is taking all of the CPU, then adding additional CPUs will not help, as there will not be much for them to do. Sure, they can handle other things, like basic system tasks, which is why dually users love the smoothness of their systems: they may not be twice as fast, but if one CPU is tied up, the system doesn't become unresponsive. They still have to wait for the first CPU to finish crunching, but they don't lose system responsiveness. The second CPU, however, won't be able to help with the first task.

You need (as APranger mentioned) a program that is multi-threaded, in other words, written to take advantage of the second CPU. As he already mentioned, it's on the to-do list, so a second CPU won't help; the first CPU will be dealing with it, no two ways about it.
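A crude way to see the point (this is just a generic Python illustration, nothing to do with MC9 itself): a single CPU-bound task can only ever keep one CPU busy, but work that has been split into independent pieces can be spread across processors.

```python
# Generic illustration (not MC9 code): one CPU-bound task uses one CPU,
# but independent chunks of work can be spread across two processes/CPUs.
import time
from multiprocessing import Pool

def crunch(n):
    # stand-in for a CPU-bound job, e.g. sorting/filtering a big in-memory table
    return sum(i * i for i in range(n))

if __name__ == "__main__":
    work = [5_000_000] * 4

    start = time.time()
    results = [crunch(n) for n in work]        # single process: one CPU does everything
    print(f"single process: {time.time() - start:.1f}s")

    start = time.time()
    with Pool(processes=2) as pool:            # two worker processes: a second CPU can help
        results = pool.map(crunch, work)
    print(f"two processes:  {time.time() - start:.1f}s")
```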


$0.02
 

Registered · 551 Posts
Quote:
Originally posted by pclausen
I rip about 10 DVDs at a time on my two workstations and then dump the data onto my media server. I have a switched 100Mbps network, and between the two machines, the network utilization on the server goes to 96% and CPU usage is 75%. While the transfer is taking place, MC9 crawls and is barely able to play the current playlist, let alone let you browse the collection looking for new tracks to play. This goes on for several hours, as that is how long it takes to transfer 100 gigs (50 per machine).


I want a machine that can handle this without breaking a sweat, even if my two workstations are dumping DVD rips over dedicated 1-gig Ethernet connections at the same time. I already have dual Cat6 cables run from my office, where the two workstations are located, down to the server room.
I think the sources of your problem are:
  • Network bandwidth: Ethernet becomes very inefficient once utilization passes 50%. Try Gigabit Ethernet.
  • Single RAID5: A single RAID5 array doesn't serve 2 writes and 1 read well. If you write small amounts of data, it works fine. However, if you start writing two 50GB sets from 2 machines, it easily breaks a sweat. I think a future media server should have some intelligent algorithm/application to fix this problem, but I haven't heard of anything for now. Try writing from one machine at a time.

However, these are not real solutions, just remedies.


On the other hand, it may be worth trying a dual Xeon with HT. It will give you 4 virtual processors and may work fine for 2 writes and 1 read. I don't know. If it works fine, please let us know. :D
 

Registered · 200 Posts
You have a lot more going on here than you mentioned in your first post. But your more detailed usage description is enlightening, because it gives me a better idea of what you need to improve performance. Suffice to say, the 64/66 slot should be the last of your worries, since it is the processor and network that are killing you.


Number one is getting a P4 on a Canterwood chipset with the CSA for the network speed. CSA moves the network architecture off the PCI bus, freeing it to deal exclusively with the burden of your disk array. Speeding up MC9 seemed to be your #1 concern, and a new HyperThreading P4 will probably blow you away with the speed improvement over your old 800MHz P3.


Number two is getting your backbone onto gigabit to speed up those huge file transfers. Gigabit cards run for $50, and a switch will run you $150-$200 for a simple model.


It will take much more expensive equipment, or many more computers than you have now to get you to the point of NEEDING 64/66 for your disk array. If you are seriously taxing the array with a lot of random access reads/writes, you will dump the performance so far below 133MB/s that it won't matter.


In fact, the only operations I foresee getting any improvement from 64/66 PCI are bootup time and copying all the data from one array to another array on the same PCI bus. I expect that you hardly ever shut down the media server. Likewise, I expect that you are unlikely to have two arrays in the same machine AND copy huge amounts of data between them.


In the end, do whatever you want to do to have the "geeky cool factor." I know it would be nifty just to have a 64/66 PCI RAID setup, even if I wasn't using it. Just as it would be nifty to have a dual Xeon box that cost me $3000. Personally, I'd rather have another 2TB of RAID space.
 

Registered · 1,244 Posts · Discussion Starter · #19
I plan to have a dedicated 1Gbps connection between one of the workstations and the server, possibly both. Both machines will continue to also be connected to my 100Mbps switch, which is how they will each connect to the Internet.


I wonder how to 'route' the local traffic between the server and workstation over the 1Gbps link, and all other traffic over the 'regular' network. Perhaps XP will know to do this automatically when I drag folders from the workstation(s) onto my shared RAID folder on the server?


Btw, the 800MHz P3 is currently running a pretty old 3Com 905B NIC card connected to a 3Com 3300 24 port switch (my network hardware is pretty dated.)


My 933MHz P3 workstation doesn't stress the server too badly (30% network utilization), but when I move .VOBs from my 2.4GHz P4, it goes up to 70% and the server gets really sluggish. Dump data from both of them, and it goes up to the aforementioned 96% and the server becomes useless for any other tasks.


Yeah, I'll try writing from just one at a time; it's just that I like to be able to rip 10 DVDs on each, and after those sessions drop the results from each workstation onto my shared movie archive folder and walk away. That is when I like to have a drink, kick back, and listen to some tunes! As it is right now, I have to put a CD in the regular CD player by the stereo if I want to listen to something once the current playlist on the server runs out. I almost forgot how to do that... :D


So having two RAID5 arrays online will improve dual writes, even if the arrays are on separate controller cards?
 

Registered · 1,244 Posts · Discussion Starter · #20
APRanger, just saw your post after I submitted my other message...


Ok, so you suggest connecting the workstations and the server directly to a simple gigabit switch, and then have a 100Mbps link from this switch to my main 100Mbps switch? I can see how that would solve the potential 'routing' problem I would otherwise have.


I think you almost have me convinced not to worry about getting a motherboard with 64/66 slots, but I definitely want a board with the gigabit port off the PCI bus.


I also want SPDIF on the motherboard, so I think I would have a really hard time finding that on a 64/66 mb.
 