Guide To Building A Media Storage Server - Page 25 - AVS Forum

post #721 of 7891, 10-30-2008, 03:07 PM
lymzy (AVS Special Member):
Quote:
Originally Posted by garycase2001 View Post

As I get it fully loaded and use it for a few weeks my opinion may very well change -- but at the moment it sure looks promising.


What if there are two media front ends accessing the same drive with 25Mbps bitrate movies from unRAID simultaneously?

How about four?

HDPLEX

post #722 of 7891, 10-30-2008, 03:08 PM
Ed Rogers (Member):
+1 on unRAID. After losing an entire RAID 5 (second drive failed while rebuilding the first failure), I wanted to move to a 'partial loss' solution.

unRAID has a bunch of other benefits, but there are other threads for that.

post #723 of 7891, 10-30-2008, 03:27 PM
lifespeed (AVS Special Member):
Quote:
Originally Posted by smburns25 View Post

I currently have a Vista Home Premium HTPC
The HTPC runs directly to my receiver over HDMI (including sound as the 4850 does a nice job of passing 5.1 over HDMI) and the receiver runs directly to my pj.

What I am looking to do is leave my system as-is, but offload the 3 drives, plus possibly 5 or more into an external enclosure. I have been kicking around RAID 5 or 6 via a Highpoint 2522 (or 2322 as I cannot recall the exact number) card, but I am not sure if that is the best card nor do I know what type of case to use. The primary reason for this card is the PCIe x1 slot as I have three open.

I would prefer to have a rack-mount case with either 8, 12 or 16 bays, but I do not want to set up a server. I do not need another PC/Server, but the case I have will not accommodate adding more drives.

So, any suggestions on a good case, RAID card, etc to support this?

Cost is a small factor, but I can absorb a few thousand. I would prefer to expand the drives as I need them rather than putting out the money right now for 10 x 1 Tb drives (more or less).

Any help or comments are appreciated.

I'll provide an opposing viewpoint to Garycase2001: I went through this exercise with similar thoughts to yours. I did not want another 'unnecessary' computer. I built a 5-drive HTPC/server using 1 TB Seagate enterprise drives in an Omaura HTPC case with a Highpoint 8-port RAID controller.

It works great. The performance of the RAID5 array is fantastic, something that is lacking with the unRAID approach. The performance does make for easy manipulation or transcoding of large files. My HTPC and storage functions are combined in one PC that is powered down when unused. I don't have to turn two PC's on and off to watch movies, etc.

The one thing I did learn was that it makes the most sense to buy a server case, not an HTPC case. I don't have any more room for internal hard drives. I'm not out of storage space yet, but I am sure that will happen eventually. At that point I will move the hardware to a roomier case. The Norco case is attractive but lacks a full-height optical drive slot for my BD burner so I'll probably find something else. One should, of course, pay attention to fan selection to keep the noise to a minimum. I don't find the HDD noise to be an issue at all. Of course, it is really nice to relocate the whole works to a closet, which I am in the process of doing.

Just wanted to add a vote for the combo HTPC/server approach!

Lifespeed

post #724 of 7891, 10-30-2008, 04:23 PM
garycase2001 (Advanced Member):
Quote:
Originally Posted by Ed Rogers View Post

+1 on unRAID. After losing an entire RAID 5 (second drive failed while rebuilding the first failure), I wanted to move to a 'partial loss' solution.

unRAID has a bunch of other benefits, but there are other threads for that.

Agree ... the partial loss is a HUGE benefit => especially with large arrays. I plan to use 9 1.5TB drives initially (once I move beyond my current 3-drive test, which is just using 500GB drives I already had), and eventually grow it to the max 16 drives. If I built a RAID-5 array I'd feel compelled to build a 2nd array for backups ... but with a solution that has much less catastrophic consequences I'm willing to just do a reload from original media if needed (and save a bunch of $$).

Another HUGE benefit of UnRAID is the ability to mix drive sizes --> if 2TB or 3TB drives become available, I can simply start using those without any impact on all the 1.5TB drives already in the system.

Other than the lower performance, I can't think of any notable disadvantage. And the performance is PLENTY for media streaming around the house

post #725 of 7891, 10-30-2008, 04:39 PM
lifespeed (AVS Special Member):
Quote:
Originally Posted by Ed Rogers View Post

+1 on unRAID. After losing an entire RAID 5 (second drive failed while rebuilding the first failure), I wanted to move to a 'partial loss' solution.

Could you provide any more details on your failure? What RAID card and drives were involved? Was the second drive failure during rebuild a 'complete' drive failure or an unrecoverable sector that had a bit error (bit rot) that the RAID controller did not handle gracefully?

Lifespeed

post #726 of 7891, 10-30-2008, 04:50 PM
dj9 (Advanced Member):
I've set up a software RAID6 on Linux with this configuration. I'm getting write speeds (copying from another array on the same system) near 95 mbytes/sec for large files. Here are the details:

System information:
  • Intel ICH10R/G45
  • E4500 (2.2GHz) CPU, 800MHz FSB
  • 1GB RAM running at 800MHz, single channel
  • CentOS 5.2 x86_64, kernel 2.6.18-92.1.13.el5

Array configuration:
  • Five Western Digital Caviar Green 1.0 TB hard drives (WD10EACS), with idle spindown disabled on each
  • All drives connected to a Silicon Image 4726 SATA port multiplier
  • Port multiplier connected to a Silicon Image 3124 controller, connected to the system using PCI-e x8
  • Linux RAID-6 with 256k chunk size (default 64k) and an intent bitmap on another drive, for an effective array size of 2794.53 GiB / 3000.61 GB.
  • Each device in the array set to 512k readahead (default 256k) and deadline scheduler
  • Array set to 6144k readahead (default 3072k) and stripe cache of size 4096
  • Array formatted as ext3 with stride=64 and mounted with noatime,acl. The default journaling mode (ordered) is in use.
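
(For anyone wanting to replicate the tuning above, here is a rough sketch of the corresponding commands; the device names /dev/sd[b-f] and /dev/md0 are placeholders rather than the actual devices in this box, and the mkfs line is commented out because it is destructive.)

Code:
#!/usr/bin/env python3
# Rough sketch of the per-device and array tuning listed above.
# Device names are placeholders -- adjust for your own system. Run as root.
import subprocess

MEMBER_DISKS = ["sdb", "sdc", "sdd", "sde", "sdf"]   # assumed md member drives
ARRAY = "md0"                                        # assumed md device name

for disk in MEMBER_DISKS:
    # 512k readahead per member (blockdev counts 512-byte sectors: 1024 = 512 KiB)
    subprocess.run(["blockdev", "--setra", "1024", f"/dev/{disk}"], check=True)
    # deadline elevator per member
    with open(f"/sys/block/{disk}/queue/scheduler", "w") as f:
        f.write("deadline")

# 6144k readahead on the array itself (12288 sectors) and a larger stripe cache
subprocess.run(["blockdev", "--setra", "12288", f"/dev/{ARRAY}"], check=True)
with open(f"/sys/block/{ARRAY}/md/stripe_cache_size", "w") as f:
    f.write("4096")

# Format step (destructive, so commented out): ext3 told about the RAID layout.
# subprocess.run(["mkfs.ext3", "-E", "stride=64", f"/dev/{ARRAY}"], check=True)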

I haven't done any formal testing or measuring, but I think journaling imposes a significant performance hit. Moving the ext3 journal to an external device might help; for every 37 MB or so written to each drive (while copying large files), 1 MB is read. When copying a folder full of 8 MB files, I get write speeds around 60 MB/sec with iowait close to 30%. When copying large files, I get speeds around 95 MB/sec with iowait usually below 10% (and cp taking a large part of the CPU).
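
(If anyone wants to try the external-journal idea, a rough, untested sketch with placeholder device names; the filesystem has to be unmounted first.)

Code:
#!/usr/bin/env python3
# Sketch: move an ext3 journal to a dedicated device. Placeholder names only;
# the filesystem on FS must be unmounted before doing this. Run as root.
import subprocess

FS = "/dev/md0"        # assumed array device holding the ext3 filesystem
JOURNAL = "/dev/sdg1"  # assumed partition on a separate, non-array drive

# initialise the partition as an external journal (block size must match the fs)
subprocess.run(["mke2fs", "-O", "journal_dev", "-b", "4096", JOURNAL], check=True)
# drop the internal journal, then re-attach it on the external device
subprocess.run(["tune2fs", "-O", "^has_journal", FS], check=True)
subprocess.run(["tune2fs", "-j", "-J", f"device={JOURNAL}", FS], check=True)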

I don't know of any good measurement/profiling tools on CentOS; I have used LatencyTOP in the past and found that a significant amount of time was spent in ext3 journaling.

I would have liked to use VxFS, since it should be both safe (after a power failure) and fast, but it's not supported on Linux MD. I'm not sure if this means it's untested by Symantec but should work, won't work well, or won't work at all.

post #727 of 7891, 10-30-2008, 05:02 PM
garycase2001 (Advanced Member):
Quote:
Originally Posted by lymzy View Post

What if there are two media front ends accessing the same drive with 25Mbps bitrate movies from unRAID simultaneously?

How about four?

Just depends on what your needs are. Streaming two BD feeds isn't a problem from any modern drive; but in my case that's not an issue anyway, as my DVD collection is exclusively standard DVDs (and I have no plans to update to BD).

My Beyond TV server has 6 tuners, and there's no problem recording 6 things at once (2 HD) to the same disk => even if both my wife and I are watching something else that's recorded on the same disk.

In addition, I can stream 5 DVDs at once from my 1.5TB drive that's loaded with DVDs.

Considering there are only two of us (plus an occasional grandkid or two), I think there's PLENTY of bandwidth in individual drives (especially since I'm only going to use the very-fast 1.5TB Seagates).


As for "worst case" ... I've only got 5 media front-ends [two living spaces; master bedroom; 2 guest bedrooms] => and while I agree that if someone was using ALL of those at the same time AND my wife and I were each watching something on our PC's, AND if all of those things were on the SAME disk in an UnRAID, there would likely be some issues. But that's certainly in the VERY unlikely category

And in some respects an UnRAID is far superior => if you're streaming multiple feeds on a RAID-5 array, those disks are going to be thrashing a good bit as they seek to the various movies. With UnRAID, unless all of the movies are on the same disk, there will be much less head movement.

Bottom line: I think for a home environment, an UnRAID server is just fine

post #728 of 7891, 10-30-2008, 07:16 PM
HappyFunBoater (AVS Special Member):
Quote:
Originally Posted by dj9 View Post

I've set up a software RAID6 on Linux with this configuration. I'm getting write speeds (copying from another array on the same system) near 95 mbytes/sec for large files. [configuration details snipped -- see post #726 above]

Nice. 95MB/s on a software RAID-6 write is pretty fast. When you say it's taking a "large part of the CPU", how much is that? Also, do you have a faster CPU that you can throw at it? I'd love to see how it scales.

post #729 of 7891, 10-30-2008, 09:05 PM
MikeSM (AVS Special Member):
Quote:
Originally Posted by dj9 View Post

I've set up a software RAID6 on Linux with this configuration. I'm getting write speeds (copying from another array on the same system) near 95 mbytes/sec for large files. [configuration details snipped -- see post #726 above]

Wow, pretty fast for ext3. You really should try XFS with it...

Nice performance considering raid6 carries a lot of overhead on I/O and more than raid5 on CPU.

And esp. good given the crappy green power drives. Congrats!

post #730 of 7891, 10-30-2008, 09:15 PM
jagojago (Senior Member):
How are the WD Green drives crappy?

post #731 of 7891, 10-30-2008, 09:23 PM
MikeSM (AVS Special Member):
Quote:
Originally Posted by jagojago View Post

How are the WD Green drives crappy?

They spin at 5400 RPM instead of 7200, so higher seek times. Green <> Performance.

post #732 of 7891, 10-31-2008, 12:43 AM
garycase2001 (Advanced Member):
Quote:
Originally Posted by dj9 View Post

I've set up a software RAID6 on Linux with this configuration. I'm getting write speeds (copying from another array on the same system) near 95 mbytes/sec for large files. [configuration details snipped -- see post #726 above]

What do you need to install to get RAID-6 capability? The Centos docs for v5.2 indicate (on page 295) that you can only select levels 0, 1, or 5. [http://www.centos.org/docs/5/html/5....tion_Guide.pdf ]

The Installation Guide does indicate there is more detail in the Deployment Guide, but the Deployment Guide indicates the same supported levels (see Page 60). [http://www.centos.org/docs/5/html/5....ment_Guide.pdf ]

I presume you've added a Linux package that extends this support -- it'd be nice to know what that is.

post #733 of 7891, 10-31-2008, 07:28 AM
ilovejedd (AVS Special Member):
Quote:
Originally Posted by lymzy View Post

What if there are two media front ends accessing the same drive with 25Mbps bitrate movies from unRAID simultaneously?

How about four?

unRAID can handle that pretty easily so long as your drives can keep up. 25Mbps is just 3.125MB/s. At minimum, a Seagate 1.5TB HDD has maybe 50MB/s read speed. That's enough for 16 simultaneous streams, maybe 13~15 if you count overhead. And that's only if your files are on the same drive. Note that the Seagate 1.5TB drives have average read speeds of 90MB/s, as I recall, with peaks around 120MB/s. Gigabit, at best, can transfer 125MB/s. Divide your files among a couple of drives and you'll likely hit the gigabit barrier. Unless you also invest a bit of cash in network infrastructure, it's unlikely you'll notice the uber-fast performance that a striped RAID array gives you.
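
(Spelling the arithmetic out with the round numbers from above; the ~20% overhead allowance is just a guess.)

Code:
#!/usr/bin/env python3
# Back-of-the-envelope stream counts for 25 Mbps rips, using the figures above.

stream_mb_s = 25 / 8          # 25 Mbps = 3.125 MB/s per stream

sources = [("worst-case single drive", 50),
           ("average Seagate 1.5TB", 90),
           ("gigabit Ethernet (ideal)", 125)]

for label, mb_s in sources:
    raw = mb_s / stream_mb_s
    usable = raw * 0.8        # knock ~20% off for seeks / protocol overhead (guess)
    print(f"{label:26s} {raw:5.1f} streams raw, ~{usable:4.1f} with overhead")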

Write performance, however, is a different scenario altogether. To circumvent slow network writes, you can use a temporary cache drive. The cache drive is not part of the parity-protected array, so you'll have to take a chance that your cache drive doesn't crap out before transferring files to the unRAID array. I think, by default, files on the cache drive are automatically transferred to the array proper around 2 or 3AM every day. Writes to the cache drive are pretty much limited by spindle speed.

post #734 of 7891, 10-31-2008, 07:44 AM
dj9 (Advanced Member):
Quote:
Originally Posted by HappyFunBoater View Post

Nice. 95MB/s on a software RAID-6 write is pretty fast. When you say it's taking a "large part of the CPU", how much is that? Also, do you have a faster CPU that you can throw at it? I'd love to see how it scales.

It's using ~55% system CPU when writing at 95 MB/sec. cp often takes a lot of CPU (I saw it peak at 67% CPU), and I think ext3 is counted as part of that. The md kernel process seems to take 20% on average; I don't know how much of that is iowait, though.

I've made another RAID6 for temporary use with five 400GB Seagate Barracuda 7200.9 drives (ST3400833AS) in the same model enclosure, this time with a 128k stripe size and ext2 (no journaling). I saw at least 115 mbytes/sec writing to those drives using the same tuning as in my earlier post, except for the stripe size. I think the write speed on RAID5 was close to 130 mbytes/sec after plenty of tuning (on ext3 with a 64k stripe size, though.)

I may try using an external ext3 journal later on.

The Seagate drives used to be in a RAID-5; it was always slower than my two five-drive RAID5 arrays using WD 750 GB Caviar drives (both GP and non-GP versions). I don't recall seeing a significant performance difference between the two Caviar models, though.

I do have an E8400 CPU system (3.0GHz) I could try if I have time; performance should be similar if I use a CentOS live CD with the controller in that system.

Quote:
Originally Posted by garycase2001 View Post

What do you need to install to get RAID-6 capability? The Centos docs for v5.2 indicate (on page 295) that you can only select levels 0, 1, or 5. [http://www.centos.org/docs/5/html/5....tion_Guide.pdf ]

RAID-6 is part of Linux's md software RAID. It's configured using mdadm, in the same way as a raid5 array except with a 6 instead of a 5. I don't think it's supported by the Red Hat installer and might not be an officially supported configuration by Red Hat.
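
(For the concrete command, something along these lines -- the device names, chunk size and bitmap location are placeholders modelled on my earlier post, not a tested recipe.)

Code:
#!/usr/bin/env python3
# Sketch: create a 5-drive md RAID-6 with a 256k chunk and an external
# write-intent bitmap. All names are placeholders. Run as root.
import subprocess

subprocess.run([
    "mdadm", "--create", "/dev/md0",
    "--level=6",                   # the only change vs. a raid5 create
    "--raid-devices=5",
    "--chunk=256",                 # chunk size in KiB
    "--bitmap=/boot/md0.bitmap",   # intent bitmap kept on a non-array drive
    "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf",
], check=True)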

Quote:
Originally Posted by MikeSM View Post

Wow, pretty fast for ext3. You really should try XFS with it...

Nice performance considering raid6 carries a lot of overhead on I/O and more than raid5 on CPU.

I prefer ext3 for its integrity and easy upgrade path to ext4. If I can get the RHEL 5.3 beta, I may try ext4 (assuming that RH has backported the latest stable code). However, I think you might need to be a paying customer to try the beta.

post #735 of 7891, 10-31-2008, 09:25 PM
JerryW (AVS Special Member):
Quote:
Originally Posted by dj9 View Post

I think you might need to be a paying customer to try the beta.

It is possible Red Hat won't give it to you if you aren't a paying customer, but legally they cannot stop one of their paying customers from giving the source code (a la CentOS) away to anyone. The GPL forbids adding additional restrictions like NDAs.

Copyright is not property, it is merely a temporary loan from the public domain.

post #736 of 7891, 11-01-2008, 07:54 AM
smburns25 (Member):
Thanks lifespeed and GaryCase2001 for the advice. I think I have landed on either RAID 5 or 6 (cost might be a factor) and I'm going to start my search for a good rack-mount drive cage that I can expand into later. My understanding is that I can start with 5 or so drives and then add additional ones as needed.

Thanks again for the advice. Much appreciated.

I'll post pics and the specs of what I land on.

post #737 of 7891, 11-01-2008, 08:46 AM
dj9 (Advanced Member):
Quote:
Originally Posted by JerryW View Post

It is possible Red Hat won't give it to you if you aren't a paying customer, but legally they cannot stop one of their paying customers from giving the source code (a la CentOS) away to anyone. The GPL forbids adding additional restrictions like NDAs.

I am aware of the GPL, but RHEL is not composed entirely of material licensed under the GPL. It includes many of Red Hat's trademarks, all of which have to be removed in order to freely distribute it.

post #738 of 7891, 11-01-2008, 01:11 PM
seankoshy (Newbie):
Hey Guys,

This forum has been a GREAT source of info ... a couple of quick questions:

Using an adaptec raid card / 8 x 1.5tb seagate / raid 6

Mostly for dvd / bd rips ... what stripe size / allocation unit size should I be using?

Thanks in advance

post #739 of 7891, 11-01-2008, 02:11 PM
HappyFunBoater (AVS Special Member):
Quote:
Originally Posted by seankoshy View Post

Hey Guys,

This forum has been a GREAT source of info ... a couple of quick questions:

Using an adaptec raid card / 8 x 1.5tb seagate / raid 6

Mostly for dvd / bd rips ... what stripe size / allocation unit size should I be using?

Thanks in advance

It really depends on so many variables: controller model number, firmware version, access pattern (long sequential writes, short random writes, etc.), queue depth (number of concurrent host commands), and cache size (to a lesser extent than all the other variables). If you really want to do this right you could determine what the access patterns look like and model them under IOMeter, changing stripe size and other parameters to get the best performance. I would expect rips to look like a bunch of sequential 64KB writes at a queue depth of one.



The default is usually the best choice.
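
(As a crude stand-in for an IOMeter run, a small script that mimics the "sequential 64KB writes, queue depth 1" pattern so you can compare stripe-size settings. The file name and sizes are arbitrary, and buffered I/O makes this only a rough gauge.)

Code:
#!/usr/bin/env python3
# Very rough sequential-write test: 64 KiB writes at queue depth 1, roughly what
# a rip landing on the array looks like. Not a substitute for IOMeter.
import os
import time

TEST_FILE = "stripe_test.bin"   # run this from a directory on the RAID volume
BLOCK = 64 * 1024               # 64 KiB per write
TOTAL = 2 * 1024 ** 3           # 2 GiB total

buf = os.urandom(BLOCK)
start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())        # force the data out of the page cache
elapsed = time.time() - start
print(f"{TOTAL / elapsed / 1024 ** 2:.1f} MB/s sequential write")
os.remove(TEST_FILE)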

post #740 of 7891, 11-02-2008, 01:24 PM
xeonicxpression (Member):
Quote:
Originally Posted by jagojago View Post

So after some testing the read speed sucks! I'm only getting ~5 MB/sec and for writing I get 25-30 MB/sec which I can live with. Any time I want to transfer files off the array it will take forever...

edit: It says the status of the array is "Healthy, Regenerating" which may cause the speed issue, but why would it be regenerating? I haven't changed anything on it, and there is nowhere to see the status or how much longer it has to regenerate for. Anybody familiar with VSF know about this?

Hopping in the way-back machine: Veritas Storage Foundation Basic was discussed 10 pages ago. Did anyone else ever test it? Did it turn out to be total crap? I'm thinking I want to go with software raid and do three raid 5 arrays, of 7, 7, and 6 disks, in the norco 4020. I don't really want to have to buy all 7 drives for the first array right now, so VSFB would be nice. All I really need is 4 hard drives to start with; 4.5tb should hold me for a little bit, although I am kind of worried about using those Seagate 1.5tb drives right now. Since flexraid is effectively dead and I want to use WS2k_ as my OS so I can do things like re-encode HDDVD and BR and other tasks, raid is my only choice.


post #741 of 7891, 11-02-2008, 03:07 PM
xeonicxpression (Member):
So I've been putting together the specs for my proposed server. Here is what I have come up with.

(4) Seagate Barracuda 7200.11 ST31500341AS 1.5TB $719.96
(1) PC Power & Cooling S75QB 750W $109.99 after MIR
(1) NORCO RPC-4020 $289.99
(1) SUPERMICRO MBD-X7SBE $269.99
(1) SUPERMICRO AOC-SAT2-MV8 $94.99
(1) Intel Xeon X3360 Yorkfield 2.83GHz 12MB $349.99
(1) OCZ SLI-Ready 2GB (2 x 1GB) 240-Pin DDR2 SDRAM DDR2 800 $11.99 after MIR

TOTAL: $1846.90

I know the ram seems silly, since it is "SLI-Ready", but it is $11.99 after MIR. Any thoughts on the motherboard? Is there a better / more appropriate choice out there? I chose it because it has PCI-X slots for the Supermicro MV8. I figure I'll start with just one and add more as I go. Is there a reason people use port multipliers instead of just buying another MV8? It's actually cheaper per port to buy another MV8 than buy port multipliers. If you do 2 MV8s and 1 PM to get to 20 ports you end up at 269.93 vs 1 MV8 and 3 PM which is 334.84. I guess that means I would have to do two 6 drive arrays and one 8 drive array then. I chose only 4 hard drives for the time being, but if Veritas Storage Foundation Basic turns out to be no good, then I would have to go with 6 or switch to hardware raid.
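
(Sanity-checking the per-port math; the port-multiplier price below is backed out of the totals quoted, not a listed price.)

Code:
#!/usr/bin/env python3
# Cost-per-port comparison from the post above. The PM price is inferred from
# the quoted totals and may not match any particular product.

MV8 = 94.99   # Supermicro AOC-SAT2-MV8, 8 ports
PM = 79.95    # assumed 5-port SATA port multiplier (each PM turns 1 port into 5)

option_a = 2 * MV8 + 1 * PM   # 16 ports - 1 used + 5 = 20 ports
option_b = 1 * MV8 + 3 * PM   #  8 ports - 3 used + 15 = 20 ports
print(f"2 x MV8 + 1 x PM: ${option_a:.2f} (${option_a / 20:.2f}/port)")
print(f"1 x MV8 + 3 x PM: ${option_b:.2f} (${option_b / 20:.2f}/port)")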

Like I said in the post above, the OS will either be Windows 2k3 or 2k8. I already have legal copies of both. Feel free to pick the system apart. Keep in mind this will be used mainly for serving up dvd's, hddvd's, and eventually bluray. It will also be doing the transcoding of HDDVD and Blurays, most likely. The processor may be overkill, but I would rather go way overkill rather than going the other direction. Besides, I might run some VMs some day, so might as well have the power if I ever need it.


post #742 of 7891, 11-02-2008, 06:16 PM
KnightRT (Senior Member):
This, and the "48 TB Media Server" precursor thread, make for some pretty fantastic reading. Kapone's posts in particular were very informative.

A few quick thoughts of my own on random topics:

The reason to choose RAID-5 over other solutions is speed. There are too many potential failure modes in the storage chain to make it as safe as RAID-1, WHS, or any of a half-dozen alternatives, but it can be blazingly quick.

As a practical matter though, this is really just benchmark porn. My onboard nVidia GigE adapters were only good for 10-12 MB/s over Windows network shares. When I moved to Marvell Yukon controllers, that number jumped to 60 MB/s, which is the best I can get despite iPerf numbers above 950 Mbit/s. I saw higher reported numbers and utilization in Vista, but the actual transfers happened no more quickly. In short, with modern drives approaching 90 MB/s in average sequential benchmarks, it doesn't take much of an array to saturate a GigE network. Really, to make use of these 400 MB/s+ arrays, they ought to be local storage.

I do take issue with those making massive R5 and R6 volumes. The maximum I'd recommend for R5 is 8 drives. For R6, perhaps 6 more. At most. Those with more drives should use nested arrays. The acceleration in the size of drives has outpaced the speed with which most RAID controllers can reconstruct a degraded array. Even if you space out your drive purchases, you're liable to run into at least one drive that'll fail within days of another, and then you lose the entire array. It's been pointed out that a simple block read error usually won't cause modern controllers to abandon the recovery, but there's nothing they can do about a complete mechanical failure.
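
(To illustrate the nested-array idea -- shown with Linux mdadm purely because it's easy to write down; hardware cards expose the same concept as RAID-50/60 in their setup utilities -- all device names below are placeholders.)

Code:
#!/usr/bin/env python3
# Sketch: nested RAID-50 -- two smaller RAID-5 legs striped together with RAID-0,
# so a rebuild only ever involves the drives in one leg. Placeholder names; run as root.
import subprocess

def mdadm(*args):
    subprocess.run(["mdadm", *args], check=True)

LEG1 = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"]
LEG2 = ["/dev/sdg", "/dev/sdh", "/dev/sdi", "/dev/sdj", "/dev/sdk"]

mdadm("--create", "/dev/md1", "--level=5", f"--raid-devices={len(LEG1)}", *LEG1)
mdadm("--create", "/dev/md2", "--level=5", f"--raid-devices={len(LEG2)}", *LEG2)
# stripe the two legs together into the volume you actually format and mount
mdadm("--create", "/dev/md10", "--level=0", "--raid-devices=2", "/dev/md1", "/dev/md2")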

Put another way, RAID is not a backup for itself. Anything important should be on a separate backup system, though that can consist of a second array. I can't emphasize this enough for those who would use RAID-5 to grant themselves the illusion of data security.

My personal setup is a Ciprico RAIDCore BC4852 in my server with four 1 TB WD GP drives. I've been very impressed with it for a few reasons, though less so for others.

- The card is PCI-X. I have only PCI slots. It's backwards compatible, yes, but the real-world throughput of a 32-bit 33 MHz PCI slot is 95 MB/s for drives in RAID-0 and 75 MB/s for drives in RAID-5. The disparity is a byproduct of the fact that unlike other RAID cards, the RAIDCore series use main memory for system cache, so they're transferring quite a lot more data back and forth through the motherboard than a true "hardware" RAID card would.

- Because main memory is the write cache, and I've got 5 GB of it in my server, I effectively have 4 GB of write cache. Writes are very fast, though obviously it's of some import to have the server plugged into a UPS.

- The software interface is stellar. This 4000-series card, the 5000-series, and VST Pro all use the same driver revision. They also all have the same management interface, and it wouldn't surprise me to discover that you could span an array from the 4000-series to a 5000-series. Something to consider if you're one of those fortunate enough to have PCI-X slots on your motherboard.

A question arose earlier why anyone would bother to use VST Pro in place of Intel's Matrix RAID driver. The answer is that it supports every feature you could want except for RAID-6, including OCE and RAID-level migration. It's extremely flexible and stable software. And it's faster by quite a lot at disk writes. Toms Hardware had some benchmarks to that effect in the recent past.

I think that's all for now. Thanks again to those who contributed to this thread.

post #743 of 7891, 11-03-2008, 05:24 PM
alamone (Member):
This might be slightly off topic, but it might be of interest to other people planning to use an Adaptec raid card as local storage. This might also be a good case study in why you want to separate the server from the HTPC.

If anyone else out there has an Adaptec raid card (3 or 5 series) and can do me a favor, can you run the DPC Latency Checker utility and let it run for a few hours or overnight while the PC isn't doing much (e.g. just downloading or idling), and report what your maximum DPC latency value is?

I have a persistent DPC spiking problem that I've tracked down to my Adaptec 51645 card. Basically, on a periodic basis, every 30 minutes or so, I get a 30 ms (or 30,000 microsecond) DPC call originating from the RAID driver, and this is when the system is idle, not doing anything at all. This spike is enough to cause audio dropouts in my system and is driving me nuts. I tried swapping the card to the other 16x PCIe slot to no avail.

I used the xperf utility and the debug symbols from MS to pinpoint the Adaptec card as the culprit.

Otherwise, my DPC latency values are pretty good, usually fluctuating from 20-50us on idle, up to around 200us if playing back audio. I'm using the latest beta mobo BIOS; the initial BIOS had very bad DPC latency issues.

I'd like to figure out if this is because of some incompatibility with my particular motherboard (Gigabyte X48-DQ6), or is inherent to the card, or a driver problem, or what. I've tried both released BIOSes and drivers and they've all exhibited the problem.

Thanks.

post #744 of 7891, 11-04-2008, 08:51 AM
garycase2001 (Advanced Member):
Just noticed that TigerDirect has the Seagate 1.5TB drives on sale today for $149.99 w/free shipping !! I just jumped on that ... there may be a few others who'd like to as well

post #745 of 7891, 11-04-2008, 12:34 PM
WeeboTech (Member):
Quote:
Originally Posted by garycase2001 View Post

Just noticed that TigerDirect has the Seagate 1.5TB drives on sale today for $149.99 w/free shipping !! I just jumped on that ... there may be a few others who'd like to as well

Thanks for the heads up. I was about to pull the trigger @ newegg, but this deal saved me some extra

post #746 of 7891, 11-05-2008, 08:19 AM
spiderv6 (Senior Member):
And now Newegg is down to the $150 level as well.

Guess who ordered from amazon yesterday at $179? Duh!

Guess who just printed off an RMA from amazon and placed an order from tiger........

post #747 of 7891, 11-05-2008, 10:40 AM
Bubber Jones (Member):
Those interested in WHS.... Newegg has it for $99.00...

post #748 of 7891, 11-05-2008, 12:18 PM
sdheda (Senior Member):
Quote:
Originally Posted by Bubber Jones View Post

Those interested in WHS.... Newegg has it for $99.00...

That is actually the new price. All retailers should be selling for this price.

post #749 of 7891, 11-05-2008, 05:14 PM
KnightRT (Senior Member):
http://www.supermicro.com/products/a...USAS-S8i_R.cfm

Can someone explain to me how that card, which is available for $300, is different from this one:

http://www.newegg.com/Product/Produc...82E16816151023

The Areca card is $650. They both have the same Intel IOP348 chip, the same twin-SAS ports with support for 8 drives, and 256 MB of DDR2 cache. Put another way, the Areca is the best 8-drive card available and I'm trying to figure out why the Supermicro version costs less than half as much. Any ideas?

post #750 of 7891, 11-05-2008, 06:53 PM
HappyFunBoater (AVS Special Member):
Quote:
Originally Posted by KnightRT View Post

http://www.supermicro.com/products/a...USAS-S8i_R.cfm

Can someone explain to me how that card, which is available for $300, is different from this one:

http://www.newegg.com/Product/Produc...82E16816151023

The Areca card is $650. They both have the same Intel IOP348 chip, the same twin-SAS ports with support for 8 drives, and 256 MB of DDR2 cache. Put another way, the Areca is the best 8-drive card available and I'm trying to figure out why the Supermicro version costs less than half as much. Any ideas?

Why do you think the Areca is the best card available?

The Supermicro board comes from a different RAID vendor. The only thing they have in common is the IOP348. The RAID stacks are different.