thinking about building a RAID.. HW questions - AVS Forum
post #1 of 90 Old 09-29-2008, 05:28 PM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
I want to build a Linux-based RAID. I already have a PC I'm not using for anything else and a four-port SATA card, so all I really need to get is the hard drives. I am thinking of four 1 TB drives in a RAID 5. The server is an AMD Athlon 1800+ with 768 MB of RAM. I have a Realtek 1 Gbps card in it and my entire network is gigabit, which should help with throughput, even though it's probably overkill for my needs. I know the hardware is a bit old, but I suspect it'll be just fine for serving up music and movies. I'd rather not dump big $$$ unless I really have to.

So here's my first question: RAID cards are expensive! If I'm going to dump big dollars on a RAID card, I'm going to make sure it's PCI Express instead of PCI, which means an entire platform upgrade. So here's where I'm going: is software RAID even an option with a machine of this capability? I thought my SATA card had a RAID controller, but it appears it does not; it is a Promise SATA 150 TX4.

What are your thoughts? Complete new hardware, or can I get away with what I have and a SW RAID?

Cheers,
Scott
scottlindner is offline  
post #2 of 90 Old 09-29-2008, 07:12 PM
jflatt (Advanced Member, Las Vegas)
I have 4x500GB running on an 1800+, 512MB RAM, software RAID5, and records HD ATSC with two tuners. Runs fine. One plus is if you do software RAID, you can mix your hardware later and still have it run the same array.
jflatt is offline  
post #3 of 90 Old 09-29-2008, 07:54 PM
tux99 (AVS Special Member, Europe)
You didn't specify how you intend to use your RAID, but if it's to stream a couple of HD streams to frontend boxes, it sounds more than enough to me; even recording to it at the same time should be fine.
Anything other than 1080p HD decoding and playing 3D games doesn't really require much CPU power; my four-year-old 2.53 GHz P4 still feels fast for everything else.
Remember, this is Linux, not bloatware Windows.

If, on the other hand, you are thinking of competing with Google, then it might be slightly under-spec'ed ...

I have heard only good things about those Promise SATA cards with Linux; in fact, I'm waiting for my TX2plus to arrive to complete my 4x 1 TB RAID 5 on my 2.53 GHz P4.

My Linux news / reviews / tips+tricks / downloads web site: http://www.linuxtech.net/
tux99 is offline  
post #4 of 90 Old 09-29-2008, 10:35 PM
SeijiSensei (Senior Member, Metro Boston)
I've run software RAID on Linux servers for nearly a decade now and have never detected a major performance penalty. The kernel code has been pored over for a very long time so I suspect it's pretty efficient. The drawback to software RAID is that you have to execute commands at the OS level if you want to replace a drive on-the-fly. On machines with hardware RAID, especially ones that support hot-swap drives, you can just pop the drive out from under the operating system and replace it. With software RAID you have to use mdadm to tell the kernel to drop the faulty drive, replace the physical device, then use mdadm to insert the new drive into the array. If you don't mind taking the machine entirely off-line, you can simply shut down, replace the drive, and restart. The OS will notice that the drive needs to be inserted into the array and begin reconstruction.

Most modern Linux distros let you build the array at installation. For servers I use CentOS, the free repackaging of RedHat Enterprise Linux, because it has long-term support. I use the installer to build the array, then tell the installer to turn the array into a single "physical volume" using something called the "Logical Volume Manager." With LVM you can divide a physical volume into a number of "logical volumes" that behave equivalently to disk partitions. Unlike partitions, logical volumes can be resized, and you can create backup "snapshots" of the logical volumes.
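A sketch of that LVM layering on top of an md array, with hypothetical volume-group and volume names (nothing here comes from a real system, so adapt freely):

```shell
# Make the md array an LVM physical volume, group it, and carve out
# logical volumes that behave like resizable partitions.
pvcreate /dev/md0
vgcreate vg_media /dev/md0            # "vg_media" is an example name
lvcreate -L 500G -n video vg_media    # one logical volume for video
lvcreate -L 200G -n music vg_media    # another for music
mkfs.xfs /dev/vg_media/video          # filesystem choice is up to you
```

Later, `lvextend` can grow a logical volume into any free space left in the group, which is the resizing advantage mentioned above.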

Regardless of how you set up the array, I'd boot from a separate disk that holds the /boot and / (root) partitions, then mount the array's volume(s) as /home or at a mount point under /media. With this arrangement you can modify the operating system files without touching the files you've stored on the array. A 20-40GB PATA disk works fine as the boot disk. In fact you might want to leave some of it unpartitioned in case you want to install another operating system or test out an upgraded version of your current OS before switching over.

Another thread in this forum discusses the choice of a filesystem to install on the array. For video storage, many of us use Silicon Graphics's "XFS" or IBM's "JFS." If you're used to Windows, you might be surprised by the notion that you can choose the filesystem that will be storing your data since Windows offers NTFS and nothing else. If you are sharing these files with Windows machines using Samba, or with *nix machines using NFS, the clients will be entirely unaware of the filesystem that underlies the actual data on the server.

This all assumes you'll be watching the stored material on another device. If you want to watch or listen to the stored files on the machine itself, you'll probably need to install additional software that can't be included in free distributions because of patent or copyright restrictions. If the intent is simply to serve these files to other machines, you won't need anything beyond a plain vanilla distribution like CentOS.
SeijiSensei is offline  
post #5 of 90 Old 09-30-2008, 03:31 AM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
Wow. I had no clue. Thanks so much for the feedback. This really helps a lot.

For those that asked: I have several intents for the RAID, but it seems like all of them will be satisfied adequately. The most demanding will be playback of my BeyondTV recordings. I'd love to record directly to the RAID from BeyondTV, but I have learned this is a no-no with the BTV crowd. So this RAID will be used for all of our DVDs, our CDs, and all of our pictures.

SeijiSensei, you suggested a PATA drive for the OS; I was planning on doing that for the very reasons you stated. Does putting swap on an old drive cause problems, or do the OS, Samba, and whatever it takes to run the RAID use so little memory that my 768 MB of RAM is adequate without much swapping?

Anyone have a favorite reference to point me to for setting up the RAID server?
scottlindner is offline  
post #6 of 90 Old 09-30-2008, 07:21 AM
SeijiSensei (Senior Member, Metro Boston)
You probably should read the Software RAID HOWTO to get a sense of how it works. As I said, most installers like the one in CentOS give you the option to set up an array so it's not so hard as it might first appear.

I'd put a 1 GB swap partition on your boot drive. If all you're doing is serving files the machine probably won't go into swap that often, but there are portions of the kernel that load at boot and get swapped out if it's available. Linux uses a very efficient disk caching algorithm, so if there's sufficient memory a lot of the data written to swap is still cached in memory.

Here are the memory statistics for a machine with 1 GB of memory that I built recently as a web/database/DNS server:

Code:
Mem:   1033864k total,  1016832k used,    17032k free,   132224k buffers
Swap:  2097144k total,    21712k used,  2075432k free,   559520k cached
Most of the 2 GB I allocated to a swap partition is unused. All the memory is in use, but about half of it is being used for disk caching (the "cached" number). The actual program code occupies the other half-gigabyte of physical memory. Along with the applications I already mentioned, the machine also runs Samba and sends and receives mail for my client's organization. It hardly ever "breaks a sweat"; the 2.8 GHz Pentium D CPU is 99% idle nearly all the time.

When partitioning the boot device, I'd allocate 256 MB to /boot (more than sufficient), 1 GB to swap, and use some or all of the remaining space for the operating system. Linux easily fits in a 10 GB partition, so a 20 GB drive could hold a couple of copies of the OS. Even in this arrangement you'll only need one swap partition. Create /boot and swap as "primary" partitions, and make the remaining space an "extended" partition.
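If you script that layout, a sketch with a reasonably recent sfdisk might look like the following. Treat /dev/sdX as a placeholder and double-check the device name first, since this rewrites the partition table:

```shell
# 256 MB /boot (primary, bootable), 1 GB swap (primary),
# remaining space as an extended partition holding a 10 GB logical root.
# DANGER: destroys the existing partition table on /dev/sdX.
sfdisk /dev/sdX <<'EOF'
,256M,83,*
,1G,82
,,5
,10G,83
EOF
mkswap /dev/sdX2      # initialize the swap partition
swapon /dev/sdX2      # enable it immediately
```

Doing the same thing interactively in fdisk or the installer's partitioner is equally valid; the script just documents the intent.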
SeijiSensei is offline  
post #7 of 90 Old 10-01-2008, 03:39 AM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
I will read that HowTo and start from there. Thanks for the help!

Cheers,
Scott
scottlindner is offline  
post #8 of 90 Old 10-05-2008, 06:17 AM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
Quote:
Originally Posted by SeijiSensei View Post

For servers I use CentOS, the free repackaging of RedHat Enterprise Linux, because it has long-term support.

How do you feel about using Ubuntu as the OS for the RAID server? I currently use Ubuntu for my servers and would prefer to stick with a single OS, but if there is some reason to use CentOS for the RAID server, I'll do it.

I have been slowly reading through the HowTo you suggested. Should be done soon. Just ordered the drives so I better be done reading soon!

Scott
scottlindner is offline  
post #9 of 90 Old 10-05-2008, 09:15 AM
Troubleshooter (AVS Special Member, Southern Cowhampshire)
I've said it before and I still stick with it: use EVMS. Nice GUI, extremely powerful, and brought to us by our good friends at IBM. It's even in the Ubuntu repositories.

-Trouble
Troubleshooter is offline  
post #10 of 90 Old 10-05-2008, 09:30 AM
TJ304 (Newbie)
Last time I checked EVMS was broken in Ubuntu. Has that been fixed now?
TJ304 is offline  
post #11 of 90 Old 10-05-2008, 09:36 AM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
Quote:
Originally Posted by Troubleshooter View Post

I've said it before and I still stick with it....Use EVMS. Nice gui, extremely powerful and brought to us from our good friends at IBM. It's in the Ubuntu repositories even.

-Trouble



https://wiki.ubuntu.com/Evms

Will you continue to say it?

Scott
scottlindner is offline  
post #12 of 90 Old 10-05-2008, 10:16 AM
SeijiSensei (Senior Member, Metro Boston)
I've not tried to set up a server with Ubuntu. Since Ubuntu began as a desktop distro and only lately has tried to expand into the server space, I've just been more comfortable with RedHat-flavored distros like CentOS that I've used for a decade now. Perhaps someone who has built a server with Ubuntu can tell us how easy it is to build the array with the Ubuntu installer.

Alternatively you can just install the OS to the boot drive as we discussed earlier and leave the array for later. You'll have to build it from scratch that way. I'd use command-line tools like fdisk and mdadm, but there are probably GUI tools as well.

1) Use fdisk to create Linux RAID partitions on the disks (partition type fd); make the whole disk a single partition. Use LVM if you want to create smaller "logical volumes."
2) Use mdadm to build the array from the component disks.
3) [optional] Convert the array device to LVM.
4) Create a filesystem on the array using mkfs and choosing one of ext3, XFS, JFS or whatever.
5) Add an entry in /etc/fstab to mount the array at boot.

I might have missed a step along the way, but I think that's a pretty good overview.
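A command-line sketch of those steps, with example device names (/dev/sd[b-e]) and XFS chosen for step 4. Scripting fdisk's menu like this is fragile, so running it interactively is safer; the pipe just shows the keystroke sequence:

```shell
# 1) One whole-disk partition of type fd (Linux raid autodetect) per drive
for d in /dev/sdb /dev/sdc /dev/sdd /dev/sde; do
    printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$d"
done

# 2) Build the RAID5 array from the four partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
      /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# 4) Put a filesystem on the array (step 3, LVM, is skipped in this sketch)
mkfs.xfs /dev/md0

# 5) Mount it at boot via /etc/fstab
echo '/dev/md0  /media/raid  xfs  defaults  0 2' >> /etc/fstab
mkdir -p /media/raid
mount /media/raid
```

The /media/raid mount point and the XFS choice are illustrative; any of ext3, XFS, or JFS works at step 4.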
SeijiSensei is offline  
post #13 of 90 Old 10-05-2008, 06:28 PM
Troubleshooter (AVS Special Member, Southern Cowhampshire)
Wow, I had no idea EVMS was having issues in Ubuntu. I've run it for many years and currently run it on Hardy with ~3 TB of storage; the updater has never recommended that I uninstall it. Based on that link, though, I'd agree: don't use it, even though it has performed wonderfully for me.

-Trouble
Troubleshooter is offline  
post #14 of 90 Old 10-06-2008, 06:34 AM
 
mythmaster
Quote:
Originally Posted by SeijiSensei View Post

Perhaps someone who has built a server with Ubuntu can tell us how easy it is to build the array with the Ubuntu installer.

You have to use the alternate, text-based installer. It's pretty straightforward: manually configure partitions and set the partitions you want in your RAID volume(s) to type "RAID" (the installer will set them to "fd"); then the option to configure RAID devices will appear, where you can set the RAID level and add partitions to each device. The RAID volume(s) will then appear, and you can configure them with LVM, a filesystem, and a mount point just like any other partition.
mythmaster is offline  
post #15 of 90 Old 10-06-2008, 03:20 PM
scarycall (Member, Tachikawa, Japan)
My server runs software RAID and it is very efficient. As the poster above mentioned, definitely put LVM onto the fresh RAID before you put the filesystem on it. That way, if you would like to add drives to your array in the future, you can easily resize the total capacity without any hassle. My personal preference for a filesystem is XFS.
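The grow path being described might look like this, assuming the array is /dev/md0, the volume group is vg0, and the filesystem is XFS (all example names):

```shell
# After mdadm has grown the underlying array:
pvresize /dev/md0                       # let LVM see the new space
lvextend -l +100%FREE /dev/vg0/media    # extend the LV into the free space
xfs_growfs /media/raid                  # XFS grows online, while mounted
```

Note that xfs_growfs works on the mounted filesystem, so no downtime is needed for the grow itself.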
scarycall is offline  
post #16 of 90 Old 10-06-2008, 03:42 PM
 
mythmaster
Quote:
Originally Posted by scarycall View Post

My server runs software raid and it is very efficient. And as the poster mentioned above, definitely put LVM onto the fresh raid before you put the filesystem on it. This way, if in the future you would like to add drives to your array, you can easily resize the total size without any hassle. And my personal preference for filesystem is XFS .

Absolutely! Below are some unixbench test results for my 2 drive software RAID 0 array with XFS (not using LVM on this one, though):

Quote:


File Read 1024 bufsize 2000 maxblocks 1639552.0 KBps (30.0 secs, 3 samples)
File Write 1024 bufsize 2000 maxblocks 1197888.0 KBps (30.0 secs, 3 samples)
File Copy 1024 bufsize 2000 maxblocks 674952.0 KBps (30.0 secs, 3 samples)
File Read 256 bufsize 500 maxblocks 545336.0 KBps (30.0 secs, 3 samples)
File Write 256 bufsize 500 maxblocks 374695.0 KBps (30.0 secs, 3 samples)
File Copy 256 bufsize 500 maxblocks 230704.0 KBps (30.0 secs, 3 samples)
File Read 4096 bufsize 8000 maxblocks 3065790.0 KBps (30.0 secs, 3 samples)
File Write 4096 bufsize 8000 maxblocks 2644977.0 KBps (30.0 secs, 3 samples)
File Copy 4096 bufsize 8000 maxblocks 1364321.0 KBps (30.0 secs, 3 samples)

mythmaster is offline  
post #17 of 90 Old 10-11-2008, 06:14 AM
dexxtreme (Newbie)
I actually have two XFS software RAID 5 arrays (3x500 GB and 3x750 GB) on my fileserver. (One SATA controller is a Promise SATA 300 TX4 and one is a Silicon Image controller.) It is running Slackware 12 on a dual P3 1.266 GHz. (A dual P3 with 2 GB of RAM is more than enough power to run a NAS.) I use it to stream DVDs to my HTPC (also running Slackware 12) over a gigabit network, so it really isn't under any real stress right now.

Code:
# df -h /share*
Filesystem            Size  Used Avail Use% Mounted on
/dev/md/0             1.4T  1.1T  341G  76% /share
/dev/md/1             932G  120G  813G  13% /share2
dexxtreme is offline  
post #18 of 90 Old 10-26-2008, 07:57 AM
 
quantumstate
I used XFS for a couple of years with Debian, but eventually found that it does not recover well from system crashes. It's supposed to be journalling, but that does not seem to work! You begin to get subtle file corruptions that eventually become noticeable. I've since switched to ext3.

I have a question about RAID. I am about to buy a new Asus mobo with onboard RAID, and may set it up with a 3-disk RAID 5 (3 TB raw, minus one disk's worth for parity). If I do so, can I later add one or more disks without rebuilding the array? If so, how is this done in the BIOS?

Can I hot-swap disks? How? Can I partition a RAID array?
quantumstate is offline  
post #19 of 90 Old 10-26-2008, 05:28 PM
drkdiggler (Advanced Member, Arlington, VA)
I've been running Ubuntu 8.04 LTS on a Gigabyte GA-M59SLI-S5 motherboard for the last 5 months. I have 5x 750 GB drives hooked up to the onboard SATA ports configured to run in RAID5, along with 2x 500 GB drives connected to the other onboard SATA ports which are mirrored and serve as my boot/home partitions. Just to give you some idea of Linux software RAID performance, I can read files from the RAID5 array at ~ 191 MB/s and write files to it at ~ 96 MB/s (megabyte, not megabit). Keep in mind that these numbers are for large files (~5GB), but I think that they illustrate the point that you don't have to buy an expensive hardware RAID card to serve your purposes (as previously said, unless you plan on competing with Google ). However, I would recommend getting a hardware card if you don't feel comfortable managing your array(s) from the command line. You can always buy the drives and test out some recovery situations to see if you feel comfortable, before dropping the cash on a card.
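If you want to reproduce rough numbers like these on your own array, a common approach is dd with a test file larger than RAM (paths are placeholders; dropping the page cache needs root):

```shell
# Sequential write: conv=fdatasync makes dd wait until the data hits the disks,
# so the reported rate isn't inflated by the page cache.
dd if=/dev/zero of=/mnt/array/testfile bs=1M count=5000 conv=fdatasync

# Drop the page cache so the read test actually touches the disks
sync; echo 3 > /proc/sys/vm/drop_caches

# Sequential read
dd if=/mnt/array/testfile of=/dev/null bs=1M
rm /mnt/array/testfile
```

dd prints the throughput itself when it finishes; these are sequential large-block numbers only, so expect much lower figures for small random I/O.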
drkdiggler is offline  
post #20 of 90 Old 10-27-2008, 04:22 AM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
Quote:
Originally Posted by quantumstate View Post

I used XFS for a couple years with Debian, but eventually found that it does not recover well from system crashes. It's supposed to be journalling, but that does not seem to work! You begin to get subtle file corruptions that eventually become noticeable. I've since resorted to Ext3.

I have a question about RAID. I am about to buy a new Asus mobo with hardware RAID, and may set that up with 3 disk RAID5. (3 TB - 1) If I do so, can I later add one or more disks without rebuilding the array? If so, how is it possible to do this in BIOS?

Can I hot-swap disks? How? Can I partition a RAID array?

I have not set up my RAID yet (I'm waiting for the drives to hit my price point), but from what I have read you can hot-swap disks with a software RAID in Linux using the command-line utility. I'm assuming you're running strictly SATA and no PATA.

Scott
scottlindner is offline  
post #21 of 90 Old 10-27-2008, 04:49 AM
 
quantumstate
Strictly SATA, but all my questions apply to hardware RAID on the ASUS mobo.

I can't believe nobody's using it.
quantumstate is offline  
post #22 of 90 Old 10-27-2008, 05:15 AM - Thread Starter
scottlindner (Advanced Member, Colorado Springs, CO)
The problem might be that you're asking this question in a Linux subforum and not a Windows one.
scottlindner is offline  
post #23 of 90 Old 10-27-2008, 07:57 AM
 
mythmaster
Quote:
Originally Posted by quantumstate View Post

Strictly SATA, but all my questions apply to hardware RAID on the ASUS mobo.

I can't believe nobody's using it.

Which ASUS mobo? If you're talking about a mobo supporting RAID in the BIOS, then it is not hardware RAID; you would still have to set up software RAID in Linux unless you have a hardware RAID add-on card.

As for hot-swapping with software RAID, you would have to stop the RAID device manually, and your drives and mobo (never tried it) would have to support hot-swapping. The array would then have to be rebuilt.

As for adding drives: if you're using LVM, you may add drives to the volume, but not to the RAID device without rebuilding the array.

Also, the RAID device can be "partitioned" with LVM.

BTW, I've never experienced any data corruption on XFS after a crash recovery. Good luck trying to recover deleted files, though
mythmaster is offline  
post #24 of 90 Old 10-27-2008, 06:31 PM
 
quantumstate
Quote:
Originally Posted by mythmaster View Post

Which ASUS mobo? If you're talking about a mobo supporting RAID in the BIOS, then it is not hardware RAID.

P5N7A-VM


Quote:
Originally Posted by mythmaster View Post

The array would then have to be rebuilt.

As for adding drives, if you're using LVM, you may add drives to the volume, but not to the RAID device without rebuilding the array.

Meh, what a PITA. Have we made no progress in the past 20 years? When I worked for Digital Equipment in the '80s as an SA, we had all these RAID features!

Are we regressing? Is America turning Republican?


Quote:
Originally Posted by mythmaster View Post

BTW, I've never experienced any data corruption on XFS after a crash recovery.

Oh, you won't see it for a while. Give it time... your OS will start acting weird in ways that Linux doesn't do.
quantumstate is offline  
post #25 of 90 Old 10-27-2008, 07:56 PM
 
mythmaster
Quote:
Originally Posted by quantumstate View Post

Meh, what a PITA. Have we made no progress in the past 20 years? When I worked for Digital Equipment in the '80s as an SA, we had all these RAID features!

Was it software RAID?

Quote:
Originally Posted by quantumstate View Post

Are we regressing? Is America turning Republican?

Dude, you really don't want to go there with me.

Quote:
Originally Posted by quantumstate View Post

Oh, you won't see it for a while. Give it time... your OS will start acting weird in ways that Linux doesn't do.

I have to admit that my systems don't crash...so I obviously don't have as much experience as you do in this department.

BTW, *ANY* filesystem will become corrupted after repeated crashes. So, the solution here is NOT to change filesystems; the solution here is NOT TO LET YOUR SYSTEMS CRASH.
mythmaster is offline  
post #26 of 90 Old 10-28-2008, 03:06 AM
rjburke377 (Member)
Quote:
Originally Posted by scottlindner View Post

What are your thoughts? Complete new hardware, or can I get away with what I have and a SW RAID?

I think you can get away with what you have and manage the storage using Linux LVM2.

Ubuntu 8.04 doesn't provide an LVM2 GUI package by default, but I use the system-config-lvm 1.0.18 Debian package. It does an OK job of helping you manage storage and filesystem resize activities.

I use a 3ware PCI-X card to handle RAID5 activities but LVM2 should be able to do this in software.

/R
rjburke377 is offline  
post #27 of 90 Old 10-28-2008, 04:27 AM
 
quantumstate
'Not let my systems crash'?

You must not be using the latest software for hardware accel, codecs, etc. Or video drivers. Or Konqueror.

Good luck.
quantumstate is offline  
post #28 of 90 Old 10-29-2008, 10:19 AM
SeijiSensei (Senior Member, Metro Boston)
Replacing a drive with software RAID is pretty easy if you're willing to use the command-line mdadm program. You don't need to stop the array, just mark the device as failed with "mdadm --set-faulty" then remove it from the array with "mdadm --remove". After installing the replacement drive, you can insert it into the array with "mdadm --add". The software will reconstruct the data on the new drive.
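In command form, with /dev/md0 and /dev/sdc1 standing in for the real array and the failed disk:

```shell
mdadm --set-faulty /dev/md0 /dev/sdc1   # mark the drive failed
mdadm --remove /dev/md0 /dev/sdc1       # pull it from the array
# ...physically swap the disk and partition it to match the others...
mdadm --add /dev/md0 /dev/sdc1          # re-add; reconstruction starts
cat /proc/mdstat                        # watch the rebuild progress
```

The array stays online and serving files throughout, just in degraded mode until the rebuild finishes.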

I've never tried adding additional drives to an array, but I suspect mythmaster is right that it's not possible. When you build an array using software RAID, you specify the number of devices and designate how many are live and how many are hot spares. I think the number of drives is a fundamental characteristic of the array and not one that can be changed on the fly, as it were. However, rather than take my word for it, you should probably read the Software RAID HOWTO.

I'd also endorse the use of LVM2 to avoid re-partitioning issues.

Like quantumstate, I've also had problems with XFS, though in my case I'm using it on an external drive array. If I'm not careful about the sequence of events while turning off the computer or the drive, I can easily screw up the journal and lose recent files. I assumed these problems had more to do with using a removable drive and wouldn't apply to a fixed array, but maybe I'm being optimistic.
SeijiSensei is offline  
post #29 of 90 Old 10-30-2008, 10:49 AM
Loto_Bak (Member)
Software RAID 5 is easily expandable.

For example, take a RAID device /dev/md0 with five disks, and say you're adding a sixth:

Code:
mdadm --add /dev/md0 /dev/sdf1
This adds the disk sdf1 as a ready-to-go spare on /dev/md0.

Code:
mdadm --grow /dev/md0 --raid-devices=6
This starts a RAID reshape; your data is fully accessible while it takes place.

I personally don't use LVM. I use the JFS filesystem (it seems very mature); the JFS driver has built-in support for expanding a filesystem, though JFS cannot be shrunk.

Also, don't use the so-called 'hardware RAID' on your motherboard. It's crap. The only hardware RAID you should consider is a dedicated add-in card such as a Promise or Adaptec. Cheap Silicon Image controllers don't count either: they simply use a software driver to do the RAID work, and often those drivers are Windows-only.

Stick with either a decent add-in card or mdadm under Linux.
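For completeness, JFS's online grow is done through a remount once the underlying device has been enlarged; per the kernel's JFS mount options, omitting a value for resize grows the filesystem to fill the device (the mount point below is an example):

```shell
# Grow a mounted JFS filesystem to fill its (now larger) underlying device.
# /media/raid is a placeholder mount point.
mount -o remount,resize /media/raid
```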
Loto_Bak is offline  
post #30 of 90 Old 10-30-2008, 01:11 PM
 
mythmaster
Quote:
Originally Posted by Loto_Bak View Post

software raid 5 is easily expandable

for example raid device called /dev/md0
with 5 disks. Adding a Sixth disk

mdadm --add /dev/md0 /dev/sdf1
this will add the disk sdf1 as a ready to go spare on /dev/md0

mdadm --grow /dev/md0 --raid-devices=6
this will start a raid reshape. your data is fully accessable while this takes place..

I wasn't aware of this. Cool.
mythmaster is offline  