New Media Server Build - Need Some Advice! - AVS | Home Theater Discussions And Reviews
post #1 of 14 Old 03-27-2012, 01:53 PM - Thread Starter
Griff1324 (Member) - Join Date: Sep 2004 - Posts: 193
Back in the day, when I had to walk to and from school uphill both ways, I built my first media server. It was running Server 2003 with two Raidcore BC4852 RAID cards and sixteen 300 GB hard drives. Over the years, I upgraded my mobo, processor, RAM, and hard drives and moved to Server 2008. One thing that stayed the same was the Raidcore RAID cards. The server has now grown to 9 TB of data and it is just about maxed out.

Recently, I had a hard drive fail, and ripping into my old Cooler Master Stacker case has gotten old. Therefore, I figured it was time to upgrade the case and hard drives. I just picked up a Norco 4224 and plan to swap out my 1.5 TB drives for 3 TB drives.

At the time of my original build, hardware RAID was the way to go. However, it appears things have changed since then. Therefore, I am considering moving away from the hardware RAID and going with software. While I am familiar with the terms unRAID and FlexRaid, the software itself is new to me. So, I am looking for some advice!

With my current setup, I am used to seeing my array as one large drive. I just save my files to that drive and it's done. I would prefer to keep my new setup this way. I don't want to worry about saving files on this drive letter or that drive letter. From what I understand, this is referred to as drive pooling in unRAID and FlexRAID.

Being in IT and liking to learn new things, I am considering running the VMware ESXi hypervisor on my server and running numerous virtual machines. One of them will be Server 2008, as I run a domain in my house. Obviously, all the VMs will need access to the array.

With all of that said, which solution would you choose? Do I stay with my hardware RAID cards or go with FlexRAID? From what I understand, unRAID is its own OS designed to run from a USB stick, so this probably won't work with what I am trying to accomplish with my server.

All suggestions are greatly appreciated!
post #2 of 14 Old 03-27-2012, 03:53 PM
Dropkick Murphy (Advanced Member) - Join Date: Dec 2008 - Posts: 630
Before you discount unRAID completely, have a look at this list of add-ons.

http://lime-technology.com/wiki/inde...UnRAID_Add_Ons

My server is pretty new and I have not loaded any of this stuff yet but I plan to.

post #3 of 14 Old 03-27-2012, 04:10 PM
Lars99 (Senior Member) - Join Date: May 2010 - Posts: 298
Quote:
Originally Posted by Griff1324 View Post
[full opening post quoted above - snipped]

With a lot of work, you can run unRAID in an ESXi VM. It's not a process for the faint of heart.

For simplicity, I would recommend running Server 2008 (or Windows 8 Server) as the main OS and then running VMs inside that. From there you could run FlexRAID, SnapRAID, or Windows software RAID/Windows 8 Storage Spaces in the same environment.

The reason I suggest this is that ESXi has different hardware requirements than running 2008 directly on the box. Unless you have a VM that specifically requires ESXi, you can run regular VMs (VirtualBox, etc.) straight from 2008. It's a much simpler solution.
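
To give you an idea of what "regular VMs straight from 2008" looks like in practice, here is a rough VirtualBox command-line sketch for a headless guest. The VM name, adapter name, paths and sizes are just examples I made up, so adjust to taste:

Code:
REM create and register a 64-bit Server 2008-class VM (name and paths are examples)
VBoxManage createvm --name "lab-dc01" --ostype Windows2008_64 --register
REM give it RAM and bridge it onto the LAN so it can serve the domain
VBoxManage modifyvm "lab-dc01" --memory 4096 --nic1 bridged --bridgeadapter1 "Local Area Connection"
REM create a 60 GB virtual disk and attach it to a SATA controller
VBoxManage createhd --filename "C:\VMs\lab-dc01.vdi" --size 61440
VBoxManage storagectl "lab-dc01" --name "SATA" --add sata
VBoxManage storageattach "lab-dc01" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "C:\VMs\lab-dc01.vdi"
REM start it headless so it just runs in the background on the server
VBoxManage startvm "lab-dc01" --type headless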

Since you mention drive pooling, there is also software such as Drive Bender or StableBit DrivePool that presents your drives as a single volume (drive pooling) but doesn't have the RAID-style disk failure protection of unRAID/FlexRAID. They are cheaper, and Drive Bender at least offers a free evaluation. If you don't require disk protection, it's another option to consider.
post #4 of 14 Old 03-27-2012, 04:44 PM
Darin (AVS Special Member) - Join Date: Aug 2002 - Location: Atlanta - Posts: 5,999
Quote:
Originally Posted by Griff1324 View Post

which solution would you choose? Do I stay with my hardware RAID cards or go with FlexRAID? From what I understand, unRAID is its own OS designed to run from a USB stick, so this probably won't work with what I am trying to accomplish with my server.

Personally, I think options like FlexRAID, unRAID, SnapRAID, etc. are all much better choices for a media server than hardware RAID. Your data isn't striped, which makes it much easier to recover should something ever go wrong. In fact, your data isn't even dependent on the RAID solution: the drives can be pulled and read as-is on another system. You can mix and match drive type, size, speed, and even interface as needed. Since the data isn't striped, all the drives can be spun down except the one holding the data you're accessing. If/when you need to expand, you can use whatever interface boards and drives you want; you aren't "locked in" to anything.

FlexRAID, SnapRAID, disParity, and similar have some additional advantages over unRAID: they can be used on a range of OSs, and can even be installed and set up without having to move your data. They can also be uninstalled if you decide you don't like them, again leaving your data as it was. Since they sit on top of whatever OS you are running, they are generally only limited by whatever the limits of that OS are, whereas unRAID has a limit of 20 data drives. FlexRAID and SnapRAID can also provide more fault tolerance: SnapRAID can use two parity drives to accommodate recovery from up to two failures, and FlexRAID can use a virtually unlimited number of parity drives, providing a fault tolerance level limited only by your budget (though I would imagine you would reach a point where too many parity drives just gets too slow). unRAID (and I believe disParity) only supports one parity drive, which IMO is too little for anything beyond 8-10 drives.

unRAID is certainly the most limited of these choices (it includes its own OS, which is a bit dated, a limited number of drives, limited parity, and it's not so easy to have it host the broad range of other functionality that you might run on a different server OS), but it's also the most established solution of this type. It's a proven solution that works for many, as long as its limitations aren't an issue for you.
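
To make the SnapRAID-style approach concrete, here is roughly what a two-parity config file looks like on Windows. Drive letters and paths are made up, and directive names can differ between versions (the second parity is called "q-parity" here), so check the SnapRAID manual for your version:

Code:
# snapraid.conf sketch - two parity drives protecting four data drives
parity E:\snapraid.parity
q-parity F:\snapraid.q-parity
# content files record the array state; keep copies on more than one drive
content C:\snapraid\snapraid.content
content G:\snapraid.content
# the data drives being protected
data d1 G:\
data d2 H:\
data d3 I:\
data d4 J:\
# skip junk you don't care about
exclude *.tmp
exclude Thumbs.db

Then it's "snapraid sync" after you add files, "snapraid check" to verify, and "snapraid fix" if a drive dies.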

My dual Rythmik Servo sub project (actually quad now, need to update page)
HDM format neutral thanks to the pricing wars of the '07 xmas shopping season :)
post #5 of 14 Old 03-27-2012, 06:12 PM - Thread Starter
Griff1324 (Member) - Join Date: Sep 2004 - Posts: 193
Having disk parity is a must! I do not want to have to rip any of my movies or music over again unless there is some catastrophic event. Plus, I will be keeping all of my documents, pictures, etc. on the array as well (yes, I do back these critical files up, including off-site).

With FlexRAID, does the entire array have parity, or just the specific directories that you specify?
post #6 of 14 Old 03-27-2012, 06:39 PM
Darin (AVS Special Member) - Join Date: Aug 2002 - Location: Atlanta - Posts: 5,999
You can specify what paths you want to protect. You can even exclude (or include) certain files, file types, etc.

My dual Rythmik Servo sub project (actually quad now, need to update page)
HDM format neutral thanks to the pricing wars of the '07 xmas shopping season :)
post #7 of 14 Old 03-27-2012, 06:50 PM
Lars99 (Senior Member) - Join Date: May 2010 - Posts: 298
Quote:
Originally Posted by Griff1324 View Post

Having disk parity is a must! I do not want to have to rip any of my movies or music over again unless there is some catastrophic event. Plus, I will be keeping all of my documents, pictures, etc. on the array as well (yes, I do back these critical files up, including off-site).

With FlexRAID, does the entire array have parity, or just the specific directories that you specify?

In the standard config, the entire array is covered, with as many parity disks as you want. As mentioned, you can exclude certain files if desired.

Any particular reason for Server 2008? If you don't need AD, have you considered WHS 2011?
post #8 of 14 Old 03-27-2012, 09:12 PM - Thread Starter
Griff1324 (Member) - Join Date: Sep 2004 - Posts: 193
Just to clarify: if I use one disk for parity, then all of the data in the array has redundancy like a typical hardware RAID 5. If I use two disks for parity with FlexRAID, then this would be similar to a RAID 6 setup. Is this correct? I just want to make sure that if I use one or two disks for parity, ALL of my data in the array is protected from a failed drive.

The reason for Server 2008 is that I am the IT admin for a small manufacturing company. We are pretty much a Microsoft shop (Server 2003, Exchange 2003, CRM, etc). Running a domain environment at home with Active Directory allows me to play around and learn without really screwing up the company's production environment.
post #9 of 14 Old 03-27-2012, 09:25 PM
Lars99 (Senior Member) - Join Date: May 2010 - Posts: 298
Quote:
Originally Posted by Griff1324 View Post

Just to clarify: if I use one disk for parity, then all of the data in the array has redundancy like a typical hardware RAID 5. If I use two disks for parity with FlexRAID, then this would be similar to a RAID 6 setup. Is this correct? I just want to make sure that if I use one or two disks for parity, ALL of my data in the array is protected from a failed drive.

The reason for Server 2008 is that I am the IT admin for a small manufacturing company. We are pretty much a Microsoft shop (Server 2003, Exchange 2003, CRM, etc). Running a domain environment at home with Active Directory allows me to play around and learn without really screwing up the company's production environment.

From a parity standpoint, yes.

I understand your requirements.
I would still recommend a WHS 2011 front end and then running Win 2008 in a VM, particularly if you're in learning mode, since you can instantly reset the VM and not worry about introducing anomalies into your data stream.

FlexRAID and the others work best when you set and forget. Altering Windows permissions, for example, can create a headache for drive access. Nothing that affects data integrity, mind you, but it can cause some frustration.
post #10 of 14 Old 03-28-2012, 06:13 AM
xcrunner529 (Advanced Member) - Join Date: Dec 2006 - Posts: 536
Quote:
Originally Posted by Griff1324 View Post

We are pretty much a Microsoft shop (Server 2003, Exchange 2003, CRM, etc).

You should probably upgrade those; you're two versions behind on each. Exchange especially has had major changes.
post #11 of 14 Old 05-07-2012, 08:00 PM
dtrounce (Newbie) - Join Date: Jan 2006 - Posts: 6
I have a somewhat similar setup:

* Chassis: Supermicro SC847A-R1400LPB - 36 hot-swap 3.5", plus 2 internal 3.5" drives in a 4U. 9x mini-SAS connectors for the 36 SATA drives
* Motherboard: Supermicro X8SAX: X58, 2x PCI-X, 1x PCI, 2x PCIe x16, 1x PCIe x1. 6x SATA - two for the 2x internal drives for RAID1 OS, 4x for hot-swap (needs a reverse mini-SAS breakout cable)
* Intel i7-920
* 2x6GB = 12GB RAM
* 2x RaidCORE BC4852 PCI-X (running final 3.3.1 firmware/drivers). Used just to provide 2x 8x = 16x SATA ports now. Note only 15 arrays/legacy drives are possible, regardless of the number of cards you have, as SCSI limits you to 15 devices in addition to the controller. Needs 4x reverse mini-SAS breakout cables
* 2x Supermicro AOC-SASLP-MV8 - provides 4x more mini-SAS for 16 more hot-swap drives (need 4x mini-SAS cables)
* Sabrent SiI3114 PCI card for 4x more SATA ports. Using just one port, to replace the 16th drive not usable with the RAIDCore

To use the RAIDCore controllers (even just for SATA ports, not RAID), you need either Windows or Linux for the host - those are the only RAIDCore drivers available. So I don't think you can run ESXi as the host. I think this also rules out most of the other options for a dedicated storage box running ZFS, e.g. FreeNAS (based on FreeBSD), Nexenta (based on OpenSolaris), etc.

I am running Windows Server 8/2012 beta as the host. You can run Hyper-V VMs, but in order to use more than 3 drives (plus the VHD for OS) you need a guest OS that runs the Hyper-V Integration components, only available for Windows and Linux. So again, you can't run nice ZFS-based systems in VMs. There are projects to port ZFS to Linux, which might work (either ZFS-FUSE, native Ubuntu, etc.) but these don't look stable and would be hard work to get up and running.

I want to run Windows DFS as well to keep this box in sync over a slow network link with another Windows Server box. A VM-based storage solution would therefore require exporting the volume from the VM to Windows as iSCSI - more complexity and more points of failure. So I really want a good software RAID solution that can use all 36 drives and runs on Windows.

I want:
* Software RAID, to run across the 6 different controllers
* RAID5 at least. RAID6 is better
* Works with 36 disks
* Totally reliable, bug-free. ZFS looks like a really good option here with checksums.
* A single volume exported. Drive pooling would be ok too
* High performance - several hundred MB/sec (like RAIDCore native)
* Expandable, as I add and replace disks. RAIDCore has this
* Fast/smart rebuilds. ZFS has this, most others don't

Options I've considered/tested are:
* Traditional Windows software RAID. RAID5 only, speed is ok, well-tested and reliable. Not expandable, need to set up separate volumes on different slices to use different sized disks, no RAID6, no email alerts
* Windows Storage Spaces, with Server 2012. This looks like it should address the needs, but the actual product (at least in the current beta) is nowhere near the promise yet, either for stability or features. Testing shows terrible performance (20MB/sec), it still has bugs in the beta that make it unusable (e.g. freezing, artificial limits), no alerts, no RAID6. It also doesn't rebalance data if you add drives. It's not yet even as useful as traditional software RAID
* FlexRAID. I was quite excited about this option. But testing with the current FlexRAID 2.0 update 6 would not start the storage pool correctly after a reboot, making the data inaccessible. Also it is $100 for parity and pooling, which would be fine if it was a well-tested product, not still a promising but buggy work-in-progress
* unRAID - not an option, no drivers for RAIDCore, only 20 disks. And expensive.
* SnapRAID - not realtime, command-line only
* BTRFS - experimental, no RAID5/6, Linux-based

ZFS looks like a really good option. But
* Not native for Windows. So it requires the complexity of a VM, which requires Linux to use the RAIDCore cards. This requires a stable Linux-based ZFS system - I'm not sure that exists yet. Plus there is the danger that, since ZFS is now using virtualized disks (even if it owns them), there may be caching issues, where ZFS thinks it has written a block but the host has cached it. And there's the complexity of iSCSI to use DFS.
* Less flexible than you think. You can't change the number of disks in a vdev once it is created, so the layout needs to be planned up-front for all 36 drives.
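
From what I've read, the usual way to live with that is to build the pool out of several smaller raidz2 vdevs and grow it a whole vdev at a time, rather than one giant 36-disk vdev. A rough sketch (pool name and device names are placeholders - in real life you'd use the stable /dev/disk/by-id names):

Code:
# one pool made of two 6-disk raidz2 vdevs; each vdev survives two drive failures
zpool create tank \
  raidz2 sdb sdc sdd sde sdf sdg \
  raidz2 sdh sdi sdj sdk sdl sdm

# grow later by adding another whole vdev; an existing vdev's disk count can't change
zpool add tank raidz2 sdn sdo sdp sdq sdr sds

zpool status tank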

Any other thoughts on this?

Cheers,
David
post #12 of 14 Old 05-07-2012, 09:30 PM
Lars99 (Senior Member) - Join Date: May 2010 - Posts: 298
Quote:
Originally Posted by dtrounce View Post
[full post above quoted - snipped]

I'm not sure I agree with your assessment of FlexRAID, nor is the price accurate.
post #13 of 14 Old 05-08-2012, 01:32 AM
dtrounce (Newbie) - Join Date: Jan 2006 - Posts: 6
I understood the promo pricing for FlexRAID was $60 last week, but that it was going up to $100 very soon?

The error I got was StatusCodeException 12029 - per forum.flexraid.com/index.php?topic=575.0. Nothing I tried would keep the service running without crashing after trying to start the storage pool.


I've just been doing some testing of the iSCSI-to-ZFS-on-Ubuntu-on-Hyper-V option, and it looks pretty good. I was able to get a highly resilient ZFS block device attached to Windows over iSCSI, format it as GPT/NTFS, and test removing and restoring disks. ZFS does look very powerful, though a little hard to use without a GUI. I used ZFS on Linux on Ubuntu 12.04, which comes with the required Hyper-V integration components.
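
In case anyone wants to try the same thing, the rough shape of it was: carve a zvol out of the pool, export it with the tgt iSCSI target, then attach it from the Windows iSCSI Initiator and format it there. The pool, zvol and target names below are just the ones I made up for testing, and I'm skipping the pool creation itself:

Code:
# on the Ubuntu 12.04 VM (commands run as root)
# create a block device (zvol) inside the pool
zfs create -V 8T tank/media

# install the tgt iSCSI target and export the zvol
apt-get install tgt
cat >> /etc/tgt/targets.conf <<'EOF'
<target iqn.2012-05.lan.home:tank.media>
    backing-store /dev/zvol/tank/media
</target>
EOF
service tgt restart
tgt-admin --show    # confirm the target and LUN are listed

# on the Windows side: connect with the iSCSI Initiator (GUI or iscsicli),
# then initialize the new disk as GPT and format it NTFS in Disk Management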

Anyone know a good GUI for ZFS on Linux, for configuration, monitoring, setting up email alerts, etc?
post #14 of 14 Old 05-08-2012, 05:54 AM
Lars99 (Senior Member) - Join Date: May 2010 - Posts: 298
There's no telling how long the pricing will last.

Did you attempt to determine what went wrong with FlexRAID? I would rule out operator error before you dismiss the product. I and many others here are using it without issue.