FlexRAID vs. Storage Spaces (and why)? - AVS Forum
post #1 of 49 Old 08-09-2014, 09:32 AM - Thread Starter
Aryn Ravenlocke
FlexRAID vs. Storage Spaces (and why)?

In about two weeks, I am going to be building a home media server. It will have somewhere between 12-16x 3 TB HDDs, either WD Red or the Seagate equivalent. What has been repeatedly recommended to me by the folks out here who will be helping me with acquiring and assembling is to save myself the headaches and money by simply using the version of Windows Storage Spaces found in Windows 8.1 (not Drive Extender found in WHS). However, when I mentioned here on the forums that I was looking at using WSS, the resounding reaction was that I was being reckless. Yet, when I ask for reasons why (so I can make an informed choice), all I get are really vague answers about how I should use whatever I think is best, but that WSS is far and away asking for trouble. All I have really been able to gather in terms of reasons, though, is that WSS = using Windows = Bad.

I'm not married to the idea of using Windows, but my understanding of Linux is almost non-existent, so if I am going to abandon Windows (which has never actually given me problems) for something else on the server, I'd like to know why I am doing it.

I'm not super tech savvy, but I've tried to compare the features just by reading on the various home sites for solutions.

Initially, I thought I had found a solution with Synology. But when I tried to contact the company with a short list of questions to make sure I was going to be able to do the things I wanted and so forth, the response was literally 8 weeks in coming and only provided me with links to pages on the home site. Granted, I understand forum users at places like AVS, XBMC, and so forth are often more knowledgeable/helpful than emails that wind up in a company's sales department somehow, but if I am going to fork over $3-4k, I would hope for at least some support from an "official source" rather than having to rely on finding an expert elsewhere who may or may not be around. So Synology got dropped.

I'm not terribly thrilled with SnapRAID. Even though I am not worried about information written to the server taking a while to back up, at the same time I will be writing substantially to the NAS about twice a week, and I would like to not have to wait until after the backup is done to be able to put the originals in storage.

UnRAID, FlexRAID and WSS all seem viable, though UnRAID does not seem to be friendly should a drive fail (something else I have yet to encounter, but I'd rather not push my luck and assume I'll never have a problem).

I guess that leaves FlexRAID or WSS. But from what I can tell, the biggest difference is that WSS is native to Windows and requires clean drives, while FlexRAID does not. Since I am building from scratch, I'm not concerned about that part. Although I have 30 TB of data currently residing on external drives, those drives will be used to write once to the server and then be stored as backup should the array fail. In the event that they fail while in storage, I will still have the original discs, but that would mean going through and re-ripping, so that's pretty much the "break glass only in case of emergency" solution.

Finally, as for write speeds, I'm not overly concerned. Obviously, I don't want to be taking days to write the movies I add to the server twice a week, but I've already set aside 6 weeks for the initial build to be performed. So it takes however long it takes. I am far more concerned about retrieving the data and also about the drives spinning down when not in use (I don't need 12-16 drives all spinning to watch an episode of Burn Notice, thanks.)

So, it seems like either WSS or FlexRAID, using either one in parity mode so there is some built-in redundancy, but I'm not sacrificing massive numbers of discs. My question is, which will serve my purposes better, and even more importantly, why? (Windows=Bad is not a valid reason in my book.)
post #2 of 49 Old 08-09-2014, 10:42 AM
ElJimador
I fully intended to use Storage Spaces for parity with my latest build, until I went to set it up and realized it was going to give me a full TB less space than it should have allocated. I don't remember the numbers since it's been a couple months, but I was throwing 4x 4TB brand new WD Reds in there, and what it spit out for available space was 1TB less than if I was using unRAID (which I use for my other server with 6x 3TB WD Reds). That turned me off to WSS for parity, but I haven't bothered looking for another solution yet (since it's not really a priority for me until I'm a little closer to running out of space on the other server).
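
To put rough numbers on that missing space: the gap can come from the parity layout's column count and from the decimal-TB vs. binary-TiB difference in how Windows reports sizes. Here's a quick Python sketch of the arithmetic (purely illustrative; the column-count behavior is an assumption on my part rather than something confirmed from Microsoft documentation).

Code:
# Rough usable-capacity math for 4x 4TB drives. Assumptions: a parity space
# with c columns keeps (c - 1) / c of each stripe as data, and Windows displays
# sizes in binary TiB while drive labels use decimal TB.
TB = 1000**4   # decimal terabyte (drive label)
TiB = 1024**4  # binary terabyte (what Windows shows)

def parity_space_usable(drive_tb, n_drives, columns):
    """Striped parity: (columns - 1) / columns of the raw pool is usable."""
    return n_drives * drive_tb * TB * (columns - 1) / columns

def unraid_usable(drive_tb, n_drives, parity_drives=1):
    """Dedicated-parity layout: everything except the parity drive(s) is usable."""
    return (n_drives - parity_drives) * drive_tb * TB

for cols in (3, 4):
    u = parity_space_usable(4, 4, cols)
    print(f"parity space, {cols} columns: {u / TB:.1f} TB = {u / TiB:.1f} TiB shown")
u = unraid_usable(4, 4)
print(f"unRAID, single parity drive:  {u / TB:.1f} TB = {u / TiB:.1f} TiB shown")

On paper a 4-column parity space and unRAID land on the same 12 TB (about 10.9 TiB as displayed), while a 3-column layout drops to roughly 10.7 TB, which is in the ballpark of the missing terabyte described above. Which of those actually happened on that pool, I can't say.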
post #3 of 49 Old 08-09-2014, 12:32 PM
wdeydwondrer
I just set up my system using 3x 3TB WD Red drives and 1x 3TB Seagate (as the parity drive) on FlexRAID's tRAID, as it also hosts my file server and FTP server.

There were 2 main reasons I didn't go with Storage Spaces and actually purchased tRAID. The first is that the new file system (ReFS) isn't enabled for parity storage spaces on Win 8 Pro (nor is it near being a finalized product yet). The second is how Storage Spaces writes blocks of data. Without a bunch of tweaking, you can get a fairly decent chunk of space eaten up without any actual data stored, because of how block storage works.

If you're using tRAID for real-time parity, though, DO NOT use the WD Red drives. Get the drives with the fastest random access times you can find; the Reds are terrible at that specific thing. My 3 Reds were getting sustained throughput of 40-50MB/s, and adding the Seagate as the parity drive bumped it closer to 50-60MB/s. I wound up having to enable a 100GB landing drive, which takes incoming data at full drive speed and then moves it to the RAID as a background task. Read speeds off the array hover around 250MB/s, though, and would probably be faster if I was using a controller that cost more than $40, lol.
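
For anyone wondering what a "landing drive" actually does: incoming files land on a single fast disk outside the parity array, and a background task later migrates anything that has settled into the protected pool, so the parity cost is paid off the critical path. Below is a rough Python sketch of that pattern. The paths and the 15-minute threshold are made up, and tRAID's landing disk is built in at the driver level, not a script like this.

Code:
import shutil
import time
from pathlib import Path

# Hypothetical locations: a fast standalone "landing" disk and the pooled array.
LANDING = Path("D:/landing")
ARRAY = Path("V:/pool")
SETTLE_SECONDS = 15 * 60  # only move files that haven't changed for 15 minutes

def migrate_settled_files():
    """Move files from the landing disk into the array once they stop changing."""
    now = time.time()
    for src in LANDING.rglob("*"):
        if not src.is_file():
            continue
        if now - src.stat().st_mtime < SETTLE_SECONDS:
            continue  # probably still being written; leave it on the fast disk
        dest = ARRAY / src.relative_to(LANDING)
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(src), str(dest))  # the parity cost is paid here, in the background

if __name__ == "__main__":
    migrate_settled_files()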

As far as FlexRAID goes, remember: tRAID is for real-time parity so you can modify stored files; RAID-F is for cold storage.
post #4 of 49 Old 08-09-2014, 01:48 PM
dfkimbro
I'm doing some of the same investigation as you as I've been learning about network storage before I jump into building a server. I've been reading everything I can on Storage Spaces and there isn't a lot out there, even on sites for IT professionals. The way Storage Spaces works in actual practice is really confusing to me. I'm not an IT professional or computer science guy, but I am an engineer, so I'm used to reading technical/scientific documents. I'll summarize what I "think" I've learned...since the information is so limited, some of this is my take on it.

From what I've read, it sounds like it is stable and reliable, but has some limited functionality.

The parity pools are notoriously slow for writes. Mirroring seems to be about as fast as some older hardware RAID cards.

When you increase the number of drives, they have to be added in multiples of the number of drives in the existing pools.

It's not clear to me how the data are striped across drives, and if you can keep the data unstriped like you can with Flexraid's RAID-f.

While you can use different drive types and sizes, there are a limited number of ways to set it up to use all of the available drive space.

If a drive is removed it can only be read in a Windows 8 or Windows Server 2012 machine.

The consensus opinion over at spiceworks.com (IT professional community forums) is that the ReFS file system and Storage Spaces are a neat idea, but about 5 years away from being ready for actual use. The storage guru there said it's only something he would use if he had no other options.

I've come to the conclusion that Storage Spaces is too limited for me. I think it can be made to work and be reliable if you set it up initially with all the drives you're ever going to use, and don't try to expand the pool(s) with additional or bigger drives.

Since I've pretty much decided I want a Windows-based server, I'm back to trying to decide if I want to use hardware RAID or some combination of Flexraid's RAID-f (for static files) and tRAID (for frequently changing files).

If someone has direct experience and I'm wrong about any of what I've posted here, please let me know and I'll edit it.
post #5 of 49 Old 08-09-2014, 02:07 PM - Thread Starter
Aryn Ravenlocke
Quote:
Originally Posted by dfkimbro View Post

When you increase the number of drives, they have to be added in multiples of the number of drives in the existing pools.


If a drive is removed it can only be read in a Windows 8 or Windows Server 2012 machine.

I asked about this very issue here, but it seems so few people are using WSS that no one was able to answer for sure. The long and the short of it though is yes, if you start with multiple drives, in order to add drives to the array later, you need to add in multiples of that initial number of drives (there are some caveats, but they don't really apply to a home media server). So, if you begin by deciding on parity, then you will have 3 (or more) drives initially. When adding future drives for expansion, that means adding 3 (or more) drives to make the expansion happen.

At first I figured I would add just three then, but I realized I would be unable to max out the bays in my two cases if I did that as they are 8-bay units. So if I were to go with WSS I would be going with a parity setup but beginning with 4 drives initially. That way, as I grow, I can fill out the case.
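
Assuming the "multiples of the starting count" rule really does hold (treat that as an assumption; it gets questioned a couple of posts below), the planning arithmetic is simple enough to script, and it also shows why 4 starting drives fit an 8-bay case while 3 never can:

Code:
# Planning sketch: how many drives an expansion actually requires, assuming a
# parity pool must grow in multiples of the number of drives it started with.
# (That rule is taken from the discussion above, not verified independently.)

def drives_needed(initial_drives, drives_to_add):
    """Round a desired expansion up to the next allowed increment."""
    if drives_to_add <= 0:
        return 0
    increments = -(-drives_to_add // initial_drives)  # ceiling division
    return increments * initial_drives

# Start with 4 drives in an 8-bay case: one expansion of 4 fills it exactly.
print(drives_needed(4, 4))   # -> 4
# Start with 3 drives: the pool can only ever be 3, 6, 9... drives, never 8.
print(drives_needed(3, 2))   # -> 3
print(drives_needed(3, 5))   # -> 6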

The Windows 8 only issue is not a deal breaker for me. If I build this server using Windows, there is no other OS that I would need to read the drives from. Even in a total array failure, there are plenty of Windows PCs in the house, at least two of which have Windows 8.1.
post #6 of 49 Old 08-09-2014, 02:08 PM
Defcon
There is IMO very little point in using any block-based parity system (RAID, Storage Spaces, ZFS) for home use where most data is static. I'd rather have the flexibility of using any size disk I want, and the safety of data being in native formats with no extra management layer needed to access it.
post #7 of 49 Old 08-09-2014, 02:26 PM
wdeydwondrer
Quote:
Originally Posted by Defcon View Post
There is IMO very little point in using any block-based parity system (RAID, Storage Spaces, ZFS) for home use where most data is static. I'd rather have the flexibility of using any size disk I want, and the safety of data being in native formats with no extra management layer needed to access it.
You do realize that's the goal of flexraid (traid and raidf) and the nzfs stack? They don't stripe the data, so in the event of an array failure or unrecoverable failure (too many drives die at once), all remaining drives can be read from ANY computer. The only extra layer is basically to create the parity itself. traid can actually be set up to use all the drives as individual drives like normal, w/ parity covering the whole array in the background.
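
For anyone who hasn't seen how these file-level schemes keep every data drive independently readable: the data drives stay ordinary NTFS volumes, and the only derived data is a parity image (conceptually an XOR across the drives) kept on the parity drive. Here is a toy Python sketch of the principle; it is nothing like FlexRAID's actual on-disk format, which works over files and blocks on real disks.

Code:
def xor_parity(drives):
    """XOR all drive images together; shorter drives are treated as zero-padded."""
    parity = bytearray(max(len(d) for d in drives))
    for d in drives:
        for i, b in enumerate(d):
            parity[i] ^= b
    return bytes(parity)

def rebuild_lost_drive(surviving, parity, lost_size):
    """Recover a single failed drive by XORing the parity with the survivors."""
    img = bytearray(parity)
    for d in surviving:
        for i, b in enumerate(d):
            img[i] ^= b
    return bytes(img[:lost_size])

drive1 = b"movie files on drive 1"
drive2 = b"tv shows on drive 2"
drive3 = b"music on drive 3"
parity = xor_parity([drive1, drive2, drive3])

# Drive 2 dies: drives 1 and 3 are still plain, readable data on their own.
# The parity layer only matters for the rebuild.
assert rebuild_lost_drive([drive1, drive3], parity, len(drive2)) == drive2

That last assert is the selling point being described: the surviving drives never needed the parity layer just to be read; only the rebuild does.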
post #8 of 49 Old 08-09-2014, 02:31 PM
wdeydwondrer
Quote:
Originally Posted by Aryn Ravenlocke View Post
I asked about this very issue here, but it seems so few people are using WSS that no one was able to answer for sure. The long and the short of it though is yes, if you start with multiple drives, in order to add drives to the array later, you need to add in multiples of that initial number of drives (there are some caveats, but they don't really apply to a home media server). So, if you begin by deciding on parity, then you will have 3 (or more) drives initially. When adding future drives for expansion, that means adding 3 (or more) drives to make the expansion happen.

At first I figured I would add just three then, but I realized I would be unable to max out the bays in my two cases if I did that as they are 8-bay units. So if I were to go with WSS I would be going with a parity setup but beginning with 4 drives initially. That way, as I grow, I can fill out the case.

The Windows 8 only issue is not a deal breaker for me. If I build this server using Windows, there is no other OS that I would need to read the drives from. Even in a total array failure, there are plenty of Windows PCs in the house, at least two of which have Windows 8.1.
I haven't run across any information saying you have to add drives to a parity pool in the same quantity as the original array. As I decided against running WSS, I can only say that in theory you should be able to add 1 drive at a time (or 12, for that matter). Where the problem comes into play is the lack of rebalancing onto the newly added drives. The only thing you should have to do when adding a new drive is simply recreate the parity.

dfkimbro, WSS is supposed to stripe data across all the data drives and compute parity on the parity drive (another large difference between WSS and FlexRAID). Also, hardware RAID isn't what comes on your motherboard. Hardware RAID requires rather pricey controllers and comes with its own bag full of problems. For a small home server with only a couple of drives, IMO, hardware RAID is far too pricey to jump into when weighing the benefits for largely static files.
post #9 of 49 Old 08-09-2014, 04:13 PM
Defcon
Quote:
Originally Posted by wdeydwondrer View Post
You do realize that's the goal of flexraid (traid and raidf) and the nzfs stack? They don't stripe the data, so in the event of an array failure or unrecoverable failure (too many drives die at once), all remaining drives can be read from ANY computer. The only extra layer is basically to create the parity itself. traid can actually be set up to use all the drives as individual drives like normal, w/ parity covering the whole array in the background.
I was perhaps not clear. What I meant was that I'd rather use file-based parity, which calculates parity statically and keeps files in their native format. The systems I know of that do this are FlexRaid, unRaid and Amahi (Greyhole). I personally plan on using FlexRaid.
post #10 of 49 Old 08-10-2014, 07:36 AM
dfkimbro
Some nice discussions on WSS and other options in the comments on this blog post...http://helgeklein.com/blog/2012/03/w...-design-flaws/
post #11 of 49 Old 08-10-2014, 07:50 AM
Mfusick
Being able to add or remove drives and have them readable in any system is a huge bonus for a media server... As is being able to add drives with data already on them. Or "grow as you go" and add drives one at a time over time as you can afford it and need storage. I started at 12TB and now I'm at 50TB, plus I upgraded a bunch of times and never worried about data loss or had to copy anything.
post #12 of 49 Old 08-10-2014, 08:31 AM - Thread Starter
Aryn Ravenlocke
Working at the block level instead of at the file level may, I think, be the biggest issue for me in terms of WSS. With so many movies and shows, multiple people watching is bound to wind up spinning up almost the whole array as files split over more than one physical drive start being accessed. So, it's looking like FlexRAID is going to be my solution after all. Hopefully, it's rather user friendly and it doesn't take an IT pro to set it up.
post #13 of 49 Old 08-10-2014, 09:21 AM
StinDaWg
Quote:
Originally Posted by Mfusick View Post
Being able to add or remove drives and have them readable in any system is a huge bonus for a media server... As is being able to add drives with data already on them. Or "grow as you go" and add drives one at a time over time as you can afford it and need storage. I started at 12TB and now I'm at 50TB, plus I upgraded a bunch of times and never worried about data loss or had to copy anything.
To be clear, which method are you referring to that allows this? I have 2 or 3 existing hard drives with data on them that I would like to combine into 1 virtual drive. Which is the best way to go about doing this? I'm running Windows 8.1.
post #14 of 49 Old 08-10-2014, 10:23 AM
Mfusick
Quote:
Originally Posted by StinDaWg View Post
To be clear, which method are you referring to that allows this? I have 2 or 3 existing hard drives with data on them that I would like to combine into 1 virtual drive. Which is the best way to go about doing this? I'm running Windows 8.1.
Flexraid
post #15 of 49 Old 08-10-2014, 11:04 AM
wdeydwondrer
Quote:
Originally Posted by Aryn Ravenlocke View Post
Working at the block level instead of at the file level may, I think, be the biggest issue for me in terms of WSS. With so many movies and shows, multiple people watching is bound to wind up spinning up almost the whole array as files split over more than one physical drive start being accessed. So, it's looking like FlexRAID is going to be my solution after all. Hopefully, it's rather user friendly and it doesn't take an IT pro to set it up.
It's not exactly friendly the first time or two. I would recommend using the trial period to set a test array up on your server, play around with the settings. Then you can also tweak settings to get the best performance on your system. Then purchase it and rebuild the array if you wish a clean setup. While not friendly, it's certainly not hard though. If you can read and follow the wiki, you'll be just fine.


Quote:
Originally Posted by StinDaWg View Post
To be clear, which method are you referring to that allows this? I have 2 or 3 existing hard drives with data on them that I would like to combine into 1 virtual drive. Which is the best way to go about doing this? I'm running Windows 8.1.
Flexraid can do that with both their products. raidF with pooling or traid with pooling. And at least in traid, it's suggested to pre-load all your drives if you have a ton of media to add on. Then just create a parity after you build the array. Will be far faster.
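
Since "pooling" is the part that actually answers the question, here is what it means mechanically: several physical drives are presented as one merged folder tree, while every file still lives whole on exactly one disk. A toy read-only sketch follows; FlexRAID's real pool is a virtual drive letter implemented at the driver level, and the drive letters below are made up.

Code:
from pathlib import Path

# Hypothetical member drives that each hold part of the library.
MEMBER_DRIVES = [Path("D:/media"), Path("E:/media"), Path("F:/media")]

def pooled_listing(relative):
    """List the union of one folder across all member drives."""
    seen = {}
    for drive in MEMBER_DRIVES:
        folder = drive / relative
        if not folder.is_dir():
            continue
        for entry in folder.iterdir():
            # First drive wins on name collisions, as most pooling tools do.
            seen.setdefault(entry.name, entry)
    return sorted(seen.values(), key=lambda p: p.name)

for entry in pooled_listing("Movies"):
    print(entry.name, "->", entry)  # one merged view, whichever disk holds the file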
post #16 of 49 Old 08-10-2014, 11:23 AM - Thread Starter
Aryn Ravenlocke
Quote:
Originally Posted by wdeydwondrer View Post
It's not exactly friendly the first time or two. I would recommend using the trial period to set a test array up on your server, play around with the settings. Then you can also tweak settings to get the best performance on your system. Then purchase it and rebuild the array if you wish a clean setup. While not friendly, it's certainly not hard though. If you can read and follow the wiki, you'll be just fine.




Flexraid can do that with both their products. raidF with pooling or traid with pooling. And at least in traid, it's suggested to pre-load all your drives if you have a ton of media to add on. Then just create a parity after you build the array. Will be far faster.
It's almost tempting to go ahead and build using the 10 drives on my desk and call it done, but I'm sticking firm to building a fresh array and using the 10 drives I have as the primary back-ups in case of failure. They save me the nightmare of having to re-rip many hundreds of discs all over again; that peace of mind is worth the extra cost of building a fresh array.



If an array can be built with loaded drives though and there is no data-loss involved, I may go ahead and look at the trial version and use it on my current discs. That way, when I get the server built, I will already have an idea of what it is I am doing.

The question then becomes: t-RAID or f-RAID?

Is t-RAID overkill for a household media server? Does it really matter which?

post #17 of 49 Old 08-10-2014, 11:35 AM
ajhieb
Quote:
Originally Posted by Aryn Ravenlocke View Post
The question then becomes: t-RAID or f-RAID?

Is t-RAID overkill for a household media server? Does it really matter which?
It's not really a matter of overkill. One isn't really better than the other. They just serve different usage needs.

For a media server, either one will work. With RAID-F you'll have better write performance (which might be negated if all of your writing is done via your network), but you'll have to balance how often you do parity updates against how comfortable you are not having redundancy on new data. I think a lot of people choose to do parity updates once a week. If you go with t-RAID then you'll have diminished write performance, because parity is calculated in real time, but you won't have to schedule updates, as they become unnecessary.
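
To make that tradeoff concrete, here is a throwaway sketch of the two write paths, with dictionaries standing in for the drives. Real parity is an XOR-style computation rather than a copy; this only illustrates when protection happens, not how.

Code:
data_drive = {}     # filename -> contents
parity_drive = {}   # filename -> parity record (modeled here as a simple copy)
pending = []        # RAID-F only: files written since the last scheduled update

def write_raidf(name, payload):
    """Snapshot mode: writes run at full speed; new data is unprotected until the next update."""
    data_drive[name] = payload
    pending.append(name)

def scheduled_parity_update():
    """Run on a schedule (e.g. weekly): bring parity up to date for everything written since."""
    for name in pending:
        parity_drive[name] = data_drive[name]
    pending.clear()

def write_traid(name, payload):
    """Real-time mode: every write also touches parity, so it's slower but always covered."""
    data_drive[name] = payload
    parity_drive[name] = payload

write_raidf("new_movie.mkv", b"...")
print("protected yet?", "new_movie.mkv" in parity_drive)   # False until the schedule fires
scheduled_parity_update()
print("protected now?", "new_movie.mkv" in parity_drive)   # True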
post #18 of 49 Old 08-10-2014, 11:38 AM
wdeydwondrer
Basically sums it up. raid f is for cold files that just aren't going to change. I went with traid for my setup so I don't have to worry about scheduling parity and because I'm running my file server out of the same array. The write speeds are slower but a landing drive can mask that issue very effectively.
post #19 of 49 Old 08-10-2014, 12:39 PM
assassin
For those of you who are new to flex raid here is something to get you started...

http://assassinhtpcblog.com/server-flexraid/


post #20 of 49 Old 08-10-2014, 01:22 PM
wdeydwondrer
Quote:
Originally Posted by assassin View Post
For those of you who are new to flex raid here is something to get you started...

http://assassinhtpcblog.com/server-flexraid/
It's a decent read-through to start, but it doesn't apply if you buy traid. Different setup.
post #21 of 49 Old 08-10-2014, 01:29 PM
assassin
Quote:
Originally Posted by wdeydwondrer View Post
It's a decent read-through to start, but it doesn't apply if you buy traid. Different setup.
I personally don't recommend t-raid for an htpc server unless you specifically need it for some reason. F-raid is much better for media storage, imo.


post #22 of 49 Old 08-10-2014, 01:36 PM
ajhieb
Quote:
Originally Posted by assassin View Post
I personally don't recommend t-raid for an htpc server unless you specifically need it for some reason. F-raid is much better for media storage, imo.
Just out of curiosity, are there any advantages for raid-f on a media server besides the improved write performance?

post #23 of 49 Old 08-10-2014, 01:58 PM
wdeydwondrer
Quote:
Originally Posted by ajhieb View Post
Just out of curiosity, are there any advantages for raid-f on a media server besides the improved write performance?
There are no advantages to raidf really beyond the increased write speed. They're both built on the nzfs stack and are subprojects of each other. One is just using real time parity
post #24 of 49 Old 08-10-2014, 02:09 PM
assassin
Quote:
Originally Posted by ajhieb View Post
Just out of curiosity, are there any advantages for raid-f on a media server besides the improved write performance?
T-raid seems to be more buggy for many people (at least the last time I checked their forums). You also can't copy a smaller drive to a larger drive. You also can't exclude certain content if wanted/needed. And for static data like movies and media I want stability and I want the option of being able to use whatever hard drive I want that's lying around as speed really isn't that important with the array. Also, for me pooling is a must have. F-raid just works.

There are a few wiki articles but to me this is the bottom line from one of them...

You should use f-raid over t-raid when:
"When the data to be protected is exclusively static or changes very rarely. In this particular case, Snapshot RAID is simply more efficient."
post #25 of 49 Old 08-10-2014, 02:12 PM
assassin
Quote:
Originally Posted by wdeydwondrer View Post
There are no advantages to raidf really beyond the increased write speed. They're both built on the nzfs stack and are subprojects of each other. One is just using real time parity
This is incorrect, imo. When I get time I will post the wiki that lists the pros and cons of each. With static data the developer (and I) favor f-raid. And correct me if I am wrong, but I don't think t-raid offers pooling.


post #26 of 49 Old 08-10-2014, 02:14 PM
assassin
Found it: http://www.flexraid.com/faq-items/tr...r-file-system/


post #27 of 49 Old 08-10-2014, 02:44 PM
wdeydwondrer
traid has pooling built in, not as an add-on buy option. I also don't see why you would want to exclude files from traid, as it's all real-time parity; the entire point of excluding files from raidf is so you don't have to deal with parity issues.

The basic issue is the write speed. The other concerns with parity are not really applicable in this case. How often will you be pulling and replacing your drives? If the answer is not often to never, then the point is moot.

EDIT>> assassin, have you actually run a traid setup? I've used both in the last 3 months.
post #28 of 49 Old 08-10-2014, 02:56 PM
assassin
Quote:
Originally Posted by wdeydwondrer View Post
traid has pooling built in, not as an add-on buy option. I also don't see why you would want to exclude files from traid, as it's all real-time parity; the entire point of excluding files from raidf is so you don't have to deal with parity issues.

The basic issue is the write speed. The other concerns with parity are not really applicable in this case. How often will you be pulling and replacing your drives? If the answer is not often to never, then the point is moot.

EDIT>> assassin, have you actually run a traid setup? I've used both in the last 3 months.
Yes. Have run traid but it's been a while.

Good to know about the pooling. That's a must have for me.

I might want to exclude my temp folder (and others) depending on what I am doing at the time.

Otherwise, as the wiki itself notes, there are definitely perks when using f-raid for static data like movies, music, etc., which I would imagine is what most (please note I did not say all) people on an htpc forum are using their server for. I have been using my server for years, and other than the occasional power outage it's an appliance that I don't touch. It sends me an email each day confirming that it updated, and it plays and protects my static data. For me it's ideal. Ymmv.


post #29 of 49 Old 08-10-2014, 03:28 PM
ajhieb

Quote:
Originally Posted by assassin View Post
T-raid seems to be more buggy for many people (at least the last time I checked their forums). You also can't copy a smaller drive to a larger drive. You also can't exclude certain content if wanted/needed. And for static data like movies and media I want stability and I want the option of being able to use whatever hard drive I want that's lying around as speed really isn't that important with the array. Also, for me pooling is a must have. F-raid just works.

There are a few wiki articles but to me this is the bottom line from one of them...

You should use f-raid over t-raid when:
"When the data to be protected is exclusively static or changes very rarely. In this particular case, Snapshot RAID is simply more efficient."
Thanks for the info.

post #30 of 49 Old 08-10-2014, 03:40 PM
Dropkick Murphy
Quote:
Originally Posted by Aryn Ravenlocke View Post
though UnRAID does not seem to be friendly should a drive fail
If a drive fails you just put in a new one and let parity rebuild it. If the parity drive fails you just put in a new one and rewrite the parity. If the parity and a data drive fail at the same time (highly unlikely) you would lose only the information on the data drive.

What's so unfriendly about that?

My unRAID server has been running 24/7 for over 3 trouble-free years.