Adding Drives to Windows Storage Spaces NAS - AVS Forum

post #1 of 44 - 07-27-2014, 11:36 PM - Aryn Ravenlocke (Senior Member, Thread Starter)
Adding Drives to Windows Storage Spaces NAS

Hey everyone, I'm only about a week or so away from finally putting together my NAS. Synology initially won out as my solution, but when I attempted to contact Synology with some technical questions, my experience was less than pleasant - and that came after an unacceptably long 12-day wait for a response. So, I've decided to go with Windows Storage Spaces (I think). I do have one very important question about how the slightly updated Windows 8 version works when it comes time to add drives.

Initially, I am starting with 12 x 3 TB drives. When I was reading the literature on Storage Spaces, it appeared that once my drives started to push capacity, I could simply add another 3 TB drive. However, in a completely unrelated conversation, it came up that Storage Spaces works such that however many drives a person uses to create the initial pool, that same number of drives must be added to expand the storage and actually have it be used. Apparently, if I were to add only one drive, it would show as part of the system, but it wouldn't actually get written to.
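
If I'm reading the documentation right, this is the "column count" behavior: a parity space created on N drives is striped across N columns, and that count is fixed for the life of the virtual disk, so new capacity is only usable in sets of N drives. Roughly what I mean in PowerShell - the cmdlets are the standard Storage Spaces ones, but the pool/space names are invented and I haven't actually run this yet:

Code:
# Pool four blank drives (names here are made up)
$disks = Get-PhysicalDisk -CanPool $true | Select-Object -First 4
New-StoragePool -FriendlyName "MediaPool" `
    -StorageSubSystemFriendlyName "Storage Spaces*" `
    -PhysicalDisks $disks

# A parity space created now is striped across 4 columns (3 data + 1 parity);
# the column count is baked in at creation and can never be changed
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Media" `
    -ResiliencySettingName Parity -NumberOfColumns 4 `
    -ProvisioningType Thin -Size 40TB

# Confirm what the space was created with
Get-VirtualDisk -FriendlyName "Media" | Select-Object FriendlyName, NumberOfColumns

If that's right, then once the original drives fill up, a lone new drive can't hold a complete stripe by itself, which would explain the "it shows as part of the pool but never gets written to" behavior described to me.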

Is this really how it works? If so, I think I might just initiate the system with 4 drives, then add 4 more as "expansions" two more times until all 12 are in place. That way, when the time comes to expand again, I only need to come up with four drives instead of 12. An additional 4 drives should be more than sufficient until the day I become independently wealthy. After converting everything to uncompressed MKVs, I should be sitting at around 26 TB of media, so the NAS will be close(ish) to initial capacity when I am done loading it.

Can anyone weigh in before I spend a month-plus building out the NAS, only to find I screwed up?

post #2 of 44 - 07-28-2014, 12:33 AM - Dark_Slayer (AVS Special Member)
If I were starting from scratch, the list of options I would consider, in order, would be:
  • Ubuntu with zfs
  • freenas
  • windows w/ flexraid
  • unraid
  • ubuntu with snapraid
  • a few other science-project-zfs server builds
  • a commercial unit where I didn't ask the actual tech support company for support (e.g. forums are always faster / typically more knowledgeable than going to synology/qnap/etc support)
  • Just using all the drives individually without any raid
  • then maybe storage spaces

post #3 of 44 - 07-28-2014, 09:02 AM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by Dark_Slayer View Post
If I were starting from scratch, the list of options I would consider, in order, would be: [...]
Using all the drives individually is something I am trying to avoid. Space and file management are things I want to be able to stop paying close attention to once I get this thing built out.

What would be the issue with Windows Storage Spaces?

post #4 of 44 - 07-28-2014, 10:49 AM - jhoff80 (AVS Special Member)
I believe there were some major performance issues with Storage Spaces early on, but I don't know if that's gotten fixed yet.


Personally, I just use DriveBender on a Windows Server (2012r2 Essentials) in order to pool drives and provide some file duplication.

post #5 of 44 - 07-28-2014, 03:15 PM - Aryn Ravenlocke (Senior Member, Thread Starter)
I just took a look at the Ubuntu/ZFS combination. Seeing as I haven't worked with a command line since the early-to-mid '80s (I think I know how to open one on my PC), I'm fairly certain I do not have the technical wizardry to go that route.

post #6 of 44 - 07-28-2014, 04:40 PM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by Aryn Ravenlocke View Post
I just took a look at the Ubuntu/ZFS combination. [...]
Yup, glad I ran a test on the old PC. I got Ubuntu installed, but that meant the module/driver (or whatever it's called) that made the wireless device work wasn't seen anymore, and I haven't the faintest idea how to do all that kernel stuff to make it work again. Looks like I am sticking with a Windows-based solution. Given that Storage Spaces is native to 8.1, it would seem to make sense. I hadn't heard of any issues, and no one brought any up when I was spit-balling for answers on how to build out my NAS.

post #7 of 44 - 07-28-2014, 04:53 PM - Dark_Slayer (AVS Special Member)
You typically wouldn't run a wireless server, or worry about supporting legacy hardware (which Windows 8.1 won't do all that well either), on a new build.

That being said, my comments were truly just that. I would look at those in that order because I think the best solutions are at the top. They may not be the best for you in that order

If you are uncomfortable with ssh and terminal stuff (which in reality I find to be more fun, efficient, etc) then unraid or flexraid+windows is a great choice

Seems you are already running 8.1, so the easiest choice would likely be flexraid raid-f.

Read page 2 - the section titled "Looking for trouble: devil's in the details" - for the problems that would drive me insane with storage spaces: http://arstechnica.com/information-t...when-it-works/

I would have the same issues with storage spaces that I have with WMC and WHS. If you want to use them exactly the way they were designed and never deviate, then you will probably be happy, but as soon as you stray from the path, headaches abound.

post #8 of 44 - 07-28-2014, 06:36 PM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by Dark_Slayer View Post
You typically wouldn't run a wireless server, or worry about supporting legacy hardware (which Windows 8.1 won't do all that well either), on a new build. [...]

I hadn't planned on running the server wirelessly. But before possibly building a server out using Ubuntu, I wanted to see if I could figure out how to even operate inside Ubuntu, and that particular machine relies on wireless for Internet access. The PC I am currently using for browsing the Internet and doing university projects is indeed a Windows 8.1 machine. The server I will be building is a blank slate; I can put anything I want on it. I just need to be able to manipulate data on it and make it go once I do put an OS on there.

Everyone I talk to raves about how incredibly lightweight Linux is and how that significantly improves performance. That's rather appealing in a server. I'm not particularly married to Windows; it's just that outside of Windows or Mac, I am rather out of my realm - and Macs seem to make horrible HTPCs/servers.

post #9 of 44 - 07-30-2014, 12:40 PM - StardogChampion (AVS Special Member)
Amahi might be worth a look: https://www.amahi.org/

I'd put FreeNAS up at the top of the list too.

I use WHS2011 + StableBit and it's great. But WHS2011 isn't available for $40 like it was a year and a half ago when I rebuilt my home server.

 

 

post #10 of 44 - 07-30-2014, 01:12 PM - Dark_Slayer (AVS Special Member)
To give it another try (without having to roll in wireless drivers and junk), just load https://my.vmware.com/web/vmware/fre...are_player/6_0 on your 8.1 box.

For a pretty good linux server guide look here http://www.havetheknowhow.com/

The biggest benefit I tend to see is lower RAM requirements in HTPCs, but with a zfs server that goes out the window. You can also ssh in from the terminal on whatever Mac OS X machines you have on the same network: just open the terminal, type "ssh username@ip.address.of.server", and you're off and running.

The other big benefit is zfs in general. It's highly regarded for data integrity, but you probably need a primer before heading down this path on ubuntu alone (or freebsd). The primary reason I'd recommend going the ubuntu route is the prevalence of applications already ported to debian, like subsonic, plex, myth, etc. Like stardog mentioned, freenas is a more user-friendly, easy GUI way to use zfs. Here is a beginner zfs guide from the freenas forums: https://drive.google.com/file/d/0BzH...it?usp=sharing
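
To give you a feel for how little typing is actually involved once the OS is up: creating a double-parity pool out of six drives is basically one command. The device names below are only examples - check yours with lsblk before running anything like this:

Code:
# Double-parity (raidz2) pool named "tank" across six whole disks
sudo zpool create tank raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Carve out a filesystem for media, then sanity-check the pool
sudo zfs create tank/media
zpool status tank

After that it's just a matter of sharing tank/media out over the network.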

I (and a lot of avsforum's media server crowd) just use flexraid atop windows. When I wanted a media server, I already had 9 TB of movies, TV shows, and music, so it would have been costly to replace that space with empty drives and still get to the amount of space I was shooting for - flexraid worked out nicely since it lets you use data-full drives. Snapraid could also work well in a linux server; it would give you snapshot parity similar to flexraid's raid-f.
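
If you want a taste of how simple snapraid is, the whole setup is one short config file plus one command. The paths below are just examples:

Code:
# /etc/snapraid.conf - one parity drive protecting three data drives
parity /mnt/parity1/snapraid.parity
content /var/snapraid.content
content /mnt/disk1/snapraid.content
data d1 /mnt/disk1/
data d2 /mnt/disk2/
data d3 /mnt/disk3/

Then "snapraid sync" computes parity over whatever is already on the drives (data-full drives are fine), and you re-run it whenever you add files - that's the "snapshot" part.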

post #11 of 44 - 07-30-2014, 04:01 PM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by Dark_Slayer View Post
To give it another try (without having to roll in wireless drivers and junk), just load https://my.vmware.com/web/vmware/fre...are_player/6_0 on your 8.1 box. [...]

I appreciate the links. I'll have to check them out after I get some work (shudder to think of it) done this weekend. Obviously, I am still rather new to a great deal of this beyond the most basic concepts. Flexraid suddenly sounds much more appealing: I have 9 external drives sitting on my desk, nearly filled with media data, and the ability to not have to replace them would be nice. Although I had considered keeping them around after migrating everything onto the new NAS so they could act as an actual backup (versus simply RAID). At the very least, they would save me from having to re-rip many hundreds (or even a couple thousand) discs if there was a NAS failure.

post #12 of 44 - 07-30-2014, 05:04 PM - DotJun (Advanced Member)
Since you were entertaining a Synology system, I am assuming you are not financially limited, so I'm going to suggest you just get a hardware RAID card and be done with it.

post #13 of 44 - 07-30-2014, 06:35 PM - Dark_Slayer (AVS Special Member)
I don't see why you would want to go with hardware RAID.

In industrial applications it's pretty painfully obvious why you'd do just about anything to increase raw I/O, but why for a home media server? It doesn't speed up the time it takes me to rip, and everything else I do is quite nicely automated. I never sit staring at the screen thinking, "I really wish my scratch-disk-to-array transfers weren't limited to 100 MB/s."

Different strokes I suppose, but I have to wonder what your media acquisition must be like to want better-than-bare-disk performance.

OP, if you do want to go hardware RAID, go ahead and budget for a backup controller (IMO).

post #14 of 44 - 07-30-2014, 07:46 PM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by Dark_Slayer View Post
I don't see why you would want to go with hardware RAID. [...]
I'll be skipping hardware RAID; I'm not tech-savvy enough to fix it if/when the card goes bad. Also, as far as I can see, the biggest difference is in write speed, and I've already resigned myself to a 5-6 week write-out once I start loading up the NAS. I've had no illusions since day one that the process was going to be quick, but once it's done, it's done (unless something fails and I have to rebuild). I'm far more concerned with being able to support multiple 1080p streams than with how fast I can write to the disk farm. Once the farm is loaded, it will probably get updated 1-2 times a week, which will keep write time to the barest of minimums.
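
(Back-of-the-envelope, for anyone curious: even a maximum-bitrate Blu-ray rip plays back at around 40 Mbps, which is about 5 MB/s, so five simultaneous 1080p streams come to roughly 25 MB/s - a quarter of the ~100 MB/s a single modern drive manages on sequential reads. Reads really shouldn't be the bottleneck.)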

The Synology system was not obscenely expensive. In fact, I only ran into two negatives. The first was that the disks would only ever be readable by Synology machines - though that's only a problem should I ever decide to move on/trade up to something else. The bigger negative was that the customer service was crap and I couldn't get a straight answer to some simple (in my opinion) questions. So, I'm going with two Media Sonic 8-drive cases (non-RAID) and looking for a software solution. I'm hoping to come in around $3000 all told. That might wind up being a bit on the low side, but I'd at least like to come fairly close. If I go over, I want it to be for a strong reason that will pay off down the road.

From what I am hearing here, it sounds like Storage Spaces might actually be riskier/more problematic than I was led to believe.

post #15 of 44 - 07-31-2014, 01:35 AM - DotJun (Advanced Member)
Quote:
Originally Posted by Dark_Slayer View Post
I don't see why you would want to go with hardware RAID. [...]

I just find them simpler to set up and admin. I haven't tried all the software implementations - only a handful of them, in fact - but the ones I did try just annoyed me when it came time to dump data onto the server.

Again, it seemed like the OP wasn't finance-limited, and since options other than the WSS he originally wanted to implement were being handed out, I added my own.

I don't rip directly to my file server; I copy media over once a week. I run RAID level 6, but that is for my own reasons, and I'm not saying the OP should run it.

post #16 of 44 - 07-31-2014, 11:02 AM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by DotJun View Post
I just find them simpler to set up and admin. [...]
While I wouldn't call myself finance-limited, $3000 is not exactly big money to make my NAS happen. I'm going to be up against that budget no matter which direction I go.

I will be adding to the NAS much like you do, adding data probably once a week once the initial 26 TB data dump is done. Write times don't bother me since the big dump will only be happening once.

Storage Spaces seemed a logical conclusion, but I'm starting to wonder if I need to reconsider that approach.

post #17 of 44 - 07-31-2014, 11:13 AM - Foxbat121 (AVS Addicted Member)
It's kind of comical that the OP asks a question about Windows Storage Spaces, but none of the answers ever touched on it; they offer alternatives instead. My question is: if you never experienced WSS, why bother suggesting the alternatives?


I personally am really intrigued by WSS but have no need for a NAS, nor the necessary OS to support it. The few Win8 PCs I have each have only one HDD, so no need for WSS there. I run WHS 2011, which is quite old, and network speed, even on a gigabit backbone, is my biggest limiting factor for accessing files on its shares.
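
(For the numbers: gigabit tops out at 125 MB/s on paper - call it ~110 MB/s in practice - which is right around what a single modern drive delivers on sequential reads, so the network and a lone disk are roughly matched anyway.)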

post #18 of 44 - 07-31-2014, 12:49 PM - Phrehdd (Advanced Member)
I have learned over time that often the best help comes not from the companies that make the products but from the forums associated with them. I did have one firmware upgrade problem with my NAS units, and I posted about it in the maker's forums. The end result was that support not only contacted me but (with my permission) remoted into my NAS units and attempted to fix the problem; one of the engineers was all the way in Asia. Alas, they could not fix it, and I simply reverted to the last firmware from before the problem occurred. I was lucky. (The maker was QNAP, and the models a 559 Pro and a 469L, with 5 x 4 TB Seagate MV drives and 4 x 3 TB WD Red drives respectively.)

NAS for media storage doesn't require a great deal. In fact, one doesn't need the fastest NAS, just one that remains consistent and does reasonably well on larger file operations. I think for my next adventure into NAS, I'll consider using either a small PC or a Mac Mini sitting on top of a larger drive storage box. If I go with the latter, I can simply add share and DLNA services to the Mini and let it do double duty as a head for the storage and a usable computer. Since OS X doesn't provide RAID beyond 0 and 1 (0+1 and 1+0 as well), I'd move to a software RAID solution and attach the storage via USB 3 or Thunderbolt (possibly eSATA). In some respects this is nothing more than a shared DAS setup, with better control than some NAS units and certainly (if one doesn't know Unix commands) easier maintenance. If I am a bit more adventuresome, I may do exactly the same shared DAS setup but with Linux... most likely in early 2015.

I'll just say that in the past the best performance I got from RAID was with a hardware solution, but again, for media files raw speed is not really important - it's handy only when it comes to populating the drives with files.

post #19 of 44 - 07-31-2014, 07:40 PM - Dark_Slayer (AVS Special Member)
Quote:
Originally Posted by Foxbat121 View Post
It's kind of comical that the OP asks a question about Windows Storage Spaces, but none of the answers ever touched on it [...]
I had recently looked into it myself and chose not to use it, primarily due to what I saw in the arstechnica article I linked in post #7.

post #20 of 44 - 08-01-2014, 11:58 AM - DotJun (Advanced Member)
Quote:
Originally Posted by Foxbat121 View Post
It's kind of comical that the OP asks a question about Windows Storage Spaces, but none of the answers ever touched on it [...]

I tried out WSS when Win8 first came out, and the write performance was abysmal compared to hardware RAID. I also didn't like the way WSS handled new data after adding a new drive to the pool.

post #21 of 44 - 08-01-2014, 12:03 PM - StardogChampion (AVS Special Member)
Quote:
Originally Posted by Foxbat121 View Post
It's kind of comical that the OP asks a question about Windows Storage Spaces, but none of the answers ever touched on it [...]
I know, it's weird; it's like people think this is a discussion forum where they can share their experiences with their server setups so the OP has a large pool of information to pull from when making a decision about setting up a server.

People need to stop it.

Really.


 

 

post #22 of 44 - 08-01-2014, 12:46 PM - Foxbat121 (AVS Addicted Member)
Quote:
Originally Posted by Dark_Slayer View Post
I had recently looked into it myself and chose not to use it [...]
How accurate/relevant is that article (dated 2012) when it comes to 8.1? Did Microsoft make any enhancements to WSS?

post #23 of 44 - 08-01-2014, 12:49 PM - Foxbat121 (AVS Addicted Member)
Quote:
Originally Posted by DotJun View Post
I tried out WSS when Win8 first came out, and the write performance was abysmal compared to hardware RAID. [...]
I think that's exactly what OP stated and he was not sure if Windows 8.1 changed all that.

post #24 of 44 - 08-01-2014, 12:52 PM - Foxbat121 (AVS Addicted Member)
Quote:
Originally Posted by StardogChampion View Post
I know, it's weird; it's like people think this is a discussion forum where they can share their experiences [...]

The OP is not asking for advice on building a server; he asks a specific question about the viability of WSS. It would be very helpful if someone with WSS experience would pitch in with either a positive or negative experience so that we can all learn to either embrace it or avoid it. If you never tried it, how do you know it is better or worse than other solutions?

post #25 of 44 - 08-12-2014, 06:44 PM - Elill (AVS Special Member)
Good but long article on it here:

http://betanews.com/2014/01/15/windo...raid-for-good/

I am looking at replacement options for a 4-bay QNAP... undecided at this stage.

post #26 of 44 - 08-16-2014, 11:44 PM - DotJun (Advanced Member)
Quote:
Originally Posted by Elill View Post
Good but long article on it here: http://betanews.com/2014/01/15/windo...raid-for-good/ [...]

Great article. Unfortunately, his simple test ended the same way mine did as far as a parity array went: abysmal write speed. Even when I connected 7 drives, write speed was still very low. It's as if write caching were disabled.

post #27 of 44 - 08-17-2014, 12:19 AM - Aryn Ravenlocke (Senior Member, Thread Starter)
Windows Storage Spaces was a finalist for the home media server, but they got two things wrong that I just couldn't get around. The first was that, since my initial pool was going to be four drives, I was going to have to add four drives each time I wanted to expand the array. The second was that SS handles data at the block level and not the file level: I don't want too many of my files split across multiple drives, and I don't like that handling things at the block level can result in wasted, unused space.

If MS keeps on this path, though, and addresses issues like that instead of just chucking the entire thing, Storage Spaces could be an ideal solution for many people.

post #28 of 44 - 08-17-2014, 12:50 AM - DotJun (Advanced Member)
Quote:
Originally Posted by Aryn Ravenlocke View Post
Windows Storage Spaces was a finalist for the home media server, but they got two things wrong that I just couldn't get around. [...]

Striping data across the drives is the norm for parity. The question is, does SS allow you to change the "block" size?

As far as SS not handling data relocation/rebalance on new drives, yes I must agree that they need to fix that ASAP.

I saw an old post - I think it was yours, in fact - about having to add drives in the same quantity as was used when first creating the array, which I found to be odd behavior. Can you not just thin-provision the total amount of calculated space based on the drives you want to use times your SATA/SAS ports?
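
To be concrete about what I mean - names and sizes invented, and I haven't verified this behavior on 8.1 - create the space thin at the capacity the fully populated chassis would eventually give you, instead of sizing it to the starting drives:

Code:
# Thin-provision the parity space at the size a fully loaded chassis would
# provide, rather than at the size of the four starting drives
New-VirtualDisk -StoragePoolFriendlyName "MediaPool" -FriendlyName "Media" `
    -ResiliencySettingName Parity -ProvisioningType Thin -Size 36TB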

I have changed my mind about SS, though, as far as using it for multimedia storage. Yes, the write speeds are slow, but the read speeds are great and only get faster with the addition of more drives. Since most people here seem to care less about write speed than read speed for their media storage, SS is a great solution IMO, especially since you get true real-time protection AND won't have to worry about bitrot.

post #29 of 44 - 08-17-2014, 10:16 AM - Aryn Ravenlocke (Senior Member, Thread Starter)
Quote:
Originally Posted by DotJun View Post
Striping data across the drives is the norm for parity. The question is, does SS allow you to change the "block" size?
The blocks are all 256 MB, and their size cannot be altered.

Striping data for parity purposes is not an issue; spreading my 3,000 titles out across multiple drives is fine. Breaking individual titles up across multiple drives gets kind of iffy, though. One feature I really like in some of the alternatives that write at the file level instead of the block level is that, with a simple check box, they let me prohibit splitting files across the array: every file is complete on whatever drive it sits on. There are at least two big reasons I prefer this. First, if there is a critical failure, the only files that need to be replaced are the ones on that drive, not potentially 80% of the array. Second, one movie or television show = one drive spinning, no more. When I used to have 3 HDDs sitting on my desk, I didn't sweat them all running sometimes, but as my array grew larger and larger, individual disk spin-up/down became a much bigger issue.

Quote:
Originally Posted by DotJun View Post
As far as SS not handling data relocation/rebalance on new drives, yes I must agree that they need to fix that ASAP.
By all accounts, WSS is still being aggressively developed/designed. One can only hope this is the first thing to be addressed.

Quote:
Originally Posted by DotJun View Post
I saw an old post - I think it was yours, in fact - about having to add drives in the same quantity as was used when first creating the array, which I found to be odd behavior. Can you not just thin-provision the total amount of calculated space based on the drives you want to use times your SATA/SAS ports?
Short answer: no. Besides, thin provisioning doesn't really apply when the only use the pool is being put to is media archiving. For media storage purposes, physical space is the only real concern: either the drive has enough room for that 43 GB BD rip, or it doesn't.

Here's a link that explains it fairly simply, using a pair of illustrations to show how/why the number of drives in the initial parity pool is such a big deal.


Quote:
Originally Posted by DotJun View Post
I have changed my mind about SS, though, as far as using it for multimedia storage. [...]
Like you, I wasn't overly concerned by the initial write times. If you have a large initial library, the expectation going into the project should be that building the array/library will take some time. It's not about how long the initial build takes; it's about making sure that the drives can meet the server's read/play needs fast enough afterward. Real-time protection means there would be no future spans of 15-20+ hours for parity to be reconfigured; a one-time initialization doesn't bother me, though.

WSS using ReFS sounds ideal at first glance, and even at second glance. If I hadn't been hanging out here in these forums, I would never have encountered any Windows skeptics or taken the time to investigate deeper. While the developers of WSS don't hide the fact that it operates at the block level instead of the file level, or how the columns/drive-addition system works, they do keep it rather in the background. The average Joe Six-pack with no tech experience isn't likely to know what any of that means; he'll just see the highlighted features and the very informative videos and articles that show how easy WSS is to use. The "surprise" comes later.

My new server is currently in the early stages of construction. I'm waiting on an IT guy and the Universe to do their parts, and then I'll be loading my 30 TB of media up. Because of block handling and the way drives need to be added under WSS, I'll be going with one of the FlexRAID solutions instead. If WSS ever fixes the three issues - re-balancing data onto new drives (something that needs to happen quickly if WSS is going to survive), the ability to restrict files to single drives (working at the file level instead of the block level), and single-drive addition to a parity array - then I would absolutely consider moving over. There are many things I really like about WSS, but those three things make it hard for me to adopt it right now.

post #30 of 44 - 08-18-2014, 02:23 AM - DotJun (Advanced Member)
256 MB blocks don't sound so bad for a media server holding huge ISOs. It would be bad for me, though, since I have a lot of compressed files. How disappointing that MS does not let you choose the block size.

My problem with non-real-time parity implementations like flexraid is that if a disk goes out, I lose access to all the data on that disk until I get it copied back over from a backup.

I myself use hardware RAID. I was very interested in WSS - well, OK, really ReFS - due to never needing chkdsk again. Bye-bye bitrot!
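
(For anyone unfamiliar: ReFS checksums metadata - and file data too, when integrity streams are enabled - and when it sits on a mirrored or parity space it can repair a bad copy from the redundant one automatically, which is what makes the no-chkdsk claim possible.)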

Btw, I also have flexraid on another computer, and I don't know if it's just my setup, but I found some bad behavior with it: whenever I work on a file that is on a flex array, my read/write times are horribly slow. This should never happen, as flexraid does not do on-the-fly parity calcs, but there it is. An example: open a file that resides on a flex array using tsMuxeR, have tsMuxeR spit the file out onto a non-flex HDD, and notice how long it takes to finish the operation.

Edit: btw, breaking files up across multiple drives is exactly how parity works when the goal is to be able to use ALL of your data even with a drive failure.