Guide To Building A Media Storage Server - Page 193 - AVS Forum
post #5761 of 7909 | 03-09-2010, 02:37 PM
mitgib (Member)
Couple of quick questions. It took a couple of days to get caught up on this thread, but I got a few ideas, and before jumping to any conclusion, a little counter-thought might prove useful.

My current media server is WHS with an OS drive on the motherboard controller, then a 3ware 9690SA to a Chenbro CK12803 with 15 1TB drives in RAID5. I'm exporting it in 2TB slices to the OS since WHS is fairly hopeless with large arrays.

I'm down to 4TB of free space, and at the rate I purchase Blu-rays, expansion is not far off. I caught up on this thread hoping to get an idea of who is selling drives cheap. That didn't happen for me with 1TB drives, but I did notice that 2TB drives seem to be dropping in price quickly, and per-byte costs are now well under 1TB drives. So while I would need to expand to a second chassis and populate it with 2TB drives, I could then remove the slices from the original RAID5 array, WHS would migrate the data to the new 2TB-drive array, and I could rebuild/expand the original chassis with more 2TB drives and sell off the 1TB drives for whatever I can get.

Now this will keep my data intact while migrating it, but it also needs to (hopefully) solve an issue I've had ever since I built this box: that dang balancing act WHS does every hour. If it happens while I am watching a movie, the stream stutters for a short period and then goes back to normal; if I move the data to the local node playing it, no issue, so it is definitely WHS. My thought was: could/should I build this new array of 16 2TB drives as RAID6 or RAID10 (or two 8-drive arrays, for that matter)? I'm looking for a smoother stream and, obviously, more space.

I just received a Chenbro UEK, and before I order anything else, I just want to explore my options.
post #5762 of 7909 | 03-09-2010, 03:15 PM
MiBz (AVS Special Member)
Quote:
Originally Posted by kapone:

Nah... my stack is better... and I have two of these.....


Now that's what I'm talkin' about!
And the Banker's Box sitting on top of it is so 1990s storage
post #5763 of 7909 | 03-09-2010, 03:55 PM
MiBz (AVS Special Member)
Quote:
Originally Posted by mitgib:

[snip] My thought was: could/should I build this new array of 16 2TB drives as RAID6 or RAID10 (or two 8-drive arrays, for that matter)? I'm looking for a smoother stream and, obviously, more space.

I just received a Chenbro UEK, and before I order anything else, I just want to explore my options.

There's a much better way, without having to break things up into 2TB LUNs, and you'll also avoid anything to do with WHS's storage pool for Video storage.

Build your new array using 2TB drives.
Remote Desktop into WHS, right-click on My Computer and select Manage.
Select Disk Management and you'll see the new array volume, which isn't initialized and shows as unallocated.

Right-click on the grey area of the volume and initialize it as a GPT volume.
Format it NTFS with 64K clusters and assign a drive letter above D (e.g. R).
Give it a name (e.g. RaidStorage).
Exit from Disk Management.

Go to My Computer and the R drive now appears as a drive letter.
Open it.
Create a new folder; call it Movies or RaidStorage (or whatever you like).

Open the link to WHS's folder shares and right-click on the Video share.
Note the share security settings for each group:
Administrators
Creator Owner
RO_5 (WHS\RO_5) - the group containing users with read-only access
RW_5 (WHS\RW_5) - the group containing users with read-write access
System

Now go back to the folder you created on the array drive.
Right-click, select Share, and set the share security to match the Video settings noted above.

Now when you access WHS from any network client, you will see a share called Movies (or whatever you called it) that can be accessed the same as all the other shares.

The point of assigning the same share rights and groups as WHS's Video folder is that any users you add, or any changes you make in the WHS console to users with access to Video, will also apply to this folder, so you'll never need to do it manually.

Last step in Remote Desktop:
Open the WHS shared folders link and open the Video folder.
In My Computer, open the drive letter you assigned to the RAID array.
Now drag/copy the movies from the WHS Video share to the RAID array.

You can also temporarily do the same for all other data that's on the WHS drive pool.

Once the drive pool is empty, you can start removing each 1TB LUN from WHS's disk management, but before you do this, make sure you have at least one single 1TB drive left in the drive pool that WHS can move data like client backups to when it prepares to eject the RAID LUNs.

At that point you have all your video files safely on the RAID array outside the WHS pool, and you can still use 2, 3, 4 x 1TB drives (with duplication on) for your pics, music, documents and whatever else you want.
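For anyone who'd rather script those initialize/format steps than click through Disk Management, here's a minimal command-line sketch. It assumes the new array shows up as disk 1, that you want drive letter R with a 64K-cluster NTFS volume, and that the folder/share is called Movies; the RO_5/RW_5 share permissions still need to be matched to the Video share afterwards, exactly as described above.

    diskpart
    DISKPART> select disk 1
    DISKPART> convert gpt
    DISKPART> create partition primary
    DISKPART> assign letter=R
    DISKPART> exit

    rem 64K-cluster NTFS format, then a folder and a basic share
    format R: /FS:NTFS /A:64K /V:RaidStorage /Q
    md R:\Movies
    net share Movies=R:\Movies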
post #5764 of 7909 | 03-09-2010, 06:18 PM
pclausen (AVS Special Member)
So I've been trying to get my 1680x controller to see my E1 for the last couple of weeks, and I just don't think it's going to happen.

I've had both my TQ and E1 on the bench for some time now. The 1170 and all its 24 drives have been pulled, and I have tried the 1680x in all my PCIe slots by itself; no go as far as the E1 being picked up.

I have tried with drives in the E1 and without. I have tried booting the E1 before the TQ and vice versa; no go.

I've pulled the 1680x and left it out for several days with all power disconnected from both the TQ and E1.

I'm currently using a 1-meter 3ware SFF-8088 to SFF-8088 cable and have a 2-meter 3ware cable on order. However, I'm not at all optimistic that it will fix things.

Also, the 1680x is clearly communicating with the E1: after the E1 is powered on, all the drives in the enclosure become active (the blue activity LEDs blink constantly); then, when the TQ is powered up, the 1680x will "ping" each drive and turn off the blue LED, and then the 1680x BIOS lets the OS boot.

So I'm ready to move to Plan B, which entails picking up a pair of SFF-8088 to SAS breakout cables. This way I can install my E1 drives in my TQ enclosure and bypass that ^&*% SAS expander in the E1 enclosure altogether. My hope is that this will allow me to bring my 6x2TB RAID6 array back online.

If that works, my next step would be to pick up a 3ware 9690SA-4I4E controller and a bunch of Hitachi 2TB drives, and hope the 3ware will play nice with the E1. If so, once the new RAID has been created, I'd copy over all the data from the old E1 drives mounted in the TQ via breakout cables.

If the 3ware won't work with the E1 either, I guess I'll need to re-evaluate my options, and talk to SuperMicro about what is actually supposed to work with the SAS expanders.

The really frustrating part is that I had no issues for a long time until I decided to power down my system...
post #5765 of 7909 | 03-09-2010, 06:26 PM
xuniman (Senior Member)
pclausen - I have a 3ware 9690SA-4I4E with BBU that I'm not currently using. I originally got it for testing with a Chenbro expander and wasn't happy with the results, so I was going to put it on eBay. If you want to give it a try, send me a PM and we can probably work something out.
post #5766 of 7909 | 03-09-2010, 06:56 PM
stevetoney (Senior Member)
pclausen,

Sorry your problem persists. I modeled everything I'm doing after your setup and advice. So far, after the hiccups, my E1 and 1680ix are still stable: multiple power-offs, 10TB copied, multiple volume checks. I have another 2TB drive coming to expand the current 5-drive array.

My last hiccup happened during an expansion; I have since expanded the set made of 1TB drives.

Another drastic step would be to try the motherboard from the TQ in the E1.

Or get a non-RAID Supermicro SAS card and run an 8087 cable out to the E1 just to see if it shows up.

Another test would be to pull your OS drive, set it aside, and use another drive for a clean OS install just to check; but as you noted, recognized volumes show up at boot before the OS.

If I have any more problems, I will get a 3ware card.

It is rare, but possible, that something on the HBA port, cable path, or expander port backplane simply went out and is no longer fully working.

My week of this was a major frustration; I nearly ordered the 3ware, and then it all came back for me. I hope it lasts, but I have a seed of doubt from our mutual experience.

Quote:
Originally Posted by pclausen:

So I've been trying to get my 1680x controller to see my E1 for the last couple of weeks, and I just don't think it's going to happen. [snip]

post #5767 of 7909 | 03-09-2010, 07:12 PM
Scirus Arcnus (Member)
Quote:
Originally Posted by analogueaddict:


The PCIe X4 socket has a theoretical maximum throughput of 1GB/s at PCIe 1.0 spec, or 2GB/s at PCIe 2.0 spec; the X8 is double that, assuming the number of lanes wired to the socket on the motherboard matches the physical size of the socket (some boards supply X8 sockets but only run them at X1 or X4 speeds). The X8 socket design of the PERC 6/i board should accommodate 2GB/s throughput, as it is only a PCIe 1.0 spec board. I cannot find a reference as to whether the Supermicro controller is PCIe 1.0 or 2.0, so I would conservatively call it a 1GB/s throughput card, in which case the PERC card has higher throughput. The question is whether you'll ever need that kind of performance.

I hope this helps with your decision, but suffice to say they are both very good controllers and I wish they were available when I was designing my system, I am sure the design would be very different to what I have now.

Best wishes,

Dave

Hi, I really appreciate your input. My mainboard is a Gigabyte GA-H55M-S2H and the main PCIe slot is X16 and 2.0 while the other is X4 electrically but X16 physically and presumably 1.x spec (since Gigabyte does not specify as they do with the true X16).

So, based upon your details that seems to mean that either card will work in the X4 slot but the PERC's performance potential would be halved while still effectively the same as the MV8 (alternatively, the PERC could be used to its full potential in the true X16 slot)?

A note in the Gigabyte manual seems to state that the true X16 slot operates at up to X4 in CrossFireX mode (i.e. when the X4 slot is also populated with a GPU). If that is so, would it affect non-GPU cards? I.e., if a GPU is installed in the X16 slot and a storage controller in the X4 slot, will the GPU's performance suffer?

My goal is to cost effectively and compactly run more media drives while just maintaining drive-to-drive performance similar to the onboard controller rather than reducing it with port multiplier or PCI cards.
post #5768 of 7909 | 03-10-2010, 01:34 AM
the_beast666 (Senior Member)
Quote:
Originally Posted by Sciúrus Arcânus:

[snip] So, based upon your details, that seems to mean that either card will work in the X4 slot, but the PERC's performance potential would be halved while still effectively the same as the MV8 (alternatively, the PERC could be used to its full potential in the true X16 slot)?

As far as the PERC is concerned, an x4 and an x8 slot are the same - the PERC can rarely exceed the 1000MB/s that an x4 provides, so the extra bandwidth of the larger slot is not needed. You really need a faster card to take advantage of that bandwidth, and there is little point for a media server.

With the same drives, the Supermicro card may even be faster, as it has lower latency than the PERC (because it is a simple HBA).
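For reference, the arithmetic behind those figures: PCIe 1.x signals at 2.5 GT/s per lane, which after 8b/10b encoding works out to 250 MB/s per lane per direction, so x4 = 4 x 250 = 1000 MB/s and x8 = 2000 MB/s; PCIe 2.0 doubles the per-lane rate to 500 MB/s. Even eight media drives streaming at a generous 100 MB/s each only need about 800 MB/s, comfortably inside an x4 link.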
post #5769 of 7909 | 03-10-2010, 04:02 AM
pclausen (AVS Special Member)
Quote:
Originally Posted by stevetoney:

Another drastic step would be to try the motherboard from the TQ in the E1.

Appreciate the feedback, Steve. Keep in mind that my 1680x has external connectors only. That being the case, I think I could achieve the same goal by getting a longish SFF-8088 to SFF-8087 cable and running it from one of the 1680x external ports directly to the SFF-8087 connector on the E1 backplane.

Quote:


Or get a non-RAID Supermicro SAS card and run an 8087 cable out to the E1 just to see if it shows up.

Yes, I will likely try that as well in an effort to pinpoint what is not working.

Quote:


Another test would be to pull your OS drive, set it aside, and use another drive for a clean OS install just to check; but as you noted, recognized volumes show up at boot before the OS.

I did have plans to move from W7 x64 to 2008R2. I've been using R2 at work on my new VMs and really like it.

Quote:


It is rare, but possible, that something on the HBA port, cable path, or expander port backplane simply went out and is no longer fully working.

True, but it sure has proved a bear to troubleshoot. And at the drop of a hat, things might begin working again, with me/us being no closer to knowing why there were issues in the first place.

At work I'm playing with a stack of HP LeftHand P4500 SAN nodes, each communicating with a pair of switches via dual GigE connections. The whole stack is combined into a single cluster, giving me the ability to create a volume that spans all the nodes. Each node comprises 12 450GB 15K SAS drives, set up as a pair of 6-member RAID5 arrays. All these 6x450GB RAID5 arrays are then combined using network RAID across all the nodes and presented to the OSes as an iSCSI target.

Physically, each node is a 2U HP DL180 running a heavily customized flavor of Linux, with a SAS RAID controller and proprietary LeftHand software.

I'd love to create a similar setup at home, using 2TB SATA drives and open source software. That way, I can lose an entire node and it won't impact my data.
post #5770 of 7909 | 03-10-2010, 09:23 AM
mitgib (Member)
I think this is a great idea MiBz; it totally avoids the hourly balancing WHS does. I'm not very good with Windows, though: how would I create a storage area that spans multiple partitions/arrays, similar to LVM? (I am a Linux geek; why did I choose Windows for this?) Or is this something I should avoid, and should I just point MyMovies at different shares as I go along and fill arrays, should I choose to do 8-drive RAID6 arrays?

Quote:
Originally Posted by MiBz:

There's a much better way, without having to break things up into 2TB LUNs, and you'll also avoid anything to do with WHS's storage pool for Video storage. [snip]

post #5771 of 7909 | 03-10-2010, 10:05 AM
kapone (AVS Special Member)
Time for a new plan.... My perpetual quest for low-cost transports continues.

Now that WHS (I know!) has ironed out some of the bugs that were there in the initial release, it may be time to revisit it. Again, the new plan's basic underlying assumption is that we're not building a high-performance RAID/SAN system. We're building a storage system primarily for storing media, with some redundancy, so that if a disk crashes (and it will) we don't have to re-rip everything that was on that disk.

Now, we could of course build a big-ass server with a bunch of simple RAID cards or SAS cards and chassis that have expanders built into them, but most of this is quite expensive. There has to be a better way.

So, back to using good ol' gigabit as a transport. One of the things that brought me back to this is some tinkering I did with iSCSI and boot-up issues, where the shares would disappear, or not reconnect automatically, or all sorts of weird things. All of these issues boil down to one thing as far as iSCSI is concerned: the network must be up before iSCSI can do its thing, but during the boot process the system doesn't consider the "network" a critical item, and just keeps booting and starting the Server service (or other services that are related to disk services... *cough* WHS DE), and of course the shares fail to connect.

Now (I took this from an excerpt that talks about the same issue):

Quote:


Setting up the LUN, mapping it as a disk, and putting your file shares on it is great, but there are a few configuration steps you need to take if you find your share settings disappearing every time you reboot (even though the files remain).

First, ensure that the Server service is dependent on the Microsoft iSCSI Initiator Service. To do this, go into the Services MMC, open the Server service properties, and check under the Dependencies tab. No Microsoft iSCSI Initiator? Open Regedit (Run - regedit.exe) and navigate to HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\LanmanServer. Find the key "DependOnService" and set its value to MSiSCSI. The Server service is responsible for creating the shares; if this service starts up before the iSCSI LUNs are ready on the server, then the shares will not appear.

Second, make sure you set up the Microsoft iSCSI Initiator to automatically restore the connection and drive letters. Under the Targets tab, when you highlighted the target and clicked "Log On", did you check "Automatically restore this connection when the system boots"? If not, remove the connection and log it back on, this time selecting the correct option.

The bolded part is what's interesting. In theory, couldn't we make the WHS DE service (and it IS a service) "depend on" the iSCSI service as well (with the iSCSI service in turn set to depend on the network service), so that it's not started until iSCSI is up? I tinkered with this a bit some time back, but scrapped my WHS setup for lack of time and never pursued it again. However, I think it's time to revisit it.
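A quick sketch of that dependency tweak from the command line, instead of hand-editing the registry. Note that sc config replaces the entire dependency list, so query the existing entries first and repeat them; SamSS/Srv are the usual defaults for the Server service, but verify on your own box. The same trick on the DE service would need its exact service name, which you'd have to look up with sc query.

    rem show the Server service's current dependency list
    sc qc lanmanserver

    rem re-set the list with MSiSCSI appended (entries separated by "/")
    sc config lanmanserver depend= SamSS/Srv/MSiSCSI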

I'm thinking low-power, probably Atom-based motherboards (still need to find one with 6 SATA ports), or cheap mATX boards with low-power CPUs, with something like Openfiler as the iSCSI target node software, hooked into a dedicated subnet on the switch so that all iSCSI traffic is isolated; then use WHS, modify its services configuration, and use iSCSI targets as disks in DE.

lol.. I know, lots of moving parts, but hey, isn't that the fun part? Whatcha guys think?
post #5772 of 7909 | 03-10-2010, 11:56 AM
MiBz (AVS Special Member)
Quote:
Originally Posted by kapone:


I'm thinking low-power, probably Atom-based motherboards (still need to find one with 6 SATA ports), or cheap mATX boards with low-power CPUs, with something like Openfiler as the iSCSI target node software, hooked into a dedicated subnet on the switch so that all iSCSI traffic is isolated; then use WHS, modify its services configuration, and use iSCSI targets as disks in DE.

lol.. I know, lots of moving parts, but hey, isn't that the fun part? Whatcha guys think?

I'm not so convinced an iSCSI backstore is a good idea for WHS. I see tremendous potential for file corruption and broken tombstones, but this is just my intuition, based on experience with iSCSI and knowing how WHS works. I always enjoy your experiments, so I look forward to this one.

btw, SuperMicro has some new Atom boards out with 6 SATA ports!
post #5773 of 7909 | 03-10-2010, 12:10 PM
kapone (AVS Special Member)
Quote:
Originally Posted by MiBz:

I'm not so convinced an iSCSI backstore is a good idea for WHS. I see tremendous potential for file corruption and broken tombstones, but this is just my intuition, based on experience with iSCSI and knowing how WHS works. I always enjoy your experiments, so I look forward to this one.

btw, SuperMicro has some new Atom boards out with 6 SATA ports!

Well, if the iSCSI implementation itself is stress-tested and solid (and it is/should be), the "potential" for corruption is the same as for local hard drives, as far as WHS is concerned. Now, DOES WHS have corruption issues with local HDs? Well, that's probably up for debate. It "seems" like with PP2 (and the upcoming PP3) they fixed some of the corruption issues (it doesn't even write to a buffer area anymore; it goes straight to disk).

Yes, I saw that Supermicro board... still too expensive. We need something in the $70 range to be cost-effective WITH 6 SATA ports (or more).
post #5774 of 7909 | 03-10-2010, 12:15 PM
ilovejedd (AVS Special Member)
Quote:
Originally Posted by MiBz:

btw, SuperMicro has some new Atom boards out with 6 SATA ports!

Yeah, for close to $200. It's tempting, especially given the Intel gigabit NIC instead of the usual Realtek, but the price is a deterrent.
post #5775 of 7909 | 03-10-2010, 12:41 PM
kapone (AVS Special Member)
I had started creating this schematic back in Sep '08 but never finished working on it. I still think this is feasible... with "enough" performance that it can easily saturate a gigabit link, and with redundancy at the node level. No expensive RAID cards, no SAS adapters or backplanes. Simple ITX (or maybe mATX) motherboards in each node. Actually, two of them in each node, to provide 12 SATA connections for a 2U node box.

post #5776 of 7909 | 03-10-2010, 01:00 PM
MiBz (AVS Special Member)
Quote:
Originally Posted by kapone:

[snip] Yes, I saw that Supermicro board... still too expensive. We need something in the $70 range to be cost-effective WITH 6 SATA ports (or more).

Sorry, I hadn't looked up the price. SM is usually not the cheapest.

In any case, these Atom boards are based on the latest Intel Pineview low-power Atom series. We're going to see more derivatives from other vendors very soon.

I think Zotac also has a Pineview D510 Atom board with 6 SATA coming out soon.

Edit:
More Pineview based boards
post #5777 of 7909 | 03-10-2010, 03:04 PM
the_beast666 (Senior Member)
Quote:
Originally Posted by kapone:

I had started creating this schematic back in Sep '08 but never finished working on it. I still think this is feasible... [snip]

I would still avoid Atom for this, though. Pineview is better than the crappy 945GC, but still uses way too much power to be paired with an 8-11W CPU.

A decent AMD 785G board (such as this one), with its 6 SATA ports, uses little more power at idle but provides the expansion options required to make each node run well: add in an 8-port Supermicro HBA and a decent NIC (a Pro/1000 PT, or a hardware iSCSI card like a Broadcom NetXtreme II if Linux support is available yet for iSCSI offload). This gives you lots of storage drives and enough CPU to run software RAID6 on each node, all for similar money to a mid-range Atom board.
post #5778 of 7909 | 03-10-2010, 03:11 PM
kapone (AVS Special Member)
Quote:
Originally Posted by the_beast666:

[snip] This gives you lots of storage drives and enough CPU to run software RAID6 on each node, all for similar money to a mid-range Atom board.

No no no.... That kills the whole "cheap" objective. That board itself is $85, and you'll still need a CPU (since you need memory in either case, I'm leaving that out). And no add-in cards. That's the whole point of having "low-powered" boards with multiple SATA ports: so that they ARE the equivalent of an add-in card.

The point of using gigabit as a transport is multiple SMALL nodes, not a few large ones. I'm focusing on "6" as the magic number of SATA ports since that's about the ideal number for a RAID5 array. Now, RAID5 vs RAID6 is a whole other debate, and I'll leave that out. But with multiple small arrays of 6 disks, yes, you take a hit on HDD costs to have redundancy, but you gain better fault tolerance than a monolithic card and/or one big array.
post #5779 of 7909 | 03-10-2010, 03:17 PM
scientest (AVS Special Member)
Quote:
Originally Posted by kapone:

The point of using gigabit as a transport is multiple SMALL nodes, not a few large ones. [snip]

Since you're talking about a non-traditional type of storage solution: if you're running a bunch of small nodes that are doing nothing but storage management, you could run Gluster (see the Gluster Community site).
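Roughly what that looks like on a pair of storage nodes using Gluster's CLI; the exact syntax varies by release, and the node names and brick paths here are made up, so treat it as a sketch:

    # from node1: join the peer and build a 2-way replicated volume
    gluster peer probe node2
    gluster volume create media replica 2 node1:/export/brick1 node2:/export/brick1
    gluster volume start media

    # clients then mount the volume over the network
    mount -t glusterfs node1:/media /mnt/media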
post #5780 of 7909 | 03-10-2010, 03:43 PM
the_beast666 (Senior Member)
Quote:
Originally Posted by kapone:

[snip] But with multiple small arrays of 6 disks, yes, you take a hit on HDD costs to have redundancy, but you gain better fault tolerance than a monolithic card and/or one big array.

The board is $85; add a $30 CPU and you have only spent the same money as you would on a worthwhile Atom board (i.e. one with added SATA ports). Now the extra HBA makes sense, because it costs less than another mobo/CPU/RAM combo (and avoids power headaches too). The NIC is a good idea because you only need a few nodes, the onboard Realteks are crap, and the Supermicro Atom board with the Intel onboard NIC costs too much. My board/CPU/RAM/HBA/(NIC) combo will be cheaper and perform better than 2 of your nodes, and be much easier to implement and manage. Not to mention how much time and money you'll have to spend to get 2 6-drive cases, or to fit 2 mobos into a single 12-drive 2U case...

I would also prefer a 12-disk RAID6 over 2 6-disk RAID5 arrays - the same parity loss, but fewer potential problems with disk failures during a rebuild (although with twice the number of drives in each array it does make the decision a bit of a toss-up). If you don't mind going with a Linux head node, you could RAID5/6 all of your storage nodes together too, for extra redundancy, if you really wanted. RAID66...
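For the software-RAID6 piece, a Linux storage node could be set up along these lines with mdadm; a sketch only, with assumed device names:

    # 12-disk RAID6 across sdb..sdm (two disks' worth of parity, ~20TB usable with 2TB drives)
    mdadm --create /dev/md0 --level=6 --raid-devices=12 /dev/sd[b-m]
    mkfs.xfs /dev/md0
    mount /dev/md0 /srv/storage
    # then export it as an iSCSI target (or NFS/Samba share) from here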
post #5781 of 7909 | 03-10-2010, 04:11 PM
stevetoney (Senior Member)
I used to work with a variety of SAN types some years ago at work; now I'm too old and just a management/staff person in the DC/NOVA area with DoD. No more hands-on at work. Not sure I could afford SAN stuff at home even these days, when prices have dropped a fair amount.

Sorry -- I forgot in my comments that you have the 1680x model....

I'm using 2008R2 and VMs, and my hiccups happen on it. I still have not reinstalled the BBU on the Areca - got distracted last weekend.

I have W7 64 on 3 other PCs and W7 32 on 2 other PCs in the house, and every one of them has a similar weird problem. They will lose their brains connecting to the network servers or each other's shares (a LAN problem) and only a reboot will fix it. Often these machines will get stuck at logging off forever, requiring a hard press of the off button to restart. Then everything is fine for a period of time, and the problem reappears.

Never a problem with internet access on the LAN, though, or with reaching the other LAN computers.

Very annoying..



Quote:
Originally Posted by pclausen:

[snip] I'd love to create a similar setup at home, using 2TB SATA drives and open source software. That way, I can lose an entire node and it won't impact my data.

post #5782 of 7909 | 03-10-2010, 04:31 PM
carpenike (Member)
Here's a potential alternative to WHS that's free/open source:

http://www.amahi.org
post #5783 of 7909 | 03-10-2010, 04:54 PM
MiBz (AVS Special Member)
Quote:
Originally Posted by mitgib:

I think this is a great idea MiBz... [snip] how would I create a storage area that spans multiple partitions/arrays, similar to LVM? [snip]

Sorry I missed your post earlier.
There is no need to use multiple partitions or to break up the array into chunks with this method. That's one of the advantages of going with this solution.

Once you create a RAID set and volume set on your RAID controller, Windows Disk Management will see the volume. Just initialize it as a GPT volume with 64K clusters and you can grow this single volume to 256TB.

In the end you'll have WHS's drive pool and the RAID array both shared from the same WHS server but independent of each other. You can add single drives of any type and size to the WHS drive pool, and you can also expand the RAID array with more 2TB drives. You can even back up data from one onto the other with any sync software.

Here's an example.

WHS drive pool
Shares: User/Documents/Pictures/Music
1TB drive - 20GB system / start of storage pool (D)
1TB drive - part of WHS storage pool (duplication on/off per folder)


RAID6 storage (R), 16TB single volume
Shares: Movies
10 x 2TB drives
post #5784 of 7909 | 03-10-2010, 05:07 PM
MiBz (AVS Special Member)
Quote:
Originally Posted by the_beast666:

I would still avoid Atom for this, though. Pineview is better than the crappy 945GC, but still uses way too much power to be paired with an 8-11W CPU.

Without getting into the suitability of an Atom CPU for any specific application: from a power-efficiency perspective, Pineview is only 12-15W, which is quite amazing.
post #5785 of 7909 | 03-10-2010, 05:30 PM
FrankieFiero (Newbie)
Quote:
Originally Posted by ohpleaseno:

I know this is an old bump, but I just did a Norco 4220 and, save for the RAM, I used exactly the same parts as you - but I never saw your post.

So far, the WD Scorpios in RAID1 are working great. Data migration from the old WHS to the new one is a time-consuming PITA though.

How well do those Norco cables (C-SFF8087-D) work with the 4220 case? I am using the ones that came with my Areca card and they don't sit very well in 3 of the 5 backplanes; the fold of the sheet metal interferes with the latch on the cable. I just don't feel very comfortable unless I get that satisfying 'click' of the cable into the socket. A light tug is all it takes for the cable to come out.
post #5786 of 7909 | 03-10-2010, 05:41 PM
gsr (Oppo Beta Group)
Quote:
Originally Posted by FrankieFiero:

How well do those Norco cables (C-SFF8087-D) work with the 4220 case? [snip]

With the stock fan board, they work fine - there's enough room to plug them in and route them to the motherboard. But they're stiff - VERY stiff - and don't work so well if you switch to Cavediver's 3x120mm fan board. I switched to the StarTech cables, which are a lot more flexible.
post #5787 of 7909 | 03-10-2010, 08:46 PM
scientest (AVS Special Member)
Quote:
Originally Posted by carpenike:

Here's a potential alternative to WHS that's free/open source:

http://www.amahi.org

Interesting, have you tried it out?
post #5788 of 7909 | 03-10-2010, 09:08 PM
carpenike (Member)
Quote:
Originally Posted by scientest:

Interesting, have you tried it out?

Not yet. I actually just stumbled across it this evening; I'll probably throw it into a VM later on and play around with it. The plugins seem like a neat concept, and I even found some FOSS solutions mixed in that I'd never heard of before.
post #5789 of 7909 | 03-10-2010, 09:25 PM
ilovejedd (AVS Special Member)
Hmm, Amahi didn't really catch my interest but GlusterFS sure did. Gah, I see a lot of reading ahead. >.<
post #5790 of 7909 | 03-10-2010, 09:32 PM
scientest (AVS Special Member)
Quote:
Originally Posted by ilovejedd:

Hmm, Amahi didn't really catch my interest but GlusterFS sure did. Gah, I see a lot of reading ahead. >.<

And if you're a real masochist, the two are not necessarily exclusive....