
Slow write speeds on FlexRaid (RAID-F Snapshot RAID)

post #1 of 114
Thread Starter 
I've been trying out FlexRAID on a new WHS 2011 build. Prior to installing FlexRAID, I was getting 60-70 MB/s transfer speeds between my drives (mostly Hitachi Coolspin 4 TB drives), all on a single Supermicro AOC-SAS2LP-MV8 controller.

After installing FlexRAID in snapshot mode (I tried real-time RAID first, but FlexRAID wouldn't start the array after every reboot, saying it needed to be reconciled) and adding some of the drives to the pool, my transfer rates to the pooled drives dropped to ~25 MB/s.

Since it's not trying to calculate parity in snapshot mode, why would the performance of my writes drop by more than 50%? Are there any settings or tweaks to try to fix this? I posted this on the FlexRAID forums, but there's a lot more traffic here on this site. I followed Assassin's FlexRAID guide in the setup and configuration, but there's nothing in the guide about troubleshooting performance issues.
post #2 of 114
Quote:
Originally Posted by jcruse View Post

I've been trying out FlexRAID on a new WHS 2011 build. Prior to installing FlexRAID, I was getting 60-70 MB/s transfer speeds between my drives (mostly Hitachi Coolspin 4 TB drives), all on a single Supermicro AOC-SAS2LP-MV8 controller.

After installing FlexRAID in snapshot mode (I tried real-time RAID first, but FlexRAID wouldn't start the array after every reboot, saying it needed to be reconciled) and adding some of the drives to the pool, my transfer rates to the pooled drives dropped to ~25 MB/s.

Since it's not trying to calculate parity in snapshot mode, why would the performance of my writes drop by more than 50%? Are there any settings or tweaks to try to fix this? I posted this on the FlexRAID forums, but there's a lot more traffic here on this site. I followed Assassin's FlexRAID guide in the setup and configuration, but there's nothing in the guide about troubleshooting performance issues.

I am not sure as I have not experienced this. There shouldn't be any significant performance issues with using FlexRaid --- at least none that I am aware of. Have you added data to your drives since using FlexRaid? Some drives slow down quite a bit when they are full.

Keep me posted what you find.
post #3 of 114
Thread Starter 
Quote:
Originally Posted by assassin View Post

I am not sure as I have not experienced this. There shouldn't be any significant performance issues with using FlexRaid --- at least none that I am aware of. Have you added data to your drives since using FlexRaid? Some drives slow down quite a bit when they are full.

Keep me posted what you find.

There's some info on the FlexRAID forums about slow writes on Linux (from about a year ago). Brahim hinted at a fix at the time, but I haven't seen any replies since:

http://forum.flexraid.com/index.php?topic=494.0

My slow write speeds are to an empty pool, so it's not a slow-down due to full disks.
post #4 of 114
Turn off pooling and disable FlexRAID. Then try a copy-paste to each drive and compare the results. Start FlexRAID back up and try again. That should give you some insight into the issue.

Could be a slow drive. Could be a setup issue. Could be a network issue.

I'd do that and then test the network speed.
post #5 of 114
Thread Starter 
Thanks Mfusick.

I replied to your PM as well, but as I said there, I deleted my FlexRAID config and the transfer speeds immediately jumped back up to 70 MB/s. I only tested the one drive FlexRAID was writing to, but I'll try the others as well. I remember reading somewhere that spun-down drives can impact write performance, even on active disks, so I'll try disabling drive spin-down as well.

It can't be a network issue, because it's an internal transfer (disk to disk) on the server itself.
post #6 of 114
Thread Starter 
Hmm...I seem to have improved things very slightly.

My original configuration of my pool was this:

Config #1
Parity: 4 TB 5400 rpm
DRU1: 4 TB 5400 rpm
DRU2: 2 x 2 TB 5700 rpm (2 disks spanned into a single DRU)

In my new config, I decided not to span the smaller 2 TB disks into a single DRU:

Config #2
Parity: 4 TB 5400 rpm
DRU1: 4 TB 5400 rpm
DRU2: 2 TB 5700 rpm
DRU3: 2 TB 5700 rpm

Writing to DRU1, DRU2, or DRU3 with no pool, my write speeds are 70 MB/s
Writing to DRU1 in Config #1, my write speeds are 25 MB/s
Writing to DRU1 in Config #2, my write speeds are 35 MB/s

So by not spanning the smaller disks into a single DRU, I got some improvement, but it's still half of the non-pooled performance.
post #7 of 114
Looks like slow HDDs.

I had all sorts of speed issues on my first fake-RAID server. I tore it apart, benched everything, and basically figured out that some of my older, slower HDDs were just slow.

Without any FlexRAID at all, I'd get like 40-60 MB/s on the slower, smaller drives. The newer, faster, empty ones did 100 MB/s easily.

So when I pooled them all together I got very inconsistent speeds from FlexRAID, but really it was just varying based on which drive in the pool was being used.

I've since upgraded to 100% Seagate 3 TB 7200.14 drives, and I consistently see over 100 MB/s performance.

Bottom line: FlexRAID works best with modern 7200 rpm hard drives. I'm looking to max out and saturate my gigabit LAN connection, and I do it easily.

Your problem is probably in your HDDs, and the issue is just worsened and exposed by pooling, spanning, and FlexRAID.

Each extra step lowers performance a little. On a fully performing modern drive it's not an issue, but in your case it might be.

I'd say swap out the slowest drives for a 3 TB or 4 TB Seagate.
post #8 of 114
Thread Starter 
That still doesn't explain why FlexRAID imposes a 50% reduction in write speeds. As I said in my post, all my drives have ~70 MB/s write performance prior to pooling. After pooling, it's half of that. You can't just blame "slow drives" when the problem clearly lies within FlexRAID somewhere.
post #9 of 114
But I get much more than that on mine. Perhaps it's because my HDDs are just way faster (160 MB/s), and the penalty only brings me down to an average of 85-95 MB/s?
post #10 of 114
Thread Starter 
Quote:
Originally Posted by Mfusick View Post

But I get much more than that on mine. Perhaps it's because my HDDs are just way faster (160 MB/s), and the penalty only brings me down to an average of 85-95 MB/s?

OK...so FlexRAID DOES impose a 50% write performance hit in snapshot mode, even on your system.

So the obvious question is WHY? If it isn't calculating parity until the Update task runs, why is it writing at half speed? That's a poorly coded app if it imposes a 50% reduction in throughput for no reason.
post #11 of 114
There's something wrong with your system. I DO NOT get any reduction in speeds while reading OR writing. I get the native speed of the drives.
post #12 of 114
What CPU and hardware you using ?
post #13 of 114
Could it be network traffic?
post #14 of 114
Quote:
Originally Posted by Mfusick View Post

What CPU and hardware you using ?

Me? I use an i7 920 (3.7 GHz) with a Gigabyte EX58-UD4P motherboard and 6 GB of RAM. The primary drive is a WD Black.
post #15 of 114
Quote:
Originally Posted by Mfusick View Post

Could it be network traffic?

Could be. Try moving some files around locally on your server.
post #16 of 114
Thread Starter 
All my tests and benchmarks have been file transfers within the server (using RDP). The network is not involved at all at this point.

My hardware is all new, and it's a very powerful machine, since I intend to use the server for Plex transcoding. I use 5400 rpm drives to minimize noise and power consumption, but both Mfusick and I are seeing a 50% reduction in write speed, regardless of the drives' original speed:

CPU: i7 3770k (3.5 GHz)
MOBO: ASRock H77M
RAM: 16 GB DDR3 1600 CAS 9 (WHS only uses 8 GB due to its internal limit)
OS Disk: Samsung 840 Pro 128 GB SSD
OS and Parity disks are attached to MOBO SATA 3 ports
Disk Controller: Supermicro AOC-SAS2LP-MV8
All data disks are attached to the Supermicro controller.
post #17 of 114
No I'm not seeing a 50% drop at all.

I might have explained it wrong.

My network is 50% the speed of my HDDs.
post #18 of 114
Thread Starter 
Quote:
Originally Posted by Mfusick View Post

No I'm not seeing a 50% drop at all.

I might have explained it wrong.

My network is 50% the speed of my HDDs.

So can you confirm that writing a file internally to the Flexraid pool is at 160 MB/s?
post #19 of 114
Quote:
Originally Posted by Mfusick View Post

No I'm not seeing a 50% drop at all.
I might have explained it wrong.
My network is 50% the speed of my HDDs.

What I meant is my 7200.14 3TB drives can do as much as 160MB/sec on average and I get consistently between 85MB and 95MB/sec to and from my Flexserver.

The drop in performance is probably because of my network. I can get 100 MB/s+ sometimes, but generally and consistently I'm in the 90s. I've got multiple machines on my network doing multiple things all the time, so I just assumed it's network traffic/limits.

If it isn't actually my network limitations, I don't care much, because if I troubleshoot my ass off and remove a bottleneck, I'm just going to hit another bottleneck 5 MB/s higher.

It's just not worth my time troubleshooting a media server that's consistently very near 100 MB/s and often over. I'm satisfied with my performance.

It used to be 60MB/sec before I upgraded my HDDs. (WD green 5400rpm)

I'm thinking your problem is HDDs.
post #20 of 114
Quote:
Originally Posted by jcruse View Post

Quote:
Originally Posted by Mfusick View Post

No I'm not seeing a 50% drop at all.

I might have explained it wrong.

My network is 50% the speed of my HDDs.

So can you confirm that writing a file internally to the Flexraid pool is at 160 MB/s?

No. I'm going by known benchmarks for the 7200.14 3 TB Seagate. I've never tested each of my HDDs individually. I could, but I'm assuming my drives are faster than my network, so there's no point in doing it.
post #21 of 114
Thread Starter 
Quote:
Originally Posted by Mfusick View Post


I'm thinking your problem is HDDs.

That's not a logical conclusion. If my internal write speeds are 60-70 MB/s outside the pool and 25-35 MB/s inside the pool, how is the disk the cause of the slowdown, when the only added variable is FlexRAID? It's indisputably FlexRAID that's causing the slowdown.
post #22 of 114
Do you own any fast HDDs ?

Which ?

Could you include them in your flex raid pool?

I have a cool idea and a troubleshooting experiment for you to get to the bottom of this.
post #23 of 114
Thread Starter 
Quote:
Originally Posted by Mfusick View Post

Do you own any fast HDDs ?

Which ?

Could you include them in your flex raid pool?

I have a cool idea and a troubleshooting experiment for you to get to the bottom of this.

Could you detail the experiment first? To me the only logical experimentation worth doing is changing settings within FlexRAID (since it's been identified as the cause of the slowdown). I have some SSDs, but they're all in use right now as OS drives for other machines, and I don't want to crack open/disable my other PCs to run an experiment I'm 99% certain will have no impact.

I expected FlexRAID to have some level of overhead. When writing to a pool of disks, the application needs to route the files to the proper physical disk while updating its virtual disk structure, etc. It's just that a 50% write speed reduction is way too high an overhead for such a simple task.
Edited by jcruse - 6/11/13 at 11:29am
post #24 of 114
Try this:

Turn off pooling and flexraid.

Create a folder on each individual hard drive and label it the name of the HDD it's on.

Copy and paste a 2GB file to inside each individual drive and the folder you created and record the results.

Delete the files. Leave the folders.

Turn on flexraid pooling and do the same and record the individual results.

If the folder is on each individual HDD, then the transfer while running FlexRAID should reflect each HDD's performance. You can tell which HDD is working by which labeled folder you're transferring into.

Make sure your source for the transfer is on SSD to limit this as a problem.
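The per-drive test above can also be automated with a short script. Here's a minimal sketch in Python; the folder paths and the `bench_write` helper are illustrative placeholders, not anything from FlexRAID, and you'd substitute your own drive letters (dropping `size_mb` for a quicker pass):

```python
import os
import time

def bench_write(path, size_mb=2048, chunk_mb=8):
    """Time a sequential write of size_mb megabytes into `path`
    and return the throughput in MB/s."""
    chunk = os.urandom(chunk_mb * 1024 * 1024)
    target = os.path.join(path, "bench.tmp")
    start = time.perf_counter()
    with open(target, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # make sure the data actually reaches the disk
    elapsed = time.perf_counter() - start
    os.remove(target)
    return size_mb / elapsed

if __name__ == "__main__":
    # One labeled folder per physical drive, as described above.
    # These paths are hypothetical placeholders.
    for folder in (r"D:\DRU1", r"E:\DRU2", r"F:\DRU3"):
        if os.path.isdir(folder):
            print(folder, round(bench_write(folder), 1), "MB/s")
```

Run it once with pooling off and once with FlexRAID pooling on (writing into the pooled paths the second time), and compare the per-drive numbers.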
post #25 of 114
Thread Starter 
Quote:
Originally Posted by Mfusick View Post

Try this:

Turn off pooling and flexraid.

Create a folder on each individual hard drive and label it the name of the HDD it's on.

Copy and paste a 2GB file to inside each individual drive and the folder you created and record the results.

Delete the files. Leave the folders.

Turn on flexraid pooling and do the same and record the individual results.

If the folder is on each individual HDD, then the transfer while running FlexRAID should reflect each HDD's performance. You can tell which HDD is working by which labeled folder you're transferring into.

Make sure your source for the transfer is on SSD to limit this as a problem.

I basically already did this, and posted the results in the 6th post of this thread here
post #26 of 114
It doesn't matter whether the HDDs are 7200 rpm or 5400 rpm. A lot of people nowadays use 5400 rpm HDDs in their media servers without any complaints.
Back to the topic.
Speed depends on whether you set up the RAID as snapshot or real-time. Snapshot is run manually (or via a scheduler) to check whether new data was stored and to update the parity data. This shouldn't affect write performance, since the parity calculations are deferred. Real-time, on the other hand, does parity calculations on every write. I'm afraid your FlexRAID may still be running in real-time mode.
To be safe, you should uninstall FlexRAID, reboot the machine, and re-install it. But this time pick snapshot RAID through expert mode from the beginning, then test again. Good luck.
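The snapshot-vs-real-time distinction can be illustrated with a toy single-parity model. This is just an XOR sketch of the general idea, not FlexRAID's actual code: real-time mode maintains parity inline on every write, while snapshot mode writes data at full speed and rebuilds parity later in a scheduled Update pass.

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

class ToyArray:
    """Toy single-parity array: N data 'drives' plus one parity block."""
    def __init__(self, ndrives: int, block: int = 4):
        self.drives = [b"\x00" * block for _ in range(ndrives)]
        self.parity = b"\x00" * block
        self.dirty = set()  # drives changed since the last parity update

    def write_realtime(self, i: int, data: bytes):
        # Real-time mode: fold the old data out of parity, the new data in,
        # on every single write.
        self.parity = xor_blocks(xor_blocks(self.parity, self.drives[i]), data)
        self.drives[i] = data

    def write_snapshot(self, i: int, data: bytes):
        # Snapshot mode: full-speed write, no parity work on the write path.
        self.drives[i] = data
        self.dirty.add(i)

    def update(self):
        # The deferred parity pass (the scheduled "Update" task).
        p = b"\x00" * len(self.parity)
        for d in self.drives:
            p = xor_blocks(p, d)
        self.parity = p
        self.dirty.clear()
```

Since snapshot-mode writes do no parity work at all, pooled writes should in principle run at near-native drive speed, which is why a persistent 50% drop points at a configuration problem (such as a leftover real-time config) rather than inherent overhead.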
post #27 of 114
Thread Starter 
Quote:
Originally Posted by Elpee View Post

It doesn't matter whether the HDDs are 7200 rpm or 5400 rpm. A lot of people nowadays use 5400 rpm HDDs in their media servers without any complaints.
Back to the topic.
Speed depends on whether you set up the RAID as snapshot or real-time. Snapshot is run manually (or via a scheduler) to check whether new data was stored and to update the parity data. This shouldn't affect write performance, since the parity calculations are deferred. Real-time, on the other hand, does parity calculations on every write. I'm afraid your FlexRAID may still be running in real-time mode.
To be safe, you should uninstall FlexRAID, reboot the machine, and re-install it. But this time pick snapshot RAID through expert mode from the beginning, then test again. Good luck.

Now this theory actually makes sense... When I first installed FlexRAID, I tried to set up a real-time config, but after every reboot the config would be corrupted: it wouldn't initiate the pool and wanted to run a reconciliation. I deleted that config and created a snapshot config, which is where I'm seeing the slowdown. But if the real-time config is somehow still partially active and interfering, that might explain the slowness.

Will give this a try when I get home.

Thanks!
Edited by jcruse - 6/11/13 at 12:17pm
post #28 of 114
That is why I asked you in PM whether you were real-time or snapshot. You clearly indicated snapshot, so I left that alone.

I'd say cleanly remove it all to be safe and try again.
post #29 of 114
Thread Starter 
Do you guys know if the uninstall does a complete uninstall, or do I need to troll the registry to remove any lingering keys?
post #30 of 114
I'd go trolling