Well, I am starting to think maybe FlexRAID isn't that great after all. There was no way I could get it to do an update, so I had to delete the configuration and start over.
I added all the drives again:

Parity drives:
- 2x WD Green 3TB

Data drives:
- 6x Seagate 1TB
- 2x Samsung 1.5TB
- 1x Samsung 2TB
- 1x WD Green 2TB
- 2x WD Green 3TB
Total of 14 drives in the pool, 17TB total data space, 50% free. I formatted the parity drives before creating the array.
Initialization is moving at glacial speed: 12-13 MB/s. It's taken 50 minutes to reach 1%; if progress is linear (which I sure hope it isn't), we're looking at about 83 hours to compute parity for roughly 8 TB of data.
Yet others report doing 20TB overnight, and I've seen the figure of 1 hour per TB mentioned as well. If that's accurate, my system is performing at only about 10% of what's expected.
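For sanity, here's the arithmetic behind those figures as a quick script (the 8 TB and 50-minutes-per-1% numbers are the ones from this post):

```python
# Sanity-check the ETA figures quoted above (numbers from this post).

data_tb = 8.0                # roughly 50% of the 17 TB pool
minutes_per_percent = 50     # observed: 50 minutes to reach 1%

# Extrapolating linearly from observed progress:
eta_hours_observed = minutes_per_percent * 100 / 60
print(f"ETA from observed rate: {eta_hours_observed:.0f} h")

# The "1 hour per TB" figure others have quoted:
eta_hours_expected = data_tb * 1.0
print(f"ETA at 1 h/TB: {eta_hours_expected:.0f} h")

# Ratio of expected to observed performance:
print(f"Running at ~{eta_hours_expected / eta_hours_observed:.0%} of expected")
```

That comes out to ~83 hours observed vs. 8 hours expected, i.e. roughly 10%.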
My CPU is only being utilized at about 20%, so if there's a bottleneck it certainly isn't the processor. Eight of the 14 drives are connected to the IBM M1015, three to the motherboard, and three to the (cheap) Lycom Marvell-based PCIe x4 card.
When testing the drives individually, none of them did less than 120 MB/s sustained file transfer, so no indication of a bottleneck there.
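For anyone who wants to repeat that per-drive check, a rough sequential-read test can be scripted. This is just a sketch, not how I ran my tests: the chunk size and file path are arbitrary, and for a meaningful number you'd read a file much larger than RAM (or flush caches first) so you measure the disk rather than the OS cache:

```python
import time

def sequential_read_mbps(path, chunk_bytes=4 * 1024 * 1024):
    """Read `path` start to finish in large chunks and return MB/s."""
    total = 0
    start = time.perf_counter()
    # buffering=0 asks Python not to add its own buffering layer;
    # the OS page cache can still serve reads, hence the caveat above.
    with open(path, "rb", buffering=0) as f:
        while True:
            chunk = f.read(chunk_bytes)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / (1024 * 1024) / elapsed

# Example: point it at a big file on the drive under test
# (hypothetical path, substitute your own):
# print(sequential_read_mbps("D:/testfile.bin"))
```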
Maybe initialization of FlexRAID arrays using non-empty drives really does take this long and I am just being impatient, but that doesn't explain how some of you could initialize a 20TB array overnight.
Or maybe it has to do with the fact that I have two parity drives?
I'm going to give it a few days to see if the speed increases, but I'm not hopeful.
Edited by politby - 4/23/13 at 2:44am