Originally Posted by Dark_Slayer
I know you have a lot of pretty well doc'd info out there about tRAID, but can you do me a favor and answer my questions so I can be a lazy-good-for-nothing forum leech?
- You can continually add disks to your tRAID pool, correct? (Not just replace for larger - as is the case with ZFS)
- When you mention the array is self-healing, what steps will it take during background scrubbing?
- What does tRAID do when my read error rate gets too high on a particular disk? Do you have some determination of a reasonable drive health to continue the array? Are the s.m.a.r.t. reports still just the raw data?
- For live data reconstruction, I need multiple parity disks, correct? e.g. I have RAID-F with 1x4TB parity supporting 2x4TB and 4x3TB data disks. For live reconstruction, I'd need to add another 4TB disk, correct?
Some other curiosities I have are about pulling drives. Say I have my array with a tolerance of 2 disks and I pull one. How do I know what data is on that disk? Also, I can't really put the disk back in the array if it's been read by any OS, correct? At least, it isn't safe to do so. If I had some desire to take a disk out of my array (with a tolerance of more than 1 disk) and copy its info over to a different computer in some other location (outside my LAN - which would be the only logical reason for doing this), could I then wipe it and add it back to the array for restoration without screwing anything up?
Also, how long does live data reconstruction take if I lose one of the 3 or 4 TB drives?
Finally, does the initial parity calculation render my data unusable until it's complete?
1. Yep. You can both replace disks in the array and continually expand the array with new disks (unlike ZFS). Expansion can be done while the array is online or offline.
2. tRAID is not self-healing. That will be a feature of NZFS, which we will not discuss for now. However, it does have automatic hot-spare rebuild.
3. Read errors are transparent from an array usage perspective as data is simply reconstructed from parity. Disk health monitoring is done separately outside of the core RAID engine. In fact, you can do SMART monitoring without configuring any array as it is an independent feature. SMART will alert you via email and/or SMS.
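The "reconstructed from parity" part of point 3 can be illustrated with a toy single-parity example. This is only a sketch of the general XOR-parity technique that single-parity RAID schemes rely on; it is not tRAID's actual engine or code:

```python
# Toy illustration of single-parity (XOR) reconstruction.
# NOT tRAID's implementation - just the general technique.

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

# Three data blocks and their parity block.
d1 = b"\x01\x02\x03\x04"
d2 = b"\x10\x20\x30\x40"
d3 = b"\x0a\x0b\x0c\x0d"
parity = xor_blocks([d1, d2, d3])

# If d2 returns a read error, XOR-ing the surviving data
# blocks with the parity block recovers its contents.
recovered = xor_blocks([d1, d3, parity])
assert recovered == d2
```

Because XOR is its own inverse, any single missing block in a stripe can be rebuilt from the remaining blocks plus parity, which is why a read error on one disk is invisible to the user of the array.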
4. Live data reconstruction works with single or multi-parity. So, no additional disk needed. If your question was about data restoration, then yes, you will need a replacement disk. Alternatively, you can choose to restore to the PPU and essentially un-RAID. That is, you can restore the data onto the parity disk, transforming the parity disk into a data disk. This can be a great option compared to running in degraded RAID mode for a long period (say, you're broke or it takes a long time to acquire a replacement disk).
5. There are various ways to identify disks, including triggering LED activity. You can also access each tRAID disk outside of the pool to explore its file content.
6. Any disk taken out of the array and read or written elsewhere can be added back. However, you will need to run a Verify & Sync operation to rectify any parity issues.
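Conceptually, a verify-and-sync pass recomputes parity from the data and rewrites any stored parity that no longer matches. The sketch below shows that general idea only (the function name and integer "blocks" are illustrative assumptions, not tRAID's actual operation):

```python
# Conceptual verify-and-sync: recompute XOR parity per stripe and
# rewrite any stored parity entry that has gone stale.
# Illustrative sketch only - not tRAID's actual implementation.

def verify_and_sync(data_stripes, parity_store):
    """Return how many parity entries had to be corrected."""
    fixed = 0
    for idx, stripe in enumerate(data_stripes):
        expected = 0
        for block in stripe:          # recompute parity from data
            expected ^= block
        if parity_store[idx] != expected:
            parity_store[idx] = expected  # re-sync stale parity
            fixed += 1
    return fixed

stripes = [[1, 2, 3], [4, 5, 6]]
parity = [0, 0]  # second entry is stale (should be 4 ^ 5 ^ 6 = 7)
print(verify_and_sync(stripes, parity))  # prints 1: one stripe corrected
```

This is why a disk modified outside the array is safe to re-add: the data on it is taken as authoritative, and the pass brings the parity back in line with it.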
7. Taking a disk out, copying its content somewhere else, wiping it, and then restoring it sounds like a valid way to test and play with the recovery features. No issue with that.
8. Recovering a 4TB disk could easily take 12 hours (YMMV) on a standard system, and it can be done while the array is online or offline.
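The 12-hour ballpark in point 8 follows from simple arithmetic: rebuild time is roughly disk capacity divided by sustained rebuild rate. The rates below are assumptions for illustration, not tRAID benchmarks:

```python
# Back-of-the-envelope rebuild time estimate.
# Rebuild rates are assumed example figures, not measured tRAID numbers.

def rebuild_hours(disk_tb, rate_mb_s):
    """Hours to read/write an entire disk at a sustained rate."""
    disk_mb = disk_tb * 1_000_000  # decimal TB -> MB, as drive vendors count
    return disk_mb / rate_mb_s / 3600

print(round(rebuild_hours(4, 100), 1))  # ~11.1 hours at 100 MB/s
print(round(rebuild_hours(4, 80), 1))   # ~13.9 hours at 80 MB/s
```

So a 4TB disk at a sustained 80-100 MB/s lands in the 11-14 hour range, consistent with the "12 hours, YMMV" figure; concurrent array usage during an online rebuild would stretch it further.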
Originally Posted by DotJun
If you want to go T, I suggest you just go hardware because the speed hit is so high.
Once again, you will be hard-pressed to tell the difference between a properly tuned tRAID system and a RAID-F one.
Most performance options are off on a fresh install of tRAID so as to provide a safe introduction to the system: http://wiki.flexraid.com/2013/06/27/performance-tuning-in-transparent-raid/
It is better to be safe than sorry. I recommend that people fully test their systems for stability before enabling the performance options.
People tend to assume their systems are stable simply because they can boot into Windows and run Prime.
Unless you are on the final release and have followed the performance tuning threads, you seriously need to shake off that impression of low speed. That is, unless you consider writes of 50MB/s+ with no caching, and hundreds of MB/s with caching, to be slow. In that case, yeah, it is not designed to compete with striped RAIDs.
Remember that there is no overhead on read performance. Large storage arrays tend to be "write once - read often".