That's so weird that you just bumped this thread. I had searched for it a few minutes ago and was about to post...
I haven't read the entire thread, but I would like to weigh in with my experience with the Seagate ST3000DM001 3TB drives. In short, it was horrible.
To elaborate: I bought 8 of them to use in a Sans Digital TRM8 storage enclosure connected via the included HighPoint RocketRAID 622 controller. I added all 8 drives to a RAID-5 array, initialized it, and off I went. Everything seemed fine, but I noticed that when copying data over from some other drives, occasionally a drive would drop from the array. Sometimes this would force a rebuild, but typically everything would be fine after a reboot or a rescan of the drives. After it happened a few more times I RMA'ed the drive that failed, got a replacement, rebuilt the array, and everything seemed fine... until I started copying large chunks of data over to the array, at which point it would fail again.

Now suspecting there was something beyond a bad drive, I contacted Sans Digital, who informed me that it was probably an issue with TLER/ERC and that I might have to live with it if I stuck with those drives. They did provide me with an updated driver that was supposed to address issues with the port multiplier used in the enclosure, and it did improve the problem somewhat (the errors were still reproducible, but a rebuild seemed less likely after the driver update). Since I wasn't going to be copying 100+GB of data to the array very often, and when I did need to I could break it up into multiple chunks, I figured it would be easier to live with than to return the drives and start over from scratch.
Fast forward a few months...
I'm moving some data (10GB) to the array and a drive drops. Sigh. Bring it back up and it starts to rebuild. Gets to 59%, then drops again. Sigh. Repeat twice. Groan. I come to the conclusion that I have a bad drive. I get a new pair of ST3000DM001s (one to replace the bad one, and one as a spare), plug in the replacement drive, and start the rebuild again. Gets to 95%, then it drops a different drive. Try again, same thing. (Keep in mind, a full rebuild takes around 30 hours, so with all of the partial rebuilds, this has eaten up weeks now.) After talking to Sans Digital and HighPoint support, we came to the conclusion that one of the other drives had some sort of error that wouldn't allow the controller to continue rebuilding (despite having "continue on error" enabled). So I ended up taking the array offline, pulling the drive with the error on it, doing a sector-by-sector copy to the new drive (another 20 hours, one of which was spent double- and then triple-checking to make sure that I wasn't going to overwrite the old drive with the new one), plugging the new drive back into the array, crossing my fingers, and letting the rebuild start over. Finally, this morning, everything was back to normal.
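As a quick sanity check on that 30-hour rebuild figure, here's a back-of-envelope calculation (assuming 3 TB per drive in decimal units, as marketed, and the ~30 hour rebuild I saw):

```python
# Back-of-envelope: effective rebuild throughput for one 3 TB drive
# rebuilt in roughly 30 hours (figures from the experience above).
drive_bytes = 3e12               # 3 TB, decimal, as marketed
rebuild_seconds = 30 * 3600      # ~30 hour rebuild

throughput_mb_s = drive_bytes / rebuild_seconds / 1e6
print(f"Effective rebuild rate: {throughput_mb_s:.1f} MB/s")  # ~27.8 MB/s
```

That's well under what a single ST3000DM001 can stream sequentially, which fits with the port multiplier (and the retries/dropouts) being a bottleneck in this setup.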
So lessons learned from this...
1) Picking the Seagate drive was my own fault. I should have done more research before picking my hardware to make sure it would all work together.
2) Not all drives are created equally. While those Seagates are probably fine to stick into a computer and use for bulk storage, they aren't good for hardware/firmware RAID arrays.
3) With hard drive capacities where they're at right now, RAID (with parity) can be a pretty dicey proposition. Yes, theoretically, it will save you in the event of a single drive failure, but when you start getting into the tens of terabytes, the likelihood of hitting one or more sector failures becomes significant given the error rates of consumer-level drives. If you're unlucky enough to encounter such an error after a drive has died, then it might take some heroic measures to get your data back.
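To put some rough numbers on lesson 3: consumer drives are commonly spec'd at one unrecoverable read error (URE) per 10^14 bits read, and a RAID-5 rebuild on an 8-drive array of 3 TB disks has to read all 7 surviving drives end to end. A crude sketch, treating errors as independent and taking the spec-sheet rate at face value:

```python
import math

# Rough URE odds for a RAID-5 rebuild: 8 x 3 TB drives, so a rebuild
# must read the 7 surviving drives in full. Assumes the common
# spec-sheet rate of 1 unrecoverable read error per 1e14 bits.
ure_per_bit = 1e-14
bits_to_read = 7 * 3e12 * 8      # 7 drives x 3 TB x 8 bits/byte

expected_errors = bits_to_read * ure_per_bit
# Probability of at least one URE, modeling errors as Poisson-distributed
p_at_least_one = 1 - math.exp(-expected_errors)

print(f"Expected UREs during rebuild: {expected_errors:.2f}")   # ~1.68
print(f"P(at least one URE): {p_at_least_one:.0%}")             # ~81%
```

Real drives often do quite a bit better than the rated figure, so treat this as a pessimistic illustration rather than a prediction — but it shows why a single-parity array at these capacities leaves little margin.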
So I had been planning to repurpose these drives for a new 24-drive storage server, but it looks like they'll be retired soon, and I'll be going with different drives moving forward. After this last experience, I'm thinking the WD Red drives may well be worth the extra expense. (To put it in perspective, I was also looking into tape backup options, and while the array was down I asked myself if I'd pay $3500 to avoid dealing with this crap again. It was an easy "yes", so I figure if I can pull the trigger on an LTO autoloader, I can probably spring for the Red drives too.)
RAID protection is only for failed drives. That's it. It's no replacement for a proper backup.