First of all, great thread - I'm going to be getting this case soon, though I'll probably wait for the SAS version. I really appreciate all the pics, etc.; as you said, Newegg and Norco definitely don't provide much.
I thought I'd address many of the questions (and some misinformation) that I've seen after reading through the entire thread. (IMO, YMMV, my $.02, etc.)
Originally Posted by fleggett
Hmmm. This is the third or fourth time I've read this (from different sources). I just don't understand why Norco would provide two per plane if only one is needed. Some sort of power redundancy, so that if one molex fails, the other takes over? What if you've got drives that have much greater amperage and wattage requirements?
As someone mentioned more recently, this is definitely for redundancy, and in fact that's its only purpose. There's no reason to connect all of the molex connectors (just one column) unless you have redundant power supplies. This is absolutely a server case; most other cases on this scale (e.g. SuperMicro) include double or triple redundant power supplies (note that those are also only about 620-750W each).
Additionally, though a 1000W power supply may seem ideal, it's also very likely overkill. For 20 WD Black drives + a typical motherboard with onboard video + a few SATA cards, all you need is about 650W. Look at the PSU's efficiency curve for the load you expect to put on it; a PSU tends to be most efficient in the middle of its range (much like a generator under load), so a grossly oversized unit actually wastes power. Obviously you need to account for burst capacity at spinup, but if anyone's getting a case with 20 drive bays, I would submit that staggered drive spinup is an absolute requirement.
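To put rough numbers on why staggered spinup matters, here's a back-of-the-envelope sketch. All of the wattages and the group size below are assumptions I picked for illustration, not specs for any particular drive - check your drives' datasheets for real figures:

```python
# Rough power-budget sketch for a 20-bay build.
# All per-component wattages are ASSUMED illustrative values.

DRIVE_ACTIVE_W = 10   # assumed: typical 3.5" 7200rpm drive, read/write
DRIVE_SPINUP_W = 30   # assumed: extra 12V draw while the platters spin up
DRIVE_WAIT_W = 5      # assumed: drive powered but not yet spinning
NUM_DRIVES = 20
BASE_SYSTEM_W = 200   # assumed: board, CPU, RAM, SATA cards, fans

# Normal operation: base system plus all drives active.
steady_state = BASE_SYSTEM_W + NUM_DRIVES * DRIVE_ACTIVE_W

# Worst case: every drive spins up simultaneously at power-on.
all_at_once = BASE_SYSTEM_W + NUM_DRIVES * DRIVE_SPINUP_W

# Staggered spinup: only a small group spins up at a time,
# while the rest sit waiting.
STAGGER_GROUP = 4     # assumed group size
staggered = (BASE_SYSTEM_W
             + STAGGER_GROUP * DRIVE_SPINUP_W
             + (NUM_DRIVES - STAGGER_GROUP) * DRIVE_WAIT_W)

print(f"steady state:        {steady_state} W")
print(f"all-at-once spin-up: {all_at_once} W")
print(f"staggered spin-up:   {staggered} W")
```

Even with these made-up numbers, the point is visible: the simultaneous spin-up peak is roughly double the steady-state draw, while staggering keeps the power-on peak near the running load, which is what lets a modestly sized PSU work.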
EDIT: I remembered that you're using it as a main PC as well, with a high-powered GPU, etc., so never mind - 1000W definitely seems reasonable in that case. By the way, how's the airflow with the large 140mm fan blowing over the motherboard (instead of out)? I would think the 3x120mm case fan mod would be especially useful with that type of PSU.
Based on the above, there's only a need for a power supply with six molex connectors: one for each row of drives on the backplane, plus one for everything else in the case (internal drives, slim DVD, fans). Also, get one with a *single rail* - then there's no need to divide connectors across rails, and since most people install drives in rows, you're loading that one backplane connector anyway (and there's nothing wrong with that... just don't connect the 7-to-1 as others have warned). An example is Corsair's 650W single-rail for $100 at Newegg. Granted, I'll probably go with a 750W single-rail.
I second the other comment about finding server racks on Craigslist. Unless you need Middle Atlantic's slide-out/rear-access type of rack, IMO they're overpriced for what you get - e.g. 350 lb limits on some I've seen, whereas a typical datacenter 42U rack will hold on the order of 1500-2500 lbs. They typically run around $150-200 used on CL.
Final thought: I'll be running the open source OpenFiler appliance (Linux), which can provide software RAID of basically every type, as well as iSCSI SAN capability, so you can use more enterprise-level virtualization technologies such as VMware ESX, Citrix XenServer, etc. (not Hyper-V until it's mature). But that's another topic... thought I'd mention it in case anyone else was thinking of doing the same.