I think you are making things far too complicated. I'll make it simple: there's really no situation where "double bass" is necessary (and I'll even go as far as to say that for most situations, it's undesirable). You do NOT need double bass to prevent loss of information. If your satellites can handle all the way down to 20 Hz, leave them set to "large". They will handle all the bass that is encoded into their respective channels, and the sub will handle all the LFE (assuming you haven't low-passed the LFE below 120 Hz). If your satellites CAN'T handle down to 20 Hz, pick the lowest crossover point that is above the lowest frequency they can comfortably handle, and the sub will handle everything below that, plus the LFE channel. All double bass does is cause the lowest frequencies to be duplicated in both the sub and the satellites. IMO, that can do a lot more harm than good.
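To make the routing concrete, here's a minimal sketch of the logic described above. This is purely illustrative pseudologic, not any receiver's actual DSP; the function names, default 80 Hz crossover, and 120 Hz LFE low-pass are my own assumptions for the example.

```python
# Illustrative bass-management routing sketch (hypothetical, not a real
# receiver's implementation). All frequencies are in Hz.

def route(freq_hz, channel, speaker_size, crossover_hz=80, lfe_lowpass_hz=120):
    """Return which drivers reproduce a tone of freq_hz on a given channel."""
    if channel == "LFE":
        # The LFE channel always goes to the sub (up to its low-pass point).
        return ["sub"] if freq_hz <= lfe_lowpass_hz else []
    if speaker_size == "large":
        # A "large" speaker plays its channel full range; nothing is redirected.
        return [channel]
    # A "small" speaker hands everything below the crossover to the sub.
    return [channel] if freq_hz >= crossover_hz else ["sub"]

def route_double_bass(freq_hz, channel, crossover_hz=80):
    # With "double bass" enabled, a large speaker AND the sub both play the
    # low bass in that channel -- the duplication the post argues against.
    return [channel, "sub"] if freq_hz < crossover_hz else [channel]
```

Note that in neither normal mode is any information lost: every frequency in every channel ends up somewhere. Double bass only changes *how many* drivers reproduce the lowest octaves, which is where the potential for harm (comb filtering, response bumps) comes from.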
CHRIS: There's something that's been nagging me in the back of my head that I didn't want to bring up. But the bass management issue has been beaten so badly, perhaps a change of subject would be welcome. I understand the point about Audyssey applying correction to each channel down to that speaker's -3 dB point, and that therefore we can fine-tune the crossover points after calibration, if necessary. But I thought that Audyssey also applied correction in the time domain... I have been under the assumption that this meant it could/does correct for any phase anomalies created by the crossover network. If this is true (?), wouldn't changing the crossover point screw up those settings? Interestingly, I have the same satellites all around (NHT 1.1), and the front L-C-R aren't that far apart (maybe 4 ft between each one). Yet Audyssey detects the L & R as being full range (yet I know they aren't), and the center as only good to 150 Hz (yet I know I've had satisfactory results previously with it set to 80 Hz).