Taking a step back for a minute, I realize this could be confusing because we're talking about the maximum signal output at the minimum subwoofer setting.
So, here's the process of what I'm discussing:
1) Set the SW channel as low as it goes (-10dB on my Pioneer SC-05)
2) Set the receiver to 0dB (reference)
3) Play a DTS file that contains the worst possible audio content in terms of generating a maximum-amplitude LFE signal with all the speakers set to small. This is a track that has a -0.01dBFS 40Hz test tone in all 6 channels (5.1).
This generates a signal on the SW output that has the highest possible amplitude (worst case) while the receiver is set to deliver the lowest possible SW signal at reference.
To put this in the obligatory car analogy, think of this as the max speed a car can hit in first gear. Sure, the car can go faster in higher gears, but if you want to know how fast the car can go in the lowest gear you've got to run the engine up to the redline. The DTS track being played is akin to running the engine up to the redline. The SW channel being set to -10dB is akin to having the car in first gear.
In the case of my Pioneer Elite SC-05 this is what you get on the output under that condition:
The output is 4.48Vrms / 13.1Vpp.
Based on my testing, this (below) is the maximum signal the MiniDSP can take without clipping the input:
The input level is 4.76Vrms / 13.5Vpp.
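As a quick sanity check (my own sketch, not part of the measurements above): for a pure sine wave, Vpp = 2√2 × Vrms, and both measured pairs land within a few percent of that:

```python
import math

def sine_vpp(vrms):
    """Ideal peak-to-peak voltage of a pure sine wave from its RMS value."""
    return 2 * math.sqrt(2) * vrms

# Measured (Vrms, Vpp) pairs from the post
receiver_out = (4.48, 13.1)   # SC-05 SW output at the worst-case test condition
minidsp_in   = (4.76, 13.5)   # MiniDSP input clipping point

for vrms, vpp_measured in (receiver_out, minidsp_in):
    vpp_ideal = sine_vpp(vrms)
    error_pct = 100 * abs(vpp_measured - vpp_ideal) / vpp_ideal
    print(f"{vrms} Vrms -> ideal {vpp_ideal:.2f} Vpp, "
          f"measured {vpp_measured} Vpp ({error_pct:.1f}% off)")
```

The small mismatch on the receiver output (~3%) is about what you'd expect from scope/meter tolerance on a 40Hz tone.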
This means that the worst-case DTS content played at reference on the master volume (with the SW level trim at -10dB) has only 0.52dB of headroom before clipping the MiniDSP input. So, if you push past reference you will clip the input of the MiniDSP. If you're playing at reference and apply some signal gain in the MiniDSP (filters that boost: LT, shelf filters, etc.) you will probably clip the output. This last scenario may be avoidable by playing with the input / output gains in the MiniDSP. I will test this later today.
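The headroom number follows directly from the two Vrms values; a quick sketch (this reproduces the ~0.52dB figure above to within rounding of the measured voltages):

```python
import math

def headroom_db(v_clip, v_signal):
    """Headroom in dB between a signal level and a clipping threshold."""
    return 20 * math.log10(v_clip / v_signal)

receiver_vrms = 4.48   # worst-case SW output at reference, SW trim at -10dB
minidsp_vrms  = 4.76   # MiniDSP input clipping point

print(f"Headroom: {headroom_db(minidsp_vrms, receiver_vrms):.2f} dB")
# ~0.53 dB from these rounded voltages
```

Any boost filter whose peak gain exceeds that margin (which is essentially any LT or shelf worth applying) will eat the remaining headroom, which is why the input/output gain trims are the place to claw some back.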