
Localizability of bass frequencies in rooms

This being the Audio Video Science Forum, here is an attempt to summarize the current state of understanding of our ability to determine the direction of a source of low-frequency sound in a typical small home theater. Also included are some implications for those seeking stereo bass. I have pulled information from various sources and industry experts to make this as accurate as possible and, at times, attempted to summarize it in my own words. If you have anything constructive to add to this topic, please do so. If you have links to information that contradicts anything here, please share it and I will be happy to make edits.

I hope that this reaches an interested audience here and fosters some good discussion. Let's aim to clear up any misunderstandings about this subject that have propagated in other threads. Without further ado:

I will just jump in for kicks. First a question: is my understanding correct that our sound localization ability is primarily based on the brain processing small timing differences between sound arriving at each of our ears (along with the construction of the outer ear acting as a filter of sorts)? If so, room or no room, the primary signal (unless it is entirely swamped by reflections, but even those arrive late enough that the brain can delineate them) arrives at our two ears at times that differ enough that we can determine direction. This is true at any frequency, as the speed of propagation is the same for all sound frequencies.
Yes, that is also my understanding. The brain can use differences in time and level to determine the direction of a single sound. These are known as interaural differences. The "resolution" that we have to work with is determined by the spacing between our sensory devices, aka ears. An analogy (maybe not a perfect one, but it's apt) might be how radio telescope data gains resolution when the dishes are spaced many miles apart.

However, what works for us at higher frequencies does not work so well at low frequencies.

While the speed of sound remains the same, our perception vs frequency does not, due to the resolution we have with our natural ear spacing and how different wavelengths behave when they reach us. And it is confounded by the room. Note that there isn't a specific frequency that sets a sudden limit; we progressively lose accuracy in the ability to point to a sound source as frequency drops. So let's say we can point laterally to within 1° of a sound source at 1 kHz. Down at 200 Hz, our accuracy may degrade to something like 10-20 degrees. Below 50-100 Hz, depending on the individual and the size of the room, it gets fuzzy and that number grows to 360°.

Starting with the basics: Sound localization - Wikipedia

Evaluation for low frequencies

"For frequencies below 800 Hz, the dimensions of the head (ear distance 21.5 cm, corresponding to an interaural time delay of 625 µs) are smaller than the half wavelength of the sound waves. So the auditory system can determine phase delays between both ears without confusion. Interaural level differences are very low in this frequency range, especially below about 200 Hz, so a precise evaluation of the input direction is nearly impossible on the basis of level differences alone. As the frequency drops below 80 Hz it becomes difficult or impossible to use either time difference or level difference to determine a sound's lateral source, because the phase difference between the ears becomes too small for a directional evaluation."

Explained further here: Introduction to Psychoacoustics - Module 07A

We know that, in a free-field environment:
  • For low frequency sounds (<500Hz) with wavelengths >~ 28 inches or >~0.68m (>~ 4/3 of the average head's circumference) the auditory system relies mainly on period-related interaural time differences (ITDs).
  • Low frequency sounds arrive at the two ears with interpretable phase differences. However, by being efficiently diffracted, they don't result in interaural level differences that are large enough to be perceptible.
  • The absolute highest frequency for which interaural time differences provide useful cues is 1/0.0013 = ~770Hz.
  • For high frequency sounds (>1500Hz) with wavelengths <~9 inches or <~0.24m (<~1/2 of the average head's circumference) the auditory system relies mainly on interaural level / intensity / amplitude differences (ILDs or IIDs) when making auditory localization judgments.
  • High frequency sounds cannot diffract efficiently around a listener's head, which blocks acoustic energy and produces interpretable intensity level differences.
  • IIDs are negligible for frequencies below 500Hz and increase gradually with increase in frequency. For high frequencies, IIDs change systematically with azimuth changes and provide reliable localization cues on the horizontal plane (except for front-to-back confusion).
Therefore, while interaural time differences are the cue used for bass sounds, their accuracy decreases as frequency falls and becomes unreliable at subwoofer frequencies in small rooms, where reflections confuse our perception further.
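To put rough numbers on this, here is a minimal back-of-the-envelope sketch (plain Python). It uses the same 21.5 cm ear spacing quoted above and a simple path-length model that ignores diffraction around the head, so treat the output as an order-of-magnitude illustration rather than a psychoacoustic model:

```python
import math

C = 343.0            # speed of sound in air, m/s
EAR_SPACING = 0.215  # approximate interaural distance, m (21.5 cm)

def max_itd_s():
    """Largest possible interaural time difference (source 90 degrees to one side)."""
    return EAR_SPACING / C  # ~627 microseconds

def interaural_phase_deg(freq_hz, azimuth_deg=90.0):
    """Interaural phase difference (degrees) using the simple d*sin(azimuth)/c model."""
    itd = EAR_SPACING * math.sin(math.radians(azimuth_deg)) / C
    return 360.0 * freq_hz * itd

if __name__ == "__main__":
    print(f"maximum ITD: ~{max_itd_s() * 1e6:.0f} microseconds")
    for f in (1000, 500, 200, 80, 50, 30):
        print(f"{f:5d} Hz: largest available interaural phase difference ~ "
              f"{interaural_phase_deg(f):5.1f} degrees")
```

At 50 Hz the head geometry can only ever produce about 11 degrees of interaural phase difference for a source hard to one side, and that small budget is what reflections, head movement, and individual variation have to be resolved against – which is why the pointing accuracy described above collapses in the bottom octaves.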

In any case, in my room I can localize sound at any frequency that I can hear. I use two subs, but my main speakers are +/- a couple of dB down to 35 Hz. I can clearly identify which speaker is producing a 50 Hz sound when switching back and forth between them (someone else doing the switching with my eyes closed). My neighbor and also my then partner could likewise identify where the sound was coming from. This result was repeatable.
Not to discount your experience, but each speaker also drives modal activity differently, so that is a variable you can not eliminate from your test. But keep reading to get to the larger issues at play.

Consider that when a very long wavelength passes through you (50 Hz is 22 feet long), reaches the back wall (and the ceiling, and the floor, and the side and front walls), then comes back at you again, the end of the wave hasn't even left the transducer yet. Not only are you experiencing the "beginning" of the wave multiple times from different directions in rapid succession – with limited ear-to-ear spacing – you are also experiencing the 3-dimensional pressure environment like you are inside the wave itself. Without some spatial cue to accompany this wave, how can your brain know the original source direction? Your ears can not differentiate what is the initial sound and what is a reflection, which is all occurring within one cycle of the sound. The room is effectively steady state (the combination of direct and all reflected sounds integrated over a time interval) at low frequencies. You will see this term used many times throughout Floyd Toole's research, especially in his seminal work Sound Reproduction: The Acoustics and Psychoacoustics of Loudspeakers and Rooms.
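To make the timing concrete, here is a small sketch using a hypothetical 20 ft long room with the sub near the front wall and the listener 12 ft back (made-up dimensions, purely for illustration):

```python
C_FT_S = 1125.0  # speed of sound, roughly in feet per second

FREQ = 50.0
period_ms = 1000.0 / FREQ        # one full cycle: 20 ms
wavelength_ft = C_FT_S / FREQ    # ~22.5 ft

# Hypothetical geometry, for illustration only.
room_length_ft = 20.0
sub_from_front_wall_ft = 1.0
listener_from_front_wall_ft = 12.0

direct_ft = listener_from_front_wall_ft - sub_from_front_wall_ft            # 11 ft
back_wall_bounce_ft = ((room_length_ft - sub_from_front_wall_ft) +
                       (room_length_ft - listener_from_front_wall_ft))      # 19 + 8 = 27 ft

direct_ms = 1000.0 * direct_ft / C_FT_S
reflected_ms = 1000.0 * back_wall_bounce_ft / C_FT_S
extra_delay_ms = reflected_ms - direct_ms

print(f"50 Hz period:            {period_ms:.1f} ms")
print(f"50 Hz wavelength:        {wavelength_ft:.1f} ft")
print(f"back-wall bounce arrives {extra_delay_ms:.1f} ms after the direct sound")
print(f"that is only {extra_delay_ms / period_ms:.2f} of one cycle")
```

Floor, ceiling, and side-wall bounces involve similar or shorter extra path lengths, so before a single 50 Hz cycle has finished, the ears are already receiving a blend of direct and reflected energy from several directions.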

This is a unique combination of factors that does not exist with higher frequencies.

With certain cues, however, we can be clued in to a sound's origin. Such cues include the initial attack of an instrument (which contains high-frequency sound), vibrations in the enclosure, floor, or furniture caused by the low-frequency energy, or the harmonic distortion produced by the transducers themselves. For example, playing a 50 Hz tone will also get you a 2nd harmonic at 100 Hz and a 3rd harmonic at 150 Hz simultaneously.
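As a rough illustration of how much localizable content distortion alone can contribute, here is a small sketch. The 5% and 1% harmonic levels are made-up example figures, not measurements of any particular subwoofer; the point is only that the distortion products land an octave or more above the fundamental, where our ears localize much better:

```python
import math

def harmonic_spl(fundamental_hz, fundamental_spl_db, harmonic_fractions):
    """Return (frequency, SPL) pairs for distortion harmonics.

    harmonic_fractions maps harmonic order -> amplitude relative to the
    fundamental (0.05 means the harmonic is 5% of the fundamental,
    i.e. about 26 dB below it).
    """
    results = []
    for order, fraction in sorted(harmonic_fractions.items()):
        results.append((order * fundamental_hz,
                        fundamental_spl_db + 20.0 * math.log10(fraction)))
    return results

if __name__ == "__main__":
    # Hypothetical case: a 50 Hz tone at 110 dB SPL with 5% 2nd-harmonic
    # and 1% 3rd-harmonic distortion.
    for freq, spl in harmonic_spl(50.0, 110.0, {2: 0.05, 3: 0.01}):
        print(f"{freq:5.0f} Hz distortion product at roughly {spl:.0f} dB SPL")
```

In that hypothetical case the sub is also radiating roughly 84 dB at 100 Hz and 70 dB at 150 Hz – plainly audible in a quiet room, and sitting in the range where interaural cues start working again.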

If you use music or movie material you can definitely localize subs more easily due to the crossover being a slope and not a hard stop. In fact I have had this problem many times and it is why I have reduced my crossover to 60 Hz (70 Hz is not available with my equipment). 80 Hz can work if I use a steeper crossover slope to kill off the subs faster. I have used up to 90 Hz successfully with a steep crossover filter in a larger room with greater distances between all of the components and the seats.

A speaker without bass management won't present this problem because all of the sounds will come from the same apparent location, but pure tones will produce harmonics, the sum of which could push audibility into localizable territory, depending on frequency and level. Either way, the effect is similar: you are getting higher-frequency content from that sub/speaker whether you intended to or not, and these effects can trick you into thinking low frequency sounds are localizable.
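To see why the slope matters so much, here is a minimal sketch comparing idealized Butterworth low-pass responses of different orders. Real crossovers (and the acoustic slopes Toole refers to) also include driver rolloff and room effects, so this is only the textbook part of the picture:

```python
import math

def butterworth_lp_db(freq_hz, fc_hz, order):
    """Magnitude in dB of an ideal Butterworth low-pass filter of the given order."""
    magnitude = 1.0 / math.sqrt(1.0 + (freq_hz / fc_hz) ** (2 * order))
    return 20.0 * math.log10(magnitude)

if __name__ == "__main__":
    probe_hz = 160.0  # well into the easily localized range
    for fc_hz in (60.0, 80.0):
        for order in (2, 4):  # 12 dB/oct vs 24 dB/oct
            att = butterworth_lp_db(probe_hz, fc_hz, order)
            print(f"crossover {fc_hz:3.0f} Hz, {order * 6:2d} dB/oct: "
                  f"{probe_hz:.0f} Hz content is ~{att:5.1f} dB down")
```

With these idealized numbers, a 60 Hz 12 dB/oct filter leaves 160 Hz content about 17 dB down, while an 80 Hz 24 dB/oct filter leaves it about 24 dB down – the same general ballpark – which fits the pattern in the test results listed a little further down, where gentler slopes need a lower crossover to avoid localization.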

Of course, this is my personal anecdotal experience – but if we reference Floyd Toole in Loudspeakers and Rooms for Stereophonic Sound Reproduction, and Loudspeakers and Rooms - Working Together we find the same conclusions:
  • Tests where bass transient signals were used support 60Hz being the limit of localizability.
  • Tests where 2nd order slope crossovers are used support 60Hz.
  • Tests where 4th-8th order slope crossovers are used support 80Hz.
  • The early research on woofer localization used bass transients, bass sine waves, and music for blind testing as well as for measurement. Conclusion: bass transients over 60Hz can be localized. For sine waves, it is higher.
  • The AES Technical Council notes in the Multichannel surround sound systems and operations document AESTD1001.1.01-10 that in small rooms, bass transients under 80Hz cannot be localized to the subs when steep crossovers are used.
And this doesn't even touch on the sighted bias of seeing the subwoofers where they are placed. Another problem altogether.

Next, I will defer to quotes from some audio industry luminaries who have something to say on this subject, in order for our readers to have access to accurate information.

From Loudspeakers and Rooms for Sound Reproduction—A Scientific Review, p. 470:
Floyd Toole said:
As with any subwoofer system, the low-pass filtering must be such that the sound output is attenuated rapidly above the crossover frequency (80 Hz). Excessive output, distortion products, or noises at higher frequencies increase the risk that listeners will localize the subwoofers. A second issue relates to the fact that in order for these systems to function fully, the bass must be monophonic below the crossover frequency. Most of the bass in common program material is highly correlated or monophonic to begin with and bass-management systems are commonplace, but some have argued that it is necessary to preserve at least two-channel playback down to some very low frequency. Experimental evidence thus far has not been encouraging to supporters of this notion (see [72] and references therein). Audible differences appear to be near or below the threshold of detection, even when experienced listeners are exposed to isolated low-frequency sounds. Another recent investigation concludes that the audible effects benefiting from channel separation relate to frequencies above about 80 Hz [73]. (In their conclusion, the authors identify a “cutoff-frequency boundary between 50 Hz and 63 Hz,” these being the center frequencies of the octave bands of noise used as signals. However, when the upper frequency limits of the bands are taken into account, the numbers change to about 71 and 89 Hz, the average of which is 80 Hz.)
David Griesinger, who advocated for stereo bass while working at Harman, wrote:
David Griesinger said:
I believe this model is of high importance for both the study of hearing and speech, and for the practical problem of designing better concert halls and opera houses. The model - assuming it is correct - shows that the human auditory mechanism has evolved over millions of years for the purpose of extracting the maximum amount of information from a sound field in the presence of non-vital interference of many kinds. The information most needed is the identification of the pitch, timbre, localization, and distance of a possibly life-threatening source of sound. It makes sense that most of this information is encoded in the sound waves that reach the ear in the harmonics of complex tones - not in the fundamentals. Most background noise is inversely proportional to frequency in its spectrum, and thus is much stronger at low frequencies than at high frequencies. But in addition, the harmonics - being at higher frequencies - contain more information about the pitch of the fundamentals than the fundamentals themselves, and are also easier to localize, since the interaural level differences at high frequencies are much higher than at low frequencies.
He also acknowledges:
David Griesinger said:
The statement that low frequencies cannot be localized is easily shown to be true when a sine tone is used as a signal.
He goes on to propose some "tricks" and conditions in which a system can be created to localize low bass frequencies, however these are specialized conditions and usually do not exist in home rooms. If you are interested you can read about that in his paper here: http://www.davidgriesinger.com/asa05.pdf

Earl Geddes is one of the leading authorities in loudspeaker design and small room acoustics:
Earl Geddes said:
It is only reasonable that as the frequency falls the differences in level and timing at the two ears must vanish. This is simple physics. As such the spatial resolving power of our two ear system must also vanish. What is not so clear is the speed that this happens. I think that WithTarragon has it correct in that the resolution is pretty well gone at 50 Hz and pretty well apparent at 300 Hz. My question would be "is this in a free field?" where I would certainly assume that it has to be (otherwise what are the conditions?) That means that adding in the effect a small room has - the fact that the sound is now arriving at all kinds of different angles within even a ms. - and the resolution cannot possibly go up - it has to go down. Hence for me, I would find that LF localization in a small room would be pretty poor up to about 100 Hz. It will improve up until the spatial resolution is dominated by the rooms early reflections (which limit the spatial resolution at higher frequencies, aka imaging.)

Now the LF resolution I have stated above assumes a reverberant field. If one is close to the source then there are near-field and direct-field effects that could be more audible than the reverberant field effects. For example, there could be differences in level at the two ears from the 6 dB falloff with distance, or near-field phase effects that differ at the two ears.
Nevertheless, all of this is academic. Even if we could perfectly localize bass down to 20 Hz no matter what, practically no source material would support such a configuration, being encoded with mono bass to avoid phase issues between speakers and complaints from headphone users who do not tolerate bass in one ear. Therefore, instead of pursuing stereo bass, Harman developed Sound Field Management – because the research is clear that minimizing modal activity gives an immediate and positive effect that translates across all material. In the rare event where a recording is found to have significant bass in only one channel (I have heard some, and with headphones it is not pleasant), is it worth the tradeoff of living with unoptimized room modes and phase interactions between your components in order to preserve that channel separation?

All of this follows the same theme: it's not the source of low frequencies themselves that we are locating from a subwoofer in a room, but the higher frequency content on the crossover slope, the harmonics, the mechanical or enclosure noises your subwoofer may be making, and/or sympathetic vibrations that accompany the waves that tip us off to their apparent source. Speakers can not create pure tones, and the localization of instruments is determined mostly by their overtones and higher-frequency transient attacks, not their fundamentals. Music contains complex sounds that inevitably have plenty of localizable content in their spectrum. If they didn't, localization would be exceedingly difficult due to the way our hearing works at low frequencies and how rooms confound the situation further.

No one can dictate how an enthusiast should set up a system, including setting it up to try to enjoy certain kinds of spatial cues that accompany their bass. However, they should understand the underlying physics of what they are actually hearing, including any potential shortcomings of setting up a system to do so. They should also understand the best practices for achieving good bass in a room and make decisions accordingly.
Further reading

Perception of infrasound:

Although I've never looked into the topic in that much detail, I knew bass was considered non-directional under 80 Hz or so, with practical experience matching the accepted principle. I have high quality, low distortion subs crossed at 80 Hz, nowhere near my main speakers. I don't gamble, but I would probably bet money that no one would be able to point to my subs with music. However, if we know where they are, we know the ears/brain are easily fooled by visual cues. I can imagine someone with a strong, long-held belief that they can determine where low bass comes from could easily fool themselves, as they obviously know where the subs are in their own room.
I run dual subs as well, but they are set in perfectly symmetrical positions in relation to the room and the main listening position. I use right-angle RCA adapters for cable management of my subs, and somehow the cable on my right sub had fallen out, probably due to vibration. While I couldn't quite put my finger on it, I knew that something was off. Though I think this example does not exactly match the conditions, as we use multiple subs to smooth out frequency response. So technically I determined where the bass came from (or lack thereof), due to the different frequency response profiles each sub emits. Well, that, and that the output level was lower. Though I will say that pinpoint location is nearly impossible, as each alone simply feels like a radiation of sound; I can really only discern whether it emanated from roughly the left or right side of my body.
That is certainly possible as we are very sensitive to small changes in bass level and tone. If you didn't notice a difference, you would probably wonder why you bought two subs. If one of my subs became unplugged I would wonder pretty quickly what is wrong and start looking. In fact, a similar thing happened to me recently and I eventually discovered that I had bumped some switches for damping and extension on one of my subs.

Another possibility is the small tactile effects coming from that general direction. Also consider that subs will still play sounds above your crossover point, which are definitely localizable, unless you cross them at 60 Hz or lower, or at 80 Hz with a sharp filter rolloff. On my Rythmiks I can double up the crossover slope if desired, known as "cascading crossovers". I don't currently do that, but I have in the past in order to achieve a higher crossover without negative effects. I've been able to go up to 100 Hz in a larger room and still have a pretty clean-sounding sub, as long as I switch the sub to add its own low-pass filtering on top of the AVR's.

The LPF for LFE setting is another source to look at. During movie playback, the LFE channel has content up to 120 Hz (well, some films have some junk at higher frequencies but the general practice is to filter them at 120 Hz in the studio). So you'll get LFE up to 120 Hz in addition to the redirected bass from your speakers at your lower crossover point. It is this low pass filter for the LFE channel that can be a source of localization during movies. Some find a benefit in setting it a bit lower in order to attenuate those sounds further.
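As a rough sketch of why the LFE low-pass matters independently of the speaker crossover: the sub's output is the sum of two paths, and only one of them is governed by the crossover you set for your speakers. The idealized 4th-order filter shapes below are assumptions just to show the bookkeeping, not the exact filters any particular AVR uses:

```python
import math

def lp_db(freq_hz, fc_hz, order=4):
    """Idealized Butterworth low-pass attenuation in dB."""
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (freq_hz / fc_hz) ** (2 * order)))

def sub_input_paths(freq_hz, speaker_xover_hz, lfe_lpf_hz):
    """Attenuation a tone at freq_hz sees on each path into the subwoofer."""
    redirected_db = lp_db(freq_hz, speaker_xover_hz)  # bass-managed speaker content
    lfe_db = lp_db(freq_hz, lfe_lpf_hz)               # LFE-channel content
    return redirected_db, lfe_db

if __name__ == "__main__":
    probe_hz = 110.0  # e.g. an upper-bass effect or a deep male-voice fundamental
    for lfe_lpf_hz in (120.0, 80.0):
        redirected_db, lfe_db = sub_input_paths(probe_hz, speaker_xover_hz=60.0,
                                                lfe_lpf_hz=lfe_lpf_hz)
        print(f"{probe_hz:.0f} Hz: redirected bass ~{redirected_db:5.1f} dB, "
              f"LFE path with {lfe_lpf_hz:.0f} Hz LPF ~{lfe_db:5.1f} dB")
```

In this hypothetical setup, a 110 Hz effect in the LFE channel reaches the sub nearly unattenuated through a 120 Hz LFE low-pass even though the speakers are crossed at 60 Hz, which is why lowering the LPF for LFE can reduce localization – at the cost of discarding some intentional LFE content between the two settings.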

Anyway, your cables shouldn't be falling out! If you're looking for an upgrade, take a look at these I found last year and am using on my subs. Top quality. The connections are super tight and not going anywhere. A bonus is the cable is nice and flexible and easy to route around the room, plus it rejects noise very well. You can specify different colors if you wish to differentiate your connections. I chose red for sub 1 and white for sub 2 for the RCA's at the back of the AVR but kept the cables black. If I ran my two sub cables next to each other, I might have colored one of them to tell them apart. Either way, you have options.

I run every channel in my system full-range. It sounds better this way.
Wow, that's a looooong post man ... you have put some serious thought and research into this topic (y)

I have read somewhere (can't recall where, though) that they tested a number of people in a room to determine at what frequency they could correctly identify the location of the sound. They could all localize the sound above 180 Hz and nobody could below 80 Hz. I don't really know how such a test was conducted, or even who did it and how many subwoofers were used to get the result.

I can tell you that I have 4 subwoofers integrated together with REW+miniDSP and have them crossed over to my mains (and the rest of my speakers) at the rather high cross-over frequency of 130 Hz. Tried really hard to pinpoint where the bass was coming from with many movie and music tracks and could not! It's possible that if I only had 1 sub in the room I might have. I suspect multiple subs may allow you to increase the popular THX threshold cross-over well over 80 Hz.
The two most interesting things (for me personally) that I got out of the post and linked material: the first was the role of distortion in promoting localization, yet another argument for sufficient subwooferage in your system, especially if all subs will not be up front.

The other was the effect of precedence (Haas effect) in apparent localization. When dealing with localization issues the instinctive step is to reduce output (in whole or in part) of the offending sub, but a better first step often is to reduce distance/add delay, as noted here-

Subwoofer being Directional | AVS Forum
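For anyone wanting to experiment with the delay approach, here is a small sketch of the distance-to-time bookkeeping. The assumption that increasing a channel's distance setting makes the AVR play it earlier (because it delays every channel to match the farthest one) is the typical convention, but the exact behavior varies by brand:

```python
SPEED_OF_SOUND_M_S = 343.0
METERS_PER_FOOT = 0.3048

def distance_to_delay_ms(distance_ft):
    """Acoustic travel time for the given distance, in milliseconds."""
    return 1000.0 * distance_ft * METERS_PER_FOOT / SPEED_OF_SOUND_M_S

if __name__ == "__main__":
    # How much a change in the AVR distance setting shifts a channel in time.
    for delta_ft in (1, 2, 4, 8):
        print(f"a {delta_ft} ft change in the distance setting shifts that channel "
              f"by ~{distance_to_delay_ms(delta_ft):.2f} ms")
    # For scale: one full cycle at 80 Hz lasts 12.5 ms.
```

A few feet of distance change is therefore a few milliseconds, which is a meaningful fraction of a cycle in the crossover region – enough to change both how well the sub splices with the mains and how strongly precedence cues point at it.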
Wow, that's a looooong post man ... you have put some serious thought and research into this topic (y)
Thank you. I think it's important to summarize what is known so we don't waste too much time re-inventing the wheel. Knowledge is power.

I can tell you that I have 4 subwoofers integrated together with REW+miniDSP and have them crossed over to my mains (and the rest of my speakers) at the rather high cross-over frequency of 130 Hz. Tried really hard to pinpoint where the bass was coming from with many movie and music tracks and could not! It's possible that if I only had 1 sub in the room I might have. I suspect multiple subs may allow you to increase the popular THX threshold cross-over well over 80 Hz.
I think you are probably right when there are multiple sources. I also think the fact that you couldn't localize anything speaks to doing a good job with integration and setup.
The two most interesting things (for me personally) that I got out of the post and linked material: the first was the role of distortion in promoting localization, yet another argument for sufficient subwooferage in your system, especially if all subs will not be up front.

The other was the effect of precedence (Haas effect) in apparent localization. When dealing with localization issues the instinctive step is to reduce output (in whole or in part) of the offending sub, but a better first step often is to reduce distance/add delay, as noted here-

Subwoofer being Directional | AVS Forum
Interesting statement. I think Mark's note about time mattering more than level at these frequencies (the stuff coming through on the crossover slope, say 80-320 Hz) is right in line with the research cited above, which notes that level differences are used to determine direction mostly from ~770 Hz and up. That is where, due to the combination of the size of our head, the interaural distance, and the wavelengths involved, there is a shading effect: our head itself blocks enough sound energy that the far ear gets less SPL, giving us a clue to the sound's direction (in addition to timing differences, up to a point). This does not work below ~770 Hz, so timing differences are used instead, and these gradually lose accuracy as you enter the bass region due to the limited resolution afforded by an interaural distance of only 22-23 cm.

Seeing that his post was from 2008: in modern systems with time-aligned subwoofers and room EQ, I'm not sure we would want to adjust our sub timing now, as we'd risk making the response worse. I think it would be best to stick to other known techniques to reduce localization, which should be done anyway.
Actually I think modern tools enable making adjustments like this and measuring their effect easier than ever. He does acknowledge too large an adjustment can result in a degradation of bass clarity.
Later, in March of 2010, Mark Seaton posted the following in response to my situation:

Later still, we talked about ADDING distance to improve the splice between speakers and subs.
This later became the basis for the "Subwoofer Distance Tweak". (See attachment).

Craig

Good stuff. It's a shame Mark doesn't post here anymore. It's a loss to the forum.
Here are a few thoughts to think about, and they will go against the flow. While I could not locate my front sub near my front speakers when trying many front locations, I could locate the rear one when it was in the right rear corner. While that could be attributed to the sub's 80 Hz LPF not being a brick wall, so that some higher frequencies were still being played, it still revealed the general location of the sub, of where the sound came from.

On the other hand, since I live in an area with many thunderstorms, even on sunny days, most of the time my coworkers and I hear the low rumble when a thunderstorm is approaching, way before the storm cracks nearby, and in most cases a few of us look in the general direction of the rumble to see where the storm is coming from or will pass by our location. According to this site, the rumble is around 40-50 Hz:
2003-08-29 · If you're just looking for sound frequencies--the chest rumbling resonance is in the 70-80 Hz range. The lower grumblings that add so much "air" to the low end are in the 40-50 Hz range, and the sharp crack of thunder is usually in the 2k area with a mid range lower harmonic in the 600-800 Hz range to add a thickness to the crack. Remember--a crack of thunder is not sibilant--but a streak of lightning …

In conclusion, some will hear where their sub(s) are located while others will not. What matters is what works best for us, no matter what our beliefs are on this one.

Darth
I ran the sound of thunder through REW's RTA a while back. The normal cadence is a relatively short "crack" followed by a longer "rumble". The "crack" was usually a bundle of frequencies from about 300-800 Hz, the "rumble" a bundle of approximately 50-250 Hz. Either could be localized, I'm sure.


Subwoofer (powered) for Distant Thunder reproduction | AVS Forum
While I do not disagree with you, around here most of our storm rumble is heard well before the crack of the approaching storm, in most cases.

Darth
The research seems pretty clear that our ears are not spaced far enough apart to determine the direction of sound either by level or timing differences below around 60 Hz, and only with very poor accuracy below maybe 100 Hz, but I would accept that there is some variation between indoors and outdoors because different variables are in play. For example, indoors seems harder due to more reflections. However, outdoors isn't a reflection-free zone – we still have the ground. Still, I would expect low thunder to contain a spectrum of sound rather than being a very narrow tone. Living with lots of thunderstorms in the central USA, 50-250 Hz seems reasonable.

I will pay much closer attention the next time I am able to.
On the topic, here is a really good way to easily hear whether your subwoofer crossover is too high, using movie dialogue as our canary in the coal mine:
  1. Physically disconnect your front left, center, and right speakers. Do not make any configuration changes – leave the speakers enabled and set to small.
  2. Play a movie that has dialogue, particularly males with deeper voices. There is some good demo material here.
  3. When people are talking, it should sound like your system is muted, save for LFE and occasional surround speaker effects.
  4. If you hear anything vocal related coming from your subwoofer(s), your crossover is not optimal for clear dialogue, which is also a bellwether for music clarity.
With a 60 Hz crossover, I get absolute silence in the room, except for surround and LFE effects. There is no drone at all coming from the sub from dialogue. With 80 Hz I do get a little, unless I can roll off the sub faster on the top end.

To improve the clarity of your system, try reducing your crossover, or increasing the slope if you have the ability, as mentioned in the above references by Floyd Toole. 80 Hz with a cascading crossover will probably work well if that is all the adjustability you have; I imagine 80 Hz LR4 (24 dB per octave on either side) would be ideal. Otherwise, 60 Hz seems safe with the standard 12 dB from the sub and 12 dB from the typical AVR (some receiver brands use different filtering). There are rough numbers in the sketch below.
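To connect the dialogue test with the slope math earlier in the thread, here is a small sketch estimating how far down a deep male-voice fundamental ends up at the sub under the settings discussed. The 110 Hz probe tone and the idealized Butterworth shapes are assumptions for illustration, not measurements of any particular AVR or sub:

```python
import math

def lp_db(freq_hz, fc_hz, order):
    """Idealized Butterworth low-pass attenuation in dB."""
    return 20.0 * math.log10(1.0 / math.sqrt(1.0 + (freq_hz / fc_hz) ** (2 * order)))

if __name__ == "__main__":
    voice_hz = 110.0  # a deep male-voice fundamental
    settings = [
        ("60 Hz, 24 dB/oct total", 60.0, 4),
        ("80 Hz, 12 dB/oct", 80.0, 2),
        ("80 Hz, 24 dB/oct (cascaded)", 80.0, 4),
    ]
    for label, fc_hz, order in settings:
        print(f"{label:30s}: {voice_hz:.0f} Hz reaches the sub "
              f"~{lp_db(voice_hz, fc_hz, order):5.1f} dB down")
```

That roughly 10-15 dB difference between the 60 Hz and 80 Hz settings is consistent with hearing a little dialogue drone from the sub at 80 Hz and essentially none at 60 Hz.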
Thanks, but how do you square that with choosing the crossover that gives the smoothest FR? My crossover was determined by taking REW sweeps and figuring out which setting had the smoothest response through the crossover region.
Indeed that is another can of worms. This assumes you have the ability to choose, which likely means freedom from major room modes due to good sub placement. Otherwise your best bet is to try flipping the sub into 24 dB / oct with the 80/24 switch on Rythmiks. I believe PSA and HSU offer a similar function. I'm unsure if room correction needs to be run again, but it might be needed. That is the only thing stopping me from doing it right now. My system sounds really good at 65 Hz (I mean really good) so I'm reluctant to put in the work. Maybe when I add more absorption to the front of the room to improve my upper bass/midrange issues.

I am also unsure if the drop in SPL at the crossover caused by changing this setting on the sub will be picked up by room correction. I have a feeling that it won't, so I'm unsure the best way to remedy that. A system that builds the crossover into the corrections like Dirac Live Bass Control would probably work, but I'm getting into territory I don't have experience with there.

I would welcome any comments that could provide some insight on the effects of a cascading crossover on crossover smoothness. @darthray?