Originally Posted by Alimentall
I brush them off because they're made up and/or incorrect.
OK, since I provided some of the references, I would like to know which ones are "made up" and which are incorrect. So that you don't need to go searching, here they are again:

Perceptual recalibration in human sound localization: Learning to remediate front-back reversals
from The Journal of the Acoustical Society of America -- July 2006 -- Volume 120, Issue 1, pp. 343-359.
The efficacy of a sound localization training procedure that provided listeners with auditory, visual, and proprioceptive/vestibular feedback as to the correct sound-source position was evaluated using a virtual auditory display that used nonindividualized head-related transfer functions (HRTFs). Under these degraded stimulus conditions, in which the monaural spectral cues to sound-source direction were inappropriate, localization accuracy was initially poor with frequent front-back reversals (source localized to the incorrect front-back hemifield) for five of six listeners. Short periods of training (two 30-min sessions) were found to significantly reduce the rate of front-back reversal responses for four of five listeners that showed high initial reversal rates. Reversal rates remained unchanged for all listeners in a control group that did not participate in the training procedure. Because analyses of the HRTFs used in the display demonstrated a simple and robust front-back cue related to energy in the 3-7 kHz bandwidth, it is suggested that the reductions observed in reversal rates following the training procedure resulted from improved processing of this front-back cue, which is perhaps a form of rapid perceptual recalibration. Reversal rate reductions were found to generalize to untrained source locations, and persisted at least 4 months following the training procedure. ©2006 Acoustical Society of America
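To make that 3-7 kHz cue concrete, here's a rough sketch of how you would measure it. This is my own illustration, not code from the paper, and the impulse responses are just random placeholders where you would load real measured HRTFs:

[code]
import numpy as np

FS = 44100  # sample rate in Hz

def band_energy(ir, fs=FS, lo=3000.0, hi=7000.0):
    """Total spectral energy of an impulse response between lo and hi Hz."""
    spectrum = np.fft.rfft(ir)
    freqs = np.fft.rfftfreq(len(ir), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return np.sum(np.abs(spectrum[band]) ** 2)

# Placeholder impulse responses -- substitute real measured HRTFs here.
hrtf_front = np.random.randn(512) * np.hanning(512)
hrtf_back = np.random.randn(512) * np.hanning(512)

diff_db = 10 * np.log10(band_energy(hrtf_front) / band_energy(hrtf_back))
print(f"front vs. back 3-7 kHz energy difference: {diff_db:+.1f} dB")
[/code]

With real measurements, the front-back difference in that band is the cue the authors suggest listeners learn to use.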
The Role of Dynamic Information in Virtual Acoustic Displays
from Advanced Displays and Spatial Perception Laboratory
Currently, we have data for baseline performance of localization of spatialized sound using static (non-head-coupled) anechoic (echoless) sounds. Such stimuli tend to produce increased localization errors (relative to real sound sources), including increased reversal rates (sound heard with a front-back or up-down error across the interaural and horizontal axes), decreased elevation accuracy, and failures of externalization (Figure 3.1a). Such errors are probably due to the static nature of the stimulus and the inherent ambiguities resulting from the geometry of the head and ears (the so-called cones of confusion; Figure 3.1b). The rather fragile cues provided by the complex spectral shaping of the HRTFs as a function of location (Figure 3.1c) are essentially the only means for disambiguating the location of static sounds corresponding to a particular cone of confusion. With head motion (Figure 3.1d), however, the situation may improve greatly; it has been hypothesized that the listener can disambiguate front-back locations by tracking changes in the size of the interaural cues over time and that pinna cues are always dominated by interaural cues (Wallach, 1939; 1940). Wallach's early work with real sound sources also suggests that head motion may be a factor in externalization.
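Wallach's head-motion idea is easy to demonstrate with a toy model. Here's my own back-of-the-envelope sketch using the standard Woodworth spherical-head ITD approximation (not anything from the lab's own work): a front source and its rear mirror image produce identical ITDs while the head is still, but turning the head moves their ITDs in opposite directions.

[code]
import numpy as np

HEAD_RADIUS = 0.0875    # meters, roughly an adult head
SPEED_OF_SOUND = 343.0  # m/s

def itd(source_az_deg, head_yaw_deg=0.0):
    """Interaural time difference (seconds) for a distant source on the
    horizontal plane, Woodworth spherical-head approximation. ITD depends
    only on the lateral angle, hence the cone-of-confusion ambiguity."""
    rel = np.radians(source_az_deg - head_yaw_deg)
    lateral = np.arcsin(np.sin(rel))  # fold onto the cone of confusion
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (lateral + np.sin(lateral))

front, back = 30.0, 150.0  # mirror-image azimuths, degrees (0 = ahead)
print(np.isclose(itd(front), itd(back)))  # True: ambiguous while still
for yaw in (0.0, 10.0):  # now turn the head 10 degrees to the right
    print(f"yaw {yaw:4.1f}: front {itd(front, yaw) * 1e6:6.1f} us, "
          f"back {itd(back, yaw) * 1e6:6.1f} us")
# Turning toward the front source shrinks its ITD while the rear
# source's ITD grows; the direction of change reveals the hemifield.
[/code]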
We propose that localization errors such as reversal rates, poor elevation accuracy, and the proportion of non-externalized stimuli will be reduced (relative to baseline conditions using static, non-reverberant synthesis techniques) by enabling head and/or source movement. The methods used were based on standard absolute judgement paradigms in which the subjects' task was to provide verbal estimates of sound source azimuth, elevation, and distance. Acoustic stimuli consisted of broadband stimuli (e.g., continuous noise, noise bursts) which were filtered by a Convolvotron or similar spatial audio system. Each study included six or more paid adult volunteer subjects with normal hearing in both ears as measured by a standard audiometric test. In general, the experimental designs were within-subjects, repeated-measures factorial designs with at least 5 repetitions per stimulus condition, tested over a range of locations intended to sample the stimulus space as fully as practicable.
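For anyone unfamiliar with what a Convolvotron-class renderer actually does, the core operation is just convolving the source signal with the left- and right-ear head-related impulse responses (HRIRs) for the target direction. A bare-bones sketch, mine and not the real system, with placeholder HRIRs:

[code]
import numpy as np
from scipy.signal import fftconvolve

FS = 44100  # sample rate in Hz
noise_burst = np.random.randn(FS // 4)  # 250 ms of broadband noise

# Placeholder HRIRs for one target direction; a real renderer looks
# these up in a measured (ideally individualized) HRTF set.
hrir_left = np.random.randn(256) * np.hanning(256)
hrir_right = np.random.randn(256) * np.hanning(256)

# Spatialization is per-ear convolution with that direction's HRIRs.
left = fftconvolve(noise_burst, hrir_left)
right = fftconvolve(noise_burst, hrir_right)
binaural = np.stack([left, right], axis=1)  # 2-channel headphone signal
[/code]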
3-D Audio Using Loudspeakers, William Gardner, Page 106

Figure 5.3 shows histograms of judged azimuths at each target azimuth on the horizontal plane, across all subjects, for both headphone and loudspeaker presentation. Error-free localization would result in a straight line of responses along the y = x diagonal. The histograms clearly show both response variation and front-back reversals. With headphones, almost all the target locations are perceived in the rear. With loudspeakers, most of the front targets are correctly perceived in front, but many of the rear targets are also perceived in front.
Front-back reversal percentages for horizontal targets and all targets are given in Table 5.1 on page 109. The pattern of front-back reversal is very specific to the individual subject; this is shown in Figure 5.4, which is a bar graph of the individual reversal percentages for horizontal targets. With headphones, only Subject D reversed a rear location to the front, and Subject D had the lowest percentage of front-back reversals. With loudspeakers, Subject D reversed all rear targets to the front, and reversed none of the front targets. Subjects A and B, on the other hand, have a propensity to perceive the stimulus from the rear.
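For what it's worth, here is how a front-back reversal is usually scored, as I understand the convention (this is not Gardner's code, and the trial data are made-up numbers purely for illustration): a response counts as a reversal when the judged azimuth lands in the opposite front/back hemifield from the target, regardless of left/right error.

[code]
import numpy as np

def is_reversal(target_az, judged_az):
    """Azimuths in degrees: 0 = front, 90 = right, 180 = back.
    True when target and judgement fall in opposite hemifields."""
    in_front = lambda az: np.cos(np.radians(az)) > 0
    return in_front(target_az) != in_front(judged_az)

# Hypothetical (target, judged) pairs for one subject:
trials = [(30, 150), (30, 25), (150, 160), (150, 40), (0, 175)]
rate = np.mean([is_reversal(t, j) for t, j in trials])
print(f"front-back reversal rate: {rate:.0%}")  # 60% for these trials
[/code]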
So John, which of these is "made up" and which is wrong? After reading these, it's *very* clear to me that the human ear can be tricked into hearing something behind us as originating from in front of us. What is less clear to me is what impact this phenomenon has in multi-channel audio systems.