Originally Posted by Tom Danley
I am not sure I did mention that specifically. I was able to use the coax driver in an SH100 because the final radiation angle was the same as the cone's and the transition error at the hf exit was smallest of the possible choices.
The SM- series does use a coax driver but in a different way as you noticed. The cone is a compression driver as you suspect.
This configuration was one I wanted to get to eventually, but it took a while to be able to get into molding horns like this. You can get the gist of how it works from the patent application, which shows a cutaway view; examine Fig. 7 here: http://www.google.com/patents?id=exX...page&q&f=false
As one can see in the general description, the small volume between the cone and throat acts like an acoustic low-pass filter. By sizing these for the frequency involved, one can place the low-pass corner just above the highest operating point for whichever range driver that is.
By low-passing the mid (and low ranges in this case) before driving the horn, one has greatly reduced the inevitable harmonics the drivers produce; since some to many of those harmonics fall above the low-pass corner, the harmonic distortion is reduced. At the same time, the horn increases the radiation efficiency of each driver by increasing the acoustic load on it, which reduces the motion required at a given SPL.
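To put a rough number on that low-pass corner: a common first-order model (my sketch, with made-up dimensions, not figures from any actual product) treats the front-chamber volume as an acoustic compliance working against the throat's resistive load, giving f_c = c·S_t/(2πV):

```python
import math

def lowpass_corner(throat_area_m2: float, cavity_volume_m3: float,
                   c: float = 343.0) -> float:
    """Corner frequency (Hz) of the cavity/throat acoustic low-pass:
    cavity compliance V/(rho*c^2) against throat resistance rho*c/S_t,
    which reduces to f_c = c * S_t / (2*pi*V)."""
    return c * throat_area_m2 / (2 * math.pi * cavity_volume_m3)

# Illustrative numbers: 10 cm^2 throat, 0.2 liter front chamber
f_c = lowpass_corner(10e-4, 0.2e-3)
print(f"low-pass corner ~ {f_c:.0f} Hz")  # ~273 Hz with these dimensions
```

Shrinking the chamber or enlarging the throat pushes the corner up, which is how the port sizing is matched to each driver's range.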
I had meant to reply to Penngray about the HOM question and such but have been working on something. I will try now.
I wouldn't say I disagree with Earl, but we do look at things from different perspectives. Earl is one of the very sharpest people I have ever met, and I envy Earl's ability to look at, think about, talk about, and imagine the math. In my head is a large cavity where the math co-processor and math ability would have gone; actually, I have had some of my largest leaps while working with people like Earl on hard technical problems.
If you want HOMs, take a very wide-angle horn and drive it with a cheap 2-inch driver; in other words, that is the usual thing not to do.
I will explain how I see it; this is based largely on observation while trying to develop acoustic levitation transducers. These usually operated at 22 kHz and were required to produce 160+ dB for the levitation.
Also, I will speak in practical terms. By that I mean that a distortion count may never reach zero, but once below some level X it is irrelevant; or that theory requires bandwidth from DC to infinity to reproduce a square wave, but making one that looks perfect on an oscilloscope only requires about 10 times down to 1/10 the center frequency.
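To illustrate the square-wave point, here is a quick sketch (my illustration): summing the odd harmonics only up to about 10x the fundamental already gives a wave that would look square on a scope, apart from the small Gibbs ripple at the transitions:

```python
import math

def square_partial(t: float, f0: float, max_harmonic: int) -> float:
    """Fourier partial sum of a unit square wave at frequency f0,
    using odd harmonics up to max_harmonic."""
    s = 0.0
    for k in range(1, max_harmonic + 1, 2):  # odd harmonics only
        s += math.sin(2 * math.pi * k * f0 * t) / k
    return 4 / math.pi * s

# One period of a 1 Hz square wave built from harmonics 1..9,
# i.e. bandwidth only up to ~10x the fundamental.
samples = [square_partial(i / 1000.0, 1.0, 9) for i in range(1000)]
print(max(samples))  # flat tops near 1.0, plus ~18% Gibbs overshoot
```

Adding more harmonics narrows the ripple but the shape is already recognizably "perfect" at a glance, which is the practical point.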
Sound, as in hi-fi, is sort of like a set of Russian dolls where the largest one (called 20 Hz) is 1000 times bigger than the smallest one (they call her the dangerous one, 20 kHz). A given event at one frequency is the same at another, accounting for scale, if you follow.
We become aware of the first transition when we place two or more sources of sound, like subwoofers, close together. It is the closest thing to a free lunch in audio: when you double the number of sources, you have increased the power by four. This is because you have doubled the drive power and you have doubled the radiating area, which doubles the radiation efficiency; 2 x 2 = 4, or 6 dB.
As long as the sources all remain less than about 1/4 wavelength apart edge to edge, they combine coherently into one new source. If you measured the total sound power (going around the surface of an invisible sphere with a sound level meter) and then reversed one of the two subs, you would find that, because of coherent addition, you have near-total cancellation (the principle behind active sound cancellation).
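Both the free-lunch arithmetic and the polarity-reversal test can be sketched as simple phasor sums (my illustration, for the coherent near-field case where the sources act as one):

```python
import cmath
import math

def combined_pressure(phases_rad) -> float:
    """Magnitude of the complex sum of unit-amplitude sources,
    i.e. coherent pressure addition at a single point."""
    return abs(sum(cmath.exp(1j * p) for p in phases_rad))

# Two closely spaced subs in phase: pressure doubles, +6 dB
in_phase = combined_pressure([0.0, 0.0])
print(f"{20 * math.log10(in_phase):.1f} dB")  # 6.0 dB

# Reverse one: coherent addition becomes near-total cancellation
reversed_one = combined_pressure([0.0, math.pi])
print(reversed_one)  # essentially zero
```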
As one increases the spacing acoustically (by leaving the locations the same but raising the frequency), one finds that once the two sources are, say, 1/2 wavelength apart, one is into a new form of behavior called an interference pattern. These are made of local regions of addition and cancellation and, when viewed as a polar plot, have lobes and nulls.
In this interference region, if one measured the total radiated power and then reversed one source, the total energy may not be affected at all; only the interference pattern changes. Think of it this way: if you reverse one of your hi-fi speakers, they don't cancel each other because they are so far apart (acoustically) that they don't add coherently, and (hopefully) what you hear is a funny empty feeling in the middle and no bass (at bass frequencies the wavelengths are large enough that the pair does add coherently, so the reversed pair cancels).
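A tiny sketch of that lobing (again my illustration): the normalized far-field magnitude of a pair of point sources spaced d apart, showing how nulls appear once the spacing reaches a half wavelength:

```python
import math

def pair_pattern(d_over_lambda: float, theta_deg: float,
                 reversed_one: bool = False) -> float:
    """Normalized far-field magnitude of two point sources spaced
    d apart, at angle theta from broadside. Reversing one source
    adds a 90-degree offset to the array factor."""
    phase = math.pi * d_over_lambda * math.sin(math.radians(theta_deg))
    if reversed_one:
        phase += math.pi / 2
    return abs(math.cos(phase))

# Half-wavelength spacing, both in phase:
print(pair_pattern(0.5, 0))    # full output broadside
print(pair_pattern(0.5, 90))   # null along the line of the pair
# Reverse one source: the null moves to broadside instead
print(pair_pattern(0.5, 0, reversed_one=True))
```

Reversing one source just swaps where the lobes and nulls sit; averaged over all angles the radiated power is essentially the same, which is the point above.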
Sound can bend around corners just fine within some boundaries.
Sound can bend around a low-frequency horn just fine; I have passed 22 kHz sound through 3 feet of copper hypodermic tubing wrapped around a coffee cup with no problem. This bending works when the inner and outer acoustic paths through the bend differ by less than about 1/4 wavelength, start to finish.
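The 1/4-wavelength bend rule is easy to check numerically. A sketch (the tubing dimensions here are my guesses for small-bore hypodermic tubing, not measured values):

```python
import math

def bend_ok(inner_radius_m: float, outer_radius_m: float,
            bend_angle_deg: float, freq_hz: float,
            c: float = 343.0) -> bool:
    """True if the inner-wall vs outer-wall path-length difference
    through a bend stays under a quarter wavelength."""
    angle_rad = math.radians(bend_angle_deg)
    path_diff = (outer_radius_m - inner_radius_m) * angle_rad
    return path_diff < (c / freq_hz) / 4

# Sub-millimeter bore wrapped once around a coffee-cup radius:
# even at 22 kHz (quarter wave ~3.9 mm) the rule is satisfied.
print(bend_ok(0.040, 0.0405, 360, 22000))   # True
# A wide 8 cm duct through a 90-degree bend fails at 22 kHz:
print(bend_ok(0.040, 0.080, 90, 22000))     # False
```

The same arithmetic explains why a folded bass horn is fine at low frequencies but smears the top of its range.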
Here is an example of a device I developed called a Paraline, used as an acoustic lens. We use it in a couple of our products, but the illustration is better here, in a design built by some friends under license. In this case it is used to convert a point source into a uni-phase plane wave (for use in a line array in live sound). Note how the sound path bends and converges, with all paths being equal: http://www.vtcproaudio.com/paraline02.html
We all know that, say, a 30 Hz horn requires a wall-sized mouth to be ideal, and this is true for one flown in by helicopter. Theory requires, either by boundary reflection or an actual mouth, that the mouth circumference be about one wavelength at that low corner frequency.
The reason for that size is the radiation resistance curve: the curve goes from sloped to flat in this region, so continuing the horn past this point has no significant effect on efficiency.
Thus, an octave higher, one finds the end of the region of impedance transformation has moved up the horn toward the throat, to the point where the horn is about 1 wavelength in circumference, and so on.
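Putting numbers on the mouth-sizing rule (a sketch of the circumference-equals-one-wavelength rule of thumb):

```python
import math

def ideal_mouth_diameter(f_low_hz: float, c: float = 343.0) -> float:
    """Mouth diameter whose circumference equals one wavelength
    at the horn's low corner (the classic sizing rule)."""
    return (c / f_low_hz) / math.pi

d30 = ideal_mouth_diameter(30)
d60 = ideal_mouth_diameter(60)
print(f"30 Hz horn mouth ~ {d30:.2f} m across")  # ~3.6 m: wall sized
print(f"60 Hz horn mouth ~ {d60:.2f} m across")  # an octave up: half
```

Each octave up halves the required mouth, which is why the active region keeps retreating toward the throat as frequency climbs.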
Octaves above the low cutoff, the active region is well back in the horn, but the unused part now serves to confine the radiation angle. In Don Keele's paper "What's So Sacred About Exponential Horns?" we have the first steps into designing a horn's directivity (if you're still reading this far, Google his paper later). In it he also comes up with a rule of thumb that defines the frequency where a horn mouth loses pattern control, based on its dimension and angle.
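As I recall it, that thumb rule is often quoted as f x angle x width ~ 10^6 with the mouth width in inches (about 25,306 with the width in meters); treat the constant as approximate. A sketch:

```python
def pattern_loss_freq(mouth_width_m: float, coverage_deg: float,
                      k: float = 25306.0) -> float:
    """Keele-style rule of thumb: a mouth of width w holds its
    nominal coverage angle down to roughly f = k / (angle * w),
    k ~ 25,306 deg*Hz*m. Below that, the pattern widens."""
    return k / (coverage_deg * mouth_width_m)

# Illustrative: a 90-degree horn with a 0.45 m mouth
f = pattern_loss_freq(0.45, 90)
print(f"holds pattern down to ~ {f:.0f} Hz")  # ~625 Hz
```

Note the trade baked into the formula: for a given mouth size, a wider coverage angle holds pattern to a lower frequency, and vice versa.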
His interest can be seen when you consider that the exponential horn, and others similar in shape, have the above behavior: because of the shape of the horn, as the frequency climbs, the pattern width narrows according to the portion of the horn governing the pattern at that frequency.
Again, I don't visualize math like Earl, but to me his horn does this: it recognizes that as the wavefront becomes acoustically larger, there is a maximum rate of change one can undertake while avoiding secondary radiation (something like edge diffraction). One can get a feel for how this could be when you consider that at 20 kHz, the wavelength is only 5/8 inch!
That means all the impedance transformation has taken place well within the driver, and even a one-inch exit is large enough to have directivity, confining the radiation angle to 60-90 degrees (depending on the driver's internal geometry). You want HOMs? Put a big old driver on a big old 90-degree horn. Picture how well a source several wavelengths across fills the horn with luscious horn sound... ewww, don't do that; it's like finding your cat left you something special in your shoe, except for your ears.
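A quick check on those numbers (my sketch): the wavelength at 20 kHz, and the ka value of a 1-inch exit treated as a piston (ka much greater than 1 means the exit is already directive on its own):

```python
import math

def wavelength(f_hz: float, c: float = 343.0) -> float:
    """Acoustic wavelength in meters."""
    return c / f_hz

def ka(exit_diameter_m: float, f_hz: float, c: float = 343.0) -> float:
    """ka for a piston of the given diameter: (2*pi*f/c) * (d/2)."""
    return math.pi * exit_diameter_m * f_hz / c

lam = wavelength(20000)
print(f"{lam * 1000:.1f} mm")        # ~17 mm, about 5/8 inch
print(f"{ka(0.0254, 20000):.1f}")    # 1-inch exit at 20 kHz: ka ~ 4.7
```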
Frankly, I don't know what to call what I hear with the Synergy horns now. As they got better and better in the measurements, more like one source, they sounded different too. It was not what I expected or would have guessed; the sound got simpler. Also, if I played a voice through a single speaker, it was a snap to hear what direction it came from, BUT it became much harder to tell how far away it was with your eyes closed. I have been calling it "source identity" for lack of a better term.
I have had a TEF machine for about 30 years now and have taken a zillion loudspeaker measurements, as well as measurements of all kinds of goofy stuff. I never saw anything that would explain what happened, and this was technically troubling and became a frequent topic to ponder. Eventually it gradually dawned on me what an explanation might be.
Understand, this is how I see it right now.
We measure with a microphone which is one place in space.
We hear from two points in space, BUT we have learned that all the ripples and comb filtering our ears cause as a function of position are how we hear height, front-to-back, and such: domains seemingly inaccessible with only two reference points. We don't perceive those huge deviations because they are the only thing we have ever known; we don't hear a source as grossly distorted because of what our ears do to it with changing position. In many ways, we do not hear like we measure.
If a speaker presents essentially the same signal to both ears, there is little or nothing for your ears to identify as a source location. However, if the source radiates differences that your ears can localize, then the physical location in depth is easy to identify, even with your eyes closed.
On the other hand, take the grille off an SH-50 and listen to a voice through it: you can literally walk up and put your head into it, and it always sounds like it's floating somewhere in front of you.
Now, ironically, that was sort of a side effect of trying to make all the drivers combine coherently into one source, letting the horn produce the radiation pattern. All of the problems one faces providing the best quality sound possible in a large space are so much larger than in the home, which is why commercial sound has never been as good as home sound.
There, it was clear that, generally, the larger the PA system, the worse it sounded.
Most of that was from the same self-interference, from sources too far apart to add coherently, yet there was no way to make one more powerful coherent source. Anyway, it's taken 12 years to get these horns where they are now; it feels like I've been writing about that long, so I'm going to get back to work now. Hope this helps.