During CES, I didn’t get to spend much time at the Venetian, where most of the high-end audio was. One demo I did hear there was Comhear MyBeam beamforming technology, which creates a binaural soundfield without headphones, and it was quite impressive. Beamforming is a technique in which digital signal processing (DSP) is used to manipulate the output of identical speaker drivers in a linear array in order to control the interference pattern in the sound waves that emanate from them.
As you may recall from high-school physics, when two or more sources of sound waves (or any waves, for that matter) are active at the same time, the waves interfere with one another, increasing the amplitude in some locations (constructive interference) and decreasing it in others (destructive interference). The effect is most obvious with pure sine waves, but it occurs with any waveform.
When two identical speakers emit sine waves, an interference pattern is created. In this illustration, the red and yellow areas are reinforced (high amplitude), while the blue areas are cancelled (low amplitude).
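The pattern in that illustration is easy to reproduce numerically. Here is a minimal sketch (my own illustration, not anything from Comhear) that sums the waves from two identical in-phase sources as complex phasors; the frequency is chosen so the wavelength is exactly 1 m to make the constructive and destructive points easy to pick out:

```python
import math

# Two identical, in-phase point sources. The amplitude heard at a
# point is the magnitude of the summed complex pressures; the
# path-length difference to each source sets the relative phase.
C = 343.0               # speed of sound, m/s
FREQ = 343.0            # Hz, chosen so the wavelength works out to 1 m
WAVELENGTH = C / FREQ   # 1.0 m

def amplitude_at(point, sources):
    """Sum each source's unit-amplitude wave as a complex phasor."""
    total = 0j
    for sx, sy in sources:
        r = math.hypot(point[0] - sx, point[1] - sy)
        phase = 2 * math.pi * r / WAVELENGTH
        total += complex(math.cos(phase), math.sin(phase))
    return abs(total)

speakers = [(-0.5, 0.0), (0.5, 0.0)]   # two drivers 1 m apart

# On the center line the two paths are equal, so the waves reinforce:
print(amplitude_at((0.0, 2.0), speakers))    # ~2.0 (constructive)

# Here the path difference is exactly half a wavelength, so they cancel:
print(amplitude_at((0.25, 0.0), speakers))   # ~0 (destructive)
```

Every point in the room falls somewhere between those two extremes, which is exactly the red-to-blue gradient in the illustration.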
This phenomenon can be controlled by applying DSP to a linear array of identical drivers, allowing narrow beams of sound to be directed at will.
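The simplest form of this control is delay-and-sum beamforming: delay each driver's signal slightly so the wavefronts all arrive in phase in one chosen direction. The sketch below is my own back-of-envelope model under assumed parameters (driver spacing, frequency), not Comhear's actual algorithm:

```python
import math

# Delay-and-sum beamforming sketch. Each driver in a linear array is
# delayed so that its wavefront lines up with the others in one
# chosen direction; elsewhere the waves partially cancel.
C = 343.0         # speed of sound, m/s
N = 12            # number of drivers (matching the demo unit)
SPACING = 0.033   # m between drivers (an assumption for illustration)
FREQ = 4000.0     # Hz

def steering_delays(steer_deg):
    """Per-driver delays (s) that aim the beam steer_deg off axis."""
    theta = math.radians(steer_deg)
    return [n * SPACING * math.sin(theta) / C for n in range(N)]

def array_gain(look_deg, steer_deg):
    """Normalized far-field magnitude in direction look_deg when the
    array is steered toward steer_deg (unit-amplitude drivers)."""
    w = 2 * math.pi * FREQ
    delays = steering_delays(steer_deg)
    theta = math.radians(look_deg)
    total = 0j
    for n, tau in enumerate(delays):
        # Propagation phase toward look_deg, minus the applied delay.
        phase = w * (n * SPACING * math.sin(theta) / C - tau)
        total += complex(math.cos(phase), math.sin(phase))
    return abs(total) / N

# Full gain on the steering axis, heavily attenuated off axis:
print(round(array_gain(20, 20), 2))   # 1.0
print(round(array_gain(-20, 20), 2))  # well below 1
```

Changing the delays re-aims the beam instantly, with no moving parts, which is what lets a system like this put one narrow beam at each ear.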
Listening to a pair of speakers involves some compromise—specifically, sound from the right speaker reaches the right ear as intended, but it also reaches the left ear. Similarly, sound from the left speaker reaches the left ear, but it also reaches the right ear. This is called crosstalk, and it can blur the spatial image while compromising the dimensionality of the stereo source. Of course, you could wear headphones to avoid this, but that’s not always desirable or practical.
Using beamforming, Comhear has developed a system that greatly reduces crosstalk by aiming narrow beams of sound at each ear. This provides the listener with a much more binaural experience, which can include virtual surround localization. The technique preserves important psychoacoustic cues, such as interaural time difference, interaural intensity difference, and interaural phase difference, which are encapsulated in a mathematical formalism known as the head-related transfer function (HRTF).
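To put a number on one of those cues, the interaural time difference can be estimated with Woodworth's classic spherical-head approximation, ITD ≈ (a/c)(θ + sin θ), where a is the head radius and θ the source azimuth. This quick sketch uses a typical assumed head radius; it is a textbook approximation, not part of Comhear's system:

```python
import math

# Woodworth's spherical-head approximation for interaural time
# difference (ITD): the extra path to the far ear is roughly
# a*sin(theta) in free air plus a*theta around the head.
HEAD_RADIUS = 0.0875   # m, a typical assumed head radius
C = 343.0              # speed of sound, m/s

def itd_seconds(azimuth_deg):
    """Approximate ITD for a distant source at the given azimuth."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / C) * (theta + math.sin(theta))

# A source directly ahead produces no time difference; a source
# 90 degrees to one side delays the far ear by roughly 0.66 ms.
print(itd_seconds(0))                       # 0.0
print(round(itd_seconds(90) * 1e6), "us")   # roughly 650-660 us
```

Differences on the order of hundreds of microseconds are all the brain needs to localize a source, which is why preserving these cues matters so much for the spatial illusion.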
The demo system I heard included an Oppo BDP-103D Blu-ray player feeding a Benchmark DAC2 HGC digital-to-analog converter and a small Comhear linear array with twelve 33 mm drivers, each with its own DSP and 2.5-watt amp. I started at a distance of three feet from the array and played several clips from a custom Blu-ray disc. First up was a binaural recording of David Chesky in a giant cathedral, talking as he walked up to the right microphone and whispered into it. The effect was remarkable: it sounded just as it would have if I had been in that cathedral, including the whispering in my ear.
I also listened to several 2-channel clips from the AIX Records catalog, and the soundfield extended far beyond the physical cabinet; it was rather eerie. The soundtrack from one of the Jason Bourne movies (I didn’t write down which one) was reproduced in a completely credible 5.1 surround soundfield.
Next, I listened at a distance of 10 feet, for which the beamforming algorithms had to be changed. The effect was not nearly as pronounced as it had been in the near field, but the soundstage was still wider than the small cabinet.
A company rep told me that one of the first applications of this technology is to mount small units on the backs of seats in an auditorium, giving every audience member the same aural experience instead of one that varies with each seat's position relative to the PA system. But I have no doubt the technology will find its way into consumer products as well. Speaking of which, this is similar to the technology the Lexicon SL-1 speaker uses to adjust its apparent soundstage, which was also introduced at CES; read more about that here.