Within Theatre 8 at AMC 16, Burbank, a total of six QSC SC-424 four-way screen channel loudspeakers — LCR screen channels and LCR height screen channels — are augmented by 42 SR-1030 two-way surround loudspeakers, arrayed as 12 (six per side) side-lower, eight (four per side) side-upper, six rear wall, six top-left, six top-right and four top-center. Low frequencies are handled by an array of four SB-7218 floor subwoofers; two GP 118sw subwoofers suspended from the ceiling carry a derived LFE feed for the surround arrays, low-passed at 60 Hz. A rack of QSC DCA Series amplifiers powers the loudspeakers. All signal processing, including EQ, time alignment and crossovers, plus routing, monitoring, control and calibration, is handled by a QSC Q-Sys Core 500i processor, which uses a series of FIR filters to correct loudspeaker performance. The Q-Sys Core also serves as the rendering engine for MDA object-based soundtracks.
Towards a SMPTE Standard
MDA Cinema Proponents Demo Open-Standard Surround-Sound Format
by Mel Lambert
It will come as no surprise to anybody involved in film and TV post that our industry is rapidly embracing immersive surround-sound technologies. With several hundred movie theatres around the world now capable of replaying Dolby Atmos and/or Barco Auro-3D soundtracks, Digital Cinema Initiatives — a joint venture of the Disney, Fox, Paramount, Sony Pictures, Universal and Warner Bros. motion-picture studios — has turned to the Society of Motion Picture and Television Engineers to help develop an open-format, object-based playback standard for immersive surround. The financial and operational benefits are immediately obvious: the same Digital Cinema Package (DCP) media carrying an object-based soundtrack could play back in any immersive-sound-equipped theatre anywhere in the world.
“Our goal is to develop a single, interoperable distribution file format for immersive sound, which will be an object-based audio essence that can be used within the D-cinema architecture,” explains Brian Vessa, chairman of the SMPTE Technical Committee 25CSS, and executive director of Digital Audio Mastering at Sony Pictures Entertainment. “We are developing a common standardized method of delivering immersive audio to cinema systems regardless of the playback configuration.” Vessa also serves this year as DCI technical chairman, representing Sony Pictures.
A special TC-25CSS Working Group, chaired by Peter Lude, a consultant with Mission Rock Digital, is examining the interoperability of immersive sound systems for digital cinema, and a deliverable file format for the DCP, which is a collection of digital files used to store and convey audio, image and data streams to a theatre. “We need to provide the standardized tools for post-production facilities to prepare a single soundtrack and not a number of discrete mixes,” Lude says. “Film studios and exhibitors want a single format, and to date we have had tremendous support for the standardization process. The working group intends to have a draft standard for immersive sound available within 12 months.”
Two organizations are contributing input on object-based formats to SMPTE: Dolby Laboratories, whose proposal is based on Atmos; and MDA Cinema Proponents Group, which includes DTS, Doremi Laboratories, Ultra-Stereo Laboratories, QSC, Barco and Auro Technologies. The MDA Group’s immersive-surround proposal is based on Multi-Dimensional Audio, an uncompressed PCM sound format that derives from research initiated at SRS Labs and refined by DTS.
The Fairlight 3DAW MDA mixing environment is based on the firm’s Crystal Core Media processor, and provides on-screen 3D panning via a DAW plug-in, as well as comprehensive monitoring functions. A separate tablet control application allows for easy previewing and demonstration of audio output to verify and display the creative results.
To date, the MDA Cinema Proponents Group has held two demonstrations of its proposed format for working group members and other industry professionals at the AMC 16-theatre complex in Burbank. Theatre 8 has been outfitted by QSC Audio Products with a total of 54 behind-the-screen, surround and ceiling loudspeakers, plus subwoofers, to create an audio test bed for replaying various surround-sound configurations. All signal processing, including EQ, time alignment and crossovers, is handled by a QSC Q-Sys Core 500i processor; replay is from a Doremi cinema server. Playback material for the special demonstrations comprised a short video produced by DTS, entitled The Escape, accompanied by a single MDA object-based soundtrack that was rendered in real time through the Q-Sys processor to produce outputs appropriate to the targeted loudspeaker channels.
According to John Kellogg, senior director of corporate strategy and development at DTS, “The soundtrack mix for our demonstrations was made by Marti Humphrey and Chris Jacobson at The Dub Stage, Burbank [via a 35-speaker/26.1-channel system], using MDA Creator, a Pro Tools plug-in that facilitates the mixing and creation [on the facility’s Avid D-Control console] of an MDA interoperable file. That single mix as an MDA object-based audio file was wrapped into a DCP file, and played back on the Q-Sys cinema system in the AMC theatre. A major advantage for film studios and post facilities is that a single mix can service many different theatres and loudspeaker configurations.”
Like other object-based immersive surround formats, Multi-Dimensional Audio effectively models a variable number of sound objects located in three-dimensional space, rather than sounds that are assigned to a specific channel or loudspeaker configuration. For MDA, each object — or group of objects — is assigned its own identity, allowing them to be addressed individually during the re-recording process. Conventional PCM-format files are used to re-record and deliver the soundtrack, with metadata that contains information about where in 3D space each object is located.
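The pairing described above — plain PCM audio plus positional metadata per object — can be sketched in code. The following is a minimal illustration, not the MDA specification; every name here (`AudioObject`, `PositionKeyframe`, the coordinate convention) is a hypothetical stand-in for whatever the actual format defines.

```python
from dataclasses import dataclass, field

@dataclass
class PositionKeyframe:
    time_s: float   # when the object is at this position
    x: float        # left/right
    y: float        # front/back
    z: float        # floor/ceiling

@dataclass
class AudioObject:
    object_id: int                       # each object is individually addressable
    pcm: list[float]                     # conventional PCM samples carry the audio
    trajectory: list[PositionKeyframe] = field(default_factory=list)

    def position_at(self, t: float) -> tuple[float, float, float]:
        """Linearly interpolate the object's 3D position at time t."""
        keys = self.trajectory
        if t <= keys[0].time_s:
            k = keys[0]
            return (k.x, k.y, k.z)
        for a, b in zip(keys, keys[1:]):
            if a.time_s <= t <= b.time_s:
                f = (t - a.time_s) / (b.time_s - a.time_s)
                return (a.x + f * (b.x - a.x),
                        a.y + f * (b.y - a.y),
                        a.z + f * (b.z - a.z))
        k = keys[-1]
        return (k.x, k.y, k.z)
```

The essential point is that nothing in the object refers to a loudspeaker: the renderer in the theatre decides, at playback time, how each position maps onto the room's actual channels.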
During these specially staged presentations, the MDA Group first replayed the object-based mix mapped to all of the AMC theatre’s 48.1 channels. “We then mapped the same mix to the two immersive speaker configurations currently in use [Atmos and Auro-3D],” Kellogg continues, “then to a 7.1-speaker arrangement with four height speakers — 11.1 — and lastly in conventional 7.1. This capability shows that MDA is fully scalable, meaning that the same mix maps up and down with excellent results to all speaker arrangements; it is also affordable and available from multiple vendors. Unlike ‘fixed’ immersive speaker systems, we used the same MDA mix and the same file mapping via Q-Sys to all of those different speaker arrangements to illustrate that MDA is flexible; it does not matter how many speakers are in the room or where they are located.”
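The “maps up and down” scalability Kellogg describes follows from how object renderers work: per-speaker gains are computed from each object's position against whatever layout the room actually has, so the speaker count never appears in the mix itself. The toy panner below makes that concrete with simple inverse-distance weighting; real renderers such as Q-Sys use their own proprietary algorithms, so treat this purely as an illustration of the principle.

```python
import math

def speaker_gains(obj_pos, speaker_positions, power=2.0):
    """Return one gain per loudspeaker for an object at obj_pos,
    normalized so the total radiated power is constant."""
    weights = []
    for sp in speaker_positions:
        d = math.dist(obj_pos, sp)                  # object-to-speaker distance
        weights.append(1.0 / (d ** power + 1e-6))   # closer speaker gets more level
    norm = math.sqrt(sum(w * w for w in weights))
    return [w / norm for w in weights]
```

Feeding the same object position to a 48-channel layout, an 11.1 layout or plain 7.1 just changes the length of `speaker_positions`; the mix data is untouched, which is the flexibility the demonstrations set out to show.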
The cost of the AMC 16 test-bed installation has been underwritten jointly by DTS, Barco, Doremi, QSC and AMC. “In exchange, the MDA Cinema Proponents Group can use the facility outside normal exhibition hours two days per week for internal testing and on-site demonstrations,” explains Paul Brink, QSC’s cinema sales engineer.
To date, Dolby Atmos immersive sound systems have been installed or are planned for more than 450 movie theatres worldwide, as well as more than 55 post facilities. Recent Academy Awards include sound mixing and editing Oscars for Gravity, which was re-recorded with an Atmos immersive soundtrack at Warner Bros.’ Burbank facility, and the Oscar-winning animated feature Frozen, which was dubbed in native Dolby Atmos at Disney Digital Studio Services’ Stage A, Burbank.
Other organizations are pursuing alternate ways of carrying immersive surround to consumers. Founded in 2007, Iosono was the first company to offer an object-based playback format based on wave-field synthesis, using up to 128 data tracks to relay encoded sound to movie theatres. “To date, we have installed multiple systems in Europe and more recently in China,” says CEO Olaf Stepputat. “The next Iosono cinema multiplex will open in August this year.” In the UK, the Higher Order Ambisonics Group is extending the original full-sphere Ambisonics surround-sound technique, which is said to enable rotation, reflection, movement and upmixing from legacy formats such as 5.1-channel mixes. NHK, Japan’s public broadcaster, has been developing a 22.2-channel system, consisting of nine ceiling speakers, including a center overhead channel, 10 surround speakers and three channels across the foot of the screen to reproduce footsteps, car noises and falling objects — with a matrix for downmixing to legacy loudspeaker layouts.
Once the current immersive audio standards effort concludes, the SMPTE technical committee will consider the future ability to combine conventional channel-based mixes with object-based immersive mixes. In this way, a legacy 5.1/7.1 cinema processor could be retrofitted with new firmware to accept an immersive soundtrack and render it to appropriate loudspeaker channels. In this scenario, techniques would need to be developed for mixing natively in an immersive format and then, while collapsing that mix to 5.1 or 7.1, capturing the appropriate vector-based metadata for the various object-based elements. The same metadata could be used by a suitably equipped cinema processor to re-render the original immersive mix in real time to any channel-based playback system. “But we need to take that process one step at a time,” Vessa advises, “rather than boil the ocean.”
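The “collapse but keep the metadata” idea described above can be sketched as follows: fold each object into a fixed channel bed for legacy playback, while writing its positional metadata to a sidecar so a renderer-equipped processor can later bypass the bed and re-render the objects natively. All names and structures here are illustrative assumptions, not taken from any SMPTE draft.

```python
def collapse_to_bed(objects, num_channels, pan):
    """Mix objects into a channel bed; return (bed, metadata sidecar).

    objects      -- list of dicts with "id", "pcm" (sample list), "position"
    num_channels -- size of the legacy bed (e.g. 6 for 5.1, 8 for 7.1)
    pan          -- callable mapping (position, num_channels) -> per-channel gains
    """
    length = max(len(o["pcm"]) for o in objects)
    bed = [[0.0] * length for _ in range(num_channels)]
    sidecar = []
    for o in objects:
        gains = pan(o["position"], num_channels)     # one gain per bed channel
        for ch, g in enumerate(gains):
            for i, sample in enumerate(o["pcm"]):
                bed[ch][i] += g * sample             # fold object into the bed
        # Positional metadata survives alongside the bed for later re-rendering.
        sidecar.append({"id": o["id"], "position": o["position"]})
    return bed, sidecar
```

A legacy 5.1/7.1 processor would simply play `bed`; a processor with updated firmware could instead read `sidecar` and re-render the original immersive intent to whatever layout the room provides.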
Post-Production Tools for MDA Mixes
Several manufacturers are working on post-production tools for native MDA mixing. In addition to DTS’ MDA Creator plug-in (available for Avid Pro Tools, MOTU Digital Performer, Apple Logic Pro, and Steinberg Cubase and Nuendo workstations), Fairlight’s 3DAW audio production platform enables sound designers to mix object-based audio in three-dimensional space and monitor the result on any MDA playback configuration. 3DAW is based on the firm’s Crystal Core Media processor, with on-screen 3D panning via a DAW plug-in available in RTAS, AU and VST formats, as well as monitoring functions. MDA Creator is also said to be backward-compatible with legacy systems, allowing post facilities to export to any number of channel-based configurations, including stereo, 5.1, 7.1, 9.1+2 and DTS Neo:X. “Fairlight is also heavily involved in NHK's 22.2 vision, and can produce audio in this advanced format,” adds Tino Fibaek, Fairlight’s chief technology officer. “At NAB 2014, we will unveil support for additional 3D/object-based formats.”
Auro Technologies’ Auro-3D Authoring Tools is a set of plug-ins offering panning and simultaneous mixing to multiple formats, including an MDA-compatible export mode. “Our 3D mixing tools [use] vector-based panning and internal virtual bussing, which made the addition of MDA, or any format SMPTE decides upon, a simple effort,” says CTO Bert Van Daele. The plug-ins are available in AAX2 (64-bit), VST and AU format.
Barco reports that currently there are 150 Auro-3D systems in theatres — and 270 committed — together with 23 post facilities worldwide.
USL is also working on an extension of MDA playback. “Our implementation is unique in that the object-based audio is rendered, using patent-pending techniques, to channel-based [outputs] within the media block,” states company president Jack Cashin. This technique is said to offer three advantages: no outboard rendering system is needed (in many cases, a theatre’s existing sound processor can be used); the audio remains encrypted or forensically marked outside the media block to prevent pirating; and existing systems can be updated to render object-based soundtracks. Since current hardware is limited to 16 audio output channels, of which two are used for visually- and hearing-impaired material, USL’s demonstrations render the MDA material to 13.1 outputs. It is reported that USL will be able to adapt its in-development system to the new SMPTE standard when it is finalized.
But the SMPTE process is not a contest between two competing technology companies. As Dean Bullock, director of Cinema Technology Strategy at Dolby Laboratories, explains: “Although the details of the work, by SMPTE rule, are not public, it is very clear from the active participation of several members with differing perspectives that this process will require very deliberate consideration of technologies from all of the 25CSS members. Inputs to the SMPTE group inform the result, but do not define its final output.”
“Our target for the SMPTE TC-25CSS interoperability of immersive sound systems in digital cinema is three-fold,” Vessa concludes. “First, we want to develop a common, standardized file format, where one immersive audio mix made on any dub stage can be replayed through any immersive sound system with any number of playback channels. Secondly, we are developing an updated architecture for digital cinema, with standardized connectors and pipelines to facilitate immersive sound systems. Finally, we are looking at the calibration of playback systems to ensure consistency between the re-recording stage and a movie theatre.
“The development of a single interoperable standard for immersive audio soundtrack delivery, with corresponding standards to ensure interoperability between immersive sound systems, is a noble and challenging goal,” he concedes. “But I absolutely believe we can get there.”
Mel Lambert has been intimately involved with production industries on both sides of the Atlantic for more years than he cares to remember. He is a principal of Media&Marketing, a Los Angeles-based consulting service, and can be reached at firstname.lastname@example.org