
InfoComm: Multi-channel Audio Goes Way Beyond 5.1

By Dan Daley
InfoComm International

If you’ve been to the movies lately, you may have noticed that your head’s been snapping back and forth a lot more. That’s because cinema has been the new frontier for the next generation of multi-channel sound systems.

The new audio formats — among them, Dolby’s Atmos, Barco’s Auro and DTS’ DTS:X — are the vanguard of the 3D immersive-sound universe. With immersive sound, highly directional audio from 22 or more discrete speakers envelops listeners in an experiential environment — as aurally dazzling as 4K and 8K video purport to be visually. In fact, these new audio formats were developed, in part, as a sonic complement to Ultra HD.

The Players

Each format has its own characteristics, although they all utilize some form of “object-based” mixing, which attaches positional metadata to individual audio elements (“objects”) so that a renderer can steer them to specific speakers distributed throughout a space, as sketched below.
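To make the distinction concrete, here is a minimal sketch in Python (with hypothetical class and field names; it mirrors no vendor’s actual format) of the difference between a channel-based “bed” and an object-based element that carries positional metadata:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class BedChannel:
    """Channel-based audio: the signal is tied to a fixed speaker feed."""
    speaker_label: str    # e.g. "L", "C", "R", "Lss"
    samples: List[float]  # the audio signal itself

@dataclass
class AudioObject:
    """Object-based audio: the signal carries a position instead of a
    speaker assignment; a renderer decides which speakers reproduce it."""
    name: str
    samples: List[float]
    position: Tuple[float, float, float]  # (x, y, z) in the room; z = height
    gain: float = 1.0

@dataclass
class ImmersiveMix:
    """An immersive mix: a fixed channel bed plus free-floating objects."""
    bed: List[BedChannel]
    objects: List[AudioObject]
```

Because the objects’ positions travel with the content, the same mix can be rendered to very different speaker counts and placements.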

Dolby’s Atmos, which arrived in 2012, starts with a basic 7.1.2-channel “bed,” atop which objects can be placed anywhere in the three-dimensional sound field, based on speaker placement. In theaters, this configuration can extend to as many as 64 channels; at home, Dolby Atmos can support up to 34 channels (as many as 24 ear-level speakers plus 10 height speakers above the listener).

Soundtracks for Barco’s Auro, which came along a year earlier than Atmos but was developed in Europe and had to wait for a U.S. distributor, are mixed for an 11-channel, three-dimensional soundstage. The format arranges channels in three layers: at ear level, above ear level (usually mounted high on a wall) and directly overhead.

DTS:X, which was introduced at the beginning of this year, is configured for up to 32 channels, but its processing is the key: The system allocates audio objects based on whatever speaker configuration it encounters, algorithmically placing sounds as close to where the mixer intended them as it can. The DTS:X renderer simply remaps a soundtrack onto whatever hemispherical speaker layout is in use.
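As a rough illustration of that idea (this is a generic constant-power panning sketch, not DTS’s actual algorithm; all names are invented), the function below spreads one object across an arbitrary speaker layout by weighting each speaker by how closely its direction matches the object’s:

```python
import math
from typing import Dict, Tuple

Vec3 = Tuple[float, float, float]

def _unit(v: Vec3) -> Vec3:
    m = math.sqrt(v[0]**2 + v[1]**2 + v[2]**2) or 1.0
    return (v[0] / m, v[1] / m, v[2] / m)

def render_object_gains(obj_dir: Vec3,
                        layout: Dict[str, Vec3],
                        focus: float = 4.0) -> Dict[str, float]:
    """Map one audio object onto whatever speakers exist.

    obj_dir -- object direction from the listening position
    layout  -- speaker name -> direction vector (any count, any placement)
    focus   -- higher values localize the object more tightly
    """
    o = _unit(obj_dir)
    raw = {}
    for name, direction in layout.items():
        s = _unit(direction)
        # Cosine similarity, clipped so speakers facing away stay silent
        similarity = max(0.0, o[0]*s[0] + o[1]*s[1] + o[2]*s[2])
        raw[name] = similarity ** focus
    norm = math.sqrt(sum(g * g for g in raw.values())) or 1.0
    return {name: g / norm for name, g in raw.items()}  # constant power

# The same object renders on a plain 5.1 layout or on one with heights:
five_one = {"L": (-1, 1, 0), "C": (0, 1, 0), "R": (1, 1, 0),
            "Ls": (-1, -1, 0), "Rs": (1, -1, 0)}
print(render_object_gains((0.5, 1.0, 0.8), five_one))
```

A real renderer also handles distance, timing and room calibration, but the remapping principle is the same: the object’s position is fixed, the speaker set is not.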

What all these formats have in common is support for speaker placement at ceiling level, adding the height dimension to conventional surround sound.

In the Real World

People have been trying to create immersive audio environments at live events for decades. Perhaps best known were The Who’s live productions of their Quadrophenia LP in 1973, when PA guru Bob Heil placed speaker groups in each of a hall’s four corners and then, using a pair of linked mix consoles, was able to “fly” vocalist Roger Daltrey’s voice around the room, from speaker to speaker.

The effect received mixed reviews, but it set the stage for multichannel sound for a mass audience. In 1976, Logan’s Run introduced moviegoers to surround sound with Dolby Stereo, which despite its name was actually a four-channel audio system (left, center and right channels up front plus a mono surround channel), and which prompted an industrywide revamp of cinema-sound infrastructure. In 1992, the Dolby Digital format made those channels discrete, split the surround into a stereo pair and added a subwoofer channel, establishing the now-familiar 5.1 format.

DTS and Sony’s SDDS soon gave Dolby competition in the multichannel space. The 7.1 iteration split the surround array into separate side and rear pairs. Today’s immersive approach adds speaker elements for height and overhead sound.

However, challenges with using multichannel sound in live environments persist.

“Acoustics are the big problem with immersive audio, because it’s difficult to keep sound precisely located,” explains Brian Claypool, vice president of strategic business development for Barco. “It’s also expensive because of the number of speakers needed. Physics and cost work against it.”

Barco has applied immersive-sound technologies developed by Iosono, a German company it acquired in 2014 and integrated as Barco Audio Technologies, to corporate and showroom spaces. Iosono’s technology is based on wave-field synthesis, a spatial-audio rendering technique that can be used to create virtual acoustic environments. It produces artificial wave fronts, synthesized by a large number of individually driven speakers, that seem to the listener to originate from a specific (though virtual) starting point.
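A drastically simplified sketch of that principle follows (real wave-field synthesis driving functions involve filtering and array-geometry terms this example ignores; the function name is invented): each speaker re-emits the source signal with the delay and attenuation a wave from the virtual source position would naturally have at that speaker, so the superposed wavefront appears to radiate from the virtual point.

```python
import math
from typing import List, Tuple

SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def wfs_delays_and_gains(virtual_source: Tuple[float, float],
                         speakers: List[Tuple[float, float]]
                         ) -> List[Tuple[float, float]]:
    """Per-speaker (delay in seconds, linear gain) for one virtual
    point source, in a 2D plan view of the room."""
    out = []
    for sx, sy in speakers:
        dist = math.hypot(sx - virtual_source[0], sy - virtual_source[1])
        delay = dist / SPEED_OF_SOUND   # later arrival = farther away
        gain = 1.0 / max(dist, 0.1)     # distance roll-off, clamped
        out.append((delay, gain))
    return out

# A virtual source 3 m behind a five-speaker front array:
array = [(x, 0.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
for delay, gain in wfs_delays_and_gains((0.0, -3.0), array):
    print(f"delay = {delay * 1000:.2f} ms, gain = {gain:.2f}")
```

Because the curvature of the synthesized wavefront is physically consistent, the perceived source position holds across the listening area rather than at a single sweet spot.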

Importantly, the localization of these virtual sources does not depend on or change with the listener’s position, allowing most people in an area to experience the movement of audio objects through space. Iosono technology was used for a retrospective of the work of music artist Björk at New York’s Museum of Modern Art this year, where sound was distributed through 25 B&W CT8.2 speakers ringing the perimeter of the room, 18 B&W AM-1 speakers on the ceiling and six B&W subwoofers.

“The waveform synthesis algorithms create a very personalized experience, depending upon where you are in the room,” says Claypool. “The Iosono core lets us control the directionality of the air pressure from each speaker, so we can precisely focus the sound.”

The Iosono system has been used at a number of other live events around the world, including the Kazakhstan Pavilion at the World Expo 2015 in Milan, where it has been deployed as a 42.4 audio installation (42 object channels plus four subwoofer channels).

Gauging the Possibilities

Although Dolby’s Atmos has garnered lots of attention for its use in cinema applications (more than 200 films have been mixed using the technology), the company says the format has yet to see use in live-event environments. Still, Brett Crockett, vice president of Dolby’s Audio Technology Group, says, “Anywhere that audio can go, Atmos can go.”

To that point, Atmos recently rolled out for mobile use, including on the Lenovo A7000 smartphone and two tablets, the Lenovo TAB 2 A8 and the Lenovo TAB 2 A10-70. In a mobile-device application, the device decodes Atmos-certified content and renders its object-based soundscape over conventional headphones. These types of devices are increasingly being integrated into live-event production, and as a result, Atmos may find its way into this space sooner than its developers envisioned.
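For a sense of how two headphone channels can carry positional information, here is a bare-bones sketch of the two classic localization cues (a textbook approximation, not Dolby’s headphone renderer; the function name is invented):

```python
import math

HEAD_RADIUS = 0.0875     # m, a typical average
SPEED_OF_SOUND = 343.0   # m/s

def binaural_cues(azimuth_deg: float) -> dict:
    """Approximate interaural time and level differences for a source
    at the given azimuth (0 = straight ahead, +90 = hard right)."""
    az = math.radians(azimuth_deg)
    # Woodworth's ITD model: (r / c) * (theta + sin(theta))
    itd_s = (HEAD_RADIUS / SPEED_OF_SOUND) * (az + math.sin(az))
    # Crude level difference, up to about 10 dB toward the near ear
    ild_db = 10.0 * math.sin(az)
    return {"itd_s": itd_s, "ild_db": ild_db}

cues = binaural_cues(45.0)  # an object front-right of the listener
print(cues)  # delay the far ear by itd_s, attenuate it by ild_db
```

A production renderer layers head-related transfer functions and room modeling on top of these cues, but the goal is the same: place each object at a point in space using only two ear signals.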

Atmos is also a contender, along with the MPEG-H codec fielded by Fraunhofer, Qualcomm and Technicolor, to become the audio codec for ATSC 3.0, the next U.S. broadcast standard, which is expected to be announced later this year.

Geir Skaaden, senior vice president of corporate business development, digital content and media solutions at DTS, acknowledges that cinema will be the initial battlefield for the company’s DTS:X object-based audio format. (DTS dropped out of the ATSC 3.0 process in May.)

However, Skaaden adds, the potential for object-based audio in the live environment is considerable, especially in large-but-managed environments, such as theme parks. With current 5.1 technologies, he says, listeners get a location-dependent version of a surround-sound field, based on where they are relative to the sweet spot.

“With object-based sound, we’ll have the ability to give everyone the same experience anywhere in the room, or give everyone a different experience in different locations, if that’s your creative intent,” he says. “Object-based audio lets you deliver both direct and ambient sound that’s fully immersive. What can be done in a live space hasn’t even been imagined yet.”


This column is reprinted with permission from InfoComm International, where it originally appeared.
