Those of us who have tried mixing music in Dolby Atmos understand the challenges of learning a new skill. Some of those challenges lie in creative dilemmas, such as “What should go in the overhead speakers?” Yet, the most exasperating struggle lies in predicting the audience’s experience when listening to our mix. Two primary issues affect our listeners. First, a large percentage will listen to our immersive mix on headphones rather than speakers. Second, a significant portion of these listeners will encounter our mix with Apple Music’s Spatial Audio processing instead of Dolby’s Atmos Binaural representation. To clarify, Dolby provides Atmos mixers the capability to incorporate spatialization that defines how close an object will sound to the listener’s head when an Atmos mix is played back through stereo headphones, as opposed to a multichannel speaker system. Four user-specified render modes (near, mid, far, off) determine the degree of distance modeling (convolution reverb) applied to each object or bed channel during a particular song. This is known as the binaural render of an Atmos mix.
Apple Music does not utilize the Dolby-specified binaural metadata; instead, it employs its own algorithm called Spatial Audio. This means that Atmos mixes heard on headphones through Apple Music will sound different from the same mix heard on Tidal or any other digital streaming service that honors Dolby’s Atmos Binaural metadata. The binaural settings applied during a mix ultimately affect how “reverby” certain sounds render in an Atmos mix on headphones versus speakers, so the challenge in mixing for headphones is finding binaural settings that work well in Spatial Audio playback as well as the standard Atmos Binaural playback.
During a mix, the Dolby Atmos Renderer and some DAWs allow us to seamlessly audition re-renders of our immersive mix in formats including Atmos Binaural, stereo, 5.1, 7.1, etc. However, to listen to the Apple Music Spatial Audio mix, one either needs to mix in a recent version of Apple's Logic Pro [Tape Op #74] or export the Atmos mix as an MP4 file, import it to an iOS device, and then play the file not through the Apple Music player, but as a movie from the file system. This workaround makes it difficult to quickly assess how a mix in progress will translate to different playback environments. I find it extremely useful to loop a section of a song and compare how the mix translates between speakers, Spatial Audio, and Atmos Binaural headphone playback, but that was something I couldn't easily do until now. Ginger Audio, the developer of GroundControl SPHERE monitor control software [Tape Op #158], has introduced the iRender plug-in. This plug-in enables real-time monitoring of an Apple Spatial Audio version of an Atmos mix. What's more, the iRender plug-in is available free of charge to SPHERE owners.
I've been using SPHERE as my monitor control software via an Elgato Stream Deck + for hardware control for about a year, and it has proven to be an excellent choice. The integration of iRender now saves me quite a bit of time and hassle during Atmos mixes, while allowing me to reassure nervous clients that the time and money spent on their Atmos mix will yield a result that translates well to headphones on every streaming platform. Keep in mind that Spatial Audio playback is supported not just on Apple and Beats headphones and earbuds, but also when using the built-in speakers on many iPhones, iPads, MacBooks, iMacs, the Apple Vision Pro, and certain Android devices with compatible headphones.
Here’s how it all works for me: Input A of my SPHERE monitor controller receives the signal from my 7.1.4 Atmos mix, either from the Dolby Renderer or from the DAW. This signal is routed to various speakers via SPHERE’s output section. SPHERE also provides aux sends from its input section, and I can use Aux A to send the Apple Music Spatial Audio mix to my Bluetooth Apple AirPods. In my setup, Output A is my 7.1.4 speaker system, Output B routes to my stereo monitors, and Output C goes to my regular stereo headphones. Additionally, the Aux A output feeds the built-in output of my Mac’s audio system, which can be set to the headphone jack or the Bluetooth output.
I should mention that any of SPHERE’s inputs, outputs, or aux channels can host an AU plug-in. The iRender plug-in is inserted on the aux send from Input A, allowing the Apple Music Spatial Audio signal to be routed to my Mac's Bluetooth output. With this setup, I can instantly select an output to audition my mix in various formats: as a multichannel speaker setup (Output A), a stereo fold-down on speakers (Output B), Dolby's Atmos Binaural headphone mix (Output C), and Apple's Spatial Audio (Aux A). The only inconvenience when moving from the Atmos Binaural mix to Spatial Audio is swapping my wired headphones for the AirPods.
iRender supports music and movie modes, dynamic head tracking, and personalized HRTFs, so you can audition Apple Music Spatial Audio in the manner that you are used to. iRender’s input meters display the 7.1.4 channel levels, and a LUFS meter lets us monitor the overall loudness of our mix. Once the plug-in is set up, it really never needs to be open while monitoring the Spatial Audio signal or any of the other monitor formats.
Ginger Audio continually updates the features and functionality of SPHERE, and iRender is just one of the reasons why I consider SPHERE a crucial component of an immersive mixing workflow. iRender comes integrated with GroundControl SPHERE and is currently available in macOS only.