When I'm reviewing gear, audio analysis software like FuzzMeasure [this issue] lets me resolve differences in sound quality and describe those differences in objective language. But I would argue that all recording engineers would benefit from learning how to use an analyzer and decipher the data it presents. Even if we're not doing formal product evaluations, we're still constantly presented with a choice of recording tools. Knowing how individual pieces of gear, or whole signal chains, react to certain sounds or techniques can steer us toward more efficient ways of mic'ing, processing, and mixing.
But shouldn't we trust our ears over numbers, specifications, and charts? Of course we should. But can we absolutely trust what we're hearing across space, time, and memory? And more specifically, can we trust the monitoring system that our ears are hearing, or the room that we're mixing in, without having some kind of baseline to accurately describe that system or room? At the very least, an audio analysis package can highlight the anomalies in how our rooms react to the sounds coming out of our speakers, and if we choose to go further, analysis is the first step in identifying what kinds of acoustic treatment we should consider to offset those anomalies.
Years ago, Ethan Winer of RealTraps [Tape Op #36, #38, #48, #85] recommended Room EQ Wizard <www.roomeqwizard.com> to me. REW is a donation-ware Java application, originally developed by John Mulcahy, for measuring room and speaker responses. Despite its confusing UI, it excels at those tasks. In the years since its release, as REW has gained features and the audio interfaces it supports have improved, REW running on my Windows tablet, coupled with a USB audio interface, has become my go-to system for testing recording gear and software too. My system may not have the accuracy of a $25,000 Audio Precision analyzer, and the numbers I capture may not be absolutely calibrated, but I can still make meaningful comparisons between pieces of gear.
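If you're curious what a measurement like that boils down to under the hood, here's a rough Python sketch of a loopback transfer-function check: play a known stimulus out of the device under test, record it back in, and compare the two. This isn't how REW does it internally, just the general idea, and it assumes you have the numpy, scipy, and sounddevice packages installed and an interface output patched straight back into an input.

```python
import numpy as np
import sounddevice as sd
from scipy.signal import welch, csd

fs = 48000                                     # interface sample rate
rng = np.random.default_rng(0)
stimulus = 0.1 * rng.standard_normal(10 * fs)  # 10 s of white noise, well below clipping

# Play the stimulus through the device under test and capture what comes back.
capture = sd.playrec(stimulus, samplerate=fs, channels=1)
sd.wait()

# Classic H1 transfer-function estimate: cross-spectrum over input auto-spectrum.
freqs, Pxx = welch(stimulus, fs, nperseg=8192)
_, Pxy = csd(stimulus, capture[:, 0], fs, nperseg=8192)
H = Pxy / Pxx

magnitude_db = 20 * np.log10(np.abs(H) + 1e-12)   # frequency response in dB
phase_deg = np.degrees(np.unwrap(np.angle(H)))    # phase response in degrees
```

Plot magnitude_db and phase_deg against freqs and you have a crude version of the kinds of charts REW draws for you, minus its calibration, averaging, and error checking.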
For example, both the Shure SRH1840 [#89] and Audio-Technica ATH-R70x [#108] open-back headphones have exemplary time-domain response, with very little ringing or smearing, but REW confirmed that the ATH-R70x exhibits much less harmonic distortion (high-order especially) in the bass region. REW has also proven that small, closed-box, single-driver speakers, like the Auratone 5C [this issue] and Avantone Mixcube [#55, #88], are very accurate in the time domain, even if they are band-limited in their frequency response. The Avantone's waterfall plot shows no significant resonances or ringing, and the speaker has a flat phase response, which explains its revealing and transparent midrange. On the other hand, REW showed me that the phase response of Apogee AD/DA-16X converters [#59] is not linear above 1 kHz, so watch out for harsh cymbals and "phasey" reverb tails if you rely on these converters for multiple steps of conversion or for parallel processing.
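To give a sense of what a harmonic distortion measurement is actually doing, here's a back-of-the-napkin Python sketch that estimates THD from a capture of a single low-frequency test tone. The 50 Hz tone and the file name are placeholders I made up; REW automates this kind of analysis and does it far more rigorously.

```python
import numpy as np
import soundfile as sf

# Hypothetical mono capture of a 50 Hz test tone played through the device under test.
capture, fs = sf.read('bass_tone_50hz.wav')
if capture.ndim > 1:
    capture = capture[:, 0]                    # analyze one channel
f0 = 50.0                                      # fundamental of the test tone

windowed = capture * np.hanning(len(capture))  # window to reduce spectral leakage
spectrum = np.abs(np.fft.rfft(windowed))
freqs = np.fft.rfftfreq(len(capture), 1 / fs)

def level_at(freq, width=2.0):
    """Peak magnitude within +/- width Hz of the target frequency."""
    band = (freqs > freq - width) & (freqs < freq + width)
    return spectrum[band].max()

fundamental = level_at(f0)
harmonics = [level_at(n * f0) for n in range(2, 10)]   # 2nd through 9th harmonics
thd = np.sqrt(np.sum(np.square(harmonics))) / fundamental
print(f"THD: {100 * thd:.3f}% ({20 * np.log10(thd):.1f} dB)")
```

Comparing the 2nd and 3rd harmonic levels against the 7th, 8th, and 9th is what tells you whether the distortion is the relatively benign low-order kind or the harsher high-order kind mentioned above.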
I can also count on REW when I'm not sure that what I'm hearing is real. For example, although the virtually perfect frequency and phase responses of the Antelope Satori are what you would expect of a "mastering-grade" monitor controller, when you A/B its inputs, the output level of its summing bus changes, and clicking noises are imprinted on the bus. Moreover, clicks appear on the summing bus when you operate the monitor volume control. At first, I thought bone conduction was causing me to feel and "hear" the clicks of the attenuator and switching relays through my fingers, but REW quickly confirmed that the level changes and clicks are indeed making it into the audio stream. If you're a Satori user, don't touch the monitor volume or input controls when you're printing a mix through the Satori's summing bus.
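If you want to run that kind of sanity check on your own gear, the recipe is simple: print a steady tone through the bus while you operate the controls, then scan the capture for sudden short-term level changes. A minimal Python sketch, with a made-up file name, might look like this.

```python
import numpy as np
import soundfile as sf

# Hypothetical capture of a steady test tone recorded through the summing bus
# while the monitor volume and input-select controls were being operated.
capture, fs = sf.read('satori_bus_capture.wav')
if capture.ndim > 1:
    capture = capture[:, 0]                 # analyze one channel

win = int(0.01 * fs)                        # 10 ms analysis windows
n = len(capture) // win
frames = capture[:n * win].reshape(n, win)
rms_db = 20 * np.log10(np.sqrt(np.mean(frames ** 2, axis=1)) + 1e-12)

# With a steady tone, any window-to-window jump bigger than half a dB points
# to a click or an attenuator step making it into the audio.
jumps = np.flatnonzero(np.abs(np.diff(rms_db)) > 0.5)
for i in jumps:
    print(f"{rms_db[i + 1] - rms_db[i]:+.2f} dB change at {(i + 1) * win / fs:.2f} s")
```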