
We Can Measure But Do We Know?

Occasionally this column will dive deeply into a topic, starting, as is my wont, with a historical perspective. The history is there for a reason. To understand these often complex and multi-faceted themes, you need to know how we got here, what drove the development of a particular technology or methodology, and why. Equally, you need to build a perspective on what it really means from a practical, useful standpoint. This is the first article of that type for rAVe.

Audio Measurement and Analysis

For more than four centuries, natural philosophers and scientists have sought to quantify, calculate and derive qualitative standards for the immensely complex, multi-science topic we choose to describe with one simple word: sound. Today’s technology enables investigations into the most minute detail and structure of sound and audio signals. But why do we seek to measure parameters like frequency response, signal transfer functions and a host of other variables?

The straightforward answer is to quantify and present data on the performance of sound (audio) systems so that anomalies can be rectified. But do we really understand what all that data tells us about what we will hear and have heard? More on that a bit later.

Some History First

Author’s Note: For a far more complete discourse on this topic, read this, an AES paper written many years back by Ted Uzzle and me, which details the history of this topic from Aristotle (300 BC) to more or less the present day.

The idea of fixing — or more accurately, correcting — audio system performance goes way, way, way back; in fact, well over a century or more, depending on what level of fix you want to set as your reference.

From a real-world commercial perspective, we can probably set the year 1923 as a marker point, since that was the year in which Otto Zobel of the expansive research complex simply known as Bell Telephone Laboratories published his historic paper on filter designs to correct and improve audio quality over copper telephone lines (an obviously vital issue for his employer, the Bell Telephone System, or as it was usually called in those years, Ma Bell or THE telephone company — aren’t monopolies wonderful?).

The Home of Genius and the DNA of Modern Scientific Acoustics

It is important to recognize that in the 1920s, Bell Telephone Laboratories was a giant in the world of science. The historic laboratory originated in the late 19th century as the Volta Laboratory and Bureau created by Alexander Graham Bell. Bell Labs was also initially a division of the American Telephone & Telegraph Company (later AT&T Corporation), half-owned through its Western Electric manufacturing subsidiary.

Researchers working at Bell Labs are credited with the development of radio astronomy, the transistor, the laser, the charge-coupled device (CCD), information theory, the operating systems Unix, Plan 9 and Inferno, and the programming languages C, C++ and S (a popular modern implementation of which is R). Eight Nobel Prizes have been awarded for work completed at Bell Laboratories.

That list barely scratches the surface of the massive amount of research done (and technology produced from that research) by the now-legendary science teams of what became the world’s largest industrial research lab. Names such as Harvey Fletcher, W.A. Munson, Richard Hamming, W.B. Snow, Harry Nyquist, Manfred R. Schroeder, E.C. Wente, A.L. Thuras and W.B. Shockley all walked the halls of the gigantic facility.

Some of the names above should be familiar to anyone in the audio world, including Harvey Fletcher, the father of stereophonic sound. (As director of research at Bell Labs, he oversaw research in electrical sound recording, including more than 100 stereo recordings with celebrated conductor Leopold Stokowski in 1931–1932.)

Certainly the names Nyquist, Schroeder and Hamming should be recognizable to any measurement user for their mathematical and engineering contributions still in use today. And of course, let us not forget the Fletcher-Munson curves denoting human hearing sensitivity, which remain a cornerstone of auditory science. The list of other contributions made by the thousands of people who made the labs their home at some point would easily fill more than a few large books, and anyone interested should look up the individual names.

There is also an exceedingly large and, until relatively recently, highly classified body of research done by the scientists of the various divisions of the Labs during the WW II years, another book of its own for sure. Two of the most noteworthy developments were the refinement of the radar technologies first pioneered by British researchers, allowing their use on airplanes, ships and submarines, and the acoustic homing torpedo, which was a major factor in the successes of Allied submarines during the middle and later years of the global conflict.

Back to Audio and Acoustics

From the audio perspective, the initial development and refinement of what became known as Zobel networks (still found occasionally in ancient POTS** telephone systems) was critical and laid the groundwork for what rapidly became the first organized, scientifically rigorous major effort to analyze and then design a correction method for a measurable problem in sound (audio) — poor-quality speech reproduction over telephone lines. (**Plain Old Telephone Service — if you’re old enough to remember rotary dial phones, the Zobel electrical filter designs were what all such analog networks used to make you sound almost normal.)

By the early ’30s, John E. Volkmann and others had developed electrical filter systems (most prominently under the RCA and Western Electric nameplates) for use in cinema audio systems. In all probability these would be recognized as the first real commercial application of the idea of frequency correction of a loudspeaker-based sound system.

Now we had hardware to make the system “better,” but how much better, and what effect was actually being produced? For that, you needed to be able to measure what was happening. Thus began the development of “measurement” technologies.

Measurement Begins

The real beginnings of electrically based acoustical measurement can plausibly be dated to 1917, when Western Electric engineers combined four separate and, at the time, unrelated inventions to create a physically imposing machine for practical, reliable sound measurements. Today, it would almost be recognizable as a several-hundred-pound version of a very primitive but useful sound level meter. (For the historically inclined or just plain curious, please see here for the whole story, along with Fred Ampel and Ted Uzzle, “The Quality of Quantification,” Proc. Inst. Acoust., Vol. 13, Part 7, pp. 47-56, 1991.)

A half-century later, Don Davis at Altec Lansing presented the professional audio world with Acousta-Voice technology, the first true equalization system designed for sound systems (or PA systems, as they were then more popularly called). Of course this was followed by many, many others, including White Instruments, but what remained to be perfected, or at least made more easily accessible, was how to accurately acquire sufficiently precise data on sound system performance to make these technologies truly effective.

This took the addition, in the early 1970s, of a true audio-focused test instrument, made for Altec Lansing by Hewlett-Packard, which provided reliable third-octave data that matched up nicely with the filter sets available for the Acousta-Voice unit. As shown below in Figure 1, the 8050 Real Time Analyzer was that device (it rapidly became known as “the real time” in the industry of the day).
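For anyone who has only ever seen third-octave data as a row of bars on a display, the short sketch below shows, in purely illustrative Python (NumPy and SciPy assumed, and obviously nothing like the analog circuitry inside the 8050), the essence of what a third-octave analyzer reports: filter the signal into standard bands, then compute the RMS level of each band in dB.

    # Illustrative third-octave "real time analyzer" readout.
    # Assumptions: x is a mono signal sampled at fs Hz; band edges are taken
    # as fc / 2^(1/6) and fc * 2^(1/6) around each standard center frequency.
    import numpy as np
    from scipy.signal import butter, sosfilt

    def third_octave_levels(x, fs, centers=(125, 250, 500, 1000, 2000, 4000, 8000)):
        """Return (center_frequency_Hz, level_dB) pairs for the signal x."""
        levels = []
        for fc in centers:
            lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)            # band edges
            sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            band = sosfilt(sos, x)                                   # band-limit the signal
            rms = np.sqrt(np.mean(band ** 2))                        # energy in the band
            levels.append((fc, 20 * np.log10(rms + 1e-12)))          # level in dB re full scale
        return levels

    # Quick check with a 1 kHz tone: the energy should land in the 1000 Hz band.
    fs = 48000
    t = np.arange(fs) / fs
    tone = np.sin(2 * np.pi * 1000 * t)
    for fc, level in third_octave_levels(tone, fs):
        print(f"{fc:>5} Hz band: {level:6.1f} dB")

A real analyzer simply runs this sort of calculation continuously on short blocks of the incoming signal and updates the display in, well, real time.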

The original Acousta-Voice filter set used modules for specific frequencies, similar to the concepts developed by C.P. (Doc) Boner a bit earlier, as shown in Figure 2. It was later modified and enhanced, becoming the cleaner and more easily adjusted rack-mount format shown in Figure 3.

 

Now, four-plus decades later, we have dozens of very high-quality, software-based analytical tools at our disposal, which can precisely measure parameters to fractions of a dB and provide vastly more data than anyone can reasonably absorb.
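To make that concrete, here is a minimal sketch of the kind of calculation that sits at the heart of a typical dual-channel software analyzer: compare what the system was fed (a reference signal) with what a measurement microphone captured, average over many FFT frames, and report the resulting transfer-function magnitude in dB. This is illustrative Python only (NumPy and SciPy assumed), with hypothetical, time-aligned capture arrays; real tools layer windowing choices, delay compensation and coherence weighting on top of this.

    # Illustrative dual-channel transfer-function measurement (H1 estimator).
    # Assumptions: 'reference' and 'measured' are hypothetical, time-aligned
    # arrays captured at the same sample rate fs.
    import numpy as np
    from scipy.signal import csd, welch

    def transfer_function_db(reference, measured, fs, nperseg=8192):
        """Estimate H(f) = Sxy / Sxx and return (frequencies_Hz, magnitude_dB)."""
        f, Sxy = csd(reference, measured, fs=fs, nperseg=nperseg)   # cross-spectrum, frame-averaged
        _, Sxx = welch(reference, fs=fs, nperseg=nperseg)           # reference auto-spectrum
        H = Sxy / Sxx                                               # frequency response estimate
        return f, 20 * np.log10(np.abs(H) + 1e-12)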

What we have created for ourselves, with all the high-powered, computer-driven technology being thrown at the question, is the ability to generate definitive, quantitative data to a level of precision far beyond our ability to hear or, frankly, correct.

What we have is mass quantities of numbers, graphs, curves and information that can delve into a sound system’s performance on a truly microscopic level. The question that is not being asked (at least not often enough) is: How useful is all that data, and how much functionally applicable knowledge is it producing?

The legendary Richard C. Heyser (inventor of Time Delay Spectrometry, which, when commercialized, turned into the TEF analyzer from Crown International) said in one of his early papers on the subject, “There are only three useful and measurable variables in audio — those being time, energy and frequency.”

He added, presciently, “Attempting to analyze or correct other parameters is essentially an exercise in futility.” And that, ladies and gentlemen, IS the problem.

In the paper quoted above, Heyser also said, “It is… perfectly plausible to expect that a system which has a ‘better’ measured frequency response, may in fact sound worse simply because the coordinates of… measurement are not those of subjective (human) perception.”

He is also reported to have said during an AES session on the topic that he could tell you that there was a bump in a response curve at precisely 5.312549 kHz from a reflecting surface, but realistically there was literally nothing you could do to ameliorate the problem, since the surface in question was the face of the recording console in a control room he had measured!

So let’s summarize. In essence, this all comes down to five decisive realities:

  1. The ability to measure something does not automatically carry with it the ability to understand the measurement’s meaning with respect to human aural perception.
  2. Machines and microphones do not, and will never be able to, “hear” in precisely the same way that humans do. (Since we do not yet completely understand all the parameters of human auditory perception, it is essentially impossible to build any device that mimics it or replicates what happens between the external ear and the auditory processing centers of the brain.)
  3. The ear-brain system performs subjective (often classified as non-linear) analysis. It is also a system wherein the “code” used to process the information is still understood only on a limited basis and is the subject of much scientifically unsupported assumption.
  4. Just because we can quantify a parameter does not mean we need or can effectively use that data.
  5. Despite its lack of scientifically acceptable specifics and mathematically correct formulas, the subjectively based analysis of perceived acoustic or audio quality is still the measurement system that most sentient residents of this planet accept and understand – and, most importantly, base their qualitative judgments upon.

At its core, the problem is that any computer system running any code or software, or generating any measurement we are likely to acquire or need, produces a numbers-intensive, linear worldview.

It lives and breathes digit after digit, but it knows them not. It cannot tell you that these numbers are right and those are wrong; it simply crunches and crunches until some answer appears on some form of display.

In summarizing this quandary, Dr. Heyser stated, “…one commonsense fact should be kept in mind, the electrical and perceived acoustic manifestations of audio are what is real. Mathematics and its implementations are at best a detailed simulation that we choose to employ to model and predict our observations of the real world. We should not get so impressed by our ability to delve into the finest details, that we assume the universe must also solve these equations or look at things in that particular way. It does not.”

Although our hardware has evolved across the centuries, the user is still a biological entity that sees objects in space and hears events in time. Measurement hardware hears sound as waves, not events. Understanding this distinction is critical, because it focuses on the essential difference between purely logic-based systems and those operating in the biological domain.

The same subjective-objective argument was made about triode tubes versus pentode tubes and then later about tubes versus transistors. As has been repeatedly stated by many authors, opinions about auditory qualitative analysis, its nature, definition and measurement persist. It does not seem as if the many sides in the discussion have reached much common ground and whether that is even feasible remains an open question.

It is crucial that professional practitioners understand that although objective measurements supply scientifically valid, thoroughly repeatable data and must be an integral part of the science of sound, the devices under test will ultimately be used by human beings and not machines. Thus it is also incumbent upon all to accept as equally valid the somewhat less scientific subjective judgments. After all, it is the sentient biological quality-assessment systems that will be the final judge of our success or failure.
