The title of this column is a phrase that is the basis of a legal oath that has been part of English Law since at least medieval times and has essentially become embedded in our shared societal structure. The beliefs that have come to encompass our laws and governance concepts remain at their core based on the codices of Roman law, probably the versions from around 200 or so BCE. In the Roman treatises from that time the focus was on one major idea: precision of language.
Also of considerable concern within that focus were two basic social principles: fairness (aequitas) and practicality (utilitas). That fixation on precise and exact legal terminology to avoid ambiguity or even misinterpretation of ideas and rules has remained as a convention through the millennia.
Why? Because these ideas have also framed our fundamental cultural concepts of fairness and practicality and helped delineate the definitions of trustworthiness and honesty. These are important to our whole social fabric because time has proven, repeatedly, that the truth can be distorted by including only some of the facts and/or by giving misleading indications about how to interpret those facts. Sometimes the partial use of truth can be used to give legitimacy to deception.
And that short introductory essay brings us to our main discussion.
How far from the truth are we willing to allow the information we rely on to migrate?
If what I see and hear every day both in the information about products and services and in the promises made by software developers and suppliers is to be used as the data point — way, way too far!
The Minefield of Software
Let’s look at the software side first, since it has become inescapable in systems deployed today. To an exponentially increasing extent, critical systems that were once controlled mechanically, or by people, are coming to depend on code as was recently discussed in an article authored by James Somers entitled “The Coming Software Apocalypse,” published by The Atlantic in September 2017.
That article notes, ominously, that this problem “was perhaps never clearer than in the summer of 2015, when on a single day, United Airlines grounded its fleet because of a problem with its departure-management system; trading was suspended on the New York Stock Exchange after an upgrade; the front page of The Wall Street Journal’s website crashed; and Seattle’s 911 system went down because a remotely located router failed. The simultaneous breakdown of so many software systems smelled at first of a coordinated cyber attack. Almost more frightening was the realization, later that same day, that it was just a coincidence.”
Nancy Leveson, a professor of aeronautics and astronautics at the Massachusetts Institute of Technology who has been studying software safety for 35 years, commented in that same article that, “When we had electromechanical systems, we used to be able to test them exhaustively, we used to be able to think through all the things it could do, all the states it could get into. For example, the electromechanical interlocking that controlled train movements at railroad crossings only had so many configurations; a few sheets of paper could describe the whole system, and you could run physical trains against each configuration to see how it would behave. Once you’d built and tested it, you knew exactly what you were dealing with.” (Leveson became widely known for her report on the Therac-25, a radiation-therapy machine that killed six patients because of a software error.)
But as industry in general and our world in particular is rapidly discovering, software is different. Just by editing a text file (which could be stored in a “cloud” server anywhere on the planet), the same chipset can become the core of an autopilot or the server responsible for an inventory-control system. It doesn’t know and it doesn’t care about the functionality it is assigned.
This flexibility is software’s miracle and its curse. Because it can be changed inexpensively, software is constantly changed, and because it’s unmoored from anything physical — a program that is a thousand times more complex than another takes up the same actual space — it tends to grow without bound. Witness the bloatware install size of any word processing program of your choice today, against the size of that same program even five years ago — the growth is by orders of magnitude at least — and to what end? How much of the additional functionality is ever actually used, even by so-called ‘power users’? I’d wager not much more than a small double-digit percentage!
Professor Leveson went on to pointedly say, “the problem is that we are attempting to build systems that are beyond our ability to intellectually manage and for the most part software engineers don’t understand the problem they’re trying to solve, and don’t care to… because they’re too wrapped up in getting their code to simply work at all.”
The landmine waiting to be stepped on here is this scenario: The software did exactly what it was told to do. The reason it failed is that it was told to do the wrong thing.
The Plan vs. The Unexpected
I don’t know about you, but my computer is always doing what I tell it to instead of what I wanted it to, but… I don’t always know what I should tell it to do, especially when it presents one of those “unexpected error” messages. In that same vein, conversations with numerous consultants and integrators over the last year or so suggest a consensus that the biggest issue with AV systems is this: with even a moderately complex control design, it is virtually impossible to test every possible operating state or condition to see what happens with unplanned button pushes or function calls. You diligently conduct reasonable proof-of-performance tests to ensure that the system does what you expect, using expected button pushes in expected sequences, and you look for dead ends or lockups. But there is no amount of time you could devote that would even get close to testing what happens when people push buttons in more random or inappropriate sequences, or when a frustrated, impatient client pushes buttons too fast. The undeniable fact is that our species is extraordinarily unpredictable and prone to taking the unexpected path (or, from a programming standpoint, the “Well, we never considered that option” path), which only intensifies this quandary.
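One partial answer the software world does offer is automated “monkey testing”: let a machine hammer the control design with thousands of random button pushes and verify it never lands in an unknown state. The sketch below is a minimal, hypothetical illustration; the states, buttons and transition table are invented for this example, not taken from any real control system, which you would instead drive through its actual API or a hardware-in-the-loop rig.

```python
import random

# Hypothetical AV controller modeled as a tiny state machine.
# Unmapped (state, button) pairs are deliberately ignored: a press the
# designer never anticipated should leave the state unchanged, not crash.
TRANSITIONS = {
    ("off", "power"): "idle",
    ("idle", "power"): "off",
    ("idle", "source"): "presenting",
    ("presenting", "source"): "presenting",  # cycling inputs stays put
    ("presenting", "power"): "off",
}
VALID_STATES = {"off", "idle", "presenting"}
BUTTONS = ["power", "source", "volume_up", "volume_down"]

def press(state, button):
    # Return the next state; unknown presses are no-ops.
    return TRANSITIONS.get((state, button), state)

def monkey_test(presses=10_000, seed=42):
    # Simulate a wildly impatient user mashing random buttons and confirm
    # the controller never reaches an undefined state.
    rng = random.Random(seed)
    state = "off"
    for _ in range(presses):
        state = press(state, rng.choice(BUTTONS))
        assert state in VALID_STATES, f"dead end: {state}"
    return state

final = monkey_test()
```

Ten thousand random presses in a loop take milliseconds, so a test like this can cover vastly more “inappropriate sequences” than any human proof-of-performance session, though it still cannot prove every path safe.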
Been there? Anyone have a viable answer to this dilemma? If you do, please help the rest of us.
Closed Loop, Closed Minds
And all of those keep-you-awake-at-night problems don’t even take into account the two other gigantic problems surging through the software world:
- The promises about functionality and compatibility made by overly optimistic vendors, which, of course, all of our clients have read, seen or, even worse, actually believed and…
- The continuing practice by far too many vendors of designing deliberately incompatible products, and deploying unannounced software revisions, that force us to solve the “how do we make all this connect and talk to itself?” problem on the job, in the field, with the client tapping his/her foot impatiently as we spend time (and money) fixing something that should NEVER have occurred in the first place. I’ve lost count of the number of times I or my colleagues have been in contact with a vendor’s tech support and heard that whistling-death phrase, “We’ve never heard of that happening before.” Or its even more fatal companion: “That’s simply not possible.” Sound all too familiar?
Don’t think the unannounced software or firmware deployments or the “can’t happen” guarantee could affect you? You are wrong! Let’s look briefly at a recent, apparently simple but disastrous example.
Around 3 a.m. one night not too long ago, a major TV manufacturer pushed out a firmware/software update from its offshore headquarters to every one of its newly introduced, multi-thousand-dollar 4K UHD TVs 55 inches and larger. The number of units being “targeted” by this improvement was in the six-figure range globally. The update was sent in at least 10 languages and automatically downloaded by the units, which then installed the new code upon power up.
So far so good, you might think. NOT SO! There was an untested, unresolved and unfixable bug in the code. It rendered every hand-held remote inoperable. No longer could anyone make any changes using the remote. And… the update ‘locked’ the turn-on volume for the sets’ audio systems at maximum, regardless of any previous setting. The new firmware/software totally overwrote everything that had been in place, including any custom settings and all calibrations.
Imagine the surprise of the set owners the next day when they pushed the on button and discovered nothing would happen. If they manually turned the set on at the panel, they were immediately deafened by the volume setting. And the off button no longer worked. The only way to turn things off was to unplug the set. The only solution to the software issue was to force the set to restore its factory defaults (an option only available in the non-user-accessible service menu).
The company’s customer service number exploded; for days, all you could get was a busy signal, and its consumer website crashed! The North American offices had no idea what had happened.
It took a day and a half to trace the problem to the unannounced update, and then at least four more days to get the information out to owners, dealers, retailers, etc. Overall, it was a two-week-long nightmare caused by a simple failure to bother to check the update BEFORE spewing it worldwide. What was wrong? A simple, easily caught (if anyone had bothered to look) reversed numeral sequence in the update to the sets’ remote control recognition software made all non-service-menu-related commands useless.
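A bug of exactly this shape is cheap to catch automatically. The sketch below shows the kind of pre-deployment release gate the episode cried out for: verify the update payload’s integrity, then sanity-check the remote-command table before anything ships. Every name, command and value here is invented for illustration; a real gate would run against the manufacturer’s actual build artifacts and a reference unit.

```python
import hashlib

# Hypothetical update payload and its published checksum (illustrative only).
payload = b"\x00firmware-image-v2.1\x00"
EXPECTED_SHA256 = hashlib.sha256(payload).hexdigest()  # from the build system

def integrity_ok(blob: bytes, expected_hex: str) -> bool:
    # Catches corrupted or truncated payloads before they ever ship.
    return hashlib.sha256(blob).hexdigest() == expected_hex

def remote_commands_ok(command_table: dict) -> bool:
    # A reversed numeral sequence like the one described would surface here:
    # every user-facing command must be present and map to an in-range code.
    required = {"power", "power_off", "volume_up", "volume_down", "mute"}
    return required <= command_table.keys() and all(
        isinstance(code, int) and 0 <= code <= 255
        for code in command_table.values()
    )

def release_gate(blob: bytes, expected_hex: str, command_table: dict) -> bool:
    # Only a payload that passes BOTH checks may be deployed.
    return integrity_ok(blob, expected_hex) and remote_commands_ok(command_table)

good_table = {"power": 1, "power_off": 2, "volume_up": 3,
              "volume_down": 4, "mute": 5}
bad_table = {"power": 1, "mute": 500}  # out-of-range code, missing commands

assert release_gate(payload, EXPECTED_SHA256, good_table)
assert not release_gate(payload, EXPECTED_SHA256, bad_table)
```

A check this simple runs in milliseconds per build; the two-week nightmare described above cost incomparably more.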
Sure, it was an annoying and painful episode, but it never had to happen if common sense and any sort of quality-control mindset had been in place in the software/firmware department’s operations.
So the next time you hear “that can’t happen” or any similar phrase, be apprehensive and — more importantly — check to see if the statement is actually true.
Hardware Myths and Fantasies
Look at any hardware data sheet or brochure and what do you find? A lot of technical details, probably a good dose of marketing fluff and blather, some very carefully worded content about functionality and warranty, and a half-dozen brutally dense legalese disclaimers in tiny text about everything and anything that might conceivably be an issue down the road.
What is clearly missing from all of this is a functionality and reliability discussion or, better yet, a warranty of serviceability for the task intended, the one they claim the product can do in the marketing fluff.
Globally we have a ton of standards and specifications to tell us the most mundane details about any technical parameter or engineering figure of merit. Just a quick list includes those published and maintained by AES, AVIXA and IEC.
Various countries or other governmental entities (cities, states, provinces, etc.) may have additional specific standards or requirements — these should be verified and conformance assured either by the manufacturer or its distribution structure.
So there’s no shortage of data and details on the pure specification side of the fence. In fact, there are immense documents describing in microscopic detail how this information is to be formatted, presented and collated.
For example, in the U.S. and Canada, we have the CSI/CSC structure and the Technical Specification Format, MasterFormat and multiple Division formats, rules, regulations and requirements for any building project, construction job or… you name it, there’s a rulebook for it. The level of detail required could easily require employing someone full time just to manage the paperwork.
A portion of that massive set of formats and rules requires inclusion of a section in the project documentation that describes equipment and systems requirements. Each major piece of equipment required for the project should have a paragraph that describes, concisely and in detail, the minimum acceptable specifications for the item.
Unless it is a public project, there are usually two or three pre-accepted makes and models listed for each item. That section also has this little hidden gem: “This information gives the bidders a very clear idea of the expected quality.”
Please take careful note of that wording — expected quality — not actual quality, promised quality, guaranteed quality, tested for quality or any other kind of quality assurance, only “expected.” Well golly gee, I expected to get a million dollars from Publishers Clearing House, but…
Is expected the best we can hope for? Is the truth what someone decides it is or is it based on incontrovertible and verifiable facts and data?
The Real World
Out there, where the customers are, published specification performance is beside the point. In fact, it is totally, utterly, completely, absolutely and unequivocally irrelevant!
Other than within the tour-sound universe, where equipment riders may call for specific brands or models to satisfy a perceived or possibly actual artistic requirement, brand identity is not front-of-mind with the majority of end-users/buyers.
Certainly, there may be name recognition or word-of-mouth-created opinion on need or desire (the “well, our major competitor’s facility uses xyz, so shouldn’t we?” statement). But putting those essentially artificial needs aside, what logo is on a power amplifier is not really a make-or-break issue for clients (with, of course, the caveat that there is always one to whom it really does matter for a reason you will never really know).
Twenty-first century automated manufacturing processes, globally sourced component supply chains and a host of other factors make it very, very, very hard to separate any one of the 40 or so 1 kilowatt-per-channel amplifiers costing $500 to $999 you can easily find on any major gear website, based on pure numerical data or rated (expected) performance information. In fact, it is essentially impossible using simply the numbers if you remove the marketing/brand identity from the data — I know. I’ve tested this many times with “knowledgeable” professionals asking them to pick out their favorite brand simply based on stated generic technical data.
But these sets of data, which amount to a performance promise, are not the information we need or should be using to decide what to specify, what to install or who to support product-wise.
New Rules for a New World
It’s time for a new constitution (as The Who said), wherein we have a “BILL OF THE TRUTH” requirement for all hardware and software we are expected to recommend, buy, specify, install or otherwise have involvement (and thus responsibility) for.
Herewith are the provisions of the “BILL OF THE TRUTH”:
- No product shall be made available for sale or use until it has been thoroughly and completely tested in real-world conditions under normal operating parameters by non-manufacturer third-party verification.
- No updates or changes to existing software or firmware may be automatically installed.
- No software shall be sold until it has been tested “in the real world” under normal end-user conditions, by a neutral third party verification process using actual hardware upon which the software is ‘expected’ to function.
- A new parameter for hardware shall be implemented stating the MTBF [mean time between failures] conditions and how that data was derived and verified.
- All core or kernel code for software shall be tested and verified as to its suitability and functionality on the actual platform and with the accompanying hardware and software that would typically be deployed to ensure that it actually works.
- Specifiers/buyers/installers and related professionals shall be indemnified by a hold-harmless clause in which the costs associated with making a system or device functional shall be borne by its manufacturer or, in the case of software, its developer.
- No assumed or expected functionality shall be allowed in a product specification/sales sheet until and unless that performance or functionality has been third party verified to actually exist.
- Software or firmware source code written in a language other than English shall be tested and verified to function in English and any other language in which it is expected to be sold/used/operated. No untested code shall be permitted with a product in any language.
- Control systems shall be tested not only for expected functionality but for non-expected conditions or operation to ensure that no harm or failure can be induced by such operation.
- Your ideas here!
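To make the MTBF provision above concrete, here is a minimal sketch of the disclosure it calls for: the point estimate is simply total accumulated operating hours divided by observed failures. The fleet numbers are invented for illustration; a credible data sheet would also state how those hours were logged and how failures were counted and verified.

```python
# Minimal MTBF point estimate: total unit-hours divided by failure count.
def mtbf_hours(total_unit_hours: float, failures: int) -> float:
    if failures == 0:
        raise ValueError("no failures observed; report a lower bound instead")
    return total_unit_hours / failures

# e.g. 500 units, each run 2,000 hours, with 8 failures recorded:
estimate = mtbf_hours(500 * 2_000, 8)
print(f"MTBF \u2248 {estimate:,.0f} hours")  # prints: MTBF ≈ 125,000 hours
```

Note that an MTBF figure is only as honest as its derivation: 125,000 hours from a large, field-monitored fleet means something quite different from the same number extrapolated from a brief accelerated-life bench test, which is exactly why the provision demands the derivation be stated.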
It is time we got more than we expected and less ‘it can’t possibly do that’ from our supply chain.
Help write the new BILL OF TRUTH for equipment and software. We shall no longer tolerate being used as beta test platforms for unfinished hardware or software. It’s time for the truth to be simply that, and not some highly manipulated, carefully couched, legal CYA verbiage.