Part 3: ITIL, ITSM and AV — The Benefits of Benchmarking

In the last article, we discussed how adopting ITIL methodology can fundamentally change how we align our products and services to meet customer needs. As the AV industry continues to grow and evolve, we need to ensure that we find better ways to execute and continuously improve outcomes. The fundamental ideas behind ITIL are about ensuring that IT is not an ad hoc delivery of random hardware and software, but a fully baked end-to-end service based on mutually agreed-upon outcomes.

Over time, other methodologies popped up to fill in the gaps and address new needs. Today, we have evolved past those early days of ITIL to a much broader world called Information Technology Service Management (ITSM). ITSM is the overall framework of how we do what we do in the IT world. For perspective, ITIL is just one methodology under that umbrella. While ITIL is the most broadly used ITSM practice, it is frequently used in conjunction with many other methods. Here are a few: ISO 20000, TOGAF (The Open Group Architecture Framework), Application Services Library (ASL), Microsoft Operations Framework (MOF), COBIT, Six Sigma and Agile. IT is not a one-size-fits-all business, so why should there be only one ITSM practice? Here’s where we draw the parallel: AV isn’t one-size-fits-all either. Be that as it may, if AVIXA and the rest of our industry developed an AV-oriented ITSM practice, the IT industry would jump for joy and absolutely adopt it! There are two immediate advantages to this:

  • We (AV/IT) would speak the same language and work toward the same overarching goals and objectives.
  • We would be able to provide our own benchmarking, allowing us to tell our story.

One of the significant benefits of ITSM is the use of many metrics and KPIs (key performance indicators) that are common throughout the industry. These metrics are published so that you can not only measure your own performance (year over year) but also assess it against your peers and the industry as a whole.

So, without further ado, let’s dive into one of the big things that the world of ITSM brings us — benchmarking of metrics and KPIs.

A repeat of the disclaimer: I am in no way an expert on ITIL or the other systems that I will talk about in this article. Any definitions or explanations are deliberately high-level to keep things simple. This is not a manual as to what we need to do to improve our industry. Instead, this is a call to action to take a new look at how we do the things we do and to pressure AVIXA and manufacturers to get on board to help us deliver more effective services.

The Benchmark Issue

Everyone likes to talk about how happy their customers are and how impactful their AV systems are. However, as there are no standard definitions or measurements for “impactful” or any other defining aspects of these systems, we cannot measure customer satisfaction, degree of impact or effectiveness. So, frankly stated: No one has a way of definitively saying that they do a better job than any other firm.

While we in the AV world can say that we did “stuff” for a customer and they seemed “happier” with the outcomes, the issue at hand is that all we can compare ourselves against is our own concept of what the system should provide, not our customer’s idea of a satisfactory outcome. As a result, we have no objective measurement of whether we are doing better or worse than our peers. Until we get to a point where we can compare ourselves with others that do what we do and show, with data, hard numbers that illustrate how we are doing, AV can never truly advance to the next level: AV-as-a-Service. We can’t move on to AVaaS because we have no way to measure results in a meaningful fashion. (I will return to this point in a future installment of this series.)

The root issue is that, since you cannot definitively show customers that you can bring them better outcomes, it’s impossible to set effective, measured and meaningful benchmarks. There is a critical concept buried here. Benchmarks don’t just measure how well we do; they also set expected outcomes. This is of particular importance to us in AV, as we are “different” from IT, and we need to set realistic expectations.

In IT, you can set up systems with 100% failover. The internet circuit drops off? Boom, you cut over to another. A router fails? Instant failover. While not all systems can do this, the understanding and expectations are set. In AV, however, if a display fails, we need a new one, and the same goes for a codec or a DSP. While we may have a few spares around, a swap still takes time. To set expectations correctly, it is critical that we use benchmarks to measure performance and set SLAs realistically. So, benchmarking helps us in more than one way. And if the IT director wants a TV swapped in 15 minutes, we have data that shows how unrealistic that expectation is.

I saw an email the other day from a large integrator talking about how much better it is than others and how awesome it could make my life. While it was a nice piece of marketing, there is no objective way to back up that claim. I say this to further hammer home my point: Since there is no mechanism to compare against others and prove that statement, how can any one integrator move in, take over and demonstrably make things better? I am sure that, in actuality, there are lots that can; that gap is where AVaaS lives at present. Once again, I’ll revisit this point in the future.

AV Versus IT

A few years back, as I prepared for my first internal Quarterly Business Review (QBR) of the IT group I was in, I revisited previous paperwork to search for a particular piece of data. I looked at what the AV group regularly reported and dug and dug to find the updated numbers. But something was bugging me. While the data I was pulling looked good (regular improvements, lower costs and less downtime), it said nothing about outcomes. There was nothing that said customers were happy, that meetings were starting faster or taking fewer clicks to join. There was nothing to show that we had actually improved the customer experience. That troubled me.

When the meeting started, I saw the metrics others were delivering: usage of their services, quarter-over-quarter changes in workload and measures of customer effectiveness. It was at that point I realized that AV had to deliver metrics based on outcomes. AV has never had anything like that. I knew that we had to start evolving what we measured to show the benefits we were providing to users. And to do so, I had to dive into the data I did have and start figuring out what we were missing. The main issue was that we simply did not have enough information to accurately reflect the impact of what our group had done over the last quarter. We say AV is critical; AV is imperative. Yet there are no industry-wide metrics standards we can use to prove it. If you can’t demonstrate your value, and someone else can, guess who is getting a resource request funded?

Back to the meeting: Over its course, it got worse. The others didn’t just have these cool metrics to measure outcomes and compare against themselves. Oh no, it was far worse. They could refer to industry trade publications and measure how their help desk compared to peers, not just the industry as a whole but other financial companies, other software companies, healthcare, you name it. This blew up everything in my mind. I realized that while I could develop my own metrics (and we did develop some very cool ones), I could only see how we were doing against ourselves. I had no way to see how we compared to other companies. How could I assure my management that we were doing better than, worse than or the same as everyone else? There was no way to tell.

As stated before, during that time, we did develop some useful metrics. Many other companies do too, don’t get me wrong! If you don’t believe me, go talk to any decent-sized enterprise AV department. You will hear of some great stuff that they have come up with. I am sure that if we speak to many of the large AV companies that are doing managed services, they will tell you about really awesome and creative ones they have had to develop for their customers. There is a good likelihood that many of these metrics are similar. This is all fine and dandy; however, the issue remains: Am I doing better or worse than peers? Who can tell?

If I want to add two AV team members, my CIO will want justification. However, if I want to add to the service desk team, I can easily provide that justification. I can pull up metrics that show staff numbers against the employee population. I can show where those employees stand against peers, compare that to ticket closures, time to close, etc., and make a solid case that we are understaffed. Or I might see that we are overstaffed and, in that case, begin to find out why others can do more with less. AV cannot do this.

No Justification, No Headcount, No Capital Expenditures

No justification means no headcount and no extra capital expenditures. Interestingly, though, we once went through an exercise to determine whether we had enough headcount, based on a question from our CIO. The question was simple: “Do you have enough staff to cover events with white-glove service?” We had no idea how to answer it. And since we had to prove our answer either way, we began an exercise to quantify the question, then answer it, then assign triggers to let us know if we would need more staff over time. (The answer was 42!) As we dug into it, we started to have some “Aha!” moments, and those moments sent us scrambling to all sorts of different places to pull up the data points that would support the picture we were coalescing around. Eventually, we figured it out. We obtained the data (from lots of different places) and built a worksheet where we could change a few variables and see what would happen. We went from saying, “I think … we have enough people” to the CIO granting two more people right away. So the opposite of the first sentence in this paragraph is also true: Strong justification means more headcount and more capital to spend!
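For illustration only, here is a minimal sketch of what that kind of “change a few variables” staffing worksheet might look like in code. None of the numbers, names or the utilization assumption come from our actual exercise; they are placeholders to show the shape of the calculation.

```python
# Hypothetical staffing model for white-glove event coverage.
# Every input here is a made-up placeholder, not data from the article.

def required_techs(events_per_week, hours_per_event, prep_factor,
                   hours_per_tech_per_week, target_utilization=0.8):
    """Estimate how many technicians are needed to cover events.

    prep_factor adds setup/teardown time as a fraction of event length;
    target_utilization leaves slack for break/fix and admin work.
    """
    demand_hours = events_per_week * hours_per_event * (1 + prep_factor)
    capacity_per_tech = hours_per_tech_per_week * target_utilization
    return demand_hours / capacity_per_tech

# Example: 35 events/week, 2 hours each, 50% extra for prep/teardown,
# 40-hour weeks at 80% utilization -> roughly 3.3 techs.
print(round(required_techs(35, 2, 0.5, 40), 1))
```

Change any one variable (event volume, prep time, utilization) and the headcount answer moves with it, which is exactly the kind of lever-pulling that turned “I think we have enough people” into a funded request.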

Still, with all of us using our own metrics, we live in our own silos. Even if we get to a place where there is a comprehensive suite of metrics and data available to benchmark against across the industry, that does not mean we would stop developing other metrics. Every business has unique characteristics that it needs to measure even though others might not. In many cases, these unique metrics may even be submitted to the industry and become standardized ones. But the brutal truth remains: We need to be able to compare ourselves to others.

So, this leads us to a question: How might these metrics look? That’s not a rhetorical question; I don’t actually know. It’s a tough question that our industry (AV integrators, end users, manufacturers — I’m looking at all of you) will have to get together and answer. In that process, if it ever happens, standard metrics would be put together. While I can’t say what those metrics would look like, I can tell you that any well-constructed metric has at least two goals:

  • Measure performance against others (or even ourselves).
  • Gain insights that allow us to improve performance in that metric.

Some Examples — Food for Thought

Although I don’t have any idea what these finalized metrics will look like (as I said above), I do have some starter ideas to put forth as examples.

TTSM (Time To Start Meeting)

If a room was booked at 1 p.m. for a 50-minute meeting (see Scott Walker’s LinkedIn post about CVF), at what point was the meeting actually rolling?

  • This metric could involve a few data points:
    • AV connected
    • VC call placed
    • Quorum of people in the room
  • What can this tell us about meeting rooms?
    • Are people easily able to connect?
    • Are the VC systems getting started/connected quickly?
    • Are people getting to the room promptly?
  • We would need the following data (see the sketch after this list):
    • Exchange (meeting start time and number of people, correlated by office location)
    • Size of room (bonus: flag meetings with few people in a large room)
    • When the technology was connected
    • Tech in the room (if there’s no VC, don’t try to measure a VC call)
    • Occupancy sensor to count bodies
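As a rough illustration of how those data points could roll up into a single number, here is a minimal TTSM sketch. The timestamps and field names are assumptions made for the example; a real implementation would pull them from the calendar system, the VC back end and the occupancy sensors.

```python
# Illustrative TTSM calculation. The data sources implied here (calendar
# booking, VC call log, occupancy sensor) are assumptions based on the data
# points listed above, not an existing standard.
from datetime import datetime

def ttsm_minutes(booked_start, av_connected, call_placed, quorum_reached):
    """Minutes from the booked start until the meeting is actually rolling:
    AV connected, the VC call placed and a quorum of people in the room."""
    rolling = max(av_connected, call_placed, quorum_reached)
    return (rolling - booked_start).total_seconds() / 60

booked = datetime(2023, 5, 1, 13, 0)   # room booked for 1 p.m.
av_on  = datetime(2023, 5, 1, 13, 3)   # display/codec connected
call   = datetime(2023, 5, 1, 13, 5)   # VC call placed
quorum = datetime(2023, 5, 1, 13, 6)   # occupancy sensor hits quorum

print(ttsm_minutes(booked, av_on, call, quorum))  # -> 6.0 minutes
```

In this example the meeting was rolling six minutes after the booked start, driven by the last of the three conditions to be met; aggregate that per room, per site and per quarter, and you have something you can actually benchmark.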

Keep this in mind: One metric can feed us an ocean of data and give us insights that can guide either capital investment or changes in how we do things. If our meetings are always starting in seven minutes, but we see that in our industry sector the average is five … we have work to do.

I recently had a conversation with someone on LinkedIn (who wished to stay anonymous). He works in higher ed and shared a great example of what I am getting at: His management recently told him and his colleagues that they “weren’t doing as well as other universities.” It turns out that the comment was based on a report that no one in AV had ever heard of, from an organization no one had heard of either. When they dug in to question where the data came from or how it was measured, there were no reliable answers. The employees in question eventually worked out that the report came from a consulting organization that runs service-effectiveness surveys across their segment. They also worked out that the questions were framed broadly around services, so “audiovisual” was just one service lumped in with everything else. The questions were simply statements that survey respondents rated from 1-5. So while the survey was customer-focused, it was shallow and broad-stroke, and trying to parse it out to take meaningful action was next to impossible.

The above example tells us two important things:

  1. How we formulate what data goes into a metric is crucial. If there is an issue, we have to be able to unpack it to derive the kind of insights I referred to earlier.
  2. (And this one is most important) If we don’t control the creation and delivery of metrics to our organizations … they will do it themselves.

Here is another made-up metric related to overall video usage.

VUP (Video Usage Performance)

We have video everywhere, but is it being used? Are cameras on? I was doing some work for an organization with a huge investment in Cisco Webex-based video; all calls were placed via Webex. Yet folks rarely turned on their cameras. It just wasn’t in the culture. Now, as we all know, video is key to improving both performance and rapport in geographically dispersed teams. And if you talk to the CEO of that company, he probably thinks they are video ninjas! And they are … in that you can’t see them! So, how do we feed a number up the chain that measures how well we are living on video?

Again, here are some examples.

In short, who is using video?

  • This metric would involve a few data points:
    • Number of employees with active accounts
      • It should be close to 100%.
      • Filter out those who aren’t expected to be on calls (factory workers, cleaning staff, etc.).
    • Total VC systems (break down calls placed from a laptop versus from a room)
    • Number of people in video rooms during calls
    • Laptop cameras on or off
    • Correlate the above to functional groups (engineering, finance, etc.)
  • What can this tell us about video versus nonvideo calling?
    • Who is getting to rooms?
    • Who is using their camera?
    • Do we see more or less usage in certain groups?
  • We would need the following data (see the sketch after this list):
    • Exchange (meeting start time and number of people, correlated by office location)
    • Back-end VC system data on users versus camera on/off
    • Occupancy sensor to count bodies in video rooms
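Again, purely as food for thought, here is a minimal sketch of how camera-on data might roll up into a per-group VUP number. The field names and sample values are invented for the example; they are not pulled from any VC platform’s actual reporting API.

```python
# Rough VUP sketch: the fields and grouping are illustrative assumptions,
# not a published metric. It summarizes camera-on rates per functional group
# from per-call records exported from a VC back end.

def vup(calls):
    """calls: list of dicts like
    {"group": "engineering", "participants": 8, "cameras_on": 5}."""
    by_group = {}
    for c in calls:
        g = by_group.setdefault(c["group"], {"on": 0, "total": 0})
        g["on"] += c["cameras_on"]
        g["total"] += c["participants"]
    # Percentage of participants with cameras on, per group.
    return {grp: round(100 * v["on"] / v["total"], 1)
            for grp, v in by_group.items() if v["total"]}

sample = [
    {"group": "engineering", "participants": 8, "cameras_on": 5},
    {"group": "engineering", "participants": 4, "cameras_on": 1},
    {"group": "finance", "participants": 6, "cameras_on": 6},
]
print(vup(sample))  # -> {'engineering': 50.0, 'finance': 100.0}
```

From there, you could layer in room occupancy and active-account counts to get the fuller picture the list above describes.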

This is an example of how one metric need can drive an industry. If we opted to create this, I suspect that not all VC systems would be able to provide the needed data. I do, however, know for a fact that at least one can. You might also want to add data so you could tell when someone joined a meeting with no video because they were only there for content sharing or viewing. This metric is along the lines of one that BlueJeans created, and we loved it; it was a great idea. However, if only one manufacturer does it … it’s cool, but the value is limited.

Standardized metrics help everyone. And as AV is absolutely a critical business process, we have to start treating it that way. I was recently talking to the CTO of a Bay Area unicorn startup, and he asked me point-blank: “How come AV never works?” This is not a question born of an integrator using the wrong extender or DSP. It comes from massive frustration at an enterprise level. Maybe he was right and they have systemic issues; maybe it’s just him. Without the ability to measure performance across all systems in the enterprise, we can’t know the answer, and it’s next to impossible to take steps to fix things in a meaningful way.

For the AV industry to move into the future, it must come together to create an AV Service Management methodology that pairs hand-in-hand with ITSM, keeps improving outcomes and finds concrete ways to measure them.

As always, please let me know your thoughts. Tell me if you agree, disagree or how you see something like this happening.

Thanks for reading. Please share!
