In the last article, I gave an overview of what ITIL/ITSM are and how a similar methodology could be beneficial in the AV world. In this follow-up, I want to discuss the five volumes or books of ITIL and how they can translate to AV.
A repeat of the disclaimer: I am in no way an expert on ITIL or the other systems I will discuss in this article. Any definitions or explanations are deliberately high-level to keep things simple. This is not a manual to define what we need to do to improve our industry. Instead, this is a call to action to take a new look at how we do the things we do, and pressure AVIXA and manufacturers to get on board to help us deliver ever more effective services.
As was stated before, ITIL came about as a way to measure the delivery of IT services. Over time, the result was a playbook of practices on how to order services, measure the quality of services delivered and continuously improve those services. It’s essential to — once again — clearly state, ITIL has nothing to do with what products are used; it doesn’t tell you how to design a system. It is just a framework to ensure that we align with the business goals we are serving and ensure we are continually meeting customer needs.
The early version of ITIL had five main chapters or books that defined the overall goals:
- Service Strategy
- Service Design
- Service Transition
- Service Operation
- Continual Service Improvement
We must understand that these chapters form the pillars of a high-performing IT organization. If we align with them, we have accomplished one of the critical feats in building relationships with our customers. We are speaking their language — the language of ITIL/ITSM.
We have two issues in the AV world: First, we speak a language that refers only to the performance of the products, not the language of service delivery to customers. We say “1800 nits and 160-degree viewing angle” and not “60% improved visibility for the whole room.” We talk about things that are meaningful to AV folks about the room but are useless to the CIO when reporting the business benefit of these spaces. The second issue is that we tend to look at things transactionally. “I need two rooms, five rooms, OK, a new floor is being built out — 18 rooms …” and so on.
Throw all that out the window — IT does not look at the world like this. The whole point of ITIL was to define the service that is being provided and calculate how much of that service is needed for the business. Everything is about aligning to the market with appropriate service offerings. And then consistently fine-tuning, improving and questioning whether this is the right service offering for that business. In AV, we respond to an ask for new rooms. We ask what gear the customer wants to fit in them. Now, I know that some companies are getting much better at developing a longer-term understanding of the business. With that — hopefully — they’re gaining an understanding of what the business is trying to do. But it has been my experience that integrators tend to know the company and the major products we build, but don’t really dive into things much deeper than that.
In the IT world, however, we always have to dive into the users’ business needs to understand what offerings they might need. We do not upgrade/add new services because it’s new/cool/better. Only by understanding the needs of the business can we formulate a compelling service offering that will enhance/improve the business. Let’s go over those five books/chapters of ITIL to try to compare them to the AV world and how we can do things differently. And, of course, in a perfect world, adopt industry-wide practices that follow these guidelines so that we all come at this from a similar viewpoint. This is the language of our customers. It’s long past time we spoke it!
Service Strategy

This is the first step in designing and building a service. Note that I am not saying a room, a space, an application … it’s a service. And that service has to satisfy specific business needs — from the tasks you need to accomplish in that room, to supporting that room (refresh, daily room sweeps, support response or even operating the room in a more sophisticated space), to how we may even bill back business units for their use of that room. The Service Strategy book also defines all the stages in the service life cycle. We need to know what happens to this “service” as it ages, so we know when we need to replace components or reevaluate the service entirely. We also need to understand how the efficacy of this room/space/service is measured and monitored.
This first stage is not about what we build. It’s about understanding and aligning with the needs of the business. It’s the foundation of the services built and the critical step in meeting customer needs. Remember, we aren’t just building some conference rooms; we are building a service to meet the business’s needs.
Service Design

The Service Design book is where we start talking about what we are building, maintaining and refreshing. This is where we start getting into features and functions — the type of room, the type of tech that is needed. The accounting group may have very different needs than the marketing teams or the engineering folks. They will each use the rooms differently, and their outcomes can be measured quite differently. So you can see that we have to move past the current thinking of huddle, small, medium, large and event space. While those categories can and do provide guidance on seating capacity, monitor size and a count of rooms with VC, we need to define much more comprehensive templates that align with what those groups do from day to day. This approach can allow us to maintain the base consistency we need for support and standardization, but at the same time get us past the plethora of one-offs and unique rooms. Not everyone does the same thing; delivering an optimized service requires an approach that lets us optimize for those needs.
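As a sketch of what such group-aligned templates might look like (the group names, fields and capability flags here are illustrative, not an industry standard), a service template can carry much more than seating capacity:

```python
from dataclasses import dataclass, field

@dataclass
class RoomServiceTemplate:
    """A hypothetical service template: what the room must do for a
    specific user group, not just how big it is."""
    group: str                      # business unit the template serves
    capacity: int                   # seating still matters for space planning
    capabilities: set = field(default_factory=set)   # e.g. {"vc", "wireless-share"}
    check_interval_days: int = 30   # how often a physical sweep is scheduled
    success_metrics: list = field(default_factory=list)  # how efficacy is measured

# Different groups use rooms differently, so the templates differ.
templates = [
    RoomServiceTemplate("engineering", 8,
                        {"vc", "wireless-share", "whiteboard-capture"},
                        check_interval_days=30,
                        success_metrics=["meeting-join time", "share failures"]),
    RoomServiceTemplate("accounting", 6,
                        {"vc", "document-camera"},
                        check_interval_days=90,
                        success_metrics=["booking utilization"]),
]

# Standardization survives: every template with "vc" shares one support runbook.
vc_rooms = [t.group for t in templates if "vc" in t.capabilities]
print(vc_rooms)
```

The point of the sketch is that the template, not the room size, becomes the unit of standardization.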
Of course, it’s not enough to just define what goes in the room; we need to keep these rooms running. A standard room with just a VC system can be monitored and mainly validated through remote systems, but other rooms may need that physical checkup once in a while. Whether it’s to make sure that cables and adaptors are still in place (never has there been a greater justification to move to wireless content-sharing systems than the overhead of keeping HDMI cables and adaptors up and working) or just to make sure that the TV still looks OK, we need a combination of physical checks and remote diagnostics to ensure that these services are running. Depending on the space, or user community in that location, we might need to test a room once per day, once per month or even less often. However, in addition to this, we have to design things in such a way that if at all humanly possible, we can reset/reboot every part of that room remotely to keep things working. Barring a TV that has had a boot through it (seen it … was in Dallas, I’m guessing the Cowboys lost), we need to consider any instance where we have to deploy a person to the room to be a design failure.
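A toy model of that triage, assuming a device inventory that records whether each component answers a remote health check and whether it can be power-cycled without a site visit (the device names and fields are hypothetical):

```python
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    responding: bool    # did the remote health check answer?
    remote_reset: bool  # can we power-cycle it without a site visit?

def triage(devices):
    """Split faults into remotely fixable vs. truck rolls. Anything that
    forces a person into the room counts as a design failure."""
    reset_remotely, design_failures = [], []
    for d in devices:
        if d.responding:
            continue
        (reset_remotely if d.remote_reset else design_failures).append(d.name)
    return reset_remotely, design_failures

room = [
    Device("codec", responding=False, remote_reset=True),
    Device("display", responding=True, remote_reset=True),
    Device("hdmi-adaptor", responding=False, remote_reset=False),  # needs a human
]
fixable, failures = triage(room)
print(fixable, failures)
```

The loose HDMI adaptor is exactly the kind of fault that no amount of remote design saves you from, which is the argument for wireless sharing made above.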
Of course, this is just talking about the AV needs! Increasingly, we need to collect data and analytics from the room. This means that the sensors we integrate into the space can have as much importance as the AV gear. How we manage and interpret all that data to make sense of it and draw actionable conclusions from it is critical. While this is a part of our industry that is entirely new, the pace at which it is advancing is mind-boggling. From sensor-driven devices like the incredible Jabra Panacast camera and its ML stack delivering insane amounts of data, to what folks like Mersive are doing behind the scenes with overall system data to draw insights, there has never been a better time to dive into all of this. When we take the time to build an entire service offering with a life cycle, and all aspects are understood, we are well on our way to far better outcomes for our customers.
Service Transition

The worst habit that AV integrators have is the constant desire to sell us new stuff! I say this tongue-in-cheek, but we must understand that when it comes to change — ALL. CHANGE. IS. BAD! It scares the pants off us. And as such, we live in a world where change management is a significant force. Any time we need to change something, we have to balance the interruption to business continuity with the potential benefits of whatever new whizbang we are looking to introduce. The one phrase that always makes me laugh when I hear it at trade shows from manufacturers is, “You just need to train your users!” Um, I have 10,000 users. Please explain how I will train them. Somehow Apple managed to sell in the neighborhood of 185 million iPhones last year with no training. Folks figured it out because each one was built on the previous one — incremental change, no big surprises. We have to approach things with this mindset.
If we are going to introduce something new, it’s a change. And we need to manage that. I have had integrators come in and make (what they felt was) a minor change on a touch panel because “it made sense” to them. Then I had to deal with an angry CEO who unintentionally muted his speakers in the room instead of the microphone and called a financial analyst’s lineage into question. Even if I needed to reboot a VC bridge, I had to get internal approval from the change management board.
Change management is not just about a new tech being deployed. It can be as simple as a room being taken offline for a few days to refresh the tech. We have to understand the impact on those who will be affected, warn them so they can make alternate plans and notify them that the tech will be new at the end of the work. We then should tell them how to get in touch with us if there are issues, ensure that once the work is done it is thoroughly tested and then notify the users that the room is back up. Finally, we follow up with the users to ensure they are happy. There can be lots of different steps, some quite formal, some informal, but either way, we have to make sure that tech does not get in the way of business. No matter how great the new space is, there will be lots of users who hate it and want the old one back. Why? Because all change is bad! But — managed change is workable.
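Those steps can be sketched as a minimal, ordered change-record workflow. The step names and the record shape are illustrative, not a formal ITIL schema:

```python
# The change steps described above, in the order a change record
# must walk through them. Names are illustrative.
CHANGE_STEPS = [
    "assess impact on affected users",
    "warn users so they can make alternate plans",
    "perform the work",
    "test thoroughly before handing back",
    "notify users the room is back up",
    "follow up to ensure users are happy",
]

def complete_step(record, step):
    """Steps must close in order: no skipping the follow-up."""
    expected = CHANGE_STEPS[len(record["completed"])]
    if step != expected:
        raise ValueError(f"expected {expected!r}, got {step!r}")
    record["completed"].append(step)

change = {"summary": "refresh the Chili room VC codec", "completed": []}
for step in CHANGE_STEPS:
    complete_step(change, step)
print(len(change["completed"]))  # all six steps closed out, in order
```

The integrator who re-laid-out the touch panel "because it made sense" would have been stopped at step one.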
I was talking to someone at a local startup recently. I was told of a high-value room that got redone and had a different videoconferencing platform installed — no one was notified. So here’s your example of good intentions with no change management — disastrous results. If we want/need/have to make changes, so be it. After all, we do have to evolve systems and services. But we need to guarantee that we manage that process to ensure minimal interruptions to business and maximum effectiveness to users.
If you are proposing some new service for a customer, make sure that you have taken a cursory look at how that change might be managed.
Service Operation

In many ways, this is the easy part. This is where we define how we manage and maintain the systems. What back-office systems are we using to collect data and present it in dashboards so that we can see the fruits of our labors? How are we alerted to system faults, and what is the defined process to respond to those faults? Years ago, I worked at a company called Verisign. And amongst other things, we ran the internet. We were the folks who redirected your browser to 184.108.40.206 when you typed in cnn.com. As a part of our overall operations, we had a GNOC (global network operations center) that monitored all the back-office systems. And based on ITIL practices, they had documentation that spelled out what to do to troubleshoot and the escalation path to follow in case of an issue with any system. Our penalties, should “.com” ever go down, were significant and time-bound — so we had to know what to do to get things back up, and get them back up fast! And if things went sideways and they couldn’t see the video wall, then my service — the AV — was failing their service needs. As a result, I had documentation for basic troubleshooting and then a list of contacts to reach out to for assistance if that did not work. I had to validate this documentation every quarter in keeping with standard ITIL practices.
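That runbook hygiene can be sketched as a simple mapping from alert to troubleshooting steps, escalation contact and last-validated date, with a quarterly staleness check. All the names and dates here are made up:

```python
from datetime import date

# Hypothetical runbook entries: each alert maps to troubleshooting steps,
# an escalation contact and the date the document was last validated.
runbooks = {
    "video-wall-down": {
        "steps": ["reboot wall controller", "check source feed"],
        "escalate_to": "av-oncall",
        "last_validated": date(2019, 1, 15),
    },
    "codec-unreachable": {
        "steps": ["power-cycle codec via PDU"],
        "escalate_to": "integrator-support",
        "last_validated": date(2019, 6, 1),
    },
}

def stale(books, today, max_age_days=90):
    """ITIL-style hygiene: flag any runbook not validated this quarter."""
    return [name for name, b in books.items()
            if (today - b["last_validated"]).days > max_age_days]

print(stale(runbooks, date(2019, 7, 1)))
```

A check like this is what turns "we have documentation" into "we have documentation someone would actually trust at 3 a.m."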
So, Book 4 is about keeping things running, responding to issues and ensuring that it is all documented, clear and up to date. This is where value can be added and where partners can become ever more sticky in their relationships.
Continual Service Improvement
This is where I think we fail badly in many corners of the AV industry. I don’t think we fail due to lack of trying; I think we fail due to an incorrect approach. Who has ever gotten some sort of service and NOT been inundated afterward with customer satisfaction polls? Yes, we are sick of them (and that is a whole different issue that affects this chapter). But without asking the users of the service directly, how can we know how happy they are? If you work in an enterprise and you reach out to the help desk for just about anything — they document it. They write a ticket, they answer your questions, or remotely control your machine to make a change, or maybe they point you to a KB article. No matter what, a few things happen after the fact:
- First is the survey. If you are happy with the service, you likely ignore it or put minimal effort into it. Should you not be satisfied, then you probably offer more detail. All those survey results are reviewed and compared to see where there is an issue. If we see the same things happening repeatedly, we know that we need to do something about it.
- We look at the data about the incident itself. How long did the ticket take to resolve? How often does that user have issues? We take all the data that we can find and correlate it to find ways to improve things. Maybe it’s the business service itself; perhaps it’s the help-desk process. Many organizations are moving help-desk support onto Slack and similar platforms, as nobody likes making the phone call. I can keep a Slack conversation open after I close my computer and am on the train. That in itself is a service improvement.
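That kind of correlation can be as simple as grouping a ticket export by room and flagging outliers. The data and thresholds below are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical ticket export: (room, minutes_to_resolve) pairs.
tickets = [
    ("Chili", 45), ("Chili", 50), ("Chili", 40),
    ("Basil", 10), ("Basil", 12),
]

def hotspots(rows, max_count=2, max_minutes=30):
    """Flag rooms that generate too many tickets or take too long to fix:
    candidates for a service problem, not just a hardware problem."""
    by_room = defaultdict(list)
    for room, minutes in rows:
        by_room[room].append(minutes)
    return {room: round(mean(times), 1)
            for room, times in by_room.items()
            if len(times) > max_count or mean(times) > max_minutes}

print(hotspots(tickets))
```

Whether the fix turns out to be the room, the process or the help-desk channel, the data is what points you at the right question.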
Whatever service we deploy, we need to ask ourselves how we will measure client satisfaction with that service. One cool thing I saw was using Alexa for Business to do surveys. When I get a survey email, I mostly ignore it — I’m busy. What about this: Alexa for Business knows when you hang up the VC call. One minute later, Alexa asks two questions:
- Did you have any issues with the meeting?
- Are there any issues in the room that need attention?
Based on that, a ticket can be initiated that is time-accurate to the issue. A tech can be deployed to the room right away to find the issue. As opposed to the eventual hallway conversation, “Hey, I had a meeting in the Chili room earlier, there was a problem with the VC system — I think?” By asking the question right away, we get accurate data, a much higher response rate and fewer death-by-survey emails.
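The two-question flow above could feed ticketing like this. The function, the fields and the idea that the room system fires it the moment the call ends are assumptions for illustration, not a real Alexa for Business API:

```python
from datetime import datetime

def survey_to_ticket(room, meeting_issue, room_issue, now=None):
    """Turn the two post-call answers into a ticket the moment the
    meeting ends, so the report is time-accurate to the issue."""
    if not (meeting_issue or room_issue):
        return None  # happy path: no ticket, no survey email later
    return {
        "room": room,
        "opened": (now or datetime.now()).isoformat(timespec="minutes"),
        "kind": "meeting" if meeting_issue else "room",
        "action": "dispatch tech",
    }

ticket = survey_to_ticket("Chili", meeting_issue=True, room_issue=False,
                          now=datetime(2019, 7, 1, 14, 30))
print(ticket["kind"], ticket["opened"])
```

Because the timestamp is captured when the answer is given, the tech walks into the room while the fault is still fresh instead of chasing last week's hallway rumor.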
Continual improvement should be the goal of every service organization. The only way to be useful is to capture data and gain real insights to implement those improvements. ITIL/ITSM is the framework to deliver better, more aligned services and then keep evolving and getting better. For the AV world to keep getting better, we need to adopt similar frameworks.
In my next article, I will discuss why we must standardize metrics to benchmark across the industry. As always, please add comments; tell me your thoughts, tell me if you think I am onto something or even if you think I’m full of it.
Thanks for reading.