In my recent rAVe DS column, I wrote about the value of developing a strategy for digital signage and from that strategy developing metrics to measure whether you have achieved your goals. As we prepare for another academic year to begin, I am thinking about how we measure the success of technologies in our classrooms and presentation spaces. How do we, as technology managers, or the integrators we hire, decide whether the spaces we support are providing the function for which they were designed?
I have developed a set of recommendations that can apply either to tech managers who support the rooms in-house, or to integrators who support the spaces remotely. First, you need to create a balanced scorecard that clearly states your goals. Second, you need to start collecting useful metrics about the spaces you are supporting. Finally, you need to analyze the metrics you collected and see whether you are achieving the goals on your balanced scorecard.
A balanced scorecard is a concept used by many businesses to make sure they are achieving goals that are not strictly financial. The concept is not used much in the higher ed world, but it is very useful precisely because it does not focus solely on financial measures.

One perspective in the balanced scorecard is the customer: How do you want your customers to view you? In the case of technology support, this looks like time to close calls, number of calls per room and per use, and wait time between a problem report and the arrival of (or communication with) a technician.

A second perspective is internal business processes (also called operational performance by some). This measure is used to gauge things like total system up-time, availability during peak hours and whether your systems are secure.

A third part of the balanced scorecard is learning and growth. This is measured by driving down the number of problem calls you get; if you are learning and growing, you should be eliminating predictable problems as you discover them. It can also be used to measure whether your employees are earning certifications, taking online refresher courses and staying up to date.

Finally, there is a portion of the balanced scorecard that looks at financials. This is something that we, in education, are very weak at. However, when done well it can be very informative and useful. When we talk about class capture, for instance, we normally look at the installation and equipment cost. Yet we don't often break that down into comparative numbers that tell us how much it costs per use, or better yet, how much it costs per viewed recording. Those are the numbers that are actually useful. If your institution has a push toward encouraging faculty to use class capture, then having those cost numbers, and setting a goal of driving them down (because the systems are used more), is a great metric for your scorecard.
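To make the class-capture cost example concrete, here is a minimal sketch of the per-use arithmetic. The dollar figures, counts and the `cost_per_use` helper are all hypothetical, not from any real installation.

```python
# Hypothetical cost-per-use arithmetic for a class-capture installation.
# All figures below are illustrative, not real pricing.

def cost_per_use(total_cost: float, uses: int) -> float:
    """Amortize the total installed cost over the number of uses."""
    if uses <= 0:
        raise ValueError("uses must be positive")
    return total_cost / uses

install_and_equipment = 25_000.00   # one-time cost (hypothetical)
recordings_made = 400               # lectures captured this year
recordings_viewed = 3_200           # total student views of those recordings

print(f"Cost per recording: ${cost_per_use(install_and_equipment, recordings_made):.2f}")
print(f"Cost per viewed recording: ${cost_per_use(install_and_equipment, recordings_viewed):.2f}")
```

As usage grows year over year, the same total cost divides over more recordings and views, which is exactly the "drive the number down" goal a scorecard can track.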
For more information on the balanced scorecard, check out the Harvard Business Review; there is a great read from Kaplan and Norton, the people who invented the concept. The image included here shows a potential scorecard with the goals discussed in this article.
Now let’s think about metrics. You should only be collecting metrics that are valuable to you, and you can put to direct use. Some examples of useful metrics for most institutions would be:
- Number of hours in a defined time period that the technology is used
- Number of problem calls classified as technical problems
- Number of problem calls classified as user/training problems
- Cost of resolving an average trouble call
- Number of hours in a defined period that a specific technology (class capture, clickers, collaboration) is used
- Time to close open problems
- Down time of spaces
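Several of the metrics above can come out of one simple call log. The sketch below assumes a hypothetical record format and sample data; the field names are mine, not from any particular ticketing system.

```python
# Minimal sketch of aggregating trouble-call metrics from a log.
# The record fields and sample data are hypothetical.

from dataclasses import dataclass
from statistics import mean

@dataclass
class TroubleCall:
    room: str
    category: str         # "technical" or "user/training"
    hours_to_close: float
    resolution_cost: float

calls = [
    TroubleCall("SCI-101", "technical", 1.5, 80.0),
    TroubleCall("SCI-101", "user/training", 0.25, 15.0),
    TroubleCall("LIB-204", "technical", 4.0, 160.0),
]

technical = [c for c in calls if c.category == "technical"]
training = [c for c in calls if c.category == "user/training"]

print("Technical calls:", len(technical))
print("User/training calls:", len(training))
print("Average cost to resolve:", mean(c.resolution_cost for c in calls))
print("Average time to close (hours):", round(mean(c.hours_to_close for c in calls), 2))
```

The same log, filtered by room, also gives you calls per room, which feeds the customer perspective of the scorecard.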
I am sure you have some very specific metrics of interest to your institution that I have not listed. I just warn you to be thoughtful about what you collect and how you collect it. For example, in the past we collected the number of lamp hours used in a day. We assumed this would tell us how much the projection system was used. However, we found that projectors were left on when not being used. So, while the data may have been useful for some analysis, it was not an accurate count of how often the technology was actually used. If you engage with a skilled programmer, together you should be able to define the specific metric you want to understand, and the programmer should be able to find a way to collect and report it.
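The lamp-hours pitfall is a good example of what that conversation with a programmer might produce: count an hour as "usage" only when the projector is on and an input is actively selected. The log format below is a hypothetical sketch of data a control system could report.

```python
# Sketch of the lamp-hours pitfall: count usage only when the projector
# is on AND an input is actively selected. The log format is hypothetical.

samples = [
    # (lamp_on, active_input), sampled once per hour by the control system
    (True, "HDMI-1"),   # class in session
    (True, "HDMI-1"),
    (True, None),       # projector left on, nothing connected
    (True, None),
    (False, None),      # powered off overnight
]

lamp_hours = sum(1 for on, _ in samples if on)
usage_hours = sum(1 for on, source in samples if on and source is not None)

print(f"Lamp hours: {lamp_hours}, actual usage hours: {usage_hours}")
```

In this sample the raw lamp-hours count is double the real usage, which is exactly the kind of distortion that makes a metric misleading.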
If integrators were to start offering services and consulting to help institutions think about metrics and scorecards, it could be a lucrative business opportunity. Many of the statistical systems and reports would be the same across institutions, but the integrator would make money on the services, one of the few places in the industry where a profit margin remains. If you are a technology manager, being able to provide this data to your administration will show that you are constantly looking to improve your services. Also, simply by implementing the scorecard, your services will improve. A win-win for everybody.