Taking a Balanced View of IT Service Desk Performance

Does your IT service desk use a portfolio of IT metrics? I’m sure the answer is “yes,” but is it really a portfolio? Most online definitions of a portfolio relate to financial investments – where, ideally, the investor holds a portfolio that balances risk and return in line with their needs (including their risk appetite). Now translate this portfolio definition to your IT service desk metrics – does your portfolio of metrics provide a balanced view of IT service desk performance? Even if you think it does, please keep reading to understand the differences between metric types and the common issues encountered when IT service desks employ the wrong ones. Plus, you might just find that your IT service desk metric portfolio isn’t as balanced as you think.

Service desk metrics to evaluate IT service desk performance

There are many ways to differentiate metrics, but whichever method is used, it’s essential to appreciate the differences. For example, ITIL 4 guidance states that “the absence of a certain type of metric in a measurement system can cause some characteristics of a management object to be left unmeasured.”

The ITIL 4 Measurement and Reporting management practice details four key metric types (a brief calculation sketch follows the list):

  • Effectiveness metrics – which show how an activity fulfills its purpose and achieves its objective(s). An IT service desk example is first-contact resolution.
  • Efficiency metrics – which show how an organization utilizes resources to perform activities and manage products and services. An IT service desk example is average resolution time.
  • Productivity metrics – which show the amount of work performed and the resulting outputs, or the “throughput.” An IT service desk example is tickets resolved per agent.
  • Conformance metrics – which are of interest to service owners and governing bodies. An IT service desk example is the service level agreement (SLA) target compliance percentage.
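
To make these four types concrete, here’s a minimal sketch of how each example metric might be calculated – the ticket records and field names (agent, resolved_on_first_contact, resolution_hours, met_sla) are illustrative assumptions rather than any real ITSM tool’s schema:

```python
from statistics import mean

# Illustrative ticket records -- field names are assumptions, not a real ITSM tool's schema
tickets = [
    {"agent": "ana", "resolved_on_first_contact": True,  "resolution_hours": 0.5,  "met_sla": True},
    {"agent": "ana", "resolved_on_first_contact": False, "resolution_hours": 6.0,  "met_sla": True},
    {"agent": "ben", "resolved_on_first_contact": True,  "resolution_hours": 1.2,  "met_sla": True},
    {"agent": "ben", "resolved_on_first_contact": False, "resolution_hours": 30.0, "met_sla": False},
]
total = len(tickets)

# Effectiveness: first-contact resolution rate
fcr_rate = sum(t["resolved_on_first_contact"] for t in tickets) / total * 100

# Efficiency: average resolution time
avg_resolution_hours = mean(t["resolution_hours"] for t in tickets)

# Productivity: tickets resolved per agent ("throughput")
tickets_per_agent = total / len({t["agent"] for t in tickets})

# Conformance: SLA target compliance percentage
sla_compliance = sum(t["met_sla"] for t in tickets) / total * 100

print(f"FCR {fcr_rate:.0f}% | avg resolution {avg_resolution_hours:.1f}h | "
      f"{tickets_per_agent:.1f} tickets/agent | SLA compliance {sla_compliance:.0f}%")
```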

You might think all is good with your IT service desk metrics, but I haven’t finished covering the different metric types yet.

Different service desk metric types

Leading metrics show what’s likely to happen in the future (without any course correction). They can be challenging to measure but are relatively easy to influence. Examples of leading IT service desk metrics include (a simple trend-check sketch follows the list):

  • Service desk queue length, i.e. the number of open tickets – where a growing queue might indicate an increase in incidents or that the service desk isn’t resolving tickets quickly enough.
  • First-contact resolution rate – where a declining trend over time can indicate that service desk performance is deteriorating or that more complex issues are arising.
  • Percentage of incidents by category – where an increase in a specific category can highlight an underlying issue that needs to be addressed.
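
As a rough illustration of turning a leading metric into an early warning, the sketch below checks whether the open-ticket queue is trending upward – the daily numbers and the 10% threshold are purely illustrative assumptions:

```python
# Open-ticket queue length at the end of each of the last seven days (illustrative numbers)
daily_queue_length = [42, 44, 47, 51, 52, 58, 63]

# A crude trend check: compare the average of the three most recent days
# with the average of the first three days of the week
recent = sum(daily_queue_length[-3:]) / 3
earlier = sum(daily_queue_length[:3]) / 3
growth_pct = (recent - earlier) / earlier * 100

if growth_pct > 10:  # threshold is arbitrary -- tune it to your own ticket volumes
    print(f"Open queue up roughly {growth_pct:.0f}% over the week -- investigate before SLAs slip")
else:
    print("Open queue broadly stable")
```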

Lagging metrics, by contrast, report what has already happened, so there’s limited ability to influence them. Examples of lagging IT service desk metrics include (a short calculation sketch follows the list):

  • Mean time to restore service (MTRS) – the average time to restore service after an incident has occurred, which reflects the efficiency of the incident management process and the IT service desk.
  • Customer satisfaction score (CSAT) – which measures customer and end-user satisfaction with the IT support services provided.
  • Percentage of SLA compliance – this conformance metric measures the proportion of tickets resolved within the agreed-upon time frames.
  • Number of incidents – the number of incidents reported over a specific timeframe.
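
And here’s a similarly minimal sketch of how a few of these lagging metrics might be calculated for a reporting period – the incident records, timestamps, and the “count 4s and 5s as satisfied” CSAT convention are illustrative assumptions, not a prescribed method:

```python
from datetime import datetime
from statistics import mean

# Incidents closed in the reporting period -- timestamps and survey scores are illustrative
incidents = [
    {"opened": datetime(2024, 5, 1, 9, 0),  "restored": datetime(2024, 5, 1, 11, 30), "csat": 5},
    {"opened": datetime(2024, 5, 2, 14, 0), "restored": datetime(2024, 5, 3, 10, 0),  "csat": 2},
    {"opened": datetime(2024, 5, 6, 8, 15), "restored": datetime(2024, 5, 6, 9, 0),   "csat": 4},
]

# Mean time to restore service (MTRS), in hours
mtrs_hours = mean((i["restored"] - i["opened"]).total_seconds() / 3600 for i in incidents)

# CSAT as the percentage of respondents scoring 4 or 5 on a 1-5 survey (one common convention)
csat_pct = sum(i["csat"] >= 4 for i in incidents) / len(incidents) * 100

print(f"Incidents: {len(incidents)} | MTRS: {mtrs_hours:.1f}h | CSAT: {csat_pct:.0f}%")
```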

Knowing the difference between leading and lagging metrics is important for at least two reasons. First is the distinction between which metrics look back and which look forward (and can still be improved upon). Second, key performance indicators (KPIs) – which ITIL 4 defines as “an important metric used to evaluate the success in meeting an objective” and which make up the key data points communicated to stakeholders – are often leading metrics, because they allow stakeholders to understand where improvement action can be taken.

You’re probably still thinking that your IT service desk metric portfolio covers the range of metric needs and allows you to drive improvement effectively – perhaps even without knowing which metrics are leading and which are lagging. But a worthwhile exercise is mapping out how your existing IT service desk metrics demonstrate performance and highlight improvement opportunities across:

  • Operations
  • Services
  • Experiences

In doing this, you might find that your current portfolio of IT service desk metrics – even if it isn’t used to drive IT-support improvements – only highlights operational and service-based improvement opportunities. The following section explains why.
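
Before moving on, here’s a rough sketch of what that mapping exercise might look like – the metric names and their dimension tags are illustrative assumptions, and the point is simply to make any coverage gap visible:

```python
# Tag each metric you already report with the dimension it evidences
metric_dimensions = {
    "First-contact resolution rate": "operations",
    "Average resolution time":       "operations",
    "Tickets resolved per agent":    "operations",
    "SLA compliance %":              "services",
    "Mean time to restore service":  "services",
    # Nothing tagged "experiences" yet -- the common gap this exercise tends to expose
}

for dimension in ("operations", "services", "experiences"):
    covered = [m for m, d in metric_dimensions.items() if d == dimension]
    print(f"{dimension}: {len(covered)} metric(s) {covered if covered else '-- GAP, no coverage'}")
```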

The operational nature of traditional IT service desk metrics

As this blog has shown so far, the metrics employed by your IT service desk might be balanced to a degree, but they’re likely not balanced enough.

Sadly, your traditional IT service desk metrics – even though they might be adopted from industry benchmarks – are likely based on the IT service desk leadership’s (the supply side’s) view of what’s important rather than the demand side’s. They focus on the “mechanics” of IT support and measure operational performance – for example, “how many” and “how long.” These metrics can’t tell IT service desk leadership about the outcomes that result, i.e. what’s achieved through what’s done.

The measurement is also usually taken at the IT supply point, not the end-user consumption point. So, an incident-handling service-level target might show that the average handling time met the SLA while the end-user is far from happy. It might be that the measurement is biased, with the “clock stopped” for IT but not for the end-user. Or the end-user’s issue might not have been resolved at all (so the outcome was poor despite the apparent operational efficiency).
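
Here’s a minimal sketch of that bias for a single ticket whose SLA clock is paused while it sits in a pending status – the timestamps, the 60 “on hold” hours, and the 24-hour target are all illustrative assumptions:

```python
from datetime import datetime

# One ticket's timeline (illustrative numbers)
opened = datetime(2024, 5, 7, 9, 0)
resolved = datetime(2024, 5, 10, 17, 0)
pending_hours = 60      # time in "awaiting user"/"on hold" status, excluded from the IT-side clock
sla_target_hours = 24

elapsed_hours = (resolved - opened).total_seconds() / 3600  # what the end-user lives through
it_handle_hours = elapsed_hours - pending_hours             # what the SLA report shows

print(f"IT view: {it_handle_hours:.0f}h handling time -- SLA "
      f"{'met' if it_handle_hours <= sla_target_hours else 'missed'}")
print(f"End-user view: {elapsed_hours:.0f}h waiting for a fix")
```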

These metrics issues result in a gap between the IT service desk’s view of its performance and the end-users’. The IT service desk sees a “sea of green” across its performance dashboards, but the end-user perception is poor. This gap is what the IT industry terms a “watermelon SLA” – performance measurement that’s “green on the outside but red on the inside,” like a watermelon. But this isn’t the only issue, because the IT service desk’s view that “all is well” causes it to miss important improvement opportunities and to focus any improvements it does make on the wrong things – usually what the IT service desk deems to be opportunities or issues rather than end-users’ real needs.

The answer? Experience metrics

Experience metrics help tackle this watermelon SLA issue by moving the performance measurement focus beyond the operational “mechanics” to include the end-user perspective of their outcomes. Experience data allows your IT service desk to identify the real issues end-users face and improves the understanding of “what matters most” to them. The result is a more balanced view of IT service desk performance – one that goes beyond operational effectiveness, efficiency, productivity, and conformance to consider the business outcomes of the IT service desk’s work.
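
As a final sketch of what blending experience data with operational data can surface, the example below flags “watermelon” tickets – ones that met the SLA but left the end-user dissatisfied. The field names and the “a score of 2 or less means dissatisfied” rule are illustrative assumptions:

```python
# Tickets with both an SLA outcome and a post-resolution satisfaction score (illustrative)
tickets = [
    {"id": 101, "met_sla": True,  "csat": 5},
    {"id": 102, "met_sla": True,  "csat": 1},   # green on the outside, red on the inside
    {"id": 103, "met_sla": False, "csat": 2},
]

watermelons = [t["id"] for t in tickets if t["met_sla"] and t["csat"] <= 2]
print(f"{len(watermelons)} of {len(tickets)} tickets met the SLA but scored poorly: {watermelons}")
```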

Thoughts? Let me know in the comments!

Posted by Joe the IT Guy

Native New Yorker. Loves everything IT-related (and hugs). Passionate blogger and Twitter addict. Oh...and resident IT Guy at SysAid Technologies (almost forgot the day job!).